Skip to content

Instantly share code, notes, and snippets.

@feiskyer
Last active March 24, 2016 09:09
Show Gist options
  • Star 0 You must be signed in to star a gist
  • Fork 0 You must be signed in to fork a gist
  • Save feiskyer/eadd29fa7e481df89f0b to your computer and use it in GitHub Desktop.
Save feiskyer/eadd29fa7e481df89f0b to your computer and use it in GitHub Desktop.
E2e test for Kubernetes v1.2 with hyper runtime.
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1458804623 - Will randomize all specs
Will run 90 of 268 specs

Mar 24 07:30:23.955: INFO: >>> testContext.KubeConfig: /root/.kube/config

Mar 24 07:30:23.962: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 24 07:30:23.966: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 24 07:30:23.966: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
S
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:85
[BeforeEach] [k8s.io] Probing container
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:30:23.966: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:30:23.970: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-container-probe-nmnrw
Mar 24 07:30:23.972: INFO: Service account default in ns e2e-tests-container-probe-nmnrw had 0 secrets, ignoring for 2s: <nil>
Mar 24 07:30:25.975: INFO: Service account default in ns e2e-tests-container-probe-nmnrw with secrets found. (2.004397354s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:30:25.975: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-container-probe-nmnrw
Mar 24 07:30:25.976: INFO: Service account default in ns e2e-tests-container-probe-nmnrw with secrets found. (964.555µs)
[BeforeEach] [k8s.io] Probing container
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:45
[It] with readiness probe should not be ready before initial delay and never restart [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:85
Mar 24 07:30:27.988: INFO: pod is not yet ready; pod has phase "Pending".
Mar 24 07:30:29.989: INFO: pod is not yet ready; pod has phase "Pending".
Mar 24 07:30:31.988: INFO: pod is not yet ready; pod has phase "Running".
Mar 24 07:30:33.988: INFO: pod is not yet ready; pod has phase "Running".
Mar 24 07:30:35.988: INFO: pod is not yet ready; pod has phase "Running".
Mar 24 07:30:37.988: INFO: pod is not yet ready; pod has phase "Running".
Mar 24 07:30:39.988: INFO: pod is not yet ready; pod has phase "Running".
Mar 24 07:30:41.990: INFO: pod is not yet ready; pod has phase "Running".
Mar 24 07:30:43.988: INFO: pod is not yet ready; pod has phase "Running".
Mar 24 07:30:45.989: INFO: pod is not yet ready; pod has phase "Running".
Mar 24 07:30:47.988: INFO: pod is not yet ready; pod has phase "Running".
Mar 24 07:30:49.988: INFO: pod is not yet ready; pod has phase "Running".
Mar 24 07:30:51.988: INFO: pod is not yet ready; pod has phase "Running".
Mar 24 07:30:53.988: INFO: pod is not yet ready; pod has phase "Running".
Mar 24 07:30:55.988: INFO: pod is not yet ready; pod has phase "Running".
Mar 24 07:30:57.988: INFO: pod is not yet ready; pod has phase "Running".
Mar 24 07:30:59.988: INFO: pod is not yet ready; pod has phase "Running".
Mar 24 07:31:01.988: INFO: pod is not yet ready; pod has phase "Running".
Mar 24 07:31:03.988: INFO: pod is not yet ready; pod has phase "Running".
Mar 24 07:31:05.988: INFO: pod is not yet ready; pod has phase "Running".
Mar 24 07:31:07.990: INFO: Container started at 2016-03-24 07:30:26 +0000 UTC, pod became ready at 2016-03-24 07:31:06 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:31:07.990: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-nmnrw" for this suite.

• [SLOW TEST:64.032 seconds]
[k8s.io] Probing container
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  with readiness probe should not be ready before initial delay and never restart [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:85
------------------------------
S
------------------------------
[k8s.io] Services 
  should provide secure master service [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:80
[BeforeEach] [k8s.io] Services
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:31:27.998: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:31:28.001: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-bg697
Mar 24 07:31:28.002: INFO: Get service account default in ns e2e-tests-services-bg697 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:31:30.003: INFO: Service account default in ns e2e-tests-services-bg697 with secrets found. (2.002606351s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:31:30.003: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-bg697
Mar 24 07:31:30.004: INFO: Service account default in ns e2e-tests-services-bg697 with secrets found. (869.118µs)
[BeforeEach] [k8s.io] Services
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:73
Mar 24 07:31:30.004: INFO: >>> testContext.KubeConfig: /root/.kube/config

[It] should provide secure master service [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:80
[AfterEach] [k8s.io] Services
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:31:30.006: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-services-bg697" for this suite.

• [SLOW TEST:7.015 seconds]
[k8s.io] Services
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should provide secure master service [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:80
------------------------------
SSSSS
------------------------------
[k8s.io] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:161
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:31:35.013: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:31:35.016: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-8rwcf
Mar 24 07:31:35.016: INFO: Get service account default in ns e2e-tests-kubectl-8rwcf failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:31:37.018: INFO: Service account default in ns e2e-tests-kubectl-8rwcf with secrets found. (2.002366309s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:31:37.018: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-8rwcf
Mar 24 07:31:37.019: INFO: Service account default in ns e2e-tests-kubectl-8rwcf with secrets found. (818.653µs)
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:122
[BeforeEach] [k8s.io] Update Demo
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:130
[It] should do a rolling update of a replication controller [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:161
STEP: creating the initial replication controller
Mar 24 07:31:37.019: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config create -f /root/src/k8s.io/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-8rwcf'
Mar 24 07:31:37.107: INFO: stderr: ""
Mar 24 07:31:37.107: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" created"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 24 07:31:37.108: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-8rwcf'
Mar 24 07:31:37.145: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:31:37.145: INFO: stdout: "update-demo-nautilus-c4xqt update-demo-nautilus-gpps6"
Mar 24 07:31:37.145: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c4xqt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-8rwcf'
Mar 24 07:31:37.175: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:31:37.175: INFO: stdout: ""
Mar 24 07:31:37.175: INFO: update-demo-nautilus-c4xqt is created but not running
Mar 24 07:31:42.178: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-8rwcf'
Mar 24 07:31:42.200: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:31:42.200: INFO: stdout: "update-demo-nautilus-c4xqt update-demo-nautilus-gpps6"
Mar 24 07:31:42.200: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c4xqt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-8rwcf'
Mar 24 07:31:42.222: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:31:42.222: INFO: stdout: ""
Mar 24 07:31:42.222: INFO: update-demo-nautilus-c4xqt is created but not running
Mar 24 07:31:47.222: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-8rwcf'
Mar 24 07:31:47.238: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:31:47.238: INFO: stdout: "update-demo-nautilus-c4xqt update-demo-nautilus-gpps6"
Mar 24 07:31:47.238: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c4xqt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-8rwcf'
Mar 24 07:31:47.252: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:31:47.252: INFO: stdout: "true"
Mar 24 07:31:47.252: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c4xqt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-8rwcf'
Mar 24 07:31:47.266: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:31:47.266: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Mar 24 07:31:47.266: INFO: validating pod update-demo-nautilus-c4xqt
Mar 24 07:31:47.322: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 24 07:31:47.322: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 24 07:31:47.322: INFO: update-demo-nautilus-c4xqt is verified up and running
Mar 24 07:31:47.322: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gpps6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-8rwcf'
Mar 24 07:31:47.338: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:31:47.338: INFO: stdout: "true"
Mar 24 07:31:47.338: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gpps6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-8rwcf'
Mar 24 07:31:47.351: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:31:47.351: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Mar 24 07:31:47.351: INFO: validating pod update-demo-nautilus-gpps6
Mar 24 07:31:47.402: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 24 07:31:47.402: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 24 07:31:47.402: INFO: update-demo-nautilus-gpps6 is verified up and running
STEP: rolling-update to new replication controller
Mar 24 07:31:47.402: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f /root/src/k8s.io/kubernetes/docs/user-guide/update-demo/kitten-rc.yaml --namespace=e2e-tests-kubectl-8rwcf'
Mar 24 07:32:19.540: INFO: stderr: ""
Mar 24 07:32:19.540: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting update-demo-nautilus\nreplicationcontroller \"update-demo-nautilus\" rolling updated to \"update-demo-kitten\""
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 24 07:32:19.540: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-8rwcf'
Mar 24 07:32:19.556: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:32:19.556: INFO: stdout: "update-demo-kitten-gkj4d update-demo-kitten-yqqo8"
Mar 24 07:32:19.556: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-kitten-gkj4d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-8rwcf'
Mar 24 07:32:19.571: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:32:19.571: INFO: stdout: "true"
Mar 24 07:32:19.571: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-kitten-gkj4d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-8rwcf'
Mar 24 07:32:19.585: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:32:19.585: INFO: stdout: "gcr.io/google_containers/update-demo:kitten"
Mar 24 07:32:19.585: INFO: validating pod update-demo-kitten-gkj4d
Mar 24 07:32:19.638: INFO: got data: {
  "image": "kitten.jpg"
}

Mar 24 07:32:19.638: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Mar 24 07:32:19.638: INFO: update-demo-kitten-gkj4d is verified up and running
Mar 24 07:32:19.638: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-kitten-yqqo8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-8rwcf'
Mar 24 07:32:19.653: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:32:19.653: INFO: stdout: "true"
Mar 24 07:32:19.653: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-kitten-yqqo8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-8rwcf'
Mar 24 07:32:19.667: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:32:19.667: INFO: stdout: "gcr.io/google_containers/update-demo:kitten"
Mar 24 07:32:19.667: INFO: validating pod update-demo-kitten-yqqo8
Mar 24 07:32:19.720: INFO: got data: {
  "image": "kitten.jpg"
}

Mar 24 07:32:19.721: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Mar 24 07:32:19.721: INFO: update-demo-kitten-yqqo8 is verified up and running
[AfterEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:32:19.721: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8rwcf" for this suite.

• [SLOW TEST:64.716 seconds]
[k8s.io] Kubectl client
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  [k8s.io] Update Demo
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
    should do a rolling update of a replication controller [Conformance]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:161
------------------------------
S
------------------------------
[k8s.io] Proxy version v1 
  should proxy logs on node [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:59
[BeforeEach] version v1
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:32:39.730: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:32:39.733: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-m28u6
Mar 24 07:32:39.734: INFO: Get service account default in ns e2e-tests-proxy-m28u6 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:32:41.736: INFO: Service account default in ns e2e-tests-proxy-m28u6 with secrets found. (2.002492098s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:32:41.736: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-m28u6
Mar 24 07:32:41.737: INFO: Service account default in ns e2e-tests-proxy-m28u6 with secrets found. (858.95µs)
[It] should proxy logs on node [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:59
Mar 24 07:32:41.766: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 28.235363ms)
Mar 24 07:32:41.768: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.317236ms)
Mar 24 07:32:41.771: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.681019ms)
Mar 24 07:32:41.776: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 4.992005ms)
Mar 24 07:32:41.778: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.203399ms)
Mar 24 07:32:41.781: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.206802ms)
Mar 24 07:32:41.783: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.24774ms)
Mar 24 07:32:41.785: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.186694ms)
Mar 24 07:32:41.787: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.107343ms)
Mar 24 07:32:41.789: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.105834ms)
Mar 24 07:32:41.792: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.150863ms)
Mar 24 07:32:41.794: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.181709ms)
Mar 24 07:32:41.796: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.167995ms)
Mar 24 07:32:41.798: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.121873ms)
Mar 24 07:32:41.800: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.334055ms)
Mar 24 07:32:41.803: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.185976ms)
Mar 24 07:32:41.805: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.142101ms)
Mar 24 07:32:41.807: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.104194ms)
Mar 24 07:32:41.809: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.121543ms)
Mar 24 07:32:41.811: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.087322ms)
[AfterEach] version v1
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:32:41.811: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-m28u6" for this suite.

• [SLOW TEST:7.089 seconds]
[k8s.io] Proxy
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  version v1
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:39
    should proxy logs on node [Conformance]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:59
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:41
[BeforeEach] [k8s.io] Docker Containers
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:32:46.818: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:32:46.820: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-cdt71
Mar 24 07:32:46.821: INFO: Get service account default in ns e2e-tests-containers-cdt71 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:32:48.823: INFO: Service account default in ns e2e-tests-containers-cdt71 with secrets found. (2.002510915s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:32:48.823: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-cdt71
Mar 24 07:32:48.824: INFO: Service account default in ns e2e-tests-containers-cdt71 with secrets found. (932.797µs)
[BeforeEach] [k8s.io] Docker Containers
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:35
[It] should use the image defaults if command and args are blank [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:41
STEP: Creating a pod to test use defaults
Mar 24 07:32:48.827: INFO: Waiting up to 5m0s for pod client-containers-9beddc0d-f192-11e5-8186-064a4ed57913 status to be success or failure
Mar 24 07:32:48.830: INFO: No Status.Info for container 'test-container' in pod 'client-containers-9beddc0d-f192-11e5-8186-064a4ed57913' yet
Mar 24 07:32:48.830: INFO: Waiting for pod client-containers-9beddc0d-f192-11e5-8186-064a4ed57913 in namespace 'e2e-tests-containers-cdt71' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.905409ms elapsed)
Mar 24 07:32:50.832: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-9beddc0d-f192-11e5-8186-064a4ed57913' in namespace 'e2e-tests-containers-cdt71' so far
Mar 24 07:32:50.832: INFO: Waiting for pod client-containers-9beddc0d-f192-11e5-8186-064a4ed57913 in namespace 'e2e-tests-containers-cdt71' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.004837829s elapsed)
Mar 24 07:32:52.835: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-9beddc0d-f192-11e5-8186-064a4ed57913' in namespace 'e2e-tests-containers-cdt71' so far
Mar 24 07:32:52.835: INFO: Waiting for pod client-containers-9beddc0d-f192-11e5-8186-064a4ed57913 in namespace 'e2e-tests-containers-cdt71' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.007560928s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod client-containers-9beddc0d-f192-11e5-8186-064a4ed57913 container test-container: <nil>
STEP: Successfully fetched pod logs:[/ep default arguments]


[AfterEach] [k8s.io] Docker Containers
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:32:54.855: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-cdt71" for this suite.

• [SLOW TEST:13.044 seconds]
[k8s.io] Docker Containers
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should use the image defaults if command and args are blank [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:41
------------------------------
[k8s.io] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:868
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:32:59.862: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:32:59.864: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-0q18w
Mar 24 07:32:59.865: INFO: Get service account default in ns e2e-tests-kubectl-0q18w failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:33:01.867: INFO: Service account default in ns e2e-tests-kubectl-0q18w with secrets found. (2.002560041s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:33:01.867: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-0q18w
Mar 24 07:33:01.868: INFO: Service account default in ns e2e-tests-kubectl-0q18w with secrets found. (963.671µs)
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:122
[It] should add annotations for pods in rc [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:868
STEP: creating Redis RC
Mar 24 07:33:01.868: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config create -f /root/src/k8s.io/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-0q18w'
Mar 24 07:33:01.919: INFO: stderr: ""
Mar 24 07:33:01.919: INFO: stdout: "replicationcontroller \"redis-master\" created"
STEP: patching all pods
Mar 24 07:33:03.923: INFO: Waiting up to 5m0s for pod redis-master-yszsb status to be running
Mar 24 07:33:03.924: INFO: Waiting for pod redis-master-yszsb in namespace 'e2e-tests-kubectl-0q18w' status to be 'running'(found phase: "Pending", readiness: false) (1.322584ms elapsed)
Mar 24 07:33:05.926: INFO: Waiting for pod redis-master-yszsb in namespace 'e2e-tests-kubectl-0q18w' status to be 'running'(found phase: "Pending", readiness: false) (2.003064651s elapsed)
Mar 24 07:33:07.928: INFO: Waiting for pod redis-master-yszsb in namespace 'e2e-tests-kubectl-0q18w' status to be 'running'(found phase: "Pending", readiness: false) (4.005090733s elapsed)
Mar 24 07:33:09.930: INFO: Found pod 'redis-master-yszsb' on node '127.0.0.1'
Mar 24 07:33:09.930: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config patch pod redis-master-yszsb --namespace=e2e-tests-kubectl-0q18w -p {"metadata":{"annotations":{"x":"y"}}}'
Mar 24 07:33:09.951: INFO: stderr: ""
Mar 24 07:33:09.951: INFO: stdout: "\"redis-master-yszsb\" patched"
STEP: checking annotations
Mar 24 07:33:09.953: INFO: Waiting up to 5m0s for pod redis-master-yszsb status to be running
Mar 24 07:33:09.954: INFO: Found pod 'redis-master-yszsb' on node '127.0.0.1'
[AfterEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:33:09.954: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-0q18w" for this suite.

• [SLOW TEST:30.099 seconds]
[k8s.io] Kubectl client
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  [k8s.io] Kubectl patch
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
    should add annotations for pods in rc [Conformance]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:868
------------------------------
S
------------------------------
[k8s.io] Pods 
  should be updated [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:492
[BeforeEach] [k8s.io] Pods
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:33:29.962: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:33:29.964: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-ctd8y
Mar 24 07:33:29.965: INFO: Get service account default in ns e2e-tests-pods-ctd8y failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:33:31.966: INFO: Service account default in ns e2e-tests-pods-ctd8y with secrets found. (2.002474266s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:33:31.966: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-ctd8y
Mar 24 07:33:31.967: INFO: Service account default in ns e2e-tests-pods-ctd8y with secrets found. (879.523µs)
[It] should be updated [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:492
STEP: creating the pod
STEP: submitting the pod to kubernetes
Mar 24 07:33:31.970: INFO: Waiting up to 5m0s for pod pod-update-b5a50054-f192-11e5-8186-064a4ed57913 status to be running
Mar 24 07:33:31.973: INFO: Waiting for pod pod-update-b5a50054-f192-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ctd8y' status to be 'running'(found phase: "Pending", readiness: false) (3.087226ms elapsed)
Mar 24 07:33:33.975: INFO: Waiting for pod pod-update-b5a50054-f192-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ctd8y' status to be 'running'(found phase: "Pending", readiness: false) (2.005123515s elapsed)
Mar 24 07:33:35.977: INFO: Waiting for pod pod-update-b5a50054-f192-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ctd8y' status to be 'running'(found phase: "Pending", readiness: false) (4.006975807s elapsed)
Mar 24 07:33:37.979: INFO: Waiting for pod pod-update-b5a50054-f192-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ctd8y' status to be 'running'(found phase: "Pending", readiness: false) (6.009033725s elapsed)
Mar 24 07:33:39.981: INFO: Found pod 'pod-update-b5a50054-f192-11e5-8186-064a4ed57913' on node '127.0.0.1'
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 24 07:33:40.486: INFO: Conflicting update to pod, re-get and re-update: pods "pod-update-b5a50054-f192-11e5-8186-064a4ed57913" cannot be updated: the object has been modified; please apply your changes to the latest version and try again
STEP: updating the pod
Mar 24 07:33:40.990: INFO: Successfully updated pod
Mar 24 07:33:40.990: INFO: Waiting up to 5m0s for pod pod-update-b5a50054-f192-11e5-8186-064a4ed57913 status to be running
Mar 24 07:33:40.992: INFO: Found pod 'pod-update-b5a50054-f192-11e5-8186-064a4ed57913' on node '127.0.0.1'
STEP: verifying the updated pod is in kubernetes
Mar 24 07:33:40.993: INFO: Pod update OK
STEP: deleting the pod
[AfterEach] [k8s.io] Pods
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:33:40.999: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-ctd8y" for this suite.

• [SLOW TEST:16.048 seconds]
[k8s.io] Pods
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should be updated [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:492
------------------------------
SS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default commmand (docker entrypoint) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:61
[BeforeEach] [k8s.io] Docker Containers
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:33:46.010: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:33:46.012: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-r20q0
Mar 24 07:33:46.013: INFO: Get service account default in ns e2e-tests-containers-r20q0 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:33:48.015: INFO: Service account default in ns e2e-tests-containers-r20q0 with secrets found. (2.002618582s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:33:48.015: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-r20q0
Mar 24 07:33:48.016: INFO: Service account default in ns e2e-tests-containers-r20q0 with secrets found. (856.635µs)
[BeforeEach] [k8s.io] Docker Containers
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:35
[It] should be able to override the image's default commmand (docker entrypoint) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:61
STEP: Creating a pod to test override command
Mar 24 07:33:48.018: INFO: Waiting up to 5m0s for pod client-containers-bf35c8b8-f192-11e5-8186-064a4ed57913 status to be success or failure
Mar 24 07:33:48.020: INFO: No Status.Info for container 'test-container' in pod 'client-containers-bf35c8b8-f192-11e5-8186-064a4ed57913' yet
Mar 24 07:33:48.020: INFO: Waiting for pod client-containers-bf35c8b8-f192-11e5-8186-064a4ed57913 in namespace 'e2e-tests-containers-r20q0' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.206288ms elapsed)
Mar 24 07:33:50.022: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-bf35c8b8-f192-11e5-8186-064a4ed57913' in namespace 'e2e-tests-containers-r20q0' so far
Mar 24 07:33:50.022: INFO: Waiting for pod client-containers-bf35c8b8-f192-11e5-8186-064a4ed57913 in namespace 'e2e-tests-containers-r20q0' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.004073553s elapsed)
Mar 24 07:33:52.024: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-bf35c8b8-f192-11e5-8186-064a4ed57913' in namespace 'e2e-tests-containers-r20q0' so far
Mar 24 07:33:52.024: INFO: Waiting for pod client-containers-bf35c8b8-f192-11e5-8186-064a4ed57913 in namespace 'e2e-tests-containers-r20q0' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.006114281s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod client-containers-bf35c8b8-f192-11e5-8186-064a4ed57913 container test-container: <nil>
STEP: Successfully fetched pod logs:[/ep-2]


[AfterEach] [k8s.io] Docker Containers
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:33:54.037: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-r20q0" for this suite.

• [SLOW TEST:13.034 seconds]
[k8s.io] Docker Containers
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should be able to override the image's default commmand (docker entrypoint) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:61
------------------------------
[k8s.io] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:778
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:33:59.044: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:33:59.046: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-va1eb
Mar 24 07:33:59.048: INFO: Get service account default in ns e2e-tests-kubectl-va1eb failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:34:01.049: INFO: Service account default in ns e2e-tests-kubectl-va1eb with secrets found. (2.002709364s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:34:01.049: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-va1eb
Mar 24 07:34:01.050: INFO: Service account default in ns e2e-tests-kubectl-va1eb with secrets found. (806.895µs)
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:122
[BeforeEach] [k8s.io] Kubectl label
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:754
STEP: creating the pod
Mar 24 07:34:01.050: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config create -f /root/src/k8s.io/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-va1eb'
Mar 24 07:34:01.103: INFO: stderr: ""
Mar 24 07:34:01.103: INFO: stdout: "pod \"nginx\" created"
Mar 24 07:34:01.103: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [nginx]
Mar 24 07:34:01.103: INFO: Waiting up to 5m0s for pod nginx status to be running and ready
Mar 24 07:34:01.105: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-va1eb' status to be 'running and ready'(found phase: "Pending", readiness: false) (1.44675ms elapsed)
Mar 24 07:34:03.107: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-va1eb' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.003310679s elapsed)
Mar 24 07:34:05.109: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-va1eb' status to be 'running and ready'(found phase: "Pending", readiness: false) (4.005266497s elapsed)
Mar 24 07:34:07.111: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [nginx]
[It] should update the label on a resource [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:778
STEP: adding the label testing-label with value testing-label-value to a pod
Mar 24 07:34:07.111: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config label pods nginx testing-label=testing-label-value --namespace=e2e-tests-kubectl-va1eb'
Mar 24 07:34:07.131: INFO: stderr: ""
Mar 24 07:34:07.131: INFO: stdout: "pod \"nginx\" labeled"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Mar 24 07:34:07.131: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pod nginx -L testing-label --namespace=e2e-tests-kubectl-va1eb'
Mar 24 07:34:07.144: INFO: stderr: ""
Mar 24 07:34:07.144: INFO: stdout: "NAME      READY     STATUS    RESTARTS   AGE       TESTING-LABEL\nnginx     1/1       Running   0          6s        testing-label-value"
STEP: removing the label testing-label of a pod
Mar 24 07:34:07.144: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config label pods nginx testing-label- --namespace=e2e-tests-kubectl-va1eb'
Mar 24 07:34:07.163: INFO: stderr: ""
Mar 24 07:34:07.163: INFO: stdout: "pod \"nginx\" labeled"
STEP: verifying the pod doesn't have the label testing-label
Mar 24 07:34:07.163: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pod nginx -L testing-label --namespace=e2e-tests-kubectl-va1eb'
Mar 24 07:34:07.176: INFO: stderr: ""
Mar 24 07:34:07.176: INFO: stdout: "NAME      READY     STATUS    RESTARTS   AGE       TESTING-LABEL\nnginx     1/1       Running   0          6s        <none>"
[AfterEach] [k8s.io] Kubectl label
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:757
STEP: using delete to clean up resources
Mar 24 07:34:07.176: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config delete --grace-period=0 -f /root/src/k8s.io/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-va1eb'
Mar 24 07:34:07.204: INFO: stderr: ""
Mar 24 07:34:07.204: INFO: stdout: "pod \"nginx\" deleted"
Mar 24 07:34:07.204: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-va1eb'
Mar 24 07:34:07.222: INFO: stderr: ""
Mar 24 07:34:07.222: INFO: stdout: ""
Mar 24 07:34:07.222: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-va1eb -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 24 07:34:07.239: INFO: stderr: ""
Mar 24 07:34:07.239: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:34:07.239: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-va1eb" for this suite.

• [SLOW TEST:13.203 seconds]
[k8s.io] Kubectl client
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  [k8s.io] Kubectl label
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
    should update the label on a resource [Conformance]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:778
------------------------------
S
------------------------------
[k8s.io] Pods 
  should *not* be restarted with a /healthz http liveness probe [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:805
[BeforeEach] [k8s.io] Pods
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:34:12.248: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:34:12.250: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-0od4e
Mar 24 07:34:12.251: INFO: Get service account default in ns e2e-tests-pods-0od4e failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:34:14.253: INFO: Service account default in ns e2e-tests-pods-0od4e with secrets found. (2.003079161s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:34:14.253: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-0od4e
Mar 24 07:34:14.254: INFO: Service account default in ns e2e-tests-pods-0od4e with secrets found. (868.717µs)
[It] should *not* be restarted with a /healthz http liveness probe [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:805
STEP: Creating pod liveness-http in namespace e2e-tests-pods-0od4e
Mar 24 07:34:14.257: INFO: Waiting up to 5m0s for pod liveness-http status to be !pending
Mar 24 07:34:14.260: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-0od4e' status to be '!pending'(found phase: "Pending", readiness: false) (2.497969ms elapsed)
Mar 24 07:34:16.262: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-0od4e' status to be '!pending'(found phase: "Pending", readiness: false) (2.004514171s elapsed)
Mar 24 07:34:18.264: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-0od4e' status to be '!pending'(found phase: "Pending", readiness: false) (4.006538844s elapsed)
Mar 24 07:34:20.266: INFO: Saw pod 'liveness-http' in namespace 'e2e-tests-pods-0od4e' out of pending state (found '"Running"')
Mar 24 07:34:20.266: INFO: Started pod liveness-http in namespace e2e-tests-pods-0od4e
STEP: checking the pod's current state and verifying that restartCount is present
Mar 24 07:34:20.267: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Pods
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:36:20.408: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-0od4e" for this suite.

• [SLOW TEST:133.170 seconds]
[k8s.io] Pods
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should *not* be restarted with a /healthz http liveness probe [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:805
------------------------------
[k8s.io] EmptyDir volumes 
  should support (root,0644,default) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:92
[BeforeEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:36:25.418: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:36:25.420: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-b7th3
Mar 24 07:36:25.421: INFO: Get service account default in ns e2e-tests-emptydir-b7th3 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:36:27.423: INFO: Service account default in ns e2e-tests-emptydir-b7th3 with secrets found. (2.002874721s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:36:27.423: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-b7th3
Mar 24 07:36:27.424: INFO: Service account default in ns e2e-tests-emptydir-b7th3 with secrets found. (881.76µs)
[It] should support (root,0644,default) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:92
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 24 07:36:27.425: FAIL: Failed to create pod: pods "pod-1e398f51-f193-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden
[AfterEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
STEP: Collecting events from namespace "e2e-tests-emptydir-b7th3".
Mar 24 07:36:27.433: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Mar 24 07:36:27.434: INFO: 
Mar 24 07:36:27.435: INFO: 
Logging node info for node 127.0.0.1
Mar 24 07:36:27.436: INFO: Node Info: &{{ } {127.0.0.1   /api/v1/nodes/127.0.0.1 3601f765-f192-11e5-a8cf-064a4ed57913 319 0 2016-03-24 07:29:57 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1  false} {map[memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI} cpu:{2.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}]  [{OutOfDisk False 2016-03-24 07:36:20 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-24 07:36:20 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {0af04d3c78a943ae8f3cc26602e374f2 EC286296-F1A1-3AB3-2979-B08D0FE5BF4A fdfc86c6-a0ae-41b4-8ab8-29b0b8fd1e8b 3.10.0-229.20.1.el7.x86_64 CentOS Linux 7 (Core) hyper://0.5.0 v1.3.0-alpha.0.854+d9aa83a25cdc21 v1.3.0-alpha.0.854+d9aa83a25cdc21} []}}
Mar 24 07:36:27.436: INFO: 
Logging kubelet events for node 127.0.0.1
Mar 24 07:36:27.437: INFO: 
Logging pods the kubelet thinks is on node 127.0.0.1
Mar 24 07:36:27.464: INFO: 
Latency metrics for node 127.0.0.1
Mar 24 07:36:27.464: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:39.473686s}
Mar 24 07:36:27.464: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:25.484152s}
Mar 24 07:36:27.464: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-b7th3" for this suite.

• Failure [7.053 seconds]
[k8s.io] EmptyDir volumes
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should support (root,0644,default) [Conformance] [It]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:92

  Mar 24 07:36:27.425: Failed to create pod: pods "pod-1e398f51-f193-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden

  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685
------------------------------
SS
------------------------------
[k8s.io] Services 
  should serve multiport endpoints from pods [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:224
[BeforeEach] [k8s.io] Services
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:36:32.471: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:36:32.474: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-b97ht
Mar 24 07:36:32.475: INFO: Get service account default in ns e2e-tests-services-b97ht failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:36:34.476: INFO: Service account default in ns e2e-tests-services-b97ht with secrets found. (2.002612196s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:36:34.476: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-b97ht
Mar 24 07:36:34.477: INFO: Service account default in ns e2e-tests-services-b97ht with secrets found. (843.998µs)
[BeforeEach] [k8s.io] Services
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:73
Mar 24 07:36:34.477: INFO: >>> testContext.KubeConfig: /root/.kube/config

[It] should serve multiport endpoints from pods [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:224
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-b97ht
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-b97ht to expose endpoints map[]
Mar 24 07:36:34.485: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-b97ht exposes endpoints map[] (1.500462ms elapsed)
STEP: creating pod pod1 in namespace e2e-tests-services-b97ht
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-b97ht to expose endpoints map[pod1:[100]]
Mar 24 07:36:38.504: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.014154724s elapsed, will retry)
Mar 24 07:36:40.509: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-b97ht exposes endpoints map[pod1:[100]] (6.019497085s elapsed)
STEP: creating pod pod2 in namespace e2e-tests-services-b97ht
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-b97ht to expose endpoints map[pod1:[100] pod2:[101]]
Mar 24 07:36:44.545: INFO: Unexpected endpoints: found map[226f527e-f193-11e5-a8cf-064a4ed57913:[100]], expected map[pod1:[100] pod2:[101]] (4.032919591s elapsed, will retry)
Mar 24 07:36:46.553: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-b97ht exposes endpoints map[pod1:[100] pod2:[101]] (6.040823459s elapsed)
STEP: deleting pod pod1 in namespace e2e-tests-services-b97ht
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-b97ht to expose endpoints map[pod2:[101]]
Mar 24 07:36:47.563: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-b97ht exposes endpoints map[pod2:[101]] (1.006485026s elapsed)
STEP: deleting pod pod2 in namespace e2e-tests-services-b97ht
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-b97ht to expose endpoints map[]
Mar 24 07:36:48.574: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-b97ht exposes endpoints map[] (1.003052178s elapsed)
[AfterEach] [k8s.io] Services
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:36:48.591: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-services-b97ht" for this suite.

• [SLOW TEST:36.159 seconds]
[k8s.io] Services
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should serve multiport endpoints from pods [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:224
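  For reference, the flow above is: create a service with two named ports, start pod1 and pod2 behind its selector, and wait for the endpoints object to map pod1 to port 100 and pod2 to port 101 (and back to map[] as the pods are deleted). A minimal sketch of that wait loop in Go, polling `kubectl get endpoints -o json`; the namespace, service name, and port numbers are the ones from this run, everything else is illustrative:

    // waitendpoints.go: poll the endpoints object until it matches the
    // expected pod-name -> ports map, like the "waiting up to 1m0s" steps above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "reflect"
        "time"
    )

    // endpoints is the subset of the v1 Endpoints JSON this check needs.
    type endpoints struct {
        Subsets []struct {
            Addresses []struct {
                TargetRef struct {
                    Name string `json:"name"`
                } `json:"targetRef"`
            } `json:"addresses"`
            Ports []struct {
                Port int `json:"port"`
            } `json:"ports"`
        } `json:"subsets"`
    }

    func observed(ns, svc string) (map[string][]int, error) {
        out, err := exec.Command("kubectl", "get", "endpoints", svc,
            "--namespace="+ns, "-o", "json").Output()
        if err != nil {
            return nil, err
        }
        var ep endpoints
        if err := json.Unmarshal(out, &ep); err != nil {
            return nil, err
        }
        got := map[string][]int{}
        for _, ss := range ep.Subsets {
            for _, a := range ss.Addresses {
                for _, p := range ss.Ports {
                    got[a.TargetRef.Name] = append(got[a.TargetRef.Name], p.Port)
                }
            }
        }
        return got, nil
    }

    func main() {
        want := map[string][]int{"pod1": {100}, "pod2": {101}}
        deadline := time.Now().Add(time.Minute)
        for time.Now().Before(deadline) {
            got, err := observed("e2e-tests-services-b97ht", "multi-endpoint-test")
            if err == nil && reflect.DeepEqual(got, want) {
                fmt.Println("endpoints match:", got)
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for endpoints")
    }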
------------------------------
SSSSS
------------------------------
[k8s.io] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:839
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:37:08.630: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:37:08.634: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-9ekik
Mar 24 07:37:08.634: INFO: Get service account default in ns e2e-tests-kubectl-9ekik failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:37:10.636: INFO: Service account default in ns e2e-tests-kubectl-9ekik with secrets found. (2.002179429s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:37:10.636: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-9ekik
Mar 24 07:37:10.637: INFO: Service account default in ns e2e-tests-kubectl-9ekik with secrets found. (840.335µs)
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:122
[BeforeEach] [k8s.io] Kubectl logs
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:793
STEP: creating an rc
Mar 24 07:37:10.637: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config create -f /root/src/k8s.io/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-9ekik'
Mar 24 07:37:10.690: INFO: stderr: ""
Mar 24 07:37:10.690: INFO: stdout: "replicationcontroller \"redis-master\" created"
[It] should be able to retrieve and filter logs [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:839
Mar 24 07:37:12.693: INFO: Waiting up to 5m0s for pod redis-master-t8seh status to be running
Mar 24 07:37:12.695: INFO: Waiting for pod redis-master-t8seh in namespace 'e2e-tests-kubectl-9ekik' status to be 'running' (found phase: "Pending", readiness: false) (1.825027ms elapsed)
Mar 24 07:37:14.697: INFO: Waiting for pod redis-master-t8seh in namespace 'e2e-tests-kubectl-9ekik' status to be 'running' (found phase: "Pending", readiness: false) (2.003819212s elapsed)
Mar 24 07:37:16.699: INFO: Waiting for pod redis-master-t8seh in namespace 'e2e-tests-kubectl-9ekik' status to be 'running' (found phase: "Pending", readiness: false) (4.005853511s elapsed)
Mar 24 07:37:18.702: INFO: Found pod 'redis-master-t8seh' on node '127.0.0.1'
STEP: checking for matching strings
Mar 24 07:37:18.702: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config log redis-master-t8seh redis-master --namespace=e2e-tests-kubectl-9ekik'
Mar 24 07:37:18.724: INFO: stderr: ""
Mar 24 07:37:18.724: INFO: stdout: "2:C 24 Mar 07:37:14.890 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf\n2:M 24 Mar 07:37:14.907 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.\n2:M 24 Mar 07:37:14.910 # Redis can't set maximum open files to 10032 because of OS error: Operation not permitted.\n2:M 24 Mar 07:37:14.912 # Current maximum open files is 4096. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.\n                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.0.7 (00000000/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 2\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n2:M 24 Mar 07:37:14.946 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n2:M 24 Mar 07:37:14.947 # Server started, Redis version 3.0.7\n2:M 24 Mar 07:37:14.954 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.\n2:M 24 Mar 07:37:14.956 * The server is now ready to accept connections on port 6379"
STEP: limiting log lines
Mar 24 07:37:18.724: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config log redis-master-t8seh redis-master --namespace=e2e-tests-kubectl-9ekik --tail=1'
Mar 24 07:37:18.743: INFO: stderr: ""
Mar 24 07:37:18.743: INFO: stdout: "2:M 24 Mar 07:37:14.956 * The server is now ready to accept connections on port 6379"
STEP: limiting log bytes
Mar 24 07:37:18.743: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config log redis-master-t8seh redis-master --namespace=e2e-tests-kubectl-9ekik --limit-bytes=1'
Mar 24 07:37:18.763: INFO: stderr: ""
Mar 24 07:37:18.763: INFO: stdout: "2"
STEP: exposing timestamps
Mar 24 07:37:18.763: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config log redis-master-t8seh redis-master --namespace=e2e-tests-kubectl-9ekik --tail=1 --timestamps'
Mar 24 07:37:18.780: INFO: stderr: ""
Mar 24 07:37:18.780: INFO: stdout: "2016-03-24T07:37:16.430831165Z 2:M 24 Mar 07:37:14.956 * The server is now ready to accept connections on port 6379"
STEP: restricting to a time range
Mar 24 07:37:21.281: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config log redis-master-t8seh redis-master --namespace=e2e-tests-kubectl-9ekik --since=1s'
Mar 24 07:37:21.301: INFO: stderr: ""
Mar 24 07:37:21.301: INFO: stdout: "2:C 24 Mar 07:37:14.890 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf\n2:M 24 Mar 07:37:14.907 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.\n2:M 24 Mar 07:37:14.910 # Redis can't set maximum open files to 10032 because of OS error: Operation not permitted.\n2:M 24 Mar 07:37:14.912 # Current maximum open files is 4096. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.\n                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.0.7 (00000000/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 2\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n2:M 24 Mar 07:37:14.946 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n2:M 24 Mar 07:37:14.947 # Server started, Redis version 3.0.7\n2:M 24 Mar 07:37:14.954 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.\n2:M 24 Mar 07:37:14.956 * The server is now ready to accept connections on port 6379"
Mar 24 07:37:21.301: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config log redis-master-t8seh redis-master --namespace=e2e-tests-kubectl-9ekik --since=24h'
Mar 24 07:37:21.322: INFO: stderr: ""
Mar 24 07:37:21.322: INFO: stdout: "2:C 24 Mar 07:37:14.890 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf\n2:M 24 Mar 07:37:14.907 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.\n2:M 24 Mar 07:37:14.910 # Redis can't set maximum open files to 10032 because of OS error: Operation not permitted.\n2:M 24 Mar 07:37:14.912 # Current maximum open files is 4096. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.\n                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.0.7 (00000000/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 2\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n2:M 24 Mar 07:37:14.946 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n2:M 24 Mar 07:37:14.947 # Server started, Redis version 3.0.7\n2:M 24 Mar 07:37:14.954 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.\n2:M 24 Mar 07:37:14.956 * The server is now ready to accept connections on port 6379"
[AfterEach] [k8s.io] Kubectl logs
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:796
STEP: using delete to clean up resources
Mar 24 07:37:21.326: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config delete --grace-period=0 -f /root/src/k8s.io/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-9ekik'
Mar 24 07:37:23.412: INFO: stderr: ""
Mar 24 07:37:23.412: INFO: stdout: "replicationcontroller \"redis-master\" deleted"
Mar 24 07:37:23.412: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-9ekik'
Mar 24 07:37:23.436: INFO: stderr: ""
Mar 24 07:37:23.436: INFO: stdout: ""
Mar 24 07:37:23.436: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-9ekik -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 24 07:37:23.448: INFO: stderr: ""
Mar 24 07:37:23.448: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
STEP: Collecting events from namespace "e2e-tests-kubectl-9ekik".
Mar 24 07:37:23.452: INFO: At 2016-03-24 07:37:10 +0000 UTC - event for redis-master: {replication-controller } SuccessfulCreate: Created pod: redis-master-t8seh
Mar 24 07:37:23.452: INFO: At 2016-03-24 07:37:10 +0000 UTC - event for redis-master-t8seh: {default-scheduler } Scheduled: Successfully assigned redis-master-t8seh to 127.0.0.1
Mar 24 07:37:23.452: INFO: At 2016-03-24 07:37:10 +0000 UTC - event for redis-master-t8seh: {kubelet 127.0.0.1} Pulling: pulling image "redis"
Mar 24 07:37:23.452: INFO: At 2016-03-24 07:37:10 +0000 UTC - event for redis-master-t8seh: {kubelet 127.0.0.1} Pulled: Successfully pulled image "redis"
Mar 24 07:37:23.452: INFO: At 2016-03-24 07:37:22 +0000 UTC - event for redis-master: {replication-controller } SuccessfulDelete: Deleted pod: redis-master-t8seh
Mar 24 07:37:23.453: INFO: POD                 NODE       PHASE    GRACE  CONDITIONS
Mar 24 07:37:23.453: INFO: redis-master-t8seh  127.0.0.1  Running  30s    [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-24 07:37:16 +0000 UTC  }]
Mar 24 07:37:23.453: INFO: 
Mar 24 07:37:23.455: INFO: 
Logging node info for node 127.0.0.1
Mar 24 07:37:23.456: INFO: Node Info: &{{ } {127.0.0.1   /api/v1/nodes/127.0.0.1 3601f765-f192-11e5-a8cf-064a4ed57913 403 0 2016-03-24 07:29:57 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1  false} {map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}]  [{OutOfDisk False 2016-03-24 07:37:20 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-24 07:37:20 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {0af04d3c78a943ae8f3cc26602e374f2 EC286296-F1A1-3AB3-2979-B08D0FE5BF4A fdfc86c6-a0ae-41b4-8ab8-29b0b8fd1e8b 3.10.0-229.20.1.el7.x86_64 CentOS Linux 7 (Core) hyper://0.5.0 v1.3.0-alpha.0.854+d9aa83a25cdc21 v1.3.0-alpha.0.854+d9aa83a25cdc21} []}}
Mar 24 07:37:23.456: INFO: 
Logging kubelet events for node 127.0.0.1
Mar 24 07:37:23.457: INFO: 
Logging pods the kubelet thinks are on node 127.0.0.1
Mar 24 07:37:23.459: INFO: redis-master-t8seh started at 2016-03-24 07:37:10 +0000 UTC (1 container statuses recorded)
Mar 24 07:37:23.459: INFO: 	Container redis-master ready: true, restart count 0
Mar 24 07:37:23.708: INFO: 
Latency metrics for node 127.0.0.1
Mar 24 07:37:23.708: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:39.473686s}
Mar 24 07:37:23.708: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:18.391778s}
Mar 24 07:37:23.708: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9ekik" for this suite.

• Failure [35.086 seconds]
[k8s.io] Kubectl client
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  [k8s.io] Kubectl logs
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
    should be able to retrieve and filter logs [Conformance] [It]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:839

    expected recent(26) to be less than older(26)
    recent lines:
    2:C 24 Mar 07:37:14.890 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
    2:M 24 Mar 07:37:14.907 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.
    2:M 24 Mar 07:37:14.910 # Redis can't set maximum open files to 10032 because of OS error: Operation not permitted.
    2:M 24 Mar 07:37:14.912 # Current maximum open files is 4096. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.
                    _._                                                  
               _.-``__ ''-._                                             
          _.-``    `.  `_.  ''-._           Redis 3.0.7 (00000000/0) 64 bit
      .-`` .-```.  ```\/    _.,_ ''-._                                   
     (    '      ,       .-`  | `,    )     Running in standalone mode
     |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
     |    `-._   `._    /     _.-'    |     PID: 2
      `-._    `-._  `-./  _.-'    _.-'                                   
     |`-._`-._    `-.__.-'    _.-'_.-'|                                  
     |    `-._`-._        _.-'_.-'    |           http://redis.io        
      `-._    `-._`-.__.-'_.-'    _.-'                                   
     |`-._`-._    `-.__.-'    _.-'_.-'|                                  
     |    `-._`-._        _.-'_.-'    |                                  
      `-._    `-._`-.__.-'_.-'    _.-'                                   
          `-._    `-.__.-'    _.-'                                       
              `-._        _.-'                                           
                  `-.__.-'                                               
    
    2:M 24 Mar 07:37:14.946 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
    2:M 24 Mar 07:37:14.947 # Server started, Redis version 3.0.7
    2:M 24 Mar 07:37:14.954 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
    2:M 24 Mar 07:37:14.956 * The server is now ready to accept connections on port 6379
    older lines:
    2:C 24 Mar 07:37:14.890 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
    2:M 24 Mar 07:37:14.907 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.
    2:M 24 Mar 07:37:14.910 # Redis can't set maximum open files to 10032 because of OS error: Operation not permitted.
    2:M 24 Mar 07:37:14.912 # Current maximum open files is 4096. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.
                    _._                                                  
               _.-``__ ''-._                                             
          _.-``    `.  `_.  ''-._           Redis 3.0.7 (00000000/0) 64 bit
      .-`` .-```.  ```\/    _.,_ ''-._                                   
     (    '      ,       .-`  | `,    )     Running in standalone mode
     |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
     |    `-._   `._    /     _.-'    |     PID: 2
      `-._    `-._  `-./  _.-'    _.-'                                   
     |`-._`-._    `-.__.-'    _.-'_.-'|                                  
     |    `-._`-._        _.-'_.-'    |           http://redis.io        
      `-._    `-._`-.__.-'_.-'    _.-'                                   
     |`-._`-._    `-.__.-'    _.-'_.-'|                                  
     |    `-._`-._        _.-'_.-'    |                                  
      `-._    `-._`-.__.-'_.-'    _.-'                                   
          `-._    `-.__.-'    _.-'                                       
              `-._        _.-'                                           
                  `-.__.-'                                               
    
    2:M 24 Mar 07:37:14.946 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
    2:M 24 Mar 07:37:14.947 # Server started, Redis version 3.0.7
    2:M 24 Mar 07:37:14.954 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
    2:M 24 Mar 07:37:14.956 * The server is now ready to accept connections on port 6379
    
    Expected
        <int>: 26
    to be <
        <int>: 26

    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:837
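    What failed here: after waiting roughly 2.5s (07:37:18.780 to 07:37:21.281 above), the test compares the line count of `kubectl log --since=1s` against `--since=24h` and expects strictly fewer recent lines, but both calls returned the full 26 lines, which suggests the --since time filter is not being applied on this runtime's log path. A minimal sketch of the same check in Go, reusing the pod, container, and namespace names from this run:

    // sincecheck.go: re-run the two `kubectl log --since=...` calls from this
    // test and compare line counts; recent (1s) should be strictly fewer.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func logLines(since string) (int, error) {
        out, err := exec.Command("kubectl", "log", "redis-master-t8seh", "redis-master",
            "--namespace=e2e-tests-kubectl-9ekik", "--since="+since).Output()
        if err != nil {
            return 0, err
        }
        return len(strings.Split(strings.TrimSpace(string(out)), "\n")), nil
    }

    func main() {
        recent, err := logLines("1s")
        if err != nil {
            fmt.Println("kubectl failed:", err)
            return
        }
        older, err := logLines("24h")
        if err != nil {
            fmt.Println("kubectl failed:", err)
            return
        }
        fmt.Printf("recent=%d older=%d\n", recent, older)
        if recent >= older {
            fmt.Println("FAIL: --since filtering not applied (expected recent < older)")
        }
    }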
------------------------------
SSSSSSSS
------------------------------
[k8s.io] ConfigMap 
  updates should be reflected in volume [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:272
[BeforeEach] [k8s.io] ConfigMap
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:37:43.716: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:37:43.718: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-configmap-4y0d0
Mar 24 07:37:43.719: INFO: Get service account default in ns e2e-tests-configmap-4y0d0 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:37:45.721: INFO: Service account default in ns e2e-tests-configmap-4y0d0 with secrets found. (2.002585386s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:37:45.721: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-configmap-4y0d0
Mar 24 07:37:45.722: INFO: Service account default in ns e2e-tests-configmap-4y0d0 with secrets found. (898.203µs)
[It] updates should be reflected in volume [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:272
STEP: Creating configMap with name configmap-test-upd-4ce4dbaa-f193-11e5-8186-064a4ed57913
STEP: Creating the pod
Mar 24 07:37:45.728: INFO: Waiting up to 5m0s for pod pod-configmaps-4ce54792-f193-11e5-8186-064a4ed57913 status to be running
Mar 24 07:37:45.730: INFO: Waiting for pod pod-configmaps-4ce54792-f193-11e5-8186-064a4ed57913 in namespace 'e2e-tests-configmap-4y0d0' status to be 'running' (found phase: "Pending", readiness: false) (2.347576ms elapsed)
Mar 24 07:37:47.732: INFO: Waiting for pod pod-configmaps-4ce54792-f193-11e5-8186-064a4ed57913 in namespace 'e2e-tests-configmap-4y0d0' status to be 'running' (found phase: "Pending", readiness: false) (2.004261789s elapsed)
Mar 24 07:37:49.734: INFO: Waiting for pod pod-configmaps-4ce54792-f193-11e5-8186-064a4ed57913 in namespace 'e2e-tests-configmap-4y0d0' status to be 'running' (found phase: "Pending", readiness: false) (4.006309863s elapsed)
Mar 24 07:37:51.736: INFO: Found pod 'pod-configmaps-4ce54792-f193-11e5-8186-064a4ed57913' on node '127.0.0.1'
STEP: Updating configmap configmap-test-upd-4ce4dbaa-f193-11e5-8186-064a4ed57913
STEP: waiting to observe update in volume
STEP: Deleting the pod
STEP: Cleaning up the configMap
[AfterEach] [k8s.io] ConfigMap
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:39:11.977: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-4y0d0" for this suite.

• [SLOW TEST:93.271 seconds]
[k8s.io] ConfigMap
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  updates should be reflected in volume [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:272
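  The long pause above ("waiting to observe update in volume", most of the 93s total) is the kubelet's periodic sync propagating the changed ConfigMap into the mounted volume. A minimal sketch of the same update-then-poll flow driven through kubectl; only the namespace and configMap name come from this run, while the pod name is hypothetical and the key, value-2, and mount path are illustrative, modeled on the companion test below that reads /etc/configmap-volume/data-1:

    // cmupdate.go: patch the ConfigMap key, then poll the mounted file until
    // the new value appears after the kubelet's next volume sync.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        ns := "e2e-tests-configmap-4y0d0"
        cm := "configmap-test-upd-4ce4dbaa-f193-11e5-8186-064a4ed57913"
        pod := "pod-configmaps-demo" // hypothetical; the real test pod tails the file itself
        patch := `{"data":{"data-1":"value-2"}}` // illustrative key/value
        if out, err := exec.Command("kubectl", "patch", "configmap", cm,
            "--namespace="+ns, "-p", patch).CombinedOutput(); err != nil {
            fmt.Println("patch failed:", err, string(out))
            return
        }
        deadline := time.Now().Add(5 * time.Minute)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "exec", pod, "--namespace="+ns,
                "--", "cat", "/etc/configmap-volume/data-1").Output()
            if err == nil && strings.TrimSpace(string(out)) == "value-2" {
                fmt.Println("update observed in volume")
                return
            }
            time.Sleep(5 * time.Second)
        }
        fmt.Println("timed out waiting for update")
    }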
------------------------------
S
------------------------------
[k8s.io] ConfigMap 
  should be consumable from pods in volume [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:101
[BeforeEach] [k8s.io] ConfigMap
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:39:16.987: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:39:16.989: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-configmap-7ry9u
Mar 24 07:39:16.990: INFO: Get service account default in ns e2e-tests-configmap-7ry9u failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:39:18.992: INFO: Service account default in ns e2e-tests-configmap-7ry9u with secrets found. (2.00219448s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:39:18.992: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-configmap-7ry9u
Mar 24 07:39:18.992: INFO: Service account default in ns e2e-tests-configmap-7ry9u with secrets found. (833.254µs)
[It] should be consumable from pods in volume [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:101
STEP: Creating configMap with name configmap-test-volume-847cdb6e-f193-11e5-8186-064a4ed57913
STEP: Creating a pod to test consume configMaps
Mar 24 07:39:18.998: INFO: Waiting up to 5m0s for pod pod-configmaps-847d12b9-f193-11e5-8186-064a4ed57913 status to be success or failure
Mar 24 07:39:18.999: INFO: No Status.Info for container 'configmap-volume-test' in pod 'pod-configmaps-847d12b9-f193-11e5-8186-064a4ed57913' yet
Mar 24 07:39:18.999: INFO: Waiting for pod pod-configmaps-847d12b9-f193-11e5-8186-064a4ed57913 in namespace 'e2e-tests-configmap-7ry9u' status to be 'success or failure' (found phase: "Pending", readiness: false) (1.509495ms elapsed)
Mar 24 07:39:21.002: INFO: Nil State.Terminated for container 'configmap-volume-test' in pod 'pod-configmaps-847d12b9-f193-11e5-8186-064a4ed57913' in namespace 'e2e-tests-configmap-7ry9u' so far
Mar 24 07:39:21.002: INFO: Waiting for pod pod-configmaps-847d12b9-f193-11e5-8186-064a4ed57913 in namespace 'e2e-tests-configmap-7ry9u' status to be 'success or failure' (found phase: "Pending", readiness: false) (2.004093244s elapsed)
Mar 24 07:39:23.004: INFO: Nil State.Terminated for container 'configmap-volume-test' in pod 'pod-configmaps-847d12b9-f193-11e5-8186-064a4ed57913' in namespace 'e2e-tests-configmap-7ry9u' so far
Mar 24 07:39:23.004: INFO: Waiting for pod pod-configmaps-847d12b9-f193-11e5-8186-064a4ed57913 in namespace 'e2e-tests-configmap-7ry9u' status to be 'success or failure' (found phase: "Pending", readiness: false) (4.006079157s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-configmaps-847d12b9-f193-11e5-8186-064a4ed57913 container configmap-volume-test: <nil>
STEP: Successfully fetched pod logs: content of file "/etc/configmap-volume/data-1": value-1


STEP: Cleaning up the configMap
[AfterEach] [k8s.io] ConfigMap
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:39:25.020: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-7ry9u" for this suite.

• [SLOW TEST:13.041 seconds]
[k8s.io] ConfigMap
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should be consumable from pods in volume [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:101
------------------------------
[k8s.io] EmptyDir volumes 
  should support (non-root,0777,default) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:112
[BeforeEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:39:30.028: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:39:30.030: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-zen1q
Mar 24 07:39:30.031: INFO: Get service account default in ns e2e-tests-emptydir-zen1q failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:39:32.033: INFO: Service account default in ns e2e-tests-emptydir-zen1q with secrets found. (2.00290156s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:39:32.033: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-zen1q
Mar 24 07:39:32.034: INFO: Service account default in ns e2e-tests-emptydir-zen1q with secrets found. (850.546µs)
[It] should support (non-root,0777,default) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:112
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 24 07:39:32.035: FAIL: Failed to create pod: pods "pod-8c42c6d0-f193-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden
[AfterEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
STEP: Collecting events from namespace "e2e-tests-emptydir-zen1q".
Mar 24 07:39:32.041: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Mar 24 07:39:32.041: INFO: 
Mar 24 07:39:32.042: INFO: 
Logging node info for node 127.0.0.1
Mar 24 07:39:32.043: INFO: Node Info: &{{ } {127.0.0.1   /api/v1/nodes/127.0.0.1 3601f765-f192-11e5-a8cf-064a4ed57913 484 0 2016-03-24 07:29:57 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1  false} {map[pods:{110.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI}] map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}]  [{OutOfDisk False 2016-03-24 07:39:31 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-24 07:39:31 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {0af04d3c78a943ae8f3cc26602e374f2 EC286296-F1A1-3AB3-2979-B08D0FE5BF4A fdfc86c6-a0ae-41b4-8ab8-29b0b8fd1e8b 3.10.0-229.20.1.el7.x86_64 CentOS Linux 7 (Core) hyper://0.5.0 v1.3.0-alpha.0.854+d9aa83a25cdc21 v1.3.0-alpha.0.854+d9aa83a25cdc21} []}}
Mar 24 07:39:32.043: INFO: 
Logging kubelet events for node 127.0.0.1
Mar 24 07:39:32.044: INFO: 
Logging pods the kubelet thinks are on node 127.0.0.1
Mar 24 07:39:32.075: INFO: 
Latency metrics for node 127.0.0.1
Mar 24 07:39:32.075: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:39.473686s}
Mar 24 07:39:32.075: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:15.222838s}
Mar 24 07:39:32.075: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-zen1q" for this suite.

• Failure [7.055 seconds]
[k8s.io] EmptyDir volumes
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should support (non-root,0777,default) [Conformance] [It]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:112

  Mar 24 07:39:32.035: Failed to create pod: pods "pod-8c42c6d0-f193-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden

  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685
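  Same root cause as the (root,0644,default) failure earlier in this run: the pod is rejected at admission because the EmptyDir test pods set SELinuxOptions. See the sketch after that earlier failure for an equivalent pod spec without the field.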
------------------------------
SS
------------------------------
[k8s.io] Service endpoints latency 
  should not be very high [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:114
[BeforeEach] [k8s.io] Service endpoints latency
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:39:37.083: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:39:37.085: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-svc-latency-2z02q
Mar 24 07:39:37.086: INFO: Get service account default in ns e2e-tests-svc-latency-2z02q failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:39:39.088: INFO: Service account default in ns e2e-tests-svc-latency-2z02q with secrets found. (2.002420125s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:39:39.088: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-svc-latency-2z02q
Mar 24 07:39:39.089: INFO: Service account default in ns e2e-tests-svc-latency-2z02q with secrets found. (840.419µs)
[It] should not be very high [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:114
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-2z02q
Mar 24 07:39:39.092: INFO: Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-2z02q, replica count: 1
Mar 24 07:39:40.092: INFO: svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar 24 07:39:41.092: INFO: svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar 24 07:39:42.092: INFO: svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar 24 07:39:43.093: INFO: svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar 24 07:39:44.093: INFO: svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar 24 07:39:45.093: INFO: svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar 24 07:39:45.198: INFO: Created: latency-svc-5om65
Mar 24 07:39:45.202: INFO: Got endpoints: latency-svc-5om65 [8.734428ms]
Mar 24 07:39:45.228: INFO: Created: latency-svc-cuyj5
Mar 24 07:39:45.233: INFO: Created: latency-svc-mvfc3
Mar 24 07:39:45.235: INFO: Got endpoints: latency-svc-cuyj5 [19.143498ms]
Mar 24 07:39:45.241: INFO: Created: latency-svc-dcfud
Mar 24 07:39:45.241: INFO: Got endpoints: latency-svc-mvfc3 [26.34928ms]
Mar 24 07:39:45.244: INFO: Created: latency-svc-kkqk5
Mar 24 07:39:45.253: INFO: Created: latency-svc-80iqx
Mar 24 07:39:45.256: INFO: Got endpoints: latency-svc-dcfud [41.012563ms]
Mar 24 07:39:45.257: INFO: Got endpoints: latency-svc-kkqk5 [42.260653ms]
Mar 24 07:39:45.260: INFO: Created: latency-svc-d9ney
Mar 24 07:39:45.263: INFO: Got endpoints: latency-svc-80iqx [47.634956ms]
Mar 24 07:39:45.269: INFO: Got endpoints: latency-svc-d9ney [53.640366ms]
Mar 24 07:39:45.271: INFO: Created: latency-svc-4whxr
Mar 24 07:39:45.284: INFO: Got endpoints: latency-svc-4whxr [68.189892ms]
Mar 24 07:39:45.287: INFO: Created: latency-svc-6p5nb
Mar 24 07:39:45.292: INFO: Created: latency-svc-6yplw
Mar 24 07:39:45.298: INFO: Got endpoints: latency-svc-6p5nb [82.157825ms]
Mar 24 07:39:45.299: INFO: Got endpoints: latency-svc-6yplw [83.511022ms]
Mar 24 07:39:45.301: INFO: Created: latency-svc-5o4xt
Mar 24 07:39:45.306: INFO: Got endpoints: latency-svc-5o4xt [90.369467ms]
Mar 24 07:39:45.317: INFO: Created: latency-svc-dhf9c
Mar 24 07:39:45.324: INFO: Created: latency-svc-nvgb3
Mar 24 07:39:45.328: INFO: Created: latency-svc-z0jtp
Mar 24 07:39:45.348: INFO: Created: latency-svc-c9jml
Mar 24 07:39:45.351: INFO: Created: latency-svc-b408y
Mar 24 07:39:45.357: INFO: Created: latency-svc-udzbh
Mar 24 07:39:45.363: INFO: Created: latency-svc-hcwys
Mar 24 07:39:45.373: INFO: Created: latency-svc-jol6s
Mar 24 07:39:45.378: INFO: Created: latency-svc-9c00u
Mar 24 07:39:45.381: INFO: Created: latency-svc-i37or
Mar 24 07:39:45.385: INFO: Created: latency-svc-1xkg0
Mar 24 07:39:45.387: INFO: Created: latency-svc-axu6h
Mar 24 07:39:45.392: INFO: Created: latency-svc-ossxv
Mar 24 07:39:45.397: INFO: Created: latency-svc-q83a9
Mar 24 07:39:45.402: INFO: Created: latency-svc-4hye7
Mar 24 07:39:45.529: INFO: Got endpoints: latency-svc-dhf9c [313.296011ms]
Mar 24 07:39:45.544: INFO: Created: latency-svc-9mx3t
Mar 24 07:39:45.679: INFO: Got endpoints: latency-svc-nvgb3 [463.460692ms]
Mar 24 07:39:45.699: INFO: Created: latency-svc-epfc1
Mar 24 07:39:45.730: INFO: Got endpoints: latency-svc-z0jtp [514.24902ms]
Mar 24 07:39:45.746: INFO: Created: latency-svc-k9mk7
Mar 24 07:39:45.830: INFO: Got endpoints: latency-svc-c9jml [615.129981ms]
Mar 24 07:39:45.851: INFO: Created: latency-svc-8pfjq
Mar 24 07:39:45.880: INFO: Got endpoints: latency-svc-b408y [578.916563ms]
Mar 24 07:39:45.897: INFO: Created: latency-svc-pje5k
Mar 24 07:39:46.029: INFO: Got endpoints: latency-svc-udzbh [813.196471ms]
Mar 24 07:39:46.046: INFO: Created: latency-svc-5es2p
Mar 24 07:39:46.180: INFO: Got endpoints: latency-svc-hcwys [858.718012ms]
Mar 24 07:39:46.197: INFO: Created: latency-svc-ixsos
Mar 24 07:39:46.230: INFO: Got endpoints: latency-svc-jol6s [899.279566ms]
Mar 24 07:39:46.262: INFO: Created: latency-svc-bu4f7
Mar 24 07:39:46.330: INFO: Got endpoints: latency-svc-9c00u [996.987716ms]
Mar 24 07:39:46.347: INFO: Created: latency-svc-4g51w
Mar 24 07:39:46.380: INFO: Got endpoints: latency-svc-i37or [1.045021876s]
Mar 24 07:39:46.395: INFO: Created: latency-svc-pfa4b
Mar 24 07:39:46.529: INFO: Got endpoints: latency-svc-1xkg0 [1.191462067s]
Mar 24 07:39:46.547: INFO: Created: latency-svc-3ggan
Mar 24 07:39:46.680: INFO: Got endpoints: latency-svc-axu6h [1.339998205s]
Mar 24 07:39:46.697: INFO: Created: latency-svc-1bik4
Mar 24 07:39:46.730: INFO: Got endpoints: latency-svc-ossxv [1.387823484s]
Mar 24 07:39:46.746: INFO: Created: latency-svc-8s8zi
Mar 24 07:39:46.829: INFO: Got endpoints: latency-svc-q83a9 [1.476439642s]
Mar 24 07:39:46.866: INFO: Created: latency-svc-ieibc
Mar 24 07:39:46.879: INFO: Got endpoints: latency-svc-4hye7 [1.513738155s]
Mar 24 07:39:46.894: INFO: Created: latency-svc-auvvn
Mar 24 07:39:47.179: INFO: Got endpoints: latency-svc-9mx3t [1.641470829s]
Mar 24 07:39:47.193: INFO: Created: latency-svc-3qgzk
Mar 24 07:39:47.280: INFO: Got endpoints: latency-svc-epfc1 [1.589175075s]
Mar 24 07:39:47.297: INFO: Created: latency-svc-ui98z
Mar 24 07:39:47.380: INFO: Got endpoints: latency-svc-k9mk7 [1.63895138s]
Mar 24 07:39:47.397: INFO: Created: latency-svc-4dqqm
Mar 24 07:39:47.579: INFO: Got endpoints: latency-svc-8pfjq [1.735677663s]
Mar 24 07:39:47.595: INFO: Created: latency-svc-qu7vm
Mar 24 07:39:47.730: INFO: Got endpoints: latency-svc-pje5k [1.840298264s]
Mar 24 07:39:47.753: INFO: Created: latency-svc-urbco
Mar 24 07:39:47.880: INFO: Got endpoints: latency-svc-5es2p [1.841317657s]
Mar 24 07:39:47.894: INFO: Created: latency-svc-u222n
Mar 24 07:39:48.030: INFO: Got endpoints: latency-svc-ixsos [1.841838241s]
Mar 24 07:39:48.042: INFO: Created: latency-svc-lasn7
Mar 24 07:39:48.179: INFO: Got endpoints: latency-svc-bu4f7 [1.927271599s]
Mar 24 07:39:48.208: INFO: Created: latency-svc-ji036
Mar 24 07:39:48.330: INFO: Got endpoints: latency-svc-4g51w [1.989362191s]
Mar 24 07:39:48.346: INFO: Created: latency-svc-e6m3x
Mar 24 07:39:48.479: INFO: Got endpoints: latency-svc-pfa4b [2.088855912s]
Mar 24 07:39:48.494: INFO: Created: latency-svc-9q8yh
Mar 24 07:39:48.629: INFO: Got endpoints: latency-svc-3ggan [2.087146877s]
Mar 24 07:39:48.644: INFO: Created: latency-svc-hvavx
Mar 24 07:39:48.780: INFO: Got endpoints: latency-svc-1bik4 [2.087749423s]
Mar 24 07:39:48.800: INFO: Created: latency-svc-stguh
Mar 24 07:39:48.929: INFO: Got endpoints: latency-svc-8s8zi [2.187796257s]
Mar 24 07:39:48.949: INFO: Created: latency-svc-nd65s
Mar 24 07:39:49.079: INFO: Got endpoints: latency-svc-ieibc [2.219909794s]
Mar 24 07:39:49.093: INFO: Created: latency-svc-cmop3
Mar 24 07:39:49.229: INFO: Got endpoints: latency-svc-auvvn [2.339933869s]
Mar 24 07:39:49.243: INFO: Created: latency-svc-k5ytj
Mar 24 07:39:49.379: INFO: Got endpoints: latency-svc-3qgzk [2.190410746s]
Mar 24 07:39:49.398: INFO: Created: latency-svc-f5fj6
Mar 24 07:39:49.529: INFO: Got endpoints: latency-svc-ui98z [2.239847661s]
Mar 24 07:39:49.548: INFO: Created: latency-svc-72mxk
Mar 24 07:39:49.680: INFO: Got endpoints: latency-svc-4dqqm [2.290449315s]
Mar 24 07:39:49.695: INFO: Created: latency-svc-7j8vp
Mar 24 07:39:49.830: INFO: Got endpoints: latency-svc-qu7vm [2.239551011s]
Mar 24 07:39:49.844: INFO: Created: latency-svc-14lf3
Mar 24 07:39:49.979: INFO: Got endpoints: latency-svc-urbco [2.230985843s]
Mar 24 07:39:49.995: INFO: Created: latency-svc-a8l15
Mar 24 07:39:50.130: INFO: Got endpoints: latency-svc-u222n [2.241289765s]
Mar 24 07:39:50.143: INFO: Created: latency-svc-i96x2
Mar 24 07:39:50.279: INFO: Got endpoints: latency-svc-lasn7 [2.241539486s]
Mar 24 07:39:50.293: INFO: Created: latency-svc-rbe7h
Mar 24 07:39:50.430: INFO: Got endpoints: latency-svc-ji036 [2.232219376s]
Mar 24 07:39:50.444: INFO: Created: latency-svc-r0ypu
Mar 24 07:39:50.579: INFO: Got endpoints: latency-svc-e6m3x [2.238622773s]
Mar 24 07:39:50.603: INFO: Created: latency-svc-2ga5x
Mar 24 07:39:50.730: INFO: Got endpoints: latency-svc-9q8yh [2.241184761s]
Mar 24 07:39:50.743: INFO: Created: latency-svc-g8qzi
Mar 24 07:39:50.880: INFO: Got endpoints: latency-svc-hvavx [2.241212408s]
Mar 24 07:39:50.894: INFO: Created: latency-svc-p52w7
Mar 24 07:39:51.029: INFO: Got endpoints: latency-svc-stguh [2.240843413s]
Mar 24 07:39:51.045: INFO: Created: latency-svc-ihryk
Mar 24 07:39:51.180: INFO: Got endpoints: latency-svc-nd65s [2.238922666s]
Mar 24 07:39:51.195: INFO: Created: latency-svc-40mb5
Mar 24 07:39:51.330: INFO: Got endpoints: latency-svc-cmop3 [2.241228124s]
Mar 24 07:39:51.351: INFO: Created: latency-svc-jsaqs
Mar 24 07:39:51.479: INFO: Got endpoints: latency-svc-k5ytj [2.241899332s]
Mar 24 07:39:51.494: INFO: Created: latency-svc-pzy7b
Mar 24 07:39:51.629: INFO: Got endpoints: latency-svc-f5fj6 [2.235707239s]
Mar 24 07:39:51.654: INFO: Created: latency-svc-4rvpj
Mar 24 07:39:51.779: INFO: Got endpoints: latency-svc-72mxk [2.238298618s]
Mar 24 07:39:51.795: INFO: Created: latency-svc-ygtsj
Mar 24 07:39:51.930: INFO: Got endpoints: latency-svc-7j8vp [2.241083507s]
Mar 24 07:39:51.948: INFO: Created: latency-svc-30j2y
Mar 24 07:39:52.079: INFO: Got endpoints: latency-svc-14lf3 [2.241653815s]
Mar 24 07:39:52.093: INFO: Created: latency-svc-8azcy
Mar 24 07:39:52.230: INFO: Got endpoints: latency-svc-a8l15 [2.242798356s]
Mar 24 07:39:52.264: INFO: Created: latency-svc-0jwjf
Mar 24 07:39:52.379: INFO: Got endpoints: latency-svc-i96x2 [2.241527055s]
Mar 24 07:39:52.396: INFO: Created: latency-svc-hdhsp
Mar 24 07:39:52.530: INFO: Got endpoints: latency-svc-rbe7h [2.242547455s]
Mar 24 07:39:52.543: INFO: Created: latency-svc-fg0j2
Mar 24 07:39:52.680: INFO: Got endpoints: latency-svc-r0ypu [2.24245432s]
Mar 24 07:39:52.696: INFO: Created: latency-svc-5ikcy
Mar 24 07:39:52.830: INFO: Got endpoints: latency-svc-2ga5x [2.233183094s]
Mar 24 07:39:52.842: INFO: Created: latency-svc-9bv3q
Mar 24 07:39:52.979: INFO: Got endpoints: latency-svc-g8qzi [2.241402372s]
Mar 24 07:39:52.994: INFO: Created: latency-svc-r365a
Mar 24 07:39:53.129: INFO: Got endpoints: latency-svc-p52w7 [2.242096348s]
Mar 24 07:39:53.143: INFO: Created: latency-svc-buudj
Mar 24 07:39:53.279: INFO: Got endpoints: latency-svc-ihryk [2.241178623s]
Mar 24 07:39:53.294: INFO: Created: latency-svc-0ppbj
Mar 24 07:39:53.430: INFO: Got endpoints: latency-svc-40mb5 [2.239912762s]
Mar 24 07:39:53.449: INFO: Created: latency-svc-so91p
Mar 24 07:39:53.580: INFO: Got endpoints: latency-svc-jsaqs [2.23570455s]
Mar 24 07:39:53.594: INFO: Created: latency-svc-kihl6
Mar 24 07:39:53.730: INFO: Got endpoints: latency-svc-pzy7b [2.242504347s]
Mar 24 07:39:53.743: INFO: Created: latency-svc-iw2c4
Mar 24 07:39:53.879: INFO: Got endpoints: latency-svc-4rvpj [2.23688825s]
Mar 24 07:39:53.896: INFO: Created: latency-svc-bvsfk
Mar 24 07:39:54.029: INFO: Got endpoints: latency-svc-ygtsj [2.238017647s]
Mar 24 07:39:54.042: INFO: Created: latency-svc-5yos5
Mar 24 07:39:54.180: INFO: Got endpoints: latency-svc-30j2y [2.240288972s]
Mar 24 07:39:54.235: INFO: Created: latency-svc-5vu3w
Mar 24 07:39:54.330: INFO: Got endpoints: latency-svc-8azcy [2.242254678s]
Mar 24 07:39:54.348: INFO: Created: latency-svc-x2baf
Mar 24 07:39:54.479: INFO: Got endpoints: latency-svc-0jwjf [2.225654269s]
Mar 24 07:39:54.499: INFO: Created: latency-svc-t51uy
Mar 24 07:39:54.629: INFO: Got endpoints: latency-svc-hdhsp [2.239479319s]
Mar 24 07:39:54.644: INFO: Created: latency-svc-iccnq
Mar 24 07:39:54.779: INFO: Got endpoints: latency-svc-fg0j2 [2.242566385s]
Mar 24 07:39:54.793: INFO: Created: latency-svc-m0knz
Mar 24 07:39:54.930: INFO: Got endpoints: latency-svc-5ikcy [2.239087288s]
Mar 24 07:39:54.949: INFO: Created: latency-svc-us5wa
Mar 24 07:39:55.079: INFO: Got endpoints: latency-svc-9bv3q [2.242329541s]
Mar 24 07:39:55.096: INFO: Created: latency-svc-ckac0
Mar 24 07:39:55.229: INFO: Got endpoints: latency-svc-r365a [2.240244331s]
Mar 24 07:39:55.249: INFO: Created: latency-svc-73zfq
Mar 24 07:39:55.379: INFO: Got endpoints: latency-svc-buudj [2.239453718s]
Mar 24 07:39:55.396: INFO: Created: latency-svc-mtj8b
Mar 24 07:39:55.530: INFO: Got endpoints: latency-svc-0ppbj [2.241027348s]
Mar 24 07:39:55.546: INFO: Created: latency-svc-ngtx6
Mar 24 07:39:55.680: INFO: Got endpoints: latency-svc-so91p [2.236759035s]
Mar 24 07:39:55.697: INFO: Created: latency-svc-7tnx7
Mar 24 07:39:55.830: INFO: Got endpoints: latency-svc-kihl6 [2.240136046s]
Mar 24 07:39:55.845: INFO: Created: latency-svc-7sksu
Mar 24 07:39:55.979: INFO: Got endpoints: latency-svc-iw2c4 [2.24079602s]
Mar 24 07:39:55.994: INFO: Created: latency-svc-dxq5p
Mar 24 07:39:56.130: INFO: Got endpoints: latency-svc-bvsfk [2.240228461s]
Mar 24 07:39:56.153: INFO: Created: latency-svc-zm9gp
Mar 24 07:39:56.280: INFO: Got endpoints: latency-svc-5yos5 [2.241868446s]
Mar 24 07:39:56.301: INFO: Created: latency-svc-qje97
Mar 24 07:39:56.430: INFO: Got endpoints: latency-svc-5vu3w [2.23337143s]
Mar 24 07:39:56.451: INFO: Created: latency-svc-vnvb3
Mar 24 07:39:56.579: INFO: Got endpoints: latency-svc-x2baf [2.239926065s]
Mar 24 07:39:56.593: INFO: Created: latency-svc-ghp1o
Mar 24 07:39:56.729: INFO: Got endpoints: latency-svc-t51uy [2.24169897s]
Mar 24 07:39:56.745: INFO: Created: latency-svc-taj1f
Mar 24 07:39:56.880: INFO: Got endpoints: latency-svc-iccnq [2.241264241s]
Mar 24 07:39:56.894: INFO: Created: latency-svc-723o1
Mar 24 07:39:57.029: INFO: Got endpoints: latency-svc-m0knz [2.240932816s]
Mar 24 07:39:57.043: INFO: Created: latency-svc-uhm5j
Mar 24 07:39:57.180: INFO: Got endpoints: latency-svc-us5wa [2.240165614s]
Mar 24 07:39:57.192: INFO: Created: latency-svc-56vc6
Mar 24 07:39:57.330: INFO: Got endpoints: latency-svc-ckac0 [2.241644849s]
Mar 24 07:39:57.355: INFO: Created: latency-svc-md1dc
Mar 24 07:39:57.480: INFO: Got endpoints: latency-svc-73zfq [2.236707961s]
Mar 24 07:39:57.495: INFO: Created: latency-svc-uk4vc
Mar 24 07:39:57.629: INFO: Got endpoints: latency-svc-mtj8b [2.240468343s]
Mar 24 07:39:57.647: INFO: Created: latency-svc-k1d16
Mar 24 07:39:57.780: INFO: Got endpoints: latency-svc-ngtx6 [2.241153129s]
Mar 24 07:39:57.794: INFO: Created: latency-svc-hhml3
Mar 24 07:39:57.938: INFO: Got endpoints: latency-svc-7tnx7 [2.245925973s]
Mar 24 07:39:57.956: INFO: Created: latency-svc-wp6gh
Mar 24 07:39:58.080: INFO: Got endpoints: latency-svc-7sksu [2.239257074s]
Mar 24 07:39:58.101: INFO: Created: latency-svc-8ejvy
Mar 24 07:39:58.230: INFO: Got endpoints: latency-svc-dxq5p [2.242449243s]
Mar 24 07:39:58.252: INFO: Created: latency-svc-trpgs
Mar 24 07:39:58.379: INFO: Got endpoints: latency-svc-zm9gp [2.231199244s]
Mar 24 07:39:58.391: INFO: Created: latency-svc-q5xwh
Mar 24 07:39:58.530: INFO: Got endpoints: latency-svc-qje97 [2.235128051s]
Mar 24 07:39:58.545: INFO: Created: latency-svc-hdfo4
Mar 24 07:39:58.680: INFO: Got endpoints: latency-svc-vnvb3 [2.237333789s]
Mar 24 07:39:58.698: INFO: Created: latency-svc-eqj6t
Mar 24 07:39:58.832: INFO: Got endpoints: latency-svc-ghp1o [2.242973291s]
Mar 24 07:39:58.846: INFO: Created: latency-svc-h1qmz
Mar 24 07:39:58.980: INFO: Got endpoints: latency-svc-taj1f [2.240310109s]
Mar 24 07:39:58.995: INFO: Created: latency-svc-zw0tg
Mar 24 07:39:59.130: INFO: Got endpoints: latency-svc-723o1 [2.24187827s]
Mar 24 07:39:59.151: INFO: Created: latency-svc-ctidt
Mar 24 07:39:59.280: INFO: Got endpoints: latency-svc-uhm5j [2.24077027s]
Mar 24 07:39:59.298: INFO: Created: latency-svc-3tm5p
Mar 24 07:39:59.430: INFO: Got endpoints: latency-svc-56vc6 [2.241934657s]
Mar 24 07:39:59.444: INFO: Created: latency-svc-cmh5y
Mar 24 07:39:59.580: INFO: Got endpoints: latency-svc-md1dc [2.233676462s]
Mar 24 07:39:59.593: INFO: Created: latency-svc-s0v8l
Mar 24 07:39:59.730: INFO: Got endpoints: latency-svc-uk4vc [2.241742414s]
Mar 24 07:39:59.746: INFO: Created: latency-svc-yrmuo
Mar 24 07:39:59.880: INFO: Got endpoints: latency-svc-k1d16 [2.238378226s]
Mar 24 07:39:59.896: INFO: Created: latency-svc-61io4
Mar 24 07:40:00.030: INFO: Got endpoints: latency-svc-hhml3 [2.241423273s]
Mar 24 07:40:00.046: INFO: Created: latency-svc-afn3e
Mar 24 07:40:00.180: INFO: Got endpoints: latency-svc-wp6gh [2.228084659s]
Mar 24 07:40:00.199: INFO: Created: latency-svc-o6tvh
Mar 24 07:40:00.329: INFO: Got endpoints: latency-svc-8ejvy [2.232732609s]
Mar 24 07:40:00.346: INFO: Created: latency-svc-39avu
Mar 24 07:40:00.480: INFO: Got endpoints: latency-svc-trpgs [2.236772015s]
Mar 24 07:40:00.498: INFO: Created: latency-svc-qlc69
Mar 24 07:40:00.630: INFO: Got endpoints: latency-svc-q5xwh [2.242472669s]
Mar 24 07:40:00.648: INFO: Created: latency-svc-tvl9q
Mar 24 07:40:00.779: INFO: Got endpoints: latency-svc-hdfo4 [2.241007181s]
Mar 24 07:40:00.793: INFO: Created: latency-svc-n255r
Mar 24 07:40:00.929: INFO: Got endpoints: latency-svc-eqj6t [2.240012043s]
Mar 24 07:40:00.952: INFO: Created: latency-svc-x42xw
Mar 24 07:40:01.079: INFO: Got endpoints: latency-svc-h1qmz [2.239305176s]
Mar 24 07:40:01.103: INFO: Created: latency-svc-bm4dq
Mar 24 07:40:01.230: INFO: Got endpoints: latency-svc-zw0tg [2.240718092s]
Mar 24 07:40:01.243: INFO: Created: latency-svc-ozauz
Mar 24 07:40:01.379: INFO: Got endpoints: latency-svc-ctidt [2.237038866s]
Mar 24 07:40:01.401: INFO: Created: latency-svc-bl87e
Mar 24 07:40:01.529: INFO: Got endpoints: latency-svc-3tm5p [2.235863612s]
Mar 24 07:40:01.543: INFO: Created: latency-svc-oq13v
Mar 24 07:40:01.691: INFO: Got endpoints: latency-svc-cmh5y [2.250648195s]
Mar 24 07:40:01.716: INFO: Created: latency-svc-8qw34
Mar 24 07:40:01.830: INFO: Got endpoints: latency-svc-s0v8l [2.240642464s]
Mar 24 07:40:01.845: INFO: Created: latency-svc-mkgxo
Mar 24 07:40:01.980: INFO: Got endpoints: latency-svc-yrmuo [2.241139927s]
Mar 24 07:40:02.004: INFO: Created: latency-svc-ya3u2
Mar 24 07:40:02.131: INFO: Got endpoints: latency-svc-61io4 [2.241693748s]
Mar 24 07:40:02.149: INFO: Created: latency-svc-y0tzc
Mar 24 07:40:02.280: INFO: Got endpoints: latency-svc-afn3e [2.240640447s]
Mar 24 07:40:02.294: INFO: Created: latency-svc-i9xs5
Mar 24 07:40:02.430: INFO: Got endpoints: latency-svc-o6tvh [2.240472391s]
Mar 24 07:40:02.447: INFO: Created: latency-svc-19c02
Mar 24 07:40:02.579: INFO: Got endpoints: latency-svc-39avu [2.240310044s]
Mar 24 07:40:02.599: INFO: Created: latency-svc-ohuc4
Mar 24 07:40:02.730: INFO: Got endpoints: latency-svc-qlc69 [2.237854363s]
Mar 24 07:40:02.746: INFO: Created: latency-svc-ovuwm
Mar 24 07:40:02.879: INFO: Got endpoints: latency-svc-tvl9q [2.240389018s]
Mar 24 07:40:02.893: INFO: Created: latency-svc-cm095
Mar 24 07:40:03.029: INFO: Got endpoints: latency-svc-n255r [2.239996128s]
Mar 24 07:40:03.053: INFO: Created: latency-svc-o1enw
Mar 24 07:40:03.180: INFO: Got endpoints: latency-svc-x42xw [2.230500591s]
Mar 24 07:40:03.196: INFO: Created: latency-svc-zwu7p
Mar 24 07:40:03.330: INFO: Got endpoints: latency-svc-bm4dq [2.238366616s]
Mar 24 07:40:03.345: INFO: Created: latency-svc-d3r7v
Mar 24 07:40:03.480: INFO: Got endpoints: latency-svc-ozauz [2.24107975s]
Mar 24 07:40:03.494: INFO: Created: latency-svc-pt42f
Mar 24 07:40:03.630: INFO: Got endpoints: latency-svc-bl87e [2.235352474s]
Mar 24 07:40:03.641: INFO: Created: latency-svc-9oqd0
Mar 24 07:40:03.780: INFO: Got endpoints: latency-svc-oq13v [2.242037302s]
Mar 24 07:40:03.793: INFO: Created: latency-svc-472sb
Mar 24 07:40:03.930: INFO: Got endpoints: latency-svc-8qw34 [2.220230524s]
Mar 24 07:40:03.946: INFO: Created: latency-svc-wtguq
Mar 24 07:40:04.080: INFO: Got endpoints: latency-svc-mkgxo [2.240295589s]
Mar 24 07:40:04.092: INFO: Created: latency-svc-face8
Mar 24 07:40:04.230: INFO: Got endpoints: latency-svc-ya3u2 [2.236024123s]
Mar 24 07:40:04.249: INFO: Created: latency-svc-w3j6x
Mar 24 07:40:04.380: INFO: Got endpoints: latency-svc-y0tzc [2.238914703s]
Mar 24 07:40:04.398: INFO: Created: latency-svc-13lwz
Mar 24 07:40:04.530: INFO: Got endpoints: latency-svc-i9xs5 [2.242170578s]
Mar 24 07:40:04.545: INFO: Created: latency-svc-8r1iu
Mar 24 07:40:04.680: INFO: Got endpoints: latency-svc-19c02 [2.240094876s]
Mar 24 07:40:04.696: INFO: Created: latency-svc-covyp
Mar 24 07:40:04.829: INFO: Got endpoints: latency-svc-ohuc4 [2.235541258s]
Mar 24 07:40:04.846: INFO: Created: latency-svc-0b3r8
Mar 24 07:40:04.980: INFO: Got endpoints: latency-svc-ovuwm [2.240117196s]
Mar 24 07:40:04.995: INFO: Created: latency-svc-pp3sb
Mar 24 07:40:05.130: INFO: Got endpoints: latency-svc-cm095 [2.241587264s]
Mar 24 07:40:05.144: INFO: Created: latency-svc-5r1f2
Mar 24 07:40:05.280: INFO: Got endpoints: latency-svc-o1enw [2.232348923s]
Mar 24 07:40:05.294: INFO: Created: latency-svc-njclt
Mar 24 07:40:05.430: INFO: Got endpoints: latency-svc-zwu7p [2.240335662s]
Mar 24 07:40:05.445: INFO: Created: latency-svc-12rut
Mar 24 07:40:05.580: INFO: Got endpoints: latency-svc-d3r7v [2.24239348s]
Mar 24 07:40:05.597: INFO: Created: latency-svc-0i6eg
Mar 24 07:40:05.730: INFO: Got endpoints: latency-svc-pt42f [2.24073654s]
Mar 24 07:40:05.746: INFO: Created: latency-svc-l7tle
Mar 24 07:40:05.880: INFO: Got endpoints: latency-svc-9oqd0 [2.242702118s]
Mar 24 07:40:05.900: INFO: Created: latency-svc-eeqfj
Mar 24 07:40:06.030: INFO: Got endpoints: latency-svc-472sb [2.242319337s]
Mar 24 07:40:06.043: INFO: Created: latency-svc-4z3oy
Mar 24 07:40:06.230: INFO: Got endpoints: latency-svc-wtguq [2.289304348s]
Mar 24 07:40:06.249: INFO: Created: latency-svc-ztt8x
Mar 24 07:40:06.379: INFO: Got endpoints: latency-svc-face8 [2.29157365s]
Mar 24 07:40:06.392: INFO: Created: latency-svc-csn1i
Mar 24 07:40:06.529: INFO: Got endpoints: latency-svc-w3j6x [2.290999319s]
Mar 24 07:40:06.544: INFO: Created: latency-svc-itd3y
Mar 24 07:40:06.680: INFO: Got endpoints: latency-svc-13lwz [2.288541837s]
Mar 24 07:40:06.697: INFO: Created: latency-svc-6kgyb
Mar 24 07:40:06.830: INFO: Got endpoints: latency-svc-8r1iu [2.291342507s]
Mar 24 07:40:06.850: INFO: Created: latency-svc-3vc0q
Mar 24 07:40:06.979: INFO: Got endpoints: latency-svc-covyp [2.28885084s]
Mar 24 07:40:06.995: INFO: Created: latency-svc-59rg8
Mar 24 07:40:07.031: INFO: Got endpoints: latency-svc-0b3r8 [2.190910734s]
Mar 24 07:40:07.049: INFO: Created: latency-svc-qts38
Mar 24 07:40:07.130: INFO: Got endpoints: latency-svc-njclt [1.840168528s]
Mar 24 07:40:07.150: INFO: Created: latency-svc-vcz5n
Mar 24 07:40:07.184: INFO: Got endpoints: latency-svc-5r1f2 [2.047232186s]
Mar 24 07:40:07.197: INFO: Created: latency-svc-y74zy
Mar 24 07:40:07.330: INFO: Got endpoints: latency-svc-0i6eg [1.738174546s]
Mar 24 07:40:07.343: INFO: Created: latency-svc-mea78
Mar 24 07:40:07.479: INFO: Got endpoints: latency-svc-pp3sb [2.490113869s]
Mar 24 07:40:07.498: INFO: Created: latency-svc-7icld
Mar 24 07:40:07.531: INFO: Got endpoints: latency-svc-12rut [2.092703799s]
Mar 24 07:40:07.545: INFO: Created: latency-svc-yjj85
Mar 24 07:40:07.630: INFO: Got endpoints: latency-svc-l7tle [1.890993621s]
Mar 24 07:40:07.642: INFO: Created: latency-svc-navgc
Mar 24 07:40:07.682: INFO: Got endpoints: latency-svc-4z3oy [1.644877552s]
Mar 24 07:40:07.700: INFO: Created: latency-svc-3xlor
Mar 24 07:40:07.830: INFO: Got endpoints: latency-svc-eeqfj [1.939514036s]
Mar 24 07:40:07.850: INFO: Created: latency-svc-uy2oj
Mar 24 07:40:08.529: INFO: Got endpoints: latency-svc-ztt8x [2.285968519s]
Mar 24 07:40:08.541: INFO: Created: latency-svc-x3fke
Mar 24 07:40:08.630: INFO: Got endpoints: latency-svc-csn1i [2.242074579s]
Mar 24 07:40:08.645: INFO: Created: latency-svc-o3sgq
Mar 24 07:40:08.729: INFO: Got endpoints: latency-svc-itd3y [2.190308553s]
Mar 24 07:40:08.749: INFO: Created: latency-svc-rq1fb
Mar 24 07:40:08.929: INFO: Got endpoints: latency-svc-6kgyb [2.239448084s]
Mar 24 07:40:08.945: INFO: Created: latency-svc-xxd2j
Mar 24 07:40:09.079: INFO: Got endpoints: latency-svc-3vc0q [2.236893159s]
Mar 24 07:40:09.093: INFO: Created: latency-svc-atntc
Mar 24 07:40:09.230: INFO: Got endpoints: latency-svc-59rg8 [2.240285676s]
Mar 24 07:40:09.274: INFO: Created: latency-svc-5a5jj
Mar 24 07:40:09.380: INFO: Got endpoints: latency-svc-qts38 [2.339949313s]
Mar 24 07:40:09.407: INFO: Created: latency-svc-s3lfs
Mar 24 07:40:09.529: INFO: Got endpoints: latency-svc-vcz5n [2.385468966s]
Mar 24 07:40:09.546: INFO: Created: latency-svc-cp9um
Mar 24 07:40:09.679: INFO: Got endpoints: latency-svc-y74zy [2.488210879s]
Mar 24 07:40:09.701: INFO: Created: latency-svc-5pauv
Mar 24 07:40:09.829: INFO: Got endpoints: latency-svc-mea78 [2.491633302s]
Mar 24 07:40:09.846: INFO: Created: latency-svc-z4p5s
Mar 24 07:40:09.979: INFO: Got endpoints: latency-svc-7icld [2.489433322s]
Mar 24 07:40:09.992: INFO: Created: latency-svc-sds84
Mar 24 07:40:10.130: INFO: Got endpoints: latency-svc-yjj85 [2.588665821s]
Mar 24 07:40:10.145: INFO: Created: latency-svc-y6hpk
Mar 24 07:40:10.280: INFO: Got endpoints: latency-svc-navgc [2.642154451s]
Mar 24 07:40:10.300: INFO: Created: latency-svc-bi1y6
Mar 24 07:40:10.430: INFO: Got endpoints: latency-svc-3xlor [2.737662091s]
Mar 24 07:40:10.444: INFO: Created: latency-svc-v327c
Mar 24 07:40:10.580: INFO: Got endpoints: latency-svc-uy2oj [2.740610042s]
Mar 24 07:40:10.593: INFO: Created: latency-svc-ndt5h
Mar 24 07:40:10.729: INFO: Got endpoints: latency-svc-x3fke [2.192132292s]
Mar 24 07:40:10.748: INFO: Created: latency-svc-bwna8
Mar 24 07:40:10.880: INFO: Got endpoints: latency-svc-o3sgq [2.239128086s]
Mar 24 07:40:10.896: INFO: Created: latency-svc-34gx6
Mar 24 07:40:11.030: INFO: Got endpoints: latency-svc-rq1fb [2.286375691s]
Mar 24 07:40:11.045: INFO: Created: latency-svc-xlmeo
Mar 24 07:40:11.180: INFO: Got endpoints: latency-svc-xxd2j [2.240340007s]
Mar 24 07:40:11.330: INFO: Got endpoints: latency-svc-atntc [2.242294361s]
Mar 24 07:40:11.480: INFO: Got endpoints: latency-svc-5a5jj [2.224443638s]
Mar 24 07:40:11.629: INFO: Got endpoints: latency-svc-s3lfs [2.239783077s]
Mar 24 07:40:11.780: INFO: Got endpoints: latency-svc-cp9um [2.239566151s]
Mar 24 07:40:11.929: INFO: Got endpoints: latency-svc-5pauv [2.234375691s]
Mar 24 07:40:12.080: INFO: Got endpoints: latency-svc-z4p5s [2.239905313s]
Mar 24 07:40:12.231: INFO: Got endpoints: latency-svc-sds84 [2.243128413s]
Mar 24 07:40:12.380: INFO: Got endpoints: latency-svc-y6hpk [2.240650436s]
Mar 24 07:40:12.530: INFO: Got endpoints: latency-svc-bi1y6 [2.23513392s]
Mar 24 07:40:12.679: INFO: Got endpoints: latency-svc-v327c [2.241630769s]
Mar 24 07:40:12.829: INFO: Got endpoints: latency-svc-ndt5h [2.241434146s]
Mar 24 07:40:12.979: INFO: Got endpoints: latency-svc-bwna8 [2.23686209s]
Mar 24 07:40:13.129: INFO: Got endpoints: latency-svc-34gx6 [2.241211841s]
Mar 24 07:40:13.279: INFO: Got endpoints: latency-svc-xlmeo [2.239993725s]
STEP: deleting replication controller svc-latency-rc in namespace e2e-tests-svc-latency-2z02q
Mar 24 07:40:15.303: INFO: Deleting RC svc-latency-rc took: 2.013465283s
Mar 24 07:40:15.303: INFO: Terminating RC svc-latency-rc pods took: 80.693µs
Mar 24 07:40:15.303: INFO: Latencies: [19.143498ms 26.34928ms 41.012563ms 42.260653ms 47.634956ms 53.640366ms 68.189892ms 82.157825ms 83.511022ms 90.369467ms 313.296011ms 463.460692ms 514.24902ms 578.916563ms 615.129981ms 813.196471ms 858.718012ms 899.279566ms 996.987716ms 1.045021876s 1.191462067s 1.339998205s 1.387823484s 1.476439642s 1.513738155s 1.589175075s 1.63895138s 1.641470829s 1.644877552s 1.735677663s 1.738174546s 1.840168528s 1.840298264s 1.841317657s 1.841838241s 1.890993621s 1.927271599s 1.939514036s 1.989362191s 2.047232186s 2.087146877s 2.087749423s 2.088855912s 2.092703799s 2.187796257s 2.190308553s 2.190410746s 2.190910734s 2.192132292s 2.219909794s 2.220230524s 2.224443638s 2.225654269s 2.228084659s 2.230500591s 2.230985843s 2.231199244s 2.232219376s 2.232348923s 2.232732609s 2.233183094s 2.23337143s 2.233676462s 2.234375691s 2.235128051s 2.23513392s 2.235352474s 2.235541258s 2.23570455s 2.235707239s 2.235863612s 2.236024123s 2.236707961s 2.236759035s 2.236772015s 2.23686209s 2.23688825s 2.236893159s 2.237038866s 2.237333789s 2.237854363s 2.238017647s 2.238298618s 2.238366616s 2.238378226s 2.238622773s 2.238914703s 2.238922666s 2.239087288s 2.239128086s 2.239257074s 2.239305176s 2.239448084s 2.239453718s 2.239479319s 2.239551011s 2.239566151s 2.239783077s 2.239847661s 2.239905313s 2.239912762s 2.239926065s 2.239993725s 2.239996128s 2.240012043s 2.240094876s 2.240117196s 2.240136046s 2.240165614s 2.240228461s 2.240244331s 2.240285676s 2.240288972s 2.240295589s 2.240310044s 2.240310109s 2.240335662s 2.240340007s 2.240389018s 2.240468343s 2.240472391s 2.240640447s 2.240642464s 2.240650436s 2.240718092s 2.24073654s 2.24077027s 2.24079602s 2.240843413s 2.240932816s 2.241007181s 2.241027348s 2.24107975s 2.241083507s 2.241139927s 2.241153129s 2.241178623s 2.241184761s 2.241211841s 2.241212408s 2.241228124s 2.241264241s 2.241289765s 2.241402372s 2.241423273s 2.241434146s 2.241527055s 2.241539486s 2.241587264s 2.241630769s 2.241644849s 2.241653815s 2.241693748s 2.24169897s 2.241742414s 2.241868446s 2.24187827s 2.241899332s 2.241934657s 2.242037302s 2.242074579s 2.242096348s 2.242170578s 2.242254678s 2.242294361s 2.242319337s 2.242329541s 2.24239348s 2.242449243s 2.24245432s 2.242472669s 2.242504347s 2.242547455s 2.242566385s 2.242702118s 2.242798356s 2.242973291s 2.243128413s 2.245925973s 2.250648195s 2.285968519s 2.286375691s 2.288541837s 2.28885084s 2.289304348s 2.290449315s 2.290999319s 2.291342507s 2.29157365s 2.339933869s 2.339949313s 2.385468966s 2.488210879s 2.489433322s 2.490113869s 2.491633302s 2.588665821s 2.642154451s 2.737662091s 2.740610042s]
Mar 24 07:40:15.303: INFO: 50 %ile: 2.239912762s
Mar 24 07:40:15.303: INFO: 90 %ile: 2.285968519s
Mar 24 07:40:15.303: INFO: 99 %ile: 2.737662091s
Mar 24 07:40:15.303: INFO: Total sample count: 200
[AfterEach] [k8s.io] Service endpoints latency
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:40:15.303: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-2z02q" for this suite.

• [SLOW TEST:58.228 seconds]
[k8s.io] Service endpoints latency
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should not be very high [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:114
------------------------------
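Annotation: the percentile lines above are plain order statistics over the 200 samples. A minimal Go sketch of that computation, assuming the floor(p*N) index rule the output implies; the e2e framework's own helper may round differently:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile picks the k-th smallest sample where k = floor(p*N),
// clamped to [1, N]; the 50 %ile of 200 samples is then the 100th
// smallest value, matching the summary printed above.
func percentile(sorted []time.Duration, p float64) time.Duration {
	k := int(p * float64(len(sorted)))
	if k < 1 {
		k = 1
	}
	if k > len(sorted) {
		k = len(sorted)
	}
	return sorted[k-1]
}

func main() {
	// Three hypothetical samples; the real run above had 200.
	samples := []time.Duration{
		19143498 * time.Nanosecond,
		2239912762 * time.Nanosecond,
		2737662091 * time.Nanosecond,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []float64{0.50, 0.90, 0.99} {
		fmt.Printf("%g %%ile: %v\n", p*100, percentile(samples, p))
	}
}
```

------------------------------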
[k8s.io] PreStop 
  should call prestop when killing a pod [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:166
[BeforeEach] [k8s.io] PreStop
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:40:35.311: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:40:35.313: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-prestop-9qnyd
Mar 24 07:40:35.315: INFO: Get service account default in ns e2e-tests-prestop-9qnyd failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:40:37.316: INFO: Service account default in ns e2e-tests-prestop-9qnyd with secrets found. (2.003070005s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:40:37.316: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-prestop-9qnyd
Mar 24 07:40:37.317: INFO: Service account default in ns e2e-tests-prestop-9qnyd with secrets found. (911.336µs)
[It] should call prestop when killing a pod [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:166
STEP: Creating server pod server in namespace e2e-tests-prestop-9qnyd
STEP: Waiting for pods to come up.
Mar 24 07:40:37.321: INFO: Waiting up to 5m0s for pod server status to be running
Mar 24 07:40:37.323: INFO: Waiting for pod server in namespace 'e2e-tests-prestop-9qnyd' status to be 'running'(found phase: "Pending", readiness: false) (2.072274ms elapsed)
Mar 24 07:40:39.325: INFO: Waiting for pod server in namespace 'e2e-tests-prestop-9qnyd' status to be 'running'(found phase: "Pending", readiness: false) (2.003928323s elapsed)
Mar 24 07:40:41.326: INFO: Waiting for pod server in namespace 'e2e-tests-prestop-9qnyd' status to be 'running'(found phase: "Pending", readiness: false) (4.005715722s elapsed)
Mar 24 07:40:43.328: INFO: Found pod 'server' on node '127.0.0.1'
STEP: Creating tester pod tester in namespace e2e-tests-prestop-9qnyd
Mar 24 07:40:43.332: INFO: Waiting up to 5m0s for pod tester status to be running
Mar 24 07:40:43.334: INFO: Waiting for pod tester in namespace 'e2e-tests-prestop-9qnyd' status to be 'running'(found phase: "Pending", readiness: false) (2.452292ms elapsed)
Mar 24 07:40:45.337: INFO: Waiting for pod tester in namespace 'e2e-tests-prestop-9qnyd' status to be 'running'(found phase: "Pending", readiness: false) (2.004600628s elapsed)
Mar 24 07:40:47.338: INFO: Waiting for pod tester in namespace 'e2e-tests-prestop-9qnyd' status to be 'running'(found phase: "Pending", readiness: false) (4.006518996s elapsed)
Mar 24 07:40:49.340: INFO: Found pod 'tester' on node '127.0.0.1'
STEP: Deleting pre-stop pod
Mar 24 07:40:54.355: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
		"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] PreStop
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:40:54.360: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-9qnyd" for this suite.

• [SLOW TEST:39.066 seconds]
[k8s.io] PreStop
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should call prestop when killing a pod [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:166
------------------------------
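Annotation: the "Received": {"prestop": 1} payload above is the server counting one hit from the deleted pod's pre-stop hook. The hook is declared on the container's lifecycle; a minimal sketch with current corev1 types (the handler type was api.Handler at this vintage and is LifecycleHandler in modern client-go, and the real test may use an exec handler rather than HTTP):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// preStopPod sketches a pod whose container, when the pod is deleted,
// calls /prestop on a peer before the kubelet sends SIGTERM; that call
// is what the server's "Received" map above counts. Image and port are
// illustrative, not read from this run.
func preStopPod(serverHost string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tester"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "tester",
				Image: "gcr.io/google_containers/nettest:1.7",
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.LifecycleHandler{
						HTTPGet: &corev1.HTTPGetAction{
							Host: serverHost,
							Path: "/prestop",
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
}

func main() { fmt.Println(preStopPod("10.0.0.2").Name) } // placeholder server IP
```

------------------------------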
SS
------------------------------
[k8s.io] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is Never [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1070
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:41:14.377: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:41:14.379: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-aoxzg
Mar 24 07:41:14.380: INFO: Get service account default in ns e2e-tests-kubectl-aoxzg failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:41:16.382: INFO: Service account default in ns e2e-tests-kubectl-aoxzg with secrets found. (2.002752349s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:41:16.382: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-aoxzg
Mar 24 07:41:16.383: INFO: Service account default in ns e2e-tests-kubectl-aoxzg with secrets found. (891.938µs)
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:122
[BeforeEach] [k8s.io] Kubectl run job
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1028
[It] should create a job from an image when restart is Never [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1070
STEP: running the image gcr.io/google_containers/nginx:1.7.9
Mar 24 07:41:16.383: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=Never --image=gcr.io/google_containers/nginx:1.7.9 --namespace=e2e-tests-kubectl-aoxzg'
Mar 24 07:41:16.413: INFO: stderr: ""
Mar 24 07:41:16.413: INFO: stdout: "job \"e2e-test-nginx-job\" created"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1032
Mar 24 07:41:16.420: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-aoxzg'
Mar 24 07:41:18.486: INFO: stderr: ""
Mar 24 07:41:18.486: INFO: stdout: "job \"e2e-test-nginx-job\" deleted"
[AfterEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:41:18.486: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-aoxzg" for this suite.

• [SLOW TEST:24.119 seconds]
[k8s.io] Kubectl client
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  [k8s.io] Kubectl run job
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
    should create a job from an image when restart is Never [Conformance]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1070
------------------------------
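Annotation: at this vintage, kubectl run with --restart=Never generated a Job, hence the job "e2e-test-nginx-job" created line above; recent kubectl creates a bare Pod instead. Roughly the object it synthesized, sketched with current API types:

```go
package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nginxJob approximates what this kubectl generated: a Job whose pod
// template never restarts, wrapping the requested image.
func nginxJob() *batchv1.Job {
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:  "e2e-test-nginx-job",
						Image: "gcr.io/google_containers/nginx:1.7.9",
					}},
				},
			},
		},
	}
}

func main() { fmt.Println(nginxJob().Name) }
```

------------------------------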
[k8s.io] Downward API volume 
  should provide podname only [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:42
[BeforeEach] [k8s.io] Downward API volume
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:41:38.496: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:41:38.499: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-sxtjs
Mar 24 07:41:38.500: INFO: Get service account default in ns e2e-tests-downward-api-sxtjs failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:41:40.501: INFO: Service account default in ns e2e-tests-downward-api-sxtjs with secrets found. (2.002304727s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:41:40.501: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-sxtjs
Mar 24 07:41:40.502: INFO: Service account default in ns e2e-tests-downward-api-sxtjs with secrets found. (915.232µs)
[It] should provide podname only [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:42
STEP: Creating a pod to test downward API volume plugin
Mar 24 07:41:40.505: INFO: Waiting up to 5m0s for pod downwardapi-volume-d8d584eb-f193-11e5-8186-064a4ed57913 status to be success or failure
Mar 24 07:41:40.508: INFO: No Status.Info for container 'client-container' in pod 'downwardapi-volume-d8d584eb-f193-11e5-8186-064a4ed57913' yet
Mar 24 07:41:40.508: INFO: Waiting for pod downwardapi-volume-d8d584eb-f193-11e5-8186-064a4ed57913 in namespace 'e2e-tests-downward-api-sxtjs' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.254512ms elapsed)
Mar 24 07:41:42.510: INFO: Nil State.Terminated for container 'client-container' in pod 'downwardapi-volume-d8d584eb-f193-11e5-8186-064a4ed57913' in namespace 'e2e-tests-downward-api-sxtjs' so far
Mar 24 07:41:42.510: INFO: Waiting for pod downwardapi-volume-d8d584eb-f193-11e5-8186-064a4ed57913 in namespace 'e2e-tests-downward-api-sxtjs' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.004360528s elapsed)
Mar 24 07:41:44.512: INFO: Nil State.Terminated for container 'client-container' in pod 'downwardapi-volume-d8d584eb-f193-11e5-8186-064a4ed57913' in namespace 'e2e-tests-downward-api-sxtjs' so far
Mar 24 07:41:44.512: INFO: Waiting for pod downwardapi-volume-d8d584eb-f193-11e5-8186-064a4ed57913 in namespace 'e2e-tests-downward-api-sxtjs' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.006473714s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod downwardapi-volume-d8d584eb-f193-11e5-8186-064a4ed57913 container client-container: <nil>
STEP: Successfully fetched pod logs:content of file "/etc/podname": downwardapi-volume-d8d584eb-f193-11e5-8186-064a4ed57913


[AfterEach] [k8s.io] Downward API volume
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:41:46.524: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-sxtjs" for this suite.

• [SLOW TEST:13.050 seconds]
[k8s.io] Downward API volume
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should provide podname only [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:42
------------------------------
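Annotation: the fetched "/etc/podname" content above comes from a downward API volume projecting metadata.name into a file. A minimal sketch of that wiring with current corev1 types; the file path mirrors the log, the volume name is an assumption:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// downwardAPIVolume projects the pod's own name into /etc/podname,
// which is what the test's client-container then writes to its logs.
func downwardAPIVolume() (corev1.Volume, corev1.VolumeMount) {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "podname",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
				}},
			},
		},
	}
	mount := corev1.VolumeMount{Name: "podinfo", MountPath: "/etc"}
	return vol, mount
}

func main() {
	v, m := downwardAPIVolume()
	fmt.Println(v.Name, m.MountPath)
}
```

------------------------------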
SS
------------------------------
[k8s.io] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1094
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:41:51.547: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:41:51.555: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-wyqhc
Mar 24 07:41:51.560: INFO: Get service account default in ns e2e-tests-kubectl-wyqhc failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:41:53.562: INFO: Service account default in ns e2e-tests-kubectl-wyqhc with secrets found. (2.006807811s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:41:53.562: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-wyqhc
Mar 24 07:41:53.562: INFO: Service account default in ns e2e-tests-kubectl-wyqhc with secrets found. (819.397µs)
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:122
[It] should create a job from an image, then delete the job [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1094
STEP: executing a command with run --rm and attach with stdin
Mar 24 07:41:53.563: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config --namespace= run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --restart=Never --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Mar 24 07:42:02.206: INFO: stderr: ""
Mar 24 07:42:02.206: INFO: stdout: "Waiting for pod default/e2e-test-rm-busybox-job-i98rz to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-i98rz to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-i98rz to be running, status is Pending, pod ready: false\nabcd1234stdin closed\njob \"e2e-test-rm-busybox-job\" deleted"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:42:02.207: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wyqhc" for this suite.

• [SLOW TEST:15.667 seconds]
[k8s.io] Kubectl client
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  [k8s.io] Kubectl run --rm job
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
    should create a job from an image, then delete the job [Conformance]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1094
------------------------------
[k8s.io] EmptyDir volumes 
  should support (non-root,0644,default) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:104
[BeforeEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:42:07.214: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:42:07.216: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-lmtys
Mar 24 07:42:07.217: INFO: Get service account default in ns e2e-tests-emptydir-lmtys failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:42:09.219: INFO: Service account default in ns e2e-tests-emptydir-lmtys with secrets found. (2.003288646s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:42:09.219: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-lmtys
Mar 24 07:42:09.220: INFO: Service account default in ns e2e-tests-emptydir-lmtys with secrets found. (777.282µs)
[It] should support (non-root,0644,default) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:104
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 24 07:42:09.221: FAIL: Failed to create pod: pods "pod-e9f38054-f193-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden
[AfterEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
STEP: Collecting events from namespace "e2e-tests-emptydir-lmtys".
Mar 24 07:42:09.230: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Mar 24 07:42:09.230: INFO: 
Mar 24 07:42:09.232: INFO: 
Logging node info for node 127.0.0.1
Mar 24 07:42:09.234: INFO: Node Info: &{{ } {127.0.0.1   /api/v1/nodes/127.0.0.1 3601f765-f192-11e5-a8cf-064a4ed57913 1854 0 2016-03-24 07:29:57 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1  false} {map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}] map[memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI} cpu:{2.000 DecimalSI}]  [{OutOfDisk False 2016-03-24 07:42:02 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-24 07:42:02 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {0af04d3c78a943ae8f3cc26602e374f2 EC286296-F1A1-3AB3-2979-B08D0FE5BF4A fdfc86c6-a0ae-41b4-8ab8-29b0b8fd1e8b 3.10.0-229.20.1.el7.x86_64 CentOS Linux 7 (Core) hyper://0.5.0 v1.3.0-alpha.0.854+d9aa83a25cdc21 v1.3.0-alpha.0.854+d9aa83a25cdc21} []}}
Mar 24 07:42:09.234: INFO: 
Logging kubelet events for node 127.0.0.1
Mar 24 07:42:09.235: INFO: 
Logging pods the kubelet thinks are on node 127.0.0.1
Mar 24 07:42:09.255: INFO: 
Latency metrics for node 127.0.0.1
Mar 24 07:42:09.255: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:20.112881s}
Mar 24 07:42:09.255: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:10.202715s}
Mar 24 07:42:09.255: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-lmtys" for this suite.

• Failure [7.048 seconds]
[k8s.io] EmptyDir volumes
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should support (non-root,0644,default) [Conformance] [It]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:104

  Mar 24 07:42:09.221: Failed to create pod: pods "pod-e9f38054-f193-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden

  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685
------------------------------
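Annotation: this failure, and the identical one in the next spec, is not an emptyDir bug. The emptyDir conformance tests attach pod-level SELinux options (so the volume medium gets relabeled), and this cluster rejects any pod that sets them, failing creation outright. The field in question, with an illustrative level:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// withSELinux sets the pod-level SELinux options the emptyDir tests
// use; an admission layer that forbids
// pod.Spec.SecurityContext.SELinuxOptions then rejects the pod at
// creation time, matching the "is forbidden" error above.
func withSELinux(pod *corev1.Pod) *corev1.Pod {
	pod.Spec.SecurityContext = &corev1.PodSecurityContext{
		SELinuxOptions: &corev1.SELinuxOptions{Level: "s0:c0,c1"}, // illustrative level
	}
	return pod
}

func main() { fmt.Println(withSELinux(&corev1.Pod{}).Spec.SecurityContext != nil) }
```

------------------------------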
[k8s.io] EmptyDir volumes 
  should support (non-root,0666,default) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:108
[BeforeEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:42:14.262: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:42:14.264: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-l5244
Mar 24 07:42:14.265: INFO: Get service account default in ns e2e-tests-emptydir-l5244 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:42:16.267: INFO: Service account default in ns e2e-tests-emptydir-l5244 with secrets found. (2.002207367s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:42:16.267: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-l5244
Mar 24 07:42:16.267: INFO: Service account default in ns e2e-tests-emptydir-l5244 with secrets found. (849.397µs)
[It] should support (non-root,0666,default) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:108
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 24 07:42:16.269: FAIL: Failed to create pod: pods "pod-ee26dde9-f193-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden
[AfterEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
STEP: Collecting events from namespace "e2e-tests-emptydir-l5244".
Mar 24 07:42:16.275: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Mar 24 07:42:16.275: INFO: 
Mar 24 07:42:16.276: INFO: 
Logging node info for node 127.0.0.1
Mar 24 07:42:16.277: INFO: Node Info: &{{ } {127.0.0.1   /api/v1/nodes/127.0.0.1 3601f765-f192-11e5-a8cf-064a4ed57913 1865 0 2016-03-24 07:29:57 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1  false} {map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}]  [{OutOfDisk False 2016-03-24 07:42:12 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-24 07:42:12 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {0af04d3c78a943ae8f3cc26602e374f2 EC286296-F1A1-3AB3-2979-B08D0FE5BF4A fdfc86c6-a0ae-41b4-8ab8-29b0b8fd1e8b 3.10.0-229.20.1.el7.x86_64 CentOS Linux 7 (Core) hyper://0.5.0 v1.3.0-alpha.0.854+d9aa83a25cdc21 v1.3.0-alpha.0.854+d9aa83a25cdc21} []}}
Mar 24 07:42:16.277: INFO: 
Logging kubelet events for node 127.0.0.1
Mar 24 07:42:16.278: INFO: 
Logging pods the kubelet thinks are on node 127.0.0.1
Mar 24 07:42:16.298: INFO: 
Latency metrics for node 127.0.0.1
Mar 24 07:42:16.298: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:20.112881s}
Mar 24 07:42:16.299: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:10.202715s}
Mar 24 07:42:16.299: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-l5244" for this suite.

• Failure [7.049 seconds]
[k8s.io] EmptyDir volumes
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should support (non-root,0666,default) [Conformance] [It]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:108

  Mar 24 07:42:16.269: Failed to create pod: pods "pod-ee26dde9-f193-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden

  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685
------------------------------
SS
------------------------------
[k8s.io] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:742
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:42:21.311: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:42:21.313: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-ua3x4
Mar 24 07:42:21.314: INFO: Get service account default in ns e2e-tests-kubectl-ua3x4 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:42:23.316: INFO: Service account default in ns e2e-tests-kubectl-ua3x4 with secrets found. (2.003021916s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:42:23.316: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-ua3x4
Mar 24 07:42:23.317: INFO: Service account default in ns e2e-tests-kubectl-ua3x4 with secrets found. (844.43µs)
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:122
[It] should create services for rc [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:742
STEP: creating Redis RC
Mar 24 07:42:23.317: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config create -f /root/src/k8s.io/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-ua3x4'
Mar 24 07:42:23.368: INFO: stderr: ""
Mar 24 07:42:23.368: INFO: stdout: "replicationcontroller \"redis-master\" created"
Mar 24 07:42:25.372: INFO: Waiting up to 5m0s for pod redis-master-s90t4 status to be running
Mar 24 07:42:25.373: INFO: Waiting for pod redis-master-s90t4 in namespace 'e2e-tests-kubectl-ua3x4' status to be 'running'(found phase: "Pending", readiness: false) (1.249732ms elapsed)
Mar 24 07:42:27.375: INFO: Waiting for pod redis-master-s90t4 in namespace 'e2e-tests-kubectl-ua3x4' status to be 'running'(found phase: "Pending", readiness: false) (2.003370619s elapsed)
Mar 24 07:42:29.377: INFO: Waiting for pod redis-master-s90t4 in namespace 'e2e-tests-kubectl-ua3x4' status to be 'running'(found phase: "Pending", readiness: false) (4.005334373s elapsed)
Mar 24 07:42:31.379: INFO: Found pod 'redis-master-s90t4' on node '127.0.0.1'
Mar 24 07:42:31.379: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config log redis-master-s90t4 redis-master --namespace=e2e-tests-kubectl-ua3x4'
Mar 24 07:42:31.404: INFO: stderr: ""
Mar 24 07:42:31.404: INFO: stdout: "2:C 24 Mar 07:42:27.927 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf\n2:M 24 Mar 07:42:27.946 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.\n2:M 24 Mar 07:42:27.949 # Redis can't set maximum open files to 10032 because of OS error: Operation not permitted.\n2:M 24 Mar 07:42:27.951 # Current maximum open files is 4096. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.\n                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.0.7 (00000000/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 2\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n2:M 24 Mar 07:42:27.985 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n2:M 24 Mar 07:42:27.986 # Server started, Redis version 3.0.7\n2:M 24 Mar 07:42:27.986 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.\n2:M 24 Mar 07:42:27.993 * The server is now ready to accept connections on port 6379"
STEP: exposing RC
Mar 24 07:42:31.404: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-ua3x4'
Mar 24 07:42:31.427: INFO: stderr: ""
Mar 24 07:42:31.427: INFO: stdout: "service \"rm2\" exposed"
Mar 24 07:42:31.429: INFO: Service rm2 in namespace e2e-tests-kubectl-ua3x4 found.
STEP: exposing service
Mar 24 07:42:33.432: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-ua3x4'
Mar 24 07:42:33.452: INFO: stderr: ""
Mar 24 07:42:33.452: INFO: stdout: "service \"rm3\" exposed"
Mar 24 07:42:33.455: INFO: Service rm3 in namespace e2e-tests-kubectl-ua3x4 found.
[AfterEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:42:35.457: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ua3x4" for this suite.

• [SLOW TEST:34.156 seconds]
[k8s.io] Kubectl client
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  [k8s.io] Kubectl expose
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
    should create services for rc [Conformance]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:742
------------------------------
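Annotation: kubectl expose only synthesizes Service objects: rm2 forwards service port 1234 to the redis container's 6379, and rm3 then chains off rm2 the same way. A sketch of rm2 with current API types; the selector labels are assumed from the guestbook example, not read from this run:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// rm2Service approximates what `kubectl expose rc redis-master
// --name=rm2 --port=1234 --target-port=6379` built: the selector is
// copied from the RC (labels here are an assumption), and traffic to
// the service port is forwarded to the container port.
func rm2Service() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "rm2"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "redis", "role": "master"},
			Ports: []corev1.ServicePort{{
				Port:       1234,
				TargetPort: intstr.FromInt(6379),
			}},
		},
	}
}

func main() { fmt.Println(rm2Service().Name) }
```

------------------------------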
S
------------------------------
[k8s.io] Proxy version v1 
  should proxy to cadvisor using proxy subresource [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:64
[BeforeEach] version v1
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:42:55.467: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:42:55.470: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-l1jkj
Mar 24 07:42:55.472: INFO: Service account default in ns e2e-tests-proxy-l1jkj had 0 secrets, ignoring for 2s: <nil>
Mar 24 07:42:57.473: INFO: Service account default in ns e2e-tests-proxy-l1jkj with secrets found. (2.00309472s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:42:57.474: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-l1jkj
Mar 24 07:42:57.474: INFO: Service account default in ns e2e-tests-proxy-l1jkj with secrets found. (895.171µs)
[It] should proxy to cadvisor using proxy subresource [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:64
Mar 24 07:42:57.480: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 3.79244ms)
Mar 24 07:42:57.482: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 2.211366ms)
Mar 24 07:42:57.484: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 1.995362ms)
Mar 24 07:42:57.486: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 2.045574ms)
Mar 24 07:42:57.488: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 2.099818ms)
Mar 24 07:42:57.490: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 2.145695ms)
Mar 24 07:42:57.492: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 1.992684ms)
Mar 24 07:42:57.494: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 2.057552ms)
Mar 24 07:42:57.496: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 1.982456ms)
Mar 24 07:42:57.498: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 1.980201ms)
Mar 24 07:42:57.500: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 2.002597ms)
Mar 24 07:42:57.502: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 1.985604ms)
Mar 24 07:42:57.504: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 2.046936ms)
Mar 24 07:42:57.506: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 2.071932ms)
Mar 24 07:42:57.508: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 1.993065ms)
Mar 24 07:42:57.510: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 2.031549ms)
Mar 24 07:42:57.513: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 2.047211ms)
Mar 24 07:42:57.515: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 1.947282ms)
Mar 24 07:42:57.517: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 2.037937ms)
Mar 24 07:42:57.519: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 1.918641ms)
[AfterEach] version v1
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:42:57.519: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-l1jkj" for this suite.

• [SLOW TEST:7.058 seconds]
[k8s.io] Proxy
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  version v1
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:39
    should proxy to cadvisor using proxy subresource [Conformance]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:64
------------------------------
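Annotation: each request above goes through the apiserver's node proxy subresource rather than to cAdvisor on :4194 directly. The equivalent standalone fetch, assuming the same insecure 127.0.0.1:8080 endpoint the kubectl invocations in this log use:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// Fetches cAdvisor's container page via the apiserver's node proxy
// subresource, i.e. the URL logged above. Assumes the insecure
// apiserver endpoint is reachable without credentials, as the kubectl
// invocations elsewhere in this log suggest.
func main() {
	const url = "http://127.0.0.1:8080/api/v1/nodes/127.0.0.1:4194/proxy/containers/"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.StatusCode, len(body), "bytes")
}
```

------------------------------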
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:572
[BeforeEach] [k8s.io] Pods
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:43:02.525: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:43:02.528: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-1n2uq
Mar 24 07:43:02.529: INFO: Get service account default in ns e2e-tests-pods-1n2uq failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:43:04.531: INFO: Service account default in ns e2e-tests-pods-1n2uq with secrets found. (2.003098692s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:43:04.531: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-1n2uq
Mar 24 07:43:04.532: INFO: Service account default in ns e2e-tests-pods-1n2uq with secrets found. (930.888µs)
[It] should allow activeDeadlineSeconds to be updated [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:572
STEP: creating the pod
STEP: submitting the pod to kubernetes
Mar 24 07:43:04.537: INFO: Waiting up to 5m0s for pod pod-update-activedeadlineseconds-0aeb7336-f194-11e5-8186-064a4ed57913 status to be running
Mar 24 07:43:04.538: INFO: Waiting for pod pod-update-activedeadlineseconds-0aeb7336-f194-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-1n2uq' status to be 'running'(found phase: "Pending", readiness: false) (1.336848ms elapsed)
Mar 24 07:43:06.541: INFO: Waiting for pod pod-update-activedeadlineseconds-0aeb7336-f194-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-1n2uq' status to be 'running'(found phase: "Pending", readiness: false) (2.003766481s elapsed)
Mar 24 07:43:08.543: INFO: Waiting for pod pod-update-activedeadlineseconds-0aeb7336-f194-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-1n2uq' status to be 'running'(found phase: "Pending", readiness: false) (4.005835795s elapsed)
Mar 24 07:43:10.545: INFO: Waiting for pod pod-update-activedeadlineseconds-0aeb7336-f194-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-1n2uq' status to be 'running'(found phase: "Pending", readiness: false) (6.007854268s elapsed)
Mar 24 07:43:12.547: INFO: Found pod 'pod-update-activedeadlineseconds-0aeb7336-f194-11e5-8186-064a4ed57913' on node '127.0.0.1'
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 24 07:43:13.050: INFO: Conflicting update to pod, re-get and re-update: pods "pod-update-activedeadlineseconds-0aeb7336-f194-11e5-8186-064a4ed57913" cannot be updated: the object has been modified; please apply your changes to the latest version and try again
STEP: updating the pod
Mar 24 07:43:13.556: INFO: Successfully updated pod
Mar 24 07:43:13.556: INFO: Waiting up to 5m0s for pod pod-update-activedeadlineseconds-0aeb7336-f194-11e5-8186-064a4ed57913 status to be terminated due to deadline exceeded
Mar 24 07:43:13.558: INFO: Waiting for pod pod-update-activedeadlineseconds-0aeb7336-f194-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-1n2uq' status to be 'terminated due to deadline exceeded'(found phase: "Running", readiness: true) (1.97807ms elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Pods
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:43:15.642: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-1n2uq" for this suite.

• [SLOW TEST:18.142 seconds]
[k8s.io] Pods
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should allow activeDeadlineSeconds to be updated [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:572
------------------------------
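Annotation: the "Conflicting update to pod, re-get and re-update" line above is optimistic concurrency working as intended: the first update carried a stale resourceVersion, so the test refetched and retried. The same pattern with modern client-go, whose signatures differ from the vintage under test:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	corev1client "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/util/retry"
)

// setActiveDeadline updates pod.Spec.ActiveDeadlineSeconds with a
// get-modify-update loop: on a 409 conflict (as in the log above) the
// closure re-runs against the freshly fetched object.
func setActiveDeadline(ctx context.Context, pods corev1client.PodInterface, name string, secs int64) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := pods.Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		pod.Spec.ActiveDeadlineSeconds = &secs
		_, err = pods.Update(ctx, pod, metav1.UpdateOptions{})
		return err
	})
}

func main() { /* building a real clientset from kubeconfig omitted */ }
```

------------------------------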
SSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:50
[BeforeEach] [k8s.io] Docker Containers
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:43:20.667: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:43:20.669: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-9eh7l
Mar 24 07:43:20.670: INFO: Get service account default in ns e2e-tests-containers-9eh7l failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:43:22.672: INFO: Service account default in ns e2e-tests-containers-9eh7l with secrets found. (2.002676819s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:43:22.672: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-9eh7l
Mar 24 07:43:22.673: INFO: Service account default in ns e2e-tests-containers-9eh7l with secrets found. (927.896µs)
[BeforeEach] [k8s.io] Docker Containers
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:35
[It] should be able to override the image's default arguments (docker cmd) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:50
STEP: Creating a pod to test override arguments
Mar 24 07:43:22.678: INFO: Waiting up to 5m0s for pod client-containers-15bb8b62-f194-11e5-8186-064a4ed57913 status to be success or failure
Mar 24 07:43:22.682: INFO: No Status.Info for container 'test-container' in pod 'client-containers-15bb8b62-f194-11e5-8186-064a4ed57913' yet
Mar 24 07:43:22.682: INFO: Waiting for pod client-containers-15bb8b62-f194-11e5-8186-064a4ed57913 in namespace 'e2e-tests-containers-9eh7l' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.441457ms elapsed)
Mar 24 07:43:24.684: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-15bb8b62-f194-11e5-8186-064a4ed57913' in namespace 'e2e-tests-containers-9eh7l' so far
Mar 24 07:43:24.684: INFO: Waiting for pod client-containers-15bb8b62-f194-11e5-8186-064a4ed57913 in namespace 'e2e-tests-containers-9eh7l' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.006413282s elapsed)
Mar 24 07:43:26.687: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-15bb8b62-f194-11e5-8186-064a4ed57913' in namespace 'e2e-tests-containers-9eh7l' so far
Mar 24 07:43:26.687: INFO: Waiting for pod client-containers-15bb8b62-f194-11e5-8186-064a4ed57913 in namespace 'e2e-tests-containers-9eh7l' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.008890308s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod client-containers-15bb8b62-f194-11e5-8186-064a4ed57913 container test-container: <nil>
STEP: Successfully fetched pod logs:[/ep override arguments]


[AfterEach] [k8s.io] Docker Containers
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:43:28.700: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-9eh7l" for this suite.

• [SLOW TEST:13.043 seconds]
[k8s.io] Docker Containers
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should be able to override the image's default arguments (docker cmd) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:50
------------------------------
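Annotation: the fetched logs "[/ep override arguments]" are the image's entrypoint /ep followed by the overridden arguments. Kubernetes maps a container's Command onto Docker's ENTRYPOINT and Args onto CMD; this spec sets only Args, so the entrypoint survives. A sketch, with the fixture image name assumed:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// Setting only Args keeps the image ENTRYPOINT (/ep here) and replaces
// its CMD, which is exactly what the fetched logs show. Setting
// Command as well would replace the entrypoint too, as the later
// "override the image's default command and arguments" spec does.
func overrideArgs() corev1.Container {
	return corev1.Container{
		Name:  "test-container",
		Image: "gcr.io/google_containers/eptest:0.1", // illustrative fixture image
		Args:  []string{"override", "arguments"},
	}
}

func main() { fmt.Println(overrideArgs().Args) }
```

------------------------------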
S
------------------------------
[k8s.io] Pods 
  should get a host IP [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:226
[BeforeEach] [k8s.io] Pods
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:43:33.710: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:43:33.712: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-wh97k
Mar 24 07:43:33.713: INFO: Get service account default in ns e2e-tests-pods-wh97k failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:43:35.715: INFO: Service account default in ns e2e-tests-pods-wh97k with secrets found. (2.002702097s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:43:35.715: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-wh97k
Mar 24 07:43:35.716: INFO: Service account default in ns e2e-tests-pods-wh97k with secrets found. (828.083µs)
[It] should get a host IP [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:226
STEP: creating pod
STEP: ensuring that pod is running and has a hostIP
Mar 24 07:43:35.720: INFO: Waiting up to 5m0s for pod pod-hostip-1d81b8a1-f194-11e5-8186-064a4ed57913 status to be running
Mar 24 07:43:35.721: INFO: Waiting for pod pod-hostip-1d81b8a1-f194-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-wh97k' status to be 'running'(found phase: "Pending", readiness: false) (1.056787ms elapsed)
Mar 24 07:43:37.723: INFO: Waiting for pod pod-hostip-1d81b8a1-f194-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-wh97k' status to be 'running'(found phase: "Pending", readiness: false) (2.003288273s elapsed)
Mar 24 07:43:39.725: INFO: Waiting for pod pod-hostip-1d81b8a1-f194-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-wh97k' status to be 'running'(found phase: "Pending", readiness: false) (4.005298133s elapsed)
Mar 24 07:43:41.727: INFO: Found pod 'pod-hostip-1d81b8a1-f194-11e5-8186-064a4ed57913' on node '127.0.0.1'
Mar 24 07:43:41.728: INFO: Pod pod-hostip-1d81b8a1-f194-11e5-8186-064a4ed57913 has hostIP: 127.0.0.1
[AfterEach] [k8s.io] Pods
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:43:41.733: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-wh97k" for this suite.

• [SLOW TEST:13.033 seconds]
[k8s.io] Pods
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should get a host IP [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:226
------------------------------
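Annotation: this spec only asserts that a running pod's status carries the node address. The equivalent check, with modern client-go signatures:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hostIP fetches a pod and returns Status.HostIP, which the kubelet
// fills in once the pod is bound and running; in this single-node run
// it is always 127.0.0.1. Namespace and name are parameters here, not
// values baked into the test.
func hostIP(ctx context.Context, client kubernetes.Interface, ns, name string) (string, error) {
	pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	return pod.Status.HostIP, nil
}

func main() { fmt.Println("see hostIP(); clientset construction omitted") }
```

------------------------------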
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:71
[BeforeEach] [k8s.io] Docker Containers
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:43:46.743: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:43:46.745: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-6stwr
Mar 24 07:43:46.746: INFO: Get service account default in ns e2e-tests-containers-6stwr failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:43:48.748: INFO: Service account default in ns e2e-tests-containers-6stwr with secrets found. (2.002797126s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:43:48.748: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-6stwr
Mar 24 07:43:48.749: INFO: Service account default in ns e2e-tests-containers-6stwr with secrets found. (872.584µs)
[BeforeEach] [k8s.io] Docker Containers
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:35
[It] should be able to override the image's default command and arguments [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:71
STEP: Creating a pod to test override all
Mar 24 07:43:48.751: INFO: Waiting up to 5m0s for pod client-containers-25466639-f194-11e5-8186-064a4ed57913 status to be success or failure
Mar 24 07:43:48.754: INFO: No Status.Info for container 'test-container' in pod 'client-containers-25466639-f194-11e5-8186-064a4ed57913' yet
Mar 24 07:43:48.754: INFO: Waiting for pod client-containers-25466639-f194-11e5-8186-064a4ed57913 in namespace 'e2e-tests-containers-6stwr' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.650042ms elapsed)
Mar 24 07:43:50.756: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-25466639-f194-11e5-8186-064a4ed57913' in namespace 'e2e-tests-containers-6stwr' so far
Mar 24 07:43:50.756: INFO: Waiting for pod client-containers-25466639-f194-11e5-8186-064a4ed57913 in namespace 'e2e-tests-containers-6stwr' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.004736243s elapsed)
Mar 24 07:43:52.758: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-25466639-f194-11e5-8186-064a4ed57913' in namespace 'e2e-tests-containers-6stwr' so far
Mar 24 07:43:52.758: INFO: Waiting for pod client-containers-25466639-f194-11e5-8186-064a4ed57913 in namespace 'e2e-tests-containers-6stwr' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.006586778s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod client-containers-25466639-f194-11e5-8186-064a4ed57913 container test-container: <nil>
STEP: Successfully fetched pod logs:[/ep-2 override arguments]


[AfterEach] [k8s.io] Docker Containers
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:43:54.772: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-6stwr" for this suite.

• [SLOW TEST:13.036 seconds]
[k8s.io] Docker Containers
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should be able to override the image's default command and arguments [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:71
------------------------------
[k8s.io] ConfigMap 
  should be consumable from pods in volume with mappings [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:176
[BeforeEach] [k8s.io] ConfigMap
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:43:59.779: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:43:59.785: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-configmap-jlggo
Mar 24 07:43:59.786: INFO: Get service account default in ns e2e-tests-configmap-jlggo failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:44:01.788: INFO: Service account default in ns e2e-tests-configmap-jlggo with secrets found. (2.002531044s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:44:01.788: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-configmap-jlggo
Mar 24 07:44:01.789: INFO: Service account default in ns e2e-tests-configmap-jlggo with secrets found. (867.539µs)
[It] should be consumable from pods in volume with mappings [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:176
STEP: Creating configMap with name configmap-test-volume-map-2d0c275d-f194-11e5-8186-064a4ed57913
STEP: Creating a pod to test consume configMaps
Mar 24 07:44:01.793: INFO: Waiting up to 5m0s for pod pod-configmaps-2d0c6122-f194-11e5-8186-064a4ed57913 status to be success or failure
Mar 24 07:44:01.796: INFO: No Status.Info for container 'configmap-volume-test' in pod 'pod-configmaps-2d0c6122-f194-11e5-8186-064a4ed57913' yet
Mar 24 07:44:01.796: INFO: Waiting for pod pod-configmaps-2d0c6122-f194-11e5-8186-064a4ed57913 in namespace 'e2e-tests-configmap-jlggo' status to be 'success or failure'(found phase: "Pending", readiness: false) (3.01928ms elapsed)
Mar 24 07:44:03.799: INFO: Nil State.Terminated for container 'configmap-volume-test' in pod 'pod-configmaps-2d0c6122-f194-11e5-8186-064a4ed57913' in namespace 'e2e-tests-configmap-jlggo' so far
Mar 24 07:44:03.799: INFO: Waiting for pod pod-configmaps-2d0c6122-f194-11e5-8186-064a4ed57913 in namespace 'e2e-tests-configmap-jlggo' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.005647215s elapsed)
Mar 24 07:44:05.801: INFO: Nil State.Terminated for container 'configmap-volume-test' in pod 'pod-configmaps-2d0c6122-f194-11e5-8186-064a4ed57913' in namespace 'e2e-tests-configmap-jlggo' so far
Mar 24 07:44:05.801: INFO: Waiting for pod pod-configmaps-2d0c6122-f194-11e5-8186-064a4ed57913 in namespace 'e2e-tests-configmap-jlggo' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.007730409s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-configmaps-2d0c6122-f194-11e5-8186-064a4ed57913 container configmap-volume-test: <nil>
STEP: Successfully fetched pod logs:content of file "/etc/configmap-volume/path/to/data-2": value-2


STEP: Cleaning up the configMap
[AfterEach] [k8s.io] ConfigMap
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:44:07.818: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jlggo" for this suite.

• [SLOW TEST:13.046 seconds]
[k8s.io] ConfigMap
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should be consumable from pods in volume with mappings [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:176
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:137
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:44:12.825: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:44:12.827: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-xh5kr
Mar 24 07:44:12.828: INFO: Get service account default in ns e2e-tests-kubectl-xh5kr failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:44:14.829: INFO: Service account default in ns e2e-tests-kubectl-xh5kr with secrets found. (2.002794669s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:44:14.829: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-xh5kr
Mar 24 07:44:14.830: INFO: Service account default in ns e2e-tests-kubectl-xh5kr with secrets found. (790.174µs)
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:122
[BeforeEach] [k8s.io] Update Demo
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:130
[It] should create and stop a replication controller [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:137
STEP: creating a replication controller
Mar 24 07:44:14.830: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config create -f /root/src/k8s.io/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-xh5kr'
Mar 24 07:44:14.884: INFO: stderr: ""
Mar 24 07:44:14.884: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" created"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 24 07:44:14.884: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-xh5kr'
Mar 24 07:44:14.922: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:44:14.922: INFO: stdout: "update-demo-nautilus-pc7m2 update-demo-nautilus-uruk1"
Mar 24 07:44:14.922: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pc7m2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-xh5kr'
Mar 24 07:44:14.956: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:44:14.956: INFO: stdout: ""
Mar 24 07:44:14.956: INFO: update-demo-nautilus-pc7m2 is created but not running
Mar 24 07:44:19.956: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-xh5kr'
Mar 24 07:44:19.977: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:44:19.977: INFO: stdout: "update-demo-nautilus-pc7m2 update-demo-nautilus-uruk1"
Mar 24 07:44:19.977: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pc7m2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-xh5kr'
Mar 24 07:44:19.994: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:44:19.994: INFO: stdout: ""
Mar 24 07:44:19.994: INFO: update-demo-nautilus-pc7m2 is created but not running
Mar 24 07:44:24.994: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-xh5kr'
Mar 24 07:44:25.010: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:44:25.010: INFO: stdout: "update-demo-nautilus-pc7m2 update-demo-nautilus-uruk1"
Mar 24 07:44:25.010: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pc7m2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-xh5kr'
Mar 24 07:44:25.025: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:44:25.025: INFO: stdout: ""
Mar 24 07:44:25.025: INFO: update-demo-nautilus-pc7m2 is created but not running
Mar 24 07:44:30.025: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-xh5kr'
Mar 24 07:44:30.042: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:44:30.042: INFO: stdout: "update-demo-nautilus-pc7m2 update-demo-nautilus-uruk1"
Mar 24 07:44:30.042: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pc7m2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-xh5kr'
Mar 24 07:44:30.056: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:44:30.056: INFO: stdout: "true"
Mar 24 07:44:30.056: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pc7m2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-xh5kr'
Mar 24 07:44:30.070: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:44:30.070: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Mar 24 07:44:30.070: INFO: validating pod update-demo-nautilus-pc7m2
Mar 24 07:44:30.119: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 24 07:44:30.119: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 24 07:44:30.119: INFO: update-demo-nautilus-pc7m2 is verified up and running
Mar 24 07:44:30.119: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-uruk1 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-xh5kr'
Mar 24 07:44:30.135: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:44:30.135: INFO: stdout: "true"
Mar 24 07:44:30.135: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-uruk1 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-xh5kr'
Mar 24 07:44:30.150: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:44:30.150: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Mar 24 07:44:30.150: INFO: validating pod update-demo-nautilus-uruk1
Mar 24 07:44:30.199: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 24 07:44:30.199: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 24 07:44:30.199: INFO: update-demo-nautilus-uruk1 is verified up and running
STEP: using delete to clean up resources
Mar 24 07:44:30.199: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config delete --grace-period=0 -f /root/src/k8s.io/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-xh5kr'
Mar 24 07:44:32.228: INFO: stderr: ""
Mar 24 07:44:32.228: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" deleted"
Mar 24 07:44:32.228: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-xh5kr'
Mar 24 07:44:32.249: INFO: stderr: ""
Mar 24 07:44:32.249: INFO: stdout: ""
Mar 24 07:44:32.249: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-xh5kr -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 24 07:44:32.274: INFO: stderr: ""
Mar 24 07:44:32.274: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:44:32.274: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xh5kr" for this suite.

• [SLOW TEST:39.478 seconds]
[k8s.io] Kubectl client
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  [k8s.io] Update Demo
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
    should create and stop a replication controller [Conformance]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:137
------------------------------
[k8s.io] ConfigMap 
  should be consumable via environment variable [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:332
[BeforeEach] [k8s.io] ConfigMap
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:44:52.303: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:44:52.305: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-configmap-xirdp
Mar 24 07:44:52.306: INFO: Get service account default in ns e2e-tests-configmap-xirdp failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:44:54.307: INFO: Service account default in ns e2e-tests-configmap-xirdp with secrets found. (2.002640671s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:44:54.307: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-configmap-xirdp
Mar 24 07:44:54.308: INFO: Service account default in ns e2e-tests-configmap-xirdp with secrets found. (926.017µs)
[It] should be consumable via environment variable [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:332
STEP: Creating configMap e2e-tests-configmap-xirdp/configmap-test-4c59fd47-f194-11e5-8186-064a4ed57913
STEP: Creating a pod to test consume configMaps
Mar 24 07:44:54.313: INFO: Waiting up to 5m0s for pod pod-configmaps-4c5a3bd0-f194-11e5-8186-064a4ed57913 status to be success or failure
Mar 24 07:44:54.315: INFO: No Status.Info for container 'env-test' in pod 'pod-configmaps-4c5a3bd0-f194-11e5-8186-064a4ed57913' yet
Mar 24 07:44:54.315: INFO: Waiting for pod pod-configmaps-4c5a3bd0-f194-11e5-8186-064a4ed57913 in namespace 'e2e-tests-configmap-xirdp' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.98477ms elapsed)
Mar 24 07:44:56.318: INFO: Nil State.Terminated for container 'env-test' in pod 'pod-configmaps-4c5a3bd0-f194-11e5-8186-064a4ed57913' in namespace 'e2e-tests-configmap-xirdp' so far
Mar 24 07:44:56.318: INFO: Waiting for pod pod-configmaps-4c5a3bd0-f194-11e5-8186-064a4ed57913 in namespace 'e2e-tests-configmap-xirdp' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.00465861s elapsed)
Mar 24 07:44:58.320: INFO: Nil State.Terminated for container 'env-test' in pod 'pod-configmaps-4c5a3bd0-f194-11e5-8186-064a4ed57913' in namespace 'e2e-tests-configmap-xirdp' so far
Mar 24 07:44:58.320: INFO: Waiting for pod pod-configmaps-4c5a3bd0-f194-11e5-8186-064a4ed57913 in namespace 'e2e-tests-configmap-xirdp' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.007029589s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-configmaps-4c5a3bd0-f194-11e5-8186-064a4ed57913 container env-test: <nil>
STEP: Successfully fetched pod logs:KUBERNETES_PORT=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT=443
CONFIG_DATA_1=value-1
SHLVL=1
HOME=/
TERM=linux
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
PATH=/bin:/sbin/:/usr/bin/:/usr/sbin/
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
KUBERNETES_SERVICE_HOST=10.0.0.1


STEP: Cleaning up the configMap
[AfterEach] [k8s.io] ConfigMap
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:45:00.340: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-xirdp" for this suite.

• [SLOW TEST:13.044 seconds]
[k8s.io] ConfigMap
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should be consumable via environment variable [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:332
------------------------------
[k8s.io] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:181
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:45:05.347: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:45:05.349: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-8f38f
Mar 24 07:45:05.350: INFO: Get service account default in ns e2e-tests-kubectl-8f38f failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:45:07.352: INFO: Service account default in ns e2e-tests-kubectl-8f38f with secrets found. (2.002894248s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:45:07.352: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-8f38f
Mar 24 07:45:07.352: INFO: Service account default in ns e2e-tests-kubectl-8f38f with secrets found. (857.927µs)
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:122
[BeforeEach] [k8s.io] Guestbook application
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:169
[It] should create and stop a working application [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:181
STEP: creating all guestbook components
Mar 24 07:45:07.353: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config create -f /root/src/k8s.io/kubernetes/examples/guestbook --namespace=e2e-tests-kubectl-8f38f'
Mar 24 07:45:07.890: INFO: stderr: ""
Mar 24 07:45:07.890: INFO: stdout: "replicationcontroller \"frontend\" created\nservice \"frontend\" created\nreplicationcontroller \"redis-master\" created\nservice \"redis-master\" created\nreplicationcontroller \"redis-slave\" created\nservice \"redis-slave\" created"
STEP: validating guestbook app
Mar 24 07:45:07.890: INFO: Waiting for frontend to serve content.
Mar 24 07:45:07.892: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response: 
Mar 24 07:45:12.899: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response: 
Mar 24 07:45:17.901: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response: 
Mar 24 07:45:22.903: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response: 
Mar 24 07:45:27.905: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response: 
Mar 24 07:45:32.908: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response: 
Mar 24 07:45:38.300: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:45:43.380: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:45:48.447: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:45:53.524: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:45:58.769: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:46:05.032: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:46:10.506: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:46:15.898: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:46:20.972: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:46:26.056: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:46:32.088: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:46:37.583: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:46:42.999: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:46:48.070: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:46:53.895: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:46:58.974: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:47:04.046: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:47:09.459: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:47:14.531: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:47:19.602: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:47:24.684: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:47:29.765: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:47:34.854: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:47:39.937: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:47:45.254: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:47:50.338: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

... (the same ConnectionException and stack trace repeat 49 more times at roughly 5-second intervals, from 07:47:55.417 through 07:52:02.119; the duplicate entries are elided) ...

Mar 24 07:52:07.188: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:52:12.258: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:52:17.332: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:52:22.400: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:52:27.466: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:52:32.533: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:52:38.036: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:52:43.105: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:52:48.174: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:52:53.243: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:52:58.317: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:53:03.384: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:53:08.760: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:53:13.833: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:53:18.900: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:53:23.970: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:53:29.040: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:53:34.109: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:53:39.179: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:53:44.248: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:53:49.314: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:53:54.378: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:53:59.444: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:54:04.511: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:54:09.784: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:54:14.853: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:54:19.921: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:54:24.988: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:54:30.358: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:54:35.726: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:54:40.794: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:54:45.862: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:54:50.932: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:54:56.004: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:55:01.073: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />

Mar 24 07:55:07.217: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>:  Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResour in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />
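Editor's note: the failure mode above is DNS resolution, not Redis. 'php_network_getaddresses: getaddrinfo failed: Name or service not known' means the frontend pod could never turn the Service name redis-slave into an IP, so Predis fails before a TCP connection is even attempted. A minimal diagnostic sketch (hypothetical commands, not part of this run; assumes the gb-frontend image ships nslookup):

  # Does the Service exist and does it have endpoints?
  kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get svc redis-slave --namespace=e2e-tests-kubectl-8f38f
  kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get endpoints redis-slave --namespace=e2e-tests-kubectl-8f38f
  # Is the DNS add-on running? Without a kube-dns pod, Service names cannot resolve.
  kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods --namespace=kube-system -l k8s-app=kube-dns
  # Can the frontend pod resolve the name at all? (assumes nslookup exists in the image)
  kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config exec frontend-04dlc --namespace=e2e-tests-kubectl-8f38f -- nslookup redis-slave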

Mar 24 07:55:12.217: FAIL: Frontend service did not start serving content in 600 seconds.
STEP: using delete to clean up resources
Mar 24 07:55:12.220: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config delete --grace-period=0 -f /root/src/k8s.io/kubernetes/examples/guestbook --namespace=e2e-tests-kubectl-8f38f'
Mar 24 07:55:18.387: INFO: stderr: ""
Mar 24 07:55:18.387: INFO: stdout: "replicationcontroller \"frontend\" deleted\nservice \"frontend\" deleted\nreplicationcontroller \"redis-master\" deleted\nservice \"redis-master\" deleted\nreplicationcontroller \"redis-slave\" deleted\nservice \"redis-slave\" deleted"
Mar 24 07:55:18.388: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get rc,svc -l app=guestbook,tier=frontend --no-headers --namespace=e2e-tests-kubectl-8f38f'
Mar 24 07:55:18.402: INFO: stderr: ""
Mar 24 07:55:18.402: INFO: stdout: ""
Mar 24 07:55:18.402: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods -l app=guestbook,tier=frontend --namespace=e2e-tests-kubectl-8f38f -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 24 07:55:18.418: INFO: stderr: ""
Mar 24 07:55:18.418: INFO: stdout: ""
Mar 24 07:55:18.418: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get rc,svc -l app=redis,role=master --no-headers --namespace=e2e-tests-kubectl-8f38f'
Mar 24 07:55:18.432: INFO: stderr: ""
Mar 24 07:55:18.432: INFO: stdout: ""
Mar 24 07:55:18.432: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods -l app=redis,role=master --namespace=e2e-tests-kubectl-8f38f -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 24 07:55:18.447: INFO: stderr: ""
Mar 24 07:55:18.447: INFO: stdout: ""
Mar 24 07:55:18.447: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get rc,svc -l app=redis,role=slave --no-headers --namespace=e2e-tests-kubectl-8f38f'
Mar 24 07:55:18.460: INFO: stderr: ""
Mar 24 07:55:18.460: INFO: stdout: ""
Mar 24 07:55:18.460: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods -l app=redis,role=slave --namespace=e2e-tests-kubectl-8f38f -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 24 07:55:18.476: INFO: stderr: ""
Mar 24 07:55:18.476: INFO: stdout: ""
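Editor's note: the go-template in the cleanup checks above prints the name of each pod that has no .metadata.deletionTimestamp, one per line, so an empty stdout means no pod remains that isn't already marked for deletion. The same check written out as a standalone command (quoting added for a shell):

  kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods \
    -l app=guestbook,tier=frontend --namespace=e2e-tests-kubectl-8f38f \
    -o go-template='{{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'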
[AfterEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
STEP: Collecting events from namespace "e2e-tests-kubectl-8f38f".
Mar 24 07:55:18.483: INFO: At 2016-03-24 07:45:07 +0000 UTC - event for frontend: {replication-controller } SuccessfulCreate: Created pod: frontend-z4nrt
Mar 24 07:55:18.483: INFO: At 2016-03-24 07:45:07 +0000 UTC - event for frontend: {replication-controller } SuccessfulCreate: Created pod: frontend-04dlc
Mar 24 07:55:18.483: INFO: At 2016-03-24 07:45:07 +0000 UTC - event for frontend: {replication-controller } SuccessfulCreate: Created pod: frontend-nfjwu
Mar 24 07:55:18.483: INFO: At 2016-03-24 07:45:07 +0000 UTC - event for frontend-04dlc: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_samples/gb-frontend:v4" already present on machine
Mar 24 07:55:18.483: INFO: At 2016-03-24 07:45:07 +0000 UTC - event for frontend-04dlc: {default-scheduler } Scheduled: Successfully assigned frontend-04dlc to 127.0.0.1
Mar 24 07:55:18.483: INFO: At 2016-03-24 07:45:07 +0000 UTC - event for frontend-nfjwu: {default-scheduler } Scheduled: Successfully assigned frontend-nfjwu to 127.0.0.1
Mar 24 07:55:18.483: INFO: At 2016-03-24 07:45:07 +0000 UTC - event for frontend-nfjwu: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_samples/gb-frontend:v4" already present on machine
Mar 24 07:55:18.483: INFO: At 2016-03-24 07:45:07 +0000 UTC - event for frontend-z4nrt: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_samples/gb-frontend:v4" already present on machine
Mar 24 07:55:18.483: INFO: At 2016-03-24 07:45:07 +0000 UTC - event for frontend-z4nrt: {default-scheduler } Scheduled: Successfully assigned frontend-z4nrt to 127.0.0.1
Mar 24 07:55:18.483: INFO: At 2016-03-24 07:45:07 +0000 UTC - event for redis-master: {replication-controller } SuccessfulCreate: Created pod: redis-master-exkdz
Mar 24 07:55:18.483: INFO: At 2016-03-24 07:45:07 +0000 UTC - event for redis-master-exkdz: {default-scheduler } Scheduled: Successfully assigned redis-master-exkdz to 127.0.0.1
Mar 24 07:55:18.483: INFO: At 2016-03-24 07:45:07 +0000 UTC - event for redis-master-exkdz: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/redis:e2e" already present on machine
Mar 24 07:55:18.483: INFO: At 2016-03-24 07:45:07 +0000 UTC - event for redis-slave: {replication-controller } SuccessfulCreate: Created pod: redis-slave-jl3x3
Mar 24 07:55:18.483: INFO: At 2016-03-24 07:45:07 +0000 UTC - event for redis-slave: {replication-controller } SuccessfulCreate: Created pod: redis-slave-osa3l
Mar 24 07:55:18.483: INFO: At 2016-03-24 07:45:07 +0000 UTC - event for redis-slave-jl3x3: {default-scheduler } Scheduled: Successfully assigned redis-slave-jl3x3 to 127.0.0.1
Mar 24 07:55:18.483: INFO: At 2016-03-24 07:45:07 +0000 UTC - event for redis-slave-osa3l: {default-scheduler } Scheduled: Successfully assigned redis-slave-osa3l to 127.0.0.1
Mar 24 07:55:18.483: INFO: At 2016-03-24 07:45:08 +0000 UTC - event for redis-slave-jl3x3: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_samples/gb-redisslave:v1" already present on machine
Mar 24 07:55:18.483: INFO: At 2016-03-24 07:45:12 +0000 UTC - event for redis-slave-osa3l: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_samples/gb-redisslave:v1" already present on machine
Mar 24 07:55:18.483: INFO: At 2016-03-24 07:55:13 +0000 UTC - event for frontend: {replication-controller } SuccessfulDelete: Deleted pod: frontend-nfjwu
Mar 24 07:55:18.483: INFO: At 2016-03-24 07:55:13 +0000 UTC - event for frontend: {replication-controller } SuccessfulDelete: Deleted pod: frontend-z4nrt
Mar 24 07:55:18.483: INFO: At 2016-03-24 07:55:13 +0000 UTC - event for frontend: {replication-controller } SuccessfulDelete: Deleted pod: frontend-04dlc
Mar 24 07:55:18.483: INFO: At 2016-03-24 07:55:15 +0000 UTC - event for redis-master: {replication-controller } SuccessfulDelete: Deleted pod: redis-master-exkdz
Mar 24 07:55:18.483: INFO: At 2016-03-24 07:55:17 +0000 UTC - event for redis-slave: {replication-controller } SuccessfulDelete: Deleted pod: redis-slave-osa3l
Mar 24 07:55:18.483: INFO: At 2016-03-24 07:55:17 +0000 UTC - event for redis-slave: {replication-controller } SuccessfulDelete: Deleted pod: redis-slave-jl3x3
Mar 24 07:55:18.485: INFO: POD                 NODE       PHASE    GRACE  CONDITIONS
Mar 24 07:55:18.486: INFO: frontend-04dlc      127.0.0.1  Running  30s    [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-24 07:45:36 +0000 UTC  }]
Mar 24 07:55:18.486: INFO: redis-master-exkdz  127.0.0.1  Running  30s    [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-24 07:45:36 +0000 UTC  }]
Mar 24 07:55:18.486: INFO: redis-slave-jl3x3   127.0.0.1  Running  30s    [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-24 07:45:36 +0000 UTC  }]
Mar 24 07:55:18.486: INFO: redis-slave-osa3l   127.0.0.1  Running  30s    [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-24 07:45:36 +0000 UTC  }]
Mar 24 07:55:18.486: INFO: 
Mar 24 07:55:18.487: INFO: 
Logging node info for node 127.0.0.1
Mar 24 07:55:18.488: INFO: Node Info: &{{ } {127.0.0.1   /api/v1/nodes/127.0.0.1 3601f765-f192-11e5-a8cf-064a4ed57913 2284 0 2016-03-24 07:29:57 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1  false} {map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}]  [{OutOfDisk False 2016-03-24 07:55:17 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-24 07:55:17 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {0af04d3c78a943ae8f3cc26602e374f2 EC286296-F1A1-3AB3-2979-B08D0FE5BF4A fdfc86c6-a0ae-41b4-8ab8-29b0b8fd1e8b 3.10.0-229.20.1.el7.x86_64 CentOS Linux 7 (Core) hyper://0.5.0 v1.3.0-alpha.0.854+d9aa83a25cdc21 v1.3.0-alpha.0.854+d9aa83a25cdc21} []}}
Mar 24 07:55:18.488: INFO: 
Logging kubelet events for node 127.0.0.1
Mar 24 07:55:18.489: INFO: 
Logging pods the kubelet thinks are on node 127.0.0.1
Mar 24 07:55:18.492: INFO: redis-master-exkdz started at 2016-03-24 07:45:07 +0000 UTC (1 container statuses recorded)
Mar 24 07:55:18.492: INFO: 	Container master ready: true, restart count 0
Mar 24 07:55:18.492: INFO: redis-slave-jl3x3 started at 2016-03-24 07:45:07 +0000 UTC (1 container statuses recorded)
Mar 24 07:55:18.492: INFO: 	Container slave ready: true, restart count 0
Mar 24 07:55:18.492: INFO: redis-slave-osa3l started at 2016-03-24 07:45:07 +0000 UTC (1 container statuses recorded)
Mar 24 07:55:18.492: INFO: 	Container slave ready: true, restart count 0
Mar 24 07:55:18.492: INFO: frontend-04dlc started at 2016-03-24 07:45:07 +0000 UTC (1 container statuses recorded)
Mar 24 07:55:18.492: INFO: 	Container php-redis ready: true, restart count 0
Mar 24 07:55:18.521: INFO: 
Latency metrics for node 127.0.0.1
Mar 24 07:55:18.521: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8f38f" for this suite.

• Failure [648.183 seconds]
[k8s.io] Kubectl client
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  [k8s.io] Guestbook application
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
    should create and stop a working application [Conformance] [It]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:181

    Mar 24 07:55:12.217: Frontend service did not start serving content in 600 seconds.

    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1235
------------------------------
SSS
------------------------------
[k8s.io] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:880
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:55:53.529: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:55:53.532: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-6osym
Mar 24 07:55:53.533: INFO: Get service account default in ns e2e-tests-kubectl-6osym failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:55:55.535: INFO: Service account default in ns e2e-tests-kubectl-6osym with secrets found. (2.002300289s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:55:55.535: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-6osym
Mar 24 07:55:55.535: INFO: Service account default in ns e2e-tests-kubectl-6osym with secrets found. (809.302µs)
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:122
[It] should check is all data is printed [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:880
Mar 24 07:55:55.535: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config version'
Mar 24 07:55:55.548: INFO: stderr: ""
Mar 24 07:55:55.548: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"3+\", GitVersion:\"v1.3.0-alpha.0.854+d9aa83a25cdc21\", GitCommit:\"d9aa83a25cdc21bb5a43056e928536883648d0f2\", GitTreeState:\"clean\", BuildDate:\"2016-03-24T07:22:48Z\", GoVersion:\"go1.5.3\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"3+\", GitVersion:\"v1.3.0-alpha.0.854+d9aa83a25cdc21\", GitCommit:\"d9aa83a25cdc21bb5a43056e928536883648d0f2\", GitTreeState:\"clean\", BuildDate:\"2016-03-24T07:22:48Z\", GoVersion:\"go1.5.3\", Compiler:\"gc\", Platform:\"linux/amd64\"}"
[AfterEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:55:55.548: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6osym" for this suite.

• [SLOW TEST:7.026 seconds]
[k8s.io] Kubectl client
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  [k8s.io] Kubectl version
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
    should check is all data is printed [Conformance]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:880
------------------------------
SSSS
------------------------------
[k8s.io] Downward API 
  should provide pod name and namespace as env vars [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downward_api.go:60
[BeforeEach] [k8s.io] Downward API
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:56:00.556: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:56:00.558: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-4s37d
Mar 24 07:56:00.559: INFO: Get service account default in ns e2e-tests-downward-api-4s37d failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:56:02.560: INFO: Service account default in ns e2e-tests-downward-api-4s37d with secrets found. (2.002245466s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:56:02.560: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-4s37d
Mar 24 07:56:02.561: INFO: Service account default in ns e2e-tests-downward-api-4s37d with secrets found. (868.837µs)
[It] should provide pod name and namespace as env vars [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downward_api.go:60
STEP: Creating a pod to test downward api env vars
Mar 24 07:56:02.565: INFO: Waiting up to 5m0s for pod downward-api-daa950ed-f195-11e5-8186-064a4ed57913 status to be success or failure
Mar 24 07:56:02.567: INFO: No Status.Info for container 'dapi-container' in pod 'downward-api-daa950ed-f195-11e5-8186-064a4ed57913' yet
Mar 24 07:56:02.567: INFO: Waiting for pod downward-api-daa950ed-f195-11e5-8186-064a4ed57913 in namespace 'e2e-tests-downward-api-4s37d' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.660525ms elapsed)
Mar 24 07:56:04.569: INFO: Nil State.Terminated for container 'dapi-container' in pod 'downward-api-daa950ed-f195-11e5-8186-064a4ed57913' in namespace 'e2e-tests-downward-api-4s37d' so far
Mar 24 07:56:04.569: INFO: Waiting for pod downward-api-daa950ed-f195-11e5-8186-064a4ed57913 in namespace 'e2e-tests-downward-api-4s37d' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.003871792s elapsed)
Mar 24 07:56:06.571: INFO: Nil State.Terminated for container 'dapi-container' in pod 'downward-api-daa950ed-f195-11e5-8186-064a4ed57913' in namespace 'e2e-tests-downward-api-4s37d' so far
Mar 24 07:56:06.571: INFO: Waiting for pod downward-api-daa950ed-f195-11e5-8186-064a4ed57913 in namespace 'e2e-tests-downward-api-4s37d' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.005921316s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod downward-api-daa950ed-f195-11e5-8186-064a4ed57913 container dapi-container: <nil>
STEP: Successfully fetched pod logs:KUBERNETES_PORT=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT=443
SHLVL=1
HOME=/
TERM=linux
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
POD_NAME=downward-api-daa950ed-f195-11e5-8186-064a4ed57913
PATH=/bin:/sbin/:/usr/bin/:/usr/sbin/
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
POD_NAMESPACE=e2e-tests-downward-api-4s37d
PWD=/
KUBERNETES_SERVICE_HOST=10.0.0.1


[AfterEach] [k8s.io] Downward API
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:56:08.585: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4s37d" for this suite.

• [SLOW TEST:13.037 seconds]
[k8s.io] Downward API
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should provide pod name and namespace as env vars [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downward_api.go:60
------------------------------
SSSSSSS
------------------------------
[k8s.io] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:355
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:56:13.593: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:56:13.596: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-sched-pred-hhz2e
Mar 24 07:56:13.597: INFO: Get service account default in ns e2e-tests-sched-pred-hhz2e failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:56:15.598: INFO: Service account default in ns e2e-tests-sched-pred-hhz2e with secrets found. (2.00223591s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:56:15.598: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-sched-pred-hhz2e
Mar 24 07:56:15.599: INFO: Service account default in ns e2e-tests-sched-pred-hhz2e with secrets found. (827.493µs)
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:199
Mar 24 07:56:15.600: INFO: Waiting for terminating namespaces to be deleted...
Mar 24 07:56:15.602: INFO: >>> testContext.KubeConfig: /root/.kube/config

Mar 24 07:56:15.603: INFO: Waiting up to 2m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 24 07:56:15.604: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 24 07:56:15.604: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Mar 24 07:56:15.604: INFO: 
Logging pods the kubelet thinks is on node 127.0.0.1 before test
[It] validates resource limits of pods that are allowed to run [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:355
Mar 24 07:56:15.608: INFO: Node: 127.0.0.1 has capacity: 2000
STEP: Starting additional 4 Pods to fully saturate the cluster CPU and trying to start another one
Mar 24 07:56:15.630: INFO: Waiting for running...
Mar 24 07:56:35.633: INFO: Sleeping 10 seconds and crossing our fingers that scheduler will run in that time.
STEP: Removing all pods in namespace e2e-tests-sched-pred-hhz2e
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:158
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:56:45.734: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-hhz2e" for this suite.

• [SLOW TEST:37.155 seconds]
[k8s.io] SchedulerPredicates [Serial]
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  validates resource limits of pods that are allowed to run [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:355
------------------------------
SS
------------------------------
[k8s.io] ReplicaSet 
  should serve a basic image on each replica with a public image [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:39
[BeforeEach] [k8s.io] ReplicaSet
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:56:50.748: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:56:50.750: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-replicaset-71iw9
Mar 24 07:56:50.751: INFO: Get service account default in ns e2e-tests-replicaset-71iw9 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:56:52.753: INFO: Service account default in ns e2e-tests-replicaset-71iw9 with secrets found. (2.002588488s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:56:52.753: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-replicaset-71iw9
Mar 24 07:56:52.754: INFO: Service account default in ns e2e-tests-replicaset-71iw9 with secrets found. (895.658µs)
[It] should serve a basic image on each replica with a public image [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:39
STEP: Creating ReplicaSet my-hostname-basic-f8940f6d-f195-11e5-8186-064a4ed57913
Mar 24 07:56:52.759: INFO: Pod name my-hostname-basic-f8940f6d-f195-11e5-8186-064a4ed57913: Found 0 pods out of 2
Mar 24 07:56:57.766: INFO: Pod name my-hostname-basic-f8940f6d-f195-11e5-8186-064a4ed57913: Found 2 pods out of 2
STEP: Ensuring each pod is running
Mar 24 07:56:57.766: INFO: Waiting up to 5m0s for pod my-hostname-basic-f8940f6d-f195-11e5-8186-064a4ed57913-jh06b status to be running
Mar 24 07:56:57.769: INFO: Waiting for pod my-hostname-basic-f8940f6d-f195-11e5-8186-064a4ed57913-jh06b in namespace 'e2e-tests-replicaset-71iw9' status to be 'running'(found phase: "Pending", readiness: false) (2.949554ms elapsed)
Mar 24 07:56:59.771: INFO: Waiting for pod my-hostname-basic-f8940f6d-f195-11e5-8186-064a4ed57913-jh06b in namespace 'e2e-tests-replicaset-71iw9' status to be 'running'(found phase: "Pending", readiness: false) (2.004973884s elapsed)
Mar 24 07:57:01.773: INFO: Waiting for pod my-hostname-basic-f8940f6d-f195-11e5-8186-064a4ed57913-jh06b in namespace 'e2e-tests-replicaset-71iw9' status to be 'running'(found phase: "Pending", readiness: false) (4.00701844s elapsed)
Mar 24 07:57:03.775: INFO: Waiting for pod my-hostname-basic-f8940f6d-f195-11e5-8186-064a4ed57913-jh06b in namespace 'e2e-tests-replicaset-71iw9' status to be 'running'(found phase: "Pending", readiness: false) (6.009029445s elapsed)
Mar 24 07:57:05.777: INFO: Waiting for pod my-hostname-basic-f8940f6d-f195-11e5-8186-064a4ed57913-jh06b in namespace 'e2e-tests-replicaset-71iw9' status to be 'running'(found phase: "Pending", readiness: false) (8.011002629s elapsed)
Mar 24 07:57:07.783: INFO: Waiting for pod my-hostname-basic-f8940f6d-f195-11e5-8186-064a4ed57913-jh06b in namespace 'e2e-tests-replicaset-71iw9' status to be 'running'(found phase: "Pending", readiness: false) (10.016786763s elapsed)
Mar 24 07:57:09.785: INFO: Found pod 'my-hostname-basic-f8940f6d-f195-11e5-8186-064a4ed57913-jh06b' on node '127.0.0.1'
Mar 24 07:57:09.785: INFO: Waiting up to 5m0s for pod my-hostname-basic-f8940f6d-f195-11e5-8186-064a4ed57913-wqie2 status to be running
Mar 24 07:57:09.786: INFO: Found pod 'my-hostname-basic-f8940f6d-f195-11e5-8186-064a4ed57913-wqie2' on node '127.0.0.1'
STEP: Trying to dial each unique pod
Mar 24 07:57:14.841: INFO: Controller my-hostname-basic-f8940f6d-f195-11e5-8186-064a4ed57913: Got expected result from replica 1 [my-hostname-basic-f8940f6d-f195-11e5-8186-064a4ed57913-jh06b]: "my-hostname-basic-f8940f6d-f195-11e5-8186-064a4ed57913-jh06b", 1 of 2 required successes so far
Mar 24 07:57:14.898: INFO: Controller my-hostname-basic-f8940f6d-f195-11e5-8186-064a4ed57913: Got expected result from replica 2 [my-hostname-basic-f8940f6d-f195-11e5-8186-064a4ed57913-wqie2]: "my-hostname-basic-f8940f6d-f195-11e5-8186-064a4ed57913-wqie2", 2 of 2 required successes so far
STEP: deleting ReplicaSet my-hostname-basic-f8940f6d-f195-11e5-8186-064a4ed57913 in namespace e2e-tests-replicaset-71iw9
Mar 24 07:57:16.913: INFO: Deleting RS my-hostname-basic-f8940f6d-f195-11e5-8186-064a4ed57913 took: 2.013731148s
Mar 24 07:57:20.917: INFO: Terminating ReplicaSet my-hostname-basic-f8940f6d-f195-11e5-8186-064a4ed57913 pods took: 4.003429807s
[AfterEach] [k8s.io] ReplicaSet
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:57:20.917: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-71iw9" for this suite.

• [SLOW TEST:35.176 seconds]
[k8s.io] ReplicaSet
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should serve a basic image on each replica with a public image [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:39
------------------------------
SSSSS
------------------------------
[k8s.io] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:151
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:57:25.924: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:57:25.926: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-uhr2n
Mar 24 07:57:25.927: INFO: Get service account default in ns e2e-tests-kubectl-uhr2n failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:57:27.929: INFO: Service account default in ns e2e-tests-kubectl-uhr2n with secrets found. (2.003068916s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:57:27.929: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-uhr2n
Mar 24 07:57:27.930: INFO: Service account default in ns e2e-tests-kubectl-uhr2n with secrets found. (809.76µs)
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:122
[BeforeEach] [k8s.io] Update Demo
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:130
[It] should scale a replication controller [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:151
STEP: creating a replication controller
Mar 24 07:57:27.930: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config create -f /root/src/k8s.io/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:27.981: INFO: stderr: ""
Mar 24 07:57:27.981: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" created"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 24 07:57:27.981: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:28.007: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:57:28.007: INFO: stdout: "update-demo-nautilus-1oa8m update-demo-nautilus-6trst"
Mar 24 07:57:28.008: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-1oa8m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:28.044: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:57:28.044: INFO: stdout: ""
Mar 24 07:57:28.044: INFO: update-demo-nautilus-1oa8m is created but not running
Mar 24 07:57:33.045: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:33.062: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:57:33.062: INFO: stdout: "update-demo-nautilus-1oa8m update-demo-nautilus-6trst"
Mar 24 07:57:33.062: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-1oa8m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:33.075: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:57:33.075: INFO: stdout: ""
Mar 24 07:57:33.075: INFO: update-demo-nautilus-1oa8m is created but not running
Mar 24 07:57:38.075: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:38.091: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:57:38.091: INFO: stdout: "update-demo-nautilus-1oa8m update-demo-nautilus-6trst"
Mar 24 07:57:38.091: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-1oa8m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:38.105: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:57:38.105: INFO: stdout: ""
Mar 24 07:57:38.105: INFO: update-demo-nautilus-1oa8m is created but not running
Mar 24 07:57:43.105: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:43.122: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:57:43.122: INFO: stdout: "update-demo-nautilus-1oa8m update-demo-nautilus-6trst"
Mar 24 07:57:43.122: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-1oa8m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:43.136: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:57:43.136: INFO: stdout: "true"
Mar 24 07:57:43.136: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-1oa8m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:43.150: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:57:43.150: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Mar 24 07:57:43.150: INFO: validating pod update-demo-nautilus-1oa8m
Mar 24 07:57:43.202: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 24 07:57:43.202: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 24 07:57:43.202: INFO: update-demo-nautilus-1oa8m is verified up and running
Mar 24 07:57:43.202: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6trst -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:43.217: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:57:43.217: INFO: stdout: "true"
Mar 24 07:57:43.217: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6trst -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:43.231: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:57:43.231: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Mar 24 07:57:43.231: INFO: validating pod update-demo-nautilus-6trst
Mar 24 07:57:43.285: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 24 07:57:43.285: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 24 07:57:43.285: INFO: update-demo-nautilus-6trst is verified up and running
STEP: scaling down the replication controller
Mar 24 07:57:43.285: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:45.308: INFO: stderr: ""
Mar 24 07:57:45.308: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" scaled"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 24 07:57:45.309: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:45.326: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:57:45.326: INFO: stdout: "update-demo-nautilus-1oa8m update-demo-nautilus-6trst"
STEP: Replicas for name=update-demo: expected=1 actual=2
Mar 24 07:57:50.327: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:50.345: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:57:50.345: INFO: stdout: "update-demo-nautilus-6trst"
Mar 24 07:57:50.345: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6trst -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:50.359: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:57:50.359: INFO: stdout: "true"
Mar 24 07:57:50.360: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6trst -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:50.376: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:57:50.376: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Mar 24 07:57:50.376: INFO: validating pod update-demo-nautilus-6trst
Mar 24 07:57:50.394: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 24 07:57:50.394: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 24 07:57:50.394: INFO: update-demo-nautilus-6trst is verified up and running
STEP: scaling up the replication controller
Mar 24 07:57:50.394: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:52.419: INFO: stderr: ""
Mar 24 07:57:52.419: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" scaled"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 24 07:57:52.419: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:52.436: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:57:52.436: INFO: stdout: "update-demo-nautilus-6trst update-demo-nautilus-pn08q"
Mar 24 07:57:52.436: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6trst -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:52.453: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:57:52.453: INFO: stdout: "true"
Mar 24 07:57:52.453: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6trst -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:52.469: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:57:52.469: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Mar 24 07:57:52.469: INFO: validating pod update-demo-nautilus-6trst
Mar 24 07:57:52.478: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 24 07:57:52.478: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 24 07:57:52.478: INFO: update-demo-nautilus-6trst is verified up and running
Mar 24 07:57:52.478: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pn08q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:52.499: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:57:52.499: INFO: stdout: ""
Mar 24 07:57:52.499: INFO: update-demo-nautilus-pn08q is created but not running
Mar 24 07:57:57.499: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:57.516: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:57:57.516: INFO: stdout: "update-demo-nautilus-6trst update-demo-nautilus-pn08q"
Mar 24 07:57:57.516: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6trst -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:57.531: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:57:57.531: INFO: stdout: "true"
Mar 24 07:57:57.531: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6trst -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:57.545: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:57:57.545: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Mar 24 07:57:57.545: INFO: validating pod update-demo-nautilus-6trst
Mar 24 07:57:57.550: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 24 07:57:57.550: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 24 07:57:57.550: INFO: update-demo-nautilus-6trst is verified up and running
Mar 24 07:57:57.550: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pn08q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:57.565: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:57:57.565: INFO: stdout: "true"
Mar 24 07:57:57.565: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pn08q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:57.579: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 24 07:57:57.579: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Mar 24 07:57:57.579: INFO: validating pod update-demo-nautilus-pn08q
Mar 24 07:57:57.635: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 24 07:57:57.635: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 24 07:57:57.635: INFO: update-demo-nautilus-pn08q is verified up and running
STEP: using delete to clean up resources
Mar 24 07:57:57.635: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config delete --grace-period=0 -f /root/src/k8s.io/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:59.695: INFO: stderr: ""
Mar 24 07:57:59.695: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" deleted"
Mar 24 07:57:59.695: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-uhr2n'
Mar 24 07:57:59.751: INFO: stderr: ""
Mar 24 07:57:59.751: INFO: stdout: ""
Mar 24 07:57:59.751: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-uhr2n -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 24 07:57:59.810: INFO: stderr: ""
Mar 24 07:57:59.810: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:57:59.810: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-uhr2n" for this suite.

• [SLOW TEST:53.895 seconds]
[k8s.io] Kubectl client
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  [k8s.io] Update Demo
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
    should scale a replication controller [Conformance]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:151
------------------------------
[k8s.io] Secrets 
  should be consumable from pods in volume [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/secrets.go:99
[BeforeEach] [k8s.io] Secrets
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:58:19.819: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:58:19.821: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-secrets-2sms8
Mar 24 07:58:19.822: INFO: Get service account default in ns e2e-tests-secrets-2sms8 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:58:21.823: INFO: Service account default in ns e2e-tests-secrets-2sms8 with secrets found. (2.002289969s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:58:21.823: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-secrets-2sms8
Mar 24 07:58:21.824: INFO: Service account default in ns e2e-tests-secrets-2sms8 with secrets found. (884.013µs)
[It] should be consumable from pods in volume [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/secrets.go:99
STEP: Creating secret with name secret-test-2dab2a9f-f196-11e5-8186-064a4ed57913
STEP: Creating a pod to test consume secrets
Mar 24 07:58:21.829: INFO: Waiting up to 5m0s for pod pod-secrets-2dab6997-f196-11e5-8186-064a4ed57913 status to be success or failure
Mar 24 07:58:21.831: INFO: No Status.Info for container 'secret-volume-test' in pod 'pod-secrets-2dab6997-f196-11e5-8186-064a4ed57913' yet
Mar 24 07:58:21.831: INFO: Waiting for pod pod-secrets-2dab6997-f196-11e5-8186-064a4ed57913 in namespace 'e2e-tests-secrets-2sms8' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.872012ms elapsed)
Mar 24 07:58:23.833: INFO: Nil State.Terminated for container 'secret-volume-test' in pod 'pod-secrets-2dab6997-f196-11e5-8186-064a4ed57913' in namespace 'e2e-tests-secrets-2sms8' so far
Mar 24 07:58:23.833: INFO: Waiting for pod pod-secrets-2dab6997-f196-11e5-8186-064a4ed57913 in namespace 'e2e-tests-secrets-2sms8' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.0038604s elapsed)
Mar 24 07:58:25.835: INFO: Nil State.Terminated for container 'secret-volume-test' in pod 'pod-secrets-2dab6997-f196-11e5-8186-064a4ed57913' in namespace 'e2e-tests-secrets-2sms8' so far
Mar 24 07:58:25.835: INFO: Waiting for pod pod-secrets-2dab6997-f196-11e5-8186-064a4ed57913 in namespace 'e2e-tests-secrets-2sms8' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.005689101s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-secrets-2dab6997-f196-11e5-8186-064a4ed57913 container secret-volume-test: <nil>
STEP: Successfully fetched pod logs:mode of file "/etc/secret-volume/data-1": -r--r--r--
content of file "/etc/secret-volume/data-1": value-1



STEP: Cleaning up the secret
[AfterEach] [k8s.io] Secrets
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:58:27.851: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-2sms8" for this suite.

• [SLOW TEST:13.039 seconds]
[k8s.io] Secrets
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should be consumable from pods in volume [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/secrets.go:99
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:62
[BeforeEach] version v1
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:58:32.858: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:58:32.860: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-6bbng
Mar 24 07:58:32.862: INFO: Get service account default in ns e2e-tests-proxy-6bbng failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:58:34.863: INFO: Service account default in ns e2e-tests-proxy-6bbng with secrets found. (2.002663276s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:58:34.863: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-6bbng
Mar 24 07:58:34.864: INFO: Service account default in ns e2e-tests-proxy-6bbng with secrets found. (841.132µs)
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:62
Mar 24 07:58:34.868: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.706242ms)
Mar 24 07:58:34.870: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.325406ms)
Mar 24 07:58:34.873: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.364374ms)
Mar 24 07:58:34.875: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.23756ms)
Mar 24 07:58:34.877: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.295462ms)
Mar 24 07:58:34.880: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.224346ms)
Mar 24 07:58:34.889: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 9.064778ms)
Mar 24 07:58:34.910: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 21.036285ms)
Mar 24 07:58:34.916: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 5.819141ms)
Mar 24 07:58:34.918: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.575659ms)
Mar 24 07:58:34.920: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.260463ms)
Mar 24 07:58:34.923: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.166076ms)
Mar 24 07:58:34.926: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 3.271662ms)
Mar 24 07:58:34.928: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.228482ms)
Mar 24 07:58:34.931: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.383415ms)
Mar 24 07:58:34.933: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.653888ms)
Mar 24 07:58:34.935: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.196622ms)
Mar 24 07:58:34.938: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.132915ms)
Mar 24 07:58:34.940: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.291708ms)
Mar 24 07:58:34.942: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.238225ms)
[AfterEach] version v1
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:58:34.942: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-6bbng" for this suite.

• [SLOW TEST:7.091 seconds]
[k8s.io] Proxy
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  version v1
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:39
    should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:62
------------------------------
[k8s.io] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:393
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:58:39.949: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:58:39.951: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-sched-pred-eez5u
Mar 24 07:58:39.952: INFO: Get service account default in ns e2e-tests-sched-pred-eez5u failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:58:41.954: INFO: Service account default in ns e2e-tests-sched-pred-eez5u with secrets found. (2.002445323s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:58:41.954: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-sched-pred-eez5u
Mar 24 07:58:41.955: INFO: Service account default in ns e2e-tests-sched-pred-eez5u with secrets found. (917.318µs)
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:199
Mar 24 07:58:41.956: INFO: Waiting for terminating namespaces to be deleted...
Mar 24 07:58:41.957: INFO: >>> testContext.KubeConfig: /root/.kube/config

Mar 24 07:58:41.959: INFO: Waiting up to 2m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 24 07:58:41.961: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 24 07:58:41.961: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Mar 24 07:58:41.961: INFO: 
Logging pods the kubelet thinks is on node 127.0.0.1 before test
[It] validates that NodeSelector is respected if not matching [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:393
STEP: Trying to schedule Pod with nonempty NodeSelector.
Mar 24 07:58:41.968: INFO: Sleeping 10 seconds and crossing our fingers that scheduler will run in that time.
STEP: Removing all pods in namespace e2e-tests-sched-pred-eez5u
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:158
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:58:51.979: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-eez5u" for this suite.

• [SLOW TEST:17.039 seconds]
[k8s.io] SchedulerPredicates [Serial]
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  validates that NodeSelector is respected if not matching [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:393
------------------------------
SSSSSS
------------------------------
[k8s.io] Secrets 
  should be consumable from pods in env vars [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/secrets.go:158
[BeforeEach] [k8s.io] Secrets
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:58:56.988: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:58:56.992: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-secrets-p7ock
Mar 24 07:58:56.993: INFO: Get service account default in ns e2e-tests-secrets-p7ock failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:58:58.994: INFO: Service account default in ns e2e-tests-secrets-p7ock with secrets found. (2.002120145s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:58:58.994: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-secrets-p7ock
Mar 24 07:58:58.995: INFO: Service account default in ns e2e-tests-secrets-p7ock with secrets found. (838.941µs)
[It] should be consumable from pods in env vars [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/secrets.go:158
STEP: Creating secret with name secret-test-43d2f48e-f196-11e5-8186-064a4ed57913
STEP: Creating a pod to test consume secrets
Mar 24 07:58:59.000: INFO: Waiting up to 5m0s for pod pod-secrets-43d33052-f196-11e5-8186-064a4ed57913 status to be success or failure
Mar 24 07:58:59.002: INFO: No Status.Info for container 'secret-env-test' in pod 'pod-secrets-43d33052-f196-11e5-8186-064a4ed57913' yet
Mar 24 07:58:59.002: INFO: Waiting for pod pod-secrets-43d33052-f196-11e5-8186-064a4ed57913 in namespace 'e2e-tests-secrets-p7ock' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.979586ms elapsed)
Mar 24 07:59:01.004: INFO: Nil State.Terminated for container 'secret-env-test' in pod 'pod-secrets-43d33052-f196-11e5-8186-064a4ed57913' in namespace 'e2e-tests-secrets-p7ock' so far
Mar 24 07:59:01.004: INFO: Waiting for pod pod-secrets-43d33052-f196-11e5-8186-064a4ed57913 in namespace 'e2e-tests-secrets-p7ock' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.00388395s elapsed)
Mar 24 07:59:03.006: INFO: Nil State.Terminated for container 'secret-env-test' in pod 'pod-secrets-43d33052-f196-11e5-8186-064a4ed57913' in namespace 'e2e-tests-secrets-p7ock' so far
Mar 24 07:59:03.006: INFO: Waiting for pod pod-secrets-43d33052-f196-11e5-8186-064a4ed57913 in namespace 'e2e-tests-secrets-p7ock' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.005779145s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-secrets-43d33052-f196-11e5-8186-064a4ed57913 container secret-env-test: <nil>
STEP: Successfully fetched pod logs:KUBERNETES_PORT=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT=443
SHLVL=1
HOME=/
SECRET_DATA=value-1
TERM=linux
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
PATH=/bin:/sbin/:/usr/bin/:/usr/sbin/
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
KUBERNETES_SERVICE_HOST=10.0.0.1


STEP: Cleaning up the secret
[AfterEach] [k8s.io] Secrets
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:59:05.022: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-p7ock" for this suite.

• [SLOW TEST:13.040 seconds]
[k8s.io] Secrets
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should be consumable from pods in env vars [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/secrets.go:158
------------------------------
[k8s.io] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:515
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:59:10.028: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:59:10.031: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-sched-pred-6gv4i
Mar 24 07:59:10.032: INFO: Service account default in ns e2e-tests-sched-pred-6gv4i had 0 secrets, ignoring for 2s: <nil>
Mar 24 07:59:12.034: INFO: Service account default in ns e2e-tests-sched-pred-6gv4i with secrets found. (2.003039841s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:59:12.034: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-sched-pred-6gv4i
Mar 24 07:59:12.035: INFO: Service account default in ns e2e-tests-sched-pred-6gv4i with secrets found. (820.088µs)
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:199
Mar 24 07:59:12.036: INFO: Waiting for terminating namespaces to be deleted...
Mar 24 07:59:12.038: INFO: >>> testContext.KubeConfig: /root/.kube/config

Mar 24 07:59:12.038: INFO: Waiting up to 2m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 24 07:59:12.040: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 24 07:59:12.040: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Mar 24 07:59:12.040: INFO: 
Logging pods the kubelet thinks is on node 127.0.0.1 before test
[It] validates that NodeSelector is respected if matching [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:515
STEP: Trying to launch a pod without a label to get a node which can launch it.
Mar 24 07:59:12.045: INFO: Waiting up to 5m0s for pod without-label status to be running
Mar 24 07:59:12.046: INFO: Waiting for pod without-label in namespace 'e2e-tests-sched-pred-6gv4i' status to be 'running'(found phase: "Pending", readiness: false) (995.662µs elapsed)
Mar 24 07:59:14.049: INFO: Waiting for pod without-label in namespace 'e2e-tests-sched-pred-6gv4i' status to be 'running'(found phase: "Pending", readiness: false) (2.004174251s elapsed)
Mar 24 07:59:16.052: INFO: Waiting for pod without-label in namespace 'e2e-tests-sched-pred-6gv4i' status to be 'running'(found phase: "Pending", readiness: false) (4.006640791s elapsed)
Mar 24 07:59:18.054: INFO: Found pod 'without-label' on node '127.0.0.1'
STEP: Trying to apply a random label on the found node.
STEP: Trying to relaunch the pod, now with labels.
Mar 24 07:59:18.072: INFO: Waiting up to 5m0s for pod with-labels status to be !pending
Mar 24 07:59:18.078: INFO: Waiting for pod with-labels in namespace 'e2e-tests-sched-pred-6gv4i' status to be '!pending'(found phase: "Pending", readiness: false) (5.893735ms elapsed)
Mar 24 07:59:20.082: INFO: Waiting for pod with-labels in namespace 'e2e-tests-sched-pred-6gv4i' status to be '!pending'(found phase: "Pending", readiness: false) (2.009407159s elapsed)
Mar 24 07:59:22.085: INFO: Waiting for pod with-labels in namespace 'e2e-tests-sched-pred-6gv4i' status to be '!pending'(found phase: "Pending", readiness: false) (4.013018874s elapsed)
Mar 24 07:59:24.087: INFO: Waiting for pod with-labels in namespace 'e2e-tests-sched-pred-6gv4i' status to be '!pending'(found phase: "Pending", readiness: false) (6.015017075s elapsed)
Mar 24 07:59:26.089: INFO: Saw pod 'with-labels' in namespace 'e2e-tests-sched-pred-6gv4i' out of pending state (found '"Running"')
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:158
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:59:26.097: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-6gv4i" for this suite.

• [SLOW TEST:21.079 seconds]
[k8s.io] SchedulerPredicates [Serial]
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  validates that NodeSelector is respected if matching [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:515
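
Note: the test above discovers a schedulable node with an unlabeled pod, applies a random label to that node, then relaunches the pod with a matching nodeSelector. The same flow by hand (the label key/value and image are illustrative, not the test's actual random label):

    # Label the node the first pod landed on.
    kubectl label nodes 127.0.0.1 e2e-selector=42

    # Relaunch with a nodeSelector matching the new label.
    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: with-labels
    spec:
      nodeSelector:
        e2e-selector: "42"
      containers:
      - name: pause
        image: gcr.io/google_containers/pause:2.0
    EOF

    # The pod leaves Pending once the scheduler matches the labeled node.
    kubectl get pod with-labels -o wide
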
------------------------------
SSS
------------------------------
[k8s.io] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:573
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:59:31.108: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:59:31.110: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-8jdsg
Mar 24 07:59:31.115: INFO: Get service account default in ns e2e-tests-kubectl-8jdsg failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:59:33.117: INFO: Service account default in ns e2e-tests-kubectl-8jdsg with secrets found. (2.007291664s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:59:33.117: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-8jdsg
Mar 24 07:59:33.118: INFO: Service account default in ns e2e-tests-kubectl-8jdsg with secrets found. (855.814µs)
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:122
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:573
STEP: validating cluster-info
Mar 24 07:59:33.118: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config cluster-info'
Mar 24 07:59:33.132: INFO: stderr: ""
Mar 24 07:59:33.132: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33m127.0.0.1:8080\x1b[0m"
[AfterEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:59:33.132: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8jdsg" for this suite.

• [SLOW TEST:7.031 seconds]
[k8s.io] Kubectl client
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  [k8s.io] Kubectl cluster-info
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
    should check if Kubernetes master services is included in cluster-info [Conformance]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:573
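
Note: the stdout above carries ANSI color escapes (\x1b[0;32m ... \x1b[0m) around "Kubernetes master"; the test only asserts that the master service appears in the output. To reproduce, and to strip the colors when scripting against it (the hex escape in sed assumes GNU sed):

    # Same invocation as the test, pointed at the local master.
    kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config cluster-info

    # Strip ANSI escapes before grepping the plain text (GNU sed).
    kubectl cluster-info | sed 's/\x1b\[[0-9;]*m//g' | grep "Kubernetes master"
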
------------------------------
SS
------------------------------
[k8s.io] Networking 
  should provide Internet connection for containers [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:54
[BeforeEach] [k8s.io] Networking
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:59:38.139: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:59:38.142: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-gau4d
Mar 24 07:59:38.142: INFO: Get service account default in ns e2e-tests-nettest-gau4d failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:59:40.144: INFO: Service account default in ns e2e-tests-nettest-gau4d with secrets found. (2.00224504s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:59:40.144: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-gau4d
Mar 24 07:59:40.145: INFO: Service account default in ns e2e-tests-nettest-gau4d with secrets found. (878.533µs)
[BeforeEach] [k8s.io] Networking
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
STEP: Executing a successful http request from the external internet
[It] should provide Internet connection for containers [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:54
STEP: Running container which tries to wget google.com
Mar 24 07:59:40.195: INFO: Waiting up to 5m0s for pod wget-test status to be success or failure
Mar 24 07:59:40.197: INFO: No Status.Info for container 'wget-test-container' in pod 'wget-test' yet
Mar 24 07:59:40.197: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-gau4d' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.300176ms elapsed)
Mar 24 07:59:42.199: INFO: Nil State.Terminated for container 'wget-test-container' in pod 'wget-test' in namespace 'e2e-tests-nettest-gau4d' so far
Mar 24 07:59:42.199: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-gau4d' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.004149146s elapsed)
Mar 24 07:59:44.202: INFO: Nil State.Terminated for container 'wget-test-container' in pod 'wget-test' in namespace 'e2e-tests-nettest-gau4d' so far
Mar 24 07:59:44.202: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-gau4d' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.006983219s elapsed)
STEP: Saw pod success
[AfterEach] [k8s.io] Networking
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 07:59:46.212: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-nettest-gau4d" for this suite.

• [SLOW TEST:13.083 seconds]
[k8s.io] Networking
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should provide Internet connection for containers [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:54
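
Note: the connectivity check is just a short-lived pod that fetches an external URL and exits 0 on success, so the phase goes Pending -> Succeeded. A minimal equivalent (busybox wget; host and timeout are illustrative):

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: wget-test
    spec:
      restartPolicy: Never
      containers:
      - name: wget-test-container
        image: busybox
        command: ["sh", "-c", "wget -q -T 5 -O /dev/null http://google.com"]
    EOF

    # Succeeded means the container could reach the internet.
    kubectl get pod wget-test
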
------------------------------
SS
------------------------------
[k8s.io] Downward API volume 
  should update annotations on modification [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:132
[BeforeEach] [k8s.io] Downward API volume
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 07:59:51.222: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 07:59:51.224: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-lzlbq
Mar 24 07:59:51.225: INFO: Get service account default in ns e2e-tests-downward-api-lzlbq failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 07:59:53.226: INFO: Service account default in ns e2e-tests-downward-api-lzlbq with secrets found. (2.002419036s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 07:59:53.226: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-lzlbq
Mar 24 07:59:53.227: INFO: Service account default in ns e2e-tests-downward-api-lzlbq with secrets found. (827.26µs)
[It] should update annotations on modification [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:132
STEP: Creating the pod
Mar 24 07:59:53.230: INFO: Waiting up to 5m0s for pod annotationupdate64262717-f196-11e5-8186-064a4ed57913 status to be running
Mar 24 07:59:53.235: INFO: Waiting for pod annotationupdate64262717-f196-11e5-8186-064a4ed57913 in namespace 'e2e-tests-downward-api-lzlbq' status to be 'running'(found phase: "Pending", readiness: false) (4.169402ms elapsed)
Mar 24 07:59:55.237: INFO: Waiting for pod annotationupdate64262717-f196-11e5-8186-064a4ed57913 in namespace 'e2e-tests-downward-api-lzlbq' status to be 'running'(found phase: "Pending", readiness: false) (2.006448405s elapsed)
Mar 24 07:59:57.239: INFO: Waiting for pod annotationupdate64262717-f196-11e5-8186-064a4ed57913 in namespace 'e2e-tests-downward-api-lzlbq' status to be 'running'(found phase: "Pending", readiness: false) (4.008433123s elapsed)
Mar 24 07:59:59.241: INFO: Found pod 'annotationupdate64262717-f196-11e5-8186-064a4ed57913' on node '127.0.0.1'
STEP: Deleting the pod
[AfterEach] [k8s.io] Downward API volume
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:00:01.265: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-lzlbq" for this suite.

• [SLOW TEST:15.057 seconds]
[k8s.io] Downward API volume
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should update annotations on modification [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:132
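
Note: this test mounts the pod's annotations as a downward API volume file and verifies the file is rewritten after the annotations change. A sketch of the same mechanism (annotation key and mount path are illustrative; the kubelet refreshes the mounted file with some delay):

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: annotationupdate-demo
      annotations:
        build: one
    spec:
      containers:
      - name: client
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
    EOF

    # Modify an annotation; the mounted file picks up the change shortly after.
    kubectl annotate pod annotationupdate-demo build=two --overwrite
    kubectl logs annotationupdate-demo | tail
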
------------------------------
SSSS
------------------------------
[k8s.io] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:84
[BeforeEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:00:06.278: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:00:06.281: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-86zww
Mar 24 08:00:06.282: INFO: Get service account default in ns e2e-tests-emptydir-86zww failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:00:08.283: INFO: Service account default in ns e2e-tests-emptydir-86zww with secrets found. (2.002270464s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:00:08.283: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-86zww
Mar 24 08:00:08.284: INFO: Service account default in ns e2e-tests-emptydir-86zww with secrets found. (855.3µs)
[It] should support (non-root,0777,tmpfs) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:84
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 24 08:00:08.286: FAIL: Failed to create pod: pods "pod-6d1fa4e3-f196-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden
[AfterEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
STEP: Collecting events from namespace "e2e-tests-emptydir-86zww".
Mar 24 08:00:08.293: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Mar 24 08:00:08.293: INFO: 
Mar 24 08:00:08.294: INFO: 
Logging node info for node 127.0.0.1
Mar 24 08:00:08.295: INFO: Node Info: &{{ } {127.0.0.1   /api/v1/nodes/127.0.0.1 3601f765-f192-11e5-a8cf-064a4ed57913 2731 0 2016-03-24 07:29:57 +0000 UTC <nil> <nil> map[kubernetes.io/e2e-4f3052eb-f196-11e5-8186-064a4ed57913:42 kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1  false} {map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}]  [{OutOfDisk False 2016-03-24 07:59:59 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-24 07:59:59 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {0af04d3c78a943ae8f3cc26602e374f2 EC286296-F1A1-3AB3-2979-B08D0FE5BF4A fdfc86c6-a0ae-41b4-8ab8-29b0b8fd1e8b 3.10.0-229.20.1.el7.x86_64 CentOS Linux 7 (Core) hyper://0.5.0 v1.3.0-alpha.0.854+d9aa83a25cdc21 v1.3.0-alpha.0.854+d9aa83a25cdc21} []}}
Mar 24 08:00:08.295: INFO: 
Logging kubelet events for node 127.0.0.1
Mar 24 08:00:08.296: INFO: 
Logging pods the kubelet thinks is on node 127.0.0.1
Mar 24 08:00:08.840: INFO: 
Latency metrics for node 127.0.0.1
Mar 24 08:00:08.840: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:10m31.904484s}
Mar 24 08:00:08.840: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:10m31.090137s}
Mar 24 08:00:08.840: INFO: {Operation:update Method:pod_worker_latency_microseconds Quantile:0.99 Latency:20.708595s}
Mar 24 08:00:08.840: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.99 Latency:19.135333s}
Mar 24 08:00:08.840: INFO: {Operation:update Method:pod_worker_latency_microseconds Quantile:0.9 Latency:18.412852s}
Mar 24 08:00:08.840: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:17.782222s}
Mar 24 08:00:08.840: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:17.749006s}
Mar 24 08:00:08.840: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-86zww" for this suite.

• Failure [7.570 seconds]
[k8s.io] EmptyDir volumes
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should support (non-root,0777,tmpfs) [Conformance] [It]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:84

  Mar 24 08:00:08.286: Failed to create pod: pods "pod-6d1fa4e3-f196-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden

  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685
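
Note: this failure (and the identical EmptyDir failures below) is an admission rejection, not a runtime error: the e2e pod sets pod.Spec.SecurityContext.SELinuxOptions, and the error string matches what the SecurityContextDeny admission plugin returns when it is enabled on the apiserver. A pod like the following would be refused on such a cluster (sketch; whether it is rejected depends on the apiserver's --admission-control list, which this log does not show):

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: selinux-demo
    spec:
      securityContext:
        seLinuxOptions:          # rejected by SecurityContextDeny if enabled
          level: "s0:c0,c1"
      containers:
      - name: main
        image: busybox
        command: ["sleep", "3600"]
    EOF
    # Error from server: pods "selinux-demo" is forbidden:
    #   pod.Spec.SecurityContext.SELinuxOptions is forbidden
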
------------------------------
SS
------------------------------
[k8s.io] EmptyDir volumes 
  volume on default medium should have the correct mode [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:88
[BeforeEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:00:13.848: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:00:13.851: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-ajoj1
Mar 24 08:00:13.852: INFO: Get service account default in ns e2e-tests-emptydir-ajoj1 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:00:15.853: INFO: Service account default in ns e2e-tests-emptydir-ajoj1 with secrets found. (2.002269367s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:00:15.853: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-ajoj1
Mar 24 08:00:15.854: INFO: Service account default in ns e2e-tests-emptydir-ajoj1 with secrets found. (809.575µs)
[It] volume on default medium should have the correct mode [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:88
STEP: Creating a pod to test emptydir volume type on node default medium
Mar 24 08:00:15.855: FAIL: Failed to create pod: pods "pod-71a2b1d2-f196-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden
[AfterEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
STEP: Collecting events from namespace "e2e-tests-emptydir-ajoj1".
Mar 24 08:00:15.865: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Mar 24 08:00:15.865: INFO: 
Mar 24 08:00:15.866: INFO: 
Logging node info for node 127.0.0.1
Mar 24 08:00:15.867: INFO: Node Info: &{{ } {127.0.0.1   /api/v1/nodes/127.0.0.1 3601f765-f192-11e5-a8cf-064a4ed57913 2753 0 2016-03-24 07:29:57 +0000 UTC <nil> <nil> map[kubernetes.io/e2e-4f3052eb-f196-11e5-8186-064a4ed57913:42 kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1  false} {map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}]  [{OutOfDisk False 2016-03-24 08:00:09 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-24 08:00:09 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {0af04d3c78a943ae8f3cc26602e374f2 EC286296-F1A1-3AB3-2979-B08D0FE5BF4A fdfc86c6-a0ae-41b4-8ab8-29b0b8fd1e8b 3.10.0-229.20.1.el7.x86_64 CentOS Linux 7 (Core) hyper://0.5.0 v1.3.0-alpha.0.854+d9aa83a25cdc21 v1.3.0-alpha.0.854+d9aa83a25cdc21} []}}
Mar 24 08:00:15.867: INFO: 
Logging kubelet events for node 127.0.0.1
Mar 24 08:00:15.868: INFO: 
Logging pods the kubelet thinks is on node 127.0.0.1
Mar 24 08:00:15.892: INFO: 
Latency metrics for node 127.0.0.1
Mar 24 08:00:15.892: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:10m31.904484s}
Mar 24 08:00:15.892: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:10m31.090137s}
Mar 24 08:00:15.892: INFO: {Operation:update Method:pod_worker_latency_microseconds Quantile:0.99 Latency:20.708595s}
Mar 24 08:00:15.892: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.99 Latency:19.135333s}
Mar 24 08:00:15.892: INFO: {Operation:update Method:pod_worker_latency_microseconds Quantile:0.9 Latency:18.412852s}
Mar 24 08:00:15.892: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:17.782222s}
Mar 24 08:00:15.892: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:17.749006s}
Mar 24 08:00:15.892: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ajoj1" for this suite.

• Failure [7.051 seconds]
[k8s.io] EmptyDir volumes
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  volume on default medium should have the correct mode [Conformance] [It]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:88

  Mar 24 08:00:15.855: Failed to create pod: pods "pod-71a2b1d2-f196-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden

  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685
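
Note: for reference, the shape these EmptyDir tests exercise is an emptyDir volume mounted into one container, on the default medium (node disk) here and with medium: Memory for the tmpfs variants. A version without any securityContext, so it is not subject to the rejection above (names illustrative):

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "stat -c '%a' /mnt/test"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt/test
      volumes:
      - name: scratch
        emptyDir: {}             # use 'medium: Memory' for the tmpfs variants
    EOF
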
------------------------------
SS
------------------------------
[k8s.io] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/events.go:127
[BeforeEach] [k8s.io] Events
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:00:20.900: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:00:20.902: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-events-n34k0
Mar 24 08:00:20.903: INFO: Get service account default in ns e2e-tests-events-n34k0 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:00:22.904: INFO: Service account default in ns e2e-tests-events-n34k0 with secrets found. (2.002673838s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:00:22.904: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-events-n34k0
Mar 24 08:00:22.905: INFO: Service account default in ns e2e-tests-events-n34k0 with secrets found. (847.362µs)
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/events.go:127
STEP: creating the pod
STEP: submitting the pod to kubernetes
Mar 24 08:00:22.908: INFO: Waiting up to 5m0s for pod send-events-75d6a159-f196-11e5-8186-064a4ed57913 status to be running
Mar 24 08:00:22.911: INFO: Waiting for pod send-events-75d6a159-f196-11e5-8186-064a4ed57913 in namespace 'e2e-tests-events-n34k0' status to be 'running'(found phase: "Pending", readiness: false) (2.641797ms elapsed)
Mar 24 08:00:24.913: INFO: Waiting for pod send-events-75d6a159-f196-11e5-8186-064a4ed57913 in namespace 'e2e-tests-events-n34k0' status to be 'running'(found phase: "Pending", readiness: false) (2.004576644s elapsed)
Mar 24 08:00:26.914: INFO: Waiting for pod send-events-75d6a159-f196-11e5-8186-064a4ed57913 in namespace 'e2e-tests-events-n34k0' status to be 'running'(found phase: "Pending", readiness: false) (4.006486759s elapsed)
Mar 24 08:00:28.916: INFO: Found pod 'send-events-75d6a159-f196-11e5-8186-064a4ed57913' on node '127.0.0.1'
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
&{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:send-events-75d6a159-f196-11e5-8186-064a4ed57913 GenerateName: Namespace:e2e-tests-events-n34k0 SelfLink:/api/v1/namespaces/e2e-tests-events-n34k0/pods/send-events-75d6a159-f196-11e5-8186-064a4ed57913 UID:75d6d90d-f196-11e5-a8cf-064a4ed57913 ResourceVersion:2774 Generation:0 CreationTimestamp:2016-03-24 08:00:22 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:foo time:905690299] Annotations:map[]} Spec:{Volumes:[{Name:default-token-5664m VolumeSource:{HostPath:<nil> EmptyDir:<nil> GCEPersistentDisk:<nil> AWSElasticBlockStore:<nil> GitRepo:<nil> Secret:0xc8205e0980 NFS:<nil> ISCSI:<nil> Glusterfs:<nil> PersistentVolumeClaim:<nil> RBD:<nil> FlexVolume:<nil> Cinder:<nil> CephFS:<nil> Flocker:<nil> DownwardAPI:<nil> FC:<nil> AzureFile:<nil> ConfigMap:<nil>}}] Containers:[{Name:p Image:gcr.io/google_containers/serve_hostname:v1.4 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:80 Protocol:TCP HostIP:}] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-5664m ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount}] LivenessProbe:<nil> ReadinessProbe:<nil> Lifecycle:<nil> TerminationMessagePath:/dev/termination-log ImagePullPolicy:IfNotPresent SecurityContext:<nil> Stdin:false StdinOnce:false TTY:false}] RestartPolicy:Always TerminationGracePeriodSeconds:0xc8205e09b0 ActiveDeadlineSeconds:<nil> DNSPolicy:ClusterFirst NodeSelector:map[] ServiceAccountName:default NodeName:127.0.0.1 SecurityContext:0xc8208880c0 ImagePullSecrets:[]} Status:{Phase:Running Conditions:[{Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-03-24 08:00:28 +0000 UTC Reason: Message:}] Message: Reason: HostIP:127.0.0.1 PodIP:192.168.123.56 StartTime:2016-03-24 08:00:22 +0000 UTC ContainerStatuses:[{Name:p State:{Waiting:<nil> Running:0xc820aea8a0 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:true RestartCount:0 Image:gcr.io/google_containers/serve_hostname:v1.4 ImageID:sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 ContainerID:hyper://1ffb15bb6c24cd30aad3058b0338fa612083f5b32c9dae1b8c498c9f28043977}]}}
STEP: checking for scheduler event about the pod
Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] Events
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:00:32.927: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-events-n34k0" for this suite.

• [SLOW TEST:32.038 seconds]
[k8s.io] Events
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/events.go:127
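
Note: the assertion here is that both components emit events for the pod: the scheduler's Scheduled event and the kubelet's Pulled/Created/Started events. The same stream can be inspected by hand; kubectl describe prints the pod's events at the bottom:

    # The Events section lists the scheduler and kubelet events for this pod.
    kubectl describe pod send-events-75d6a159-f196-11e5-8186-064a4ed57913 \
      --namespace=e2e-tests-events-n34k0
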
------------------------------
[k8s.io] Services 
  should serve a basic endpoint from pods [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:140
[BeforeEach] [k8s.io] Services
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:00:52.938: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:00:52.940: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-q9z8h
Mar 24 08:00:52.941: INFO: Get service account default in ns e2e-tests-services-q9z8h failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:00:54.942: INFO: Service account default in ns e2e-tests-services-q9z8h with secrets found. (2.002558461s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:00:54.942: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-q9z8h
Mar 24 08:00:54.943: INFO: Service account default in ns e2e-tests-services-q9z8h with secrets found. (807.382µs)
[BeforeEach] [k8s.io] Services
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:73
Mar 24 08:00:54.943: INFO: >>> testContext.KubeConfig: /root/.kube/config

[It] should serve a basic endpoint from pods [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:140
STEP: creating service endpoint-test2 in namespace e2e-tests-services-q9z8h
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-q9z8h to expose endpoints map[]
Mar 24 08:00:54.949: INFO: Get endpoints failed (921.189µs elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Mar 24 08:00:55.950: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-q9z8h exposes endpoints map[] (1.002359375s elapsed)
STEP: creating pod pod1 in namespace e2e-tests-services-q9z8h
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-q9z8h to expose endpoints map[pod1:[80]]
Mar 24 08:00:59.968: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.014787085s elapsed, will retry)
Mar 24 08:01:01.973: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-q9z8h exposes endpoints map[pod1:[80]] (6.020311374s elapsed)
STEP: creating pod pod2 in namespace e2e-tests-services-q9z8h
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-q9z8h to expose endpoints map[pod1:[80] pod2:[80]]
Mar 24 08:01:06.000: INFO: Unexpected endpoints: found map[8989180a-f196-11e5-a8cf-064a4ed57913:[80]], expected map[pod1:[80] pod2:[80]] (4.024858439s elapsed, will retry)
Mar 24 08:01:07.004: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-q9z8h exposes endpoints map[pod1:[80] pod2:[80]] (5.028919377s elapsed)
STEP: deleting pod pod1 in namespace e2e-tests-services-q9z8h
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-q9z8h to expose endpoints map[pod2:[80]]
Mar 24 08:01:08.016: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-q9z8h exposes endpoints map[pod2:[80]] (1.007732214s elapsed)
STEP: deleting pod pod2 in namespace e2e-tests-services-q9z8h
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-q9z8h to expose endpoints map[]
Mar 24 08:01:09.026: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-q9z8h exposes endpoints map[] (1.003227297s elapsed)
[AfterEach] [k8s.io] Services
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:01:09.035: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-services-q9z8h" for this suite.

• [SLOW TEST:36.142 seconds]
[k8s.io] Services
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should serve a basic endpoint from pods [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:140
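
Note: the endpoints object validated above is maintained by the endpoints controller: it holds the IPs of ready pods whose labels match the service selector, so entries appear as pod1/pod2 start and vanish as they are deleted. To watch the same thing:

    # Endpoints fill and drain as matching pods come and go.
    kubectl get endpoints endpoint-test2 --namespace=e2e-tests-services-q9z8h -o yaml
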
------------------------------
SSSSS
------------------------------
[k8s.io] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:80
[BeforeEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:01:29.080: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:01:29.083: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-2mzb4
Mar 24 08:01:29.083: INFO: Get service account default in ns e2e-tests-emptydir-2mzb4 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:01:31.085: INFO: Service account default in ns e2e-tests-emptydir-2mzb4 with secrets found. (2.002276743s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:01:31.085: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-2mzb4
Mar 24 08:01:31.086: INFO: Service account default in ns e2e-tests-emptydir-2mzb4 with secrets found. (826.078µs)
[It] should support (non-root,0666,tmpfs) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:80
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 24 08:01:31.087: FAIL: Failed to create pod: pods "pod-9e7a2798-f196-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden
[AfterEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
STEP: Collecting events from namespace "e2e-tests-emptydir-2mzb4".
Mar 24 08:01:31.093: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Mar 24 08:01:31.093: INFO: 
Mar 24 08:01:31.094: INFO: 
Logging node info for node 127.0.0.1
Mar 24 08:01:31.096: INFO: Node Info: &{{ } {127.0.0.1   /api/v1/nodes/127.0.0.1 3601f765-f192-11e5-a8cf-064a4ed57913 2842 0 2016-03-24 07:29:57 +0000 UTC <nil> <nil> map[kubernetes.io/e2e-4f3052eb-f196-11e5-8186-064a4ed57913:42 kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1  false} {map[memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI} cpu:{2.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}]  [{OutOfDisk False 2016-03-24 08:01:29 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-24 08:01:29 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {0af04d3c78a943ae8f3cc26602e374f2 EC286296-F1A1-3AB3-2979-B08D0FE5BF4A fdfc86c6-a0ae-41b4-8ab8-29b0b8fd1e8b 3.10.0-229.20.1.el7.x86_64 CentOS Linux 7 (Core) hyper://0.5.0 v1.3.0-alpha.0.854+d9aa83a25cdc21 v1.3.0-alpha.0.854+d9aa83a25cdc21} []}}
Mar 24 08:01:31.096: INFO: 
Logging kubelet events for node 127.0.0.1
Mar 24 08:01:31.099: INFO: 
Logging pods the kubelet thinks is on node 127.0.0.1
Mar 24 08:01:31.125: INFO: 
Latency metrics for node 127.0.0.1
Mar 24 08:01:31.125: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:10m31.904484s}
Mar 24 08:01:31.125: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:10m31.064876s}
Mar 24 08:01:31.125: INFO: {Operation:update Method:pod_worker_latency_microseconds Quantile:0.99 Latency:20.708595s}
Mar 24 08:01:31.125: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.99 Latency:19.135333s}
Mar 24 08:01:31.125: INFO: {Operation:update Method:pod_worker_latency_microseconds Quantile:0.9 Latency:18.412852s}
Mar 24 08:01:31.125: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:17.782222s}
Mar 24 08:01:31.125: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:17.749006s}
Mar 24 08:01:31.125: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2mzb4" for this suite.

• Failure [7.052 seconds]
[k8s.io] EmptyDir volumes
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should support (non-root,0666,tmpfs) [Conformance] [It]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:80

  Mar 24 08:01:31.087: Failed to create pod: pods "pod-9e7a2798-f196-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden

  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685
------------------------------
S
------------------------------
[k8s.io] Pods 
  should be restarted with a /healthz http liveness probe [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:741
[BeforeEach] [k8s.io] Pods
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:01:36.132: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:01:36.134: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-4ssyf
Mar 24 08:01:36.136: INFO: Get service account default in ns e2e-tests-pods-4ssyf failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:01:38.137: INFO: Service account default in ns e2e-tests-pods-4ssyf with secrets found. (2.002855511s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:01:38.137: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-4ssyf
Mar 24 08:01:38.138: INFO: Service account default in ns e2e-tests-pods-4ssyf with secrets found. (915.048µs)
[It] should be restarted with a /healthz http liveness probe [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:741
STEP: Creating pod liveness-http in namespace e2e-tests-pods-4ssyf
Mar 24 08:01:38.142: INFO: Waiting up to 5m0s for pod liveness-http status to be !pending
Mar 24 08:01:38.144: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-4ssyf' status to be '!pending'(found phase: "Pending", readiness: false) (1.937556ms elapsed)
Mar 24 08:01:40.145: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-4ssyf' status to be '!pending'(found phase: "Pending", readiness: false) (2.003925304s elapsed)
Mar 24 08:01:42.147: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-4ssyf' status to be '!pending'(found phase: "Pending", readiness: false) (4.005856993s elapsed)
Mar 24 08:01:44.149: INFO: Saw pod 'liveness-http' in namespace 'e2e-tests-pods-4ssyf' out of pending state (found '"Running"')
Mar 24 08:01:44.149: INFO: Started pod liveness-http in namespace e2e-tests-pods-4ssyf
STEP: checking the pod's current state and verifying that restartCount is present
Mar 24 08:01:44.151: INFO: Initial restart count of pod liveness-http is 0
Mar 24 08:02:06.174: INFO: Restart count of pod e2e-tests-pods-4ssyf/liveness-http is now 1 (22.023733461s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Pods
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:02:06.181: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-4ssyf" for this suite.

• [SLOW TEST:35.059 seconds]
[k8s.io] Pods
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should be restarted with a /healthz http liveness probe [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:741
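
Note: the liveness pod's /healthz is served by an image that deliberately starts failing after a short while, so the kubelet's httpGet probe eventually fails and the container is restarted; the test waits for restartCount to reach 1. A sketch using the image family the e2e suite uses (assumed image behavior: /healthz returns 200 for the first ~10s, then errors):

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-http
    spec:
      containers:
      - name: liveness
        image: gcr.io/google_containers/liveness
        args: ["/server"]
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 15
          timeoutSeconds: 1
    EOF

    # The RESTARTS column increments once the kubelet sees the probe fail.
    kubectl get pod liveness-http
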
------------------------------
SSSS
------------------------------
[k8s.io] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:922
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:02:11.191: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:02:11.193: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-5lgwn
Mar 24 08:02:11.194: INFO: Get service account default in ns e2e-tests-kubectl-5lgwn failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:02:13.195: INFO: Service account default in ns e2e-tests-kubectl-5lgwn with secrets found. (2.002684741s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:02:13.195: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-5lgwn
Mar 24 08:02:13.196: INFO: Service account default in ns e2e-tests-kubectl-5lgwn with secrets found. (827.591µs)
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:122
[BeforeEach] [k8s.io] Kubectl run default
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:902
[It] should create an rc or deployment from an image [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:922
STEP: running the image gcr.io/google_containers/nginx:1.7.9
Mar 24 08:02:13.197: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=gcr.io/google_containers/nginx:1.7.9 --namespace=e2e-tests-kubectl-5lgwn'
Mar 24 08:02:13.227: INFO: stderr: ""
Mar 24 08:02:13.227: INFO: stdout: "deployment \"e2e-test-nginx-deployment\" created"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:906
Mar 24 08:02:15.232: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-5lgwn'
Mar 24 08:02:17.296: INFO: stderr: ""
Mar 24 08:02:17.296: INFO: stdout: "deployment \"e2e-test-nginx-deployment\" deleted"
[AfterEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:02:17.296: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5lgwn" for this suite.

• [SLOW TEST:26.116 seconds]
[k8s.io] Kubectl client
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  [k8s.io] Kubectl run default
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
    should create an rc or deployment from an image [Conformance]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:922
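
Note: as the stdout above shows, plain kubectl run (restart policy defaults to Always) creates a Deployment in this release, with a generated run=<name> label on the pods. Equivalent commands, including the cleanup the test performs:

    kubectl run e2e-test-nginx-deployment --image=gcr.io/google_containers/nginx:1.7.9
    kubectl get deployment e2e-test-nginx-deployment
    kubectl get pods -l run=e2e-test-nginx-deployment   # pods carry the generated 'run' label
    kubectl delete deployment e2e-test-nginx-deployment
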
------------------------------
S
------------------------------
[k8s.io] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1051
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:02:37.307: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:02:37.309: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-bfa8u
Mar 24 08:02:37.310: INFO: Get service account default in ns e2e-tests-kubectl-bfa8u failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:02:39.311: INFO: Service account default in ns e2e-tests-kubectl-bfa8u with secrets found. (2.00273609s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:02:39.311: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-bfa8u
Mar 24 08:02:39.312: INFO: Service account default in ns e2e-tests-kubectl-bfa8u with secrets found. (996.476µs)
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:122
[BeforeEach] [k8s.io] Kubectl run job
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1028
[It] should create a job from an image when restart is OnFailure [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1051
STEP: running the image gcr.io/google_containers/nginx:1.7.9
Mar 24 08:02:39.313: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --image=gcr.io/google_containers/nginx:1.7.9 --namespace=e2e-tests-kubectl-bfa8u'
Mar 24 08:02:39.334: INFO: stderr: ""
Mar 24 08:02:39.334: INFO: stdout: "job \"e2e-test-nginx-job\" created"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1032
Mar 24 08:02:39.335: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-bfa8u'
Mar 24 08:02:41.370: INFO: stderr: ""
Mar 24 08:02:41.370: INFO: stdout: "job \"e2e-test-nginx-job\" deleted"
[AfterEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:02:41.370: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bfa8u" for this suite.

• [SLOW TEST:24.073 seconds]
[k8s.io] Kubectl client
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  [k8s.io] Kubectl run job
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
    should create a job from an image when restart is OnFailure [Conformance]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1051
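
Note: with --restart=OnFailure the same command produces a Job instead of a Deployment (that mapping is specific to kubectl of this era):

    kubectl run e2e-test-nginx-job --restart=OnFailure --image=gcr.io/google_containers/nginx:1.7.9
    kubectl get jobs e2e-test-nginx-job
    kubectl delete jobs e2e-test-nginx-job
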
------------------------------
SSSS
------------------------------
[k8s.io] DNS 
  should provide DNS for the cluster [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:280
[BeforeEach] [k8s.io] DNS
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:03:01.380: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:03:01.382: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-dns-qfrgo
Mar 24 08:03:01.383: INFO: Get service account default in ns e2e-tests-dns-qfrgo failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:03:03.385: INFO: Service account default in ns e2e-tests-dns-qfrgo with secrets found. (2.002367706s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:03:03.385: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-dns-qfrgo
Mar 24 08:03:03.386: INFO: Service account default in ns e2e-tests-dns-qfrgo with secrets found. (833.187µs)
[It] should provide DNS for the cluster [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:280
STEP: Waiting for DNS Service to be Running
Mar 24 08:03:03.386: FAIL: Unexpected number of pods (0) matches the label selector k8s-app=kube-dns,kubernetes.io/cluster-service=true
[AfterEach] [k8s.io] DNS
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
STEP: Collecting events from namespace "e2e-tests-dns-qfrgo".
Mar 24 08:03:03.393: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Mar 24 08:03:03.393: INFO: 
Mar 24 08:03:03.394: INFO: 
Logging node info for node 127.0.0.1
Mar 24 08:03:03.395: INFO: Node Info: &{{ } {127.0.0.1   /api/v1/nodes/127.0.0.1 3601f765-f192-11e5-a8cf-064a4ed57913 2953 0 2016-03-24 07:29:57 +0000 UTC <nil> <nil> map[kubernetes.io/e2e-4f3052eb-f196-11e5-8186-064a4ed57913:42 kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1  false} {map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}] map[memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI} cpu:{2.000 DecimalSI}]  [{OutOfDisk False 2016-03-24 08:03:00 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-24 08:03:00 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {0af04d3c78a943ae8f3cc26602e374f2 EC286296-F1A1-3AB3-2979-B08D0FE5BF4A fdfc86c6-a0ae-41b4-8ab8-29b0b8fd1e8b 3.10.0-229.20.1.el7.x86_64 CentOS Linux 7 (Core) hyper://0.5.0 v1.3.0-alpha.0.854+d9aa83a25cdc21 v1.3.0-alpha.0.854+d9aa83a25cdc21} []}}
Mar 24 08:03:03.396: INFO: 
Logging kubelet events for node 127.0.0.1
Mar 24 08:03:03.396: INFO: 
Logging pods the kubelet thinks is on node 127.0.0.1
Mar 24 08:03:03.422: INFO: 
Latency metrics for node 127.0.0.1
Mar 24 08:03:03.422: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:10m31.904484s}
Mar 24 08:03:03.423: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:35.055702s}
Mar 24 08:03:03.423: INFO: {Operation:update Method:pod_worker_latency_microseconds Quantile:0.99 Latency:20.708595s}
Mar 24 08:03:03.423: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.99 Latency:19.135333s}
Mar 24 08:03:03.423: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:17.782222s}
Mar 24 08:03:03.423: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:13.707771s}
Mar 24 08:03:03.423: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-qfrgo" for this suite.

• Failure [7.055 seconds]
[k8s.io] DNS
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should provide DNS for the cluster [Conformance] [It]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:280

  Mar 24 08:03:03.386: Unexpected number of pods (0) matches the label selector k8s-app=kube-dns,kubernetes.io/cluster-service=true

  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:229
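
Note: this failure happens before any name resolution is attempted: the test first looks for running kube-dns addon pods by label and finds zero, i.e. the DNS addon simply is not deployed in this single-node hyper setup. To check exactly what the test checks:

    # The test requires at least one running pod matching this selector.
    kubectl get pods --namespace=kube-system \
      -l k8s-app=kube-dns,kubernetes.io/cluster-service=true
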
------------------------------
S
------------------------------
[k8s.io] EmptyDir volumes 
  should support (root,0644,tmpfs) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:64
[BeforeEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:03:08.435: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:03:08.444: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-i5yav
Mar 24 08:03:08.460: INFO: Get service account default in ns e2e-tests-emptydir-i5yav failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:03:10.462: INFO: Service account default in ns e2e-tests-emptydir-i5yav with secrets found. (2.018534002s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:03:10.462: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-i5yav
Mar 24 08:03:10.463: INFO: Service account default in ns e2e-tests-emptydir-i5yav with secrets found. (880.299µs)
[It] should support (root,0644,tmpfs) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:64
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 24 08:03:10.466: FAIL: Failed to create pod: pods "pod-d9b5eec8-f196-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden
[AfterEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
STEP: Collecting events from namespace "e2e-tests-emptydir-i5yav".
Mar 24 08:03:10.478: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Mar 24 08:03:10.478: INFO: 
Mar 24 08:03:10.485: INFO: 
Logging node info for node 127.0.0.1
Mar 24 08:03:10.488: INFO: Node Info: &{{ } {127.0.0.1   /api/v1/nodes/127.0.0.1 3601f765-f192-11e5-a8cf-064a4ed57913 2953 0 2016-03-24 07:29:57 +0000 UTC <nil> <nil> map[kubernetes.io/e2e-4f3052eb-f196-11e5-8186-064a4ed57913:42 kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1  false} {map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}] map[pods:{110.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI}]  [{OutOfDisk False 2016-03-24 08:03:00 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-24 08:03:00 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {0af04d3c78a943ae8f3cc26602e374f2 EC286296-F1A1-3AB3-2979-B08D0FE5BF4A fdfc86c6-a0ae-41b4-8ab8-29b0b8fd1e8b 3.10.0-229.20.1.el7.x86_64 CentOS Linux 7 (Core) hyper://0.5.0 v1.3.0-alpha.0.854+d9aa83a25cdc21 v1.3.0-alpha.0.854+d9aa83a25cdc21} []}}
Mar 24 08:03:10.488: INFO: 
Logging kubelet events for node 127.0.0.1
Mar 24 08:03:10.490: INFO: 
Logging pods the kubelet thinks is on node 127.0.0.1
Mar 24 08:03:10.522: INFO: 
Latency metrics for node 127.0.0.1
Mar 24 08:03:10.522: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:10m31.904484s}
Mar 24 08:03:10.522: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:35.055702s}
Mar 24 08:03:10.522: INFO: {Operation:update Method:pod_worker_latency_microseconds Quantile:0.99 Latency:20.708595s}
Mar 24 08:03:10.522: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.99 Latency:19.135333s}
Mar 24 08:03:10.522: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:17.782222s}
Mar 24 08:03:10.522: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:13.707771s}
Mar 24 08:03:10.522: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-i5yav" for this suite.

• Failure [7.094 seconds]
[k8s.io] EmptyDir volumes
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should support (root,0644,tmpfs) [Conformance] [It]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:64

  Mar 24 08:03:10.466: Failed to create pod: pods "pod-d9b5eec8-f196-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden

  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685
------------------------------
SSSS
------------------------------
[k8s.io] Proxy version v1 
  should proxy through a service and a pod [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:247
[BeforeEach] version v1
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:03:15.529: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:03:15.531: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-ucdpo
Mar 24 08:03:15.532: INFO: Get service account default in ns e2e-tests-proxy-ucdpo failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:03:17.534: INFO: Service account default in ns e2e-tests-proxy-ucdpo with secrets found. (2.002592422s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:03:17.534: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-ucdpo
Mar 24 08:03:17.535: INFO: Service account default in ns e2e-tests-proxy-ucdpo with secrets found. (785.778µs)
[It] should proxy through a service and a pod [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:247
STEP: creating replication controller proxy-service-m57s1 in namespace e2e-tests-proxy-ucdpo
Mar 24 08:03:17.543: INFO: Created replication controller with name: proxy-service-m57s1, namespace: e2e-tests-proxy-ucdpo, replica count: 1
Mar 24 08:03:18.543: INFO: proxy-service-m57s1 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar 24 08:03:19.543: INFO: proxy-service-m57s1 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar 24 08:03:20.544: INFO: proxy-service-m57s1 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar 24 08:03:21.545: INFO: proxy-service-m57s1 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar 24 08:03:22.545: INFO: proxy-service-m57s1 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar 24 08:03:23.546: INFO: proxy-service-m57s1 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
Mar 24 08:03:24.546: INFO: proxy-service-m57s1 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
Mar 24 08:03:25.546: INFO: proxy-service-m57s1 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
Mar 24 08:03:26.546: INFO: proxy-service-m57s1 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
Mar 24 08:03:27.547: INFO: proxy-service-m57s1 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
Mar 24 08:03:28.547: INFO: proxy-service-m57s1 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar 24 08:03:28.555: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/: bar (200; 5.936369ms)
Mar 24 08:03:28.755: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/re... (200; 6.246522ms)
Mar 24 08:03:28.955: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/proxy/: foo (200; 5.462823ms)
Mar 24 08:03:29.256: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:460/proxy/: tls baz (200; 106.889366ms)
Mar 24 08:03:29.352: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/proxy/: foo (200; 3.056982ms)
Mar 24 08:03:29.552: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/proxy/: bar (200; 2.914498ms)
Mar 24 08:03:29.757: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/proxy/: tls baz (200; 7.117078ms)
Mar 24 08:03:29.953: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/: tls baz (200; 3.230961ms)
Mar 24 08:03:30.154: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:443/: tls baz (200; 4.065442ms)
Mar 24 08:03:30.400: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:444/: tls qux (200; 50.059831ms)
Mar 24 08:03:30.554: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/proxy/: bar (200; 3.121854ms)
Mar 24 08:03:30.754: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:80/: foo (200; 3.283276ms)
Mar 24 08:03:30.954: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/: bar (200; 3.144041ms)
Mar 24 08:03:31.155: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:81/: bar (200; 3.519187ms)
Mar 24 08:03:31.355: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/: bar (200; 3.294021ms)
Mar 24 08:03:31.555: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:462/proxy/: tls qux (200; 4.000344ms)
Mar 24 08:03:31.755: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/rewriteme"... (200; 3.356356ms)
Mar 24 08:03:31.955: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/rewrite... (200; 2.91918ms)
Mar 24 08:03:32.156: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.836361ms)
Mar 24 08:03:32.355: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/proxy/: foo (200; 3.179554ms)
Mar 24 08:03:32.555: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/proxy/: bar (200; 3.129441ms)
Mar 24 08:03:32.756: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/proxy/: tls qux (200; 3.227265ms)
Mar 24 08:03:32.956: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/: foo (200; 2.891155ms)
Mar 24 08:03:33.157: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:81/: bar (200; 3.808903ms)
Mar 24 08:03:33.356: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/: foo (200; 3.119701ms)
Mar 24 08:03:33.557: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:80/: foo (200; 3.260122ms)
Mar 24 08:03:33.757: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/: tls qux (200; 2.98166ms)
Mar 24 08:03:33.957: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/rewrite... (200; 3.323903ms)
Mar 24 08:03:34.200: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/... (200; 45.522443ms)
Mar 24 08:03:34.357: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/: foo (200; 2.743139ms)
Mar 24 08:03:34.558: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/: foo (200; 3.062065ms)
Mar 24 08:03:34.758: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/proxy/: bar (200; 2.830639ms)
Mar 24 08:03:34.958: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/: bar (200; 3.393215ms)
Mar 24 08:03:35.158: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/re... (200; 3.161865ms)
Mar 24 08:03:35.359: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/: tls baz (200; 3.127966ms)
Mar 24 08:03:35.559: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:443/: tls baz (200; 3.691363ms)
Mar 24 08:03:35.759: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:444/: tls qux (200; 3.6156ms)
Mar 24 08:03:35.959: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/proxy/: bar (200; 2.93379ms)
Mar 24 08:03:36.160: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:80/: foo (200; 3.589317ms)
Mar 24 08:03:36.360: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/: bar (200; 3.228342ms)
Mar 24 08:03:36.560: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:81/: bar (200; 3.250213ms)
Mar 24 08:03:36.760: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/: bar (200; 3.160502ms)
Mar 24 08:03:36.960: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:462/proxy/: tls qux (200; 3.380864ms)
Mar 24 08:03:37.162: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/rewriteme"... (200; 4.897736ms)
Mar 24 08:03:37.361: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/rewrite... (200; 3.55814ms)
Mar 24 08:03:37.561: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/proxy/: foo (200; 2.926824ms)
Mar 24 08:03:37.760: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/proxy/: foo (200; 2.598343ms)
Mar 24 08:03:37.964: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/proxy/: bar (200; 5.63396ms)
Mar 24 08:03:38.164: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/proxy/: tls qux (200; 5.994646ms)
Mar 24 08:03:38.362: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/: foo (200; 3.232271ms)
Mar 24 08:03:38.562: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:81/: bar (200; 3.03265ms)
Mar 24 08:03:38.762: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/: foo (200; 2.783046ms)
Mar 24 08:03:38.962: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:80/: foo (200; 3.473563ms)
Mar 24 08:03:39.162: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/: tls qux (200; 3.128571ms)
Mar 24 08:03:39.362: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/rewrite... (200; 3.146892ms)
Mar 24 08:03:39.563: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/... (200; 3.490286ms)
Mar 24 08:03:39.762: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/: foo (200; 2.864283ms)
Mar 24 08:03:39.963: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/: foo (200; 3.221301ms)
Mar 24 08:03:40.164: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/proxy/: bar (200; 3.385784ms)
Mar 24 08:03:40.363: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/: bar (200; 2.69675ms)
Mar 24 08:03:40.564: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/re... (200; 3.670229ms)
Mar 24 08:03:40.763: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/: bar (200; 2.746705ms)
Mar 24 08:03:40.964: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/re... (200; 2.87494ms)
Mar 24 08:03:41.201: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/proxy/: foo (200; 40.281835ms)
Mar 24 08:03:41.394: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:460/proxy/: tls baz (200; 32.805803ms)
Mar 24 08:03:41.568: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/proxy/: foo (200; 6.1912ms)
Mar 24 08:03:41.764: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/proxy/: bar (200; 2.901078ms)
Mar 24 08:03:41.970: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/proxy/: tls baz (200; 7.998654ms)
Mar 24 08:03:42.165: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/proxy/: tls baz (200; 3.643103ms)
Mar 24 08:03:42.366: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/: bar (200; 4.431366ms)
Mar 24 08:03:42.566: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/re... (200; 3.492426ms)
Mar 24 08:03:42.766: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.173044ms)
Mar 24 08:03:42.966: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:460/proxy/: tls baz (200; 3.52581ms)
Mar 24 08:03:43.166: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/proxy/: foo (200; 3.24015ms)
Mar 24 08:03:43.366: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/proxy/: bar (200; 2.76895ms)
Mar 24 08:03:43.567: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:81/: bar (200; 4.085934ms)
Mar 24 08:03:43.768: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/: tls baz (200; 4.988359ms)
Mar 24 08:03:43.967: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:443/: tls baz (200; 3.279194ms)
Mar 24 08:03:44.167: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:444/: tls qux (200; 3.637958ms)
Mar 24 08:03:44.367: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/proxy/: bar (200; 2.938147ms)
Mar 24 08:03:44.568: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:80/: foo (200; 4.070522ms)
Mar 24 08:03:44.767: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/: bar (200; 2.832512ms)
Mar 24 08:03:44.968: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/: bar (200; 3.312188ms)
Mar 24 08:03:45.170: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:462/proxy/: tls qux (200; 5.03047ms)
Mar 24 08:03:45.368: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/proxy/: tls qux (200; 2.937298ms)
Mar 24 08:03:45.569: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/rewriteme"... (200; 4.035834ms)
Mar 24 08:03:45.769: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/rewrite... (200; 3.469609ms)
Mar 24 08:03:45.969: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.338468ms)
Mar 24 08:03:46.168: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/proxy/: foo (200; 3.052499ms)
Mar 24 08:03:46.369: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/proxy/: bar (200; 3.417457ms)
Mar 24 08:03:46.569: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:80/: foo (200; 2.995448ms)
Mar 24 08:03:46.769: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/: foo (200; 2.653839ms)
Mar 24 08:03:46.969: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:81/: bar (200; 3.37827ms)
Mar 24 08:03:47.169: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/: foo (200; 3.104919ms)
Mar 24 08:03:47.370: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/... (200; 3.547255ms)
Mar 24 08:03:47.570: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/: tls qux (200; 3.830573ms)
Mar 24 08:03:47.770: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/rewrite... (200; 2.792578ms)
Mar 24 08:03:47.984: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/proxy/: bar (200; 16.604883ms)
Mar 24 08:03:48.170: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/: foo (200; 3.043755ms)
Mar 24 08:03:48.371: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/: foo (200; 3.259488ms)
Mar 24 08:03:48.571: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/: bar (200; 3.141718ms)
Mar 24 08:03:48.771: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/re... (200; 3.375535ms)
Mar 24 08:03:48.970: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:80/: foo (200; 2.680123ms)
Mar 24 08:03:49.171: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/: bar (200; 2.738248ms)
Mar 24 08:03:49.372: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:81/: bar (200; 3.392811ms)
Mar 24 08:03:49.572: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/: tls baz (200; 3.342812ms)
Mar 24 08:03:49.772: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:443/: tls baz (200; 3.236979ms)
Mar 24 08:03:49.972: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:444/: tls qux (200; 3.262804ms)
Mar 24 08:03:50.172: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/proxy/: bar (200; 3.303485ms)
Mar 24 08:03:50.372: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/: bar (200; 2.919222ms)
Mar 24 08:03:50.573: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:462/proxy/: tls qux (200; 3.334886ms)
Mar 24 08:03:50.773: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/proxy/: foo (200; 3.21817ms)
Mar 24 08:03:50.972: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/proxy/: bar (200; 2.885731ms)
Mar 24 08:03:51.173: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/proxy/: tls qux (200; 2.972783ms)
Mar 24 08:03:51.373: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/rewriteme"... (200; 2.889028ms)
Mar 24 08:03:51.573: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/rewrite... (200; 3.1483ms)
Mar 24 08:03:51.773: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.120008ms)
Mar 24 08:03:51.974: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:81/: bar (200; 3.035775ms)
Mar 24 08:03:52.174: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/: foo (200; 2.92204ms)
Mar 24 08:03:52.374: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:80/: foo (200; 3.271115ms)
Mar 24 08:03:52.574: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/: foo (200; 3.229855ms)
Mar 24 08:03:52.774: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/: tls qux (200; 2.80505ms)
Mar 24 08:03:52.975: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/rewrite... (200; 3.204576ms)
Mar 24 08:03:53.175: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/... (200; 3.115104ms)
Mar 24 08:03:53.376: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/: foo (200; 4.474094ms)
Mar 24 08:03:53.575: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/: foo (200; 2.81282ms)
Mar 24 08:03:53.775: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/proxy/: bar (200; 2.814163ms)
Mar 24 08:03:53.975: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/: bar (200; 2.534383ms)
Mar 24 08:03:54.176: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/re... (200; 4.009517ms)
Mar 24 08:03:54.375: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/proxy/: foo (200; 2.950707ms)
Mar 24 08:03:54.576: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/proxy/: bar (200; 2.830459ms)
Mar 24 08:03:54.776: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/proxy/: tls baz (200; 2.986374ms)
Mar 24 08:03:54.976: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/: bar (200; 2.684581ms)
Mar 24 08:03:55.177: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/re... (200; 3.322477ms)
Mar 24 08:03:55.376: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.042789ms)
Mar 24 08:03:55.577: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:460/proxy/: tls baz (200; 3.235989ms)
Mar 24 08:03:55.777: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/: foo (200; 2.867355ms)
Mar 24 08:03:55.977: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/: foo (200; 2.732305ms)
Mar 24 08:03:56.177: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/proxy/: bar (200; 3.129283ms)
Mar 24 08:03:56.377: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/: bar (200; 3.035046ms)
Mar 24 08:03:56.577: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/re... (200; 3.035655ms)
Mar 24 08:03:56.778: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:460/proxy/: tls baz (200; 3.103852ms)
Mar 24 08:03:56.978: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/proxy/: foo (200; 3.016267ms)
Mar 24 08:03:57.179: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/proxy/: bar (200; 3.146417ms)
Mar 24 08:03:57.379: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/proxy/: tls baz (200; 2.985931ms)
Mar 24 08:03:57.579: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/: bar (200; 2.67443ms)
Mar 24 08:03:57.779: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/re... (200; 2.543861ms)
Mar 24 08:03:57.979: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/proxy/: foo (200; 2.477274ms)
Mar 24 08:03:58.179: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/proxy/: bar (200; 3.064813ms)
Mar 24 08:03:58.380: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:80/: foo (200; 3.003366ms)
Mar 24 08:03:58.580: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/: bar (200; 2.924982ms)
Mar 24 08:03:58.780: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:81/: bar (200; 3.453748ms)
Mar 24 08:03:58.981: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/: tls baz (200; 3.549208ms)
Mar 24 08:03:59.181: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:443/: tls baz (200; 3.371371ms)
Mar 24 08:03:59.381: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:444/: tls qux (200; 3.186845ms)
Mar 24 08:03:59.581: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/: bar (200; 2.903363ms)
Mar 24 08:03:59.781: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:462/proxy/: tls qux (200; 3.300714ms)
Mar 24 08:03:59.981: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/proxy/: foo (200; 2.782779ms)
Mar 24 08:04:00.181: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/proxy/: bar (200; 2.959074ms)
Mar 24 08:04:00.382: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/proxy/: tls qux (200; 3.070668ms)
Mar 24 08:04:00.582: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/rewriteme"... (200; 2.826178ms)
Mar 24 08:04:00.782: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/rewrite... (200; 2.779631ms)
Mar 24 08:04:00.983: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/proxy/: foo (200; 4.249328ms)
Mar 24 08:04:01.183: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:81/: bar (200; 3.14253ms)
Mar 24 08:04:01.383: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/: foo (200; 3.1675ms)
Mar 24 08:04:01.583: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:80/: foo (200; 3.121301ms)
Mar 24 08:04:01.783: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/: foo (200; 2.884521ms)
Mar 24 08:04:01.983: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/: tls qux (200; 2.799142ms)
Mar 24 08:04:02.183: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/rewrite... (200; 3.199004ms)
Mar 24 08:04:02.384: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/... (200; 3.326574ms)
Mar 24 08:04:02.584: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/rewriteme"... (200; 3.118095ms)
Mar 24 08:04:02.784: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/rewrite... (200; 3.387037ms)
Mar 24 08:04:02.984: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.282296ms)
Mar 24 08:04:03.184: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/proxy/: foo (200; 3.14556ms)
Mar 24 08:04:03.384: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/proxy/: bar (200; 2.83356ms)
Mar 24 08:04:03.585: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/proxy/: tls qux (200; 3.150613ms)
Mar 24 08:04:03.785: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/: foo (200; 3.081089ms)
Mar 24 08:04:03.987: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:81/: bar (200; 4.712314ms)
Mar 24 08:04:04.185: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/: foo (200; 3.105664ms)
Mar 24 08:04:04.385: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:80/: foo (200; 3.180285ms)
Mar 24 08:04:04.586: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/: tls qux (200; 3.098726ms)
Mar 24 08:04:04.786: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/rewrite... (200; 2.921656ms)
Mar 24 08:04:04.986: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/... (200; 3.210644ms)
Mar 24 08:04:05.186: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/: foo (200; 2.774358ms)
Mar 24 08:04:05.386: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/: foo (200; 2.697949ms)
Mar 24 08:04:05.587: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/proxy/: bar (200; 3.395052ms)
Mar 24 08:04:05.786: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/: bar (200; 2.977751ms)
Mar 24 08:04:05.987: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/re... (200; 3.010488ms)
Mar 24 08:04:06.187: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/: bar (200; 2.74308ms)
Mar 24 08:04:06.387: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/re... (200; 3.097927ms)
Mar 24 08:04:06.587: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.084391ms)
Mar 24 08:04:06.788: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:460/proxy/: tls baz (200; 3.223991ms)
Mar 24 08:04:06.987: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/proxy/: foo (200; 2.889682ms)
Mar 24 08:04:07.188: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/proxy/: bar (200; 2.762101ms)
Mar 24 08:04:07.388: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/proxy/: tls baz (200; 2.850487ms)
Mar 24 08:04:07.588: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/: tls baz (200; 3.195631ms)
Mar 24 08:04:07.809: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:443/: tls baz (200; 23.784665ms)
Mar 24 08:04:07.989: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:444/: tls qux (200; 3.101988ms)
Mar 24 08:04:08.189: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/proxy/: bar (200; 2.947265ms)
Mar 24 08:04:08.389: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:80/: foo (200; 3.175605ms)
Mar 24 08:04:08.589: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/: bar (200; 2.739545ms)
Mar 24 08:04:08.790: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:81/: bar (200; 3.348458ms)
Mar 24 08:04:08.990: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/: bar (200; 3.146426ms)
Mar 24 08:04:09.190: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:462/proxy/: tls qux (200; 3.530733ms)
Mar 24 08:04:09.391: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:81/: bar (200; 4.506252ms)
Mar 24 08:04:09.590: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/: foo (200; 2.728173ms)
Mar 24 08:04:09.790: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:80/: foo (200; 3.373804ms)
Mar 24 08:04:09.990: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/: foo (200; 3.097517ms)
Mar 24 08:04:10.191: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/: tls qux (200; 3.134788ms)
Mar 24 08:04:10.391: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/rewrite... (200; 3.049186ms)
Mar 24 08:04:10.591: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/... (200; 3.598912ms)
Mar 24 08:04:10.791: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/: foo (200; 2.621734ms)
Mar 24 08:04:10.991: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/: foo (200; 2.4759ms)
Mar 24 08:04:11.192: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/proxy/: bar (200; 3.218797ms)
Mar 24 08:04:11.392: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/: bar (200; 3.434417ms)
Mar 24 08:04:11.592: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/re... (200; 2.835193ms)
Mar 24 08:04:11.792: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/proxy/: foo (200; 2.814099ms)
Mar 24 08:04:11.992: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/proxy/: bar (200; 2.680605ms)
Mar 24 08:04:12.193: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/proxy/: tls baz (200; 3.279889ms)
Mar 24 08:04:12.392: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/: bar (200; 3.088004ms)
Mar 24 08:04:12.593: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/re... (200; 3.066546ms)
Mar 24 08:04:12.793: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/proxy/: foo (200; 2.976145ms)
Mar 24 08:04:12.993: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:460/proxy/: tls baz (200; 3.45408ms)
Mar 24 08:04:13.193: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:80/: foo (200; 3.240491ms)
Mar 24 08:04:13.394: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/: bar (200; 2.931479ms)
Mar 24 08:04:13.594: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:81/: bar (200; 3.172089ms)
Mar 24 08:04:13.794: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/: tls baz (200; 2.716776ms)
Mar 24 08:04:13.995: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:443/: tls baz (200; 3.562328ms)
Mar 24 08:04:14.195: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:444/: tls qux (200; 3.477302ms)
Mar 24 08:04:14.395: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/proxy/: bar (200; 3.084057ms)
Mar 24 08:04:14.595: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/: bar (200; 3.028996ms)
Mar 24 08:04:14.795: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:462/proxy/: tls qux (200; 3.423153ms)
Mar 24 08:04:14.995: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/proxy/: foo (200; 2.646509ms)
Mar 24 08:04:15.196: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/proxy/: bar (200; 3.257365ms)
Mar 24 08:04:15.396: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/proxy/: tls qux (200; 3.092098ms)
Mar 24 08:04:15.596: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/rewriteme"... (200; 2.917495ms)
Mar 24 08:04:15.796: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/rewrite... (200; 2.957114ms)
Mar 24 08:04:15.996: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.074724ms)
Mar 24 08:04:16.197: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/... (200; 3.870063ms)
Mar 24 08:04:16.397: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/: tls qux (200; 3.135317ms)
Mar 24 08:04:16.597: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/rewrite... (200; 3.162814ms)
Mar 24 08:04:16.797: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/proxy/: bar (200; 3.437148ms)
Mar 24 08:04:16.997: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/: foo (200; 2.666697ms)
Mar 24 08:04:17.197: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/: foo (200; 3.110448ms)
Mar 24 08:04:17.399: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/: bar (200; 3.166912ms)
Mar 24 08:04:17.601: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/re... (200; 4.126786ms)
Mar 24 08:04:17.800: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/proxy/: tls baz (200; 3.275764ms)
Mar 24 08:04:18.000: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/: bar (200; 2.739786ms)
Mar 24 08:04:18.200: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/re... (200; 2.884549ms)
Mar 24 08:04:18.401: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.28477ms)
Mar 24 08:04:18.601: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:460/proxy/: tls baz (200; 4.034065ms)
Mar 24 08:04:18.801: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/proxy/: foo (200; 3.006336ms)
Mar 24 08:04:19.001: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/proxy/: bar (200; 2.642657ms)
Mar 24 08:04:19.202: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:81/: bar (200; 3.535346ms)
Mar 24 08:04:19.402: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/: tls baz (200; 3.062738ms)
Mar 24 08:04:19.602: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:443/: tls baz (200; 3.499703ms)
Mar 24 08:04:19.802: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:444/: tls qux (200; 3.045362ms)
Mar 24 08:04:20.002: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/proxy/: bar (200; 2.83327ms)
Mar 24 08:04:20.202: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:80/: foo (200; 3.217964ms)
Mar 24 08:04:20.402: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/: bar (200; 2.618688ms)
Mar 24 08:04:20.602: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/: bar (200; 2.899238ms)
Mar 24 08:04:20.803: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:462/proxy/: tls qux (200; 3.532149ms)
Mar 24 08:04:21.003: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/proxy/: tls qux (200; 3.388837ms)
Mar 24 08:04:21.204: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/rewriteme"... (200; 3.460107ms)
Mar 24 08:04:21.403: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/rewrite... (200; 3.150111ms)
Mar 24 08:04:21.604: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.096396ms)
Mar 24 08:04:21.804: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/proxy/: foo (200; 2.954619ms)
Mar 24 08:04:22.004: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/proxy/: bar (200; 2.823213ms)
Mar 24 08:04:22.204: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:80/: foo (200; 2.898174ms)
Mar 24 08:04:22.404: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/: foo (200; 2.711415ms)
Mar 24 08:04:22.605: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:81/: bar (200; 3.938759ms)
Mar 24 08:04:22.804: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/: foo (200; 2.679963ms)
Mar 24 08:04:23.005: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/rewrite... (200; 2.949213ms)
Mar 24 08:04:23.205: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.424844ms)
Mar 24 08:04:23.405: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/proxy/: foo (200; 2.709937ms)
Mar 24 08:04:23.605: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/proxy/: bar (200; 3.091107ms)
Mar 24 08:04:23.806: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/proxy/: tls qux (200; 3.173527ms)
Mar 24 08:04:24.006: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/rewriteme"... (200; 3.189936ms)
Mar 24 08:04:24.206: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:81/: bar (200; 3.144432ms)
Mar 24 08:04:24.406: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/: foo (200; 2.995401ms)
Mar 24 08:04:24.606: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:80/: foo (200; 3.161411ms)
Mar 24 08:04:24.806: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/: foo (200; 2.840612ms)
Mar 24 08:04:25.007: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/: tls qux (200; 3.281439ms)
Mar 24 08:04:25.207: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/rewrite... (200; 3.058172ms)
Mar 24 08:04:25.408: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/... (200; 3.597261ms)
Mar 24 08:04:25.607: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/: foo (200; 2.612933ms)
Mar 24 08:04:25.807: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/: foo (200; 3.128492ms)
Mar 24 08:04:26.008: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/proxy/: bar (200; 3.138673ms)
Mar 24 08:04:26.208: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/: bar (200; 3.205195ms)
Mar 24 08:04:26.408: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/re... (200; 3.15038ms)
Mar 24 08:04:26.608: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/re... (200; 2.944554ms)
Mar 24 08:04:26.809: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.613679ms)
Mar 24 08:04:27.009: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:460/proxy/: tls baz (200; 3.415578ms)
Mar 24 08:04:27.208: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/proxy/: foo (200; 2.950135ms)
Mar 24 08:04:27.409: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/proxy/: bar (200; 2.960929ms)
Mar 24 08:04:27.609: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/proxy/: tls baz (200; 3.101078ms)
Mar 24 08:04:27.809: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/: bar (200; 2.865929ms)
Mar 24 08:04:28.009: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:443/: tls baz (200; 3.330388ms)
Mar 24 08:04:28.210: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:444/: tls qux (200; 3.621188ms)
Mar 24 08:04:28.410: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/proxy/: bar (200; 3.247115ms)
Mar 24 08:04:28.610: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:80/: foo (200; 3.145211ms)
Mar 24 08:04:28.810: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/: bar (200; 3.148038ms)
Mar 24 08:04:29.011: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:81/: bar (200; 3.436273ms)
Mar 24 08:04:29.211: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/: tls baz (200; 3.335264ms)
Mar 24 08:04:29.411: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/: bar (200; 3.157907ms)
Mar 24 08:04:29.611: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:462/proxy/: tls qux (200; 3.405415ms)
Mar 24 08:04:29.811: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/proxy/: foo (200; 3.010653ms)
Mar 24 08:04:30.011: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/proxy/: bar (200; 2.75975ms)
Mar 24 08:04:30.212: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/proxy/: tls baz (200; 3.578577ms)
Mar 24 08:04:30.412: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/: bar (200; 3.347752ms)
Mar 24 08:04:30.612: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/re... (200; 3.111329ms)
Mar 24 08:04:30.812: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.042604ms)
Mar 24 08:04:31.012: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:460/proxy/: tls baz (200; 3.546585ms)
Mar 24 08:04:31.212: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:80/: foo (200; 3.045753ms)
Mar 24 08:04:31.412: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/: bar (200; 2.934333ms)
Mar 24 08:04:31.613: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:81/: bar (200; 3.912206ms)
Mar 24 08:04:31.813: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/: tls baz (200; 3.421714ms)
Mar 24 08:04:32.013: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:443/: tls baz (200; 3.19967ms)
Mar 24 08:04:32.213: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:444/: tls qux (200; 3.423461ms)
Mar 24 08:04:32.413: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/proxy/: bar (200; 3.156066ms)
Mar 24 08:04:32.613: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/: bar (200; 2.789722ms)
Mar 24 08:04:32.814: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:462/proxy/: tls qux (200; 3.362871ms)
Mar 24 08:04:33.014: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/proxy/: foo (200; 3.010566ms)
Mar 24 08:04:33.214: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/proxy/: bar (200; 3.043248ms)
Mar 24 08:04:33.414: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/proxy/: tls qux (200; 3.02325ms)
Mar 24 08:04:33.614: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/rewriteme"... (200; 2.878426ms)
Mar 24 08:04:33.814: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/rewrite... (200; 2.701841ms)
Mar 24 08:04:34.015: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.476783ms)
Mar 24 08:04:34.215: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:81/: bar (200; 3.230071ms)
Mar 24 08:04:34.415: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/: foo (200; 2.907421ms)
Mar 24 08:04:34.616: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:80/: foo (200; 3.447453ms)
Mar 24 08:04:34.815: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/: foo (200; 3.120366ms)
Mar 24 08:04:35.016: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/: tls qux (200; 3.2395ms)
Mar 24 08:04:35.215: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/rewrite... (200; 2.844361ms)
Mar 24 08:04:35.416: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/... (200; 3.581672ms)
Mar 24 08:04:35.616: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/: foo (200; 2.702313ms)
Mar 24 08:04:35.816: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/: foo (200; 3.11402ms)
Mar 24 08:04:36.017: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/proxy/: bar (200; 3.143916ms)
Mar 24 08:04:36.216: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/: bar (200; 2.768687ms)
Mar 24 08:04:36.417: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/re... (200; 3.063842ms)
Mar 24 08:04:36.617: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/: bar (200; 2.757954ms)
Mar 24 08:04:36.817: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/re... (200; 2.750851ms)
Mar 24 08:04:37.017: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/: bar (200; 3.091575ms)
Mar 24 08:04:37.218: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/re... (200; 3.323579ms)
Mar 24 08:04:37.418: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.336538ms)
Mar 24 08:04:37.619: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:460/proxy/: tls baz (200; 3.748189ms)
Mar 24 08:04:37.818: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/proxy/: foo (200; 2.823148ms)
Mar 24 08:04:38.018: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/proxy/: bar (200; 2.385727ms)
Mar 24 08:04:38.219: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/proxy/: tls baz (200; 3.27762ms)
Mar 24 08:04:38.418: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/: tls baz (200; 3.085981ms)
Mar 24 08:04:38.619: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:443/: tls baz (200; 3.529397ms)
Mar 24 08:04:38.820: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:444/: tls qux (200; 4.360986ms)
Mar 24 08:04:39.019: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/proxy/: bar (200; 2.831947ms)
Mar 24 08:04:39.220: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:80/: foo (200; 3.354456ms)
Mar 24 08:04:39.419: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/: bar (200; 2.535477ms)
Mar 24 08:04:39.620: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:81/: bar (200; 3.281695ms)
Mar 24 08:04:39.820: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/: bar (200; 3.123927ms)
Mar 24 08:04:40.021: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:462/proxy/: tls qux (200; 3.584606ms)
Mar 24 08:04:40.220: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/rewriteme"... (200; 2.816899ms)
Mar 24 08:04:40.421: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/rewrite... (200; 3.861487ms)
Mar 24 08:04:40.621: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.822056ms)
Mar 24 08:04:40.821: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/proxy/: foo (200; 3.009155ms)
Mar 24 08:04:41.021: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/proxy/: bar (200; 2.954108ms)
Mar 24 08:04:41.221: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/proxy/: tls qux (200; 2.874867ms)
Mar 24 08:04:41.421: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/: foo (200; 3.10761ms)
Mar 24 08:04:41.623: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:81/: bar (200; 4.407308ms)
Mar 24 08:04:41.822: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/: foo (200; 3.204694ms)
Mar 24 08:04:42.022: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:80/: foo (200; 3.013828ms)
Mar 24 08:04:42.222: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/: tls qux (200; 3.058704ms)
Mar 24 08:04:42.422: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/rewrite... (200; 3.253841ms)
Mar 24 08:04:42.623: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/... (200; 3.297191ms)
Mar 24 08:04:42.823: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/: foo (200; 3.237402ms)
Mar 24 08:04:43.023: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/: foo (200; 3.293168ms)
Mar 24 08:04:43.223: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/proxy/: bar (200; 3.063156ms)
Mar 24 08:04:43.423: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:462/proxy/: tls qux (200; 3.265408ms)
Mar 24 08:04:43.624: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/: bar (200; 3.315603ms)
Mar 24 08:04:43.823: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/proxy/: bar (200; 2.950715ms)
Mar 24 08:04:44.024: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/proxy/: tls qux (200; 3.620471ms)
Mar 24 08:04:44.224: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/rewriteme"... (200; 3.215513ms)
Mar 24 08:04:44.424: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/rewrite... (200; 3.103521ms)
Mar 24 08:04:44.624: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/proxy/: foo (200; 2.900983ms)
Mar 24 08:04:44.824: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/proxy/: foo (200; 2.8114ms)
Mar 24 08:04:45.024: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/: foo (200; 3.1935ms)
Mar 24 08:04:45.225: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:80/: foo (200; 3.211299ms)
Mar 24 08:04:45.425: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/: foo (200; 3.218198ms)
Mar 24 08:04:45.625: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:81/: bar (200; 3.31518ms)
Mar 24 08:04:45.825: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/rewrite... (200; 2.984591ms)
Mar 24 08:04:46.026: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/... (200; 3.493601ms)
Mar 24 08:04:46.225: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/: tls qux (200; 2.718461ms)
Mar 24 08:04:46.426: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/: foo (200; 3.348976ms)
Mar 24 08:04:46.626: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/proxy/: bar (200; 3.069932ms)
Mar 24 08:04:46.826: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/: foo (200; 2.910069ms)
Mar 24 08:04:47.026: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/re... (200; 2.891559ms)
Mar 24 08:04:47.226: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/: bar (200; 2.633544ms)
Mar 24 08:04:47.426: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/proxy/: bar (200; 2.647218ms)
Mar 24 08:04:47.627: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/proxy/: tls baz (200; 3.354007ms)
Mar 24 08:04:47.826: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/: bar (200; 2.352485ms)
Mar 24 08:04:48.027: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/re... (200; 2.873627ms)
Mar 24 08:04:48.228: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.489706ms)
Mar 24 08:04:48.428: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:460/proxy/: tls baz (200; 3.716366ms)
Mar 24 08:04:48.627: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/proxy/: foo (200; 2.604443ms)
Mar 24 08:04:48.828: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/: bar (200; 3.050653ms)
Mar 24 08:04:49.028: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:81/: bar (200; 3.072578ms)
Mar 24 08:04:49.228: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/: tls baz (200; 3.084019ms)
Mar 24 08:04:49.429: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:443/: tls baz (200; 3.432743ms)
Mar 24 08:04:49.629: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:444/: tls qux (200; 3.549072ms)
Mar 24 08:04:49.828: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/proxy/: bar (200; 2.570215ms)
Mar 24 08:04:50.029: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:80/: foo (200; 3.095332ms)
Mar 24 08:04:50.229: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:81/: bar (200; 2.944901ms)
Mar 24 08:04:50.431: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/: foo (200; 5.229823ms)
Mar 24 08:04:50.630: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:80/: foo (200; 3.263703ms)
Mar 24 08:04:50.830: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/: foo (200; 3.070247ms)
Mar 24 08:04:51.030: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/: tls qux (200; 3.226963ms)
Mar 24 08:04:51.230: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/rewrite... (200; 3.10419ms)
Mar 24 08:04:51.430: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/... (200; 3.385943ms)
Mar 24 08:04:51.630: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/: foo (200; 2.752285ms)
Mar 24 08:04:51.831: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/: foo (200; 3.392155ms)
Mar 24 08:04:52.031: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/proxy/: bar (200; 3.085353ms)
Mar 24 08:04:52.231: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/: bar (200; 2.968588ms)
Mar 24 08:04:52.431: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/re... (200; 3.417955ms)
Mar 24 08:04:52.631: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/proxy/: foo (200; 2.87454ms)
Mar 24 08:04:52.831: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/proxy/: bar (200; 2.757955ms)
Mar 24 08:04:53.031: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/proxy/: tls baz (200; 2.404043ms)
Mar 24 08:04:53.232: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/: bar (200; 2.913157ms)
Mar 24 08:04:53.432: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/re... (200; 2.958946ms)
Mar 24 08:04:53.632: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/proxy/: foo (200; 2.87506ms)
Mar 24 08:04:53.833: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:460/proxy/: tls baz (200; 3.563656ms)
Mar 24 08:04:54.033: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:80/: foo (200; 3.084619ms)
Mar 24 08:04:54.233: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/: bar (200; 2.974097ms)
Mar 24 08:04:54.434: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:81/: bar (200; 3.464013ms)
Mar 24 08:04:54.633: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/: tls baz (200; 2.898415ms)
Mar 24 08:04:54.835: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:443/: tls baz (200; 4.37352ms)
Mar 24 08:04:55.034: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:444/: tls qux (200; 3.225889ms)
Mar 24 08:04:55.234: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/proxy/: bar (200; 3.145915ms)
Mar 24 08:04:55.434: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/: bar (200; 3.098538ms)
Mar 24 08:04:55.635: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:462/proxy/: tls qux (200; 3.477536ms)
Mar 24 08:04:55.834: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/proxy/: foo (200; 2.734229ms)
Mar 24 08:04:56.034: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/proxy/: bar (200; 2.570676ms)
Mar 24 08:04:56.234: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/proxy/: tls qux (200; 2.750578ms)
Mar 24 08:04:56.435: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/rewriteme"... (200; 3.330025ms)
Mar 24 08:04:56.635: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/rewrite... (200; 3.005619ms)
Mar 24 08:04:56.835: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/proxy/: foo (200; 2.876957ms)
Mar 24 08:04:57.036: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:443/: tls baz (200; 3.524409ms)
Mar 24 08:04:57.237: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:444/: tls qux (200; 4.051079ms)
Mar 24 08:04:57.436: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/proxy/: bar (200; 3.087735ms)
Mar 24 08:04:57.636: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:80/: foo (200; 3.128341ms)
Mar 24 08:04:57.836: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/: bar (200; 2.782289ms)
Mar 24 08:04:58.036: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:81/: bar (200; 3.295653ms)
Mar 24 08:04:58.236: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/: tls baz (200; 2.945408ms)
Mar 24 08:04:58.437: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/: bar (200; 3.332614ms)
Mar 24 08:04:58.637: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:462/proxy/: tls qux (200; 3.140216ms)
Mar 24 08:04:58.837: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/rewrite... (200; 3.356848ms)
Mar 24 08:04:59.040: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/proxy/: foo (200; 6.07463ms)
Mar 24 08:04:59.237: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/proxy/: foo (200; 3.00665ms)
Mar 24 08:04:59.437: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/proxy/: bar (200; 2.893083ms)
Mar 24 08:04:59.637: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/proxy/: tls qux (200; 3.008049ms)
Mar 24 08:04:59.838: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/rewriteme"... (200; 3.403821ms)
Mar 24 08:05:00.038: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:81/: bar (200; 3.180783ms)
Mar 24 08:05:00.238: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/: foo (200; 2.86538ms)
Mar 24 08:05:00.440: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:80/: foo (200; 5.351423ms)
Mar 24 08:05:00.638: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/: foo (200; 2.815884ms)
Mar 24 08:05:00.839: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/: tls qux (200; 3.304093ms)
Mar 24 08:05:01.039: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/rewrite... (200; 3.147772ms)
Mar 24 08:05:01.239: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/... (200; 3.272784ms)
Mar 24 08:05:01.439: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/: foo (200; 3.028457ms)
Mar 24 08:05:01.639: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/: foo (200; 3.104803ms)
Mar 24 08:05:01.840: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/proxy/: bar (200; 3.395958ms)
Mar 24 08:05:02.039: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/: bar (200; 2.624394ms)
Mar 24 08:05:02.240: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/re... (200; 3.159346ms)
Mar 24 08:05:02.440: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/re... (200; 3.26221ms)
Mar 24 08:05:02.641: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/proxy/: foo (200; 4.001273ms)
Mar 24 08:05:02.841: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:460/proxy/: tls baz (200; 3.611568ms)
Mar 24 08:05:03.040: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/proxy/: foo (200; 2.683051ms)
Mar 24 08:05:03.240: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/proxy/: bar (200; 2.580007ms)
Mar 24 08:05:03.441: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/proxy/: tls baz (200; 3.675127ms)
Mar 24 08:05:03.641: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/: bar (200; 3.161016ms)
Mar 24 08:05:03.841: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/: bar (200; 3.280586ms)
Mar 24 08:05:04.041: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/re... (200; 2.828082ms)
Mar 24 08:05:04.241: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/proxy/: foo (200; 2.60476ms)
Mar 24 08:05:04.442: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/proxy/: bar (200; 3.246745ms)
Mar 24 08:05:04.642: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/proxy/: tls baz (200; 3.146632ms)
Mar 24 08:05:04.842: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/: bar (200; 3.012999ms)
Mar 24 08:05:05.042: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/re... (200; 2.935735ms)
Mar 24 08:05:05.242: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.076717ms)
Mar 24 08:05:05.443: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:460/proxy/: tls baz (200; 3.115853ms)
Mar 24 08:05:05.643: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:80/: foo (200; 3.300231ms)
Mar 24 08:05:05.843: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/: bar (200; 3.192489ms)
Mar 24 08:05:06.043: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:81/: bar (200; 3.481286ms)
Mar 24 08:05:06.259: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/: tls baz (200; 19.30706ms)
Mar 24 08:05:06.444: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:443/: tls baz (200; 3.798866ms)
Mar 24 08:05:06.644: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:444/: tls qux (200; 3.829794ms)
Mar 24 08:05:06.844: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/proxy/: bar (200; 2.976091ms)
Mar 24 08:05:07.044: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/: bar (200; 3.070321ms)
Mar 24 08:05:07.245: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:462/proxy/: tls qux (200; 3.69264ms)
Mar 24 08:05:07.444: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/proxy/: foo (200; 2.882884ms)
Mar 24 08:05:07.644: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/proxy/: bar (200; 2.962056ms)
Mar 24 08:05:07.845: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/proxy/: tls qux (200; 3.375318ms)
Mar 24 08:05:08.045: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/rewriteme"... (200; 3.039001ms)
Mar 24 08:05:08.245: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/rewrite... (200; 3.038027ms)
Mar 24 08:05:08.445: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.20454ms)
Mar 24 08:05:08.646: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:81/: bar (200; 3.470304ms)
Mar 24 08:05:08.845: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/: foo (200; 2.712628ms)
Mar 24 08:05:09.045: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:80/: foo (200; 2.785296ms)
Mar 24 08:05:09.246: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/: foo (200; 3.352319ms)
Mar 24 08:05:09.446: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/: tls qux (200; 3.086882ms)
Mar 24 08:05:09.646: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/rewrite... (200; 3.148258ms)
Mar 24 08:05:09.846: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/... (200; 3.198927ms)
Mar 24 08:05:10.047: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/: foo (200; 4.181543ms)
Mar 24 08:05:10.247: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/: foo (200; 3.089135ms)
Mar 24 08:05:10.447: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/proxy/: bar (200; 3.214058ms)
Mar 24 08:05:10.647: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/proxy/: foo (200; 3.114344ms)
Mar 24 08:05:10.847: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/proxy/: bar (200; 3.056955ms)
Mar 24 08:05:11.047: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/proxy/: tls baz (200; 3.012852ms)
Mar 24 08:05:11.247: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/: bar (200; 2.362343ms)
Mar 24 08:05:11.448: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/re... (200; 3.093876ms)
Mar 24 08:05:11.648: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.010482ms)
Mar 24 08:05:11.849: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:460/proxy/: tls baz (200; 3.407396ms)
Mar 24 08:05:12.049: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:80/: foo (200; 3.501119ms)
Mar 24 08:05:12.248: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/: bar (200; 2.540929ms)
Mar 24 08:05:12.449: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:81/: bar (200; 3.653786ms)
Mar 24 08:05:12.649: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/: tls baz (200; 3.074733ms)
Mar 24 08:05:12.850: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:443/: tls baz (200; 3.489358ms)
Mar 24 08:05:13.050: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:444/: tls qux (200; 3.61879ms)
Mar 24 08:05:13.250: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/proxy/: bar (200; 3.03015ms)
Mar 24 08:05:13.450: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/: bar (200; 3.18908ms)
Mar 24 08:05:13.650: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:462/proxy/: tls qux (200; 3.122553ms)
Mar 24 08:05:13.850: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/proxy/: foo (200; 2.80857ms)
Mar 24 08:05:14.051: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/proxy/: bar (200; 3.136308ms)
Mar 24 08:05:14.250: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/proxy/: tls qux (200; 2.971886ms)
Mar 24 08:05:14.451: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/rewriteme"... (200; 3.23734ms)
Mar 24 08:05:14.651: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/rewrite... (200; 3.114132ms)
Mar 24 08:05:14.851: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.069952ms)
Mar 24 08:05:15.051: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:81/: bar (200; 2.98834ms)
Mar 24 08:05:15.251: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/: foo (200; 2.662448ms)
Mar 24 08:05:15.452: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:80/: foo (200; 3.453655ms)
Mar 24 08:05:15.652: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/: foo (200; 3.161188ms)
Mar 24 08:05:15.852: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/: tls qux (200; 3.352256ms)
Mar 24 08:05:16.052: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/rewrite... (200; 3.060723ms)
Mar 24 08:05:16.252: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/... (200; 2.985667ms)
Mar 24 08:05:16.453: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/: foo (200; 3.02837ms)
Mar 24 08:05:16.653: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/: foo (200; 2.985582ms)
Mar 24 08:05:16.853: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/proxy/: bar (200; 3.196902ms)
Mar 24 08:05:17.053: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/: bar (200; 2.989015ms)
Mar 24 08:05:17.253: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/re... (200; 2.911545ms)
Mar 24 08:05:17.453: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/: foo (200; 2.762398ms)
Mar 24 08:05:17.654: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/: foo (200; 3.01778ms)
Mar 24 08:05:17.854: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/proxy/: bar (200; 3.148823ms)
Mar 24 08:05:18.054: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/: bar (200; 2.766256ms)
Mar 24 08:05:18.254: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/re... (200; 2.909329ms)
Mar 24 08:05:18.455: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:460/proxy/: tls baz (200; 3.490974ms)
Mar 24 08:05:18.654: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/proxy/: foo (200; 2.741287ms)
Mar 24 08:05:18.854: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/proxy/: bar (200; 2.571282ms)
Mar 24 08:05:19.055: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/proxy/: tls baz (200; 3.211449ms)
Mar 24 08:05:19.255: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/: bar (200; 2.978934ms)
Mar 24 08:05:19.455: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/re... (200; 3.137338ms)
Mar 24 08:05:19.656: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.317175ms)
Mar 24 08:05:19.856: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/proxy/: bar (200; 3.076952ms)
Mar 24 08:05:20.056: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:80/: foo (200; 3.722665ms)
Mar 24 08:05:20.255: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/: bar (200; 2.556626ms)
Mar 24 08:05:20.456: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:81/: bar (200; 3.213303ms)
Mar 24 08:05:20.656: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/: tls baz (200; 3.363175ms)
Mar 24 08:05:20.857: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:443/: tls baz (200; 3.670371ms)
Mar 24 08:05:21.057: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:444/: tls qux (200; 3.421351ms)
Mar 24 08:05:21.259: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/: bar (200; 5.274419ms)
Mar 24 08:05:21.458: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:462/proxy/: tls qux (200; 4.020635ms)
Mar 24 08:05:21.657: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/proxy/: foo (200; 3.170219ms)
Mar 24 08:05:21.857: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/proxy/: bar (200; 2.677507ms)
Mar 24 08:05:22.058: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/proxy/: tls qux (200; 3.407763ms)
Mar 24 08:05:22.258: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/rewriteme"... (200; 3.04975ms)
Mar 24 08:05:22.458: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/rewrite... (200; 3.477725ms)
Mar 24 08:05:22.658: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.125769ms)
Mar 24 08:05:22.858: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:81/: bar (200; 3.004062ms)
Mar 24 08:05:23.058: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/: foo (200; 2.763329ms)
Mar 24 08:05:23.259: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:80/: foo (200; 3.150441ms)
Mar 24 08:05:23.459: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/: foo (200; 3.069045ms)
Mar 24 08:05:23.659: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/: tls qux (200; 2.844407ms)
Mar 24 08:05:23.859: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/rewrite... (200; 2.945874ms)
Mar 24 08:05:24.060: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/... (200; 3.511554ms)
Mar 24 08:05:24.259: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/rewrite... (200; 2.880798ms)
Mar 24 08:05:24.460: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.378827ms)
Mar 24 08:05:24.660: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/proxy/: foo (200; 2.93839ms)
Mar 24 08:05:24.860: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/proxy/: bar (200; 3.047066ms)
Mar 24 08:05:25.060: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/proxy/: tls qux (200; 2.957147ms)
Mar 24 08:05:25.260: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/rewriteme"... (200; 2.859789ms)
Mar 24 08:05:25.461: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:81/: bar (200; 3.355269ms)
Mar 24 08:05:25.661: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/: foo (200; 2.9335ms)
Mar 24 08:05:25.861: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:80/: foo (200; 3.132465ms)
Mar 24 08:05:26.061: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/: foo (200; 3.157482ms)
Mar 24 08:05:26.261: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/: tls qux (200; 3.074258ms)
Mar 24 08:05:26.462: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/rewrite... (200; 3.279219ms)
Mar 24 08:05:26.662: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/... (200; 3.335873ms)
Mar 24 08:05:26.862: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/: foo (200; 2.955697ms)
Mar 24 08:05:27.062: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/: foo (200; 3.021261ms)
Mar 24 08:05:27.262: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/proxy/: bar (200; 2.943992ms)
Mar 24 08:05:27.462: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/: bar (200; 2.798295ms)
Mar 24 08:05:27.662: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/re... (200; 2.78301ms)
Mar 24 08:05:27.862: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/re... (200; 2.657628ms)
Mar 24 08:05:28.063: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.123746ms)
Mar 24 08:05:28.263: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:460/proxy/: tls baz (200; 3.179763ms)
Mar 24 08:05:28.463: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/proxy/: foo (200; 3.159567ms)
Mar 24 08:05:28.664: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/proxy/: bar (200; 3.064036ms)
Mar 24 08:05:28.864: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/proxy/: tls baz (200; 3.21319ms)
Mar 24 08:05:29.064: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/: bar (200; 3.170356ms)
Mar 24 08:05:29.264: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:443/: tls baz (200; 3.227317ms)
Mar 24 08:05:29.465: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:444/: tls qux (200; 3.424762ms)
Mar 24 08:05:29.665: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/proxy/: bar (200; 3.527083ms)
Mar 24 08:05:29.865: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:80/: foo (200; 3.315742ms)
Mar 24 08:05:30.065: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/: bar (200; 2.6429ms)
Mar 24 08:05:30.265: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:81/: bar (200; 3.033164ms)
Mar 24 08:05:30.465: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/: tls baz (200; 3.007887ms)
Mar 24 08:05:30.665: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/: bar (200; 2.792077ms)
Mar 24 08:05:30.866: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:462/proxy/: tls qux (200; 3.277313ms)
Mar 24 08:05:31.066: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/proxy/: tls baz (200; 3.452797ms)
Mar 24 08:05:31.266: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/: bar (200; 3.024423ms)
Mar 24 08:05:31.466: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/re... (200; 2.935719ms)
Mar 24 08:05:31.666: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/proxy/: foo (200; 2.809068ms)
Mar 24 08:05:31.867: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:460/proxy/: tls baz (200; 3.381862ms)
Mar 24 08:05:32.066: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/proxy/: foo (200; 2.567229ms)
Mar 24 08:05:32.267: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/proxy/: bar (200; 2.963364ms)
Mar 24 08:05:32.468: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:81/: bar (200; 3.556126ms)
Mar 24 08:05:32.668: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/: tls baz (200; 3.771487ms)
Mar 24 08:05:32.868: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:443/: tls baz (200; 3.491695ms)
Mar 24 08:05:33.068: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:444/: tls qux (200; 3.233299ms)
Mar 24 08:05:33.268: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/proxy/: bar (200; 2.917511ms)
Mar 24 08:05:33.468: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:80/: foo (200; 3.199096ms)
Mar 24 08:05:33.668: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/: bar (200; 2.923836ms)
Mar 24 08:05:33.868: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/: bar (200; 2.82166ms)
Mar 24 08:05:34.069: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:462/proxy/: tls qux (200; 3.352075ms)
Mar 24 08:05:34.269: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/proxy/: tls qux (200; 3.023115ms)
Mar 24 08:05:34.470: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/rewriteme"... (200; 3.469752ms)
Mar 24 08:05:34.669: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/rewrite... (200; 2.954335ms)
Mar 24 08:05:34.869: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/proxy/: foo (200; 2.959678ms)
Mar 24 08:05:35.069: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/proxy/: foo (200; 2.778225ms)
Mar 24 08:05:35.270: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/proxy/: bar (200; 2.749423ms)
Mar 24 08:05:35.470: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:80/: foo (200; 3.28036ms)
Mar 24 08:05:35.670: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/: foo (200; 3.039177ms)
Mar 24 08:05:35.871: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:81/: bar (200; 3.282183ms)
Mar 24 08:05:36.070: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/: foo (200; 2.731601ms)
Mar 24 08:05:36.270: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/... (200; 2.746669ms)
Mar 24 08:05:36.471: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/: tls qux (200; 3.255683ms)
Mar 24 08:05:36.671: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/rewrite... (200; 3.132576ms)
Mar 24 08:05:36.871: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/proxy/: bar (200; 2.973772ms)
Mar 24 08:05:37.071: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/: foo (200; 3.057379ms)
Mar 24 08:05:37.271: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/: foo (200; 2.825863ms)
Mar 24 08:05:37.472: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/: bar (200; 2.904105ms)
Mar 24 08:05:37.672: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/re... (200; 3.076441ms)
Mar 24 08:05:37.872: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/proxy/: bar (200; 3.107906ms)
Mar 24 08:05:38.073: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:80/: foo (200; 3.33222ms)
Mar 24 08:05:38.272: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/: bar (200; 2.685803ms)
Mar 24 08:05:38.473: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:81/: bar (200; 3.414978ms)
Mar 24 08:05:38.673: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/: tls baz (200; 3.195928ms)
Mar 24 08:05:38.873: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:443/: tls baz (200; 3.379192ms)
Mar 24 08:05:39.074: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:444/: tls qux (200; 3.476027ms)
Mar 24 08:05:39.273: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:162/: bar (200; 2.934234ms)
Mar 24 08:05:39.478: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:462/proxy/: tls qux (200; 7.604934ms)
Mar 24 08:05:39.673: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/proxy/: foo (200; 2.642067ms)
Mar 24 08:05:39.874: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname2/proxy/: bar (200; 2.953411ms)
Mar 24 08:05:40.074: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/proxy/: tls qux (200; 3.149431ms)
Mar 24 08:05:40.274: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf/proxy/rewriteme"... (200; 3.010544ms)
Mar 24 08:05:40.475: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/proxy/rewrite... (200; 3.344159ms)
Mar 24 08:05:40.675: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.141977ms)
Mar 24 08:05:40.875: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:81/: bar (200; 3.427215ms)
Mar 24 08:05:41.075: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/: foo (200; 2.784536ms)
Mar 24 08:05:41.275: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:80/: foo (200; 3.159356ms)
Mar 24 08:05:41.475: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/: foo (200; 2.989664ms)
Mar 24 08:05:41.675: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname2/: tls qux (200; 2.988879ms)
Mar 24 08:05:41.876: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:80/rewrite... (200; 3.061328ms)
Mar 24 08:05:42.076: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:443/proxy/... (200; 3.233839ms)
Mar 24 08:05:42.276: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname1/: foo (200; 2.675272ms)
Mar 24 08:05:42.477: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:160/: foo (200; 3.503495ms)
Mar 24 08:05:42.676: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/proxy/: bar (200; 2.912949ms)
Mar 24 08:05:42.876: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/: bar (200; 2.751599ms)
Mar 24 08:05:43.077: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/re... (200; 3.110795ms)
Mar 24 08:05:43.277: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/https:proxy-service-m57s1-zsosf:460/proxy/: tls baz (200; 2.943787ms)
Mar 24 08:05:43.477: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/http:proxy-service-m57s1:portname1/proxy/: foo (200; 3.05847ms)
Mar 24 08:05:43.677: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:portname2/proxy/: bar (200; 2.96574ms)
Mar 24 08:05:43.877: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/services/https:proxy-service-m57s1:tlsportname1/proxy/: tls baz (200; 2.936725ms)
Mar 24 08:05:44.077: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:162/: bar (200; 2.769996ms)
Mar 24 08:05:44.278: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/http:proxy-service-m57s1-zsosf:80/proxy/re... (200; 3.078149ms)
Mar 24 08:05:44.478: INFO: /api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/proxy/: foo (200; 3.300512ms)
STEP: deleting replication controller proxy-service-m57s1 in namespace e2e-tests-proxy-ucdpo
Mar 24 08:05:46.692: INFO: Deleting RC proxy-service-m57s1 took: 2.015292725s
Mar 24 08:05:46.692: INFO: Terminating RC proxy-service-m57s1 pods took: 54.728µs
[AfterEach] version v1
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:05:46.728: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-ucdpo" for this suite.

• [SLOW TEST:171.216 seconds]
[k8s.io] Proxy
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  version v1
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:39
    should proxy through a service and a pod [Conformance]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:247
------------------------------
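The run above exercises both proxy URL styles the apiserver supported at the time: the subresource form `/api/v1/namespaces/<ns>/pods/<pod>:<port>/proxy/` and the older prefix form `/api/v1/proxy/namespaces/...`. As a rough illustration only (not part of the test output), here is a minimal Go sketch that replays two of the logged requests through a locally running `kubectl proxy`; the namespace, pod, and port values are copied from the log, while the proxy address 127.0.0.1:8001 is an assumption.

```go
// Minimal sketch: replay the two proxy URL styles seen in the log through a
// local `kubectl proxy` (assumed to listen on 127.0.0.1:8001). Namespace,
// pod, and port values are copied from the log above; adjust for a real run.
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"time"
)

func main() {
	base := "http://127.0.0.1:8001" // assumption: kubectl proxy default address
	paths := []string{
		// subresource form, as exercised by the test
		"/api/v1/namespaces/e2e-tests-proxy-ucdpo/pods/proxy-service-m57s1-zsosf:160/proxy/",
		// older prefix form, also exercised by the test
		"/api/v1/proxy/namespaces/e2e-tests-proxy-ucdpo/services/proxy-service-m57s1:80/",
	}
	client := &http.Client{Timeout: 5 * time.Second}
	for _, p := range paths {
		start := time.Now()
		resp, err := client.Get(base + p)
		if err != nil {
			fmt.Printf("%s: error: %v\n", p, err)
			continue
		}
		body, _ := ioutil.ReadAll(resp.Body)
		resp.Body.Close()
		// Mirror the log line format: path, truncated body, status, latency.
		fmt.Printf("%s: %.30s (%d; %v)\n", p, body, resp.StatusCode, time.Since(start))
	}
}
```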
SSS
------------------------------
[k8s.io] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1145
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:06:06.745: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:06:06.747: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-wl4mg
Mar 24 08:06:06.748: INFO: Get service account default in ns e2e-tests-kubectl-wl4mg failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:06:08.750: INFO: Service account default in ns e2e-tests-kubectl-wl4mg with secrets found. (2.002887845s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:06:08.750: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-wl4mg
Mar 24 08:06:08.751: INFO: Service account default in ns e2e-tests-kubectl-wl4mg with secrets found. (903.71µs)
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:122
[It] should support --unix-socket=/path [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1145
STEP: Starting the proxy
Mar 24 08:06:08.751: INFO: Asynchronously running '/usr/bin/kubectl kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix490244463/test'
STEP: retrieving proxy /api/ output
[AfterEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:06:08.766: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wl4mg" for this suite.

• [SLOW TEST:7.029 seconds]
[k8s.io] Kubectl client
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  [k8s.io] Proxy server
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
    should support --unix-socket=/path [Conformance]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1145
------------------------------
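The `--unix-socket` test starts `kubectl proxy` on a local socket and then fetches `/api/` through it. A hedged sketch of that client side in Go (the socket path is the one printed in the log and only exists while the test runs; everything else is an assumption):

```go
// Minimal sketch: talk to a kubectl proxy that was started with
// --unix-socket=<path>, the way the e2e test retrieves /api/ output.
package main

import (
	"fmt"
	"io/ioutil"
	"net"
	"net/http"
)

func main() {
	socket := "/tmp/kubectl-proxy-unix490244463/test" // from the log above
	client := &http.Client{
		Transport: &http.Transport{
			// Route every request over the unix socket instead of TCP.
			Dial: func(network, addr string) (net.Conn, error) {
				return net.Dial("unix", socket)
			},
		},
	}
	// The host part is ignored once we dial the socket; any name works.
	resp, err := client.Get("http://localhost/api/")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Printf("status %d: %s\n", resp.StatusCode, body)
}
```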
SSSS
------------------------------
[k8s.io] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0 [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1117
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:06:13.774: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:06:13.776: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-dqgs4
Mar 24 08:06:13.777: INFO: Get service account default in ns e2e-tests-kubectl-dqgs4 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:06:15.778: INFO: Service account default in ns e2e-tests-kubectl-dqgs4 with secrets found. (2.002775068s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:06:15.778: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-dqgs4
Mar 24 08:06:15.779: INFO: Service account default in ns e2e-tests-kubectl-dqgs4 with secrets found. (811.277µs)
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:122
[It] should support proxy with --port 0 [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1117
STEP: starting the proxy server
Mar 24 08:06:15.779: INFO: Asynchronously running '/usr/bin/kubectl kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config proxy -p 0'
STEP: curling proxy /api/ output
Mar 24 08:06:15.795: FAIL: Expected at least one supported apiversion, got error Failed to parse /api output : unexpected end of JSON input
[AfterEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
STEP: Collecting events from namespace "e2e-tests-kubectl-dqgs4".
Mar 24 08:06:15.801: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Mar 24 08:06:15.801: INFO: 
Mar 24 08:06:15.802: INFO: 
Logging node info for node 127.0.0.1
Mar 24 08:06:15.804: INFO: Node Info: &{{ } {127.0.0.1   /api/v1/nodes/127.0.0.1 3601f765-f192-11e5-a8cf-064a4ed57913 3045 0 2016-03-24 07:29:57 +0000 UTC <nil> <nil> map[kubernetes.io/e2e-4f3052eb-f196-11e5-8186-064a4ed57913:42 kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1  false} {map[memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI} cpu:{2.000 DecimalSI}] map[memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI} cpu:{2.000 DecimalSI}]  [{OutOfDisk False 2016-03-24 08:06:11 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-24 08:06:11 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {0af04d3c78a943ae8f3cc26602e374f2 EC286296-F1A1-3AB3-2979-B08D0FE5BF4A fdfc86c6-a0ae-41b4-8ab8-29b0b8fd1e8b 3.10.0-229.20.1.el7.x86_64 CentOS Linux 7 (Core) hyper://0.5.0 v1.3.0-alpha.0.854+d9aa83a25cdc21 v1.3.0-alpha.0.854+d9aa83a25cdc21} []}}
Mar 24 08:06:15.804: INFO: 
Logging kubelet events for node 127.0.0.1
Mar 24 08:06:15.804: INFO: 
Logging pods the kubelet thinks are on node 127.0.0.1
Mar 24 08:06:15.831: INFO: 
Latency metrics for node 127.0.0.1
Mar 24 08:06:15.831: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:35.055702s}
Mar 24 08:06:15.831: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:17.782222s}
Mar 24 08:06:15.831: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:13.707771s}
Mar 24 08:06:15.831: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:11.64504s}
Mar 24 08:06:15.831: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dqgs4" for this suite.

• Failure [7.064 seconds]
[k8s.io] Kubectl client
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  [k8s.io] Proxy server
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
    should support proxy with --port 0 [Conformance] [It]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1117

    Mar 24 08:06:15.795: Expected at least one supported apiversion, got error Failed to parse /api output : unexpected end of JSON input

    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1112
------------------------------
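The failure above is on the client half of the test: `kubectl proxy -p 0` binds an OS-assigned port, the test curls `/api/` through it, and then decodes the version list; here the body came back empty, hence "unexpected end of JSON input". A rough sketch of what the passing path decodes, with the port and struct shape as assumptions (the real test uses the APIVersions type from the Kubernetes API package):

```go
// Minimal sketch of what the failing check expects: GET /api through the
// proxy and decode a JSON body like {"versions":["v1"]}. Port 8001 is an
// assumption; `kubectl proxy -p 0` actually prints the port it picked.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// apiVersions mirrors the shape of the apiserver's /api response; the real
// test uses the APIVersions type from the Kubernetes API package instead.
type apiVersions struct {
	Versions []string `json:"versions"`
}

func main() {
	resp, err := http.Get("http://127.0.0.1:8001/api")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	var v apiVersions
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		// An empty body fails here; decoding it with json.Unmarshal instead
		// yields exactly the error seen in the log: "unexpected end of JSON input".
		fmt.Println("Failed to parse /api output:", err)
		return
	}
	if len(v.Versions) == 0 {
		fmt.Println("Expected at least one supported apiversion")
		return
	}
	fmt.Println("supported versions:", v.Versions)
}
```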
S
------------------------------
[k8s.io] Networking 
  should function for intra-pod communication [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:213
[BeforeEach] [k8s.io] Networking
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:06:20.838: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:06:20.841: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-s6s9e
Mar 24 08:06:20.842: INFO: Get service account default in ns e2e-tests-nettest-s6s9e failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:06:22.843: INFO: Service account default in ns e2e-tests-nettest-s6s9e with secrets found. (2.00205508s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:06:22.843: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-s6s9e
Mar 24 08:06:22.844: INFO: Service account default in ns e2e-tests-nettest-s6s9e with secrets found. (888.662µs)
[BeforeEach] [k8s.io] Networking
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
STEP: Executing a successful http request from the external internet
[It] should function for intra-pod communication [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:213
STEP: Creating a service named "nettest" in namespace "e2e-tests-nettest-s6s9e"
STEP: Creating a webserver (pending) pod on each node
Mar 24 08:06:23.088: INFO: Only one ready node is detected. The test has limited scope in such a setting. Rerun it with at least two nodes to get complete coverage.
Mar 24 08:06:23.098: INFO: Created pod nettest-2tbss on node 127.0.0.1
STEP: Waiting for the webserver pods to transition to Running state
Mar 24 08:06:23.098: INFO: Waiting up to 5m0s for pod nettest-2tbss status to be running
Mar 24 08:06:23.099: INFO: Waiting for pod nettest-2tbss in namespace 'e2e-tests-nettest-s6s9e' status to be 'running'(found phase: "Pending", readiness: false) (1.58851ms elapsed)
Mar 24 08:06:25.101: INFO: Waiting for pod nettest-2tbss in namespace 'e2e-tests-nettest-s6s9e' status to be 'running'(found phase: "Pending", readiness: false) (2.003521523s elapsed)
Mar 24 08:06:27.104: INFO: Waiting for pod nettest-2tbss in namespace 'e2e-tests-nettest-s6s9e' status to be 'running'(found phase: "Pending", readiness: false) (4.006338183s elapsed)
Mar 24 08:06:29.106: INFO: Found pod 'nettest-2tbss' on node '127.0.0.1'
STEP: Waiting for connectivity to be verified
Mar 24 08:06:31.106: INFO: About to make a proxy status call
Mar 24 08:06:31.134: INFO: Proxy status call returned in 28.366058ms
Mar 24 08:06:31.134: INFO: Attempt 0: test still running
Mar 24 08:06:33.135: INFO: About to make a proxy status call
Mar 24 08:06:33.141: INFO: Proxy status call returned in 5.782381ms
Mar 24 08:06:33.141: INFO: Attempt 1: test still running
Mar 24 08:06:35.141: INFO: About to make a proxy status call
Mar 24 08:06:35.144: INFO: Proxy status call returned in 3.299358ms
Mar 24 08:06:35.144: INFO: Passed on attempt 2. Cleaning up.
STEP: Cleaning up the webserver pods
STEP: Cleaning up the service
[AfterEach] [k8s.io] Networking
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:06:35.164: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-nettest-s6s9e" for this suite.

• [SLOW TEST:34.339 seconds]
[k8s.io] Networking
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should function for intra-pod communication [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:213
------------------------------
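The spec above puts one webserver pod per node behind the "nettest" service, then polls a status endpoint through the apiserver proxy until every peer has verified its neighbours ("Passed on attempt 2" above). A rough Python version of that polling loop; the proxy path and the "pass" payload here are assumptions, the real ones live inside the test framework:

import time
import urllib.request

# Assumed proxy path to the nettest service's status endpoint.
URL = ("http://127.0.0.1:8080/api/v1/proxy/namespaces/"
       "e2e-tests-nettest-s6s9e/services/nettest/status")

for attempt in range(10):
    body = urllib.request.urlopen(URL).read().decode()
    if "pass" in body:                    # all peers verified
        print("Passed on attempt %d. Cleaning up." % attempt)
        break
    print("Attempt %d: test still running" % attempt)
    time.sleep(2)
------------------------------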
[k8s.io] ReplicationController 
  should serve a basic image on each replica with a public image [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/rc.go:37
[BeforeEach] [k8s.io] ReplicationController
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:06:55.177: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:06:55.179: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-replication-controller-r2ytm
Mar 24 08:06:55.181: INFO: Get service account default in ns e2e-tests-replication-controller-r2ytm failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:06:57.183: INFO: Service account default in ns e2e-tests-replication-controller-r2ytm with secrets found. (2.003322429s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:06:57.183: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-replication-controller-r2ytm
Mar 24 08:06:57.184: INFO: Service account default in ns e2e-tests-replication-controller-r2ytm with secrets found. (936.789µs)
[It] should serve a basic image on each replica with a public image [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/rc.go:37
STEP: Creating replication controller my-hostname-basic-60d8c1d4-f197-11e5-8186-064a4ed57913
Mar 24 08:06:57.189: INFO: Pod name my-hostname-basic-60d8c1d4-f197-11e5-8186-064a4ed57913: Found 0 pods out of 2
Mar 24 08:07:02.192: INFO: Pod name my-hostname-basic-60d8c1d4-f197-11e5-8186-064a4ed57913: Found 2 pods out of 2
STEP: Ensuring each pod is running
Mar 24 08:07:02.192: INFO: Waiting up to 5m0s for pod my-hostname-basic-60d8c1d4-f197-11e5-8186-064a4ed57913-lexn8 status to be running
Mar 24 08:07:02.194: INFO: Waiting for pod my-hostname-basic-60d8c1d4-f197-11e5-8186-064a4ed57913-lexn8 in namespace 'e2e-tests-replication-controller-r2ytm' status to be 'running'(found phase: "Pending", readiness: false) (1.942367ms elapsed)
Mar 24 08:07:04.198: INFO: Waiting for pod my-hostname-basic-60d8c1d4-f197-11e5-8186-064a4ed57913-lexn8 in namespace 'e2e-tests-replication-controller-r2ytm' status to be 'running'(found phase: "Pending", readiness: false) (2.006762969s elapsed)
Mar 24 08:07:06.200: INFO: Waiting for pod my-hostname-basic-60d8c1d4-f197-11e5-8186-064a4ed57913-lexn8 in namespace 'e2e-tests-replication-controller-r2ytm' status to be 'running'(found phase: "Pending", readiness: false) (4.008767466s elapsed)
Mar 24 08:07:08.203: INFO: Found pod 'my-hostname-basic-60d8c1d4-f197-11e5-8186-064a4ed57913-lexn8' on node '127.0.0.1'
Mar 24 08:07:08.203: INFO: Waiting up to 5m0s for pod my-hostname-basic-60d8c1d4-f197-11e5-8186-064a4ed57913-p7t25 status to be running
Mar 24 08:07:08.204: INFO: Found pod 'my-hostname-basic-60d8c1d4-f197-11e5-8186-064a4ed57913-p7t25' on node '127.0.0.1'
STEP: Trying to dial each unique pod
Mar 24 08:07:13.260: INFO: Controller my-hostname-basic-60d8c1d4-f197-11e5-8186-064a4ed57913: Got expected result from replica 1 [my-hostname-basic-60d8c1d4-f197-11e5-8186-064a4ed57913-lexn8]: "my-hostname-basic-60d8c1d4-f197-11e5-8186-064a4ed57913-lexn8", 1 of 2 required successes so far
Mar 24 08:07:13.311: INFO: Controller my-hostname-basic-60d8c1d4-f197-11e5-8186-064a4ed57913: Got expected result from replica 2 [my-hostname-basic-60d8c1d4-f197-11e5-8186-064a4ed57913-p7t25]: "my-hostname-basic-60d8c1d4-f197-11e5-8186-064a4ed57913-p7t25", 2 of 2 required successes so far
STEP: deleting replication controller my-hostname-basic-60d8c1d4-f197-11e5-8186-064a4ed57913 in namespace e2e-tests-replication-controller-r2ytm
Mar 24 08:07:15.327: INFO: Deleting RC my-hostname-basic-60d8c1d4-f197-11e5-8186-064a4ed57913 took: 2.015187647s
Mar 24 08:07:15.327: INFO: Terminating RC my-hostname-basic-60d8c1d4-f197-11e5-8186-064a4ed57913 pods took: 57.021µs
[AfterEach] [k8s.io] ReplicationController
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:07:15.327: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-r2ytm" for this suite.

• [SLOW TEST:40.160 seconds]
[k8s.io] ReplicationController
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should serve a basic image on each replica with a public image [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/rc.go:37
------------------------------
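Each replica of this RC serves its own pod name over HTTP, and the spec dials every pod through the apiserver proxy expecting the echoed name to match. A condensed sketch; the label selector is an assumption, the namespace is from this run:

import subprocess
import urllib.request

NS = "e2e-tests-replication-controller-r2ytm"
pods = subprocess.check_output(
    ["kubectl", "--server=127.0.0.1:8080", "--namespace", NS,
     "get", "pods", "-l", "name=my-hostname-basic",   # assumed selector
     "-o", "jsonpath={.items[*].metadata.name}"],
    text=True).split()

for pod in pods:
    url = ("http://127.0.0.1:8080/api/v1/proxy/namespaces/%s/pods/%s/"
           % (NS, pod))
    answer = urllib.request.urlopen(url).read().decode().strip().strip('"')
    assert answer == pod, "replica %s answered %r" % (pod, answer)
    print("Got expected result from replica", pod)
------------------------------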
SS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:659
[BeforeEach] [k8s.io] Pods
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:07:35.337: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:07:35.339: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-g20n2
Mar 24 08:07:35.340: INFO: Get service account default in ns e2e-tests-pods-g20n2 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:07:37.342: INFO: Service account default in ns e2e-tests-pods-g20n2 with secrets found. (2.002470945s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:07:37.342: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-g20n2
Mar 24 08:07:37.343: INFO: Service account default in ns e2e-tests-pods-g20n2 with secrets found. (832.708µs)
[It] should contain environment variables for services [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:659
Mar 24 08:07:37.347: INFO: Waiting up to 5m0s for pod server-envvars-78c88e57-f197-11e5-8186-064a4ed57913 status to be running
Mar 24 08:07:37.349: INFO: Waiting for pod server-envvars-78c88e57-f197-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-g20n2' status to be 'running'(found phase: "Pending", readiness: false) (1.178439ms elapsed)
Mar 24 08:07:39.351: INFO: Waiting for pod server-envvars-78c88e57-f197-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-g20n2' status to be 'running'(found phase: "Pending", readiness: false) (2.003319649s elapsed)
Mar 24 08:07:41.353: INFO: Waiting for pod server-envvars-78c88e57-f197-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-g20n2' status to be 'running'(found phase: "Pending", readiness: false) (4.005314316s elapsed)
Mar 24 08:07:43.355: INFO: Found pod 'server-envvars-78c88e57-f197-11e5-8186-064a4ed57913' on node '127.0.0.1'
STEP: Creating a pod to test service env
Mar 24 08:07:43.368: INFO: Waiting up to 5m0s for pod client-envvars-7c5f5bef-f197-11e5-8186-064a4ed57913 status to be success or failure
Mar 24 08:07:43.372: INFO: No Status.Info for container 'env3cont' in pod 'client-envvars-7c5f5bef-f197-11e5-8186-064a4ed57913' yet
Mar 24 08:07:43.372: INFO: Waiting for pod client-envvars-7c5f5bef-f197-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-g20n2' status to be 'success or failure'(found phase: "Pending", readiness: false) (3.786648ms elapsed)
Mar 24 08:07:45.375: INFO: Nil State.Terminated for container 'env3cont' in pod 'client-envvars-7c5f5bef-f197-11e5-8186-064a4ed57913' in namespace 'e2e-tests-pods-g20n2' so far
Mar 24 08:07:45.375: INFO: Waiting for pod client-envvars-7c5f5bef-f197-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-g20n2' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.006102928s elapsed)
Mar 24 08:07:47.377: INFO: Nil State.Terminated for container 'env3cont' in pod 'client-envvars-7c5f5bef-f197-11e5-8186-064a4ed57913' in namespace 'e2e-tests-pods-g20n2' so far
Mar 24 08:07:47.377: INFO: Waiting for pod client-envvars-7c5f5bef-f197-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-g20n2' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.008182186s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod client-envvars-7c5f5bef-f197-11e5-8186-064a4ed57913 container env3cont: <nil>
STEP: Successfully fetched pod logs:KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.0.0.1:443
FOOSERVICE_PORT_8765_TCP_PORT=8765
FOOSERVICE_PORT_8765_TCP_PROTO=tcp
SHLVL=1
HOME=/
FOOSERVICE_PORT_8765_TCP=tcp://10.0.0.95:8765
TERM=linux
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
PATH=/bin:/sbin/:/usr/bin/:/usr/sbin/
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
FOOSERVICE_SERVICE_HOST=10.0.0.95
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
PWD=/
KUBERNETES_SERVICE_HOST=10.0.0.1
FOOSERVICE_SERVICE_PORT=8765
FOOSERVICE_PORT=tcp://10.0.0.95:8765
FOOSERVICE_PORT_8765_TCP_ADDR=10.0.0.95


[AfterEach] [k8s.io] Pods
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:07:49.414: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-g20n2" for this suite.

• [SLOW TEST:19.087 seconds]
[k8s.io] Pods
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should contain environment variables for services [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:659
------------------------------
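The variables in the fetched logs are the docker-links-style entries the kubelet injects for every service that exists when a pod starts; FOOSERVICE is the service this spec creates before the client pod. Inside the client, consuming them is just an environment lookup:

import os

# <NAME>_SERVICE_HOST/_SERVICE_PORT are injected per service, alongside
# the _PORT_* family shown in the log above.
host = os.environ["FOOSERVICE_SERVICE_HOST"]         # e.g. 10.0.0.95
port = int(os.environ["FOOSERVICE_SERVICE_PORT"])    # e.g. 8765
print("fooservice reachable at %s:%d" % (host, port))
------------------------------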
[k8s.io] Proxy version v1 
  should proxy logs on node with explicit kubelet port [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:58
[BeforeEach] version v1
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:07:54.424: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:07:54.426: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-mtqtd
Mar 24 08:07:54.427: INFO: Get service account default in ns e2e-tests-proxy-mtqtd failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:07:56.429: INFO: Service account default in ns e2e-tests-proxy-mtqtd with secrets found. (2.002599307s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:07:56.429: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-mtqtd
Mar 24 08:07:56.430: INFO: Service account default in ns e2e-tests-proxy-mtqtd with secrets found. (833.991µs)
[It] should proxy logs on node with explicit kubelet port [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:58
Mar 24 08:07:56.433: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.567366ms)
Mar 24 08:07:56.436: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.329424ms)
Mar 24 08:07:56.438: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.204067ms)
Mar 24 08:07:56.440: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.285677ms)
Mar 24 08:07:56.443: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.262302ms)
Mar 24 08:07:56.445: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.290039ms)
Mar 24 08:07:56.447: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.495933ms)
Mar 24 08:07:56.450: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.263577ms)
Mar 24 08:07:56.452: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.585194ms)
Mar 24 08:07:56.458: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 5.273259ms)
Mar 24 08:07:56.460: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.764924ms)
Mar 24 08:07:56.468: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 7.816252ms)
Mar 24 08:07:56.471: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.38474ms)
Mar 24 08:07:56.473: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.351974ms)
Mar 24 08:07:56.475: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.33416ms)
Mar 24 08:07:56.478: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.357396ms)
Mar 24 08:07:56.480: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.34198ms)
Mar 24 08:07:56.483: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.548405ms)
Mar 24 08:07:56.485: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.29684ms)
Mar 24 08:07:56.487: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.342606ms)
[AfterEach] version v1
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:07:56.487: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-mtqtd" for this suite.

• [SLOW TEST:7.071 seconds]
[k8s.io] Proxy
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  version v1
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:39
    should proxy logs on node with explicit kubelet port [Conformance]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:58
------------------------------
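All twenty requests above hit the same node-proxy path with the kubelet port (10250) spelled out explicitly. A single-request version of the probe against the apiserver from this run:

import urllib.request

url = "http://127.0.0.1:8080/api/v1/proxy/nodes/127.0.0.1:10250/logs/"
resp = urllib.request.urlopen(url)
body = resp.read().decode()
assert resp.getcode() == 200 and "<pre>" in body   # plain HTML index of /var/log
print(body[:120])
------------------------------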
[k8s.io] ServiceAccounts 
  should mount an API token into pods [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:239
[BeforeEach] [k8s.io] ServiceAccounts
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:08:01.495: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:08:01.497: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-svcaccounts-rrudj
Mar 24 08:08:01.498: INFO: Get service account default in ns e2e-tests-svcaccounts-rrudj failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:08:03.500: INFO: Service account default in ns e2e-tests-svcaccounts-rrudj with secrets found. (2.003175918s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:08:03.500: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-svcaccounts-rrudj
Mar 24 08:08:03.501: INFO: Service account default in ns e2e-tests-svcaccounts-rrudj with secrets found. (898.074µs)
[It] should mount an API token into pods [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:239
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Mar 24 08:08:04.010: INFO: Waiting up to 5m0s for pod pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913 status to be success or failure
Mar 24 08:08:04.011: INFO: No Status.Info for container 'token-test' in pod 'pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913' yet
Mar 24 08:08:04.011: INFO: Waiting for pod pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913 in namespace 'e2e-tests-svcaccounts-rrudj' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.138449ms elapsed)
Mar 24 08:08:06.013: INFO: Nil State.Terminated for container 'token-test' in pod 'pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913' in namespace 'e2e-tests-svcaccounts-rrudj' so far
Mar 24 08:08:06.013: INFO: Waiting for pod pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913 in namespace 'e2e-tests-svcaccounts-rrudj' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.003628149s elapsed)
Mar 24 08:08:08.016: INFO: Nil State.Terminated for container 'token-test' in pod 'pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913' in namespace 'e2e-tests-svcaccounts-rrudj' so far
Mar 24 08:08:08.016: INFO: Waiting for pod pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913 in namespace 'e2e-tests-svcaccounts-rrudj' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.006862922s elapsed)
Mar 24 08:08:10.019: INFO: Nil State.Terminated for container 'token-test' in pod 'pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913' in namespace 'e2e-tests-svcaccounts-rrudj' so far
Mar 24 08:08:10.019: INFO: Waiting for pod pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913 in namespace 'e2e-tests-svcaccounts-rrudj' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.009037938s elapsed)
STEP: Saw pod success
Mar 24 08:08:12.021: INFO: Waiting up to 5m0s for pod pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913 status to be success or failure
STEP: Saw pod success
Mar 24 08:08:12.022: INFO: Waiting up to 5m0s for pod pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913 status to be success or failure
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913 container token-test: <nil>
STEP: Successfully fetched pod logs:content of file "/var/run/secrets/kubernetes.io/serviceaccount/token": eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJlMmUtdGVzdHMtc3ZjYWNjb3VudHMtcnJ1ZGoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiZGVmYXVsdC10b2tlbi00Z2VyOSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiODcyZTY5N2UtZjE5Ny0xMWU1LWE4Y2YtMDY0YTRlZDU3OTEzIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmUyZS10ZXN0cy1zdmNhY2NvdW50cy1ycnVkajpkZWZhdWx0In0.KUesmm9Ke1sNtpSdImsm9vHCwJM1n3zUN4ivaG2lwogS7GPUkTlZhCeJa3jHAi9b3lJGBWgLxTTxhgE2xxXeaK7yFkgJXG5ry10_TTc1azHlzn9AaqDbaw4Gd43sy2lt1w88_0sF4gXo1HT1xmQj8YKY6SqCjexDqg4Bp7eaFbfleOgCdUuh_u4ZrLqlICd7gPbrM_q0blHNGuf7SOloNoxXdjU_-SsHnP1eHfcrwy-Y7cGhckzZOZmPHgl8I_qfvDd7nak6eKFGNx8OYOM3Bphw4Ra7VpTzU-ArYd3522o-AoHQlLBzr8VZi-gTNhH4SLn2uzEQrmQp5JqOu4qfeQ


STEP: Creating a pod to test consume service account root CA
Mar 24 08:08:12.041: INFO: Waiting up to 5m0s for pod pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913 status to be success or failure
Mar 24 08:08:12.044: INFO: No Status.Info for container 'token-test' in pod 'pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913' yet
Mar 24 08:08:12.044: INFO: Waiting for pod pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913 in namespace 'e2e-tests-svcaccounts-rrudj' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.937019ms elapsed)
Mar 24 08:08:14.047: INFO: Nil State.Terminated for container 'token-test' in pod 'pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913' in namespace 'e2e-tests-svcaccounts-rrudj' so far
Mar 24 08:08:14.047: INFO: Waiting for pod pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913 in namespace 'e2e-tests-svcaccounts-rrudj' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.005711745s elapsed)
Mar 24 08:08:16.049: INFO: Nil State.Terminated for container 'token-test' in pod 'pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913' in namespace 'e2e-tests-svcaccounts-rrudj' so far
Mar 24 08:08:16.049: INFO: Waiting for pod pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913 in namespace 'e2e-tests-svcaccounts-rrudj' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.007667564s elapsed)
Mar 24 08:08:18.051: INFO: Nil State.Terminated for container 'token-test' in pod 'pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913' in namespace 'e2e-tests-svcaccounts-rrudj' so far
Mar 24 08:08:18.051: INFO: Waiting for pod pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913 in namespace 'e2e-tests-svcaccounts-rrudj' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.00980686s elapsed)
STEP: Saw pod success
Mar 24 08:08:20.054: INFO: Waiting up to 5m0s for pod pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913 status to be success or failure
STEP: Saw pod success
Mar 24 08:08:20.055: INFO: Waiting up to 5m0s for pod pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913 status to be success or failure
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913 container root-ca-test: <nil>
STEP: Successfully fetched pod logs:content of file "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt": -----BEGIN CERTIFICATE-----
MIIDSDCCAjCgAwIBAgIBATANBgkqhkiG9w0BAQsFADAiMSAwHgYDVQQDDBcxNzIu
MzAuMC4yMjNAMTQ1ODgwNDU5NjAeFw0xNjAzMjQwNzI5NTZaFw0xNzAzMjQwNzI5
NTZaMCIxIDAeBgNVBAMMFzE3Mi4zMC4wLjIyM0AxNDU4ODA0NTk2MIIBIjANBgkq
hkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAm2CC4SM6kwfQTAw6fd8BwRc8zck8g/69
MSooLBd7O10caVApnNfpRafO5Tk0GnvsskbNM0bIwq8vDC4xuyguifooaJ5AFvc3
qs334Cu7sYmUo5CRE58QjC9sYrGIowkefaQhVFWbB85TYkqE2vOvdRHe1q7rXC+U
SwrA9/8/zMcnaGBgnCP+VEd1LUAWeHPSObiB15N0urhfxgSToABkx/U+K1qP4ETq
cTvgQKg2gldoMB4BbwZ1rSHX8eMRwPCnvvDqYvYswLv30CSHJL136WRnf8lGRug7
XW1jraj1W7EEa+ts3ukTnxPK2mxjCsGELGN4QU20bNaKbSa8Pu2tPwIDAQABo4GI
MIGFMA4GA1UdDwEB/wQEAwICpDATBgNVHSUEDDAKBggrBgEFBQcDATAPBgNVHRMB
Af8EBTADAQH/ME0GA1UdEQRGMESCFmt1YmVybmV0ZXMuZGVmYXVsdC5zdmOCEmt1
YmVybmV0ZXMuZGVmYXVsdIIKa3ViZXJuZXRlc4cErB4A34cECgAAATANBgkqhkiG
9w0BAQsFAAOCAQEAWREjJSz0FD+6YpnxR3V9loGQCJKpE7HNwEaIdPLiCv2Q2yHM
888tYmQHArpO5k33WPnQ5A82prfxuhPLTkqr9GiBek1uQdqvsdqy1asn/evSdraW
xz60qSfYPwMSUx7l/OF7g1bhV6GXfPj+DZYRyAm8W0E7UO/TH2jHMFi16AJJ6Mwa
G7llNQecsTwk1NX38sTouUlDtZv/2VXOo9mxg2lazvIk1LSyG4Px/zGuDC8knkOk
PO/qDNlxXp/WDLkPrITgVE6Ya01qKmt1Pw1igEuSE3hY/JdJpl+i2xRDkXiaoo6o
ruTyM/5RFkkPLWA0nLgi7NUrJ2z6Fcnu3w8Skw==
-----END CERTIFICATE-----



STEP: Creating a pod to test consume service account namespace
Mar 24 08:08:20.075: INFO: Waiting up to 5m0s for pod pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913 status to be success or failure
Mar 24 08:08:20.085: INFO: No Status.Info for container 'token-test' in pod 'pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913' yet
Mar 24 08:08:20.085: INFO: Waiting for pod pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913 in namespace 'e2e-tests-svcaccounts-rrudj' status to be 'success or failure'(found phase: "Pending", readiness: false) (9.708565ms elapsed)
Mar 24 08:08:22.087: INFO: Nil State.Terminated for container 'token-test' in pod 'pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913' in namespace 'e2e-tests-svcaccounts-rrudj' so far
Mar 24 08:08:22.087: INFO: Waiting for pod pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913 in namespace 'e2e-tests-svcaccounts-rrudj' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.011769734s elapsed)
Mar 24 08:08:24.089: INFO: Nil State.Terminated for container 'token-test' in pod 'pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913' in namespace 'e2e-tests-svcaccounts-rrudj' so far
Mar 24 08:08:24.089: INFO: Waiting for pod pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913 in namespace 'e2e-tests-svcaccounts-rrudj' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.013792194s elapsed)
Mar 24 08:08:26.091: INFO: Nil State.Terminated for container 'token-test' in pod 'pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913' in namespace 'e2e-tests-svcaccounts-rrudj' so far
Mar 24 08:08:26.091: INFO: Waiting for pod pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913 in namespace 'e2e-tests-svcaccounts-rrudj' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.016014145s elapsed)
STEP: Saw pod success
Mar 24 08:08:28.093: INFO: Waiting up to 5m0s for pod pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913 status to be success or failure
STEP: Saw pod success
Mar 24 08:08:28.095: INFO: Waiting up to 5m0s for pod pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913 status to be success or failure
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-service-account-88acb1ca-f197-11e5-8186-064a4ed57913 container namespace-test: <nil>
STEP: Successfully fetched pod logs:content of file "/var/run/secrets/kubernetes.io/serviceaccount/namespace": e2e-tests-svcaccounts-rrudj


[AfterEach] [k8s.io] ServiceAccounts
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:08:28.107: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-rrudj" for this suite.

• [SLOW TEST:31.625 seconds]
[k8s.io] ServiceAccounts
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should mount an API token into pods [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:239
------------------------------
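The three containers each cat one file from the standard service-account mount, and together those files are enough for authenticated API access from inside any pod. A sketch of that pattern (runs in-cluster only; the kubernetes.default.svc name assumes cluster DNS is available):

import ssl
import urllib.request

SA = "/var/run/secrets/kubernetes.io/serviceaccount"
token = open(SA + "/token").read()
namespace = open(SA + "/namespace").read()

# Trust the apiserver via the mounted CA, authenticate with the token.
ctx = ssl.create_default_context(cafile=SA + "/ca.crt")
req = urllib.request.Request(
    "https://kubernetes.default.svc/api/v1/namespaces/%s/pods" % namespace,
    headers={"Authorization": "Bearer " + token})
print(urllib.request.urlopen(req, context=ctx).getcode())
------------------------------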
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:130
[BeforeEach] [k8s.io] Variable Expansion
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:08:33.120: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:08:33.122: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-e1khs
Mar 24 08:08:33.123: INFO: Get service account default in ns e2e-tests-var-expansion-e1khs failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:08:35.125: INFO: Service account default in ns e2e-tests-var-expansion-e1khs with secrets found. (2.002990781s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:08:35.125: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-e1khs
Mar 24 08:08:35.126: INFO: Service account default in ns e2e-tests-var-expansion-e1khs with secrets found. (924.331µs)
[It] should allow substituting values in a container's args [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:130
STEP: Creating a pod to test substitution in container's args
Mar 24 08:08:35.130: INFO: Waiting up to 5m0s for pod var-expansion-9b3993ee-f197-11e5-8186-064a4ed57913 status to be success or failure
Mar 24 08:08:35.132: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-9b3993ee-f197-11e5-8186-064a4ed57913' yet
Mar 24 08:08:35.132: INFO: Waiting for pod var-expansion-9b3993ee-f197-11e5-8186-064a4ed57913 in namespace 'e2e-tests-var-expansion-e1khs' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.12907ms elapsed)
Mar 24 08:08:37.133: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-9b3993ee-f197-11e5-8186-064a4ed57913' in namespace 'e2e-tests-var-expansion-e1khs' so far
Mar 24 08:08:37.133: INFO: Waiting for pod var-expansion-9b3993ee-f197-11e5-8186-064a4ed57913 in namespace 'e2e-tests-var-expansion-e1khs' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.002935099s elapsed)
Mar 24 08:08:39.135: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-9b3993ee-f197-11e5-8186-064a4ed57913' in namespace 'e2e-tests-var-expansion-e1khs' so far
Mar 24 08:08:39.135: INFO: Waiting for pod var-expansion-9b3993ee-f197-11e5-8186-064a4ed57913 in namespace 'e2e-tests-var-expansion-e1khs' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.00488531s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod var-expansion-9b3993ee-f197-11e5-8186-064a4ed57913 container dapi-container: <nil>
STEP: Successfully fetched pod logs:test-value


[AfterEach] [k8s.io] Variable Expansion
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:08:41.149: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-e1khs" for this suite.

• [SLOW TEST:13.036 seconds]
[k8s.io] Variable Expansion
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should allow substituting values in a container's args [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:130
------------------------------
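The pod's args here reference an environment variable via $(VAR) syntax and the container echoes the substituted value, hence the "test-value" log line. A rough model of the expansion rule Kubernetes applies to command and args; the variable name TEST_VAR is assumed, and the real rules also cover $$ escaping, which this sketch ignores:

import re

def expand(arg, env):
    # Replace $(NAME) with its value; unknown names are left literal.
    return re.sub(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)",
                  lambda m: env.get(m.group(1), m.group(0)), arg)

print(expand("$(TEST_VAR)", {"TEST_VAR": "test-value"}))   # -> test-value
------------------------------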
S
------------------------------
[k8s.io] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:976
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:08:46.156: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:08:46.159: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-dqrks
Mar 24 08:08:46.160: INFO: Get service account default in ns e2e-tests-kubectl-dqrks failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:08:48.162: INFO: Service account default in ns e2e-tests-kubectl-dqrks with secrets found. (2.003520462s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:08:48.162: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-dqrks
Mar 24 08:08:48.163: INFO: Service account default in ns e2e-tests-kubectl-dqrks with secrets found. (843.564µs)
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:122
[BeforeEach] [k8s.io] Kubectl run rc
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:932
[It] should create an rc from an image [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:976
STEP: running the image gcr.io/google_containers/nginx:1.7.9
Mar 24 08:08:48.163: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=gcr.io/google_containers/nginx:1.7.9 --generator=run/v1 --namespace=e2e-tests-kubectl-dqrks'
Mar 24 08:08:48.179: INFO: stderr: ""
Mar 24 08:08:48.179: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" created"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Mar 24 08:08:50.184: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-acwsj]
Mar 24 08:08:50.184: INFO: Waiting up to 5m0s for pod e2e-test-nginx-rc-acwsj status to be running and ready
Mar 24 08:08:50.185: INFO: Waiting for pod e2e-test-nginx-rc-acwsj in namespace 'e2e-tests-kubectl-dqrks' status to be 'running and ready'(found phase: "Pending", readiness: false) (1.336286ms elapsed)
Mar 24 08:08:52.188: INFO: Waiting for pod e2e-test-nginx-rc-acwsj in namespace 'e2e-tests-kubectl-dqrks' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.003563796s elapsed)
Mar 24 08:08:54.190: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-acwsj]
Mar 24 08:08:54.190: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-dqrks'
Mar 24 08:08:54.208: INFO: stderr: ""
[AfterEach] [k8s.io] Kubectl run rc
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:936
Mar 24 08:08:54.208: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-dqrks'
Mar 24 08:08:56.235: INFO: stderr: ""
Mar 24 08:08:56.235: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted"
[AfterEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:08:56.235: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dqrks" for this suite.

• [SLOW TEST:30.087 seconds]
[k8s.io] Kubectl client
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  [k8s.io] Kubectl run rc
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
    should create an rc from an image [Conformance]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:976
------------------------------
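This spec is three kubectl invocations end to end: run with the run/v1 generator, logs addressed to the rc, then delete. The same sequence scripted, flags copied from this run (the wait for the rc's pod to become ready is elided):

import subprocess

KUBECTL = ["kubectl", "--server=127.0.0.1:8080",
           "--kubeconfig=/root/.kube/config",
           "--namespace=e2e-tests-kubectl-dqrks"]

subprocess.check_call(KUBECTL + [
    "run", "e2e-test-nginx-rc",
    "--image=gcr.io/google_containers/nginx:1.7.9", "--generator=run/v1"])
# ...wait for the pod to be running and ready, then:
print(subprocess.check_output(KUBECTL + ["logs", "rc/e2e-test-nginx-rc"],
                              text=True))
subprocess.check_call(KUBECTL + ["delete", "rc", "e2e-test-nginx-rc"])
------------------------------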
[k8s.io] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:672
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:09:16.243: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:09:16.246: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-dm4h4
Mar 24 08:09:16.246: INFO: Get service account default in ns e2e-tests-kubectl-dm4h4 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:09:18.248: INFO: Service account default in ns e2e-tests-kubectl-dm4h4 with secrets found. (2.002172875s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:09:18.248: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-dm4h4
Mar 24 08:09:18.249: INFO: Service account default in ns e2e-tests-kubectl-dm4h4 with secrets found. (805.199µs)
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:122
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:672
Mar 24 08:09:18.249: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config create -f /root/src/k8s.io/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-dm4h4'
Mar 24 08:09:18.300: INFO: stderr: ""
Mar 24 08:09:18.300: INFO: stdout: "replicationcontroller \"redis-master\" created"
Mar 24 08:09:18.300: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config create -f /root/src/k8s.io/kubernetes/examples/guestbook-go/redis-master-service.json --namespace=e2e-tests-kubectl-dm4h4'
Mar 24 08:09:18.370: INFO: stderr: ""
Mar 24 08:09:18.370: INFO: stdout: "service \"redis-master\" created"
Mar 24 08:09:18.372: INFO: Waiting up to 5m0s for pod redis-master-bauoh status to be running
Mar 24 08:09:18.375: INFO: Waiting for pod redis-master-bauoh in namespace 'e2e-tests-kubectl-dm4h4' status to be 'running'(found phase: "Pending", readiness: false) (2.722965ms elapsed)
Mar 24 08:09:20.377: INFO: Waiting for pod redis-master-bauoh in namespace 'e2e-tests-kubectl-dm4h4' status to be 'running'(found phase: "Pending", readiness: false) (2.004782096s elapsed)
Mar 24 08:09:22.379: INFO: Waiting for pod redis-master-bauoh in namespace 'e2e-tests-kubectl-dm4h4' status to be 'running'(found phase: "Pending", readiness: false) (4.00661177s elapsed)
Mar 24 08:09:24.381: INFO: Waiting for pod redis-master-bauoh in namespace 'e2e-tests-kubectl-dm4h4' status to be 'running'(found phase: "Pending", readiness: false) (6.008493124s elapsed)
Mar 24 08:09:26.383: INFO: Found pod 'redis-master-bauoh' on node '127.0.0.1'
Mar 24 08:09:26.383: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config describe pod redis-master-bauoh --namespace=e2e-tests-kubectl-dm4h4'
Mar 24 08:09:26.405: INFO: stderr: ""
Mar 24 08:09:26.405: INFO: stdout: "Name:\t\tredis-master-bauoh\nNamespace:\te2e-tests-kubectl-dm4h4\nNode:\t\t127.0.0.1/127.0.0.1\nStart Time:\tThu, 24 Mar 2016 08:09:18 +0000\nLabels:\t\tapp=redis,role=master\nStatus:\t\tRunning\nIP:\t\t192.168.123.74\nControllers:\tReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:\thyper://a28d8075c3241309bddbba683a2ae81598454e7b479c653d45fe18ac32cd9e65\n    Image:\t\tredis\n    Image ID:\t\tsha256:26380e1ca3560c5cd3c5243e08f5d9b13a0c0d8847b89c1372d5b69ce081e69a\n    Port:\t\t6379/TCP\n    QoS Tier:\n      cpu:\t\tBestEffort\n      memory:\t\tBestEffort\n    State:\t\tRunning\n      Started:\t\tThu, 24 Mar 2016 08:09:19 +0000\n    Ready:\t\tTrue\n    Restart Count:\t0\n    Environment Variables:\nConditions:\n  Type\t\tStatus\n  Ready \tTrue \nVolumes:\n  default-token-t7xjj:\n    Type:\tSecret (a volume populated by a Secret)\n    SecretName:\tdefault-token-t7xjj\nEvents:\n  FirstSeen\tLastSeen\tCount\tFrom\t\t\tSubobjectPath\t\t\tType\t\tReason\t\tMessage\n  ---------\t--------\t-----\t----\t\t\t-------------\t\t\t--------\t------\t\t-------\n  8s\t\t8s\t\t1\t{default-scheduler }\t\t\t\t\tNormal\t\tScheduled\tSuccessfully assigned redis-master-bauoh to 127.0.0.1\n  8s\t\t8s\t\t1\t{kubelet 127.0.0.1}\tspec.containers{redis-master}\tNormal\t\tPulling\t\tpulling image \"redis\"\n  8s\t\t8s\t\t1\t{kubelet 127.0.0.1}\tspec.containers{redis-master}\tNormal\t\tPulled\t\tSuccessfully pulled image \"redis\""
Mar 24 08:09:26.405: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-dm4h4'
Mar 24 08:09:26.422: INFO: stderr: ""
Mar 24 08:09:26.422: INFO: stdout: "Name:\t\tredis-master\nNamespace:\te2e-tests-kubectl-dm4h4\nImage(s):\tredis\nSelector:\tapp=redis,role=master\nLabels:\t\tapp=redis,role=master\nReplicas:\t1 current / 1 desired\nPods Status:\t1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nNo volumes.\nEvents:\n  FirstSeen\tLastSeen\tCount\tFrom\t\t\t\tSubobjectPath\tType\t\tReason\t\t\tMessage\n  ---------\t--------\t-----\t----\t\t\t\t-------------\t--------\t------\t\t\t-------\n  8s\t\t8s\t\t1\t{replication-controller }\t\t\tNormal\t\tSuccessfulCreate\tCreated pod: redis-master-bauoh"
Mar 24 08:09:26.422: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-dm4h4'
Mar 24 08:09:26.437: INFO: stderr: ""
Mar 24 08:09:26.437: INFO: stdout: "Name:\t\t\tredis-master\nNamespace:\t\te2e-tests-kubectl-dm4h4\nLabels:\t\t\tapp=redis,role=master\nSelector:\t\tapp=redis,role=master\nType:\t\t\tClusterIP\nIP:\t\t\t10.0.0.194\nPort:\t\t\t<unset>\t6379/TCP\nEndpoints:\t\t192.168.123.74:6379\nSession Affinity:\tNone\nNo events."
Mar 24 08:09:26.439: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config describe node 127.0.0.1'
Mar 24 08:09:26.467: INFO: stderr: ""
Mar 24 08:09:26.467: INFO: stdout: "Name:\t\t\t127.0.0.1\nLabels:\t\t\tkubernetes.io/e2e-4f3052eb-f196-11e5-8186-064a4ed57913=42,kubernetes.io/hostname=127.0.0.1\nCreationTimestamp:\tThu, 24 Mar 2016 07:29:57 +0000\nPhase:\t\t\t\nConditions:\n  Type\t\tStatus\tLastHeartbeatTime\t\t\tLastTransitionTime\t\t\tReason\t\t\t\tMessage\n  ----\t\t------\t-----------------\t\t\t------------------\t\t\t------\t\t\t\t-------\n  OutOfDisk \tFalse \tThu, 24 Mar 2016 08:09:22 +0000 \tThu, 24 Mar 2016 07:29:57 +0000 \tKubeletHasSufficientDisk \tkubelet has sufficient disk space available\n  Ready \tTrue \tThu, 24 Mar 2016 08:09:22 +0000 \tThu, 24 Mar 2016 07:29:57 +0000 \tKubeletReady \t\t\tkubelet is posting ready status\nAddresses:\t127.0.0.1,127.0.0.1\nCapacity:\n cpu:\t\t2\n memory:\t7748236Ki\n pods:\t\t110\nSystem Info:\n Machine ID:\t\t\t0af04d3c78a943ae8f3cc26602e374f2\n System UUID:\t\t\tEC286296-F1A1-3AB3-2979-B08D0FE5BF4A\n Boot ID:\t\t\tfdfc86c6-a0ae-41b4-8ab8-29b0b8fd1e8b\n Kernel Version:\t\t3.10.0-229.20.1.el7.x86_64\n OS Image:\t\t\tCentOS Linux 7 (Core)\n Container Runtime Version:\thyper://0.5.0\n Kubelet Version:\t\tv1.3.0-alpha.0.854+d9aa83a25cdc21\n Kube-Proxy Version:\t\tv1.3.0-alpha.0.854+d9aa83a25cdc21\nExternalID:\t\t\t127.0.0.1\nNon-terminated Pods:\t\t(1 in total)\n  Namespace\t\t\tName\t\t\t\tCPU Requests\tCPU Limits\tMemory Requests\tMemory Limits\n  ---------\t\t\t----\t\t\t\t------------\t----------\t---------------\t-------------\n  e2e-tests-kubectl-dm4h4\tredis-master-bauoh\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\nAllocated resources:\n  (Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)\n  CPU Requests\tCPU Limits\tMemory Requests\tMemory Limits\n  ------------\t----------\t---------------\t-------------\n  0 (0%)\t0 (0%)\t\t0 (0%)\t\t0 (0%)\nEvents:\n  FirstSeen\tLastSeen\tCount\tFrom\t\t\tSubobjectPath\tType\t\tReason\t\t\tMessage\n  ---------\t--------\t-----\t----\t\t\t-------------\t--------\t------\t\t\t-------\n  39m\t\t39m\t\t1\t{kube-proxy 127.0.0.1}\t\t\tNormal\t\tStarting\t\tStarting kube-proxy.\n  39m\t\t39m\t\t1\t{kubelet 127.0.0.1}\t\t\tNormal\t\tStarting\t\tStarting kubelet.\n  39m\t\t39m\t\t1\t{kubelet 127.0.0.1}\t\t\tNormal\t\tNodeHasSufficientDisk\tNode 127.0.0.1 status is now: NodeHasSufficientDisk\n  39m\t\t39m\t\t1\t{controllermanager }\t\t\tNormal\t\tRegisteredNode\t\tNode 127.0.0.1 event: Registered Node 127.0.0.1 in NodeController"
Mar 24 08:09:26.467: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-dm4h4'
Mar 24 08:09:26.503: INFO: stderr: ""
Mar 24 08:09:26.503: INFO: stdout: "Name:\te2e-tests-kubectl-dm4h4\nLabels:\te2e-framework=kubectl,e2e-run=4525f699-f192-11e5-8186-064a4ed57913\nStatus:\tActive\n\nNo resource quota.\n\nNo resource limits."
[AfterEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:09:26.503: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dm4h4" for this suite.

• [SLOW TEST:30.268 seconds]
[k8s.io] Kubectl client
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  [k8s.io] Kubectl describe
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
    should check if kubectl describe prints relevant information for rc and pods [Conformance]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:672
------------------------------
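The pass criteria here are substring checks over the describe output for the pod, rc, service, node, and namespace in turn. Condensed to the pod case, with names taken from this run:

import subprocess

out = subprocess.check_output(
    ["kubectl", "--server=127.0.0.1:8080",
     "--namespace=e2e-tests-kubectl-dm4h4",
     "describe", "pod", "redis-master-bauoh"], text=True)
for field in ("Name:", "Namespace:", "Node:", "Status:", "IP:", "Controllers:"):
    assert field in out, field + " missing from describe output"
------------------------------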
SSSSSS
------------------------------
[k8s.io] hostPath 
  should give a volume the correct mode [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/host_path.go:62
[BeforeEach] [k8s.io] hostPath
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:09:46.511: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:09:46.513: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-hostpath-o7i8u
Mar 24 08:09:46.514: INFO: Get service account default in ns e2e-tests-hostpath-o7i8u failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:09:48.516: INFO: Service account default in ns e2e-tests-hostpath-o7i8u with secrets found. (2.002199673s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:09:48.516: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-hostpath-o7i8u
Mar 24 08:09:48.516: INFO: Service account default in ns e2e-tests-hostpath-o7i8u with secrets found. (853.302µs)
[BeforeEach] [k8s.io] hostPath
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/host_path.go:45
[It] should give a volume the correct mode [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/host_path.go:62
STEP: Creating a pod to test hostPath mode
Mar 24 08:09:48.522: INFO: Waiting up to 5m0s for pod pod-host-path-test status to be success or failure
Mar 24 08:09:48.524: INFO: No Status.Info for container 'test-container-1' in pod 'pod-host-path-test' yet
Mar 24 08:09:48.524: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-o7i8u' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.274756ms elapsed)
Mar 24 08:09:50.534: INFO: Nil State.Terminated for container 'test-container-1' in pod 'pod-host-path-test' in namespace 'e2e-tests-hostpath-o7i8u' so far
Mar 24 08:09:50.534: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-o7i8u' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.012417234s elapsed)
Mar 24 08:09:52.541: INFO: Nil State.Terminated for container 'test-container-1' in pod 'pod-host-path-test' in namespace 'e2e-tests-hostpath-o7i8u' so far
Mar 24 08:09:52.541: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-o7i8u' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.019816413s elapsed)
Mar 24 08:09:54.543: INFO: Nil State.Terminated for container 'test-container-1' in pod 'pod-host-path-test' in namespace 'e2e-tests-hostpath-o7i8u' so far
Mar 24 08:09:54.543: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-o7i8u' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.021879613s elapsed)
STEP: Saw pod success
Mar 24 08:09:56.546: INFO: Waiting up to 5m0s for pod pod-host-path-test status to be success or failure
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-host-path-test container test-container-1: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 1481003842
mode of file "/test-volume": dtrwxrwxrwx


[AfterEach] [k8s.io] hostPath
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:09:56.564: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-o7i8u" for this suite.

• [SLOW TEST:15.059 seconds]
[k8s.io] hostPath
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should give a volume the correct mode [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/host_path.go:62
------------------------------
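The pod's containers stat the hostPath mount and print its filesystem magic number ("mount type") and permission bits; the "dtrwxrwxrwx" line above is that mode check as rendered by the test image. From inside the pod, the mode half of the check is roughly:

import os

st = os.stat("/test-volume")        # mount path used by this spec
print(oct(st.st_mode & 0o7777))     # permission bits the test compares
------------------------------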
[k8s.io] Networking 
  should provide unchanging, static URL paths for kubernetes api services [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:75
[BeforeEach] [k8s.io] Networking
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:10:01.570: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:10:01.572: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-0f0uf
Mar 24 08:10:01.573: INFO: Get service account default in ns e2e-tests-nettest-0f0uf failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:10:03.575: INFO: Service account default in ns e2e-tests-nettest-0f0uf with secrets found. (2.002850903s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:10:03.575: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-0f0uf
Mar 24 08:10:03.576: INFO: Service account default in ns e2e-tests-nettest-0f0uf with secrets found. (858.87µs)
[BeforeEach] [k8s.io] Networking
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
STEP: Executing a successful http request from the external internet
[It] should provide unchanging, static URL paths for kubernetes api services [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:75
STEP: testing: /validate
STEP: testing: /healthz
[AfterEach] [k8s.io] Networking
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:10:03.615: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-nettest-0f0uf" for this suite.

• [SLOW TEST:7.052 seconds]
[k8s.io] Networking
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should provide unchanging, static URL paths for kubernetes api services [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:75
------------------------------
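This spec GETs two fixed apiserver paths and requires both to answer 200, which is why it logs almost nothing. The equivalent check against the apiserver from this run:

import urllib.request

for path in ("/validate", "/healthz"):
    code = urllib.request.urlopen("http://127.0.0.1:8080" + path).getcode()
    assert code == 200, "%s returned %d" % (path, code)
    print(path, "ok")
------------------------------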
SSS
------------------------------
[k8s.io] ClusterDns [Feature:Example] 
  should create pod that uses dns [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/example_cluster_dns.go:150
[BeforeEach] [k8s.io] ClusterDns [Feature:Example]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:10:08.622: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:10:08.625: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-cluster-dns-grbl6
Mar 24 08:10:08.626: INFO: Get service account default in ns e2e-tests-cluster-dns-grbl6 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:10:10.627: INFO: Service account default in ns e2e-tests-cluster-dns-grbl6 with secrets found. (2.002344728s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:10:10.627: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-cluster-dns-grbl6
Mar 24 08:10:10.628: INFO: Service account default in ns e2e-tests-cluster-dns-grbl6 with secrets found. (884.261µs)
[BeforeEach] [k8s.io] ClusterDns [Feature:Example]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/example_cluster_dns.go:50
[It] should create pod that uses dns [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/example_cluster_dns.go:150
Mar 24 08:10:10.631: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-dnsexample0-7wc9j
Mar 24 08:10:10.631: INFO: Get service account default in ns e2e-tests-dnsexample0-7wc9j failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:10:12.633: INFO: Service account default in ns e2e-tests-dnsexample0-7wc9j with secrets found. (2.002383375s)
Mar 24 08:10:12.634: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-dnsexample1-996pl
Mar 24 08:10:12.636: INFO: Get service account default in ns e2e-tests-dnsexample1-996pl failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:10:14.638: INFO: Service account default in ns e2e-tests-dnsexample1-996pl with secrets found. (2.003588661s)
Mar 24 08:10:14.638: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config create -f /root/src/k8s.io/kubernetes/examples/cluster-dns/dns-backend-rc.yaml --namespace=e2e-tests-dnsexample0-7wc9j'
Mar 24 08:10:14.690: INFO: stderr: ""
Mar 24 08:10:14.690: INFO: stdout: "replicationcontroller \"dns-backend\" created"
Mar 24 08:10:14.690: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config create -f /root/src/k8s.io/kubernetes/examples/cluster-dns/dns-backend-rc.yaml --namespace=e2e-tests-dnsexample1-996pl'
Mar 24 08:10:14.776: INFO: stderr: ""
Mar 24 08:10:14.776: INFO: stdout: "replicationcontroller \"dns-backend\" created"
Mar 24 08:10:14.776: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config create -f /root/src/k8s.io/kubernetes/examples/cluster-dns/dns-backend-service.yaml --namespace=e2e-tests-dnsexample0-7wc9j'
Mar 24 08:10:14.882: INFO: stderr: ""
Mar 24 08:10:14.882: INFO: stdout: "service \"dns-backend\" created"
Mar 24 08:10:14.882: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config create -f /root/src/k8s.io/kubernetes/examples/cluster-dns/dns-backend-service.yaml --namespace=e2e-tests-dnsexample1-996pl'
Mar 24 08:10:14.956: INFO: stderr: ""
Mar 24 08:10:14.956: INFO: stdout: "service \"dns-backend\" created"
Mar 24 08:10:29.960: INFO: Service dns-backend in namespace e2e-tests-dnsexample0-7wc9j found.
Mar 24 08:10:34.962: INFO: Service dns-backend in namespace e2e-tests-dnsexample1-996pl found.
STEP: trying to dial each unique pod
Mar 24 08:10:34.999: INFO: Controller dns-backend: Got non-empty result from replica 1 [dns-backend-8hgyf]: "Hello World!", 1 of 1 required successes so far
Mar 24 08:10:34.999: INFO: found 1 backend pods responding in namespace e2e-tests-dnsexample0-7wc9j
STEP: trying to dial the service e2e-tests-dnsexample0-7wc9j.dns-backend via the proxy
Mar 24 08:10:35.012: INFO: Service dns-backend: found nonempty answer: Hello World!
STEP: trying to dial each unique pod
Mar 24 08:10:35.046: INFO: Controller dns-backend: Got non-empty result from replica 1 [dns-backend-leenv]: "Hello World!", 1 of 1 required successes so far
Mar 24 08:10:35.046: INFO: found 1 backend pods responding in namespace e2e-tests-dnsexample1-996pl
STEP: trying to dial the service e2e-tests-dnsexample1-996pl.dns-backend via the proxy
Mar 24 08:10:35.137: INFO: Service dns-backend: found nonempty answer: Hello World!
Mar 24 08:10:35.140: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config exec dns-backend-8hgyf --namespace=e2e-tests-dnsexample0-7wc9j -- python -c 
import socket
try:
	socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-7wc9j')
	print 'ok'
except:
	print 'err''
Mar 24 08:10:36.180: INFO: stderr: ""
Mar 24 08:10:36.180: INFO: stdout: "err"
Mar 24 08:10:38.180: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config exec dns-backend-8hgyf --namespace=e2e-tests-dnsexample0-7wc9j -- python -c 
import socket
try:
	socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-7wc9j')
	print 'ok'
except:
	print 'err''
Mar 24 08:10:38.896: INFO: stderr: ""
Mar 24 08:10:38.896: INFO: stdout: "err"
Mar 24 08:10:40.896: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config exec dns-backend-8hgyf --namespace=e2e-tests-dnsexample0-7wc9j -- python -c 
import socket
try:
	socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-7wc9j')
	print 'ok'
except:
	print 'err''
Mar 24 08:10:41.330: INFO: stderr: ""
Mar 24 08:10:41.330: INFO: stdout: "err"
Mar 24 08:10:43.330: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config exec dns-backend-8hgyf --namespace=e2e-tests-dnsexample0-7wc9j -- python -c 
import socket
try:
	socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-7wc9j')
	print 'ok'
except:
	print 'err''
Mar 24 08:10:43.955: INFO: stderr: ""
Mar 24 08:10:43.955: INFO: stdout: "err"
Mar 24 08:10:45.955: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config exec dns-backend-8hgyf --namespace=e2e-tests-dnsexample0-7wc9j -- python -c 
import socket
try:
	socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-7wc9j')
	print 'ok'
except:
	print 'err''
Mar 24 08:10:46.823: INFO: stderr: ""
Mar 24 08:10:46.823: INFO: stdout: "err"
Mar 24 08:10:48.823: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config exec dns-backend-8hgyf --namespace=e2e-tests-dnsexample0-7wc9j -- python -c 
import socket
try:
	socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-7wc9j')
	print 'ok'
except:
	print 'err''
Mar 24 08:10:49.692: INFO: stderr: ""
Mar 24 08:10:49.692: INFO: stdout: "err"
Mar 24 08:10:51.693: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config exec dns-backend-8hgyf --namespace=e2e-tests-dnsexample0-7wc9j -- python -c 
import socket
try:
	socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-7wc9j')
	print 'ok'
except:
	print 'err''
Mar 24 08:10:52.142: INFO: stderr: ""
Mar 24 08:10:52.142: INFO: stdout: "err"
Mar 24 08:10:54.142: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config exec dns-backend-8hgyf --namespace=e2e-tests-dnsexample0-7wc9j -- python -c 
import socket
try:
	socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-7wc9j')
	print 'ok'
except:
	print 'err''
Mar 24 08:10:54.573: INFO: stderr: ""
Mar 24 08:10:54.573: INFO: stdout: "err"
Mar 24 08:10:56.574: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config exec dns-backend-8hgyf --namespace=e2e-tests-dnsexample0-7wc9j -- python -c 
import socket
try:
	socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-7wc9j')
	print 'ok'
except:
	print 'err''
Mar 24 08:10:57.534: INFO: stderr: ""
Mar 24 08:10:57.534: INFO: stdout: "err"
Mar 24 08:10:59.534: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config exec dns-backend-8hgyf --namespace=e2e-tests-dnsexample0-7wc9j -- python -c 
import socket
try:
	socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-7wc9j')
	print 'ok'
except:
	print 'err''
Mar 24 08:11:00.012: INFO: stderr: ""
Mar 24 08:11:00.012: INFO: stdout: "err"
Mar 24 08:11:02.012: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config exec dns-backend-8hgyf --namespace=e2e-tests-dnsexample0-7wc9j -- python -c 
import socket
try:
	socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-7wc9j')
	print 'ok'
except:
	print 'err''
Mar 24 08:11:02.767: INFO: stderr: ""
Mar 24 08:11:02.767: INFO: stdout: "err"
Mar 24 08:11:04.767: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config exec dns-backend-8hgyf --namespace=e2e-tests-dnsexample0-7wc9j -- python -c 
import socket
try:
	socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-7wc9j')
	print 'ok'
except:
	print 'err''
Mar 24 08:11:05.200: INFO: stderr: ""
Mar 24 08:11:05.200: INFO: stdout: "err"
Mar 24 08:11:07.201: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config exec dns-backend-8hgyf --namespace=e2e-tests-dnsexample0-7wc9j -- python -c 
import socket
try:
	socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-7wc9j')
	print 'ok'
except:
	print 'err''
Mar 24 08:11:07.745: INFO: stderr: ""
Mar 24 08:11:07.745: INFO: stdout: "err"
Mar 24 08:11:09.745: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config exec dns-backend-8hgyf --namespace=e2e-tests-dnsexample0-7wc9j -- python -c 
import socket
try:
	socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-7wc9j')
	print 'ok'
except:
	print 'err''
Mar 24 08:11:10.854: INFO: stderr: ""
Mar 24 08:11:10.854: INFO: stdout: "err"
Mar 24 08:11:12.854: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config exec dns-backend-8hgyf --namespace=e2e-tests-dnsexample0-7wc9j -- python -c 
import socket
try:
	socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-7wc9j')
	print 'ok'
except:
	print 'err''
Mar 24 08:11:13.279: INFO: stderr: ""
Mar 24 08:11:13.279: INFO: stdout: "err"
Mar 24 08:11:15.279: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config exec dns-backend-8hgyf --namespace=e2e-tests-dnsexample0-7wc9j -- python -c 
import socket
try:
	socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-7wc9j')
	print 'ok'
except:
	print 'err''
Mar 24 08:11:16.112: INFO: stderr: ""
Mar 24 08:11:16.112: INFO: stdout: "err"
Mar 24 08:11:18.112: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config exec dns-backend-8hgyf --namespace=e2e-tests-dnsexample0-7wc9j -- python -c 
import socket
try:
	socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-7wc9j')
	print 'ok'
except:
	print 'err''
Mar 24 08:11:18.929: INFO: stderr: ""
Mar 24 08:11:18.929: INFO: stdout: "err"
Mar 24 08:11:20.929: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config exec dns-backend-8hgyf --namespace=e2e-tests-dnsexample0-7wc9j -- python -c 
import socket
try:
	socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-7wc9j')
	print 'ok'
except:
	print 'err''
Mar 24 08:11:21.408: INFO: stderr: ""
Mar 24 08:11:21.408: INFO: stdout: "err"
Mar 24 08:11:23.408: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config exec dns-backend-8hgyf --namespace=e2e-tests-dnsexample0-7wc9j -- python -c 
import socket
try:
	socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-7wc9j')
	print 'ok'
except:
	print 'err''
Mar 24 08:11:24.250: INFO: stderr: ""
Mar 24 08:11:24.250: INFO: stdout: "err"
Mar 24 08:11:26.251: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config exec dns-backend-8hgyf --namespace=e2e-tests-dnsexample0-7wc9j -- python -c 
import socket
try:
	socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-7wc9j')
	print 'ok'
except:
	print 'err''
Mar 24 08:11:26.920: INFO: stderr: ""
Mar 24 08:11:26.920: INFO: stdout: "err"
Mar 24 08:11:28.921: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config exec dns-backend-8hgyf --namespace=e2e-tests-dnsexample0-7wc9j -- python -c 
import socket
try:
	socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-7wc9j')
	print 'ok'
except:
	print 'err''
Mar 24 08:11:29.492: INFO: stderr: ""
Mar 24 08:11:29.492: INFO: stdout: "err"
Mar 24 08:11:31.492: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config exec dns-backend-8hgyf --namespace=e2e-tests-dnsexample0-7wc9j -- python -c 
import socket
try:
	socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-7wc9j')
	print 'ok'
except:
	print 'err''
Mar 24 08:11:31.969: INFO: stderr: ""
Mar 24 08:11:31.969: INFO: stdout: "err"
Mar 24 08:11:33.970: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config exec dns-backend-8hgyf --namespace=e2e-tests-dnsexample0-7wc9j -- python -c 
import socket
try:
	socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-7wc9j')
	print 'ok'
except:
	print 'err''
Mar 24 08:11:34.392: INFO: stderr: ""
Mar 24 08:11:34.392: INFO: stdout: "err"
[AfterEach] [k8s.io] ClusterDns [Feature:Example]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
STEP: Collecting events from namespace "e2e-tests-cluster-dns-grbl6".
Mar 24 08:11:36.398: INFO: POD                NODE       PHASE    GRACE  CONDITIONS
Mar 24 08:11:36.398: INFO: dns-backend-8hgyf  127.0.0.1  Running         [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-24 08:10:27 +0000 UTC  }]
Mar 24 08:11:36.398: INFO: dns-backend-leenv  127.0.0.1  Running         [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-24 08:10:27 +0000 UTC  }]
Mar 24 08:11:36.398: INFO: 
Mar 24 08:11:36.399: INFO: 
Logging node info for node 127.0.0.1
Mar 24 08:11:36.401: INFO: Node Info: &{{ } {127.0.0.1   /api/v1/nodes/127.0.0.1 3601f765-f192-11e5-a8cf-064a4ed57913 3416 0 2016-03-24 07:29:57 +0000 UTC <nil> <nil> map[kubernetes.io/e2e-4f3052eb-f196-11e5-8186-064a4ed57913:42 kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1  false} {map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}]  [{OutOfDisk False 2016-03-24 08:11:33 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-24 08:11:33 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {0af04d3c78a943ae8f3cc26602e374f2 EC286296-F1A1-3AB3-2979-B08D0FE5BF4A fdfc86c6-a0ae-41b4-8ab8-29b0b8fd1e8b 3.10.0-229.20.1.el7.x86_64 CentOS Linux 7 (Core) hyper://0.5.0 v1.3.0-alpha.0.854+d9aa83a25cdc21 v1.3.0-alpha.0.854+d9aa83a25cdc21} []}}
Mar 24 08:11:36.401: INFO: 
Logging kubelet events for node 127.0.0.1
Mar 24 08:11:36.402: INFO: 
Logging pods the kubelet thinks are on node 127.0.0.1
Mar 24 08:11:36.404: INFO: dns-backend-8hgyf started at 2016-03-24 08:10:14 +0000 UTC (1 container statuses recorded)
Mar 24 08:11:36.404: INFO: 	Container dns-backend ready: true, restart count 0
Mar 24 08:11:36.404: INFO: dns-backend-leenv started at 2016-03-24 08:10:14 +0000 UTC (1 container statuses recorded)
Mar 24 08:11:36.404: INFO: 	Container dns-backend ready: true, restart count 0
Mar 24 08:11:36.424: INFO: 
Latency metrics for node 127.0.0.1
Mar 24 08:11:36.424: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:21.295548s}
Mar 24 08:11:36.424: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:12.584317s}
Mar 24 08:11:36.424: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-cluster-dns-grbl6" for this suite.
STEP: Destroying namespace "e2e-tests-dnsexample0-7wc9j" for this suite.
STEP: Destroying namespace "e2e-tests-dnsexample1-996pl" for this suite.

• Failure [132.820 seconds]
[k8s.io] ClusterDns [Feature:Example]
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should create pod that uses dns [Conformance] [It]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/example_cluster_dns.go:150

  waiting for output from pod exec
  Expected error:
      <*errors.errorString | 0xc820b1efd0>: {
          s: "Failed to find \"ok\", last result: \"err\"",
      }
      Failed to find "ok", last result: "err"
  not to have occurred

  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/example_cluster_dns.go:129
------------------------------
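The failure above is the in-pod lookup of dns-backend.e2e-tests-dnsexample0-7wc9j never resolving: every retry of the embedded python -c snippet printed "err", which points at cluster DNS not serving lookups rather than at the backend pods, since both pods were Running and answered "Hello World!" via the proxy. A sketch of re-running the same check by hand, with the pod name, namespace, and kubeconfig taken from this log (substitute your own):

    # retry_dns_lookup.py -- repeat the e2e test's in-pod gethostbyname check.
    import subprocess
    import time

    POD = "dns-backend-8hgyf"            # pod name from the log above
    NS = "e2e-tests-dnsexample0-7wc9j"   # namespace from the log above
    SNIPPET = (
        "import socket\n"
        "try:\n"
        "    socket.gethostbyname('dns-backend.%s')\n"
        "    print('ok')\n"
        "except Exception:\n"
        "    print('err')\n"
    ) % NS

    for attempt in range(1, 31):
        # Same kubectl exec the suite runs; check_output raises if exec fails.
        out = subprocess.check_output(
            ["kubectl", "--kubeconfig", "/root/.kube/config",
             "exec", POD, "--namespace", NS,
             "--", "python", "-c", SNIPPET]).decode().strip()
        print("attempt %d: %s" % (attempt, out))
        if out == "ok":
            break
        time.sleep(2)

------------------------------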
[k8s.io] Pods 
  should be submitted and removed [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:406
[BeforeEach] [k8s.io] Pods
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:12:21.443: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:12:21.445: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-e7ztu
Mar 24 08:12:21.446: INFO: Get service account default in ns e2e-tests-pods-e7ztu failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:12:23.447: INFO: Service account default in ns e2e-tests-pods-e7ztu with secrets found. (2.002448056s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:12:23.447: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-e7ztu
Mar 24 08:12:23.448: INFO: Service account default in ns e2e-tests-pods-e7ztu with secrets found. (925.637µs)
[It] should be submitted and removed [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:406
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Mar 24 08:12:23.457: INFO: Waiting up to 5m0s for pod pod-update-2350ccfc-f198-11e5-8186-064a4ed57913 status to be running
Mar 24 08:12:23.458: INFO: Waiting for pod pod-update-2350ccfc-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-e7ztu' status to be 'running'(found phase: "Pending", readiness: false) (1.039058ms elapsed)
Mar 24 08:12:25.460: INFO: Waiting for pod pod-update-2350ccfc-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-e7ztu' status to be 'running'(found phase: "Pending", readiness: false) (2.003221769s elapsed)
Mar 24 08:12:27.464: INFO: Waiting for pod pod-update-2350ccfc-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-e7ztu' status to be 'running'(found phase: "Pending", readiness: false) (4.007238082s elapsed)
Mar 24 08:12:29.466: INFO: Waiting for pod pod-update-2350ccfc-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-e7ztu' status to be 'running'(found phase: "Pending", readiness: false) (6.009271931s elapsed)
Mar 24 08:12:31.468: INFO: Found pod 'pod-update-2350ccfc-f198-11e5-8186-064a4ed57913' on node '127.0.0.1'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Mar 24 08:12:36.480: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:12:36.483: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-e7ztu" for this suite.

• [SLOW TEST:20.048 seconds]
[k8s.io] Pods
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should be submitted and removed [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:406
------------------------------
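The pass above verifies the full pod lifecycle through the watch API: creation observed, pod Running on a node, then a graceful deletion observed by the kubelet. A rough by-hand equivalent using plain kubectl, with the namespace and pause image taken from this log (the pod name is illustrative; running `kubectl get pods --watch` in another terminal shows both the add and delete events):

    # pod_lifecycle.py -- submit a pause pod, then delete it gracefully.
    import json
    import subprocess

    NS = "e2e-tests-pods-e7ztu"  # namespace from the log; any test ns works
    pod = {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-lifecycle-demo"},
        "spec": {"containers": [{
            "name": "pause",
            "image": "gcr.io/google_containers/pause:2.0",
        }]},
    }

    # Submit the pod, as the test's "submitting the pod to kubernetes" step does.
    p = subprocess.Popen(["kubectl", "create", "--namespace", NS, "-f", "-"],
                         stdin=subprocess.PIPE)
    p.communicate(json.dumps(pod).encode())

    # Delete with the default 30s grace period, mirroring the graceful-delete step.
    subprocess.check_call(["kubectl", "delete", "pod", "pod-lifecycle-demo",
                           "--namespace", NS])

------------------------------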
[k8s.io] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1018
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:12:41.491: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:12:41.493: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-v649y
Mar 24 08:12:41.494: INFO: Get service account default in ns e2e-tests-kubectl-v649y failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:12:43.495: INFO: Service account default in ns e2e-tests-kubectl-v649y with secrets found. (2.002630511s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:12:43.495: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-v649y
Mar 24 08:12:43.496: INFO: Service account default in ns e2e-tests-kubectl-v649y with secrets found. (924.368µs)
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:122
[BeforeEach] [k8s.io] Kubectl run deployment
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:986
[It] should create a deployment from an image [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1018
STEP: running the image gcr.io/google_containers/nginx:1.7.9
Mar 24 08:12:43.497: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=gcr.io/google_containers/nginx:1.7.9 --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-v649y'
Mar 24 08:12:43.514: INFO: stderr: ""
Mar 24 08:12:43.514: INFO: stdout: "deployment \"e2e-test-nginx-deployment\" created"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:990
Mar 24 08:12:45.520: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-v649y'
Mar 24 08:12:47.580: INFO: stderr: ""
Mar 24 08:12:47.580: INFO: stdout: "deployment \"e2e-test-nginx-deployment\" deleted"
[AfterEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:12:47.580: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-v649y" for this suite.

• [SLOW TEST:26.101 seconds]
[k8s.io] Kubectl client
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  [k8s.io] Kubectl run deployment
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
    should create a deployment from an image [Conformance]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1018
------------------------------
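The deployment check above is driven entirely by kubectl: the suite creates the deployment, verifies a pod carrying the generator's run= label came up, and deletes the deployment again. A sketch of the same three steps, reusing the image, generator, and namespace from this log:

    # run_deployment.py -- mirror the kubectl-run deployment check.
    import subprocess

    NS = "e2e-tests-kubectl-v649y"
    NAME = "e2e-test-nginx-deployment"

    # Create the deployment with the same generator the test uses.
    subprocess.check_call(["kubectl", "run", NAME,
                           "--image=gcr.io/google_containers/nginx:1.7.9",
                           "--generator=deployment/v1beta1",
                           "--namespace", NS])
    # The deployment/v1beta1 generator labels its pods with run=<name>.
    subprocess.check_call(["kubectl", "get", "pods", "-l", "run=" + NAME,
                           "--namespace", NS])
    # Clean up, as the AfterEach block does.
    subprocess.check_call(["kubectl", "delete", "deployment", NAME,
                           "--namespace", NS])

------------------------------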
SSSS
------------------------------
[k8s.io] EmptyDir volumes 
  should support (root,0666,tmpfs) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:68
[BeforeEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:13:07.591: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:13:07.594: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-nj099
Mar 24 08:13:07.595: INFO: Get service account default in ns e2e-tests-emptydir-nj099 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:13:09.596: INFO: Service account default in ns e2e-tests-emptydir-nj099 with secrets found. (2.002569728s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:13:09.596: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-nj099
Mar 24 08:13:09.597: INFO: Service account default in ns e2e-tests-emptydir-nj099 with secrets found. (852.115µs)
[It] should support (root,0666,tmpfs) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:68
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 24 08:13:09.598: FAIL: Failed to create pod: pods "pod-3ed28454-f198-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden
[AfterEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
STEP: Collecting events from namespace "e2e-tests-emptydir-nj099".
Mar 24 08:13:09.605: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Mar 24 08:13:09.605: INFO: 
Mar 24 08:13:09.606: INFO: 
Logging node info for node 127.0.0.1
Mar 24 08:13:09.607: INFO: Node Info: &{{ } {127.0.0.1   /api/v1/nodes/127.0.0.1 3601f765-f192-11e5-a8cf-064a4ed57913 3532 0 2016-03-24 07:29:57 +0000 UTC <nil> <nil> map[kubernetes.io/e2e-4f3052eb-f196-11e5-8186-064a4ed57913:42 kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1  false} {map[memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI} cpu:{2.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}]  [{OutOfDisk False 2016-03-24 08:13:04 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-24 08:13:04 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {0af04d3c78a943ae8f3cc26602e374f2 EC286296-F1A1-3AB3-2979-B08D0FE5BF4A fdfc86c6-a0ae-41b4-8ab8-29b0b8fd1e8b 3.10.0-229.20.1.el7.x86_64 CentOS Linux 7 (Core) hyper://0.5.0 v1.3.0-alpha.0.854+d9aa83a25cdc21 v1.3.0-alpha.0.854+d9aa83a25cdc21} []}}
Mar 24 08:13:09.607: INFO: 
Logging kubelet events for node 127.0.0.1
Mar 24 08:13:09.608: INFO: 
Logging pods the kubelet thinks are on node 127.0.0.1
Mar 24 08:13:09.627: INFO: 
Latency metrics for node 127.0.0.1
Mar 24 08:13:09.627: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:1m50.001792s}
Mar 24 08:13:09.627: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:21.285972s}
Mar 24 08:13:09.627: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-nj099" for this suite.

• Failure [7.043 seconds]
[k8s.io] EmptyDir volumes
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should support (root,0666,tmpfs) [Conformance] [It]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:68

  Mar 24 08:13:09.598: Failed to create pod: pods "pod-3ed28454-f198-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden

  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685
------------------------------
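The rejection above comes from admission control, not from the emptyDir volume itself: this emptydir test sets pod.Spec.SecurityContext.SELinuxOptions, and an apiserver running the SecurityContextDeny admission plugin (the likely cause here) refuses any pod that sets that field. A minimal spec that should trip the same "forbidden" error, for confirmation (the pod name and level value are illustrative, not from the suite):

    # selinux_reject.py -- emit a pod spec that SecurityContextDeny refuses.
    import json

    pod = {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "selinux-demo"},
        "spec": {
            "securityContext": {
                # Setting any SELinuxOptions value is enough to be rejected.
                "seLinuxOptions": {"level": "s0:c0,c1"},
            },
            "containers": [{
                "name": "pause",
                "image": "gcr.io/google_containers/pause:2.0",
            }],
        },
    }
    # Pipe the output to `kubectl create -f -` to reproduce the rejection.
    print(json.dumps(pod, indent=2))

------------------------------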
[k8s.io] Pods 
  should be schedule with cpu and memory limits [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:263
[BeforeEach] [k8s.io] Pods
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:13:14.635: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:13:14.636: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-ai6ne
Mar 24 08:13:14.638: INFO: Get service account default in ns e2e-tests-pods-ai6ne failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:13:16.639: INFO: Service account default in ns e2e-tests-pods-ai6ne with secrets found. (2.002753842s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:13:16.639: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-ai6ne
Mar 24 08:13:16.640: INFO: Service account default in ns e2e-tests-pods-ai6ne with secrets found. (898.064µs)
[It] should be schedule with cpu and memory limits [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:263
STEP: creating the pod
Mar 24 08:13:16.644: INFO: Waiting up to 5m0s for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 status to be running
Mar 24 08:13:16.647: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3.693289ms elapsed)
Mar 24 08:13:18.649: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2.005778259s elapsed)
Mar 24 08:13:20.651: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4.007891703s elapsed)
Mar 24 08:13:22.654: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (6.009983576s elapsed)
Mar 24 08:13:24.656: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (8.012059013s elapsed)
Mar 24 08:13:26.658: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (10.014158949s elapsed)
Mar 24 08:13:28.660: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (12.016384635s elapsed)
Mar 24 08:13:30.662: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (14.018528514s elapsed)
Mar 24 08:13:32.664: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (16.020566935s elapsed)
Mar 24 08:13:34.666: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (18.022790379s elapsed)
Mar 24 08:13:36.669: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (20.025077285s elapsed)
Mar 24 08:13:38.671: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (22.027174672s elapsed)
Mar 24 08:13:40.673: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (24.029254125s elapsed)
Mar 24 08:13:42.675: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (26.031309833s elapsed)
Mar 24 08:13:44.677: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (28.033360615s elapsed)
Mar 24 08:13:46.679: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (30.03543649s elapsed)
Mar 24 08:13:48.681: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (32.03743794s elapsed)
Mar 24 08:13:50.683: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (34.039745657s elapsed)
Mar 24 08:13:52.685: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (36.04185884s elapsed)
Mar 24 08:13:54.687: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (38.043948048s elapsed)
Mar 24 08:13:56.690: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (40.045996775s elapsed)
Mar 24 08:13:58.692: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (42.048270268s elapsed)
Mar 24 08:14:00.694: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (44.050409149s elapsed)
Mar 24 08:14:02.696: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (46.052472358s elapsed)
Mar 24 08:14:04.698: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (48.054631165s elapsed)
Mar 24 08:14:06.700: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (50.056849119s elapsed)
Mar 24 08:14:08.703: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (52.05905412s elapsed)
Mar 24 08:14:10.705: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (54.061241468s elapsed)
Mar 24 08:14:12.707: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (56.063310661s elapsed)
Mar 24 08:14:14.709: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (58.065532674s elapsed)
Mar 24 08:14:16.711: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m0.067552557s elapsed)
Mar 24 08:14:18.713: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m2.069629986s elapsed)
Mar 24 08:14:20.715: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m4.071897409s elapsed)
Mar 24 08:14:22.717: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m6.073961842s elapsed)
Mar 24 08:14:24.720: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m8.075982281s elapsed)
Mar 24 08:14:26.722: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m10.078051805s elapsed)
Mar 24 08:14:28.724: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m12.080209258s elapsed)
Mar 24 08:14:30.726: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m14.082398699s elapsed)
Mar 24 08:14:32.745: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m16.101586787s elapsed)
Mar 24 08:14:34.747: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m18.103457778s elapsed)
Mar 24 08:14:36.749: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m20.105515451s elapsed)
Mar 24 08:14:38.751: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m22.107613168s elapsed)
Mar 24 08:14:40.753: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m24.109674115s elapsed)
Mar 24 08:14:42.755: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m26.111899396s elapsed)
Mar 24 08:14:44.757: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m28.113767705s elapsed)
Mar 24 08:14:46.759: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m30.1159415s elapsed)
Mar 24 08:14:48.762: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m32.118082721s elapsed)
Mar 24 08:14:50.764: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m34.12023364s elapsed)
Mar 24 08:14:52.766: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m36.122415195s elapsed)
Mar 24 08:14:54.772: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m38.127974506s elapsed)
Mar 24 08:14:56.774: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m40.130075261s elapsed)
Mar 24 08:14:58.776: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m42.13222131s elapsed)
Mar 24 08:15:00.778: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m44.134429905s elapsed)
Mar 24 08:15:02.780: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m46.136566667s elapsed)
Mar 24 08:15:04.782: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m48.138644792s elapsed)
Mar 24 08:15:06.784: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m50.140710807s elapsed)
Mar 24 08:15:08.789: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m52.14522717s elapsed)
Mar 24 08:15:10.791: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m54.147503527s elapsed)
Mar 24 08:15:12.793: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m56.149538246s elapsed)
Mar 24 08:15:14.795: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (1m58.15163517s elapsed)
Mar 24 08:15:16.797: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m0.153677876s elapsed)
Mar 24 08:15:18.799: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m2.155791399s elapsed)
Mar 24 08:15:20.801: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m4.15794115s elapsed)
Mar 24 08:15:22.803: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m6.159945188s elapsed)
Mar 24 08:15:24.806: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m8.162031556s elapsed)
Mar 24 08:15:26.808: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m10.164113439s elapsed)
Mar 24 08:15:28.810: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m12.166189645s elapsed)
Mar 24 08:15:30.812: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m14.168286391s elapsed)
Mar 24 08:15:32.814: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m16.170368878s elapsed)
Mar 24 08:15:34.816: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m18.172456737s elapsed)
Mar 24 08:15:36.818: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m20.174693678s elapsed)
Mar 24 08:15:38.820: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m22.176926401s elapsed)
Mar 24 08:15:40.823: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m24.179035417s elapsed)
Mar 24 08:15:42.825: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m26.181074258s elapsed)
Mar 24 08:15:44.827: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m28.183278254s elapsed)
Mar 24 08:15:46.829: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m30.185331423s elapsed)
Mar 24 08:15:48.831: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m32.187522051s elapsed)
Mar 24 08:15:50.833: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m34.189591977s elapsed)
Mar 24 08:15:52.835: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m36.191639804s elapsed)
Mar 24 08:15:54.837: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m38.193651555s elapsed)
Mar 24 08:15:56.839: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m40.195718401s elapsed)
Mar 24 08:15:58.841: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m42.197725362s elapsed)
Mar 24 08:16:00.843: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m44.199855729s elapsed)
Mar 24 08:16:02.845: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m46.201908794s elapsed)
Mar 24 08:16:04.847: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m48.203908725s elapsed)
Mar 24 08:16:06.850: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m50.20609306s elapsed)
Mar 24 08:16:08.852: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m52.208139688s elapsed)
Mar 24 08:16:10.854: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m54.210233586s elapsed)
Mar 24 08:16:12.856: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m56.212359575s elapsed)
Mar 24 08:16:14.858: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (2m58.214585733s elapsed)
Mar 24 08:16:16.860: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m0.216597257s elapsed)
Mar 24 08:16:18.862: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m2.218626451s elapsed)
Mar 24 08:16:20.864: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m4.220633396s elapsed)
Mar 24 08:16:22.866: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m6.222797754s elapsed)
Mar 24 08:16:24.868: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m8.224886273s elapsed)
Mar 24 08:16:26.871: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m10.22698171s elapsed)
Mar 24 08:16:28.873: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m12.229019789s elapsed)
Mar 24 08:16:30.875: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m14.231112742s elapsed)
Mar 24 08:16:32.877: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m16.233128543s elapsed)
Mar 24 08:16:34.879: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m18.235146752s elapsed)
Mar 24 08:16:36.881: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m20.237273969s elapsed)
Mar 24 08:16:38.883: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m22.239430749s elapsed)
Mar 24 08:16:40.885: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m24.241542428s elapsed)
Mar 24 08:16:42.887: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m26.243669319s elapsed)
Mar 24 08:16:44.889: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m28.245706058s elapsed)
Mar 24 08:16:46.891: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m30.247771879s elapsed)
Mar 24 08:16:48.893: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m32.24989659s elapsed)
Mar 24 08:16:50.895: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m34.251937217s elapsed)
Mar 24 08:16:52.897: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m36.253907809s elapsed)
Mar 24 08:16:54.900: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m38.256046328s elapsed)
Mar 24 08:16:56.902: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m40.258053939s elapsed)
Mar 24 08:16:58.904: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m42.260167724s elapsed)
Mar 24 08:17:00.906: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m44.262292529s elapsed)
Mar 24 08:17:02.908: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m46.264378719s elapsed)
Mar 24 08:17:04.910: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m48.266370515s elapsed)
Mar 24 08:17:06.912: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m50.268408664s elapsed)
Mar 24 08:17:08.917: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m52.273274959s elapsed)
Mar 24 08:17:10.919: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m54.275451949s elapsed)
Mar 24 08:17:12.921: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m56.27750723s elapsed)
Mar 24 08:17:14.923: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (3m58.279570578s elapsed)
Mar 24 08:17:16.925: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m0.28174032s elapsed)
Mar 24 08:17:18.927: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m2.283708404s elapsed)
Mar 24 08:17:20.929: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m4.285871606s elapsed)
Mar 24 08:17:22.931: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m6.287965945s elapsed)
Mar 24 08:17:24.934: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m8.290154577s elapsed)
Mar 24 08:17:26.936: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m10.292407075s elapsed)
Mar 24 08:17:28.938: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m12.294415415s elapsed)
Mar 24 08:17:30.940: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m14.296489235s elapsed)
Mar 24 08:17:32.942: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m16.298537727s elapsed)
Mar 24 08:17:34.944: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m18.300597601s elapsed)
Mar 24 08:17:36.946: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m20.302593437s elapsed)
Mar 24 08:17:38.948: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m22.304545224s elapsed)
Mar 24 08:17:40.950: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m24.30643216s elapsed)
Mar 24 08:17:42.952: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m26.308468694s elapsed)
Mar 24 08:17:44.954: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m28.310605753s elapsed)
Mar 24 08:17:46.956: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m30.312657166s elapsed)
Mar 24 08:17:48.958: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m32.314671679s elapsed)
Mar 24 08:17:50.960: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m34.316735686s elapsed)
Mar 24 08:17:52.962: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m36.318784401s elapsed)
Mar 24 08:17:54.964: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m38.32089601s elapsed)
Mar 24 08:17:56.966: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m40.322939168s elapsed)
Mar 24 08:17:58.969: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m42.325115214s elapsed)
Mar 24 08:18:00.971: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m44.327164967s elapsed)
Mar 24 08:18:02.973: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m46.329222818s elapsed)
Mar 24 08:18:04.975: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m48.331283002s elapsed)
Mar 24 08:18:06.977: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m50.333307202s elapsed)
Mar 24 08:18:08.979: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m52.335377508s elapsed)
Mar 24 08:18:10.981: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m54.337440672s elapsed)
Mar 24 08:18:12.983: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m56.339464021s elapsed)
Mar 24 08:18:14.985: INFO: Waiting for pod pod-update-4305389b-f198-11e5-8186-064a4ed57913 in namespace 'e2e-tests-pods-ai6ne' status to be 'running'(found phase: "Pending", readiness: false) (4m58.341540046s elapsed)
Mar 24 08:18:16.985: INFO: Unexpected error occurred: gave up waiting for pod 'pod-update-4305389b-f198-11e5-8186-064a4ed57913' to be 'running' after 5m0s
[AfterEach] [k8s.io] Pods
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
STEP: Collecting events from namespace "e2e-tests-pods-ai6ne".
Mar 24 08:18:16.995: INFO: At 2016-03-24 08:13:16 +0000 UTC - event for pod-update-4305389b-f198-11e5-8186-064a4ed57913: {default-scheduler } Scheduled: Successfully assigned pod-update-4305389b-f198-11e5-8186-064a4ed57913 to 127.0.0.1
Mar 24 08:18:16.995: INFO: At 2016-03-24 08:13:16 +0000 UTC - event for pod-update-4305389b-f198-11e5-8186-064a4ed57913: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:2.0" already present on machine
Mar 24 08:18:16.995: INFO: At 2016-03-24 08:13:17 +0000 UTC - event for pod-update-4305389b-f198-11e5-8186-064a4ed57913: {kubelet 127.0.0.1} FailedSync: Error syncing pod, skipping: failed to SyncPod: VM response data is nil

Mar 24 08:18:16.995: INFO: At 2016-03-24 08:14:43 +0000 UTC - event for pod-update-4305389b-f198-11e5-8186-064a4ed57913: {kubelet 127.0.0.1} FailedSync: Error syncing pod, skipping: failed to SyncPod: Conflict. The name "/kube_43056d59-f198-11e5-a8cf-064a4ed57913_pod-update-4305389b-f198-11e5-8186-064a4ed57913_e2e-tests-pods-ai6ne_nginx_0_28bed539" is already in use by container 7edeb56e1aca0909f2f9f8fbb474ec4a06e3a934469d3a43331b4d3d55712e60. You have to remove (or rename) that container to be able to reuse that name.

Mar 24 08:18:16.996: INFO: POD                                              NODE       PHASE    GRACE  CONDITIONS
Mar 24 08:18:16.996: INFO: pod-update-4305389b-f198-11e5-8186-064a4ed57913  127.0.0.1  Pending  30s    [{Ready False 0001-01-01 00:00:00 +0000 UTC 2016-03-24 08:13:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]}]
Mar 24 08:18:16.996: INFO: 
Mar 24 08:18:16.997: INFO: 
Logging node info for node 127.0.0.1
Mar 24 08:18:16.999: INFO: Node Info: &{{ } {127.0.0.1   /api/v1/nodes/127.0.0.1 3601f765-f192-11e5-a8cf-064a4ed57913 3627 0 2016-03-24 07:29:57 +0000 UTC <nil> <nil> map[kubernetes.io/e2e-4f3052eb-f196-11e5-8186-064a4ed57913:42 kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1  false} {map[pods:{110.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI}] map[memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI} cpu:{2.000 DecimalSI}]  [{OutOfDisk False 2016-03-24 08:18:16 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-24 08:18:16 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {0af04d3c78a943ae8f3cc26602e374f2 EC286296-F1A1-3AB3-2979-B08D0FE5BF4A fdfc86c6-a0ae-41b4-8ab8-29b0b8fd1e8b 3.10.0-229.20.1.el7.x86_64 CentOS Linux 7 (Core) hyper://0.5.0 v1.3.0-alpha.0.854+d9aa83a25cdc21 v1.3.0-alpha.0.854+d9aa83a25cdc21} []}}
Mar 24 08:18:16.999: INFO: 
Logging kubelet events for node 127.0.0.1
Mar 24 08:18:17.000: INFO: 
Logging pods the kubelet thinks are on node 127.0.0.1
Mar 24 08:18:17.005: INFO: pod-update-4305389b-f198-11e5-8186-064a4ed57913 started at 2016-03-24 08:13:16 +0000 UTC (1 container statuses recorded)
Mar 24 08:18:17.005: INFO: 	Container nginx ready: false, restart count 0
Mar 24 08:18:17.025: INFO: 
Latency metrics for node 127.0.0.1
Mar 24 08:18:17.025: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:1m30.073438s}
Mar 24 08:18:17.025: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:12.584317s}
Mar 24 08:18:17.025: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-ai6ne" for this suite.

• Failure [307.398 seconds]
[k8s.io] Pods
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should be schedule with cpu and memory limits [Conformance] [It]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:263

  Expected error:
      <*errors.errorString | 0xc82082e720>: {
          s: "gave up waiting for pod 'pod-update-4305389b-f198-11e5-8186-064a4ed57913' to be 'running' after 5m0s",
      }
      gave up waiting for pod 'pod-update-4305389b-f198-11e5-8186-064a4ed57913' to be 'running' after 5m0s
  not to have occurred

  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:262
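
The failure above is a hyper runtime SyncPod problem, not a test bug: the first sync attempt died with "VM response data is nil", and every retry then collided with the name of the half-created container until the 5m0s timeout expired. A minimal diagnostic sketch of pulling the same events the framework dumps in AfterEach (hedged: it uses a current client-go, which postdates this suite; pod and namespace names are copied from the log):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Fetch the events recorded for the stuck pod; the FailedSync ones
	// carry the runtime error that explains the Pending phase.
	evs, err := cs.CoreV1().Events("e2e-tests-pods-ai6ne").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=pod-update-4305389b-f198-11e5-8186-064a4ed57913",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range evs.Items {
		fmt.Printf("%s %s: %s\n", e.LastTimestamp.Time, e.Reason, e.Message)
	}
}

The conflict event is the actionable one: the stale container it names has to be removed before the kubelet can recreate the pod's container under the same name.
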
------------------------------
S
------------------------------
[k8s.io] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:63
[BeforeEach] version v1
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:18:22.032: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:18:22.035: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-lwmjr
Mar 24 08:18:22.036: INFO: Get service account default in ns e2e-tests-proxy-lwmjr failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:18:24.037: INFO: Service account default in ns e2e-tests-proxy-lwmjr with secrets found. (2.002434029s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:18:24.037: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-lwmjr
Mar 24 08:18:24.038: INFO: Service account default in ns e2e-tests-proxy-lwmjr with secrets found. (821.714µs)
[It] should proxy logs on node using proxy subresource [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:63
Mar 24 08:18:24.042: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.545058ms)
Mar 24 08:18:24.044: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.251997ms)
Mar 24 08:18:24.046: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.371725ms)
Mar 24 08:18:24.049: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.196426ms)
Mar 24 08:18:24.051: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.194154ms)
Mar 24 08:18:24.053: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.252953ms)
Mar 24 08:18:24.055: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.179866ms)
Mar 24 08:18:24.057: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.145728ms)
Mar 24 08:18:24.060: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.276973ms)
Mar 24 08:18:24.062: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.187507ms)
Mar 24 08:18:24.064: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.151557ms)
Mar 24 08:18:24.066: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.289072ms)
Mar 24 08:18:24.069: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.225614ms)
Mar 24 08:18:24.071: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.271423ms)
Mar 24 08:18:24.073: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.212377ms)
Mar 24 08:18:24.075: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.219497ms)
Mar 24 08:18:24.078: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.205498ms)
Mar 24 08:18:24.080: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.151965ms)
Mar 24 08:18:24.082: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.103061ms)
Mar 24 08:18:24.084: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="tallylog">tallylog</a>
<a href="lastlog">lastlog</a>
<a href="ppp/">ppp/</a>
<a hr... (200; 2.229044ms)
[AfterEach] version v1
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:18:24.084: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-lwmjr" for this suite.

• [SLOW TEST:7.059 seconds]
[k8s.io] Proxy
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  version v1
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:39
    should proxy logs on node using proxy subresource [Conformance]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:63
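
Both of this suite's proxy tests are plain GETs routed through the apiserver. A hedged sketch of the subresource form exercised above, using client-go's REST client (current client-go assumed; node name taken from the log):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET the node's /logs/ listing through the proxy subresource,
	// the same URL the test hits twenty times above.
	body, err := cs.CoreV1().RESTClient().Get().
		AbsPath("/api/v1/nodes/127.0.0.1/proxy/logs/").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}

The cadvisor test further down uses the older prefix form instead, /api/v1/proxy/nodes/127.0.0.1:4194/containers/, which reaches an arbitrary port on the node through the same apiserver proxy.
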
------------------------------
[k8s.io] EmptyDir volumes 
  should support (root,0666,default) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:96
[BeforeEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:18:29.091: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:18:29.093: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-67s1o
Mar 24 08:18:29.094: INFO: Get service account default in ns e2e-tests-emptydir-67s1o failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:18:31.096: INFO: Service account default in ns e2e-tests-emptydir-67s1o with secrets found. (2.002659748s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:18:31.096: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-67s1o
Mar 24 08:18:31.096: INFO: Service account default in ns e2e-tests-emptydir-67s1o with secrets found. (839.989µs)
[It] should support (root,0666,default) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:96
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 24 08:18:31.098: FAIL: Failed to create pod: pods "pod-fe73712b-f198-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden
[AfterEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
STEP: Collecting events from namespace "e2e-tests-emptydir-67s1o".
Mar 24 08:18:31.105: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Mar 24 08:18:31.105: INFO: 
Mar 24 08:18:31.106: INFO: 
Logging node info for node 127.0.0.1
Mar 24 08:18:31.107: INFO: Node Info: &{{ } {127.0.0.1   /api/v1/nodes/127.0.0.1 3601f765-f192-11e5-a8cf-064a4ed57913 3650 0 2016-03-24 07:29:57 +0000 UTC <nil> <nil> map[kubernetes.io/e2e-4f3052eb-f196-11e5-8186-064a4ed57913:42 kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1  false} {map[pods:{110.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI}] map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}]  [{OutOfDisk False 2016-03-24 08:18:26 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-24 08:18:26 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {0af04d3c78a943ae8f3cc26602e374f2 EC286296-F1A1-3AB3-2979-B08D0FE5BF4A fdfc86c6-a0ae-41b4-8ab8-29b0b8fd1e8b 3.10.0-229.20.1.el7.x86_64 CentOS Linux 7 (Core) hyper://0.5.0 v1.3.0-alpha.0.854+d9aa83a25cdc21 v1.3.0-alpha.0.854+d9aa83a25cdc21} []}}
Mar 24 08:18:31.107: INFO: 
Logging kubelet events for node 127.0.0.1
Mar 24 08:18:31.108: INFO: 
Logging pods the kubelet thinks are on node 127.0.0.1
Mar 24 08:18:31.131: INFO: 
Latency metrics for node 127.0.0.1
Mar 24 08:18:31.131: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:1m50.001792s}
Mar 24 08:18:31.131: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:1m30.073438s}
Mar 24 08:18:31.131: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-67s1o" for this suite.

• Failure [7.048 seconds]
[k8s.io] EmptyDir volumes
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should support (root,0666,default) [Conformance] [It]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:96

  Mar 24 08:18:31.098: Failed to create pod: pods "pod-fe73712b-f198-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden

  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685
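
This rejection happens at admission time, before the runtime is involved: the emptyDir tests set SELinuxOptions in the pod security context, and an apiserver whose --admission-control list includes the stock SecurityContextDeny plugin (as this cluster's apparently does) refuses any pod carrying that field. A hedged sketch of the offending spec fragment (current API types; the level string is illustrative, not read from the log):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// emptyDirSecurityContext mirrors the kind of pod-level security context
// the emptyDir e2e tests attach. SecurityContextDeny rejects any pod that
// sets SELinuxOptions, producing exactly the "is forbidden" error above.
func emptyDirSecurityContext() *corev1.PodSecurityContext {
	return &corev1.PodSecurityContext{
		SELinuxOptions: &corev1.SELinuxOptions{
			Level: "s0:c0,c1", // illustrative MCS level
		},
	}
}

Every emptyDir failure in this run is this same admission error, so relaxing the admission configuration would clear the whole group.
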
------------------------------
SS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:110
[BeforeEach] [k8s.io] Probing container
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:18:36.139: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:18:36.142: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-container-probe-9bo57
Mar 24 08:18:36.143: INFO: Get service account default in ns e2e-tests-container-probe-9bo57 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:18:38.145: INFO: Service account default in ns e2e-tests-container-probe-9bo57 with secrets found. (2.002942865s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:18:38.145: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-container-probe-9bo57
Mar 24 08:18:38.146: INFO: Service account default in ns e2e-tests-container-probe-9bo57 with secrets found. (795.185µs)
[BeforeEach] [k8s.io] Probing container
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:45
[It] with readiness probe that fails should never be ready and never restart [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:110
[AfterEach] [k8s.io] Probing container
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:21:38.153: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-9bo57" for this suite.

• [SLOW TEST:202.022 seconds]
[k8s.io] Probing container
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  with readiness probe that fails should never be ready and never restart [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:110
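
This one passes: the pod's readiness probe always fails, and the test verifies over roughly three minutes that the pod never becomes Ready and never restarts, since readiness failures only gate service endpoints while liveness failures trigger restarts. A hedged sketch of such a probe (current API types, where the embedded handler is named ProbeHandler rather than this era's Handler; /bin/false as the always-failing command is an assumption, not read from the log):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// failingReadinessProbe exits non-zero on every run, so the kubelet keeps
// the container NotReady indefinitely but never restarts it.
func failingReadinessProbe() *corev1.Probe {
	return &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
		},
		InitialDelaySeconds: 5,
		PeriodSeconds:       5,
	}
}
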
------------------------------
[k8s.io] Pods 
  should *not* be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:713
[BeforeEach] [k8s.io] Pods
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:21:58.161: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:21:58.163: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-zgioq
Mar 24 08:21:58.164: INFO: Get service account default in ns e2e-tests-pods-zgioq failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:22:00.166: INFO: Service account default in ns e2e-tests-pods-zgioq with secrets found. (2.002981055s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:22:00.166: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-zgioq
Mar 24 08:22:00.167: INFO: Service account default in ns e2e-tests-pods-zgioq with secrets found. (824.349µs)
[It] should *not* be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:713
STEP: Creating pod liveness-exec in namespace e2e-tests-pods-zgioq
Mar 24 08:22:00.170: INFO: Waiting up to 5m0s for pod liveness-exec status to be !pending
Mar 24 08:22:00.173: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-zgioq' status to be '!pending'(found phase: "Pending", readiness: false) (2.941579ms elapsed)
Mar 24 08:22:02.175: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-zgioq' status to be '!pending'(found phase: "Pending", readiness: false) (2.005007828s elapsed)
Mar 24 08:22:04.177: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-zgioq' status to be '!pending'(found phase: "Pending", readiness: false) (4.007152661s elapsed)
Mar 24 08:22:06.179: INFO: Saw pod 'liveness-exec' in namespace 'e2e-tests-pods-zgioq' out of pending state (found '"Running"')
Mar 24 08:22:06.179: INFO: Started pod liveness-exec in namespace e2e-tests-pods-zgioq
STEP: checking the pod's current state and verifying that restartCount is present
Mar 24 08:22:06.180: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Pods
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:24:06.309: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-zgioq" for this suite.

• [SLOW TEST:133.162 seconds]
[k8s.io] Pods
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should *not* be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:713
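
Another pass: the container creates /tmp/health and leaves it alone, so the "cat /tmp/health" exec probe succeeds on every period and restartCount stays 0 for the full observation window. Roughly the classic pattern (a sketch under current API types, not the suite's exact pod):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// healthyLivenessContainer writes its health file once and keeps it, so
// every "cat /tmp/health" probe succeeds and no restart is triggered.
func healthyLivenessContainer() corev1.Container {
	return corev1.Container{
		Name:    "liveness",
		Image:   "gcr.io/google_containers/busybox:1.24",
		Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
		LivenessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
			},
			InitialDelaySeconds: 15,
		},
	}
}
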
------------------------------
SSSSS
------------------------------
[k8s.io] Pods 
  should have monotonically increasing restart count [Conformance] [Slow]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:770
[BeforeEach] [k8s.io] Pods
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:24:11.323: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:24:11.326: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-9w2df
Mar 24 08:24:11.327: INFO: Get service account default in ns e2e-tests-pods-9w2df failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:24:13.328: INFO: Service account default in ns e2e-tests-pods-9w2df with secrets found. (2.002000387s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:24:13.328: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-9w2df
Mar 24 08:24:13.329: INFO: Service account default in ns e2e-tests-pods-9w2df with secrets found. (809.3µs)
[It] should have monotonically increasing restart count [Conformance] [Slow]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:770
STEP: Creating pod liveness-http in namespace e2e-tests-pods-9w2df
Mar 24 08:24:13.333: INFO: Waiting up to 5m0s for pod liveness-http status to be !pending
Mar 24 08:24:13.335: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-9w2df' status to be '!pending'(found phase: "Pending", readiness: false) (1.630564ms elapsed)
Mar 24 08:24:15.337: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-9w2df' status to be '!pending'(found phase: "Pending", readiness: false) (2.003586393s elapsed)
Mar 24 08:24:17.339: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-9w2df' status to be '!pending'(found phase: "Pending", readiness: false) (4.006109538s elapsed)
Mar 24 08:24:19.341: INFO: Saw pod 'liveness-http' in namespace 'e2e-tests-pods-9w2df' out of pending state (found '"Running"')
Mar 24 08:24:19.341: INFO: Started pod liveness-http in namespace e2e-tests-pods-9w2df
STEP: checking the pod's current state and verifying that restartCount is present
Mar 24 08:24:19.343: INFO: Initial restart count of pod liveness-http is 0
Mar 24 08:24:41.370: INFO: Restart count of pod e2e-tests-pods-9w2df/liveness-http is now 1 (22.027428826s elapsed)
Mar 24 08:25:01.389: INFO: Restart count of pod e2e-tests-pods-9w2df/liveness-http is now 2 (42.046586788s elapsed)
Mar 24 08:25:21.409: INFO: Restart count of pod e2e-tests-pods-9w2df/liveness-http is now 3 (1m2.066641961s elapsed)
Mar 24 08:25:41.432: INFO: Restart count of pod e2e-tests-pods-9w2df/liveness-http is now 4 (1m22.089119628s elapsed)
Mar 24 08:26:01.452: INFO: Restart count of pod e2e-tests-pods-9w2df/liveness-http is now 5 (1m42.109031873s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Pods
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:26:01.459: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-9w2df" for this suite.

• [SLOW TEST:115.148 seconds]
[k8s.io] Pods
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should have monotonically increasing restart count [Conformance] [Slow]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:770
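
The check behind this test is a simple invariant: while a deliberately failing HTTP liveness probe forces restart after restart (five of them in under two minutes above), the observed restartCount must only ever grow. A hedged polling sketch (current client-go; pod and namespace names copied from the log):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	last := int32(-1)
	for i := 0; i < 30; i++ {
		pod, err := cs.CoreV1().Pods("e2e-tests-pods-9w2df").Get(
			context.TODO(), "liveness-http", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if len(pod.Status.ContainerStatuses) > 0 {
			count := pod.Status.ContainerStatuses[0].RestartCount
			if count < last {
				panic(fmt.Sprintf("restart count went backwards: %d -> %d", last, count))
			}
			last = count
		}
		time.Sleep(10 * time.Second)
	}
}
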
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:99
[BeforeEach] [k8s.io] Variable Expansion
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:26:06.471: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:26:06.474: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-qah8t
Mar 24 08:26:06.476: INFO: Service account default in ns e2e-tests-var-expansion-qah8t had 0 secrets, ignoring for 2s: <nil>
Mar 24 08:26:08.477: INFO: Service account default in ns e2e-tests-var-expansion-qah8t with secrets found. (2.003354503s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:26:08.477: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-qah8t
Mar 24 08:26:08.478: INFO: Service account default in ns e2e-tests-var-expansion-qah8t with secrets found. (845.859µs)
[It] should allow substituting values in a container's command [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:99
STEP: Creating a pod to test substitution in container's command
Mar 24 08:26:08.481: INFO: Waiting up to 5m0s for pod var-expansion-0f1250aa-f19a-11e5-8186-064a4ed57913 status to be success or failure
Mar 24 08:26:08.483: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-0f1250aa-f19a-11e5-8186-064a4ed57913' yet
Mar 24 08:26:08.483: INFO: Waiting for pod var-expansion-0f1250aa-f19a-11e5-8186-064a4ed57913 in namespace 'e2e-tests-var-expansion-qah8t' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.366147ms elapsed)
Mar 24 08:26:10.486: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-0f1250aa-f19a-11e5-8186-064a4ed57913' in namespace 'e2e-tests-var-expansion-qah8t' so far
Mar 24 08:26:10.486: INFO: Waiting for pod var-expansion-0f1250aa-f19a-11e5-8186-064a4ed57913 in namespace 'e2e-tests-var-expansion-qah8t' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.005273587s elapsed)
Mar 24 08:26:12.488: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-0f1250aa-f19a-11e5-8186-064a4ed57913' in namespace 'e2e-tests-var-expansion-qah8t' so far
Mar 24 08:26:12.488: INFO: Waiting for pod var-expansion-0f1250aa-f19a-11e5-8186-064a4ed57913 in namespace 'e2e-tests-var-expansion-qah8t' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.007210633s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod var-expansion-0f1250aa-f19a-11e5-8186-064a4ed57913 container dapi-container: <nil>
STEP: Successfully fetched pod logs:test-value


[AfterEach] [k8s.io] Variable Expansion
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:26:14.503: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-qah8t" for this suite.

• [SLOW TEST:13.041 seconds]
[k8s.io] Variable Expansion
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should allow substituting values in a container's command [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:99
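
The fetched log line "test-value" is the substituted result: the kubelet expands $(VAR) references in a container's command and args from its env before the process starts, so the shell never sees the $(...) form. A hedged sketch of the pattern (container name from the log; image and command wording are illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// expansionContainer echoes an env var through $(...) expansion: the
// kubelet rewrites $(TEST_VAR) to "test-value" before exec, so the shell
// simply runs `echo test-value`, matching the fetched pod log above.
func expansionContainer() corev1.Container {
	return corev1.Container{
		Name:    "dapi-container",
		Image:   "gcr.io/google_containers/busybox:1.24",
		Command: []string{"sh", "-c", "echo $(TEST_VAR)"},
		Env: []corev1.EnvVar{
			{Name: "TEST_VAR", Value: "test-value"},
		},
	}
}
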
------------------------------
[k8s.io] DNS 
  should provide DNS for services [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:330
[BeforeEach] [k8s.io] DNS
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:26:19.512: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:26:19.514: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-dns-1jrew
Mar 24 08:26:19.515: INFO: Get service account default in ns e2e-tests-dns-1jrew failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:26:21.517: INFO: Service account default in ns e2e-tests-dns-1jrew with secrets found. (2.002826799s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:26:21.517: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-dns-1jrew
Mar 24 08:26:21.518: INFO: Service account default in ns e2e-tests-dns-1jrew with secrets found. (797.934µs)
[It] should provide DNS for services [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:330
STEP: Waiting for DNS Service to be Running
Mar 24 08:26:21.518: FAIL: Unexpected number of pods (0) matches the label selector k8s-app=kube-dns,kubernetes.io/cluster-service=true
[AfterEach] [k8s.io] DNS
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
STEP: Collecting events from namespace "e2e-tests-dns-1jrew".
Mar 24 08:26:21.524: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Mar 24 08:26:21.524: INFO: 
Mar 24 08:26:21.525: INFO: 
Logging node info for node 127.0.0.1
Mar 24 08:26:21.527: INFO: Node Info: &{{ } {127.0.0.1   /api/v1/nodes/127.0.0.1 3601f765-f192-11e5-a8cf-064a4ed57913 3824 0 2016-03-24 07:29:57 +0000 UTC <nil> <nil> map[kubernetes.io/e2e-4f3052eb-f196-11e5-8186-064a4ed57913:42 kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1  false} {map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}]  [{OutOfDisk False 2016-03-24 08:26:18 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-24 08:26:18 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {0af04d3c78a943ae8f3cc26602e374f2 EC286296-F1A1-3AB3-2979-B08D0FE5BF4A fdfc86c6-a0ae-41b4-8ab8-29b0b8fd1e8b 3.10.0-229.20.1.el7.x86_64 CentOS Linux 7 (Core) hyper://0.5.0 v1.3.0-alpha.0.854+d9aa83a25cdc21 v1.3.0-alpha.0.854+d9aa83a25cdc21} []}}
Mar 24 08:26:21.527: INFO: 
Logging kubelet events for node 127.0.0.1
Mar 24 08:26:21.528: INFO: 
Logging pods the kubelet thinks are on node 127.0.0.1
Mar 24 08:26:21.548: INFO: 
Latency metrics for node 127.0.0.1
Mar 24 08:26:21.548: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:3m3.096675s}
Mar 24 08:26:21.548: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:3m3.096675s}
Mar 24 08:26:21.548: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-1jrew" for this suite.

• Failure [7.043 seconds]
[k8s.io] DNS
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should provide DNS for services [Conformance] [It]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:330

  Mar 24 08:26:21.518: Unexpected number of pods (0) matches the label selector k8s-app=kube-dns,kubernetes.io/cluster-service=true

  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:229
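
This failure is an environment gap rather than a runtime bug: the DNS test requires a running kube-dns addon pod in kube-system matching the selector, and this single-node cluster has none deployed. A hedged sketch of the same lookup (current client-go; selector copied from the failure message):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The e2e test fails fast when this list comes back empty.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "k8s-app=kube-dns,kubernetes.io/cluster-service=true",
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d kube-dns pods (the test needs at least one)\n", len(pods.Items))
}
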
------------------------------
S
------------------------------
[k8s.io] EmptyDir volumes 
  volume on tmpfs should have the correct mode [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:60
[BeforeEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:26:26.555: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:26:26.557: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-63apq
Mar 24 08:26:26.558: INFO: Get service account default in ns e2e-tests-emptydir-63apq failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:26:28.560: INFO: Service account default in ns e2e-tests-emptydir-63apq with secrets found. (2.002377341s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:26:28.560: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-63apq
Mar 24 08:26:28.561: INFO: Service account default in ns e2e-tests-emptydir-63apq with secrets found. (888.923µs)
[It] volume on tmpfs should have the correct mode [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:60
STEP: Creating a pod to test emptydir volume type on tmpfs
Mar 24 08:26:28.562: FAIL: Failed to create pod: pods "pod-1b0ab83d-f19a-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden
[AfterEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
STEP: Collecting events from namespace "e2e-tests-emptydir-63apq".
Mar 24 08:26:28.569: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Mar 24 08:26:28.569: INFO: 
Mar 24 08:26:28.570: INFO: 
Logging node info for node 127.0.0.1
Mar 24 08:26:28.571: INFO: Node Info: &{{ } {127.0.0.1   /api/v1/nodes/127.0.0.1 3601f765-f192-11e5-a8cf-064a4ed57913 3824 0 2016-03-24 07:29:57 +0000 UTC <nil> <nil> map[kubernetes.io/e2e-4f3052eb-f196-11e5-8186-064a4ed57913:42 kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1  false} {map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}] map[memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI} cpu:{2.000 DecimalSI}]  [{OutOfDisk False 2016-03-24 08:26:18 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-24 08:26:18 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {0af04d3c78a943ae8f3cc26602e374f2 EC286296-F1A1-3AB3-2979-B08D0FE5BF4A fdfc86c6-a0ae-41b4-8ab8-29b0b8fd1e8b 3.10.0-229.20.1.el7.x86_64 CentOS Linux 7 (Core) hyper://0.5.0 v1.3.0-alpha.0.854+d9aa83a25cdc21 v1.3.0-alpha.0.854+d9aa83a25cdc21} []}}
Mar 24 08:26:28.571: INFO: 
Logging kubelet events for node 127.0.0.1
Mar 24 08:26:28.572: INFO: 
Logging pods the kubelet thinks are on node 127.0.0.1
Mar 24 08:26:28.600: INFO: 
Latency metrics for node 127.0.0.1
Mar 24 08:26:28.600: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:3m3.096675s}
Mar 24 08:26:28.600: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:3m3.096675s}
Mar 24 08:26:28.600: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-63apq" for this suite.

• Failure [7.058 seconds]
[k8s.io] EmptyDir volumes
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  volume on tmpfs should have the correct mode [Conformance] [It]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:60

  Mar 24 08:26:28.562: Failed to create pod: pods "pod-1b0ab83d-f19a-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden

  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685
------------------------------
SSS
------------------------------
[k8s.io] Pods 
  should be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:686
[BeforeEach] [k8s.io] Pods
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:26:33.613: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:26:33.616: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-xblpl
Mar 24 08:26:33.618: INFO: Get service account default in ns e2e-tests-pods-xblpl failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:26:35.619: INFO: Service account default in ns e2e-tests-pods-xblpl with secrets found. (2.002985025s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:26:35.619: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-xblpl
Mar 24 08:26:35.620: INFO: Service account default in ns e2e-tests-pods-xblpl with secrets found. (865.539µs)
[It] should be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:686
STEP: Creating pod liveness-exec in namespace e2e-tests-pods-xblpl
Mar 24 08:26:35.625: INFO: Waiting up to 5m0s for pod liveness-exec status to be !pending
Mar 24 08:26:35.627: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-xblpl' status to be '!pending'(found phase: "Pending", readiness: false) (1.257443ms elapsed)
Mar 24 08:26:37.629: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-xblpl' status to be '!pending'(found phase: "Pending", readiness: false) (2.003232974s elapsed)
Mar 24 08:26:39.631: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-xblpl' status to be '!pending'(found phase: "Pending", readiness: false) (4.005220352s elapsed)
Mar 24 08:26:41.633: INFO: Saw pod 'liveness-exec' in namespace 'e2e-tests-pods-xblpl' out of pending state (found '"Running"')
Mar 24 08:26:41.633: INFO: Started pod liveness-exec in namespace e2e-tests-pods-xblpl
STEP: checking the pod's current state and verifying that restartCount is present
Mar 24 08:26:41.634: INFO: Initial restart count of pod liveness-exec is 0
Mar 24 08:28:41.759: FAIL: pod e2e-tests-pods-xblpl/liveness-exec - expected number of restarts: %!t(int=1), found restarts: %!t(int=0)
STEP: deleting the pod
[AfterEach] [k8s.io] Pods
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
STEP: Collecting events from namespace "e2e-tests-pods-xblpl".
Mar 24 08:28:41.782: INFO: At 2016-03-24 08:26:35 +0000 UTC - event for liveness-exec: {default-scheduler } Scheduled: Successfully assigned liveness-exec to 127.0.0.1
Mar 24 08:28:41.782: INFO: At 2016-03-24 08:26:35 +0000 UTC - event for liveness-exec: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/busybox:1.24" already present on machine
Mar 24 08:28:41.782: INFO: At 2016-03-24 08:26:55 +0000 UTC - event for liveness-exec: {kubelet 127.0.0.1} Unhealthy: Liveness probe errored: Exit code 1
Mar 24 08:28:41.792: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Mar 24 08:28:41.792: INFO: 
Mar 24 08:28:41.793: INFO: 
Logging node info for node 127.0.0.1
Mar 24 08:28:41.794: INFO: Node Info: &{{ } {127.0.0.1   /api/v1/nodes/127.0.0.1 3601f765-f192-11e5-a8cf-064a4ed57913 3879 0 2016-03-24 07:29:57 +0000 UTC <nil> <nil> map[kubernetes.io/e2e-4f3052eb-f196-11e5-8186-064a4ed57913:42 kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1  false} {map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}]  [{OutOfDisk False 2016-03-24 08:28:39 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-24 08:28:39 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {0af04d3c78a943ae8f3cc26602e374f2 EC286296-F1A1-3AB3-2979-B08D0FE5BF4A fdfc86c6-a0ae-41b4-8ab8-29b0b8fd1e8b 3.10.0-229.20.1.el7.x86_64 CentOS Linux 7 (Core) hyper://0.5.0 v1.3.0-alpha.0.854+d9aa83a25cdc21 v1.3.0-alpha.0.854+d9aa83a25cdc21} []}}
Mar 24 08:28:41.794: INFO: 
Logging kubelet events for node 127.0.0.1
Mar 24 08:28:41.795: INFO: 
Logging pods the kubelet thinks are on node 127.0.0.1
Mar 24 08:28:41.824: INFO: 
Latency metrics for node 127.0.0.1
Mar 24 08:28:41.824: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-xblpl" for this suite.

• Failure [133.223 seconds]
[k8s.io] Pods
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance] [It]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:686

  Mar 24 08:28:41.759: pod e2e-tests-pods-xblpl/liveness-exec - expected number of restarts: %!t(int=1), found restarts: %!t(int=0)

  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:108
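
Unlike the earlier liveness-exec test, this is a genuine hyper-runtime failure: the kubelet recorded the probe failing ("Liveness probe errored: Exit code 1" at 08:26:55) yet never restarted the container, so restartCount stayed 0 through the whole 2m window. The restart variant differs from the healthy sketch earlier only in its command, which deletes the health file partway through (a sketch; the 10s delay is the conventional example value, not read from the log):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// unhealthyLivenessContainer removes /tmp/health after 10s, so every later
// "cat /tmp/health" probe exits 1 and the kubelet is expected to restart
// the container, which is exactly what did not happen in this run.
func unhealthyLivenessContainer() corev1.Container {
	return corev1.Container{
		Name:    "liveness",
		Image:   "gcr.io/google_containers/busybox:1.24",
		Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
		LivenessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
			},
			InitialDelaySeconds: 15,
		},
	}
}
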
------------------------------
[k8s.io] Downward API volume 
  should update labels on modification [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:95
[BeforeEach] [k8s.io] Downward API volume
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:28:46.836: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:28:46.838: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-qhnmd
Mar 24 08:28:46.839: INFO: Get service account default in ns e2e-tests-downward-api-qhnmd failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:28:48.841: INFO: Service account default in ns e2e-tests-downward-api-qhnmd with secrets found. (2.002329354s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:28:48.841: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-qhnmd
Mar 24 08:28:48.842: INFO: Service account default in ns e2e-tests-downward-api-qhnmd with secrets found. (941.84µs)
[It] should update labels on modification [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:95
STEP: Creating the pod
Mar 24 08:28:48.847: INFO: Waiting up to 5m0s for pod labelsupdate6ea7dd93-f19a-11e5-8186-064a4ed57913 status to be running
Mar 24 08:28:48.849: INFO: Waiting for pod labelsupdate6ea7dd93-f19a-11e5-8186-064a4ed57913 in namespace 'e2e-tests-downward-api-qhnmd' status to be 'running'(found phase: "Pending", readiness: false) (1.782162ms elapsed)
Mar 24 08:28:50.851: INFO: Waiting for pod labelsupdate6ea7dd93-f19a-11e5-8186-064a4ed57913 in namespace 'e2e-tests-downward-api-qhnmd' status to be 'running'(found phase: "Pending", readiness: false) (2.003863867s elapsed)
Mar 24 08:28:52.853: INFO: Waiting for pod labelsupdate6ea7dd93-f19a-11e5-8186-064a4ed57913 in namespace 'e2e-tests-downward-api-qhnmd' status to be 'running'(found phase: "Pending", readiness: false) (4.005807527s elapsed)
Mar 24 08:28:54.855: INFO: Found pod 'labelsupdate6ea7dd93-f19a-11e5-8186-064a4ed57913' on node '127.0.0.1'
STEP: Deleting the pod
[AfterEach] [k8s.io] Downward API volume
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:28:56.875: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qhnmd" for this suite.

• [SLOW TEST:15.051 seconds]
[k8s.io] Downward API volume
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should update labels on modification [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:95
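
The pod in this passing test mounts a downwardAPI volume projecting metadata.labels into a file; the test rewrites the pod's labels and polls until the kubelet refreshes the file's contents. A hedged sketch of the volume (current API types; volume and path names are illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// labelsVolume projects the pod's own labels into a file named "labels".
// The kubelet rewrites the file when the labels change, which is the
// update the test waits for after patching the pod.
func labelsVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "labels",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
				}},
			},
		},
	}
}
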
------------------------------
S
------------------------------
[k8s.io] Proxy version v1 
  should proxy to cadvisor [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:60
[BeforeEach] version v1
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:29:01.887: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:29:01.889: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-2g8lr
Mar 24 08:29:01.890: INFO: Get service account default in ns e2e-tests-proxy-2g8lr failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:29:03.892: INFO: Service account default in ns e2e-tests-proxy-2g8lr with secrets found. (2.002600723s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:29:03.892: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-2g8lr
Mar 24 08:29:03.892: INFO: Service account default in ns e2e-tests-proxy-2g8lr with secrets found. (825.331µs)
[It] should proxy to cadvisor [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:60
Mar 24 08:29:03.898: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 4.467774ms)
Mar 24 08:29:03.900: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 2.062912ms)
Mar 24 08:29:03.902: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 1.99845ms)
Mar 24 08:29:03.904: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 2.022247ms)
Mar 24 08:29:03.907: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 2.109371ms)
Mar 24 08:29:03.909: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 2.00644ms)
Mar 24 08:29:03.911: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 1.950314ms)
Mar 24 08:29:03.912: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 1.894108ms)
Mar 24 08:29:03.914: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 1.952662ms)
Mar 24 08:29:03.916: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 2.015486ms)
Mar 24 08:29:03.918: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 2.005938ms)
Mar 24 08:29:03.920: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 2.002529ms)
Mar 24 08:29:03.922: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 1.943577ms)
Mar 24 08:29:03.924: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 1.914969ms)
Mar 24 08:29:03.926: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 1.968123ms)
Mar 24 08:29:03.928: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 1.962796ms)
Mar 24 08:29:03.930: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 1.949092ms)
Mar 24 08:29:03.932: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 1.94344ms)
Mar 24 08:29:03.934: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 1.948275ms)
Mar 24 08:29:03.937: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/: 
<html>
  <head>
    <title>cAdvisor - /</title>
    
    <link rel="stylesheet" href="../static/... (200; 2.683743ms)
[AfterEach] version v1
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:29:03.937: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-2g8lr" for this suite.

• [SLOW TEST:7.057 seconds]
[k8s.io] Proxy
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  version v1
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:39
    should proxy to cadvisor [Conformance]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:60
------------------------------
S
------------------------------
[k8s.io] EmptyDir volumes 
  should support (root,0777,tmpfs) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:72
[BeforeEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:29:08.944: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:29:08.946: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-72qu1
Mar 24 08:29:08.947: INFO: Get service account default in ns e2e-tests-emptydir-72qu1 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:29:10.948: INFO: Service account default in ns e2e-tests-emptydir-72qu1 with secrets found. (2.002393374s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:29:10.948: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-72qu1
Mar 24 08:29:10.949: INFO: Service account default in ns e2e-tests-emptydir-72qu1 with secrets found. (924.389µs)
[It] should support (root,0777,tmpfs) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:72
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 24 08:29:10.951: FAIL: Failed to create pod: pods "pod-7bd539a2-f19a-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden
[AfterEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
STEP: Collecting events from namespace "e2e-tests-emptydir-72qu1".
Mar 24 08:29:10.957: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Mar 24 08:29:10.957: INFO: 
Mar 24 08:29:10.958: INFO: 
Logging node info for node 127.0.0.1
Mar 24 08:29:10.959: INFO: Node Info: &{{ } {127.0.0.1   /api/v1/nodes/127.0.0.1 3601f765-f192-11e5-a8cf-064a4ed57913 3926 0 2016-03-24 07:29:57 +0000 UTC <nil> <nil> map[kubernetes.io/e2e-4f3052eb-f196-11e5-8186-064a4ed57913:42 kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1  false} {map[pods:{110.000 DecimalSI} cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI}] map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}]  [{OutOfDisk False 2016-03-24 08:29:09 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-24 08:29:09 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {0af04d3c78a943ae8f3cc26602e374f2 EC286296-F1A1-3AB3-2979-B08D0FE5BF4A fdfc86c6-a0ae-41b4-8ab8-29b0b8fd1e8b 3.10.0-229.20.1.el7.x86_64 CentOS Linux 7 (Core) hyper://0.5.0 v1.3.0-alpha.0.854+d9aa83a25cdc21 v1.3.0-alpha.0.854+d9aa83a25cdc21} []}}
Mar 24 08:29:10.959: INFO: 
Logging kubelet events for node 127.0.0.1
Mar 24 08:29:10.960: INFO: 
Logging pods the kubelet thinks are on node 127.0.0.1
Mar 24 08:29:10.979: INFO: 
Latency metrics for node 127.0.0.1
Mar 24 08:29:10.979: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-72qu1" for this suite.

• Failure [7.046 seconds]
[k8s.io] EmptyDir volumes
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should support (root,0777,tmpfs) [Conformance] [It]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:72

  Mar 24 08:29:10.951: Failed to create pod: pods "pod-7bd539a2-f19a-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden

  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685
------------------------------
SS
------------------------------
[k8s.io] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:537
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:29:15.990: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:29:15.992: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-xfvxf
Mar 24 08:29:15.993: INFO: Get service account default in ns e2e-tests-kubectl-xfvxf failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:29:17.994: INFO: Service account default in ns e2e-tests-kubectl-xfvxf with secrets found. (2.002550667s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:29:17.995: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-xfvxf
Mar 24 08:29:17.995: INFO: Service account default in ns e2e-tests-kubectl-xfvxf with secrets found. (875.595µs)
[BeforeEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:122
[It] should check if v1 is in available api versions [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:537
STEP: validating api versions
Mar 24 08:29:17.995: INFO: Running '/usr/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config api-versions'
Mar 24 08:29:18.009: INFO: stderr: ""
Mar 24 08:29:18.009: INFO: stdout: "autoscaling/v1\nbatch/v1\nextensions/v1beta1\nv1"
[AfterEach] [k8s.io] Kubectl client
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:29:18.009: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xfvxf" for this suite.

• [SLOW TEST:7.026 seconds]
[k8s.io] Kubectl client
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  [k8s.io] Kubectl api-versions
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
    should check if v1 is in available api versions [Conformance]
    /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:537
------------------------------
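The api-versions check above is easy to reproduce against the same cluster; a one-line sketch (grep -x matches the whole line, so autoscaling/v1 and the other group/version lines do not match):

kubectl --server=127.0.0.1:8080 --kubeconfig=/root/.kube/config api-versions \
  | grep -x v1 && echo "v1 is available"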
SSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:69
[BeforeEach] [k8s.io] Variable Expansion
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:29:23.016: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:29:23.018: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-erqql
Mar 24 08:29:23.019: INFO: Get service account default in ns e2e-tests-var-expansion-erqql failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:29:25.021: INFO: Service account default in ns e2e-tests-var-expansion-erqql with secrets found. (2.002784364s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:29:25.021: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-erqql
Mar 24 08:29:25.022: INFO: Service account default in ns e2e-tests-var-expansion-erqql with secrets found. (936.685µs)
[It] should allow composing env vars into new env vars [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:69
STEP: Creating a pod to test env composition
Mar 24 08:29:25.026: INFO: Waiting up to 5m0s for pod var-expansion-84387eaf-f19a-11e5-8186-064a4ed57913 status to be success or failure
Mar 24 08:29:25.028: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-84387eaf-f19a-11e5-8186-064a4ed57913' yet
Mar 24 08:29:25.028: INFO: Waiting for pod var-expansion-84387eaf-f19a-11e5-8186-064a4ed57913 in namespace 'e2e-tests-var-expansion-erqql' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.139535ms elapsed)
Mar 24 08:29:27.030: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-84387eaf-f19a-11e5-8186-064a4ed57913' in namespace 'e2e-tests-var-expansion-erqql' so far
Mar 24 08:29:27.030: INFO: Waiting for pod var-expansion-84387eaf-f19a-11e5-8186-064a4ed57913 in namespace 'e2e-tests-var-expansion-erqql' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.00414814s elapsed)
Mar 24 08:29:29.032: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-84387eaf-f19a-11e5-8186-064a4ed57913' in namespace 'e2e-tests-var-expansion-erqql' so far
Mar 24 08:29:29.032: INFO: Waiting for pod var-expansion-84387eaf-f19a-11e5-8186-064a4ed57913 in namespace 'e2e-tests-var-expansion-erqql' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.005992477s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod var-expansion-84387eaf-f19a-11e5-8186-064a4ed57913 container dapi-container: <nil>
STEP: Successfully fetched pod logs:KUBERNETES_PORT=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT=443
FOOBAR=foo-value;;bar-value
SHLVL=1
HOME=/
TERM=linux
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
BAR=bar-value
PATH=/bin:/sbin/:/usr/bin/:/usr/sbin/
KUBERNETES_PORT_443_TCP_PORT=443
FOO=foo-value
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
KUBERNETES_SERVICE_HOST=10.0.0.1


[AfterEach] [k8s.io] Variable Expansion
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
Mar 24 08:29:31.051: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-erqql" for this suite.

• [SLOW TEST:13.042 seconds]
[k8s.io] Variable Expansion
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should allow composing env vars into new env vars [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:69
------------------------------
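The FOOBAR=foo-value;;bar-value line in the fetched logs is the result of Kubernetes $(VAR) expansion in the pod's env block. A sketch of the kind of pod the test creates (the real manifest lives in test/e2e/expansion.go; the pod name and image here are assumptions):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: gcr.io/google_containers/busybox   # assumed image
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR                 # composed from the two vars above
      value: "$(FOO);;$(BAR)"
EOF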
[k8s.io] EmptyDir volumes 
  should support (root,0777,default) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:100
[BeforeEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:29:36.058: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:29:36.060: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-v1vqu
Mar 24 08:29:36.061: INFO: Get service account default in ns e2e-tests-emptydir-v1vqu failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:29:38.063: INFO: Service account default in ns e2e-tests-emptydir-v1vqu with secrets found. (2.002554509s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:29:38.063: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-v1vqu
Mar 24 08:29:38.064: INFO: Service account default in ns e2e-tests-emptydir-v1vqu with secrets found. (910.945µs)
[It] should support (root,0777,default) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:100
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 24 08:29:38.065: FAIL: Failed to create pod: pods "pod-8bfe8b8b-f19a-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden
[AfterEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
STEP: Collecting events from namespace "e2e-tests-emptydir-v1vqu".
Mar 24 08:29:38.072: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Mar 24 08:29:38.072: INFO: 
Mar 24 08:29:38.073: INFO: 
Logging node info for node 127.0.0.1
Mar 24 08:29:38.074: INFO: Node Info: &{{ } {127.0.0.1   /api/v1/nodes/127.0.0.1 3601f765-f192-11e5-a8cf-064a4ed57913 3953 0 2016-03-24 07:29:57 +0000 UTC <nil> <nil> map[kubernetes.io/hostname:127.0.0.1 kubernetes.io/e2e-4f3052eb-f196-11e5-8186-064a4ed57913:42] map[]} { 127.0.0.1  false} {map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}] map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}]  [{OutOfDisk False 2016-03-24 08:29:29 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-24 08:29:29 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {0af04d3c78a943ae8f3cc26602e374f2 EC286296-F1A1-3AB3-2979-B08D0FE5BF4A fdfc86c6-a0ae-41b4-8ab8-29b0b8fd1e8b 3.10.0-229.20.1.el7.x86_64 CentOS Linux 7 (Core) hyper://0.5.0 v1.3.0-alpha.0.854+d9aa83a25cdc21 v1.3.0-alpha.0.854+d9aa83a25cdc21} []}}
Mar 24 08:29:38.074: INFO: 
Logging kubelet events for node 127.0.0.1
Mar 24 08:29:38.075: INFO: 
Logging pods the kubelet thinks are on node 127.0.0.1
Mar 24 08:29:38.095: INFO: 
Latency metrics for node 127.0.0.1
Mar 24 08:29:38.095: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-v1vqu" for this suite.

• Failure [7.043 seconds]
[k8s.io] EmptyDir volumes
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should support (root,0777,default) [Conformance] [It]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:100

  Mar 24 08:29:38.065: Failed to create pod: pods "pod-8bfe8b8b-f19a-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden

  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685
------------------------------
S
------------------------------
[k8s.io] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:76
[BeforeEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Creating a kubernetes client
Mar 24 08:29:43.101: INFO: >>> testContext.KubeConfig: /root/.kube/config

STEP: Building a namespace api object
Mar 24 08:29:43.104: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-z1kv2
Mar 24 08:29:43.105: INFO: Get service account default in ns e2e-tests-emptydir-z1kv2 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 24 08:29:45.107: INFO: Service account default in ns e2e-tests-emptydir-z1kv2 with secrets found. (2.002638352s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 24 08:29:45.107: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-z1kv2
Mar 24 08:29:45.108: INFO: Service account default in ns e2e-tests-emptydir-z1kv2 with secrets found. (826.869µs)
[It] should support (non-root,0644,tmpfs) [Conformance]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:76
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 24 08:29:45.109: FAIL: Failed to create pod: pods "pod-903162bb-f19a-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden
[AfterEach] [k8s.io] EmptyDir volumes
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:102
STEP: Collecting events from namespace "e2e-tests-emptydir-z1kv2".
Mar 24 08:29:45.116: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Mar 24 08:29:45.116: INFO: 
Mar 24 08:29:45.117: INFO: 
Logging node info for node 127.0.0.1
Mar 24 08:29:45.118: INFO: Node Info: &{{ } {127.0.0.1   /api/v1/nodes/127.0.0.1 3601f765-f192-11e5-a8cf-064a4ed57913 3975 0 2016-03-24 07:29:57 +0000 UTC <nil> <nil> map[kubernetes.io/e2e-4f3052eb-f196-11e5-8186-064a4ed57913:42 kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1  false} {map[cpu:{2.000 DecimalSI} memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI}] map[memory:{7934193664.000 BinarySI} pods:{110.000 DecimalSI} cpu:{2.000 DecimalSI}]  [{OutOfDisk False 2016-03-24 08:29:40 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-24 08:29:40 +0000 UTC 2016-03-24 07:29:57 +0000 UTC KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {0af04d3c78a943ae8f3cc26602e374f2 EC286296-F1A1-3AB3-2979-B08D0FE5BF4A fdfc86c6-a0ae-41b4-8ab8-29b0b8fd1e8b 3.10.0-229.20.1.el7.x86_64 CentOS Linux 7 (Core) hyper://0.5.0 v1.3.0-alpha.0.854+d9aa83a25cdc21 v1.3.0-alpha.0.854+d9aa83a25cdc21} []}}
Mar 24 08:29:45.118: INFO: 
Logging kubelet events for node 127.0.0.1
Mar 24 08:29:45.119: INFO: 
Logging pods the kubelet thinks are on node 127.0.0.1
Mar 24 08:29:45.137: INFO: 
Latency metrics for node 127.0.0.1
Mar 24 08:29:45.137: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-z1kv2" for this suite.

• Failure [7.047 seconds]
[k8s.io] EmptyDir volumes
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
  should support (non-root,0644,tmpfs) [Conformance] [It]
  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:76

  Mar 24 08:29:45.109: Failed to create pod: pods "pod-903162bb-f19a-11e5-8186-064a4ed57913" is forbidden: pod.Spec.SecurityContext.SELinuxOptions is forbidden

  /root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685
------------------------------
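All 14 EmptyDir failures summarized below carry the same admission error, so they are one root cause rather than fourteen independent bugs. A quick cross-check, assuming the run was captured to a file (e2e.log is illustrative; each failing spec logs the message twice, once at FAIL time and once in the failure block, so expect roughly 2x the spec count):

grep -c 'pod.Spec.SecurityContext.SELinuxOptions is forbidden' e2e.log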
SS

Summarizing 22 Failures:

[Fail] [k8s.io] EmptyDir volumes [It] should support (root,0644,default) [Conformance] 
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685

[Fail] [k8s.io] Kubectl client [k8s.io] Kubectl logs [It] should be able to retrieve and filter logs [Conformance] 
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:837

[Fail] [k8s.io] EmptyDir volumes [It] should support (non-root,0777,default) [Conformance] 
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685

[Fail] [k8s.io] EmptyDir volumes [It] should support (non-root,0644,default) [Conformance] 
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685

[Fail] [k8s.io] EmptyDir volumes [It] should support (non-root,0666,default) [Conformance] 
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685

[Fail] [k8s.io] Kubectl client [k8s.io] Guestbook application [It] should create and stop a working application [Conformance] 
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1235

[Fail] [k8s.io] EmptyDir volumes [It] should support (non-root,0777,tmpfs) [Conformance] 
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685

[Fail] [k8s.io] EmptyDir volumes [It] volume on default medium should have the correct mode [Conformance] 
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685

[Fail] [k8s.io] EmptyDir volumes [It] should support (non-root,0666,tmpfs) [Conformance] 
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685

[Fail] [k8s.io] DNS [It] should provide DNS for the cluster [Conformance] 
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:229

[Fail] [k8s.io] EmptyDir volumes [It] should support (root,0644,tmpfs) [Conformance] 
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685

[Fail] [k8s.io] Kubectl client [k8s.io] Proxy server [It] should support proxy with --port 0 [Conformance] 
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1112

[Fail] [k8s.io] ClusterDns [Feature:Example] [It] should create pod that uses dns [Conformance] 
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/example_cluster_dns.go:129

[Fail] [k8s.io] EmptyDir volumes [It] should support (root,0666,tmpfs) [Conformance] 
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685

[Fail] [k8s.io] Pods [It] should be schedule with cpu and memory limits [Conformance] 
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:262

[Fail] [k8s.io] EmptyDir volumes [It] should support (root,0666,default) [Conformance] 
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685

[Fail] [k8s.io] DNS [It] should provide DNS for services [Conformance] 
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:229

[Fail] [k8s.io] EmptyDir volumes [It] volume on tmpfs should have the correct mode [Conformance] 
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685

[Fail] [k8s.io] Pods [It] should be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance] 
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:108

[Fail] [k8s.io] EmptyDir volumes [It] should support (root,0777,tmpfs) [Conformance] 
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685

[Fail] [k8s.io] EmptyDir volumes [It] should support (root,0777,default) [Conformance] 
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685

[Fail] [k8s.io] EmptyDir volumes [It] should support (non-root,0644,tmpfs) [Conformance] 
/root/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1685

Ran 90 of 268 Specs in 3566.194 seconds
FAIL! -- 68 Passed | 22 Failed | 0 Pending | 178 Skipped --- FAIL: TestE2E (3566.23s)
FAIL
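To iterate on these failures without rerunning all 90 specs, the suite can be narrowed with a Ginkgo focus regex; a sketch, assuming the standard e2e harness from the same checkout (\s stands in for a space, since test_args is split on whitespace):

go run hack/e2e.go -v --test --test_args="--ginkgo.focus=EmptyDir\svolumes"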
