@reflection
Created March 28, 2016 18:44
[root@totoro local]# ./bin/linux/amd64/e2e.test -kubeconfig /root/.kube/config --ginkgo.focus=Conformance
I0328 10:55:14.415333 11167 e2e.go:102] The --provider flag is not set. Treating as a conformance test. Some tests may not be run.
I0328 10:55:14.415373 11167 e2e.go:274] Starting e2e run "39433e52-f50e-11e5-b1e1-0862662cf845" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1459187714 - Will randomize all specs
Will run 93 of 265 specs
Mar 28 10:55:14.422: INFO: >>> testContext.KubeConfig: /root/.kube/config
Mar 28 10:55:14.425: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 28 10:55:14.434: INFO: 1 / 1 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 28 10:55:14.434: INFO: expected 1 pod replicas in namespace 'kube-system', 1 are Running and Ready.
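Everything below comes from this single invocation against the local cluster. To iterate on one failing spec instead of the whole 93-spec conformance set, the same binary accepts a narrower --ginkgo.focus regex; a minimal sketch, reusing the binary and kubeconfig paths from the command above, with only the spec name added:

# Re-run just the ConfigMap volume-update spec; --ginkgo.focus is a regular
# expression matched against the full spec description.
./bin/linux/amd64/e2e.test \
  -kubeconfig /root/.kube/config \
  --ginkgo.focus='updates should be reflected in volume'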
S
------------------------------
ConfigMap
updates should be reflected in volume [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:272
[BeforeEach] ConfigMap
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 10:55:14.434: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 10:55:14.436: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-configmap-sised
Mar 28 10:55:14.436: INFO: Get service account default in ns e2e-tests-configmap-sised failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 10:55:16.440: INFO: Service account default in ns e2e-tests-configmap-sised with secrets found. (2.003990622s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 10:55:16.440: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-configmap-sised
Mar 28 10:55:16.443: INFO: Service account default in ns e2e-tests-configmap-sised with secrets found. (3.16616ms)
[It] updates should be reflected in volume [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:272
STEP: Creating configMap with name configmap-test-upd-3a7f43b3-f50e-11e5-b1e1-0862662cf845
STEP: Creating the pod
Mar 28 10:55:16.457: INFO: Waiting up to 5m0s for pod pod-configmaps-3a8024d7-f50e-11e5-b1e1-0862662cf845 status to be running
Mar 28 10:55:16.458: INFO: Waiting for pod pod-configmaps-3a8024d7-f50e-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-configmap-sised' status to be 'running'(found phase: "Pending", readiness: false) (835.486µs elapsed)
Mar 28 10:55:18.463: INFO: Found pod 'pod-configmaps-3a8024d7-f50e-11e5-b1e1-0862662cf845' on node '127.0.0.1'
STEP: Deleting the pod
STEP: Cleaning up the configMap
[AfterEach] ConfigMap
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Collecting events from namespace "e2e-tests-configmap-sised".
Mar 28 11:00:18.485: INFO: At 2016-03-28 10:55:16 -0700 PDT - event for pod-configmaps-3a8024d7-f50e-11e5-b1e1-0862662cf845: {default-scheduler } Scheduled: Successfully assigned pod-configmaps-3a8024d7-f50e-11e5-b1e1-0862662cf845 to 127.0.0.1
Mar 28 11:00:18.485: INFO: At 2016-03-28 10:55:17 -0700 PDT - event for pod-configmaps-3a8024d7-f50e-11e5-b1e1-0862662cf845: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/mounttest:0.6" already present on machine
Mar 28 11:00:18.485: INFO: At 2016-03-28 10:55:17 -0700 PDT - event for pod-configmaps-3a8024d7-f50e-11e5-b1e1-0862662cf845: {kubelet 127.0.0.1} Created: Created container with docker id 0b8b03d44ed7
Mar 28 11:00:18.485: INFO: At 2016-03-28 10:55:17 -0700 PDT - event for pod-configmaps-3a8024d7-f50e-11e5-b1e1-0862662cf845: {kubelet 127.0.0.1} Started: Started container with docker id 0b8b03d44ed7
Mar 28 11:00:18.488: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 28 11:00:18.488: INFO: k8s-etcd-127.0.0.1 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:07 -0700 PDT }]
Mar 28 11:00:18.488: INFO: k8s-master-127.0.0.1 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:09 -0700 PDT }]
Mar 28 11:00:18.488: INFO: k8s-proxy-127.0.0.1 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:06 -0700 PDT }]
Mar 28 11:00:18.488: INFO: kube-dns-v10-d8w4s 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:51 -0700 PDT }]
Mar 28 11:00:18.488: INFO:
Mar 28 11:00:18.493: INFO:
Logging node info for node 127.0.0.1
Mar 28 11:00:18.495: INFO: Node Info: &{{ } {127.0.0.1 /api/v1/nodes/127.0.0.1 137b6e8b-f50e-11e5-a3fb-0862662cf845 131 0 2016-03-28 10:54:10 -0700 PDT <nil> <nil> map[kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1 false} {map[memory:{16767156224.000 BinarySI} pods:{110.000 DecimalSI} cpu:{4.000 DecimalSI}] map[memory:{16767156224.000 BinarySI} pods:{110.000 DecimalSI} cpu:{4.000 DecimalSI}] [{OutOfDisk False 2016-03-28 11:00:13 -0700 PDT 2016-03-28 10:54:10 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-28 11:00:13 -0700 PDT 2016-03-28 10:54:10 -0700 PDT KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {12fd47e6d4c54914a88972c6cc8d81de 44EC9C00-D7DA-11DD-9398-0862662CF845 f0f5728b-196f-47d9-9672-2a11ecdb3d77 4.4.6-300.fc23.x86_64 Debian GNU/Linux 8 (jessie) docker://1.10.3 v1.2.0 v1.2.0} [{[brs-ui:latest localhost:5000/brs-ui:latest] 605145331} {[<none>:<none>] 605144068} {[<none>:<none>] 605138643} {[ci1.brs.rzc.cudaops.com:5000/bn/brs-ui:latest] 613900067} {[kube-build:build-3df9ed65b2] 1628893568} {[<none>:<none>] 1628893569} {[<none>:<none>] 1628893559} {[<none>:<none>] 1627726538} {[<none>:<none>] 1627726538} {[<none>:<none>] 1627726538} {[<none>:<none>] 1628892949} {[localhost:5000/log-indexer-config:latest log-indexer-config:latest] 717448316} {[localhost:5000/log-indexer:latest log-indexer:latest] 383713661} {[<none>:<none>] 310976263} {[<none>:<none>] 321414548} {[<none>:<none>] 383714083} {[ci1.brs.rzc.cudaops.com:5000/bn/curator:latest] 736669380} {[<none>:<none>] 413912127} {[<none>:<none>] 717449212} {[<none>:<none>] 717449200} {[<none>:<none>] 717448730} {[<none>:<none>] 699153894} {[<none>:<none>] 413912971} {[<none>:<none>] 413912975} {[<none>:<none>] 413913005} {[<none>:<none>] 413912873} {[localhost:5000/log-indexers:latest log-indexers:latest] 717448654} {[<none>:<none>] 717448624} {[localhost:5000/log-server:latest log-server:latest] 375265658} {[<none>:<none>] 717448145} {[<none>:<none>] 717448145} {[<none>:<none>] 717448145} {[<none>:<none>] 717448145} {[<none>:<none>] 413906461} {[<none>:<none>] 413905748} {[<none>:<none>] 413905807} {[<none>:<none>] 413905790} {[<none>:<none>] 413905790} {[<none>:<none>] 605680907} {[<none>:<none>] 413905763} {[<none>:<none>] 375258621} {[<none>:<none>] 375258592} {[python:3.5] 689482475} {[php:5.6-apache] 479350972} {[ci1.brs.rzc.cudaops.com:5000/bn/ponyo:latest] 937604426} {[busybox:latest] 1112820} {[gcr.io/google_containers/hyperkube-amd64:v1.2.0] 316607821} {[postgres:9.5] 264519910} {[elasticsearch:2.2] 347050720} {[java:openjdk-8-jre] 310976263} {[ci1.brs.rzc.cudaops.com:5000/bn/notifier:latest] 736471567} {[ci1.brs.rzc.cudaops.com:5000/bn/log-server:latest] 374723147} {[registry:2] 165760106} {[ci1.brs.rzc.cudaops.com:5000/bn/postfix:latest] 190134010} {[ci1.brs.rzc.cudaops.com:5000/bn/log-indexer:latest] 413367511} {[ci1.brs.rzc.cudaops.com:5000/bn/kafka:latest] 437445503} {[gcr.io/google_containers/nettest:1.7] 24051275} {[gcr.io/google_containers/kube-cross:v1.4.2-1] 1551078122} {[gcr.io/google_containers/etcd-amd64:2.2.1] 28192476} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/kube2sky:1.12] 24482187} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/etcd:2.2.1] 28191895} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/skydns:2015-10-13-8c72f8c] 40547562} 
{[gcr.io/google_containers/pause:2.0] 350164} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/exechealthz:1.0] 7095869} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190122856} {[gcr.io/google_containers/mounttest:0.2] 1752375} {[gcr.io/google_containers/dnsutils:e2e] 141873580} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/serve_hostname:1.1] 4522409} {[gcr.io/google_containers/nginx:1.7.9] 91641000} {[kubernetes/redis:v1] 145954175} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/busybox:latest] 2429728} {[gcr.io/google_containers/liveness:e2e] 4387474}]}}
Mar 28 11:00:18.495: INFO:
Logging kubelet events for node 127.0.0.1
Mar 28 11:00:18.497: INFO:
Logging pods the kubelet thinks is on node 127.0.0.1
Mar 28 11:00:18.502: INFO: kube-dns-v10-d8w4s started at <nil> (0 container statuses recorded)
Mar 28 11:00:18.502: INFO: k8s-master-127.0.0.1 started at <nil> (0 container statuses recorded)
Mar 28 11:00:18.502: INFO: k8s-etcd-127.0.0.1 started at <nil> (0 container statuses recorded)
Mar 28 11:00:18.502: INFO: k8s-proxy-127.0.0.1 started at <nil> (0 container statuses recorded)
Mar 28 11:00:18.554: INFO: ERROR kubelet_docker_errors{operation_type="inspect_container"} => 2 @[0]
Mar 28 11:00:18.554: INFO:
Latency metrics for node 127.0.0.1
Mar 28 11:00:18.554: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-sised" for this suite.
• Failure [309.127 seconds]
ConfigMap
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:333
updates should be reflected in volume [Conformance] [It]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:272
Timed out after 300.000s.
Expected
<string>: Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
Error reading file /etc/configmap-volume/data-1: open /etc/configmap-volume/data-1: no such file or directory, retrying
to contain substring
<string>: value-1
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:262
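The failure above is the mounttest container reporting "no such file or directory" for /etc/configmap-volume/data-1 for the entire 300s window, so the expected value-1 never appears; the configMap volume was apparently never populated (or never refreshed) on this node. A rough manual reproduction with plain kubectl, assuming the same kubeconfig; the cm-demo and cm-demo-pod names are made up for illustration:

# Create a ConfigMap with one key, mount it as a volume, then watch the file.
kubectl create configmap cm-demo --from-literal=data-1=value-1

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-demo-pod
spec:
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/configmap-volume/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/configmap-volume
  volumes:
  - name: cm
    configMap:
      name: cm-demo
EOF

# Change the value and watch the mounted file follow; the kubelet refreshes
# configMap volumes on its sync loop, so the update is not instantaneous.
kubectl patch configmap cm-demo -p '{"data":{"data-1":"value-2"}}'
kubectl logs -f cm-demo-pod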
------------------------------
SS
------------------------------
Downward API volume
should provide podname only [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:42
[BeforeEach] Downward API volume
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:00:23.561: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:00:23.564: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-jpge8
Mar 28 11:00:23.566: INFO: Get service account default in ns e2e-tests-downward-api-jpge8 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:00:25.567: INFO: Service account default in ns e2e-tests-downward-api-jpge8 with secrets found. (2.002410299s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:00:25.567: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-jpge8
Mar 28 11:00:25.567: INFO: Service account default in ns e2e-tests-downward-api-jpge8 with secrets found. (656.725µs)
[It] should provide podname only [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:42
STEP: Creating a pod to test downward API volume plugin
Mar 28 11:00:25.569: INFO: Waiting up to 5m0s for pod downwardapi-volume-f2bfed93-f50e-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:00:25.571: INFO: No Status.Info for container 'client-container' in pod 'downwardapi-volume-f2bfed93-f50e-11e5-b1e1-0862662cf845' yet
Mar 28 11:00:25.571: INFO: Waiting for pod downwardapi-volume-f2bfed93-f50e-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-downward-api-jpge8' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.404654ms elapsed)
Mar 28 11:00:27.572: INFO: Nil State.Terminated for container 'client-container' in pod 'downwardapi-volume-f2bfed93-f50e-11e5-b1e1-0862662cf845' in namespace 'e2e-tests-downward-api-jpge8' so far
Mar 28 11:00:27.572: INFO: Waiting for pod downwardapi-volume-f2bfed93-f50e-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-downward-api-jpge8' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.002774205s elapsed)
Mar 28 11:00:29.574: INFO: Unexpected error occurred: pod 'downwardapi-volume-f2bfed93-f50e-11e5-b1e1-0862662cf845' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:2016-03-28 11:00:26 -0700 PDT FinishedAt:2016-03-28 11:00:26 -0700 PDT ContainerID:docker://bf20a7976fd7773a9331d17211f7e36426f7d796d4b4d718dfe866ece9ac1c35}
[AfterEach] Downward API volume
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Collecting events from namespace "e2e-tests-downward-api-jpge8".
Mar 28 11:00:29.591: INFO: At 2016-03-28 11:00:25 -0700 PDT - event for downwardapi-volume-f2bfed93-f50e-11e5-b1e1-0862662cf845: {default-scheduler } Scheduled: Successfully assigned downwardapi-volume-f2bfed93-f50e-11e5-b1e1-0862662cf845 to 127.0.0.1
Mar 28 11:00:29.591: INFO: At 2016-03-28 11:00:26 -0700 PDT - event for downwardapi-volume-f2bfed93-f50e-11e5-b1e1-0862662cf845: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/mounttest:0.6" already present on machine
Mar 28 11:00:29.591: INFO: At 2016-03-28 11:00:26 -0700 PDT - event for downwardapi-volume-f2bfed93-f50e-11e5-b1e1-0862662cf845: {kubelet 127.0.0.1} Created: Created container with docker id bf20a7976fd7
Mar 28 11:00:29.591: INFO: At 2016-03-28 11:00:26 -0700 PDT - event for downwardapi-volume-f2bfed93-f50e-11e5-b1e1-0862662cf845: {kubelet 127.0.0.1} Started: Started container with docker id bf20a7976fd7
Mar 28 11:00:29.591: INFO: At 2016-03-28 11:00:26 -0700 PDT - event for downwardapi-volume-f2bfed93-f50e-11e5-b1e1-0862662cf845: {kubelet 127.0.0.1} FailedSync: Error syncing pod, skipping: failed to "StartContainer" for "client-container" with RunContainerError: "failed to apply oom-score-adj to container \"exceeded maxTries, some processes might not have desired OOM score\"- /k8s_client-container.fb103d19_downwardapi-volume-f2bfed93-f50e-11e5-b1e1-0862662cf845_e2e-tests-downward-api-jpge8_f2c01300-f50e-11e5-a3fb-0862662cf845_1ecb2da0"
Mar 28 11:00:29.593: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 28 11:00:29.593: INFO: k8s-etcd-127.0.0.1 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:07 -0700 PDT }]
Mar 28 11:00:29.593: INFO: k8s-master-127.0.0.1 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:09 -0700 PDT }]
Mar 28 11:00:29.593: INFO: k8s-proxy-127.0.0.1 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:06 -0700 PDT }]
Mar 28 11:00:29.593: INFO: kube-dns-v10-d8w4s 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:51 -0700 PDT }]
Mar 28 11:00:29.593: INFO:
Mar 28 11:00:29.594: INFO:
Logging node info for node 127.0.0.1
Mar 28 11:00:29.596: INFO: Node Info: &{{ } {127.0.0.1 /api/v1/nodes/127.0.0.1 137b6e8b-f50e-11e5-a3fb-0862662cf845 149 0 2016-03-28 10:54:10 -0700 PDT <nil> <nil> map[kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1 false} {map[cpu:{4.000 DecimalSI} memory:{16767156224.000 BinarySI} pods:{110.000 DecimalSI}] map[cpu:{4.000 DecimalSI} memory:{16767156224.000 BinarySI} pods:{110.000 DecimalSI}] [{OutOfDisk False 2016-03-28 11:00:23 -0700 PDT 2016-03-28 10:54:10 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-28 11:00:23 -0700 PDT 2016-03-28 10:54:10 -0700 PDT KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {12fd47e6d4c54914a88972c6cc8d81de 44EC9C00-D7DA-11DD-9398-0862662CF845 f0f5728b-196f-47d9-9672-2a11ecdb3d77 4.4.6-300.fc23.x86_64 Debian GNU/Linux 8 (jessie) docker://1.10.3 v1.2.0 v1.2.0} [{[brs-ui:latest localhost:5000/brs-ui:latest] 605145331} {[<none>:<none>] 605144068} {[<none>:<none>] 605138643} {[ci1.brs.rzc.cudaops.com:5000/bn/brs-ui:latest] 613900067} {[kube-build:build-3df9ed65b2] 1628893568} {[<none>:<none>] 1628893569} {[<none>:<none>] 1628893559} {[<none>:<none>] 1627726538} {[<none>:<none>] 1627726538} {[<none>:<none>] 1627726538} {[<none>:<none>] 1628892949} {[localhost:5000/log-indexer-config:latest log-indexer-config:latest] 717448316} {[localhost:5000/log-indexer:latest log-indexer:latest] 383713661} {[<none>:<none>] 310976263} {[<none>:<none>] 321414548} {[<none>:<none>] 383714083} {[ci1.brs.rzc.cudaops.com:5000/bn/curator:latest] 736669380} {[<none>:<none>] 413912127} {[<none>:<none>] 717449212} {[<none>:<none>] 717449200} {[<none>:<none>] 717448730} {[<none>:<none>] 699153894} {[<none>:<none>] 413912971} {[<none>:<none>] 413912975} {[<none>:<none>] 413913005} {[<none>:<none>] 413912873} {[localhost:5000/log-indexers:latest log-indexers:latest] 717448654} {[<none>:<none>] 717448624} {[localhost:5000/log-server:latest log-server:latest] 375265658} {[<none>:<none>] 717448145} {[<none>:<none>] 717448145} {[<none>:<none>] 717448145} {[<none>:<none>] 717448145} {[<none>:<none>] 413906461} {[<none>:<none>] 413905748} {[<none>:<none>] 413905807} {[<none>:<none>] 413905790} {[<none>:<none>] 413905790} {[<none>:<none>] 605680907} {[<none>:<none>] 413905763} {[<none>:<none>] 375258621} {[<none>:<none>] 375258592} {[python:3.5] 689482475} {[php:5.6-apache] 479350972} {[ci1.brs.rzc.cudaops.com:5000/bn/ponyo:latest] 937604426} {[busybox:latest] 1112820} {[gcr.io/google_containers/hyperkube-amd64:v1.2.0] 316607821} {[postgres:9.5] 264519910} {[elasticsearch:2.2] 347050720} {[java:openjdk-8-jre] 310976263} {[ci1.brs.rzc.cudaops.com:5000/bn/notifier:latest] 736471567} {[ci1.brs.rzc.cudaops.com:5000/bn/log-server:latest] 374723147} {[registry:2] 165760106} {[ci1.brs.rzc.cudaops.com:5000/bn/postfix:latest] 190134010} {[ci1.brs.rzc.cudaops.com:5000/bn/log-indexer:latest] 413367511} {[ci1.brs.rzc.cudaops.com:5000/bn/kafka:latest] 437445503} {[gcr.io/google_containers/nettest:1.7] 24051275} {[gcr.io/google_containers/kube-cross:v1.4.2-1] 1551078122} {[gcr.io/google_containers/etcd-amd64:2.2.1] 28192476} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/kube2sky:1.12] 24482187} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/etcd:2.2.1] 28191895} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/skydns:2015-10-13-8c72f8c] 40547562} 
{[gcr.io/google_containers/pause:2.0] 350164} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/exechealthz:1.0] 7095869} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190122856} {[gcr.io/google_containers/mounttest:0.2] 1752375} {[gcr.io/google_containers/dnsutils:e2e] 141873580} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/serve_hostname:1.1] 4522409} {[gcr.io/google_containers/nginx:1.7.9] 91641000} {[kubernetes/redis:v1] 145954175} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/busybox:latest] 2429728} {[gcr.io/google_containers/liveness:e2e] 4387474}]}}
Mar 28 11:00:29.596: INFO:
Logging kubelet events for node 127.0.0.1
Mar 28 11:00:29.597: INFO:
Logging pods the kubelet thinks is on node 127.0.0.1
Mar 28 11:00:29.601: INFO: kube-dns-v10-d8w4s started at <nil> (0 container statuses recorded)
Mar 28 11:00:29.601: INFO: k8s-master-127.0.0.1 started at <nil> (0 container statuses recorded)
Mar 28 11:00:29.601: INFO: k8s-etcd-127.0.0.1 started at <nil> (0 container statuses recorded)
Mar 28 11:00:29.601: INFO: k8s-proxy-127.0.0.1 started at <nil> (0 container statuses recorded)
Mar 28 11:00:29.659: INFO: ERROR kubelet_docker_errors{operation_type="inspect_container"} => 2 @[0]
Mar 28 11:00:29.659: INFO:
Latency metrics for node 127.0.0.1
Mar 28 11:00:29.659: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jpge8" for this suite.
• Failure [11.103 seconds]
Downward API volume
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:133
should provide podname only [Conformance] [It]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:42
Expected error:
<*errors.errorString | 0xc8209b7e60>: {
s: "pod 'downwardapi-volume-f2bfed93-f50e-11e5-b1e1-0862662cf845' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:2016-03-28 11:00:26 -0700 PDT FinishedAt:2016-03-28 11:00:26 -0700 PDT ContainerID:docker://bf20a7976fd7773a9331d17211f7e36426f7d796d4b4d718dfe866ece9ac1c35}",
}
pod 'downwardapi-volume-f2bfed93-f50e-11e5-b1e1-0862662cf845' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:2016-03-28 11:00:26 -0700 PDT FinishedAt:2016-03-28 11:00:26 -0700 PDT ContainerID:docker://bf20a7976fd7773a9331d17211f7e36426f7d796d4b4d718dfe866ece9ac1c35}
not to have occurred
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1637
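This one does not look like a downward API problem as such: the FailedSync event shows the kubelet failing to apply an oom-score-adj to the container under docker 1.10.3, which is consistent with client-container exiting 1 before it ever prints the projected file. For reference, the volume the spec exercises is the standard downwardAPI projection of metadata.name; a minimal sketch (the downward-demo name and /etc/podinfo path are made up here):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF

kubectl logs downward-demo    # should print: downward-demo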
------------------------------
Kubectl client Kubectl run --rm job
should create a job from an image, then delete the job [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1092
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:00:34.664: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:00:34.666: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-u0pnn
Mar 28 11:00:34.666: INFO: Get service account default in ns e2e-tests-kubectl-u0pnn failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:00:36.667: INFO: Service account default in ns e2e-tests-kubectl-u0pnn with secrets found. (2.001552559s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:00:36.667: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-u0pnn
Mar 28 11:00:36.668: INFO: Service account default in ns e2e-tests-kubectl-u0pnn with secrets found. (741.626µs)
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:120
[It] should create a job from an image, then delete the job [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1092
STEP: executing a command with run --rm and attach with stdin
Mar 28 11:00:36.668: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config --namespace= run e2e-test-rm-busybox-job --image=busybox --rm=true --restart=Never --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Mar 28 11:00:42.955: INFO: stderr: ""
Mar 28 11:00:42.955: INFO: stdout: "Waiting for pod default/e2e-test-rm-busybox-job-sa1kj to be running, status is Pending, pod ready: false\nabcd1234stdin closed\njob \"e2e-test-rm-busybox-job\" deleted"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:00:42.955: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-u0pnn" for this suite.
• [SLOW TEST:13.307 seconds]
Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1145
Kubectl run --rm job
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1093
should create a job from an image, then delete the job [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1092
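For comparison, this passing spec drives the attach-and-clean-up flow of kubectl run. Roughly the same thing by hand, piping the literal abcd1234 the test sends on stdin; on a 1.2-era client, --restart=Never here produces a job, hence the job "e2e-test-rm-busybox-job" deleted line in the stdout above:

echo 'abcd1234' | kubectl run e2e-test-rm-busybox-job \
  --image=busybox --rm=true --restart=Never --attach=true --stdin \
  -- sh -c 'cat && echo stdin closed'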
------------------------------
SSSS
------------------------------
EmptyDir volumes
volume on default medium should have the correct mode [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:88
[BeforeEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:00:47.972: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:00:47.974: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-ydhg4
Mar 28 11:00:47.975: INFO: Get service account default in ns e2e-tests-emptydir-ydhg4 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:00:49.976: INFO: Service account default in ns e2e-tests-emptydir-ydhg4 with secrets found. (2.002081043s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:00:49.976: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-ydhg4
Mar 28 11:00:49.977: INFO: Service account default in ns e2e-tests-emptydir-ydhg4 with secrets found. (607.095µs)
[It] volume on default medium should have the correct mode [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:88
STEP: Creating a pod to test emptydir volume type on node default medium
Mar 28 11:00:49.978: INFO: Waiting up to 5m0s for pod pod-014c7468-f50f-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:00:49.979: INFO: No Status.Info for container 'test-container' in pod 'pod-014c7468-f50f-11e5-b1e1-0862662cf845' yet
Mar 28 11:00:49.979: INFO: Waiting for pod pod-014c7468-f50f-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-emptydir-ydhg4' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.042972ms elapsed)
Mar 28 11:00:51.983: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-014c7468-f50f-11e5-b1e1-0862662cf845' in namespace 'e2e-tests-emptydir-ydhg4' so far
Mar 28 11:00:51.983: INFO: Waiting for pod pod-014c7468-f50f-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-emptydir-ydhg4' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.004990354s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-014c7468-f50f-11e5-b1e1-0862662cf845 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
perms of file "/test-volume": -rwxrwxrwx
[AfterEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:00:54.004: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ydhg4" for this suite.
• [SLOW TEST:11.041 seconds]
EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:113
volume on default medium should have the correct mode [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:88
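The mounttest image is what prints the mount type and permissions seen in the fetched logs above. Roughly the same check with a stock busybox image and a default-medium emptyDir; the pod and volume names here are illustrative:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep test-volume; ls -ld /test-volume"]
    volumeMounts:
    - name: vol
      mountPath: /test-volume
  volumes:
  - name: vol
    emptyDir: {}
EOF

kubectl logs emptydir-mode-demo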
------------------------------
SS
------------------------------
EmptyDir volumes
should support (non-root,0666,tmpfs) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:80
[BeforeEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:00:59.013: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:00:59.014: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-jfpzs
Mar 28 11:00:59.015: INFO: Get service account default in ns e2e-tests-emptydir-jfpzs failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:01:01.016: INFO: Service account default in ns e2e-tests-emptydir-jfpzs with secrets found. (2.00149329s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:01:01.016: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-jfpzs
Mar 28 11:01:01.016: INFO: Service account default in ns e2e-tests-emptydir-jfpzs with secrets found. (612.261µs)
[It] should support (non-root,0666,tmpfs) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:80
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 28 11:01:01.018: INFO: Waiting up to 5m0s for pod pod-07e10076-f50f-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:01:01.019: INFO: No Status.Info for container 'test-container' in pod 'pod-07e10076-f50f-11e5-b1e1-0862662cf845' yet
Mar 28 11:01:01.019: INFO: Waiting for pod pod-07e10076-f50f-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-emptydir-jfpzs' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.020775ms elapsed)
Mar 28 11:01:03.021: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-07e10076-f50f-11e5-b1e1-0862662cf845' in namespace 'e2e-tests-emptydir-jfpzs' so far
Mar 28 11:01:03.021: INFO: Waiting for pod pod-07e10076-f50f-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-emptydir-jfpzs' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.002330427s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-07e10076-f50f-11e5-b1e1-0862662cf845 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-rw-rw-
[AfterEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:01:05.037: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-jfpzs" for this suite.
• [SLOW TEST:11.037 seconds]
EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:113
should support (non-root,0666,tmpfs) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:80
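The tmpfs flavour differs only in the volume medium, plus the non-root user the test runs as. A sketch of that variant, with the names and the 1001 uid chosen arbitrarily:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001        # non-root, to mirror the (non-root,0666,tmpfs) case
    fsGroup: 1001
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "id; mount | grep test-volume; ls -ld /test-volume"]
    volumeMounts:
    - name: vol
      mountPath: /test-volume
  volumes:
  - name: vol
    emptyDir:
      medium: Memory       # tmpfs-backed emptyDir
EOF

kubectl logs emptydir-tmpfs-demo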
------------------------------
SSSSSSSS
------------------------------
SchedulerPredicates [Serial]
validates that NodeSelector is respected if matching [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:515
[BeforeEach] SchedulerPredicates [Serial]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:01:10.049: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:01:10.051: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-sched-pred-z6wyz
Mar 28 11:01:10.051: INFO: Get service account default in ns e2e-tests-sched-pred-z6wyz failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:01:12.052: INFO: Service account default in ns e2e-tests-sched-pred-z6wyz with secrets found. (2.001450007s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:01:12.052: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-sched-pred-z6wyz
Mar 28 11:01:12.053: INFO: Service account default in ns e2e-tests-sched-pred-z6wyz with secrets found. (593.739µs)
[BeforeEach] SchedulerPredicates [Serial]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:199
Mar 28 11:01:12.056: INFO: Waiting for terminating namespaces to be deleted...
Mar 28 11:01:12.058: INFO: >>> testContext.KubeConfig: /root/.kube/config
Mar 28 11:01:12.059: INFO: Waiting up to 2m0s for all pods (need at least 1) in namespace 'kube-system' to be running and ready
Mar 28 11:01:12.061: INFO: 1 / 1 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 28 11:01:12.061: INFO: expected 1 pod replicas in namespace 'kube-system', 1 are Running and Ready.
Mar 28 11:01:12.061: INFO:
Logging pods the kubelet thinks is on node 127.0.0.1 before test
Mar 28 11:01:12.064: INFO: k8s-etcd-127.0.0.1 from default started at <nil> (0 container statuses recorded)
Mar 28 11:01:12.064: INFO: k8s-proxy-127.0.0.1 from default started at <nil> (0 container statuses recorded)
Mar 28 11:01:12.064: INFO: k8s-master-127.0.0.1 from default started at <nil> (0 container statuses recorded)
Mar 28 11:01:12.064: INFO: kube-dns-v10-d8w4s from kube-system started at 2016-03-28 10:54:41 -0700 PDT (4 container statuses recorded)
Mar 28 11:01:12.064: INFO: Container etcd ready: true, restart count 0
Mar 28 11:01:12.064: INFO: Container healthz ready: true, restart count 0
Mar 28 11:01:12.064: INFO: Container kube2sky ready: true, restart count 0
Mar 28 11:01:12.064: INFO: Container skydns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:515
STEP: Trying to launch a pod without a label to get a node which can launch it.
Mar 28 11:01:12.066: INFO: Waiting up to 5m0s for pod without-label status to be running
Mar 28 11:01:12.067: INFO: Waiting for pod without-label in namespace 'e2e-tests-sched-pred-z6wyz' status to be 'running'(found phase: "Pending", readiness: false) (805.331µs elapsed)
Mar 28 11:01:14.068: INFO: Found pod 'without-label' on node '127.0.0.1'
STEP: Trying to apply a random label on the found node.
STEP: Trying to relaunch the pod, now with labels.
Mar 28 11:01:14.105: INFO: Waiting up to 5m0s for pod with-labels status to be !pending
Mar 28 11:01:14.106: INFO: Waiting for pod with-labels in namespace 'e2e-tests-sched-pred-z6wyz' status to be '!pending'(found phase: "Pending", readiness: false) (901.057µs elapsed)
Mar 28 11:01:16.112: INFO: Waiting for pod with-labels in namespace 'e2e-tests-sched-pred-z6wyz' status to be '!pending'(found phase: "Pending", readiness: false) (2.006540515s elapsed)
Mar 28 11:01:18.113: INFO: Saw pod 'with-labels' in namespace 'e2e-tests-sched-pred-z6wyz' out of pending state (found '"Running"')
[AfterEach] SchedulerPredicates [Serial]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:158
[AfterEach] SchedulerPredicates [Serial]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:01:18.127: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-z6wyz" for this suite.
• [SLOW TEST:13.096 seconds]
SchedulerPredicates [Serial]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:729
validates that NodeSelector is respected if matching [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:515
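What this spec does can be replayed with two commands: put a label on the node, then ask for it via nodeSelector. The example-key/example-value pair and the pod name below are placeholders; the image is one already present on the node according to the image list above:

kubectl label nodes 127.0.0.1 example-key=example-value

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: with-labels-demo
spec:
  nodeSelector:
    example-key: example-value
  containers:
  - name: pause
    image: gcr.io/google_containers/pause:2.0
EOF

kubectl get pod with-labels-demo -o wide    # should land on 127.0.0.1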
------------------------------
SSSS
------------------------------
Kubectl client Kubectl run deployment
should create a deployment from an image [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1016
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:01:23.146: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:01:23.147: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-p2bgg
Mar 28 11:01:23.148: INFO: Get service account default in ns e2e-tests-kubectl-p2bgg failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:01:25.149: INFO: Service account default in ns e2e-tests-kubectl-p2bgg with secrets found. (2.00214878s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:01:25.149: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-p2bgg
Mar 28 11:01:25.150: INFO: Service account default in ns e2e-tests-kubectl-p2bgg with secrets found. (940.861µs)
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:120
[BeforeEach] Kubectl run deployment
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:984
[It] should create a deployment from an image [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1016
STEP: running the image gcr.io/google_containers/nginx:1.7.9
Mar 28 11:01:25.151: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=gcr.io/google_containers/nginx:1.7.9 --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-p2bgg'
Mar 28 11:01:25.170: INFO: stderr: ""
Mar 28 11:01:25.170: INFO: stdout: "deployment \"e2e-test-nginx-deployment\" created"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] Kubectl run deployment
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:988
Mar 28 11:01:25.184: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-p2bgg'
Mar 28 11:01:27.261: INFO: stderr: ""
Mar 28 11:01:27.261: INFO: stdout: "deployment \"e2e-test-nginx-deployment\" deleted"
[AfterEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:01:27.261: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-p2bgg" for this suite.
• [SLOW TEST:24.127 seconds]
Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1145
Kubectl run deployment
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1017
should create a deployment from an image [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1016
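The create/verify/delete sequence this spec drives, spelled out as plain commands. The deployment name matches the test's; the run= label used for the pod lookup is what kubectl run of this era stamps on the objects it creates:

kubectl run e2e-test-nginx-deployment \
  --image=gcr.io/google_containers/nginx:1.7.9 \
  --generator=deployment/v1beta1

kubectl get deployment e2e-test-nginx-deployment
kubectl get pods -l run=e2e-test-nginx-deployment    # pods created by the deployment

kubectl delete deployment e2e-test-nginx-deployment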
------------------------------
S
------------------------------
Proxy version v1
should proxy to cadvisor [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:57
[BeforeEach] version v1
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:01:47.273: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:01:47.274: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-v9zoz
Mar 28 11:01:47.275: INFO: Get service account default in ns e2e-tests-proxy-v9zoz failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:01:49.276: INFO: Service account default in ns e2e-tests-proxy-v9zoz with secrets found. (2.001647798s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:01:49.276: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-v9zoz
Mar 28 11:01:49.277: INFO: Service account default in ns e2e-tests-proxy-v9zoz with secrets found. (763.023µs)
[It] should proxy to cadvisor [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:57
Mar 28 11:01:49.283: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 3.986658ms)
Mar 28 11:01:49.284: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.928735ms)
Mar 28 11:01:49.286: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.719026ms)
Mar 28 11:01:49.288: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.700866ms)
Mar 28 11:01:49.290: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.662618ms)
Mar 28 11:01:49.291: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.633335ms)
Mar 28 11:01:49.293: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.642414ms)
Mar 28 11:01:49.295: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.650631ms)
Mar 28 11:01:49.296: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.584087ms)
Mar 28 11:01:49.298: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.614027ms)
Mar 28 11:01:49.299: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.591167ms)
Mar 28 11:01:49.301: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.624413ms)
Mar 28 11:01:49.303: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.576032ms)
Mar 28 11:01:49.304: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.714155ms)
Mar 28 11:01:49.306: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.655379ms)
Mar 28 11:01:49.308: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.604892ms)
Mar 28 11:01:49.309: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.56895ms)
Mar 28 11:01:49.311: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.598115ms)
Mar 28 11:01:49.312: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.601352ms)
Mar 28 11:01:49.314: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.616223ms)
[AfterEach] version v1
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:01:49.314: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-v9zoz" for this suite.
• [SLOW TEST:7.049 seconds]
Proxy
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40
version v1
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:39
should proxy to cadvisor [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:57
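The twenty GETs above all hit the node's cAdvisor port (4194) through the apiserver proxy path. The same URL can be fetched by hand through kubectl proxy; port 8001 below is arbitrary:

kubectl proxy --port=8001 &
curl http://127.0.0.1:8001/api/v1/proxy/nodes/127.0.0.1:4194/containers/
kill %1    # stop the background proxy again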
------------------------------
S
------------------------------
Kubectl client Kubectl run default
should create an rc or deployment from an image [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:920
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:01:54.322: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:01:54.326: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-7lnw3
Mar 28 11:01:54.327: INFO: Get service account default in ns e2e-tests-kubectl-7lnw3 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:01:56.328: INFO: Service account default in ns e2e-tests-kubectl-7lnw3 with secrets found. (2.002575227s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:01:56.328: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-7lnw3
Mar 28 11:01:56.329: INFO: Service account default in ns e2e-tests-kubectl-7lnw3 with secrets found. (598.943µs)
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:120
[BeforeEach] Kubectl run default
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:900
[It] should create an rc or deployment from an image [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:920
STEP: running the image gcr.io/google_containers/nginx:1.7.9
Mar 28 11:01:56.329: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=gcr.io/google_containers/nginx:1.7.9 --namespace=e2e-tests-kubectl-7lnw3'
Mar 28 11:01:56.345: INFO: stderr: ""
Mar 28 11:01:56.345: INFO: stdout: "deployment \"e2e-test-nginx-deployment\" created"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] Kubectl run default
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:904
Mar 28 11:01:58.347: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-7lnw3'
Mar 28 11:02:00.409: INFO: stderr: ""
Mar 28 11:02:00.409: INFO: stdout: "deployment \"e2e-test-nginx-deployment\" deleted"
[AfterEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:02:00.409: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7lnw3" for this suite.
• [SLOW TEST:26.104 seconds]
Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1145
Kubectl run default
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:921
should create an rc or deployment from an image [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:920
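The "rc or deployment" wording exists because, without an explicit --generator, kubectl run picks the object kind from the client version, the --restart policy, and what the server supports; here it settled on a deployment, as the stdout shows. A quick way to see what actually got created before deleting it, using the same run= label lookup as in the earlier deployment example:

kubectl run e2e-test-nginx-default --image=gcr.io/google_containers/nginx:1.7.9

# Either of these may exist, depending on which generator was chosen.
kubectl get deployment,rc -l run=e2e-test-nginx-default

# Swap 'deployment' for 'rc' below if that is what the generator produced.
kubectl delete deployment e2e-test-nginx-default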
------------------------------
SSS
------------------------------
Events
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/events.go:127
[BeforeEach] Events
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:02:20.426: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:02:20.430: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-events-ghwub
Mar 28 11:02:20.432: INFO: Get service account default in ns e2e-tests-events-ghwub failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:02:22.435: INFO: Service account default in ns e2e-tests-events-ghwub with secrets found. (2.004997081s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:02:22.435: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-events-ghwub
Mar 28 11:02:22.438: INFO: Service account default in ns e2e-tests-events-ghwub with secrets found. (2.303639ms)
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/events.go:127
STEP: creating the pod
STEP: submitting the pod to kubernetes
Mar 28 11:02:22.444: INFO: Waiting up to 5m0s for pod send-events-3868e364-f50f-11e5-b1e1-0862662cf845 status to be running
Mar 28 11:02:22.447: INFO: Waiting for pod send-events-3868e364-f50f-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-events-ghwub' status to be 'running'(found phase: "Pending", readiness: false) (3.623974ms elapsed)
Mar 28 11:02:24.449: INFO: Found pod 'send-events-3868e364-f50f-11e5-b1e1-0862662cf845' on node '127.0.0.1'
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
&{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:send-events-3868e364-f50f-11e5-b1e1-0862662cf845 GenerateName: Namespace:e2e-tests-events-ghwub SelfLink:/api/v1/namespaces/e2e-tests-events-ghwub/pods/send-events-3868e364-f50f-11e5-b1e1-0862662cf845 UID:386961a2-f50f-11e5-a3fb-0862662cf845 ResourceVersion:420 Generation:0 CreationTimestamp:2016-03-28 11:02:22 -0700 PDT DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:foo time:438089161] Annotations:map[]} Spec:{Volumes:[{Name:default-token-n8wrc VolumeSource:{HostPath:<nil> EmptyDir:<nil> GCEPersistentDisk:<nil> AWSElasticBlockStore:<nil> GitRepo:<nil> Secret:0xc82021b230 NFS:<nil> ISCSI:<nil> Glusterfs:<nil> PersistentVolumeClaim:<nil> RBD:<nil> FlexVolume:<nil> Cinder:<nil> CephFS:<nil> Flocker:<nil> DownwardAPI:<nil> FC:<nil> AzureFile:<nil> ConfigMap:<nil>}}] Containers:[{Name:p Image:gcr.io/google_containers/serve_hostname:1.1 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:80 Protocol:TCP HostIP:}] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-n8wrc ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount}] LivenessProbe:<nil> ReadinessProbe:<nil> Lifecycle:<nil> TerminationMessagePath:/dev/termination-log ImagePullPolicy:IfNotPresent SecurityContext:<nil> Stdin:false StdinOnce:false TTY:false}] RestartPolicy:Always TerminationGracePeriodSeconds:0xc82021b260 ActiveDeadlineSeconds:<nil> DNSPolicy:ClusterFirst NodeSelector:map[] ServiceAccountName:default NodeName:127.0.0.1 SecurityContext:0xc820380fc0 ImagePullSecrets:[]} Status:{Phase:Running Conditions:[{Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-03-28 11:02:23 -0700 PDT Reason: Message:}] Message: Reason: HostIP:127.0.0.1 PodIP:172.17.0.3 StartTime:2016-03-28 11:02:22 -0700 PDT ContainerStatuses:[{Name:p State:{Waiting:<nil> Running:0xc820010d40 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:true RestartCount:0 Image:gcr.io/google_containers/serve_hostname:1.1 ImageID:docker://sha256:8a830e5071bb6399dc65181a3ead54ac16d704f941267cb90173377930911566 ContainerID:docker://db0d8df119293f3e8f32f2fd8e9c047a7d8f08d03b566b1d68c7afebccb3a240}]}}
STEP: checking for scheduler event about the pod
Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] Events
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:02:28.460: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-events-ghwub" for this suite.
• [SLOW TEST:28.046 seconds]
Events
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/events.go:128
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/events.go:127
------------------------------
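The same signal this spec looks for can be double-checked by hand once a pod like the one above is running, by listing the namespace events and describing the pod. A minimal sketch, reusing the kubeconfig from this run; the namespace and pod name are the ones from this particular invocation and purely illustrative:
kubectl --kubeconfig=/root/.kube/config get events --namespace=e2e-tests-events-ghwub
kubectl --kubeconfig=/root/.kube/config describe pod send-events-3868e364-f50f-11e5-b1e1-0862662cf845 --namespace=e2e-tests-events-ghwub
The spec is satisfied once at least one event sourced from the scheduler and at least one sourced from the kubelet exist for the pod, which is what the "Saw scheduler event" and "Saw kubelet event" lines above confirm.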
SS
------------------------------
Kubectl client Update Demo
should scale a replication controller [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:149
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:02:48.472: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:02:48.473: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-quls9
Mar 28 11:02:48.474: INFO: Get service account default in ns e2e-tests-kubectl-quls9 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:02:50.476: INFO: Service account default in ns e2e-tests-kubectl-quls9 with secrets found. (2.002876883s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:02:50.476: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-quls9
Mar 28 11:02:50.479: INFO: Service account default in ns e2e-tests-kubectl-quls9 with secrets found. (2.383083ms)
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:120
[BeforeEach] Update Demo
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:128
[It] should scale a replication controller [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:149
STEP: creating a replication controller
Mar 28 11:02:50.479: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config create -f ../../docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:02:50.569: INFO: stderr: ""
Mar 28 11:02:50.569: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" created"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 28 11:02:50.569: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:02:50.587: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:02:50.587: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Mar 28 11:02:55.587: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:02:55.606: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:02:55.606: INFO: stdout: "update-demo-nautilus-6hwmw update-demo-nautilus-z7xsb"
Mar 28 11:02:55.606: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6hwmw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:02:55.618: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:02:55.618: INFO: stdout: "true"
Mar 28 11:02:55.618: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6hwmw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:02:55.630: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:02:55.630: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Mar 28 11:02:55.630: INFO: validating pod update-demo-nautilus-6hwmw
Mar 28 11:02:55.650: INFO: got data: {
"image": "nautilus.jpg"
}
Mar 28 11:02:55.650: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 28 11:02:55.650: INFO: update-demo-nautilus-6hwmw is verified up and running
Mar 28 11:02:55.650: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z7xsb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:02:55.665: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:02:55.665: INFO: stdout: ""
Mar 28 11:02:55.665: INFO: update-demo-nautilus-z7xsb is created but not running
Mar 28 11:03:00.665: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:03:00.690: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:03:00.690: INFO: stdout: "update-demo-nautilus-6hwmw update-demo-nautilus-z7xsb"
Mar 28 11:03:00.690: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6hwmw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:03:00.703: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:03:00.703: INFO: stdout: "true"
Mar 28 11:03:00.703: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6hwmw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:03:00.717: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:03:00.717: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Mar 28 11:03:00.717: INFO: validating pod update-demo-nautilus-6hwmw
Mar 28 11:03:00.718: INFO: got data: {
"image": "nautilus.jpg"
}
Mar 28 11:03:00.718: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 28 11:03:00.718: INFO: update-demo-nautilus-6hwmw is verified up and running
Mar 28 11:03:00.718: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z7xsb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:03:00.730: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:03:00.731: INFO: stdout: "true"
Mar 28 11:03:00.731: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z7xsb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:03:00.742: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:03:00.742: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Mar 28 11:03:00.742: INFO: validating pod update-demo-nautilus-z7xsb
Mar 28 11:03:00.744: INFO: got data: {
"image": "nautilus.jpg"
}
Mar 28 11:03:00.744: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 28 11:03:00.744: INFO: update-demo-nautilus-z7xsb is verified up and running
STEP: scaling down the replication controller
Mar 28 11:03:00.744: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:03:02.770: INFO: stderr: ""
Mar 28 11:03:02.770: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" scaled"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 28 11:03:02.770: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:03:02.783: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:03:02.783: INFO: stdout: "update-demo-nautilus-6hwmw"
Mar 28 11:03:02.783: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6hwmw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:03:02.795: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:03:02.795: INFO: stdout: "true"
Mar 28 11:03:02.795: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6hwmw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:03:02.807: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:03:02.808: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Mar 28 11:03:02.808: INFO: validating pod update-demo-nautilus-6hwmw
Mar 28 11:03:02.809: INFO: got data: {
"image": "nautilus.jpg"
}
Mar 28 11:03:02.809: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 28 11:03:02.809: INFO: update-demo-nautilus-6hwmw is verified up and running
STEP: scaling up the replication controller
Mar 28 11:03:02.809: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:03:04.829: INFO: stderr: ""
Mar 28 11:03:04.829: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" scaled"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 28 11:03:04.829: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:03:04.842: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:03:04.842: INFO: stdout: "update-demo-nautilus-6hwmw update-demo-nautilus-x162m"
Mar 28 11:03:04.842: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6hwmw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:03:04.854: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:03:04.854: INFO: stdout: "true"
Mar 28 11:03:04.854: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6hwmw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:03:04.868: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:03:04.868: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Mar 28 11:03:04.868: INFO: validating pod update-demo-nautilus-6hwmw
Mar 28 11:03:04.870: INFO: got data: {
"image": "nautilus.jpg"
}
Mar 28 11:03:04.870: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 28 11:03:04.870: INFO: update-demo-nautilus-6hwmw is verified up and running
Mar 28 11:03:04.870: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x162m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:03:04.882: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:03:04.882: INFO: stdout: ""
Mar 28 11:03:04.882: INFO: update-demo-nautilus-x162m is created but not running
Mar 28 11:03:09.882: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:03:09.901: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:03:09.901: INFO: stdout: "update-demo-nautilus-6hwmw update-demo-nautilus-x162m"
Mar 28 11:03:09.901: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6hwmw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:03:09.913: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:03:09.913: INFO: stdout: "true"
Mar 28 11:03:09.913: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6hwmw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:03:09.925: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:03:09.925: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Mar 28 11:03:09.925: INFO: validating pod update-demo-nautilus-6hwmw
Mar 28 11:03:09.926: INFO: got data: {
"image": "nautilus.jpg"
}
Mar 28 11:03:09.926: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 28 11:03:09.926: INFO: update-demo-nautilus-6hwmw is verified up and running
Mar 28 11:03:09.926: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x162m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:03:09.938: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:03:09.938: INFO: stdout: "true"
Mar 28 11:03:09.938: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x162m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:03:09.950: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:03:09.950: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Mar 28 11:03:09.950: INFO: validating pod update-demo-nautilus-x162m
Mar 28 11:03:09.951: INFO: got data: {
"image": "nautilus.jpg"
}
Mar 28 11:03:09.951: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 28 11:03:09.951: INFO: update-demo-nautilus-x162m is verified up and running
STEP: using delete to clean up resources
Mar 28 11:03:09.951: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 -f ../../docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:03:11.971: INFO: stderr: ""
Mar 28 11:03:11.971: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" deleted"
Mar 28 11:03:11.971: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-quls9'
Mar 28 11:03:11.983: INFO: stderr: ""
Mar 28 11:03:11.983: INFO: stdout: ""
Mar 28 11:03:11.983: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-quls9 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 28 11:03:11.995: INFO: stderr: ""
Mar 28 11:03:11.995: INFO: stdout: ""
[AfterEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:03:11.995: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-quls9" for this suite.
• [SLOW TEST:43.529 seconds]
Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1145
Update Demo
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:160
should scale a replication controller [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:149
------------------------------
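The scale exercise above maps onto a handful of kubectl invocations that can be replayed by hand; a minimal sketch using the same flags the test shells out to (the namespace placeholder is illustrative, and the manifest path is relative to the e2e test directory as in the log):
kubectl --kubeconfig=/root/.kube/config create -f ../../docs/user-guide/update-demo/nautilus-rc.yaml --namespace=<ns>
kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=<ns>
kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=<ns>
kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 -f ../../docs/user-guide/update-demo/nautilus-rc.yaml --namespace=<ns>
The deprecated --api-version flag the test still passes on its get calls only produces the stderr warning seen repeatedly above and, per that warning, is no longer respected.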
SchedulerPredicates [Serial]
validates resource limits of pods that are allowed to run [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:355
[BeforeEach] SchedulerPredicates [Serial]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:03:32.001: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:03:32.002: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-sched-pred-bv9fi
Mar 28 11:03:32.003: INFO: Get service account default in ns e2e-tests-sched-pred-bv9fi failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:03:34.006: INFO: Service account default in ns e2e-tests-sched-pred-bv9fi with secrets found. (2.003553317s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:03:34.006: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-sched-pred-bv9fi
Mar 28 11:03:34.008: INFO: Service account default in ns e2e-tests-sched-pred-bv9fi with secrets found. (2.374603ms)
[BeforeEach] SchedulerPredicates [Serial]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:199
Mar 28 11:03:34.013: INFO: Waiting for terminating namespaces to be deleted...
Mar 28 11:03:34.019: INFO: >>> testContext.KubeConfig: /root/.kube/config
Mar 28 11:03:34.020: INFO: Waiting up to 2m0s for all pods (need at least 1) in namespace 'kube-system' to be running and ready
Mar 28 11:03:34.026: INFO: 1 / 1 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 28 11:03:34.026: INFO: expected 1 pod replicas in namespace 'kube-system', 1 are Running and Ready.
Mar 28 11:03:34.026: INFO:
Logging pods the kubelet thinks is on node 127.0.0.1 before test
Mar 28 11:03:34.035: INFO: k8s-etcd-127.0.0.1 from default started at <nil> (0 container statuses recorded)
Mar 28 11:03:34.035: INFO: k8s-proxy-127.0.0.1 from default started at <nil> (0 container statuses recorded)
Mar 28 11:03:34.035: INFO: k8s-master-127.0.0.1 from default started at <nil> (0 container statuses recorded)
Mar 28 11:03:34.035: INFO: kube-dns-v10-d8w4s from kube-system started at 2016-03-28 10:54:41 -0700 PDT (4 container statuses recorded)
Mar 28 11:03:34.035: INFO: Container etcd ready: true, restart count 0
Mar 28 11:03:34.035: INFO: Container healthz ready: true, restart count 0
Mar 28 11:03:34.035: INFO: Container kube2sky ready: true, restart count 0
Mar 28 11:03:34.035: INFO: Container skydns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:355
Mar 28 11:03:34.047: INFO: Pod k8s-etcd-127.0.0.1 requesting resource 0 on Node 127.0.0.1
Mar 28 11:03:34.047: INFO: Pod k8s-master-127.0.0.1 requesting resource 0 on Node 127.0.0.1
Mar 28 11:03:34.047: INFO: Pod k8s-proxy-127.0.0.1 requesting resource 0 on Node 127.0.0.1
Mar 28 11:03:34.047: INFO: Pod kube-dns-v10-d8w4s requesting resource 310 on Node 127.0.0.1
Mar 28 11:03:34.047: INFO: Node: 127.0.0.1 has capacity: 3690
STEP: Starting additional 7 Pods to fully saturate the cluster CPU and trying to start another one
Mar 28 11:03:34.144: INFO: Waiting for running...
Mar 28 11:03:44.147: INFO: Sleeping 10 seconds and crossing our fingers that scheduler will run in that time.
STEP: Removing all pods in namespace e2e-tests-sched-pred-bv9fi
[AfterEach] SchedulerPredicates [Serial]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:158
[AfterEach] SchedulerPredicates [Serial]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:03:54.382: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-bv9fi" for this suite.
• [SLOW TEST:27.401 seconds]
SchedulerPredicates [Serial]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:729
validates resource limits of pods that are allowed to run [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:355
------------------------------
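The arithmetic behind this spec is visible in the log: the node advertises 3690 millicores of allocatable CPU, kube-dns already requests 310, so the test starts seven filler pods to consume the remainder and then expects one more pod to stay Pending for lack of CPU. A filler of that kind is just a pod with a CPU request; a minimal sketch, not the test's own manifest (pod name, image, and the 500m request are illustrative, not the values the test computes):
kubectl --kubeconfig=/root/.kube/config create --namespace=<ns> -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cpu-filler
spec:
  containers:
  - name: pause
    image: gcr.io/google_containers/pause:2.0
    resources:
      requests:
        cpu: 500m
EOF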
Kubectl client Kubectl label
should update the label on a resource [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:776
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:03:59.402: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:03:59.403: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-90sdt
Mar 28 11:03:59.404: INFO: Get service account default in ns e2e-tests-kubectl-90sdt failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:04:01.405: INFO: Service account default in ns e2e-tests-kubectl-90sdt with secrets found. (2.001740462s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:04:01.405: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-90sdt
Mar 28 11:04:01.406: INFO: Service account default in ns e2e-tests-kubectl-90sdt with secrets found. (620.046µs)
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:120
[BeforeEach] Kubectl label
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:752
STEP: creating the pod
Mar 28 11:04:01.406: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config create -f ../../docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-90sdt'
Mar 28 11:04:01.448: INFO: stderr: ""
Mar 28 11:04:01.448: INFO: stdout: "pod \"nginx\" created"
Mar 28 11:04:01.448: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [nginx]
Mar 28 11:04:01.448: INFO: Waiting up to 5m0s for pod nginx status to be running and ready
Mar 28 11:04:01.449: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-90sdt' status to be 'running and ready'(found phase: "Pending", readiness: false) (1.017489ms elapsed)
Mar 28 11:04:03.450: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-90sdt' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.002321673s elapsed)
Mar 28 11:04:05.451: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-90sdt' status to be 'running and ready'(found phase: "Pending", readiness: false) (4.003551962s elapsed)
Mar 28 11:04:07.453: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-90sdt' status to be 'running and ready'(found phase: "Pending", readiness: false) (6.004912184s elapsed)
Mar 28 11:04:09.454: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-90sdt' status to be 'running and ready'(found phase: "Pending", readiness: false) (8.006237882s elapsed)
Mar 28 11:04:11.456: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [nginx]
[It] should update the label on a resource [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:776
STEP: adding the label testing-label with value testing-label-value to a pod
Mar 28 11:04:11.456: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config label pods nginx testing-label=testing-label-value --namespace=e2e-tests-kubectl-90sdt'
Mar 28 11:04:11.473: INFO: stderr: ""
Mar 28 11:04:11.473: INFO: stdout: "pod \"nginx\" labeled"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Mar 28 11:04:11.473: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pod nginx -L testing-label --namespace=e2e-tests-kubectl-90sdt'
Mar 28 11:04:11.485: INFO: stderr: ""
Mar 28 11:04:11.485: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\nnginx 1/1 Running 0 10s testing-label-value"
STEP: removing the label testing-label of a pod
Mar 28 11:04:11.485: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config label pods nginx testing-label- --namespace=e2e-tests-kubectl-90sdt'
Mar 28 11:04:11.499: INFO: stderr: ""
Mar 28 11:04:11.499: INFO: stdout: "pod \"nginx\" labeled"
STEP: verifying the pod doesn't have the label testing-label
Mar 28 11:04:11.499: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pod nginx -L testing-label --namespace=e2e-tests-kubectl-90sdt'
Mar 28 11:04:11.511: INFO: stderr: ""
Mar 28 11:04:11.511: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\nnginx 1/1 Running 0 10s <none>"
[AfterEach] Kubectl label
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:755
STEP: using delete to clean up resources
Mar 28 11:04:11.511: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 -f ../../docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-90sdt'
Mar 28 11:04:11.532: INFO: stderr: ""
Mar 28 11:04:11.532: INFO: stdout: "pod \"nginx\" deleted"
Mar 28 11:04:11.532: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-90sdt'
Mar 28 11:04:11.544: INFO: stderr: ""
Mar 28 11:04:11.544: INFO: stdout: ""
Mar 28 11:04:11.544: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-90sdt -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 28 11:04:11.554: INFO: stderr: ""
Mar 28 11:04:11.554: INFO: stdout: ""
[AfterEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:04:11.554: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-90sdt" for this suite.
• [SLOW TEST:17.157 seconds]
Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1145
Kubectl label
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:777
should update the label on a resource [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:776
------------------------------
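The label round-trip in this spec is easy to replay against any running pod; a minimal sketch using the same commands the test runs (the namespace placeholder is illustrative):
kubectl --kubeconfig=/root/.kube/config label pods nginx testing-label=testing-label-value --namespace=<ns>
kubectl --kubeconfig=/root/.kube/config get pod nginx -L testing-label --namespace=<ns>
kubectl --kubeconfig=/root/.kube/config label pods nginx testing-label- --namespace=<ns>
The trailing-dash form removes the label, after which the -L column shows <none>, matching the stdout captured above.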
Kubectl client Kubectl version
should check is all data is printed [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:878
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:04:16.559: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:04:16.561: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-47s1a
Mar 28 11:04:16.561: INFO: Get service account default in ns e2e-tests-kubectl-47s1a failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:04:18.562: INFO: Service account default in ns e2e-tests-kubectl-47s1a with secrets found. (2.001581456s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:04:18.562: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-47s1a
Mar 28 11:04:18.563: INFO: Service account default in ns e2e-tests-kubectl-47s1a with secrets found. (604.066µs)
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:120
[It] should check is all data is printed [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:878
Mar 28 11:04:18.563: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config version'
Mar 28 11:04:18.573: INFO: stderr: ""
Mar 28 11:04:18.573: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"2\", GitVersion:\"v1.2.0\", GitCommit:\"5cb86ee022267586db386f62781338b0483733b3\", GitTreeState:\"clean\"}\nServer Version: version.Info{Major:\"1\", Minor:\"2\", GitVersion:\"v1.2.0\", GitCommit:\"5cb86ee022267586db386f62781338b0483733b3\", GitTreeState:\"clean\"}"
[AfterEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:04:18.573: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-47s1a" for this suite.
• [SLOW TEST:7.021 seconds]
Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1145
Kubectl version
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:879
should check is all data is printed [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:878
------------------------------
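This spec only asserts that both halves of the version report are present, which is reproducible with a single call:
kubectl --kubeconfig=/root/.kube/config version
Both a Client Version and a Server Version line should appear, as in the stdout captured above.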
Docker Containers
should be able to override the image's default commmand (docker entrypoint) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:61
[BeforeEach] Docker Containers
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:04:23.580: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:04:23.583: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-749bv
Mar 28 11:04:23.585: INFO: Get service account default in ns e2e-tests-containers-749bv failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:04:25.586: INFO: Service account default in ns e2e-tests-containers-749bv with secrets found. (2.002472275s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:04:25.586: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-749bv
Mar 28 11:04:25.586: INFO: Service account default in ns e2e-tests-containers-749bv with secrets found. (615.165µs)
[BeforeEach] Docker Containers
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:35
[It] should be able to override the image's default commmand (docker entrypoint) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:61
STEP: Creating a pod to test override command
Mar 28 11:04:25.588: INFO: Waiting up to 5m0s for pod client-containers-81cfe9be-f50f-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:04:25.590: INFO: No Status.Info for container 'test-container' in pod 'client-containers-81cfe9be-f50f-11e5-b1e1-0862662cf845' yet
Mar 28 11:04:25.590: INFO: Waiting for pod client-containers-81cfe9be-f50f-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-containers-749bv' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.33194ms elapsed)
Mar 28 11:04:27.591: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-81cfe9be-f50f-11e5-b1e1-0862662cf845' in namespace 'e2e-tests-containers-749bv' so far
Mar 28 11:04:27.591: INFO: Waiting for pod client-containers-81cfe9be-f50f-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-containers-749bv' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.002756497s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod client-containers-81cfe9be-f50f-11e5-b1e1-0862662cf845 container test-container: <nil>
STEP: Successfully fetched pod logs:[/ep-2]
[AfterEach] Docker Containers
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:04:29.609: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-749bv" for this suite.
• [SLOW TEST:11.041 seconds]
Docker Containers
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:72
should be able to override the image's default commmand (docker entrypoint) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:61
------------------------------
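What this spec exercises is the Kubernetes-side override of an image's ENTRYPOINT: setting command on a container replaces the entrypoint baked into the image (args would likewise replace its CMD). A minimal sketch of the same mechanism, not the test's own manifest; the pod name, image, and echoed string are illustrative:
kubectl --kubeconfig=/root/.kube/config create --namespace=<ns> -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox:1.24
    command: ["echo", "entrypoint overridden"]
EOF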
S
------------------------------
EmptyDir volumes
should support (root,0777,default) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:100
[BeforeEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:04:34.622: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:04:34.627: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-ddcrr
Mar 28 11:04:34.629: INFO: Get service account default in ns e2e-tests-emptydir-ddcrr failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:04:36.629: INFO: Service account default in ns e2e-tests-emptydir-ddcrr with secrets found. (2.002436361s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:04:36.630: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-ddcrr
Mar 28 11:04:36.630: INFO: Service account default in ns e2e-tests-emptydir-ddcrr with secrets found. (577.9µs)
[It] should support (root,0777,default) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:100
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 28 11:04:36.632: INFO: Waiting up to 5m0s for pod pod-88650b75-f50f-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:04:36.633: INFO: No Status.Info for container 'test-container' in pod 'pod-88650b75-f50f-11e5-b1e1-0862662cf845' yet
Mar 28 11:04:36.633: INFO: Waiting for pod pod-88650b75-f50f-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-emptydir-ddcrr' status to be 'success or failure'(found phase: "Pending", readiness: false) (978.141µs elapsed)
Mar 28 11:04:38.634: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-88650b75-f50f-11e5-b1e1-0862662cf845' in namespace 'e2e-tests-emptydir-ddcrr' so far
Mar 28 11:04:38.634: INFO: Waiting for pod pod-88650b75-f50f-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-emptydir-ddcrr' status to be 'success or failure'(found phase: "Running", readiness: true) (2.00224314s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-88650b75-f50f-11e5-b1e1-0862662cf845 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rwxrwxrwx
[AfterEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:04:40.649: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ddcrr" for this suite.
• [SLOW TEST:11.037 seconds]
EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:113
should support (root,0777,default) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:100
------------------------------
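The EmptyDir specs in this suite all follow the same shape: a pod mounts an emptyDir volume, the test container writes a file with a given mode, and its logs report the mount type, file content, and permissions (here 0777 on the node's default medium). A minimal sketch of such a pod, not the test's own manifest; the pod name, image, and shell command are illustrative, and adding medium: Memory under emptyDir switches to the tmpfs variants seen later in this run:
kubectl --kubeconfig=/root/.kube/config create --namespace=<ns> -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox:1.24
    command: ["sh", "-c", "touch /test-volume/test-file && ls -l /test-volume/test-file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
EOF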
S
------------------------------
ServiceAccounts
should mount an API token into pods [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:239
[BeforeEach] ServiceAccounts
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:04:45.659: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:04:45.660: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-svcaccounts-1ordn
Mar 28 11:04:45.661: INFO: Get service account default in ns e2e-tests-svcaccounts-1ordn failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:04:47.662: INFO: Service account default in ns e2e-tests-svcaccounts-1ordn with secrets found. (2.001613328s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:04:47.662: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-svcaccounts-1ordn
Mar 28 11:04:47.662: INFO: Service account default in ns e2e-tests-svcaccounts-1ordn with secrets found. (596.583µs)
[It] should mount an API token into pods [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:239
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Mar 28 11:04:48.166: INFO: Waiting up to 5m0s for pod pod-service-account-8f4501ad-f50f-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:04:48.168: INFO: No Status.Info for container 'token-test' in pod 'pod-service-account-8f4501ad-f50f-11e5-b1e1-0862662cf845' yet
Mar 28 11:04:48.168: INFO: Waiting for pod pod-service-account-8f4501ad-f50f-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-svcaccounts-1ordn' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.949449ms elapsed)
Mar 28 11:04:50.170: INFO: Nil State.Terminated for container 'token-test' in pod 'pod-service-account-8f4501ad-f50f-11e5-b1e1-0862662cf845' in namespace 'e2e-tests-svcaccounts-1ordn' so far
Mar 28 11:04:50.170: INFO: Waiting for pod pod-service-account-8f4501ad-f50f-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-svcaccounts-1ordn' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.003420666s elapsed)
STEP: Saw pod success
Mar 28 11:04:52.171: INFO: Waiting up to 5m0s for pod pod-service-account-8f4501ad-f50f-11e5-b1e1-0862662cf845 status to be success or failure
STEP: Saw pod success
Mar 28 11:04:52.173: INFO: Waiting up to 5m0s for pod pod-service-account-8f4501ad-f50f-11e5-b1e1-0862662cf845 status to be success or failure
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-service-account-8f4501ad-f50f-11e5-b1e1-0862662cf845 container token-test: <nil>
STEP: Successfully fetched pod logs:content of file "/var/run/secrets/kubernetes.io/serviceaccount/token": eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJlMmUtdGVzdHMtc3ZjYWNjb3VudHMtMW9yZG4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiZGVmYXVsdC10b2tlbi1waGQwbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiOGRjNmY3MzAtZjUwZi0xMWU1LWEzZmItMDg2MjY2MmNmODQ1Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmUyZS10ZXN0cy1zdmNhY2NvdW50cy0xb3JkbjpkZWZhdWx0In0.pBPFoOD9WypDF3WuzpA4k7fP89-75tIkwoy_ksguP0eJglzHW5eaKAbC373c5GQGNvB4pTJiaNRu0fWjZzjbUBMAJurvj5-hWv--yWmwIzztAtYA-iwWCcaz2ozYZRFwZcqWlGp_uUc2bo5QQ_JxmCjfOoCpid4_hUYAHd_JRL0A3izNIO0vlZGLLB19iRt9aYfLjgg2gcqrXOcCQ_GiT2qYIYwUzNBq8Ixa7kWb_84YPcbPy-Ea_D4W9HZ9MZGSHJHiDj2ef15CqsF-QefLtR74UW8s73OtL138zDZDvCIwE3A2KmheGxv2JEBwFalxqrziZjBfMFVg4fpgshP7Xg
STEP: Creating a pod to test consume service account root CA
Mar 28 11:04:52.197: INFO: Waiting up to 5m0s for pod pod-service-account-8f4501ad-f50f-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:04:52.198: INFO: No Status.Info for container 'token-test' in pod 'pod-service-account-8f4501ad-f50f-11e5-b1e1-0862662cf845' yet
Mar 28 11:04:52.198: INFO: Waiting for pod pod-service-account-8f4501ad-f50f-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-svcaccounts-1ordn' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.001469ms elapsed)
Mar 28 11:04:54.199: INFO: Nil State.Terminated for container 'token-test' in pod 'pod-service-account-8f4501ad-f50f-11e5-b1e1-0862662cf845' in namespace 'e2e-tests-svcaccounts-1ordn' so far
Mar 28 11:04:54.199: INFO: Waiting for pod pod-service-account-8f4501ad-f50f-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-svcaccounts-1ordn' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.002497061s elapsed)
STEP: Saw pod success
Mar 28 11:04:56.201: INFO: Waiting up to 5m0s for pod pod-service-account-8f4501ad-f50f-11e5-b1e1-0862662cf845 status to be success or failure
STEP: Saw pod success
Mar 28 11:04:56.202: INFO: Waiting up to 5m0s for pod pod-service-account-8f4501ad-f50f-11e5-b1e1-0862662cf845 status to be success or failure
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-service-account-8f4501ad-f50f-11e5-b1e1-0862662cf845 container root-ca-test: <nil>
STEP: Successfully fetched pod logs:content of file "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt": -----BEGIN CERTIFICATE-----
MIIDVjCCAj6gAwIBAgIJAKnvxlDsDEbHMA0GCSqGSIb3DQEBCwUAMCExHzAdBgNV
BAMMFjEwLjcuNjQuMTQ3QDE0NTkxODc2NDcwHhcNMTYwMzI4MTc1NDA3WhcNMjYw
MzI2MTc1NDA3WjAhMR8wHQYDVQQDDBYxMC43LjY0LjE0N0AxNDU5MTg3NjQ3MIIB
IjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA3GqFJFil4R3eHJUYFbxh7DBT
alTlGAqQKyJ5zAlGidygj0iSamu0/DRliUBUXc9ZTRJDluq+ixQJZSilBoiNrr0n
71oS4uVsfNA3kvty++IQJvfdSUCNu6zjvZv9owhvzm1g/QpsYJDaRNc1STRNAj4P
Q1TaQfcT2jiDW333eddwtK0qkhhsqIv51BB3WceanQVW9lo4yUEmHmvDZKeolCpO
cx9ywanV2DKql5IyvBdyf1F0pf64cxtbVwzKKoJ6CN7x5bEfdAAYYyudS79MQuTX
um2iGmIeDxcvjasw8MQ/JbxjI9M94AZ2hu3LQWO4zRXMKuIuqhDsEbMpbWF+vQID
AQABo4GQMIGNMB0GA1UdDgQWBBQt4hI9GEKXooOIgCU+V+CfwGL7HTBRBgNVHSME
SjBIgBQt4hI9GEKXooOIgCU+V+CfwGL7HaElpCMwITEfMB0GA1UEAwwWMTAuNy42
NC4xNDdAMTQ1OTE4NzY0N4IJAKnvxlDsDEbHMAwGA1UdEwQFMAMBAf8wCwYDVR0P
BAQDAgEGMA0GCSqGSIb3DQEBCwUAA4IBAQDAgE5MJy52cWnPuOOqgskFH+s9VIbP
tn6IqrowZnrOzWszzo/QlWQwLHThNNE2IPPK14xjFsCEsklYRjUyhPiYeaxloLez
Sh+/5vPsx6x3QLZs+pltvaHdCSdrQEEgn+JjnihWcNr138iZeNhRUFJqloRQgoDz
yktmDHq+t+TLPYVGKsUa93gwh3ahES0QJqLIXaVhoMOeYBr037zGAQeNmKRzkSdR
/PZvk6LP5lFsmOm+0ZcTdZkz10UTz+7KogbJQhQukk0uGykToFAn29mJyOLKdj/O
eE5XOuoGOhm8E3aNyu4FzMecpHeSIopxB3+OwNtNRkIzp6L+vgCq3xnX
-----END CERTIFICATE-----
STEP: Creating a pod to test consume service account namespace
Mar 28 11:04:56.228: INFO: Waiting up to 5m0s for pod pod-service-account-8f4501ad-f50f-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:04:56.230: INFO: No Status.Info for container 'token-test' in pod 'pod-service-account-8f4501ad-f50f-11e5-b1e1-0862662cf845' yet
Mar 28 11:04:56.230: INFO: Waiting for pod pod-service-account-8f4501ad-f50f-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-svcaccounts-1ordn' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.489626ms elapsed)
Mar 28 11:04:58.231: INFO: Nil State.Terminated for container 'token-test' in pod 'pod-service-account-8f4501ad-f50f-11e5-b1e1-0862662cf845' in namespace 'e2e-tests-svcaccounts-1ordn' so far
Mar 28 11:04:58.231: INFO: Waiting for pod pod-service-account-8f4501ad-f50f-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-svcaccounts-1ordn' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.00391919s elapsed)
STEP: Saw pod success
Mar 28 11:05:00.234: INFO: Waiting up to 5m0s for pod pod-service-account-8f4501ad-f50f-11e5-b1e1-0862662cf845 status to be success or failure
STEP: Saw pod success
Mar 28 11:05:00.236: INFO: Waiting up to 5m0s for pod pod-service-account-8f4501ad-f50f-11e5-b1e1-0862662cf845 status to be success or failure
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-service-account-8f4501ad-f50f-11e5-b1e1-0862662cf845 container namespace-test: <nil>
STEP: Successfully fetched pod logs:content of file "/var/run/secrets/kubernetes.io/serviceaccount/namespace": e2e-tests-svcaccounts-1ordn
[AfterEach] ServiceAccounts
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:05:00.261: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-1ordn" for this suite.
• [SLOW TEST:19.613 seconds]
ServiceAccounts
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:240
should mount an API token into pods [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:239
------------------------------
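The three files this spec reads are the standard service-account mount that every pod receives by default; they can be inspected from any running pod in the namespace. A minimal sketch (the pod name and namespace are placeholders; the paths are the ones shown in the fetched logs above):
kubectl --kubeconfig=/root/.kube/config exec <pod> --namespace=<ns> -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
kubectl --kubeconfig=/root/.kube/config exec <pod> --namespace=<ns> -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubectl --kubeconfig=/root/.kube/config exec <pod> --namespace=<ns> -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace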
SSSS
------------------------------
EmptyDir volumes
should support (root,0666,tmpfs) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:68
[BeforeEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:05:05.271: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:05:05.273: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-x1tt9
Mar 28 11:05:05.273: INFO: Get service account default in ns e2e-tests-emptydir-x1tt9 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:05:07.276: INFO: Service account default in ns e2e-tests-emptydir-x1tt9 with secrets found. (2.003464249s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:05:07.276: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-x1tt9
Mar 28 11:05:07.277: INFO: Service account default in ns e2e-tests-emptydir-x1tt9 with secrets found. (665.204µs)
[It] should support (root,0666,tmpfs) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:68
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 28 11:05:07.279: INFO: Waiting up to 5m0s for pod pod-9aa95ec8-f50f-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:05:07.280: INFO: No Status.Info for container 'test-container' in pod 'pod-9aa95ec8-f50f-11e5-b1e1-0862662cf845' yet
Mar 28 11:05:07.280: INFO: Waiting for pod pod-9aa95ec8-f50f-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-emptydir-x1tt9' status to be 'success or failure'(found phase: "Pending", readiness: false) (864.473µs elapsed)
Mar 28 11:05:09.282: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-9aa95ec8-f50f-11e5-b1e1-0862662cf845' in namespace 'e2e-tests-emptydir-x1tt9' so far
Mar 28 11:05:09.282: INFO: Waiting for pod pod-9aa95ec8-f50f-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-emptydir-x1tt9' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.002555314s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-9aa95ec8-f50f-11e5-b1e1-0862662cf845 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-rw-rw-
[AfterEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:05:11.302: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-x1tt9" for this suite.
• [SLOW TEST:11.040 seconds]
EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:113
should support (root,0666,tmpfs) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:68
------------------------------
SS
------------------------------
PreStop
should call prestop when killing a pod [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:166
[BeforeEach] PreStop
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:05:16.311: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:05:16.313: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-prestop-o2mqk
Mar 28 11:05:16.313: INFO: Get service account default in ns e2e-tests-prestop-o2mqk failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:05:18.314: INFO: Service account default in ns e2e-tests-prestop-o2mqk with secrets found. (2.001532101s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:05:18.314: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-prestop-o2mqk
Mar 28 11:05:18.315: INFO: Service account default in ns e2e-tests-prestop-o2mqk with secrets found. (691.225µs)
[It] should call prestop when killing a pod [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:166
STEP: Creating server pod server in namespace e2e-tests-prestop-o2mqk
STEP: Waiting for pods to come up.
Mar 28 11:05:18.317: INFO: Waiting up to 5m0s for pod server status to be running
Mar 28 11:05:18.318: INFO: Waiting for pod server in namespace 'e2e-tests-prestop-o2mqk' status to be 'running'(found phase: "Pending", readiness: false) (1.39899ms elapsed)
Mar 28 11:05:20.320: INFO: Waiting for pod server in namespace 'e2e-tests-prestop-o2mqk' status to be 'running'(found phase: "Pending", readiness: false) (2.002711551s elapsed)
Mar 28 11:05:22.321: INFO: Waiting for pod server in namespace 'e2e-tests-prestop-o2mqk' status to be 'running'(found phase: "Pending", readiness: false) (4.004028493s elapsed)
Mar 28 11:05:24.322: INFO: Found pod 'server' on node '127.0.0.1'
STEP: Creating tester pod tester in namespace e2e-tests-prestop-o2mqk
Mar 28 11:05:24.325: INFO: Waiting up to 5m0s for pod tester status to be running
Mar 28 11:05:24.326: INFO: Waiting for pod tester in namespace 'e2e-tests-prestop-o2mqk' status to be 'running'(found phase: "Pending", readiness: false) (1.140804ms elapsed)
Mar 28 11:05:26.327: INFO: Waiting for pod tester in namespace 'e2e-tests-prestop-o2mqk' status to be 'running'(found phase: "Pending", readiness: false) (2.00243635s elapsed)
Mar 28 11:05:28.328: INFO: Found pod 'tester' on node '127.0.0.1'
STEP: Deleting pre-stop pod
Mar 28 11:05:33.335: INFO: Saw: {
"Hostname": "server",
"Sent": null,
"Received": {
"prestop": 1
},
"Errors": null,
"Log": [
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again."
],
"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] PreStop
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:05:33.345: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-o2mqk" for this suite.
• [SLOW TEST:52.042 seconds]
PreStop
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:167
should call prestop when killing a pod [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:166
------------------------------
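The mechanism verified here is the pod lifecycle preStop handler: when the tester pod is deleted, its preStop hook fires and reports back to the server pod, which is why the server's state above shows "prestop": 1. A minimal sketch of the preStop API surface with an exec handler, not the conformance test's own wiring; the pod name, image, and commands are illustrative:
kubectl --kubeconfig=/root/.kube/config create --namespace=<ns> -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  containers:
  - name: main
    image: gcr.io/google_containers/busybox:1.24
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop hook ran"]
EOF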
S
------------------------------
EmptyDir volumes
should support (root,0666,default) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:96
[BeforeEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:06:08.354: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:06:08.355: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-phrpo
Mar 28 11:06:08.356: INFO: Get service account default in ns e2e-tests-emptydir-phrpo failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:06:10.359: INFO: Service account default in ns e2e-tests-emptydir-phrpo with secrets found. (2.003432971s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:06:10.359: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-phrpo
Mar 28 11:06:10.361: INFO: Service account default in ns e2e-tests-emptydir-phrpo with secrets found. (2.316953ms)
[It] should support (root,0666,default) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:96
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 28 11:06:10.367: INFO: Waiting up to 5m0s for pod pod-c0433e26-f50f-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:06:10.370: INFO: No Status.Info for container 'test-container' in pod 'pod-c0433e26-f50f-11e5-b1e1-0862662cf845' yet
Mar 28 11:06:10.370: INFO: Waiting for pod pod-c0433e26-f50f-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-emptydir-phrpo' status to be 'success or failure'(found phase: "Pending", readiness: false) (3.164464ms elapsed)
Mar 28 11:06:12.375: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-c0433e26-f50f-11e5-b1e1-0862662cf845' in namespace 'e2e-tests-emptydir-phrpo' so far
Mar 28 11:06:12.375: INFO: Waiting for pod pod-c0433e26-f50f-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-emptydir-phrpo' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.007513522s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-c0433e26-f50f-11e5-b1e1-0862662cf845 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-rw-rw-
[AfterEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:06:14.392: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-phrpo" for this suite.
• [SLOW TEST:11.047 seconds]
EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:113
should support (root,0666,default) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:96
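The emptyDir specs in this run (this one, the 0644 case, and the tmpfs case further down) all use the same pod shape and vary only the requested file permissions and the volume medium. A minimal sketch of that shape, assuming a busybox placeholder image and an illustrative command in place of the test's mount-tester arguments; a tmpfs-backed variant would set emptyDir: {medium: Memory} instead of the empty default:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo                 # illustrative
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox                    # placeholder for gcr.io/google_containers/mounttest
        command: ["sh", "-c", "touch /test-volume/test-file && ls -l /test-volume/test-file"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}                      # default medium: node-local storage
    EOF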
------------------------------
S
------------------------------
Kubectl client Update Demo
should create and stop a replication controller [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:135
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:06:19.401: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:06:19.402: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-37q0w
Mar 28 11:06:19.403: INFO: Get service account default in ns e2e-tests-kubectl-37q0w failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:06:21.405: INFO: Service account default in ns e2e-tests-kubectl-37q0w with secrets found. (2.002355156s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:06:21.405: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-37q0w
Mar 28 11:06:21.406: INFO: Service account default in ns e2e-tests-kubectl-37q0w with secrets found. (1.414622ms)
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:120
[BeforeEach] Update Demo
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:128
[It] should create and stop a replication controller [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:135
STEP: creating a replication controller
Mar 28 11:06:21.406: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config create -f ../../docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-37q0w'
Mar 28 11:06:21.452: INFO: stderr: ""
Mar 28 11:06:21.452: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" created"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 28 11:06:21.452: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-37q0w'
Mar 28 11:06:21.471: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:06:21.471: INFO: stdout: "update-demo-nautilus-sjdrm update-demo-nautilus-uhsw7"
Mar 28 11:06:21.471: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sjdrm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-37q0w'
Mar 28 11:06:21.485: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:06:21.485: INFO: stdout: ""
Mar 28 11:06:21.485: INFO: update-demo-nautilus-sjdrm is created but not running
Mar 28 11:06:26.485: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-37q0w'
Mar 28 11:06:26.498: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:06:26.498: INFO: stdout: "update-demo-nautilus-sjdrm update-demo-nautilus-uhsw7"
Mar 28 11:06:26.498: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sjdrm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-37q0w'
Mar 28 11:06:26.516: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:06:26.516: INFO: stdout: ""
Mar 28 11:06:26.516: INFO: update-demo-nautilus-sjdrm is created but not running
Mar 28 11:06:31.517: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-37q0w'
Mar 28 11:06:31.530: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:06:31.530: INFO: stdout: "update-demo-nautilus-sjdrm update-demo-nautilus-uhsw7"
Mar 28 11:06:31.530: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sjdrm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-37q0w'
Mar 28 11:06:31.542: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:06:31.542: INFO: stdout: "true"
Mar 28 11:06:31.542: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sjdrm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-37q0w'
Mar 28 11:06:31.555: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:06:31.555: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Mar 28 11:06:31.555: INFO: validating pod update-demo-nautilus-sjdrm
Mar 28 11:06:31.556: INFO: got data: {
"image": "nautilus.jpg"
}
Mar 28 11:06:31.556: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 28 11:06:31.556: INFO: update-demo-nautilus-sjdrm is verified up and running
Mar 28 11:06:31.556: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-uhsw7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-37q0w'
Mar 28 11:06:31.573: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:06:31.573: INFO: stdout: "true"
Mar 28 11:06:31.573: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-uhsw7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-37q0w'
Mar 28 11:06:31.585: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:06:31.585: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Mar 28 11:06:31.585: INFO: validating pod update-demo-nautilus-uhsw7
Mar 28 11:06:31.587: INFO: got data: {
"image": "nautilus.jpg"
}
Mar 28 11:06:31.587: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 28 11:06:31.587: INFO: update-demo-nautilus-uhsw7 is verified up and running
STEP: using delete to clean up resources
Mar 28 11:06:31.587: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 -f ../../docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-37q0w'
Mar 28 11:06:33.607: INFO: stderr: ""
Mar 28 11:06:33.607: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" deleted"
Mar 28 11:06:33.607: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-37q0w'
Mar 28 11:06:33.620: INFO: stderr: ""
Mar 28 11:06:33.620: INFO: stdout: ""
Mar 28 11:06:33.620: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-37q0w -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 28 11:06:33.632: INFO: stderr: ""
Mar 28 11:06:33.632: INFO: stdout: ""
[AfterEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:06:33.632: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-37q0w" for this suite.
• [SLOW TEST:19.237 seconds]
Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1145
Update Demo
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:160
should create and stop a replication controller [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:135
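Stripped of the retries and the deprecated --api-version flag (which kubectl itself warns about in the stderr above), the spec reduces to three kubectl operations against the run's namespace:

    kubectl create -f ../../docs/user-guide/update-demo/nautilus-rc.yaml \
        --namespace=e2e-tests-kubectl-37q0w
    kubectl get pods -l name=update-demo --namespace=e2e-tests-kubectl-37q0w \
        -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
    kubectl delete --grace-period=0 -f ../../docs/user-guide/update-demo/nautilus-rc.yaml \
        --namespace=e2e-tests-kubectl-37q0w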
------------------------------
ClusterDns [Feature:Example]
should create pod that uses dns [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/example_cluster_dns.go:150
[BeforeEach] ClusterDns [Feature:Example]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:06:38.638: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:06:38.641: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-cluster-dns-r3vou
Mar 28 11:06:38.641: INFO: Get service account default in ns e2e-tests-cluster-dns-r3vou failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:06:40.644: INFO: Service account default in ns e2e-tests-cluster-dns-r3vou with secrets found. (2.003152138s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:06:40.644: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-cluster-dns-r3vou
Mar 28 11:06:40.646: INFO: Service account default in ns e2e-tests-cluster-dns-r3vou with secrets found. (1.81532ms)
[BeforeEach] ClusterDns [Feature:Example]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/example_cluster_dns.go:50
[It] should create pod that uses dns [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/example_cluster_dns.go:150
Mar 28 11:06:40.648: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-dnsexample0-57syr
Mar 28 11:06:40.649: INFO: Get service account default in ns e2e-tests-dnsexample0-57syr failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:06:42.652: INFO: Service account default in ns e2e-tests-dnsexample0-57syr with secrets found. (2.004006939s)
Mar 28 11:06:42.655: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-dnsexample1-rly9y
Mar 28 11:06:42.657: INFO: Get service account default in ns e2e-tests-dnsexample1-rly9y failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:06:44.660: INFO: Service account default in ns e2e-tests-dnsexample1-rly9y with secrets found. (2.004302884s)
Mar 28 11:06:44.660: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config create -f ../../examples/cluster-dns/dns-backend-rc.yaml --namespace=e2e-tests-dnsexample0-57syr'
Mar 28 11:06:44.746: INFO: stderr: ""
Mar 28 11:06:44.746: INFO: stdout: "replicationcontroller \"dns-backend\" created"
Mar 28 11:06:44.746: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config create -f ../../examples/cluster-dns/dns-backend-rc.yaml --namespace=e2e-tests-dnsexample1-rly9y'
Mar 28 11:06:44.809: INFO: stderr: ""
Mar 28 11:06:44.809: INFO: stdout: "replicationcontroller \"dns-backend\" created"
Mar 28 11:06:44.809: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config create -f ../../examples/cluster-dns/dns-backend-service.yaml --namespace=e2e-tests-dnsexample0-57syr'
Mar 28 11:06:44.878: INFO: stderr: ""
Mar 28 11:06:44.878: INFO: stdout: "service \"dns-backend\" created"
Mar 28 11:06:44.878: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config create -f ../../examples/cluster-dns/dns-backend-service.yaml --namespace=e2e-tests-dnsexample1-rly9y'
Mar 28 11:06:44.930: INFO: stderr: ""
Mar 28 11:06:44.930: INFO: stdout: "service \"dns-backend\" created"
E0328 11:07:14.931818 11167 iowatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
Mar 28 11:07:24.932: INFO: Service dns-backend in namespace e2e-tests-dnsexample0-57syr found.
Mar 28 11:07:29.933: INFO: Service dns-backend in namespace e2e-tests-dnsexample1-rly9y found.
STEP: trying to dial each unique pod
Mar 28 11:07:29.938: INFO: Controller dns-backend: Got non-empty result from replica 1 [dns-backend-wgyx0]: "Hello World!", 1 of 1 required successes so far
Mar 28 11:07:29.938: INFO: found 1 backend pods responding in namespace e2e-tests-dnsexample0-57syr
STEP: trying to dial the service e2e-tests-dnsexample0-57syr.dns-backend via the proxy
Mar 28 11:07:29.939: INFO: Service dns-backend: found nonempty answer: Hello World!
STEP: trying to dial each unique pod
Mar 28 11:07:29.943: INFO: Controller dns-backend: Got non-empty result from replica 1 [dns-backend-oefuf]: "Hello World!", 1 of 1 required successes so far
Mar 28 11:07:29.943: INFO: found 1 backend pods responding in namespace e2e-tests-dnsexample1-rly9y
STEP: trying to dial the service e2e-tests-dnsexample1-rly9y.dns-backend via the proxy
Mar 28 11:07:29.945: INFO: Service dns-backend: found nonempty answer: Hello World!
Mar 28 11:07:29.946: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config exec dns-backend-wgyx0 --namespace=e2e-tests-dnsexample0-57syr -- python -c
import socket
try:
socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-57syr')
print 'ok'
except:
print 'err''
Mar 28 11:07:30.015: INFO: stderr: ""
Mar 28 11:07:30.015: INFO: stdout: "ok"
Mar 28 11:07:30.016: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-dnsexample0-57syr'
Mar 28 11:07:30.058: INFO: stderr: ""
Mar 28 11:07:30.058: INFO: stdout: "pod \"dns-frontend\" created"
Mar 28 11:07:30.058: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-dnsexample1-rly9y'
Mar 28 11:07:30.126: INFO: stderr: ""
Mar 28 11:07:30.126: INFO: stdout: "pod \"dns-frontend\" created"
Mar 28 11:07:30.126: INFO: Waiting up to 5m0s for pod dns-frontend status to be !pending
Mar 28 11:07:30.128: INFO: Waiting for pod dns-frontend in namespace 'e2e-tests-dnsexample0-57syr' status to be '!pending'(found phase: "Pending", readiness: false) (2.191011ms elapsed)
Mar 28 11:07:32.130: INFO: Waiting for pod dns-frontend in namespace 'e2e-tests-dnsexample0-57syr' status to be '!pending'(found phase: "Pending", readiness: false) (2.003496757s elapsed)
Mar 28 11:07:34.131: INFO: Waiting for pod dns-frontend in namespace 'e2e-tests-dnsexample0-57syr' status to be '!pending'(found phase: "Pending", readiness: false) (4.00485828s elapsed)
Mar 28 11:07:36.132: INFO: Waiting for pod dns-frontend in namespace 'e2e-tests-dnsexample0-57syr' status to be '!pending'(found phase: "Pending", readiness: false) (6.006176584s elapsed)
Mar 28 11:07:38.134: INFO: Waiting for pod dns-frontend in namespace 'e2e-tests-dnsexample0-57syr' status to be '!pending'(found phase: "Pending", readiness: false) (8.007485876s elapsed)
Mar 28 11:07:40.135: INFO: Saw pod 'dns-frontend' in namespace 'e2e-tests-dnsexample0-57syr' out of pending state (found '"Succeeded"')
Mar 28 11:07:40.135: INFO: Waiting up to 5m0s for pod dns-frontend status to be !pending
Mar 28 11:07:40.137: INFO: Saw pod 'dns-frontend' in namespace 'e2e-tests-dnsexample1-rly9y' out of pending state (found '"Succeeded"')
Mar 28 11:07:40.137: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config log dns-frontend dns-frontend --namespace=e2e-tests-dnsexample0-57syr'
Mar 28 11:07:40.151: INFO: stderr: ""
Mar 28 11:07:40.151: INFO: stdout: "10.0.0.98\nSend request to: http://dns-backend.e2e-tests-dnsexample0-57syr.svc.cluster.local:8000\n<Response [200]>\nHello World!"
Mar 28 11:07:40.151: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config log dns-frontend dns-frontend --namespace=e2e-tests-dnsexample1-rly9y'
Mar 28 11:07:40.165: INFO: stderr: ""
Mar 28 11:07:40.165: INFO: stdout: "10.0.0.98\nSend request to: http://dns-backend.e2e-tests-dnsexample0-57syr.svc.cluster.local:8000\n<Response [200]>\nHello World!"
[AfterEach] ClusterDns [Feature:Example]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:07:40.165: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-cluster-dns-r3vou" for this suite.
STEP: Destroying namespace "e2e-tests-dnsexample0-57syr" for this suite.
STEP: Destroying namespace "e2e-tests-dnsexample1-rly9y" for this suite.
• [SLOW TEST:136.556 seconds]
ClusterDns [Feature:Example]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/example_cluster_dns.go:151
should create pod that uses dns [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/example_cluster_dns.go:150
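The resolvability check above can be repeated by hand with the same pattern the suite uses: exec into one of the backend pods and resolve the service's DNS name (the short form used here, or the full dns-backend.<namespace>.svc.cluster.local form that the frontend logs show). Pod and namespace names are the ones from this run and will differ elsewhere:

    kubectl exec dns-backend-wgyx0 --namespace=e2e-tests-dnsexample0-57syr -- python -c "
    import socket
    try:
        socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-57syr')
        print 'ok'
    except:
        print 'err'
    "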
------------------------------
S
------------------------------
Docker Containers
should be able to override the image's default arguments (docker cmd) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:50
[BeforeEach] Docker Containers
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:08:55.194: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:08:55.197: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-n5ssh
Mar 28 11:08:55.198: INFO: Get service account default in ns e2e-tests-containers-n5ssh failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:08:57.199: INFO: Service account default in ns e2e-tests-containers-n5ssh with secrets found. (2.002128446s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:08:57.199: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-n5ssh
Mar 28 11:08:57.200: INFO: Service account default in ns e2e-tests-containers-n5ssh with secrets found. (637.292µs)
[BeforeEach] Docker Containers
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:35
[It] should be able to override the image's default arguments (docker cmd) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:50
STEP: Creating a pod to test override arguments
Mar 28 11:08:57.202: INFO: Waiting up to 5m0s for pod client-containers-23b4cec8-f510-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:08:57.205: INFO: No Status.Info for container 'test-container' in pod 'client-containers-23b4cec8-f510-11e5-b1e1-0862662cf845' yet
Mar 28 11:08:57.205: INFO: Waiting for pod client-containers-23b4cec8-f510-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-containers-n5ssh' status to be 'success or failure'(found phase: "Pending", readiness: false) (3.680095ms elapsed)
Mar 28 11:08:59.209: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-23b4cec8-f510-11e5-b1e1-0862662cf845' in namespace 'e2e-tests-containers-n5ssh' so far
Mar 28 11:08:59.209: INFO: Waiting for pod client-containers-23b4cec8-f510-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-containers-n5ssh' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.007545187s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod client-containers-23b4cec8-f510-11e5-b1e1-0862662cf845 container test-container: <nil>
STEP: Successfully fetched pod logs:[/ep override arguments]
[AfterEach] Docker Containers
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:09:01.240: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-n5ssh" for this suite.
• [SLOW TEST:11.060 seconds]
Docker Containers
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:72
should be able to override the image's default arguments (docker cmd) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:50
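What this spec checks is the Kubernetes mapping of container fields onto Docker: args replaces the image's default CMD while the image ENTRYPOINT (/ep, per the fetched logs above) is kept. A sketch of the two fields, using busybox as a placeholder and setting both for clarity since busybox has no entrypoint of its own:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: args-demo                     # illustrative
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox                    # placeholder
        command: ["echo"]                 # maps to the Docker ENTRYPOINT
        args: ["override", "arguments"]   # maps to the Docker CMD, which the test overrides
    EOF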
------------------------------
SSSS
------------------------------
EmptyDir volumes
should support (root,0644,default) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:92
[BeforeEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:09:06.254: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:09:06.258: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-i9l65
Mar 28 11:09:06.260: INFO: Get service account default in ns e2e-tests-emptydir-i9l65 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:09:08.261: INFO: Service account default in ns e2e-tests-emptydir-i9l65 with secrets found. (2.003077726s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:09:08.261: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-i9l65
Mar 28 11:09:08.262: INFO: Service account default in ns e2e-tests-emptydir-i9l65 with secrets found. (587.168µs)
[It] should support (root,0644,default) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:92
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 28 11:09:08.264: INFO: Waiting up to 5m0s for pod pod-2a4cc099-f510-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:09:08.265: INFO: No Status.Info for container 'test-container' in pod 'pod-2a4cc099-f510-11e5-b1e1-0862662cf845' yet
Mar 28 11:09:08.265: INFO: Waiting for pod pod-2a4cc099-f510-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-emptydir-i9l65' status to be 'success or failure'(found phase: "Pending", readiness: false) (844.786µs elapsed)
Mar 28 11:09:10.266: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-2a4cc099-f510-11e5-b1e1-0862662cf845' in namespace 'e2e-tests-emptydir-i9l65' so far
Mar 28 11:09:10.266: INFO: Waiting for pod pod-2a4cc099-f510-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-emptydir-i9l65' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.00238655s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-2a4cc099-f510-11e5-b1e1-0862662cf845 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-r--r--
[AfterEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:09:12.298: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-i9l65" for this suite.
• [SLOW TEST:11.060 seconds]
EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:113
should support (root,0644,default) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:92
------------------------------
S
------------------------------
EmptyDir volumes
should support (non-root,0777,tmpfs) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:84
[BeforeEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:09:17.314: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:09:17.319: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-kschx
Mar 28 11:09:17.321: INFO: Get service account default in ns e2e-tests-emptydir-kschx failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:09:19.323: INFO: Service account default in ns e2e-tests-emptydir-kschx with secrets found. (2.004119468s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:09:19.323: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-kschx
Mar 28 11:09:19.325: INFO: Service account default in ns e2e-tests-emptydir-kschx with secrets found. (1.77414ms)
[It] should support (non-root,0777,tmpfs) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:84
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 28 11:09:19.330: INFO: Waiting up to 5m0s for pod pod-30e4e5b1-f510-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:09:19.332: INFO: No Status.Info for container 'test-container' in pod 'pod-30e4e5b1-f510-11e5-b1e1-0862662cf845' yet
Mar 28 11:09:19.332: INFO: Waiting for pod pod-30e4e5b1-f510-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-emptydir-kschx' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.961385ms elapsed)
Mar 28 11:09:21.334: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-30e4e5b1-f510-11e5-b1e1-0862662cf845' in namespace 'e2e-tests-emptydir-kschx' so far
Mar 28 11:09:21.334: INFO: Waiting for pod pod-30e4e5b1-f510-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-emptydir-kschx' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.003452714s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-30e4e5b1-f510-11e5-b1e1-0862662cf845 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rwxrwxrwx
[AfterEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:09:23.351: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-kschx" for this suite.
• [SLOW TEST:11.046 seconds]
EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:113
should support (non-root,0777,tmpfs) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:84
------------------------------
SSS
------------------------------
Secrets
should be consumable from pods in env vars [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/secrets.go:158
[BeforeEach] Secrets
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:09:28.361: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:09:28.362: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-secrets-dhfhw
Mar 28 11:09:28.363: INFO: Get service account default in ns e2e-tests-secrets-dhfhw failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:09:30.364: INFO: Service account default in ns e2e-tests-secrets-dhfhw with secrets found. (2.001709904s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:09:30.364: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-secrets-dhfhw
Mar 28 11:09:30.365: INFO: Service account default in ns e2e-tests-secrets-dhfhw with secrets found. (668.348µs)
[It] should be consumable from pods in env vars [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/secrets.go:158
STEP: Creating secret with name secret-test-377964a7-f510-11e5-b1e1-0862662cf845
STEP: Creating a pod to test consume secrets
Mar 28 11:09:30.378: INFO: Waiting up to 5m0s for pod pod-secrets-3779917f-f510-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:09:30.379: INFO: No Status.Info for container 'secret-env-test' in pod 'pod-secrets-3779917f-f510-11e5-b1e1-0862662cf845' yet
Mar 28 11:09:30.379: INFO: Waiting for pod pod-secrets-3779917f-f510-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-secrets-dhfhw' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.166146ms elapsed)
Mar 28 11:09:32.381: INFO: Nil State.Terminated for container 'secret-env-test' in pod 'pod-secrets-3779917f-f510-11e5-b1e1-0862662cf845' in namespace 'e2e-tests-secrets-dhfhw' so far
Mar 28 11:09:32.381: INFO: Waiting for pod pod-secrets-3779917f-f510-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-secrets-dhfhw' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.003428179s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-secrets-3779917f-f510-11e5-b1e1-0862662cf845 container secret-env-test: <nil>
STEP: Successfully fetched pod logs:KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.0.0.1:443
HOSTNAME=pod-secrets-3779917f-f510-11e5-b1e1-0862662cf845
SHLVL=1
HOME=/root
SECRET_DATA=value-1
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
KUBERNETES_SERVICE_HOST=10.0.0.1
STEP: Cleaning up the secret
[AfterEach] Secrets
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:09:34.406: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-dhfhw" for this suite.
• [SLOW TEST:11.062 seconds]
Secrets
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/secrets.go:159
should be consumable from pods in env vars [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/secrets.go:158
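The SECRET_DATA=value-1 line in the fetched environment above is the point of this spec: a Secret key injected through env[].valueFrom.secretKeyRef. A minimal sketch with illustrative names and a busybox placeholder:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Secret
    metadata:
      name: secret-demo                   # illustrative
    data:
      data-1: dmFsdWUtMQ==                # base64 of "value-1"
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-env-demo               # illustrative
    spec:
      restartPolicy: Never
      containers:
      - name: secret-env-test
        image: busybox                    # placeholder
        command: ["sh", "-c", "env | grep SECRET_DATA"]
        env:
        - name: SECRET_DATA
          valueFrom:
            secretKeyRef:
              name: secret-demo
              key: data-1
    EOF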
------------------------------
Pods
should get a host IP [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:226
[BeforeEach] Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:09:39.423: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:09:39.425: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-irrt6
Mar 28 11:09:39.425: INFO: Get service account default in ns e2e-tests-pods-irrt6 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:09:41.428: INFO: Service account default in ns e2e-tests-pods-irrt6 with secrets found. (2.002985912s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:09:41.428: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-irrt6
Mar 28 11:09:41.429: INFO: Service account default in ns e2e-tests-pods-irrt6 with secrets found. (1.378634ms)
[It] should get a host IP [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:226
STEP: creating pod
STEP: ensuring that pod is running and has a hostIP
Mar 28 11:09:41.433: INFO: Waiting up to 5m0s for pod pod-hostip-3e11a9d5-f510-11e5-b1e1-0862662cf845 status to be running
Mar 28 11:09:41.434: INFO: Waiting for pod pod-hostip-3e11a9d5-f510-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-pods-irrt6' status to be 'running'(found phase: "Pending", readiness: false) (1.605288ms elapsed)
Mar 28 11:09:43.436: INFO: Found pod 'pod-hostip-3e11a9d5-f510-11e5-b1e1-0862662cf845' on node '127.0.0.1'
Mar 28 11:09:43.437: INFO: Pod pod-hostip-3e11a9d5-f510-11e5-b1e1-0862662cf845 has hostIP: 127.0.0.1
[AfterEach] Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:09:43.449: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-irrt6" for this suite.
• [SLOW TEST:9.038 seconds]
Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:1222
should get a host IP [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:226
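The hostIP asserted on here is plain pod status and can be read with the same go-template style the suite uses elsewhere in this run (the pod and namespace below are the ones just destroyed, so substitute any running pod):

    kubectl get pod pod-hostip-3e11a9d5-f510-11e5-b1e1-0862662cf845 \
        --namespace=e2e-tests-pods-irrt6 \
        -o template --template='{{.status.hostIP}}'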
------------------------------
Downward API volume
should update labels on modification [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:95
[BeforeEach] Downward API volume
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:09:48.462: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:09:48.466: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-8tqkx
Mar 28 11:09:48.468: INFO: Get service account default in ns e2e-tests-downward-api-8tqkx failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:09:50.469: INFO: Service account default in ns e2e-tests-downward-api-8tqkx with secrets found. (2.003134106s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:09:50.469: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-8tqkx
Mar 28 11:09:50.470: INFO: Service account default in ns e2e-tests-downward-api-8tqkx with secrets found. (687.947µs)
[It] should update labels on modification [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:95
STEP: Creating the pod
Mar 28 11:09:50.472: INFO: Waiting up to 5m0s for pod labelsupdate43753604-f510-11e5-b1e1-0862662cf845 status to be running
Mar 28 11:09:50.473: INFO: Waiting for pod labelsupdate43753604-f510-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-downward-api-8tqkx' status to be 'running'(found phase: "Pending", readiness: false) (1.242208ms elapsed)
Mar 28 11:09:52.477: INFO: Waiting for pod labelsupdate43753604-f510-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-downward-api-8tqkx' status to be 'running'(found phase: "Pending", readiness: false) (2.004559962s elapsed)
Mar 28 11:09:54.478: INFO: Found pod 'labelsupdate43753604-f510-11e5-b1e1-0862662cf845' on node '127.0.0.1'
STEP: Deleting the pod
[AfterEach] Downward API volume
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Collecting events from namespace "e2e-tests-downward-api-8tqkx".
Mar 28 11:10:39.495: INFO: At 2016-03-28 11:09:50 -0700 PDT - event for labelsupdate43753604-f510-11e5-b1e1-0862662cf845: {default-scheduler } Scheduled: Successfully assigned labelsupdate43753604-f510-11e5-b1e1-0862662cf845 to 127.0.0.1
Mar 28 11:10:39.495: INFO: At 2016-03-28 11:09:51 -0700 PDT - event for labelsupdate43753604-f510-11e5-b1e1-0862662cf845: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/mounttest:0.6" already present on machine
Mar 28 11:10:39.495: INFO: At 2016-03-28 11:09:51 -0700 PDT - event for labelsupdate43753604-f510-11e5-b1e1-0862662cf845: {kubelet 127.0.0.1} Created: Created container with docker id 24c2c37ebcec
Mar 28 11:10:39.495: INFO: At 2016-03-28 11:09:51 -0700 PDT - event for labelsupdate43753604-f510-11e5-b1e1-0862662cf845: {kubelet 127.0.0.1} Started: Started container with docker id 24c2c37ebcec
Mar 28 11:10:39.498: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 28 11:10:39.498: INFO: k8s-etcd-127.0.0.1 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:07 -0700 PDT }]
Mar 28 11:10:39.498: INFO: k8s-master-127.0.0.1 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:09 -0700 PDT }]
Mar 28 11:10:39.498: INFO: k8s-proxy-127.0.0.1 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:06 -0700 PDT }]
Mar 28 11:10:39.498: INFO: kube-dns-v10-d8w4s 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:51 -0700 PDT }]
Mar 28 11:10:39.498: INFO:
Mar 28 11:10:39.499: INFO:
Logging node info for node 127.0.0.1
Mar 28 11:10:39.501: INFO: Node Info: &{{ } {127.0.0.1 /api/v1/nodes/127.0.0.1 137b6e8b-f50e-11e5-a3fb-0862662cf845 1269 0 2016-03-28 10:54:10 -0700 PDT <nil> <nil> map[kubernetes.io/e2e-0faa8565-f50f-11e5-b1e1-0862662cf845:42 kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1 false} {map[cpu:{4.000 DecimalSI} memory:{16767156224.000 BinarySI} pods:{110.000 DecimalSI}] map[cpu:{4.000 DecimalSI} memory:{16767156224.000 BinarySI} pods:{110.000 DecimalSI}] [{OutOfDisk False 2016-03-28 11:10:39 -0700 PDT 2016-03-28 10:54:10 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-28 11:10:39 -0700 PDT 2016-03-28 10:54:10 -0700 PDT KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {12fd47e6d4c54914a88972c6cc8d81de 44EC9C00-D7DA-11DD-9398-0862662CF845 f0f5728b-196f-47d9-9672-2a11ecdb3d77 4.4.6-300.fc23.x86_64 Debian GNU/Linux 8 (jessie) docker://1.10.3 v1.2.0 v1.2.0} [{[brs-ui:latest localhost:5000/brs-ui:latest] 605145331} {[<none>:<none>] 605144068} {[<none>:<none>] 605138643} {[ci1.brs.rzc.cudaops.com:5000/bn/brs-ui:latest] 613900067} {[kube-build:build-3df9ed65b2] 1628893568} {[<none>:<none>] 1628893569} {[<none>:<none>] 1628893559} {[<none>:<none>] 1627726538} {[<none>:<none>] 1627726538} {[<none>:<none>] 1627726538} {[<none>:<none>] 1628892949} {[localhost:5000/log-indexer-config:latest log-indexer-config:latest] 717448316} {[localhost:5000/log-indexer:latest log-indexer:latest] 383713661} {[<none>:<none>] 310976263} {[<none>:<none>] 321414548} {[<none>:<none>] 383714083} {[ci1.brs.rzc.cudaops.com:5000/bn/curator:latest] 736669380} {[<none>:<none>] 413912127} {[<none>:<none>] 717449212} {[<none>:<none>] 717449200} {[<none>:<none>] 717448730} {[<none>:<none>] 699153894} {[<none>:<none>] 413912971} {[<none>:<none>] 413912975} {[<none>:<none>] 413913005} {[<none>:<none>] 413912873} {[localhost:5000/log-indexers:latest log-indexers:latest] 717448654} {[<none>:<none>] 717448624} {[localhost:5000/log-server:latest log-server:latest] 375265658} {[<none>:<none>] 717448145} {[<none>:<none>] 717448145} {[<none>:<none>] 717448145} {[<none>:<none>] 717448145} {[<none>:<none>] 413906461} {[<none>:<none>] 413905748} {[<none>:<none>] 413905807} {[<none>:<none>] 413905790} {[<none>:<none>] 413905790} {[<none>:<none>] 605680907} {[<none>:<none>] 413905763} {[<none>:<none>] 375258621} {[<none>:<none>] 375258592} {[python:3.5] 689482475} {[php:5.6-apache] 479350972} {[ci1.brs.rzc.cudaops.com:5000/bn/ponyo:latest] 937604426} {[busybox:latest] 1112820} {[gcr.io/google_containers/hyperkube-amd64:v1.2.0] 316607821} {[postgres:9.5] 264519910} {[elasticsearch:2.2] 347050720} {[java:openjdk-8-jre] 310976263} {[ci1.brs.rzc.cudaops.com:5000/bn/notifier:latest] 736471567} {[ci1.brs.rzc.cudaops.com:5000/bn/log-server:latest] 374723147} {[nginx:latest] 190461396} {[registry:2] 165760106} {[ci1.brs.rzc.cudaops.com:5000/bn/postfix:latest] 190134010} {[ci1.brs.rzc.cudaops.com:5000/bn/log-indexer:latest] 413367511} {[ci1.brs.rzc.cudaops.com:5000/bn/kafka:latest] 437445503} {[gcr.io/google_containers/nettest:1.7] 24051275} {[gcr.io/google_containers/kube-cross:v1.4.2-1] 1551078122} {[gcr.io/google_containers/etcd-amd64:2.2.1] 28192476} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/kube2sky:1.12] 24482187} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/example-dns-frontend:v1] 677794775} {[gcr.io/google_containers/example-dns-backend:v1] 675068800} 
{[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/etcd:2.2.1] 28191895} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/skydns:2015-10-13-8c72f8c] 40547562} {[gcr.io/google_containers/pause:2.0] 350164} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/exechealthz:1.0] 7095869} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190122856} {[gcr.io/google_containers/mounttest:0.2] 1752375} {[gcr.io/google_containers/dnsutils:e2e] 141873580} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/serve_hostname:1.1] 4522409} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/nginx:1.7.9] 91641000} {[kubernetes/redis:v1] 145954175} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/busybox:latest] 2429728} {[gcr.io/google_containers/liveness:e2e] 4387474}]}}
Mar 28 11:10:39.501: INFO:
Logging kubelet events for node 127.0.0.1
Mar 28 11:10:39.502: INFO:
Logging pods the kubelet thinks is on node 127.0.0.1
Mar 28 11:10:39.507: INFO: k8s-master-127.0.0.1 started at <nil> (0 container statuses recorded)
Mar 28 11:10:39.507: INFO: k8s-etcd-127.0.0.1 started at <nil> (0 container statuses recorded)
Mar 28 11:10:39.507: INFO: k8s-proxy-127.0.0.1 started at <nil> (0 container statuses recorded)
Mar 28 11:10:39.507: INFO: labelsupdate43753604-f510-11e5-b1e1-0862662cf845 started at <nil> (0 container statuses recorded)
Mar 28 11:10:39.507: INFO: kube-dns-v10-d8w4s started at <nil> (0 container statuses recorded)
Mar 28 11:10:39.565: INFO: ERROR kubelet_docker_errors{operation_type="inspect_container"} => 20 @[0]
Mar 28 11:10:39.565: INFO: ERROR kubelet_docker_errors{operation_type="inspect_image"} => 7 @[0]
Mar 28 11:10:39.565: INFO: ERROR kubelet_docker_errors{operation_type="start_container"} => 2 @[0]
Mar 28 11:10:39.565: INFO: ERROR kubelet_docker_errors{operation_type="stop_container"} => 19 @[0]
Mar 28 11:10:39.565: INFO:
Latency metrics for node 127.0.0.1
Mar 28 11:10:39.565: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:1m31.907034s}
Mar 28 11:10:39.565: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:39.797619s}
Mar 28 11:10:39.565: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.99 Latency:38.072852s}
Mar 28 11:10:39.565: INFO: {Operation:update Method:pod_worker_latency_microseconds Quantile:0.99 Latency:30.39271s}
Mar 28 11:10:39.565: INFO: {Operation:update Method:pod_worker_latency_microseconds Quantile:0.9 Latency:30.358656s}
Mar 28 11:10:39.565: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:30.1036s}
Mar 28 11:10:39.565: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:12.787063s}
Mar 28 11:10:39.565: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-8tqkx" for this suite.
• Failure [56.112 seconds]
Downward API volume
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:133
should update labels on modification [Conformance] [It]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:95
Timed out after 45.000s.
Expected
<string>: Error reading file /etc/labels: open /etc/labels: no such file or directory, retrying
Error reading file /etc/labels: open /etc/labels: no such file or directory, retrying
Error reading file /etc/labels: open /etc/labels: no such file or directory, retrying
Error reading file /etc/labels: open /etc/labels: no such file or directory, retrying
Error reading file /etc/labels: open /etc/labels: no such file or directory, retrying
Error reading file /etc/labels: open /etc/labels: no such file or directory, retrying
Error reading file /etc/labels: open /etc/labels: no such file or directory, retrying
Error reading file /etc/labels: open /etc/labels: no such file or directory, retrying
Error reading file /etc/labels: open /etc/labels: no such file or directory, retrying
Error reading file /etc/labels: open /etc/labels: no such file or directory, retrying
Error reading file /etc/labels: open /etc/labels: no such file or directory, retrying
Error reading file /etc/labels: open /etc/labels: no such file or directory, retrying
Error reading file /etc/labels: open /etc/labels: no such file or directory, retrying
Error reading file /etc/labels: open /etc/labels: no such file or directory, retrying
Error reading file /etc/labels: open /etc/labels: no such file or directory, retrying
Error reading file /etc/labels: open /etc/labels: no such file or directory, retrying
Error reading file /etc/labels: open /etc/labels: no such file or directory, retrying
Error reading file /etc/labels: open /etc/labels: no such file or directory, retrying
Error reading file /etc/labels: open /etc/labels: no such file or directory, retrying
Error reading file /etc/labels: open /etc/labels: no such file or directory, retrying
Error reading file /etc/labels: open /etc/labels: no such file or directory, retrying
Error reading file /etc/labels: open /etc/labels: no such file or directory, retrying
Error reading file /etc/labels: open /etc/labels: no such file or directory, retrying
Error reading file /etc/labels: open /etc/labels: no such file or directory, retrying
to contain substring
<string>: key1="value1"
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:82
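For context on the failure: the spec projects the pod's metadata.labels into the container through a downwardAPI volume (mounted at /etc, so the file is /etc/labels) and then expects key1="value1" to appear after a label update; the retries above show the projected file never turned up on this node. A minimal sketch of that wiring, assuming an illustrative pod name, a busybox placeholder, and the less intrusive mount path /etc/podinfo:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: labels-demo                   # illustrative
      labels:
        key1: value1
    spec:
      containers:
      - name: client-container
        image: busybox                    # placeholder for gcr.io/google_containers/mounttest
        command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: labels                  # projected as /etc/podinfo/labels
            fieldRef:
              fieldPath: metadata.labels
    EOF

Relabelling the pod (e.g. kubectl label pod labels-demo key2=value2) should then show up in the projected file, which is the update this spec timed out waiting for.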
------------------------------
SSSS
------------------------------
Services
should serve a basic endpoint from pods [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:140
[BeforeEach] Services
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:10:44.574: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:10:44.577: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-1feyv
Mar 28 11:10:44.579: INFO: Get service account default in ns e2e-tests-services-1feyv failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:10:46.580: INFO: Service account default in ns e2e-tests-services-1feyv with secrets found. (2.002807645s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:10:46.580: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-1feyv
Mar 28 11:10:46.581: INFO: Service account default in ns e2e-tests-services-1feyv with secrets found. (858.271µs)
[BeforeEach] Services
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:73
Mar 28 11:10:46.581: INFO: >>> testContext.KubeConfig: /root/.kube/config
[It] should serve a basic endpoint from pods [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:140
STEP: creating service endpoint-test2 in namespace e2e-tests-services-1feyv
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-1feyv to expose endpoints map[]
Mar 28 11:10:46.594: INFO: Get endpoints failed (1.491851ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Mar 28 11:10:47.595: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-1feyv exposes endpoints map[] (1.002593313s elapsed)
STEP: creating pod pod1 in namespace e2e-tests-services-1feyv
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-1feyv to expose endpoints map[pod1:[80]]
Mar 28 11:10:49.603: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-1feyv exposes endpoints map[pod1:[80]] (2.00562938s elapsed)
STEP: creating pod pod2 in namespace e2e-tests-services-1feyv
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-1feyv to expose endpoints map[pod1:[80] pod2:[80]]
Mar 28 11:10:51.618: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-1feyv exposes endpoints map[pod1:[80] pod2:[80]] (2.010175069s elapsed)
STEP: deleting pod pod1 in namespace e2e-tests-services-1feyv
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-1feyv to expose endpoints map[pod2:[80]]
Mar 28 11:10:52.626: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-1feyv exposes endpoints map[pod2:[80]] (1.0051941s elapsed)
STEP: deleting pod pod2 in namespace e2e-tests-services-1feyv
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-1feyv to expose endpoints map[]
Mar 28 11:10:53.631: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-1feyv exposes endpoints map[] (1.003140225s elapsed)
[AfterEach] Services
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:10:53.659: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-services-1feyv" for this suite.
• [SLOW TEST:29.100 seconds]
Services
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:902
should serve a basic endpoint from pods [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:140
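The map[...] assertions above are checks against the service's Endpoints object, which tracks the ready pods selected by the service; while such a test is running it can be inspected directly:

    kubectl get endpoints endpoint-test2 --namespace=e2e-tests-services-1feyv -o yaml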
------------------------------
SSS
------------------------------
Port forwarding With a server that expects a client request
should support a client that connects, sends data, and disconnects [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:215
[BeforeEach] Port forwarding
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:11:13.674: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:11:13.675: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-port-forwarding-n6x4d
Mar 28 11:11:13.676: INFO: Get service account default in ns e2e-tests-port-forwarding-n6x4d failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:11:15.677: INFO: Service account default in ns e2e-tests-port-forwarding-n6x4d with secrets found. (2.001469408s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:11:15.677: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-port-forwarding-n6x4d
Mar 28 11:11:15.677: INFO: Service account default in ns e2e-tests-port-forwarding-n6x4d with secrets found. (602.926µs)
[It] should support a client that connects, sends data, and disconnects [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:215
STEP: creating the target pod
Mar 28 11:11:15.679: INFO: Waiting up to 5m0s for pod pfpod status to be running
Mar 28 11:11:15.680: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-n6x4d' status to be 'running'(found phase: "Pending", readiness: false) (1.002405ms elapsed)
Mar 28 11:11:17.681: INFO: Found pod 'pfpod' on node '127.0.0.1'
STEP: Running 'kubectl port-forward'
Mar 28 11:11:17.682: INFO: starting port-forward command and streaming output
Mar 28 11:11:17.682: INFO: Asynchronously running '/bin/kubectl kubectl --kubeconfig=/root/.kube/config port-forward --namespace=e2e-tests-port-forwarding-n6x4d pfpod :80'
Mar 28 11:11:17.682: INFO: reading from `kubectl port-forward` command's stderr
STEP: Dialing the local port
STEP: Sending the expected data to the local port
STEP: Closing the write half of the client's connection
STEP: Reading data from the local port
STEP: Waiting for the target pod to stop running
Mar 28 11:11:18.617: INFO: Waiting up to 30s for pod pfpod status to be no longer running
Mar 28 11:11:18.618: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-n6x4d' status to be 'no longer running'(found phase: "Running", readiness: true) (1.459872ms elapsed)
Mar 28 11:11:20.619: INFO: Found pod 'pfpod' with status 'Succeeded' on node '127.0.0.1'
STEP: Retrieving logs from the target pod
STEP: Verifying logs
STEP: Closing the connection to the local port
[AfterEach] Port forwarding
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:11:20.622: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-port-forwarding-n6x4d" for this suite.
• [SLOW TEST:11.956 seconds]
Port forwarding
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:268
With a server that expects a client request
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:216
should support a client that connects, sends data, and disconnects [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:215
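The forwarding itself uses the same command the test launches asynchronously; with :80, kubectl picks a free local port and reports it. A by-hand sketch (nc is an assumed stand-in for the test's Go dialer, and 54321 is an illustrative local port):

    kubectl port-forward --namespace=e2e-tests-port-forwarding-n6x4d pfpod :80
    # in a second shell, using whatever local port kubectl reported:
    nc 127.0.0.1 54321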
------------------------------
SSS
------------------------------
hostPath
should give a volume the correct mode [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/host_path.go:62
[BeforeEach] hostPath
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:11:25.630: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:11:25.634: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-hostpath-tme3s
Mar 28 11:11:25.636: INFO: Get service account default in ns e2e-tests-hostpath-tme3s failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:11:27.637: INFO: Service account default in ns e2e-tests-hostpath-tme3s with secrets found. (2.003043695s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:11:27.637: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-hostpath-tme3s
Mar 28 11:11:27.638: INFO: Service account default in ns e2e-tests-hostpath-tme3s with secrets found. (684.531µs)
[BeforeEach] hostPath
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/host_path.go:45
[It] should give a volume the correct mode [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/host_path.go:62
STEP: Creating a pod to test hostPath mode
Mar 28 11:11:27.640: INFO: Waiting up to 5m0s for pod pod-host-path-test status to be success or failure
Mar 28 11:11:27.641: INFO: No Status.Info for container 'test-container-1' in pod 'pod-host-path-test' yet
Mar 28 11:11:27.641: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-tme3s' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.081649ms elapsed)
Mar 28 11:11:29.642: INFO: Nil State.Terminated for container 'test-container-1' in pod 'pod-host-path-test' in namespace 'e2e-tests-hostpath-tme3s' so far
Mar 28 11:11:29.642: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-tme3s' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.00253311s elapsed)
STEP: Saw pod success
Mar 28 11:11:31.644: INFO: Waiting up to 5m0s for pod pod-host-path-test status to be success or failure
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-host-path-test container test-container-1: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
mode of file "/test-volume": dtrwxrwxrwx
[AfterEach] hostPath
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:11:31.661: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-tme3s" for this suite.
• [SLOW TEST:11.043 seconds]
hostPath
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/host_path.go:90
should give a volume the correct mode [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/host_path.go:62
------------------------------
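The spec above mounts a hostPath volume and asserts on the mode and filesystem type it sees from inside the container. A hand-run sketch of the same check (pod name, image, and the /tmp/host-path-test directory are placeholders, not the test's generated manifest):

kubectl --kubeconfig=/root/.kube/config create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: host-path-mode-check
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/host-path-test
EOF
kubectl --kubeconfig=/root/.kube/config logs host-path-mode-check    # after the pod completes; compare with the fetched logs above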
ReplicationController
should serve a basic image on each replica with a public image [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/rc.go:37
[BeforeEach] ReplicationController
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:11:36.673: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:11:36.674: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-replication-controller-0lztk
Mar 28 11:11:36.675: INFO: Get service account default in ns e2e-tests-replication-controller-0lztk failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:11:38.676: INFO: Service account default in ns e2e-tests-replication-controller-0lztk with secrets found. (2.001437132s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:11:38.676: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-replication-controller-0lztk
Mar 28 11:11:38.676: INFO: Service account default in ns e2e-tests-replication-controller-0lztk with secrets found. (594.098µs)
[It] should serve a basic image on each replica with a public image [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/rc.go:37
STEP: Creating replication controller my-hostname-basic-83f42aaf-f510-11e5-b1e1-0862662cf845
Mar 28 11:11:38.678: INFO: Pod name my-hostname-basic-83f42aaf-f510-11e5-b1e1-0862662cf845: Found 0 pods out of 2
Mar 28 11:11:43.680: INFO: Pod name my-hostname-basic-83f42aaf-f510-11e5-b1e1-0862662cf845: Found 2 pods out of 2
STEP: Ensuring each pod is running
Mar 28 11:11:43.680: INFO: Waiting up to 5m0s for pod my-hostname-basic-83f42aaf-f510-11e5-b1e1-0862662cf845-d8qwf status to be running
Mar 28 11:11:43.681: INFO: Found pod 'my-hostname-basic-83f42aaf-f510-11e5-b1e1-0862662cf845-d8qwf' on node '127.0.0.1'
Mar 28 11:11:43.681: INFO: Waiting up to 5m0s for pod my-hostname-basic-83f42aaf-f510-11e5-b1e1-0862662cf845-keopt status to be running
Mar 28 11:11:43.682: INFO: Found pod 'my-hostname-basic-83f42aaf-f510-11e5-b1e1-0862662cf845-keopt' on node '127.0.0.1'
STEP: Trying to dial each unique pod
Mar 28 11:11:48.686: INFO: Controller my-hostname-basic-83f42aaf-f510-11e5-b1e1-0862662cf845: Got expected result from replica 1 [my-hostname-basic-83f42aaf-f510-11e5-b1e1-0862662cf845-d8qwf]: "my-hostname-basic-83f42aaf-f510-11e5-b1e1-0862662cf845-d8qwf", 1 of 2 required successes so far
Mar 28 11:11:48.687: INFO: Controller my-hostname-basic-83f42aaf-f510-11e5-b1e1-0862662cf845: Got expected result from replica 2 [my-hostname-basic-83f42aaf-f510-11e5-b1e1-0862662cf845-keopt]: "my-hostname-basic-83f42aaf-f510-11e5-b1e1-0862662cf845-keopt", 2 of 2 required successes so far
STEP: deleting replication controller my-hostname-basic-83f42aaf-f510-11e5-b1e1-0862662cf845 in namespace e2e-tests-replication-controller-0lztk
Mar 28 11:11:50.702: INFO: Deleting RC my-hostname-basic-83f42aaf-f510-11e5-b1e1-0862662cf845 took: 2.012793912s
Mar 28 11:11:50.702: INFO: Terminating RC my-hostname-basic-83f42aaf-f510-11e5-b1e1-0862662cf845 pods took: 38.972µs
[AfterEach] ReplicationController
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:11:50.702: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-0lztk" for this suite.
• [SLOW TEST:34.045 seconds]
ReplicationController
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/rc.go:45
should serve a basic image on each replica with a public image [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/rc.go:37
------------------------------
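The ReplicationController spec creates two replicas of a pod that serves its own hostname and then dials each one. A sketch of an equivalent controller; the test builds its manifest in code, so the serve_hostname image name/tag and port 9376 here are assumptions:

kubectl --kubeconfig=/root/.kube/config create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 2
  selector:
    app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/google_containers/serve_hostname:1.1
        ports:
        - containerPort: 9376
EOF
# each replica should answer a request on 9376 with its own pod name, which is what the dial step checks
kubectl --kubeconfig=/root/.kube/config get pods -l app=my-hostname-basic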
S
------------------------------
EmptyDir volumes
volume on tmpfs should have the correct mode [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:60
[BeforeEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:12:10.718: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:12:10.719: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-pqnt3
Mar 28 11:12:10.720: INFO: Get service account default in ns e2e-tests-emptydir-pqnt3 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:12:12.722: INFO: Service account default in ns e2e-tests-emptydir-pqnt3 with secrets found. (2.003138796s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:12:12.722: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-pqnt3
Mar 28 11:12:12.724: INFO: Service account default in ns e2e-tests-emptydir-pqnt3 with secrets found. (1.918486ms)
[It] volume on tmpfs should have the correct mode [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:60
STEP: Creating a pod to test emptydir volume type on tmpfs
Mar 28 11:12:12.730: INFO: Waiting up to 5m0s for pod pod-983f826a-f510-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:12:12.731: INFO: No Status.Info for container 'test-container' in pod 'pod-983f826a-f510-11e5-b1e1-0862662cf845' yet
Mar 28 11:12:12.731: INFO: Waiting for pod pod-983f826a-f510-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-emptydir-pqnt3' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.685883ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-983f826a-f510-11e5-b1e1-0862662cf845 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
perms of file "/test-volume": -rwxrwxrwx
[AfterEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:12:14.748: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-pqnt3" for this suite.
• [SLOW TEST:9.040 seconds]
EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:113
volume on tmpfs should have the correct mode [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:60
------------------------------
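The EmptyDir spec above is the tmpfs variant: an emptyDir volume with medium Memory, checked for mount type and permissions from inside the container. A sketch (names and image are placeholders):

kubectl --kubeconfig=/root/.kube/config create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-check
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory        # Memory-backed emptyDir is mounted as tmpfs, matching the fetched logs above
EOF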
Pods
should allow activeDeadlineSeconds to be updated [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:550
[BeforeEach] Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:12:19.758: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:12:19.760: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-1y6sz
Mar 28 11:12:19.760: INFO: Get service account default in ns e2e-tests-pods-1y6sz failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:12:21.761: INFO: Service account default in ns e2e-tests-pods-1y6sz with secrets found. (2.001533787s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:12:21.761: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-1y6sz
Mar 28 11:12:21.762: INFO: Service account default in ns e2e-tests-pods-1y6sz with secrets found. (579.498µs)
[It] should allow activeDeadlineSeconds to be updated [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:550
STEP: creating the pod
STEP: submitting the pod to kubernetes
Mar 28 11:12:21.764: INFO: Waiting up to 5m0s for pod pod-update-activedeadlineseconds-9da28303-f510-11e5-b1e1-0862662cf845 status to be running
Mar 28 11:12:21.766: INFO: Waiting for pod pod-update-activedeadlineseconds-9da28303-f510-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-pods-1y6sz' status to be 'running'(found phase: "Pending", readiness: false) (1.953678ms elapsed)
Mar 28 11:12:23.767: INFO: Found pod 'pod-update-activedeadlineseconds-9da28303-f510-11e5-b1e1-0862662cf845' on node '127.0.0.1'
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 28 11:12:24.269: INFO: Conflicting update to pod, re-get and re-update: pods "pod-update-activedeadlineseconds-9da28303-f510-11e5-b1e1-0862662cf845" cannot be updated: the object has been modified; please apply your changes to the latest version and try again
STEP: updating the pod
Mar 28 11:12:24.773: INFO: Successfully updated pod
Mar 28 11:12:24.773: INFO: Waiting up to 5m0s for pod pod-update-activedeadlineseconds-9da28303-f510-11e5-b1e1-0862662cf845 status to be terminated due to deadline exceeded
Mar 28 11:12:24.774: INFO: Waiting for pod pod-update-activedeadlineseconds-9da28303-f510-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-pods-1y6sz' status to be 'terminated due to deadline exceeded'(found phase: "Running", readiness: true) (1.017894ms elapsed)
STEP: deleting the pod
[AfterEach] Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:12:26.788: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-1y6sz" for this suite.
• [SLOW TEST:12.039 seconds]
Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:1222
should allow activeDeadlineSeconds to be updated [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:550
------------------------------
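activeDeadlineSeconds is one of the few pod spec fields that can be changed on a live pod, which is what the spec above relies on: it sets a short deadline on a running pod and waits for it to be terminated with DeadlineExceeded. A hand-run sketch using kubectl patch rather than the test's client-side update loop (pod name and image are placeholders):

kubectl --kubeconfig=/root/.kube/config create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: deadline-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
EOF
# impose a 10s deadline on the already-running pod
kubectl --kubeconfig=/root/.kube/config patch pod deadline-demo -p '{"spec":{"activeDeadlineSeconds":10}}'
# roughly 10s later the pod should be Failed with reason DeadlineExceeded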
Variable Expansion
should allow substituting values in a container's args [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:130
[BeforeEach] Variable Expansion
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:12:31.798: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:12:31.799: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-q7xqr
Mar 28 11:12:31.800: INFO: Get service account default in ns e2e-tests-var-expansion-q7xqr failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:12:33.801: INFO: Service account default in ns e2e-tests-var-expansion-q7xqr with secrets found. (2.001524234s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:12:33.801: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-q7xqr
Mar 28 11:12:33.801: INFO: Service account default in ns e2e-tests-var-expansion-q7xqr with secrets found. (686.981µs)
[It] should allow substituting values in a container's args [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:130
STEP: Creating a pod to test substitution in container's args
Mar 28 11:12:33.803: INFO: Waiting up to 5m0s for pod var-expansion-a4cf9c0c-f510-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:12:33.804: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-a4cf9c0c-f510-11e5-b1e1-0862662cf845' yet
Mar 28 11:12:33.804: INFO: Waiting for pod var-expansion-a4cf9c0c-f510-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-var-expansion-q7xqr' status to be 'success or failure'(found phase: "Pending", readiness: false) (872.84µs elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod var-expansion-a4cf9c0c-f510-11e5-b1e1-0862662cf845 container dapi-container: <nil>
STEP: Successfully fetched pod logs:test-value
[AfterEach] Variable Expansion
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:12:35.821: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-q7xqr" for this suite.
• [SLOW TEST:9.032 seconds]
Variable Expansion
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:131
should allow substituting values in a container's args [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:130
------------------------------
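The expansion spec works because the kubelet resolves $(VAR) references in a container's args against that container's environment before starting it. A sketch of the same shape (pod and variable names are illustrative, not the test's manifest):

kubectl --kubeconfig=/root/.kube/config create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-args-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: TEST_VAR
      value: test-value
    command: ["sh", "-c"]
    args: ["echo $(TEST_VAR)"]    # expanded to "echo test-value" before the shell ever sees it
EOF
kubectl --kubeconfig=/root/.kube/config logs var-expansion-args-demo   # should print test-value, as in the log above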
Variable Expansion
should allow substituting values in a container's command [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:99
[BeforeEach] Variable Expansion
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:12:40.830: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:12:40.831: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-wkpnj
Mar 28 11:12:40.832: INFO: Get service account default in ns e2e-tests-var-expansion-wkpnj failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:12:42.833: INFO: Service account default in ns e2e-tests-var-expansion-wkpnj with secrets found. (2.001635732s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:12:42.833: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-wkpnj
Mar 28 11:12:42.833: INFO: Service account default in ns e2e-tests-var-expansion-wkpnj with secrets found. (589.716µs)
[It] should allow substituting values in a container's command [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:99
STEP: Creating a pod to test substitution in container's command
Mar 28 11:12:42.835: INFO: Waiting up to 5m0s for pod var-expansion-aa31c083-f510-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:12:42.837: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-aa31c083-f510-11e5-b1e1-0862662cf845' yet
Mar 28 11:12:42.837: INFO: Waiting for pod var-expansion-aa31c083-f510-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-var-expansion-wkpnj' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.134818ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod var-expansion-aa31c083-f510-11e5-b1e1-0862662cf845 container dapi-container: <nil>
STEP: Successfully fetched pod logs:test-value
[AfterEach] Variable Expansion
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:12:44.852: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-wkpnj" for this suite.
• [SLOW TEST:9.034 seconds]
Variable Expansion
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:131
should allow substituting values in a container's command [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:99
------------------------------
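The command variant directly above uses the same $(VAR) mechanism, just placed in command rather than args; the only change to the previous sketch would be:

    command: ["sh", "-c", "echo $(TEST_VAR)"]     # substitution applies to command entries exactly as to args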
S
------------------------------
Secrets
should be consumable from pods in volume [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/secrets.go:99
[BeforeEach] Secrets
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:12:49.863: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:12:49.866: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-secrets-uu11d
Mar 28 11:12:49.866: INFO: Get service account default in ns e2e-tests-secrets-uu11d failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:12:51.867: INFO: Service account default in ns e2e-tests-secrets-uu11d with secrets found. (2.001581827s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:12:51.867: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-secrets-uu11d
Mar 28 11:12:51.868: INFO: Service account default in ns e2e-tests-secrets-uu11d with secrets found. (699.634µs)
[It] should be consumable from pods in volume [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/secrets.go:99
STEP: Creating secret with name secret-test-af945619-f510-11e5-b1e1-0862662cf845
STEP: Creating a pod to test consume secrets
Mar 28 11:12:51.879: INFO: Waiting up to 5m0s for pod pod-secrets-af948481-f510-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:12:51.880: INFO: No Status.Info for container 'secret-volume-test' in pod 'pod-secrets-af948481-f510-11e5-b1e1-0862662cf845' yet
Mar 28 11:12:51.880: INFO: Waiting for pod pod-secrets-af948481-f510-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-secrets-uu11d' status to be 'success or failure'(found phase: "Pending", readiness: false) (958.001µs elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-secrets-af948481-f510-11e5-b1e1-0862662cf845 container secret-volume-test: <nil>
STEP: Successfully fetched pod logs:mode of file "/etc/secret-volume/data-1": -r--r--r--
content of file "/etc/secret-volume/data-1": value-1
STEP: Cleaning up the secret
[AfterEach] Secrets
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:12:53.906: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-uu11d" for this suite.
• [SLOW TEST:9.052 seconds]
Secrets
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/secrets.go:159
should be consumable from pods in volume [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/secrets.go:99
------------------------------
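The Secrets spec mounts a Secret as a volume and checks both the file mode (-r--r--r--) and the decoded content. A sketch of the same arrangement (names are placeholders; the data value is base64 of value-1):

kubectl --kubeconfig=/root/.kube/config create -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
data:
  data-1: dmFsdWUtMQ==              # base64("value-1")
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-check
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume/data-1 && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
EOF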
SS
------------------------------
Pods
should be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:664
[BeforeEach] Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:12:58.916: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:12:58.917: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-5nnow
Mar 28 11:12:58.918: INFO: Get service account default in ns e2e-tests-pods-5nnow failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:13:00.921: INFO: Service account default in ns e2e-tests-pods-5nnow with secrets found. (2.003718754s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:13:00.921: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-5nnow
Mar 28 11:13:00.924: INFO: Service account default in ns e2e-tests-pods-5nnow with secrets found. (2.592391ms)
[It] should be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:664
STEP: Creating pod liveness-exec in namespace e2e-tests-pods-5nnow
Mar 28 11:13:00.931: INFO: Waiting up to 5m0s for pod liveness-exec status to be !pending
Mar 28 11:13:00.939: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-5nnow' status to be '!pending'(found phase: "Pending", readiness: false) (8.158995ms elapsed)
Mar 28 11:13:02.941: INFO: Saw pod 'liveness-exec' in namespace 'e2e-tests-pods-5nnow' out of pending state (found '"Running"')
Mar 28 11:13:02.941: INFO: Started pod liveness-exec in namespace e2e-tests-pods-5nnow
STEP: checking the pod's current state and verifying that restartCount is present
Mar 28 11:13:02.942: INFO: Initial restart count of pod liveness-exec is 0
Mar 28 11:13:54.997: INFO: Restart count of pod e2e-tests-pods-5nnow/liveness-exec is now 1 (52.055264621s elapsed)
STEP: deleting the pod
[AfterEach] Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:13:55.013: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-5nnow" for this suite.
• [SLOW TEST:61.106 seconds]
Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:1222
should be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:664
------------------------------
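The liveness-exec spec counts on the kubelet restarting a container whose exec probe starts failing; the restartCount bump at 11:13:54 is that restart. A pod of roughly the shape the spec creates (a sketch, not the exact manifest from pods.go):

kubectl --kubeconfig=/root/.kube/config create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # succeeds for ~30s, then fails once the file is removed
      initialDelaySeconds: 15
EOF
# watch RESTARTS go from 0 to 1 once the probe starts failing
kubectl --kubeconfig=/root/.kube/config get pod liveness-exec -w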
SS
------------------------------
Probing container
with readiness probe should not be ready before initial delay and never restart [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:85
[BeforeEach] Probing container
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:14:00.022: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:14:00.023: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-container-probe-56pqn
Mar 28 11:14:00.024: INFO: Get service account default in ns e2e-tests-container-probe-56pqn failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:14:02.026: INFO: Service account default in ns e2e-tests-container-probe-56pqn with secrets found. (2.003141225s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:14:02.026: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-container-probe-56pqn
Mar 28 11:14:02.029: INFO: Service account default in ns e2e-tests-container-probe-56pqn with secrets found. (2.17045ms)
[BeforeEach] Probing container
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:45
[It] with readiness probe should not be ready before initial delay and never restart [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:85
Mar 28 11:14:04.036: INFO: pod is not yet ready; pod has phase "Pending".
Mar 28 11:14:06.038: INFO: pod is not yet ready; pod has phase "Running".
Mar 28 11:14:08.036: INFO: pod is not yet ready; pod has phase "Running".
Mar 28 11:14:10.036: INFO: pod is not yet ready; pod has phase "Running".
Mar 28 11:14:12.037: INFO: pod is not yet ready; pod has phase "Running".
Mar 28 11:14:14.036: INFO: pod is not yet ready; pod has phase "Running".
Mar 28 11:14:16.036: INFO: pod is not yet ready; pod has phase "Running".
Mar 28 11:14:18.036: INFO: pod is not yet ready; pod has phase "Running".
Mar 28 11:14:20.036: INFO: pod is not yet ready; pod has phase "Running".
Mar 28 11:14:22.036: INFO: pod is not yet ready; pod has phase "Running".
Mar 28 11:14:24.036: INFO: pod is not yet ready; pod has phase "Running".
Mar 28 11:14:26.036: INFO: pod is not yet ready; pod has phase "Running".
Mar 28 11:14:28.036: INFO: pod is not yet ready; pod has phase "Running".
Mar 28 11:14:30.036: INFO: pod is not yet ready; pod has phase "Running".
Mar 28 11:14:32.036: INFO: pod is not yet ready; pod has phase "Running".
Mar 28 11:14:34.037: INFO: pod is not yet ready; pod has phase "Running".
Mar 28 11:14:36.036: INFO: pod is not yet ready; pod has phase "Running".
Mar 28 11:14:38.036: INFO: pod is not yet ready; pod has phase "Running".
Mar 28 11:14:40.036: INFO: pod is not yet ready; pod has phase "Running".
Mar 28 11:14:42.036: INFO: pod is not yet ready; pod has phase "Running".
Mar 28 11:14:44.041: INFO: Container started at 2016-03-28 11:14:03 -0700 PDT, pod became ready at 2016-03-28 11:14:42 -0700 PDT
[AfterEach] Probing container
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:14:44.041: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-56pqn" for this suite.
• [SLOW TEST:64.030 seconds]
Probing container
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:112
with readiness probe should not be ready before initial delay and never restart [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:85
------------------------------
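Here the container is healthy the whole time; the point is that a readiness probe with a long initial delay keeps the pod Running-but-not-Ready until the delay has passed, and never causes a restart. A sketch with an exec probe and a 30s delay (names, image and timings are illustrative):

kubectl --kubeconfig=/root/.kube/config create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "touch /tmp/ready; sleep 600"]
    readinessProbe:
      exec:
        command: ["cat", "/tmp/ready"]
      initialDelaySeconds: 30      # pod reports not-ready for ~30s even though the container is fine
EOF
# READY should flip from 0/1 to 1/1 after ~30s while RESTARTS stays 0
kubectl --kubeconfig=/root/.kube/config get pod readiness-delay-demo -w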
SSSS
------------------------------
Pods
should be restarted with a /healthz http liveness probe [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:719
[BeforeEach] Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:15:04.051: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:15:04.053: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-nucf7
Mar 28 11:15:04.054: INFO: Get service account default in ns e2e-tests-pods-nucf7 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:15:06.055: INFO: Service account default in ns e2e-tests-pods-nucf7 with secrets found. (2.001978885s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:15:06.055: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-nucf7
Mar 28 11:15:06.056: INFO: Service account default in ns e2e-tests-pods-nucf7 with secrets found. (1.039374ms)
[It] should be restarted with a /healthz http liveness probe [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:719
STEP: Creating pod liveness-http in namespace e2e-tests-pods-nucf7
Mar 28 11:15:06.059: INFO: Waiting up to 5m0s for pod liveness-http status to be !pending
Mar 28 11:15:06.060: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-nucf7' status to be '!pending'(found phase: "Pending", readiness: false) (1.059804ms elapsed)
Mar 28 11:15:08.062: INFO: Saw pod 'liveness-http' in namespace 'e2e-tests-pods-nucf7' out of pending state (found '"Running"')
Mar 28 11:15:08.062: INFO: Started pod liveness-http in namespace e2e-tests-pods-nucf7
STEP: checking the pod's current state and verifying that restartCount is present
Mar 28 11:15:08.063: INFO: Initial restart count of pod liveness-http is 0
Mar 28 11:15:28.090: INFO: Restart count of pod e2e-tests-pods-nucf7/liveness-http is now 1 (20.027401128s elapsed)
STEP: deleting the pod
[AfterEach] Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:15:28.107: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-nucf7" for this suite.
• [SLOW TEST:29.067 seconds]
Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:1222
should be restarted with a /healthz http liveness probe [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:719
------------------------------
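Same idea as the exec probe above, but driven by an httpGet probe against /healthz. A sketch using the stock liveness example server (treat the image name as an assumption; any server that starts failing /healthz after a while will do):

kubectl --kubeconfig=/root/.kube/config create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: gcr.io/google_containers/liveness   # serves /healthz OK briefly, then returns errors
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
EOF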
SS
------------------------------
Kubectl client Proxy server
should support --unix-socket=/path [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1143
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:15:33.119: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:15:33.122: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-x8782
Mar 28 11:15:33.123: INFO: Get service account default in ns e2e-tests-kubectl-x8782 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:15:35.125: INFO: Service account default in ns e2e-tests-kubectl-x8782 with secrets found. (2.002892176s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:15:35.125: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-x8782
Mar 28 11:15:35.126: INFO: Service account default in ns e2e-tests-kubectl-x8782 with secrets found. (1.310894ms)
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:120
[It] should support --unix-socket=/path [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1143
STEP: Starting the proxy
Mar 28 11:15:35.126: INFO: Asynchronously running '/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix129208507/test'
STEP: retrieving proxy /api/ output
[AfterEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:15:35.139: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-x8782" for this suite.
• [SLOW TEST:7.025 seconds]
Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1145
Proxy server
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1144
should support --unix-socket=/path [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1143
------------------------------
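The proxy spec starts kubectl proxy on a unix socket and fetches /api/ through it. Done by hand it looks roughly like this (the socket path is arbitrary, and this assumes a curl new enough to support --unix-socket):

kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy.sock & PF_PID=$!
sleep 1
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill $PF_PID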
Kubectl client Kubectl run rc
should create an rc from an image [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:974
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:15:40.144: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:15:40.146: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-zjvrc
Mar 28 11:15:40.146: INFO: Get service account default in ns e2e-tests-kubectl-zjvrc failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:15:42.147: INFO: Service account default in ns e2e-tests-kubectl-zjvrc with secrets found. (2.001531591s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:15:42.147: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-zjvrc
Mar 28 11:15:42.148: INFO: Service account default in ns e2e-tests-kubectl-zjvrc with secrets found. (624.644µs)
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:120
[BeforeEach] Kubectl run rc
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:930
[It] should create an rc from an image [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:974
STEP: running the image gcr.io/google_containers/nginx:1.7.9
Mar 28 11:15:42.148: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=gcr.io/google_containers/nginx:1.7.9 --generator=run/v1 --namespace=e2e-tests-kubectl-zjvrc'
Mar 28 11:15:42.160: INFO: stderr: ""
Mar 28 11:15:42.160: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" created"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Mar 28 11:15:44.163: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-0w02a]
Mar 28 11:15:44.163: INFO: Waiting up to 5m0s for pod e2e-test-nginx-rc-0w02a status to be running and ready
Mar 28 11:15:44.164: INFO: Waiting for pod e2e-test-nginx-rc-0w02a in namespace 'e2e-tests-kubectl-zjvrc' status to be 'running and ready'(found phase: "Pending", readiness: false) (952.281µs elapsed)
Mar 28 11:15:46.167: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-0w02a]
Mar 28 11:15:46.167: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-zjvrc'
Mar 28 11:15:46.186: INFO: stderr: ""
[AfterEach] Kubectl run rc
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:934
Mar 28 11:15:46.187: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-zjvrc'
Mar 28 11:15:48.206: INFO: stderr: ""
Mar 28 11:15:48.206: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted"
[AfterEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:15:48.206: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zjvrc" for this suite.
• [SLOW TEST:28.074 seconds]
Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1145
Kubectl run rc
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:975
should create an rc from an image [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:974
------------------------------
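The run-rc spec is a thin wrapper around the kubectl commands visible in the log; the same sequence can be replayed directly (namespace flag omitted here):

kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=gcr.io/google_containers/nginx:1.7.9 --generator=run/v1
kubectl --kubeconfig=/root/.kube/config get rc e2e-test-nginx-rc                 # the rc the generator created
kubectl --kubeconfig=/root/.kube/config get pods -l run=e2e-test-nginx-rc        # the pod it controls
kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc                # logs resolved through the rc
kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc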
SSSSSSS
------------------------------
Variable Expansion
should allow composing env vars into new env vars [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:69
[BeforeEach] Variable Expansion
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:16:08.218: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:16:08.221: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-4cszr
Mar 28 11:16:08.223: INFO: Get service account default in ns e2e-tests-var-expansion-4cszr failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:16:10.224: INFO: Service account default in ns e2e-tests-var-expansion-4cszr with secrets found. (2.002947674s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:16:10.224: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-4cszr
Mar 28 11:16:10.225: INFO: Service account default in ns e2e-tests-var-expansion-4cszr with secrets found. (710.124µs)
[It] should allow composing env vars into new env vars [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:69
STEP: Creating a pod to test env composition
Mar 28 11:16:10.227: INFO: Waiting up to 5m0s for pod var-expansion-25cf3787-f511-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:16:10.228: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-25cf3787-f511-11e5-b1e1-0862662cf845' yet
Mar 28 11:16:10.228: INFO: Waiting for pod var-expansion-25cf3787-f511-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-var-expansion-4cszr' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.507259ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod var-expansion-25cf3787-f511-11e5-b1e1-0862662cf845 container dapi-container: <nil>
STEP: Successfully fetched pod logs:KUBERNETES_PORT=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT=443
FOOBAR=foo-value;;bar-value
HOSTNAME=var-expansion-25cf3787-f511-11e5-b1e1-0862662cf845
SHLVL=1
HOME=/root
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
BAR=bar-value
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
FOO=foo-value
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
KUBERNETES_SERVICE_HOST=10.0.0.1
[AfterEach] Variable Expansion
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:16:12.245: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-4cszr" for this suite.
• [SLOW TEST:9.037 seconds]
Variable Expansion
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:131
should allow composing env vars into new env vars [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:69
------------------------------
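The FOOBAR=foo-value;;bar-value line in the fetched logs comes from an env entry defined in terms of two earlier ones; $(VAR) references in env values are resolved against variables declared before them. A fragment of the relevant stanza (illustrative; the surrounding pod spec is the same shape as the expansion sketches earlier):

    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"    # composed from the two entries above, yielding foo-value;;bar-value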
Kubectl client Update Demo
should do a rolling update of a replication controller [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:159
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:16:17.255: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:16:17.257: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-ckqa1
Mar 28 11:16:17.257: INFO: Get service account default in ns e2e-tests-kubectl-ckqa1 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:16:19.258: INFO: Service account default in ns e2e-tests-kubectl-ckqa1 with secrets found. (2.001566415s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:16:19.258: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-ckqa1
Mar 28 11:16:19.259: INFO: Service account default in ns e2e-tests-kubectl-ckqa1 with secrets found. (602.597µs)
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:120
[BeforeEach] Update Demo
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:128
[It] should do a rolling update of a replication controller [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:159
STEP: creating the initial replication controller
Mar 28 11:16:19.259: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config create -f ../../docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-ckqa1'
Mar 28 11:16:19.298: INFO: stderr: ""
Mar 28 11:16:19.298: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" created"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 28 11:16:19.298: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-ckqa1'
Mar 28 11:16:19.311: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:16:19.311: INFO: stdout: "update-demo-nautilus-rh600 update-demo-nautilus-s614a"
Mar 28 11:16:19.311: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rh600 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-ckqa1'
Mar 28 11:16:19.326: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:16:19.326: INFO: stdout: ""
Mar 28 11:16:19.326: INFO: update-demo-nautilus-rh600 is created but not running
Mar 28 11:16:24.326: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-ckqa1'
Mar 28 11:16:24.340: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:16:24.340: INFO: stdout: "update-demo-nautilus-rh600 update-demo-nautilus-s614a"
Mar 28 11:16:24.340: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rh600 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-ckqa1'
Mar 28 11:16:24.352: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:16:24.352: INFO: stdout: "true"
Mar 28 11:16:24.352: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rh600 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-ckqa1'
Mar 28 11:16:24.364: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:16:24.364: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Mar 28 11:16:24.364: INFO: validating pod update-demo-nautilus-rh600
Mar 28 11:16:24.366: INFO: got data: {
"image": "nautilus.jpg"
}
Mar 28 11:16:24.366: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 28 11:16:24.366: INFO: update-demo-nautilus-rh600 is verified up and running
Mar 28 11:16:24.366: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s614a -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-ckqa1'
Mar 28 11:16:24.378: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:16:24.378: INFO: stdout: "true"
Mar 28 11:16:24.378: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s614a -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-ckqa1'
Mar 28 11:16:24.390: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:16:24.390: INFO: stdout: "gcr.io/google_containers/update-demo:nautilus"
Mar 28 11:16:24.390: INFO: validating pod update-demo-nautilus-s614a
Mar 28 11:16:24.392: INFO: got data: {
"image": "nautilus.jpg"
}
Mar 28 11:16:24.392: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 28 11:16:24.392: INFO: update-demo-nautilus-s614a is verified up and running
STEP: rolling-update to new replication controller
Mar 28 11:16:24.392: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f ../../docs/user-guide/update-demo/kitten-rc.yaml --namespace=e2e-tests-kubectl-ckqa1'
Mar 28 11:16:54.510: INFO: stderr: ""
Mar 28 11:16:54.510: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting update-demo-nautilus\nreplicationcontroller \"update-demo-nautilus\" rolling updated to \"update-demo-kitten\""
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 28 11:16:54.510: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-ckqa1'
Mar 28 11:16:54.523: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:16:54.523: INFO: stdout: "update-demo-kitten-nz48f update-demo-kitten-pso3r"
Mar 28 11:16:54.523: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-nz48f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-ckqa1'
Mar 28 11:16:54.535: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:16:54.535: INFO: stdout: "true"
Mar 28 11:16:54.535: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-nz48f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-ckqa1'
Mar 28 11:16:54.547: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:16:54.547: INFO: stdout: "gcr.io/google_containers/update-demo:kitten"
Mar 28 11:16:54.547: INFO: validating pod update-demo-kitten-nz48f
Mar 28 11:16:54.549: INFO: got data: {
"image": "kitten.jpg"
}
Mar 28 11:16:54.549: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Mar 28 11:16:54.549: INFO: update-demo-kitten-nz48f is verified up and running
Mar 28 11:16:54.549: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-pso3r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-ckqa1'
Mar 28 11:16:54.561: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:16:54.561: INFO: stdout: "true"
Mar 28 11:16:54.561: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-pso3r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-ckqa1'
Mar 28 11:16:54.573: INFO: stderr: "Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release\n"
Mar 28 11:16:54.573: INFO: stdout: "gcr.io/google_containers/update-demo:kitten"
Mar 28 11:16:54.573: INFO: validating pod update-demo-kitten-pso3r
Mar 28 11:16:54.575: INFO: got data: {
"image": "kitten.jpg"
}
Mar 28 11:16:54.575: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Mar 28 11:16:54.575: INFO: update-demo-kitten-pso3r is verified up and running
[AfterEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:16:54.575: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ckqa1" for this suite.
• [SLOW TEST:57.325 seconds]
Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1145
Update Demo
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:160
should do a rolling update of a replication controller [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:159
------------------------------
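The Update Demo spec shells out to the same kubectl calls shown in the log; run from test/e2e in a kubernetes checkout (hence the ../../ prefixes), the manual equivalent is:

kubectl --kubeconfig=/root/.kube/config create -f ../../docs/user-guide/update-demo/nautilus-rc.yaml
# replace the nautilus pods one at a time with kitten pods; both rcs select name=update-demo
kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f ../../docs/user-guide/update-demo/kitten-rc.yaml
kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo    # should list only update-demo-kitten-* pods afterwards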
SSSS
------------------------------
Port forwarding With a server that expects no client request
should support a client that connects, sends no data, and disconnects [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:266
[BeforeEach] Port forwarding
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:17:14.580: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:17:14.582: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-port-forwarding-8wrte
Mar 28 11:17:14.582: INFO: Get service account default in ns e2e-tests-port-forwarding-8wrte failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:17:16.585: INFO: Service account default in ns e2e-tests-port-forwarding-8wrte with secrets found. (2.003793204s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:17:16.586: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-port-forwarding-8wrte
Mar 28 11:17:16.588: INFO: Service account default in ns e2e-tests-port-forwarding-8wrte with secrets found. (2.563067ms)
[It] should support a client that connects, sends no data, and disconnects [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:266
STEP: creating the target pod
Mar 28 11:17:16.596: INFO: Waiting up to 5m0s for pod pfpod status to be running
Mar 28 11:17:16.599: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-8wrte' status to be 'running'(found phase: "Pending", readiness: false) (3.389705ms elapsed)
Mar 28 11:17:18.601: INFO: Found pod 'pfpod' on node '127.0.0.1'
STEP: Running 'kubectl port-forward'
Mar 28 11:17:18.601: INFO: starting port-forward command and streaming output
Mar 28 11:17:18.601: INFO: Asynchronously running '/bin/kubectl kubectl --kubeconfig=/root/.kube/config port-forward --namespace=e2e-tests-port-forwarding-8wrte pfpod :80'
Mar 28 11:17:18.602: INFO: reading from `kubectl port-forward` command's stderr
STEP: Dialing the local port
STEP: Reading data from the local port
STEP: Waiting for the target pod to stop running
Mar 28 11:17:20.035: INFO: Waiting up to 30s for pod pfpod status to be no longer running
Mar 28 11:17:20.036: INFO: Found pod 'pfpod' with status 'Succeeded' on node '127.0.0.1'
STEP: Retrieving logs from the target pod
STEP: Verifying logs
STEP: Closing the connection to the local port
[AfterEach] Port forwarding
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:17:20.038: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-port-forwarding-8wrte" for this suite.
• [SLOW TEST:10.463 seconds]
Port forwarding
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:268
With a server that expects no client request
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:267
should support a client that connects, sends no data, and disconnects [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:266
------------------------------
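The no-data variant differs from the earlier port-forward sketch only in the client's behaviour: it connects, sends nothing, and disconnects. A sketch (local port and the use of nc are arbitrary):

kubectl --kubeconfig=/root/.kube/config port-forward pfpod 8080:80 & PF_PID=$!
sleep 2
nc -w 1 127.0.0.1 8080 < /dev/null     # open the connection, send nothing, then let it close
kill $PF_PID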
SS
------------------------------
Proxy version v1
should proxy through a service and a pod [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:244
[BeforeEach] version v1
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:17:25.043: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:17:25.044: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-xck2s
Mar 28 11:17:25.045: INFO: Get service account default in ns e2e-tests-proxy-xck2s failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:17:27.046: INFO: Service account default in ns e2e-tests-proxy-xck2s with secrets found. (2.001844476s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:17:27.046: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-xck2s
Mar 28 11:17:27.047: INFO: Service account default in ns e2e-tests-proxy-xck2s with secrets found. (698.289µs)
[It] should proxy through a service and a pod [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:244
STEP: creating replication controller proxy-service-3xqmo in namespace e2e-tests-proxy-xck2s
Mar 28 11:17:27.076: INFO: Created replication controller with name: proxy-service-3xqmo, namespace: e2e-tests-proxy-xck2s, replica count: 1
Mar 28 11:17:28.076: INFO: proxy-service-3xqmo Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 28 11:17:29.077: INFO: proxy-service-3xqmo Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
Mar 28 11:17:30.077: INFO: proxy-service-3xqmo Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
Mar 28 11:17:31.077: INFO: proxy-service-3xqmo Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
Mar 28 11:17:32.077: INFO: proxy-service-3xqmo Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
Mar 28 11:17:33.077: INFO: proxy-service-3xqmo Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
Mar 28 11:17:34.077: INFO: proxy-service-3xqmo Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 28 11:17:34.080: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:80/: foo (200; 1.68966ms)
Mar 28 11:17:34.280: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:81/: bar (200; 1.566583ms)
Mar 28 11:17:34.485: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/proxy/: tls qux (200; 5.660569ms)
Mar 28 11:17:34.680: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/: bar (200; 1.164984ms)
Mar 28 11:17:34.881: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.512594ms)
Mar 28 11:17:35.081: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/: bar (200; 1.172118ms)
Mar 28 11:17:35.280: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/: bar (200; 1.021686ms)
Mar 28 11:17:35.481: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/proxy/: foo (200; 971.028µs)
Mar 28 11:17:35.686: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/proxy/: tls baz (200; 6.308806ms)
Mar 28 11:17:35.881: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/: foo (200; 988.977µs)
Mar 28 11:17:36.083: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:444/: tls qux (200; 2.82194ms)
Mar 28 11:17:36.282: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/re... (200; 1.643144ms)
Mar 28 11:17:36.481: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.191532ms)
Mar 28 11:17:36.682: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:460/proxy/: tls baz (200; 1.227898ms)
Mar 28 11:17:36.882: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:81/: bar (200; 1.257536ms)
Mar 28 11:17:37.082: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/: tls baz (200; 1.01531ms)
Mar 28 11:17:37.282: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:80/: foo (200; 1.295111ms)
Mar 28 11:17:37.482: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/rewriteme"... (200; 1.574886ms)
Mar 28 11:17:37.682: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/re... (200; 1.218617ms)
Mar 28 11:17:37.887: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/... (200; 6.325728ms)
Mar 28 11:17:38.082: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/: foo (200; 1.186717ms)
Mar 28 11:17:38.282: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/rewrite... (200; 1.189883ms)
Mar 28 11:17:38.485: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:443/: tls baz (200; 3.039237ms)
Mar 28 11:17:38.683: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/proxy/: foo (200; 1.061714ms)
Mar 28 11:17:38.883: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/proxy/: bar (200; 1.02378ms)
Mar 28 11:17:39.083: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/: foo (200; 1.188406ms)
Mar 28 11:17:39.283: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/: bar (200; 1.147804ms)
Mar 28 11:17:39.483: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.164381ms)
Mar 28 11:17:39.683: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.188623ms)
Mar 28 11:17:39.884: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:462/proxy/: tls qux (200; 2.179432ms)
Mar 28 11:17:40.083: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/: foo (200; 1.089921ms)
Mar 28 11:17:40.284: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/: tls qux (200; 1.075656ms)
Mar 28 11:17:40.484: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/proxy/: bar (200; 1.008655ms)
Mar 28 11:17:40.684: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/rewrite... (200; 1.187522ms)
Mar 28 11:17:40.884: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:80/: foo (200; 1.345318ms)
Mar 28 11:17:41.084: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:81/: bar (200; 1.258091ms)
Mar 28 11:17:41.284: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/proxy/: tls qux (200; 1.08627ms)
Mar 28 11:17:41.484: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/: bar (200; 1.119829ms)
Mar 28 11:17:41.684: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/: bar (200; 1.010563ms)
Mar 28 11:17:41.885: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/: bar (200; 1.386944ms)
Mar 28 11:17:42.085: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/proxy/: foo (200; 989.96µs)
Mar 28 11:17:42.285: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/proxy/: tls baz (200; 1.034117ms)
Mar 28 11:17:42.486: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.844987ms)
Mar 28 11:17:42.685: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/: foo (200; 1.004092ms)
Mar 28 11:17:42.885: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:444/: tls qux (200; 1.307895ms)
Mar 28 11:17:43.086: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/re... (200; 1.349703ms)
Mar 28 11:17:43.286: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.561193ms)
Mar 28 11:17:43.486: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:460/proxy/: tls baz (200; 1.120621ms)
Mar 28 11:17:43.686: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:81/: bar (200; 1.270167ms)
Mar 28 11:17:43.886: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/: tls baz (200; 1.474489ms)
Mar 28 11:17:44.087: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:80/: foo (200; 1.874738ms)
Mar 28 11:17:44.287: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/rewriteme"... (200; 1.491004ms)
Mar 28 11:17:44.486: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/re... (200; 1.117022ms)
Mar 28 11:17:44.686: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/... (200; 1.177354ms)
Mar 28 11:17:44.887: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:443/: tls baz (200; 1.311905ms)
Mar 28 11:17:45.088: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/proxy/: foo (200; 2.064058ms)
Mar 28 11:17:45.287: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/proxy/: bar (200; 1.386367ms)
Mar 28 11:17:45.487: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/: foo (200; 1.091549ms)
Mar 28 11:17:45.687: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/: foo (200; 1.201062ms)
Mar 28 11:17:45.887: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/rewrite... (200; 1.188159ms)
Mar 28 11:17:46.087: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.149481ms)
Mar 28 11:17:46.288: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:462/proxy/: tls qux (200; 1.145251ms)
Mar 28 11:17:46.488: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/: foo (200; 1.000519ms)
Mar 28 11:17:46.688: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/: tls qux (200; 1.051396ms)
Mar 28 11:17:46.888: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/proxy/: bar (200; 1.069833ms)
Mar 28 11:17:47.088: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/rewrite... (200; 1.138642ms)
Mar 28 11:17:47.288: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/: bar (200; 1.164644ms)
Mar 28 11:17:47.488: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.142915ms)
Mar 28 11:17:47.689: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:80/: foo (200; 1.264296ms)
Mar 28 11:17:47.889: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/rewriteme"... (200; 1.175585ms)
Mar 28 11:17:48.091: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/re... (200; 3.268416ms)
Mar 28 11:17:48.292: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/... (200; 3.799533ms)
Mar 28 11:17:48.490: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:443/: tls baz (200; 1.845522ms)
Mar 28 11:17:48.689: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/proxy/: foo (200; 997.235µs)
Mar 28 11:17:48.891: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/proxy/: bar (200; 2.835924ms)
Mar 28 11:17:49.090: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/: foo (200; 1.630129ms)
Mar 28 11:17:49.290: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/: foo (200; 1.138626ms)
Mar 28 11:17:49.490: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/rewrite... (200; 1.147677ms)
Mar 28 11:17:49.690: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/: foo (200; 1.052112ms)
Mar 28 11:17:49.890: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/: tls qux (200; 931.118µs)
Mar 28 11:17:50.090: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/proxy/: bar (200; 951.991µs)
Mar 28 11:17:50.290: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/rewrite... (200; 1.111891ms)
Mar 28 11:17:50.490: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/: bar (200; 1.053274ms)
Mar 28 11:17:50.690: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.142692ms)
Mar 28 11:17:50.891: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.381659ms)
Mar 28 11:17:51.091: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:462/proxy/: tls qux (200; 1.197552ms)
Mar 28 11:17:51.291: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:80/: foo (200; 1.270107ms)
Mar 28 11:17:51.491: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:81/: bar (200; 1.327894ms)
Mar 28 11:17:51.691: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/proxy/: tls qux (200; 1.048419ms)
Mar 28 11:17:51.891: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/: bar (200; 1.192911ms)
Mar 28 11:17:52.091: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/: bar (200; 989.078µs)
Mar 28 11:17:52.291: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/: bar (200; 949.443µs)
Mar 28 11:17:52.491: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/proxy/: foo (200; 1.084173ms)
Mar 28 11:17:52.691: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/proxy/: tls baz (200; 931.294µs)
Mar 28 11:17:52.892: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.136684ms)
Mar 28 11:17:53.092: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/: foo (200; 1.363877ms)
Mar 28 11:17:53.292: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:444/: tls qux (200; 1.265212ms)
Mar 28 11:17:53.492: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/re... (200; 1.092879ms)
Mar 28 11:17:53.692: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.064807ms)
Mar 28 11:17:53.892: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:460/proxy/: tls baz (200; 1.138621ms)
Mar 28 11:17:54.093: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:81/: bar (200; 1.718351ms)
Mar 28 11:17:54.292: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/: tls baz (200; 974.092µs)
Mar 28 11:17:54.493: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:444/: tls qux (200; 1.268873ms)
Mar 28 11:17:54.693: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/re... (200; 1.397568ms)
Mar 28 11:17:54.894: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/proxy/: foo (200; 2.256167ms)
Mar 28 11:17:55.093: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:460/proxy/: tls baz (200; 1.190832ms)
Mar 28 11:17:55.293: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:81/: bar (200; 1.349034ms)
Mar 28 11:17:55.493: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/: tls baz (200; 997.852µs)
Mar 28 11:17:55.694: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:80/: foo (200; 1.277226ms)
Mar 28 11:17:55.894: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/rewriteme"... (200; 1.604905ms)
Mar 28 11:17:56.094: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/re... (200; 1.179861ms)
Mar 28 11:17:56.294: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/... (200; 1.354159ms)
Mar 28 11:17:56.494: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/rewrite... (200; 1.185566ms)
Mar 28 11:17:56.694: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:443/: tls baz (200; 1.319324ms)
Mar 28 11:17:56.894: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/proxy/: foo (200; 1.221084ms)
Mar 28 11:17:57.094: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/proxy/: bar (200; 1.126077ms)
Mar 28 11:17:57.295: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/: foo (200; 1.350055ms)
Mar 28 11:17:57.495: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/: foo (200; 1.34236ms)
Mar 28 11:17:57.695: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.17787ms)
Mar 28 11:17:57.895: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.152508ms)
Mar 28 11:17:58.095: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:462/proxy/: tls qux (200; 1.226365ms)
Mar 28 11:17:58.295: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/: foo (200; 1.240481ms)
Mar 28 11:17:58.495: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/: tls qux (200; 1.07213ms)
Mar 28 11:17:58.695: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/proxy/: bar (200; 902.167µs)
Mar 28 11:17:58.895: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/rewrite... (200; 1.16452ms)
Mar 28 11:17:59.096: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/: bar (200; 1.490185ms)
Mar 28 11:17:59.296: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:80/: foo (200; 1.253073ms)
Mar 28 11:17:59.496: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:81/: bar (200; 1.300887ms)
Mar 28 11:17:59.696: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/proxy/: tls qux (200; 1.011428ms)
Mar 28 11:17:59.896: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/: bar (200; 1.080899ms)
Mar 28 11:18:00.096: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/: bar (200; 1.330108ms)
Mar 28 11:18:00.296: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/: bar (200; 1.111327ms)
Mar 28 11:18:00.496: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/proxy/: foo (200; 1.100814ms)
Mar 28 11:18:00.696: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/proxy/: tls baz (200; 1.08844ms)
Mar 28 11:18:00.897: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.282926ms)
Mar 28 11:18:01.097: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/: foo (200; 1.027252ms)
Mar 28 11:18:01.297: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/: foo (200; 1.030276ms)
Mar 28 11:18:01.497: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:444/: tls qux (200; 1.394541ms)
Mar 28 11:18:01.697: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/re... (200; 1.249388ms)
Mar 28 11:18:01.897: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.137739ms)
Mar 28 11:18:02.097: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:460/proxy/: tls baz (200; 1.264166ms)
Mar 28 11:18:02.297: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:81/: bar (200; 1.273442ms)
Mar 28 11:18:02.497: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/: tls baz (200; 1.058382ms)
Mar 28 11:18:02.698: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:80/: foo (200; 1.932166ms)
Mar 28 11:18:02.898: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/rewriteme"... (200; 1.255366ms)
Mar 28 11:18:03.098: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/re... (200; 1.263719ms)
Mar 28 11:18:03.298: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/... (200; 1.283111ms)
Mar 28 11:18:03.498: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:443/: tls baz (200; 1.389763ms)
Mar 28 11:18:03.698: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/proxy/: foo (200; 992.902µs)
Mar 28 11:18:03.898: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/proxy/: bar (200; 1.002463ms)
Mar 28 11:18:04.098: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/: foo (200; 1.195871ms)
Mar 28 11:18:04.298: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/: foo (200; 1.239357ms)
Mar 28 11:18:04.499: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/rewrite... (200; 1.154321ms)
Mar 28 11:18:04.699: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:462/proxy/: tls qux (200; 1.170671ms)
Mar 28 11:18:04.900: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/: foo (200; 1.981677ms)
Mar 28 11:18:05.099: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/: tls qux (200; 1.01401ms)
Mar 28 11:18:05.299: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/proxy/: bar (200; 1.000867ms)
Mar 28 11:18:05.499: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/rewrite... (200; 1.154035ms)
Mar 28 11:18:05.700: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/: bar (200; 1.358011ms)
Mar 28 11:18:05.900: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.484394ms)
Mar 28 11:18:06.100: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.119112ms)
Mar 28 11:18:06.300: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:80/: foo (200; 1.26197ms)
Mar 28 11:18:06.500: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:81/: bar (200; 1.26909ms)
Mar 28 11:18:06.700: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/proxy/: tls qux (200; 1.253458ms)
Mar 28 11:18:06.900: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/: bar (200; 1.140276ms)
Mar 28 11:18:07.100: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/: bar (200; 960.357µs)
Mar 28 11:18:07.300: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/: bar (200; 1.016891ms)
Mar 28 11:18:07.500: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/proxy/: foo (200; 1.152696ms)
Mar 28 11:18:07.700: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/proxy/: tls baz (200; 1.043965ms)
Mar 28 11:18:07.901: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.119993ms)
Mar 28 11:18:08.101: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:81/: bar (200; 1.195748ms)
Mar 28 11:18:08.301: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/proxy/: tls qux (200; 1.025303ms)
Mar 28 11:18:08.501: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/: bar (200; 988.204µs)
Mar 28 11:18:08.701: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:80/: foo (200; 1.10375ms)
Mar 28 11:18:08.901: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/: bar (200; 993.524µs)
Mar 28 11:18:09.101: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/proxy/: foo (200; 942.092µs)
Mar 28 11:18:09.301: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/proxy/: tls baz (200; 1.060518ms)
Mar 28 11:18:09.502: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.108091ms)
Mar 28 11:18:09.702: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/: bar (200; 1.345237ms)
Mar 28 11:18:09.902: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/: foo (200; 988.126µs)
Mar 28 11:18:10.102: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/re... (200; 1.152205ms)
Mar 28 11:18:10.302: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.34255ms)
Mar 28 11:18:10.502: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:460/proxy/: tls baz (200; 1.191962ms)
Mar 28 11:18:10.703: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:444/: tls qux (200; 1.84417ms)
Mar 28 11:18:10.903: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/: tls baz (200; 1.132813ms)
Mar 28 11:18:11.103: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:81/: bar (200; 1.20736ms)
Mar 28 11:18:11.303: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/rewriteme"... (200; 1.45916ms)
Mar 28 11:18:11.503: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/re... (200; 1.112734ms)
Mar 28 11:18:11.703: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/... (200; 1.238642ms)
Mar 28 11:18:11.903: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:80/: foo (200; 1.239371ms)
Mar 28 11:18:12.103: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/proxy/: foo (200; 930.694µs)
Mar 28 11:18:12.303: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/proxy/: bar (200; 900.249µs)
Mar 28 11:18:12.504: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/: foo (200; 1.33582ms)
Mar 28 11:18:12.704: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/: foo (200; 1.092774ms)
Mar 28 11:18:12.904: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/rewrite... (200; 1.38148ms)
Mar 28 11:18:13.104: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:443/: tls baz (200; 1.470631ms)
Mar 28 11:18:13.304: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/: tls qux (200; 981.812µs)
Mar 28 11:18:13.504: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/proxy/: bar (200; 916.378µs)
Mar 28 11:18:13.704: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/rewrite... (200; 1.183177ms)
Mar 28 11:18:13.904: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/: bar (200; 1.10195ms)
Mar 28 11:18:14.104: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.093616ms)
Mar 28 11:18:14.305: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.113924ms)
Mar 28 11:18:14.507: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:462/proxy/: tls qux (200; 3.555152ms)
Mar 28 11:18:14.705: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/: foo (200; 1.1661ms)
Mar 28 11:18:14.905: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/: foo (200; 1.123839ms)
Mar 28 11:18:15.105: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/rewrite... (200; 1.117009ms)
Mar 28 11:18:15.306: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:443/: tls baz (200; 1.488501ms)
Mar 28 11:18:15.506: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/proxy/: foo (200; 1.312977ms)
Mar 28 11:18:15.705: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/proxy/: bar (200; 990.643µs)
Mar 28 11:18:15.906: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/: foo (200; 1.106615ms)
Mar 28 11:18:16.106: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/: bar (200; 1.160782ms)
Mar 28 11:18:16.306: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.092084ms)
Mar 28 11:18:16.506: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.211003ms)
Mar 28 11:18:16.706: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:462/proxy/: tls qux (200; 1.155ms)
Mar 28 11:18:16.906: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/: foo (200; 1.085324ms)
Mar 28 11:18:17.106: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/: tls qux (200; 933.593µs)
Mar 28 11:18:17.306: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/proxy/: bar (200; 978.268µs)
Mar 28 11:18:17.507: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/rewrite... (200; 1.224591ms)
Mar 28 11:18:17.707: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:80/: foo (200; 1.31336ms)
Mar 28 11:18:17.907: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:81/: bar (200; 1.69418ms)
Mar 28 11:18:18.107: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/proxy/: tls qux (200; 1.001515ms)
Mar 28 11:18:18.307: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/: bar (200; 1.090892ms)
Mar 28 11:18:18.508: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.593749ms)
Mar 28 11:18:18.707: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/: bar (200; 916.642µs)
Mar 28 11:18:18.907: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/: bar (200; 934.053µs)
Mar 28 11:18:19.107: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/proxy/: foo (200; 988.728µs)
Mar 28 11:18:19.307: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/proxy/: tls baz (200; 1.006574ms)
Mar 28 11:18:19.508: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/: foo (200; 940.886µs)
Mar 28 11:18:19.708: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:444/: tls qux (200; 1.280786ms)
Mar 28 11:18:19.908: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/re... (200; 1.124623ms)
Mar 28 11:18:20.108: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.111335ms)
Mar 28 11:18:20.308: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:460/proxy/: tls baz (200; 1.275102ms)
Mar 28 11:18:20.508: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:81/: bar (200; 1.189138ms)
Mar 28 11:18:20.708: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/: tls baz (200; 994.036µs)
Mar 28 11:18:20.909: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:80/: foo (200; 1.478835ms)
Mar 28 11:18:21.109: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/rewriteme"... (200; 1.111031ms)
Mar 28 11:18:21.309: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/re... (200; 1.16181ms)
Mar 28 11:18:21.509: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/... (200; 1.204432ms)
Mar 28 11:18:21.709: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/proxy/: tls baz (200; 1.41596ms)
Mar 28 11:18:21.909: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.2998ms)
Mar 28 11:18:22.109: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/: bar (200; 995.408µs)
Mar 28 11:18:22.309: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/: bar (200; 958.916µs)
Mar 28 11:18:22.509: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/proxy/: foo (200; 929.313µs)
Mar 28 11:18:22.710: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/: foo (200; 1.165735ms)
Mar 28 11:18:22.910: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:460/proxy/: tls baz (200; 1.060705ms)
Mar 28 11:18:23.110: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:444/: tls qux (200; 1.716346ms)
Mar 28 11:18:23.310: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/re... (200; 1.178674ms)
Mar 28 11:18:23.510: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.464571ms)
Mar 28 11:18:23.710: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:81/: bar (200; 1.260577ms)
Mar 28 11:18:23.910: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/: tls baz (200; 1.011102ms)
Mar 28 11:18:24.110: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/... (200; 1.13394ms)
Mar 28 11:18:24.311: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:80/: foo (200; 1.239959ms)
Mar 28 11:18:24.511: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/rewriteme"... (200; 1.259276ms)
Mar 28 11:18:24.711: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/re... (200; 1.11104ms)
Mar 28 11:18:24.911: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/: foo (200; 1.56156ms)
Mar 28 11:18:25.112: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/: foo (200; 2.25231ms)
Mar 28 11:18:25.312: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/rewrite... (200; 1.47494ms)
Mar 28 11:18:25.512: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:443/: tls baz (200; 1.269201ms)
Mar 28 11:18:25.711: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/proxy/: foo (200; 1.029038ms)
Mar 28 11:18:25.912: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/proxy/: bar (200; 1.028926ms)
Mar 28 11:18:26.112: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/rewrite... (200; 1.190318ms)
Mar 28 11:18:26.312: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/: bar (200; 1.265021ms)
Mar 28 11:18:26.512: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.268682ms)
Mar 28 11:18:26.712: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.2999ms)
Mar 28 11:18:26.912: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:462/proxy/: tls qux (200; 1.238239ms)
Mar 28 11:18:27.112: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/: foo (200; 1.064904ms)
Mar 28 11:18:27.312: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/: tls qux (200; 1.101602ms)
Mar 28 11:18:27.513: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/proxy/: bar (200; 1.10957ms)
Mar 28 11:18:27.713: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/: bar (200; 1.15511ms)
Mar 28 11:18:27.913: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:80/: foo (200; 1.206759ms)
Mar 28 11:18:28.113: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:81/: bar (200; 1.287694ms)
Mar 28 11:18:28.313: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/proxy/: tls qux (200; 1.087583ms)
Mar 28 11:18:28.513: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:444/: tls qux (200; 1.389213ms)
Mar 28 11:18:28.713: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/re... (200; 1.199003ms)
Mar 28 11:18:28.913: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.225582ms)
Mar 28 11:18:29.113: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:460/proxy/: tls baz (200; 1.237081ms)
Mar 28 11:18:29.314: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:81/: bar (200; 1.274047ms)
Mar 28 11:18:29.514: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/: tls baz (200; 1.199021ms)
Mar 28 11:18:29.714: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:80/: foo (200; 1.351738ms)
Mar 28 11:18:29.914: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/rewriteme"... (200; 1.262549ms)
Mar 28 11:18:30.114: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/re... (200; 1.31865ms)
Mar 28 11:18:30.314: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/... (200; 1.292404ms)
Mar 28 11:18:30.514: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/rewrite... (200; 1.235ms)
Mar 28 11:18:30.715: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:443/: tls baz (200; 1.378146ms)
Mar 28 11:18:30.914: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/proxy/: foo (200; 1.070304ms)
Mar 28 11:18:31.115: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/proxy/: bar (200; 1.041398ms)
Mar 28 11:18:31.315: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/: foo (200; 1.159893ms)
Mar 28 11:18:31.515: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/: foo (200; 1.162753ms)
Mar 28 11:18:31.715: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.17167ms)
Mar 28 11:18:31.915: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.171637ms)
Mar 28 11:18:32.115: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:462/proxy/: tls qux (200; 1.153548ms)
Mar 28 11:18:32.315: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/: foo (200; 1.195279ms)
Mar 28 11:18:32.515: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/: tls qux (200; 996.507µs)
Mar 28 11:18:32.716: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/proxy/: bar (200; 1.300779ms)
Mar 28 11:18:32.916: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/rewrite... (200; 1.161628ms)
Mar 28 11:18:33.116: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/: bar (200; 1.090094ms)
Mar 28 11:18:33.316: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:80/: foo (200; 1.212157ms)
Mar 28 11:18:33.516: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:81/: bar (200; 1.123667ms)
Mar 28 11:18:33.716: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/proxy/: tls qux (200; 973.867µs)
Mar 28 11:18:33.916: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/: bar (200; 1.099215ms)
Mar 28 11:18:34.116: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/: bar (200; 927.644µs)
Mar 28 11:18:34.317: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/: bar (200; 1.862215ms)
Mar 28 11:18:34.517: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/proxy/: foo (200; 921.316µs)
Mar 28 11:18:34.717: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/proxy/: tls baz (200; 1.049721ms)
Mar 28 11:18:34.917: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.119309ms)
Mar 28 11:18:35.117: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/: foo (200; 934.596µs)
Mar 28 11:18:35.318: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:81/: bar (200; 1.488051ms)
Mar 28 11:18:35.517: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/proxy/: tls qux (200; 1.002264ms)
Mar 28 11:18:35.718: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/: bar (200; 1.118336ms)
Mar 28 11:18:35.918: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:80/: foo (200; 1.207177ms)
Mar 28 11:18:36.118: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/: bar (200; 900.255µs)
Mar 28 11:18:36.318: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/proxy/: foo (200; 1.021283ms)
Mar 28 11:18:36.518: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/proxy/: tls baz (200; 963.115µs)
Mar 28 11:18:36.718: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.14718ms)
Mar 28 11:18:36.918: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/: bar (200; 986.331µs)
Mar 28 11:18:37.118: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/: foo (200; 936.942µs)
Mar 28 11:18:37.319: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/re... (200; 1.117718ms)
Mar 28 11:18:37.519: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.112077ms)
Mar 28 11:18:37.719: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:460/proxy/: tls baz (200; 1.138263ms)
Mar 28 11:18:37.919: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:444/: tls qux (200; 1.543249ms)
Mar 28 11:18:38.119: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/: tls baz (200; 991.695µs)
Mar 28 11:18:38.319: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:81/: bar (200; 1.247917ms)
Mar 28 11:18:38.519: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/rewriteme"... (200; 1.072287ms)
Mar 28 11:18:38.719: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/re... (200; 1.027299ms)
Mar 28 11:18:38.920: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/... (200; 1.350112ms)
Mar 28 11:18:39.120: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:80/: foo (200; 1.733487ms)
Mar 28 11:18:39.320: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/proxy/: foo (200; 981.678µs)
Mar 28 11:18:39.520: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/proxy/: bar (200; 983.08µs)
Mar 28 11:18:39.720: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/: foo (200; 1.126954ms)
Mar 28 11:18:39.920: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/: foo (200; 1.139152ms)
Mar 28 11:18:40.120: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/rewrite... (200; 1.102407ms)
Mar 28 11:18:40.320: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:443/: tls baz (200; 1.258499ms)
Mar 28 11:18:40.520: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/: tls qux (200; 991.975µs)
Mar 28 11:18:40.720: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/proxy/: bar (200; 907.533µs)
Mar 28 11:18:40.921: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/rewrite... (200; 1.138671ms)
Mar 28 11:18:41.121: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/: bar (200; 1.191133ms)
Mar 28 11:18:41.321: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.220092ms)
Mar 28 11:18:41.522: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.72345ms)
Mar 28 11:18:41.721: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:462/proxy/: tls qux (200; 1.240802ms)
Mar 28 11:18:41.921: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/: foo (200; 964.512µs)
Mar 28 11:18:42.121: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.136237ms)
Mar 28 11:18:42.321: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/: bar (200; 974.433µs)
Mar 28 11:18:42.521: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/: bar (200; 932.973µs)
Mar 28 11:18:42.722: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/proxy/: foo (200; 969.352µs)
Mar 28 11:18:42.922: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/proxy/: tls baz (200; 1.050354ms)
Mar 28 11:18:43.122: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/: foo (200; 1.191786ms)
Mar 28 11:18:43.322: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:444/: tls qux (200; 1.319695ms)
Mar 28 11:18:43.522: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/re... (200; 1.084667ms)
Mar 28 11:18:43.722: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.191973ms)
Mar 28 11:18:43.923: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:460/proxy/: tls baz (200; 1.478564ms)
Mar 28 11:18:44.123: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:81/: bar (200; 1.285634ms)
Mar 28 11:18:44.323: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/: tls baz (200; 1.110282ms)
Mar 28 11:18:44.525: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:80/: foo (200; 2.837193ms)
Mar 28 11:18:44.724: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/rewriteme"... (200; 1.928894ms)
Mar 28 11:18:44.923: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/re... (200; 1.185341ms)
Mar 28 11:18:45.123: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/... (200; 1.181074ms)
Mar 28 11:18:45.323: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/: foo (200; 1.145275ms)
Mar 28 11:18:45.524: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/rewrite... (200; 1.170409ms)
Mar 28 11:18:45.724: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:443/: tls baz (200; 1.287492ms)
Mar 28 11:18:45.924: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/proxy/: foo (200; 1.088383ms)
Mar 28 11:18:46.124: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/proxy/: bar (200; 1.48089ms)
Mar 28 11:18:46.324: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/: foo (200; 1.095102ms)
Mar 28 11:18:46.524: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/: bar (200; 1.207633ms)
Mar 28 11:18:46.724: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.162973ms)
Mar 28 11:18:46.924: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.144687ms)
Mar 28 11:18:47.124: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:462/proxy/: tls qux (200; 1.140257ms)
Mar 28 11:18:47.324: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/: foo (200; 1.001182ms)
Mar 28 11:18:47.525: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/: tls qux (200; 995.352µs)
Mar 28 11:18:47.725: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/proxy/: bar (200; 961.154µs)
Mar 28 11:18:47.925: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/rewrite... (200; 1.146941ms)
Mar 28 11:18:48.126: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:80/: foo (200; 1.595454ms)
Mar 28 11:18:48.326: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:81/: bar (200; 1.669192ms)
Mar 28 11:18:48.525: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/proxy/: tls qux (200; 961.182µs)
Mar 28 11:18:48.725: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/: bar (200; 1.034194ms)
Mar 28 11:18:48.925: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/: bar (200; 1.066834ms)
Mar 28 11:18:49.126: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/: bar (200; 1.387995ms)
Mar 28 11:18:49.326: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/proxy/: foo (200; 1.0452ms)
Mar 28 11:18:49.526: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/proxy/: tls baz (200; 1.489021ms)
Mar 28 11:18:49.726: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.186112ms)
Mar 28 11:18:49.926: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/: foo (200; 992.692µs)
Mar 28 11:18:50.126: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:444/: tls qux (200; 1.309613ms)
Mar 28 11:18:50.326: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/re... (200; 1.119816ms)
Mar 28 11:18:50.531: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/proxy/: foo (200; 6.00756ms)
Mar 28 11:18:50.727: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:460/proxy/: tls baz (200; 1.186183ms)
Mar 28 11:18:50.927: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:81/: bar (200; 1.235736ms)
Mar 28 11:18:51.127: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/: tls baz (200; 971.41µs)
Mar 28 11:18:51.327: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:80/: foo (200; 1.25181ms)
Mar 28 11:18:51.527: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/rewriteme"... (200; 1.146085ms)
Mar 28 11:18:51.728: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/re... (200; 1.497239ms)
Mar 28 11:18:51.928: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/... (200; 1.596113ms)
Mar 28 11:18:52.128: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:443/: tls baz (200; 1.254787ms)
Mar 28 11:18:52.328: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/proxy/: foo (200; 1.091972ms)
Mar 28 11:18:52.528: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/proxy/: bar (200; 955.261µs)
Mar 28 11:18:52.728: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/: foo (200; 1.096316ms)
Mar 28 11:18:52.928: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/: foo (200; 1.179272ms)
Mar 28 11:18:53.128: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/rewrite... (200; 1.086329ms)
Mar 28 11:18:53.328: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.136206ms)
Mar 28 11:18:53.529: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:462/proxy/: tls qux (200; 1.518102ms)
Mar 28 11:18:53.728: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/: foo (200; 979.747µs)
Mar 28 11:18:53.929: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/: tls qux (200; 1.302177ms)
Mar 28 11:18:54.129: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/proxy/: bar (200; 1.412813ms)
Mar 28 11:18:54.329: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/rewrite... (200; 1.109371ms)
Mar 28 11:18:54.529: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/: bar (200; 1.066785ms)
Mar 28 11:18:54.729: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.377289ms)
Mar 28 11:18:54.929: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:80/: foo (200; 1.237842ms)
Mar 28 11:18:55.129: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:81/: bar (200; 1.238855ms)
Mar 28 11:18:55.329: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/proxy/: tls qux (200; 1.12405ms)
Mar 28 11:18:55.530: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/: bar (200; 1.095045ms)
Mar 28 11:18:55.730: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:81/: bar (200; 1.226266ms)
Mar 28 11:18:55.930: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/: tls baz (200; 999.557µs)
Mar 28 11:18:56.131: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:80/: foo (200; 1.812916ms)
Mar 28 11:18:56.330: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/rewriteme"... (200; 1.181009ms)
Mar 28 11:18:56.530: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/re... (200; 1.219514ms)
Mar 28 11:18:56.730: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/... (200; 1.277159ms)
Mar 28 11:18:56.931: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/: foo (200; 1.357089ms)
Mar 28 11:18:57.131: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/rewrite... (200; 1.136522ms)
Mar 28 11:18:57.331: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:443/: tls baz (200; 1.341787ms)
Mar 28 11:18:57.531: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/proxy/: foo (200; 1.115134ms)
Mar 28 11:18:57.731: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/proxy/: bar (200; 1.044613ms)
Mar 28 11:18:57.931: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/: foo (200; 1.18065ms)
Mar 28 11:18:58.132: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/: bar (200; 1.547267ms)
Mar 28 11:18:58.331: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.172749ms)
Mar 28 11:18:58.531: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.226735ms)
Mar 28 11:18:58.732: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:462/proxy/: tls qux (200; 1.207694ms)
Mar 28 11:18:58.931: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/: foo (200; 1.022197ms)
Mar 28 11:18:59.132: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/: tls qux (200; 1.046035ms)
Mar 28 11:18:59.332: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/proxy/: bar (200; 1.032744ms)
Mar 28 11:18:59.532: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/rewrite... (200; 1.517352ms)
Mar 28 11:18:59.732: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:80/: foo (200; 1.300575ms)
Mar 28 11:18:59.932: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:81/: bar (200; 1.29496ms)
Mar 28 11:19:00.132: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/proxy/: tls qux (200; 1.080718ms)
Mar 28 11:19:00.332: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/: bar (200; 1.168838ms)
Mar 28 11:19:00.533: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.28286ms)
Mar 28 11:19:00.733: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/: bar (200; 1.057914ms)
Mar 28 11:19:00.933: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/: bar (200; 1.06511ms)
Mar 28 11:19:01.133: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/proxy/: foo (200; 1.077966ms)
Mar 28 11:19:01.333: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/proxy/: tls baz (200; 982.972µs)
Mar 28 11:19:01.533: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/: foo (200; 1.022735ms)
Mar 28 11:19:01.733: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:444/: tls qux (200; 1.321256ms)
Mar 28 11:19:01.933: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/re... (200; 1.154677ms)
Mar 28 11:19:02.134: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.385494ms)
Mar 28 11:19:02.334: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:460/proxy/: tls baz (200; 1.172362ms)
Mar 28 11:19:02.534: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/: bar (200; 1.008347ms)
Mar 28 11:19:02.734: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/: bar (200; 1.000375ms)
Mar 28 11:19:02.934: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/proxy/: foo (200; 1.255173ms)
Mar 28 11:19:03.134: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/proxy/: tls baz (200; 986.306µs)
Mar 28 11:19:03.334: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.151499ms)
Mar 28 11:19:03.534: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/: foo (200; 1.061066ms)
Mar 28 11:19:03.734: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:444/: tls qux (200; 1.24773ms)
Mar 28 11:19:03.934: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/re... (200; 1.201346ms)
Mar 28 11:19:04.135: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.145556ms)
Mar 28 11:19:04.335: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:460/proxy/: tls baz (200; 1.263757ms)
Mar 28 11:19:04.535: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:81/: bar (200; 1.310215ms)
Mar 28 11:19:04.735: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/: tls baz (200; 1.042793ms)
Mar 28 11:19:04.935: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:80/: foo (200; 1.268159ms)
Mar 28 11:19:05.135: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/rewriteme"... (200; 1.226758ms)
Mar 28 11:19:05.335: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/re... (200; 1.242006ms)
Mar 28 11:19:05.535: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/... (200; 1.170692ms)
Mar 28 11:19:05.736: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:443/: tls baz (200; 1.221021ms)
Mar 28 11:19:05.935: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/proxy/: foo (200; 1.003066ms)
Mar 28 11:19:06.136: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/proxy/: bar (200; 1.003582ms)
Mar 28 11:19:06.336: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/: foo (200; 1.166845ms)
Mar 28 11:19:06.536: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/: foo (200; 1.129047ms)
Mar 28 11:19:06.736: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/rewrite... (200; 1.171461ms)
Mar 28 11:19:06.936: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.134822ms)
Mar 28 11:19:07.136: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:462/proxy/: tls qux (200; 1.204755ms)
Mar 28 11:19:07.336: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/: foo (200; 962.545µs)
Mar 28 11:19:07.536: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/: tls qux (200; 1.033347ms)
Mar 28 11:19:07.736: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/proxy/: bar (200; 1.001931ms)
Mar 28 11:19:07.937: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/rewrite... (200; 1.155275ms)
Mar 28 11:19:08.137: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/: bar (200; 1.108967ms)
Mar 28 11:19:08.337: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.145685ms)
Mar 28 11:19:08.537: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:80/: foo (200; 1.352198ms)
Mar 28 11:19:08.737: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:81/: bar (200; 1.343281ms)
Mar 28 11:19:08.937: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/proxy/: tls qux (200; 1.139671ms)
Mar 28 11:19:09.138: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/: bar (200; 1.464466ms)
Mar 28 11:19:09.337: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:80/: foo (200; 1.311949ms)
Mar 28 11:19:09.538: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:81/: bar (200; 1.32108ms)
Mar 28 11:19:09.738: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/proxy/: tls qux (200; 1.094588ms)
Mar 28 11:19:09.938: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/: bar (200; 1.282643ms)
Mar 28 11:19:10.138: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/: bar (200; 1.086189ms)
Mar 28 11:19:10.338: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/: bar (200; 1.216578ms)
Mar 28 11:19:10.538: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/proxy/: foo (200; 1.051388ms)
Mar 28 11:19:10.738: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/proxy/: tls baz (200; 1.09382ms)
Mar 28 11:19:10.938: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.178497ms)
Mar 28 11:19:11.138: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/: foo (200; 1.032554ms)
Mar 28 11:19:11.339: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:444/: tls qux (200; 1.367289ms)
Mar 28 11:19:11.539: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/re... (200; 1.217915ms)
Mar 28 11:19:11.739: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.227033ms)
Mar 28 11:19:11.939: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:460/proxy/: tls baz (200; 1.233649ms)
Mar 28 11:19:12.139: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:81/: bar (200; 1.321421ms)
Mar 28 11:19:12.339: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/: tls baz (200; 1.089132ms)
Mar 28 11:19:12.539: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:80/: foo (200; 1.338045ms)
Mar 28 11:19:12.739: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/rewriteme"... (200; 1.13497ms)
Mar 28 11:19:12.939: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/re... (200; 1.12483ms)
Mar 28 11:19:13.139: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/... (200; 1.14742ms)
Mar 28 11:19:13.340: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:443/: tls baz (200; 1.246945ms)
Mar 28 11:19:13.540: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/proxy/: foo (200; 1.088865ms)
Mar 28 11:19:13.740: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/proxy/: bar (200; 1.040743ms)
Mar 28 11:19:13.940: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/: foo (200; 1.153572ms)
Mar 28 11:19:14.140: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/: foo (200; 1.081752ms)
Mar 28 11:19:14.340: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/rewrite... (200; 1.165933ms)
Mar 28 11:19:14.540: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:462/proxy/: tls qux (200; 1.141408ms)
Mar 28 11:19:14.740: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/: foo (200; 1.069631ms)
Mar 28 11:19:14.940: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/: tls qux (200; 1.00115ms)
Mar 28 11:19:15.141: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/proxy/: bar (200; 996.6µs)
Mar 28 11:19:15.341: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/rewrite... (200; 1.141577ms)
Mar 28 11:19:15.541: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/: bar (200; 1.182523ms)
Mar 28 11:19:15.741: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.145254ms)
Mar 28 11:19:15.941: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.189022ms)
Mar 28 11:19:16.141: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/: bar (200; 1.012688ms)
Mar 28 11:19:16.341: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/: bar (200; 1.067394ms)
Mar 28 11:19:16.541: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/proxy/: foo (200; 988.216µs)
Mar 28 11:19:16.741: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/proxy/: tls baz (200; 981.387µs)
Mar 28 11:19:16.942: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.146019ms)
Mar 28 11:19:17.142: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/: foo (200; 1.11428ms)
Mar 28 11:19:17.342: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:444/: tls qux (200; 1.301943ms)
Mar 28 11:19:17.542: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/re... (200; 1.193705ms)
Mar 28 11:19:17.742: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.257959ms)
Mar 28 11:19:17.942: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:460/proxy/: tls baz (200; 1.191412ms)
Mar 28 11:19:18.142: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:81/: bar (200; 1.301251ms)
Mar 28 11:19:18.342: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/: tls baz (200; 988.763µs)
Mar 28 11:19:18.543: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:80/: foo (200; 1.249099ms)
Mar 28 11:19:18.743: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/rewriteme"... (200; 1.18374ms)
Mar 28 11:19:18.943: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/re... (200; 1.168083ms)
Mar 28 11:19:19.143: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/... (200; 1.232628ms)
Mar 28 11:19:19.343: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:443/: tls baz (200; 1.362291ms)
Mar 28 11:19:19.543: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/proxy/: foo (200; 1.005157ms)
Mar 28 11:19:19.743: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/proxy/: bar (200; 1.01806ms)
Mar 28 11:19:19.944: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/: foo (200; 1.504644ms)
Mar 28 11:19:20.144: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/: foo (200; 1.138464ms)
Mar 28 11:19:20.344: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/rewrite... (200; 1.155862ms)
Mar 28 11:19:20.544: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:462/proxy/: tls qux (200; 1.189431ms)
Mar 28 11:19:20.744: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/: foo (200; 1.053893ms)
Mar 28 11:19:20.944: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/: tls qux (200; 1.1201ms)
Mar 28 11:19:21.144: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/proxy/: bar (200; 1.061855ms)
Mar 28 11:19:21.344: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/rewrite... (200; 1.1718ms)
Mar 28 11:19:21.544: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/: bar (200; 1.119747ms)
Mar 28 11:19:21.745: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.170503ms)
Mar 28 11:19:21.945: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.162891ms)
Mar 28 11:19:22.145: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:80/: foo (200; 1.237381ms)
Mar 28 11:19:22.345: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:81/: bar (200; 1.255318ms)
Mar 28 11:19:22.545: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/proxy/: tls qux (200; 1.000515ms)
Mar 28 11:19:22.745: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/: bar (200; 1.17593ms)
Mar 28 11:19:22.945: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/: bar (200; 1.181132ms)
Mar 28 11:19:23.145: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/: bar (200; 944.025µs)
Mar 28 11:19:23.345: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/proxy/: foo (200; 1.027707ms)
Mar 28 11:19:23.546: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/proxy/: tls baz (200; 1.100827ms)
Mar 28 11:19:23.746: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.136891ms)
Mar 28 11:19:23.946: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/: foo (200; 1.017094ms)
Mar 28 11:19:24.146: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:444/: tls qux (200; 1.353992ms)
Mar 28 11:19:24.346: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/re... (200; 1.20185ms)
Mar 28 11:19:24.546: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.149098ms)
Mar 28 11:19:24.746: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:460/proxy/: tls baz (200; 1.205879ms)
Mar 28 11:19:24.947: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:81/: bar (200; 1.279608ms)
Mar 28 11:19:25.146: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/: tls baz (200; 1.068238ms)
Mar 28 11:19:25.347: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:80/: foo (200; 1.279351ms)
Mar 28 11:19:25.547: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/rewriteme"... (200; 1.216525ms)
Mar 28 11:19:25.747: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/re... (200; 1.193764ms)
Mar 28 11:19:25.947: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/... (200; 1.206419ms)
Mar 28 11:19:26.147: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/rewrite... (200; 1.218312ms)
Mar 28 11:19:26.347: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:443/: tls baz (200; 1.224857ms)
Mar 28 11:19:26.547: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/proxy/: foo (200; 1.029892ms)
Mar 28 11:19:26.747: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/proxy/: bar (200; 1.074133ms)
Mar 28 11:19:26.948: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/: foo (200; 1.334046ms)
Mar 28 11:19:27.148: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/: foo (200; 1.292415ms)
Mar 28 11:19:27.348: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.134359ms)
Mar 28 11:19:27.548: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.236179ms)
Mar 28 11:19:27.748: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:462/proxy/: tls qux (200; 1.165793ms)
Mar 28 11:19:27.948: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/: foo (200; 999.652µs)
Mar 28 11:19:28.148: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/: tls qux (200; 1.138224ms)
Mar 28 11:19:28.348: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/proxy/: bar (200; 1.165649ms)
Mar 28 11:19:28.548: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/rewrite... (200; 1.200629ms)
Mar 28 11:19:28.748: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/: bar (200; 1.242255ms)
Mar 28 11:19:28.948: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:80/: foo (200; 1.206803ms)
Mar 28 11:19:29.149: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:81/: bar (200; 1.239481ms)
Mar 28 11:19:29.348: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/proxy/: tls qux (200; 1.019128ms)
Mar 28 11:19:29.549: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/: bar (200; 1.149572ms)
Mar 28 11:19:29.749: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:462/proxy/: tls qux (200; 1.206246ms)
Mar 28 11:19:29.949: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/: foo (200; 1.026839ms)
Mar 28 11:19:30.149: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/: tls qux (200; 1.11545ms)
Mar 28 11:19:30.349: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/proxy/: bar (200; 1.143182ms)
Mar 28 11:19:30.549: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/rewrite... (200; 1.169964ms)
Mar 28 11:19:30.749: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/: bar (200; 1.166651ms)
Mar 28 11:19:30.950: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.171763ms)
Mar 28 11:19:31.150: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.241959ms)
Mar 28 11:19:31.350: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:80/: foo (200; 1.228824ms)
Mar 28 11:19:31.550: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:81/: bar (200; 1.218577ms)
Mar 28 11:19:31.750: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/proxy/: tls qux (200; 1.054259ms)
Mar 28 11:19:31.950: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/: bar (200; 1.107467ms)
Mar 28 11:19:32.150: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/: bar (200; 1.011448ms)
Mar 28 11:19:32.350: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/: bar (200; 1.045998ms)
Mar 28 11:19:32.550: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/proxy/: foo (200; 982.98µs)
Mar 28 11:19:32.750: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/proxy/: tls baz (200; 1.024339ms)
Mar 28 11:19:32.951: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.114825ms)
Mar 28 11:19:33.151: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/: foo (200; 968.071µs)
Mar 28 11:19:33.351: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:444/: tls qux (200; 1.220824ms)
Mar 28 11:19:33.551: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/re... (200; 1.141042ms)
Mar 28 11:19:33.751: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.111455ms)
Mar 28 11:19:33.951: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:460/proxy/: tls baz (200; 1.137815ms)
Mar 28 11:19:34.151: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:81/: bar (200; 1.223329ms)
Mar 28 11:19:34.351: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/: tls baz (200; 1.020362ms)
Mar 28 11:19:34.552: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:80/: foo (200; 1.614376ms)
Mar 28 11:19:34.752: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/rewriteme"... (200; 1.148707ms)
Mar 28 11:19:34.952: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/re... (200; 1.110501ms)
Mar 28 11:19:35.152: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/... (200; 1.178106ms)
Mar 28 11:19:35.352: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:443/: tls baz (200; 1.211874ms)
Mar 28 11:19:35.552: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/proxy/: foo (200; 997.401µs)
Mar 28 11:19:35.752: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/proxy/: bar (200; 1.069192ms)
Mar 28 11:19:35.952: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/: foo (200; 1.202719ms)
Mar 28 11:19:36.152: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/: foo (200; 1.166359ms)
Mar 28 11:19:36.352: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/rewrite... (200; 1.120961ms)
Mar 28 11:19:36.553: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:443/: tls baz (200; 1.270152ms)
Mar 28 11:19:36.753: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/proxy/: foo (200; 1.057105ms)
Mar 28 11:19:36.953: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/proxy/: bar (200; 1.023366ms)
Mar 28 11:19:37.153: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/: foo (200; 1.249415ms)
Mar 28 11:19:37.353: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/: foo (200; 1.305119ms)
Mar 28 11:19:37.553: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/rewrite... (200; 1.346852ms)
Mar 28 11:19:37.753: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/: foo (200; 1.187437ms)
Mar 28 11:19:37.953: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/: tls qux (200; 1.027944ms)
Mar 28 11:19:38.153: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/proxy/: bar (200; 1.000651ms)
Mar 28 11:19:38.354: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/rewrite... (200; 1.152124ms)
Mar 28 11:19:38.554: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/: bar (200; 1.182413ms)
Mar 28 11:19:38.754: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.219245ms)
Mar 28 11:19:38.954: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.20748ms)
Mar 28 11:19:39.154: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:462/proxy/: tls qux (200; 1.207099ms)
Mar 28 11:19:39.354: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:80/: foo (200; 1.265169ms)
Mar 28 11:19:39.554: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:81/: bar (200; 1.26754ms)
Mar 28 11:19:39.754: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/proxy/: tls qux (200; 1.081393ms)
Mar 28 11:19:39.954: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/: bar (200; 1.165874ms)
Mar 28 11:19:40.154: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/: bar (200; 1.119016ms)
Mar 28 11:19:40.354: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/: bar (200; 1.021179ms)
Mar 28 11:19:40.554: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/proxy/: foo (200; 994.31µs)
Mar 28 11:19:40.755: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/proxy/: tls baz (200; 987.753µs)
Mar 28 11:19:40.955: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.170575ms)
Mar 28 11:19:41.155: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/: foo (200; 977.981µs)
Mar 28 11:19:41.355: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:444/: tls qux (200; 1.347171ms)
Mar 28 11:19:41.555: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/re... (200; 1.223316ms)
Mar 28 11:19:41.755: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.160867ms)
Mar 28 11:19:41.955: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:460/proxy/: tls baz (200; 1.182175ms)
Mar 28 11:19:42.156: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:81/: bar (200; 1.402864ms)
Mar 28 11:19:42.356: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/: tls baz (200; 1.121737ms)
Mar 28 11:19:42.556: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:80/: foo (200; 1.299886ms)
Mar 28 11:19:42.756: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/rewriteme"... (200; 1.179205ms)
Mar 28 11:19:42.956: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/re... (200; 1.189785ms)
Mar 28 11:19:43.156: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/... (200; 1.22656ms)
Mar 28 11:19:43.360: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:460/proxy/: tls baz (200; 5.271462ms)
Mar 28 11:19:43.556: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:444/: tls qux (200; 1.242977ms)
Mar 28 11:19:43.756: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/re... (200; 1.149913ms)
Mar 28 11:19:43.956: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.14072ms)
Mar 28 11:19:44.157: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:81/: bar (200; 1.268021ms)
Mar 28 11:19:44.357: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/: tls baz (200; 1.035001ms)
Mar 28 11:19:44.557: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:443/proxy/... (200; 1.161808ms)
Mar 28 11:19:44.757: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:80/: foo (200; 1.232621ms)
Mar 28 11:19:44.957: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919/proxy/rewriteme"... (200; 1.245906ms)
Mar 28 11:19:45.157: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:80/proxy/re... (200; 1.183744ms)
Mar 28 11:19:45.357: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/: foo (200; 1.143629ms)
Mar 28 11:19:45.557: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:160/: foo (200; 1.130634ms)
Mar 28 11:19:45.757: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/proxy/rewrite... (200; 1.056757ms)
Mar 28 11:19:45.958: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:443/: tls baz (200; 1.219653ms)
Mar 28 11:19:46.157: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/proxy/: foo (200; 1.050132ms)
Mar 28 11:19:46.358: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/proxy/: bar (200; 1.049804ms)
Mar 28 11:19:46.558: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:80/rewrite... (200; 1.18516ms)
Mar 28 11:19:46.758: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/: bar (200; 1.168554ms)
Mar 28 11:19:46.958: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.154955ms)
Mar 28 11:19:47.158: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/http:proxy-service-3xqmo-2e919:162/proxy/: bar (200; 1.18808ms)
Mar 28 11:19:47.358: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/https:proxy-service-3xqmo-2e919:462/proxy/: tls qux (200; 1.185261ms)
Mar 28 11:19:47.558: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname1/: foo (200; 1.050505ms)
Mar 28 11:19:47.758: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/: tls qux (200; 1.141714ms)
Mar 28 11:19:47.958: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/proxy/: bar (200; 1.050556ms)
Mar 28 11:19:48.159: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:162/: bar (200; 1.22444ms)
Mar 28 11:19:48.359: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:80/: foo (200; 1.257146ms)
Mar 28 11:19:48.559: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:81/: bar (200; 1.428265ms)
Mar 28 11:19:48.759: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname2/proxy/: tls qux (200; 1.072248ms)
Mar 28 11:19:48.959: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/https:proxy-service-3xqmo:tlsportname1/proxy/: tls baz (200; 1.020356ms)
Mar 28 11:19:49.159: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/pods/proxy-service-3xqmo-2e919:160/proxy/: foo (200; 1.176413ms)
Mar 28 11:19:49.359: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname2/: bar (200; 1.018767ms)
Mar 28 11:19:49.559: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/http:proxy-service-3xqmo:portname2/: bar (200; 1.122466ms)
Mar 28 11:19:49.759: INFO: /api/v1/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/proxy/: foo (200; 1.011978ms)
Mar 28 11:19:49.959: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/: foo (200; 1.037531ms)
STEP: deleting replication controller proxy-service-3xqmo in namespace e2e-tests-proxy-xck2s
Mar 28 11:19:52.169: INFO: Deleting RC proxy-service-3xqmo took: 2.009352443s
Mar 28 11:19:52.169: INFO: Terminating RC proxy-service-3xqmo pods took: 39.299µs
[AfterEach] version v1
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:19:52.206: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-xck2s" for this suite.
• [SLOW TEST:152.179 seconds]
Proxy
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40
version v1
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:39
should proxy through a service and a pod [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:244
------------------------------
S
------------------------------
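The proxy conformance test above repeatedly GETs the apiserver proxy endpoints in both forms, the older /api/v1/proxy/namespaces/<ns>/... path and the newer /api/v1/namespaces/<ns>/.../proxy/ path, for the test service and its pod, and checks the echoed body ("foo", "bar", "tls baz", "tls qux") and the round-trip latency. Purely as an illustrative sketch, the Go snippet below shows how one of those requests could be reproduced by hand through a local kubectl proxy; the 127.0.0.1:8001 address is an assumption (kubectl proxy's default), and the namespace and service names are the ephemeral ones from this particular run, so they will not resolve outside it.

// Rough sketch only: fetch one of the proxy URLs exercised in the log above
// via `kubectl proxy` (assumed to be running on its default 127.0.0.1:8001).
// The namespace and service name are copied from this run and are merely an
// example of the URL shape, not stable identifiers.
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	url := "http://127.0.0.1:8001/api/v1/proxy/namespaces/e2e-tests-proxy-xck2s/services/proxy-service-3xqmo:portname1/"
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	// In the log above, portname1 endpoints answer "foo" with HTTP 200.
	fmt.Printf("%d: %s\n", resp.StatusCode, body)
}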
Kubectl client Kubectl describe
should check if kubectl describe prints relevant information for rc and pods [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:670
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:19:57.222: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:19:57.223: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-fs8m1
Mar 28 11:19:57.224: INFO: Get service account default in ns e2e-tests-kubectl-fs8m1 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:19:59.225: INFO: Service account default in ns e2e-tests-kubectl-fs8m1 with secrets found. (2.001769669s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:19:59.225: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-fs8m1
Mar 28 11:19:59.226: INFO: Service account default in ns e2e-tests-kubectl-fs8m1 with secrets found. (723.377µs)
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:120
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:670
Mar 28 11:19:59.226: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config create -f ../../examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-fs8m1'
Mar 28 11:19:59.270: INFO: stderr: ""
Mar 28 11:19:59.270: INFO: stdout: "replicationcontroller \"redis-master\" created"
Mar 28 11:19:59.270: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config create -f ../../examples/guestbook-go/redis-master-service.json --namespace=e2e-tests-kubectl-fs8m1'
Mar 28 11:19:59.336: INFO: stderr: ""
Mar 28 11:19:59.336: INFO: stdout: "service \"redis-master\" created"
Mar 28 11:19:59.338: INFO: Waiting up to 5m0s for pod redis-master-u78kv status to be running
Mar 28 11:19:59.339: INFO: Waiting for pod redis-master-u78kv in namespace 'e2e-tests-kubectl-fs8m1' status to be 'running'(found phase: "Pending", readiness: false) (1.12706ms elapsed)
Mar 28 11:20:01.340: INFO: Waiting for pod redis-master-u78kv in namespace 'e2e-tests-kubectl-fs8m1' status to be 'running'(found phase: "Pending", readiness: false) (2.002640056s elapsed)
Mar 28 11:20:03.342: INFO: Waiting for pod redis-master-u78kv in namespace 'e2e-tests-kubectl-fs8m1' status to be 'running'(found phase: "Pending", readiness: false) (4.003896047s elapsed)
Mar 28 11:20:05.343: INFO: Waiting for pod redis-master-u78kv in namespace 'e2e-tests-kubectl-fs8m1' status to be 'running'(found phase: "Pending", readiness: false) (6.005394778s elapsed)
Mar 28 11:20:07.344: INFO: Waiting for pod redis-master-u78kv in namespace 'e2e-tests-kubectl-fs8m1' status to be 'running'(found phase: "Pending", readiness: false) (8.006773586s elapsed)
Mar 28 11:20:09.346: INFO: Waiting for pod redis-master-u78kv in namespace 'e2e-tests-kubectl-fs8m1' status to be 'running'(found phase: "Pending", readiness: false) (10.008133414s elapsed)
Mar 28 11:20:11.347: INFO: Waiting for pod redis-master-u78kv in namespace 'e2e-tests-kubectl-fs8m1' status to be 'running'(found phase: "Pending", readiness: false) (12.009485743s elapsed)
Mar 28 11:20:13.349: INFO: Found pod 'redis-master-u78kv' on node '127.0.0.1'
Mar 28 11:20:13.349: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-u78kv --namespace=e2e-tests-kubectl-fs8m1'
Mar 28 11:20:13.365: INFO: stderr: ""
Mar 28 11:20:13.365: INFO: stdout: "Name:\t\tredis-master-u78kv\nNamespace:\te2e-tests-kubectl-fs8m1\nNode:\t\t127.0.0.1/127.0.0.1\nStart Time:\tMon, 28 Mar 2016 11:19:59 -0700\nLabels:\t\tapp=redis,role=master\nStatus:\t\tRunning\nIP:\t\t172.17.0.3\nControllers:\tReplicationController/redis-master\nContainers:\n redis-master:\n Container ID:\tdocker://24b115b0d7fdf04e4ff320ebf10de2247adb3dcd76773e2901eb749c244f8b3c\n Image:\t\tredis\n Image ID:\t\tdocker://sha256:4f5f397d4b7ca414891bd2959ef71c83bb7010d095efb2497f0b2f407cb50f0d\n Port:\t\t6379/TCP\n QoS Tier:\n cpu:\t\tBestEffort\n memory:\t\tBestEffort\n State:\t\tRunning\n Started:\t\tMon, 28 Mar 2016 11:20:11 -0700\n Ready:\t\tTrue\n Restart Count:\t0\n Environment Variables:\nConditions:\n Type\t\tStatus\n Ready \tTrue \nVolumes:\n default-token-9p8zy:\n Type:\tSecret (a volume populated by a Secret)\n SecretName:\tdefault-token-9p8zy\nEvents:\n FirstSeen\tLastSeen\tCount\tFrom\t\t\tSubobjectPath\t\t\tType\t\tReason\t\tMessage\n ---------\t--------\t-----\t----\t\t\t-------------\t\t\t--------\t------\t\t-------\n 14s\t\t14s\t\t1\t{default-scheduler }\t\t\t\t\tNormal\t\tScheduled\tSuccessfully assigned redis-master-u78kv to 127.0.0.1\n 13s\t\t13s\t\t1\t{kubelet 127.0.0.1}\tspec.containers{redis-master}\tNormal\t\tPulling\t\tpulling image \"redis\"\n 2s\t\t2s\t\t1\t{kubelet 127.0.0.1}\tspec.containers{redis-master}\tNormal\t\tPulled\t\tSuccessfully pulled image \"redis\"\n 2s\t\t2s\t\t1\t{kubelet 127.0.0.1}\tspec.containers{redis-master}\tNormal\t\tCreated\t\tCreated container with docker id 24b115b0d7fd\n 2s\t\t2s\t\t1\t{kubelet 127.0.0.1}\tspec.containers{redis-master}\tNormal\t\tStarted\t\tStarted container with docker id 24b115b0d7fd"
Mar 28 11:20:13.365: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-fs8m1'
Mar 28 11:20:13.380: INFO: stderr: ""
Mar 28 11:20:13.380: INFO: stdout: "Name:\t\tredis-master\nNamespace:\te2e-tests-kubectl-fs8m1\nImage(s):\tredis\nSelector:\tapp=redis,role=master\nLabels:\t\tapp=redis,role=master\nReplicas:\t1 current / 1 desired\nPods Status:\t1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nNo volumes.\nEvents:\n FirstSeen\tLastSeen\tCount\tFrom\t\t\t\tSubobjectPath\tType\t\tReason\t\t\tMessage\n ---------\t--------\t-----\t----\t\t\t\t-------------\t--------\t------\t\t\t-------\n 14s\t\t14s\t\t1\t{replication-controller }\t\t\tNormal\t\tSuccessfulCreate\tCreated pod: redis-master-u78kv"
Mar 28 11:20:13.380: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-fs8m1'
Mar 28 11:20:13.394: INFO: stderr: ""
Mar 28 11:20:13.394: INFO: stdout: "Name:\t\t\tredis-master\nNamespace:\t\te2e-tests-kubectl-fs8m1\nLabels:\t\t\tapp=redis,role=master\nSelector:\t\tapp=redis,role=master\nType:\t\t\tClusterIP\nIP:\t\t\t10.0.0.59\nPort:\t\t\t<unset>\t6379/TCP\nEndpoints:\t\t172.17.0.3:6379\nSession Affinity:\tNone\nNo events."
Mar 28 11:20:13.396: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config describe node 127.0.0.1'
Mar 28 11:20:13.417: INFO: stderr: ""
Mar 28 11:20:13.417: INFO: stdout: "Name:\t\t\t127.0.0.1\nLabels:\t\t\tkubernetes.io/e2e-0faa8565-f50f-11e5-b1e1-0862662cf845=42,kubernetes.io/hostname=127.0.0.1\nCreationTimestamp:\tMon, 28 Mar 2016 10:54:10 -0700\nPhase:\t\t\t\nConditions:\n Type\t\tStatus\tLastHeartbeatTime\t\t\tLastTransitionTime\t\t\tReason\t\t\t\tMessage\n ----\t\t------\t-----------------\t\t\t------------------\t\t\t------\t\t\t\t-------\n OutOfDisk \tFalse \tMon, 28 Mar 2016 11:20:03 -0700 \tMon, 28 Mar 2016 10:54:10 -0700 \tKubeletHasSufficientDisk \tkubelet has sufficient disk space available\n Ready \tTrue \tMon, 28 Mar 2016 11:20:03 -0700 \tMon, 28 Mar 2016 10:54:10 -0700 \tKubeletReady \t\t\tkubelet is posting ready status\nAddresses:\t127.0.0.1,127.0.0.1\nCapacity:\n cpu:\t\t4\n memory:\t16374176Ki\n pods:\t\t110\nSystem Info:\n Machine ID:\t\t\t12fd47e6d4c54914a88972c6cc8d81de\n System UUID:\t\t\t44EC9C00-D7DA-11DD-9398-0862662CF845\n Boot ID:\t\t\tf0f5728b-196f-47d9-9672-2a11ecdb3d77\n Kernel Version:\t\t4.4.6-300.fc23.x86_64\n OS Image:\t\t\tDebian GNU/Linux 8 (jessie)\n Container Runtime Version:\tdocker://1.10.3\n Kubelet Version:\t\tv1.2.0\n Kube-Proxy Version:\t\tv1.2.0\nExternalID:\t\t\t127.0.0.1\nNon-terminated Pods:\t\t(5 in total)\n Namespace\t\t\tName\t\t\t\tCPU Requests\tCPU Limits\tMemory Requests\tMemory Limits\n ---------\t\t\t----\t\t\t\t------------\t----------\t---------------\t-------------\n default\t\t\tk8s-etcd-127.0.0.1\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\n default\t\t\tk8s-master-127.0.0.1\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\n default\t\t\tk8s-proxy-127.0.0.1\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\n e2e-tests-kubectl-fs8m1\tredis-master-u78kv\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\n kube-system\t\t\tkube-dns-v10-d8w4s\t\t310m (7%)\t310m (7%)\t170Mi (1%)\t170Mi (1%)\nAllocated resources:\n (Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)\n CPU Requests\tCPU Limits\tMemory Requests\tMemory Limits\n ------------\t----------\t---------------\t-------------\n 310m (7%)\t310m (7%)\t170Mi (1%)\t170Mi (1%)\nEvents:\n FirstSeen\tLastSeen\tCount\tFrom\t\t\tSubobjectPath\tType\t\tReason\t\t\tMessage\n ---------\t--------\t-----\t----\t\t\t-------------\t--------\t------\t\t\t-------\n 26m\t\t26m\t\t1\t{kubelet 127.0.0.1}\t\t\tNormal\t\tStarting\t\tStarting kubelet.\n 26m\t\t26m\t\t7\t{kubelet 127.0.0.1}\t\t\tNormal\t\tNodeHasSufficientDisk\tNode 127.0.0.1 status is now: NodeHasSufficientDisk\n 26m\t\t26m\t\t1\t{controllermanager }\t\t\tNormal\t\tRegisteredNode\t\tNode 127.0.0.1 event: Registered Node 127.0.0.1 in NodeController"
Mar 28 11:20:13.417: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-fs8m1'
Mar 28 11:20:13.430: INFO: stderr: ""
Mar 28 11:20:13.430: INFO: stdout: "Name:\te2e-tests-kubectl-fs8m1\nLabels:\te2e-framework=kubectl,e2e-run=39433e52-f50e-11e5-b1e1-0862662cf845\nStatus:\tActive\n\nNo resource quota.\n\nNo resource limits."
[AfterEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:20:13.430: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fs8m1" for this suite.
• [SLOW TEST:36.213 seconds]
Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1145
Kubectl describe
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:671
should check if kubectl describe prints relevant information for rc and pods [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:670
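The describe calls this spec exercises can be replayed by hand against the same objects; a minimal sketch, reusing the kubeconfig, namespace, and resource names that appear in the log above (the pod name redis-master-u78kv comes from the SuccessfulCreate event shown earlier):

# Describe the replication controller, its service, the node, and the namespace,
# mirroring the four commands the test ran above.
kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-fs8m1
kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-fs8m1
kubectl --kubeconfig=/root/.kube/config describe node 127.0.0.1
kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-fs8m1
# The spec also checks the pod; describing it directly would look like:
kubectl --kubeconfig=/root/.kube/config describe pod redis-master-u78kv --namespace=e2e-tests-kubectl-fs8m1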
------------------------------
SSS
------------------------------
Docker Containers
should be able to override the image's default command and arguments [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:71
[BeforeEach] Docker Containers
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:20:33.435: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:20:33.437: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-5s5bx
Mar 28 11:20:33.444: INFO: Get service account default in ns e2e-tests-containers-5s5bx failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:20:35.445: INFO: Service account default in ns e2e-tests-containers-5s5bx with secrets found. (2.007840296s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:20:35.445: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-5s5bx
Mar 28 11:20:35.445: INFO: Service account default in ns e2e-tests-containers-5s5bx with secrets found. (700.202µs)
[BeforeEach] Docker Containers
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:35
[It] should be able to override the image's default command and arguments [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:71
STEP: Creating a pod to test override all
Mar 28 11:20:35.448: INFO: Waiting up to 5m0s for pod client-containers-c3e4a670-f511-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:20:35.449: INFO: No Status.Info for container 'test-container' in pod 'client-containers-c3e4a670-f511-11e5-b1e1-0862662cf845' yet
Mar 28 11:20:35.449: INFO: Waiting for pod client-containers-c3e4a670-f511-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-containers-5s5bx' status to be 'success or failure'(found phase: "Pending", readiness: false) (890.911µs elapsed)
Mar 28 11:20:37.450: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-c3e4a670-f511-11e5-b1e1-0862662cf845' in namespace 'e2e-tests-containers-5s5bx' so far
Mar 28 11:20:37.450: INFO: Waiting for pod client-containers-c3e4a670-f511-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-containers-5s5bx' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.002254973s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod client-containers-c3e4a670-f511-11e5-b1e1-0862662cf845 container test-container: <nil>
STEP: Successfully fetched pod logs:[/ep-2 override arguments]
[AfterEach] Docker Containers
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:20:39.469: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-5s5bx" for this suite.
• [SLOW TEST:11.044 seconds]
Docker Containers
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:72
should be able to override the image's default command and arguments [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:71
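The fetched pod log "[/ep-2 override arguments]" shows the container ran with both the image entrypoint and its arguments replaced. A minimal sketch of the same idea with a stock image (the pod name and busybox image are illustrative, not the test's actual spec): command: overrides the image ENTRYPOINT, args: overrides its CMD.

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f -
apiVersion: v1
kind: Pod
metadata:
  name: override-demo              # illustrative name, not the test's pod
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox                 # illustrative image, not the test image
    command: ["echo"]              # replaces the image ENTRYPOINT
    args: ["override", "arguments"]  # replaces the image CMD
EOF
# Once the container has exited:
kubectl --kubeconfig=/root/.kube/config logs override-demo   # prints: override arguments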
------------------------------
SS
------------------------------
Kubectl client Guestbook application
should create and stop a working application [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:179
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:20:44.480: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:20:44.481: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-d1upx
Mar 28 11:20:44.481: INFO: Get service account default in ns e2e-tests-kubectl-d1upx failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:20:46.483: INFO: Service account default in ns e2e-tests-kubectl-d1upx with secrets found. (2.001567938s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:20:46.483: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-d1upx
Mar 28 11:20:46.483: INFO: Service account default in ns e2e-tests-kubectl-d1upx with secrets found. (738.34µs)
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:120
[BeforeEach] Guestbook application
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:167
[It] should create and stop a working application [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:179
STEP: creating all guestbook components
Mar 28 11:20:46.484: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config create -f ../../examples/guestbook --namespace=e2e-tests-kubectl-d1upx'
Mar 28 11:20:46.820: INFO: stderr: ""
Mar 28 11:20:46.820: INFO: stdout: "replicationcontroller \"frontend\" created\nservice \"frontend\" created\nreplicationcontroller \"redis-master\" created\nservice \"redis-master\" created\nreplicationcontroller \"redis-slave\" created\nservice \"redis-slave\" created"
STEP: validating guestbook app
Mar 28 11:20:46.820: INFO: Waiting for frontend to serve content.
Mar 28 11:20:46.822: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Mar 28 11:20:51.824: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Mar 28 11:20:56.825: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Mar 28 11:21:01.826: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Mar 28 11:21:06.828: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Mar 28 11:21:11.829: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Mar 28 11:21:16.831: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Mar 28 11:21:21.835: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Mar 28 11:21:26.836: INFO: Failed to get response from guestbook. err: no endpoints available for service "frontend", response:
Mar 28 11:21:36.847: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Connection timed out [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('Connection time...', 110)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResource()
#6 /usr/local/lib/php/Predis/Connection/Stre in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />
Mar 28 11:21:46.857: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Connection timed out [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('Connection time...', 110)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResource()
#6 /usr/local/lib/php/Predis/Connection/Stre in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />
Mar 28 11:21:56.868: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Connection timed out [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('Connection time...', 110)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResource()
#6 /usr/local/lib/php/Predis/Connection/Stre in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />
Mar 28 11:22:06.877: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Connection timed out [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('Connection time...', 110)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResource()
#6 /usr/local/lib/php/Predis/Connection/Stre in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />
Mar 28 11:22:16.942: INFO: Failed to get response from guestbook. err: <nil>, response: <br />
<b>Fatal error</b>: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Connection timed out [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection-&gt;onConnectionError('Connection time...', 110)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection-&gt;tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection-&gt;createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection-&gt;connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection-&gt;connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection-&gt;getResource()
#6 /usr/local/lib/php/Predis/Connection/Stre in <b>/usr/local/lib/php/Predis/Connection/AbstractConnection.php</b> on line <b>168</b><br />
Mar 28 11:22:21.947: INFO: Trying to add a new entry to the guestbook.
Mar 28 11:22:21.978: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Mar 28 11:22:21.983: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 -f ../../examples/guestbook --namespace=e2e-tests-kubectl-d1upx'
Mar 28 11:22:28.136: INFO: stderr: ""
Mar 28 11:22:28.136: INFO: stdout: "replicationcontroller \"frontend\" deleted\nservice \"frontend\" deleted\nreplicationcontroller \"redis-master\" deleted\nservice \"redis-master\" deleted\nreplicationcontroller \"redis-slave\" deleted\nservice \"redis-slave\" deleted"
Mar 28 11:22:28.136: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l app=guestbook,tier=frontend --no-headers --namespace=e2e-tests-kubectl-d1upx'
Mar 28 11:22:28.149: INFO: stderr: ""
Mar 28 11:22:28.149: INFO: stdout: ""
Mar 28 11:22:28.149: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods -l app=guestbook,tier=frontend --namespace=e2e-tests-kubectl-d1upx -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 28 11:22:28.161: INFO: stderr: ""
Mar 28 11:22:28.161: INFO: stdout: ""
Mar 28 11:22:28.161: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l app=redis,role=master --no-headers --namespace=e2e-tests-kubectl-d1upx'
Mar 28 11:22:28.173: INFO: stderr: ""
Mar 28 11:22:28.173: INFO: stdout: ""
Mar 28 11:22:28.173: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods -l app=redis,role=master --namespace=e2e-tests-kubectl-d1upx -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 28 11:22:28.189: INFO: stderr: ""
Mar 28 11:22:28.189: INFO: stdout: ""
Mar 28 11:22:28.189: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l app=redis,role=slave --no-headers --namespace=e2e-tests-kubectl-d1upx'
Mar 28 11:22:28.202: INFO: stderr: ""
Mar 28 11:22:28.202: INFO: stdout: ""
Mar 28 11:22:28.202: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods -l app=redis,role=slave --namespace=e2e-tests-kubectl-d1upx -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 28 11:22:28.216: INFO: stderr: ""
Mar 28 11:22:28.216: INFO: stdout: ""
[AfterEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:22:28.216: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-d1upx" for this suite.
• [SLOW TEST:138.742 seconds]
Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1145
Guestbook application
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:180
should create and stop a working application [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:179
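The long retry loop above is simply the frontend waiting for its endpoints and for the redis services to come up; once endpoints exist, the add-entry/retrieve-entry check succeeds. A rough sketch of watching that convergence by hand, using only the namespace, manifests, and label selectors already shown in this run:

# Create the guestbook example, then check whether the frontend service has endpoints yet.
kubectl --kubeconfig=/root/.kube/config create -f ../../examples/guestbook --namespace=e2e-tests-kubectl-d1upx
kubectl --kubeconfig=/root/.kube/config get endpoints frontend --namespace=e2e-tests-kubectl-d1upx
kubectl --kubeconfig=/root/.kube/config get pods -l app=guestbook,tier=frontend --namespace=e2e-tests-kubectl-d1upx
# Tear everything down the same way the test does.
kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 -f ../../examples/guestbook --namespace=e2e-tests-kubectl-d1upx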
------------------------------
SS
------------------------------
Docker Containers
should use the image defaults if command and args are blank [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:41
[BeforeEach] Docker Containers
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:23:03.222: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:23:03.224: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-loy35
Mar 28 11:23:03.225: INFO: Get service account default in ns e2e-tests-containers-loy35 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:23:05.226: INFO: Service account default in ns e2e-tests-containers-loy35 with secrets found. (2.001775956s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:23:05.226: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-loy35
Mar 28 11:23:05.227: INFO: Service account default in ns e2e-tests-containers-loy35 with secrets found. (615.664µs)
[BeforeEach] Docker Containers
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:35
[It] should use the image defaults if command and args are blank [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:41
STEP: Creating a pod to test use defaults
Mar 28 11:23:05.229: INFO: Waiting up to 5m0s for pod client-containers-1d2b6f41-f512-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:23:05.230: INFO: No Status.Info for container 'test-container' in pod 'client-containers-1d2b6f41-f512-11e5-b1e1-0862662cf845' yet
Mar 28 11:23:05.230: INFO: Waiting for pod client-containers-1d2b6f41-f512-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-containers-loy35' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.266082ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod client-containers-1d2b6f41-f512-11e5-b1e1-0862662cf845 container test-container: <nil>
STEP: Successfully fetched pod logs:[/ep default arguments]
[AfterEach] Docker Containers
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:23:07.249: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-loy35" for this suite.
• [SLOW TEST:9.036 seconds]
Docker Containers
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:72
should use the image defaults if command and args are blank [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:41
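Here the pod spec leaves command and args blank, so the container falls back to the image's own ENTRYPOINT/CMD and logs "[/ep default arguments]". A minimal sketch of inspecting an image's baked-in defaults on the node's Docker runtime (busybox is an illustrative image, not the test image):

# Pull an image and print the defaults Kubernetes will use when command:/args: are omitted.
docker pull busybox
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' busybox
# A pod spec that sets neither command: nor args: runs exactly that entrypoint/cmd.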
------------------------------
S
------------------------------
Proxy version v1
should proxy to cadvisor using proxy subresource [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:61
[BeforeEach] version v1
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:23:12.258: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:23:12.260: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-06pv6
Mar 28 11:23:12.260: INFO: Get service account default in ns e2e-tests-proxy-06pv6 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:23:14.267: INFO: Service account default in ns e2e-tests-proxy-06pv6 with secrets found. (2.007108138s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:23:14.267: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-06pv6
Mar 28 11:23:14.268: INFO: Service account default in ns e2e-tests-proxy-06pv6 with secrets found. (683.064µs)
[It] should proxy to cadvisor using proxy subresource [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:61
Mar 28 11:23:14.272: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 2.404046ms)
Mar 28 11:23:14.274: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.932248ms)
Mar 28 11:23:14.275: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.707629ms)
Mar 28 11:23:14.277: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.708237ms)
Mar 28 11:23:14.279: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.781308ms)
Mar 28 11:23:14.281: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.880715ms)
Mar 28 11:23:14.283: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.844332ms)
Mar 28 11:23:14.284: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.766686ms)
Mar 28 11:23:14.286: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.680789ms)
Mar 28 11:23:14.288: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.613091ms)
Mar 28 11:23:14.290: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.753139ms)
Mar 28 11:23:14.291: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.778568ms)
Mar 28 11:23:14.293: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.6586ms)
Mar 28 11:23:14.295: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.707352ms)
Mar 28 11:23:14.296: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.646412ms)
Mar 28 11:23:14.298: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.66559ms)
Mar 28 11:23:14.300: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.650203ms)
Mar 28 11:23:14.301: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.703747ms)
Mar 28 11:23:14.303: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.690679ms)
Mar 28 11:23:14.305: INFO: /api/v1/nodes/127.0.0.1:4194/proxy/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.600452ms)
[AfterEach] version v1
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:23:14.305: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-06pv6" for this suite.
• [SLOW TEST:7.052 seconds]
Proxy
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40
version v1
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:39
should proxy to cadvisor using proxy subresource [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:61
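Each probe above goes through the apiserver's node proxy subresource to the kubelet's cAdvisor port (4194). The same URL can be fetched by hand through kubectl proxy; a minimal sketch using the node name and path exactly as they appear in the log (the local port 8001 is an arbitrary choice):

# Open a local proxy to the apiserver, then request the cAdvisor UI via the node proxy subresource.
kubectl --kubeconfig=/root/.kube/config proxy --port=8001 &
curl http://127.0.0.1:8001/api/v1/nodes/127.0.0.1:4194/proxy/containers/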
------------------------------
SS
------------------------------
Kubectl client Kubectl cluster-info
should check if Kubernetes master services is included in cluster-info [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:571
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:23:19.310: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:23:19.312: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-vurz1
Mar 28 11:23:19.312: INFO: Get service account default in ns e2e-tests-kubectl-vurz1 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:23:21.313: INFO: Service account default in ns e2e-tests-kubectl-vurz1 with secrets found. (2.001802441s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:23:21.313: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-vurz1
Mar 28 11:23:21.314: INFO: Service account default in ns e2e-tests-kubectl-vurz1 with secrets found. (604.665µs)
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:120
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:571
STEP: validating cluster-info
Mar 28 11:23:21.314: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Mar 28 11:23:21.325: INFO: stderr: ""
Mar 28 11:23:21.325: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttp://localhost:8080\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttp://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns\x1b[0m"
[AfterEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:23:21.325: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vurz1" for this suite.
• [SLOW TEST:7.020 seconds]
Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1145
Kubectl cluster-info
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:572
should check if Kubernetes master services is included in cluster-info [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:571
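The escape sequences in the stdout above are just ANSI color codes around the service names; the spec only verifies that the master (and any add-on services such as KubeDNS) are listed. A minimal manual equivalent, grepping for the same "Kubernetes master" string the test expects:

kubectl --kubeconfig=/root/.kube/config cluster-info
kubectl --kubeconfig=/root/.kube/config cluster-info | grep "Kubernetes master"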
------------------------------
S
------------------------------
DNS
should provide DNS for services [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:330
[BeforeEach] DNS
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:23:26.331: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:23:26.332: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-dns-ms9rg
Mar 28 11:23:26.333: INFO: Get service account default in ns e2e-tests-dns-ms9rg failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:23:28.334: INFO: Service account default in ns e2e-tests-dns-ms9rg with secrets found. (2.001523676s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:23:28.334: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-dns-ms9rg
Mar 28 11:23:28.335: INFO: Service account default in ns e2e-tests-dns-ms9rg with secrets found. (722.558µs)
[It] should provide DNS for services [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:330
STEP: Waiting for DNS Service to be Running
Mar 28 11:23:28.336: INFO: Waiting up to 5m0s for pod kube-dns-v10-d8w4s status to be running
Mar 28 11:23:28.338: INFO: Found pod 'kube-dns-v10-d8w4s' on node '127.0.0.1'
STEP: Creating a test headless service
STEP: Running these commands on wheezy:for i in `seq 1 600`; do test -n "$$(dig +notcp +noall +answer +search dns-test-service A)" && echo OK > /results/wheezy_udp@dns-test-service;test -n "$$(dig +tcp +noall +answer +search dns-test-service A)" && echo OK > /results/wheezy_tcp@dns-test-service;test -n "$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-ms9rg A)" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-ms9rg;test -n "$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-ms9rg A)" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-ms9rg;test -n "$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-ms9rg.svc A)" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-ms9rg.svc;test -n "$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-ms9rg.svc A)" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-ms9rg.svc;test -n "$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-ms9rg.svc SRV)" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-ms9rg.svc;test -n "$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-ms9rg.svc SRV)" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-ms9rg.svc;test -n "$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-ms9rg.svc SRV)" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-ms9rg.svc;test -n "$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-ms9rg.svc SRV)" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-ms9rg.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-ms9rg.pod.cluster.local"}');test -n "$$(dig +notcp +noall +answer +search $${podARec} A)" && echo OK > /results/wheezy_udp@PodARecord;test -n "$$(dig +tcp +noall +answer +search $${podARec} A)" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie:for i in `seq 1 600`; do test -n "$$(dig +notcp +noall +answer +search dns-test-service A)" && echo OK > /results/jessie_udp@dns-test-service;test -n "$$(dig +tcp +noall +answer +search dns-test-service A)" && echo OK > /results/jessie_tcp@dns-test-service;test -n "$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-ms9rg A)" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-ms9rg;test -n "$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-ms9rg A)" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-ms9rg;test -n "$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-ms9rg.svc A)" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-ms9rg.svc;test -n "$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-ms9rg.svc A)" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-ms9rg.svc;test -n "$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-ms9rg.svc SRV)" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-ms9rg.svc;test -n "$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-ms9rg.svc SRV)" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-ms9rg.svc;test -n "$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-ms9rg.svc SRV)" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-ms9rg.svc;test -n "$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-ms9rg.svc SRV)" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-ms9rg.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-ms9rg.pod.cluster.local"}');test -n "$$(dig +notcp +noall +answer +search $${podARec} A)" && echo OK > /results/jessie_udp@PodARecord;test -n "$$(dig +tcp +noall +answer +search $${podARec} A)" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
Mar 28 11:23:28.378: INFO: Waiting up to 5m0s for pod dns-test-2af6b1c5-f512-11e5-b1e1-0862662cf845 status to be running
Mar 28 11:23:28.381: INFO: Waiting for pod dns-test-2af6b1c5-f512-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-dns-ms9rg' status to be 'running'(found phase: "Pending", readiness: false) (3.173046ms elapsed)
Mar 28 11:23:30.383: INFO: Waiting for pod dns-test-2af6b1c5-f512-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-dns-ms9rg' status to be 'running'(found phase: "Pending", readiness: false) (2.004854624s elapsed)
Mar 28 11:23:32.385: INFO: Found pod 'dns-test-2af6b1c5-f512-11e5-b1e1-0862662cf845' on node '127.0.0.1'
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 28 11:23:34.414: INFO: DNS probes using dns-test-2af6b1c5-f512-11e5-b1e1-0862662cf845 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] DNS
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:23:34.482: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-ms9rg" for this suite.
• [SLOW TEST:13.169 seconds]
DNS
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:368
should provide DNS for services [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:330
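The probe pod runs the wheezy/jessie dig loops shown above and writes an OK file per name that resolves; the test then reads those files back from the pod. Any single lookup can be reproduced from a pod that has dig installed (as the probe images do); a sketch using the service, namespace, and cluster domain from this run, with the $$-escaping of the templated script reduced to plain shell:

# UDP A lookup and TCP SRV lookup for the headless test service, as in the probe scripts above.
dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-ms9rg.svc A
dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-ms9rg.svc SRV
# Pod A record: <ip-with-dashes>.<namespace>.pod.cluster.local, built the same way the script does.
podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".e2e-tests-dns-ms9rg.pod.cluster.local"}')
dig +notcp +noall +answer +search "$podARec" A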
------------------------------
Service endpoints latency
should not be very high [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:114
[BeforeEach] Service endpoints latency
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:23:39.499: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:23:39.501: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-svc-latency-kiz40
Mar 28 11:23:39.501: INFO: Get service account default in ns e2e-tests-svc-latency-kiz40 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:23:41.502: INFO: Service account default in ns e2e-tests-svc-latency-kiz40 with secrets found. (2.001688964s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:23:41.502: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-svc-latency-kiz40
Mar 28 11:23:41.503: INFO: Service account default in ns e2e-tests-svc-latency-kiz40 with secrets found. (591.039µs)
[It] should not be very high [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:114
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-kiz40
Mar 28 11:23:41.504: INFO: Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-kiz40, replica count: 1
Mar 28 11:23:42.505: INFO: svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 28 11:23:43.505: INFO: svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 28 11:23:43.616: INFO: Created: latency-svc-8n2pt
Mar 28 11:23:43.624: INFO: Got endpoints: latency-svc-8n2pt [19.067769ms]
Mar 28 11:23:43.687: INFO: Created: latency-svc-un5pb
Mar 28 11:23:43.700: INFO: Created: latency-svc-gev8o
Mar 28 11:23:43.701: INFO: Got endpoints: latency-svc-un5pb [36.370949ms]
Mar 28 11:23:43.716: INFO: Got endpoints: latency-svc-gev8o [52.236502ms]
Mar 28 11:23:43.717: INFO: Created: latency-svc-1mqlt
Mar 28 11:23:43.734: INFO: Created: latency-svc-e3h78
Mar 28 11:23:43.735: INFO: Got endpoints: latency-svc-1mqlt [70.025289ms]
Mar 28 11:23:43.749: INFO: Created: latency-svc-gpsxc
Mar 28 11:23:43.750: INFO: Got endpoints: latency-svc-e3h78 [85.682602ms]
Mar 28 11:23:43.771: INFO: Created: latency-svc-cajyc
Mar 28 11:23:43.772: INFO: Got endpoints: latency-svc-gpsxc [107.619344ms]
Mar 28 11:23:43.786: INFO: Created: latency-svc-3q31h
Mar 28 11:23:43.788: INFO: Got endpoints: latency-svc-cajyc [123.535725ms]
Mar 28 11:23:43.805: INFO: Got endpoints: latency-svc-3q31h [139.82979ms]
Mar 28 11:23:43.827: INFO: Created: latency-svc-m3fn8
Mar 28 11:23:43.843: INFO: Got endpoints: latency-svc-m3fn8 [178.616603ms]
Mar 28 11:23:43.844: INFO: Created: latency-svc-80otu
Mar 28 11:23:43.862: INFO: Created: latency-svc-xrqbq
Mar 28 11:23:43.863: INFO: Got endpoints: latency-svc-80otu [199.004543ms]
Mar 28 11:23:43.884: INFO: Got endpoints: latency-svc-xrqbq [219.56465ms]
Mar 28 11:23:43.905: INFO: Created: latency-svc-w5r1w
Mar 28 11:23:43.921: INFO: Created: latency-svc-8mn0h
Mar 28 11:23:43.923: INFO: Got endpoints: latency-svc-w5r1w [258.60831ms]
Mar 28 11:23:43.952: INFO: Got endpoints: latency-svc-8mn0h [287.895992ms]
Mar 28 11:23:43.984: INFO: Created: latency-svc-k0tnb
Mar 28 11:23:44.018: INFO: Created: latency-svc-ny2my
Mar 28 11:23:44.056: INFO: Created: latency-svc-gethe
Mar 28 11:23:44.065: INFO: Created: latency-svc-0t7cs
Mar 28 11:23:44.095: INFO: Created: latency-svc-lzt7y
Mar 28 11:23:44.118: INFO: Created: latency-svc-g7816
Mar 28 11:23:44.126: INFO: Created: latency-svc-xgtyo
Mar 28 11:23:44.149: INFO: Created: latency-svc-7hefg
Mar 28 11:23:44.171: INFO: Created: latency-svc-yzzzz
Mar 28 11:23:44.179: INFO: Created: latency-svc-k5zeu
Mar 28 11:23:44.200: INFO: Created: latency-svc-22vvm
Mar 28 11:23:44.222: INFO: Created: latency-svc-wz90z
Mar 28 11:23:44.236: INFO: Created: latency-svc-ui9b2
Mar 28 11:23:44.252: INFO: Created: latency-svc-qgo0c
Mar 28 11:23:44.268: INFO: Created: latency-svc-cwmja
Mar 28 11:23:44.347: INFO: Got endpoints: latency-svc-k0tnb [682.308961ms]
Mar 28 11:23:44.394: INFO: Created: latency-svc-sw3rz
Mar 28 11:23:44.403: INFO: Got endpoints: latency-svc-ny2my [616.303886ms]
Mar 28 11:23:44.450: INFO: Got endpoints: latency-svc-gethe [785.740858ms]
Mar 28 11:23:44.481: INFO: Created: latency-svc-5bexr
Mar 28 11:23:44.513: INFO: Got endpoints: latency-svc-0t7cs [651.381923ms]
Mar 28 11:23:44.528: INFO: Created: latency-svc-090t6
Mar 28 11:23:44.551: INFO: Got endpoints: latency-svc-lzt7y [887.172232ms]
Mar 28 11:23:44.581: INFO: Created: latency-svc-z1qrp
Mar 28 11:23:44.626: INFO: Created: latency-svc-ywxmr
Mar 28 11:23:44.847: INFO: Got endpoints: latency-svc-g7816 [925.192075ms]
Mar 28 11:23:44.901: INFO: Created: latency-svc-ytr2k
Mar 28 11:23:44.901: INFO: Got endpoints: latency-svc-xgtyo [964.092297ms]
Mar 28 11:23:44.960: INFO: Created: latency-svc-wmbg3
Mar 28 11:23:44.997: INFO: Got endpoints: latency-svc-7hefg [1.044997531s]
Mar 28 11:23:45.045: INFO: Created: latency-svc-gvdfl
Mar 28 11:23:45.053: INFO: Got endpoints: latency-svc-yzzzz [1.069330061s]
Mar 28 11:23:45.098: INFO: Got endpoints: latency-svc-k5zeu [1.080881874s]
Mar 28 11:23:45.125: INFO: Created: latency-svc-ji7bm
Mar 28 11:23:45.177: INFO: Created: latency-svc-2fr1h
Mar 28 11:23:45.397: INFO: Got endpoints: latency-svc-22vvm [1.364291802s]
Mar 28 11:23:45.450: INFO: Created: latency-svc-4zs3a
Mar 28 11:23:45.450: INFO: Got endpoints: latency-svc-wz90z [1.385968034s]
Mar 28 11:23:45.498: INFO: Got endpoints: latency-svc-ui9b2 [1.40297957s]
Mar 28 11:23:45.506: INFO: Created: latency-svc-d5apt
Mar 28 11:23:45.554: INFO: Created: latency-svc-szgwn
Mar 28 11:23:45.555: INFO: Got endpoints: latency-svc-qgo0c [1.429209274s]
Mar 28 11:23:45.604: INFO: Got endpoints: latency-svc-cwmja [1.454926816s]
Mar 28 11:23:45.619: INFO: Created: latency-svc-z5fe9
Mar 28 11:23:45.679: INFO: Created: latency-svc-ly2ed
Mar 28 11:23:45.947: INFO: Got endpoints: latency-svc-sw3rz [1.569125011s]
Mar 28 11:23:45.997: INFO: Created: latency-svc-h01wj
Mar 28 11:23:46.097: INFO: Got endpoints: latency-svc-5bexr [1.646822576s]
Mar 28 11:23:46.140: INFO: Created: latency-svc-w8w5s
Mar 28 11:23:46.147: INFO: Got endpoints: latency-svc-090t6 [1.651004407s]
Mar 28 11:23:46.187: INFO: Created: latency-svc-6g9yy
Mar 28 11:23:46.447: INFO: Got endpoints: latency-svc-z1qrp [1.896052713s]
Mar 28 11:23:46.507: INFO: Got endpoints: latency-svc-ywxmr [1.911226739s]
Mar 28 11:23:46.508: INFO: Created: latency-svc-0k2ys
Mar 28 11:23:46.562: INFO: Created: latency-svc-35z4s
Mar 28 11:23:46.597: INFO: Got endpoints: latency-svc-ytr2k [1.724646264s]
Mar 28 11:23:46.648: INFO: Created: latency-svc-1dedh
Mar 28 11:23:46.797: INFO: Got endpoints: latency-svc-wmbg3 [1.858330172s]
Mar 28 11:23:46.846: INFO: Created: latency-svc-kx3wx
Mar 28 11:23:46.947: INFO: Got endpoints: latency-svc-gvdfl [1.919521039s]
Mar 28 11:23:46.996: INFO: Created: latency-svc-37s3x
Mar 28 11:23:47.147: INFO: Got endpoints: latency-svc-ji7bm [2.04924907s]
Mar 28 11:23:47.200: INFO: Created: latency-svc-s1y7k
Mar 28 11:23:47.247: INFO: Got endpoints: latency-svc-2fr1h [2.105993531s]
Mar 28 11:23:47.301: INFO: Created: latency-svc-8qdrl
Mar 28 11:23:47.347: INFO: Got endpoints: latency-svc-4zs3a [1.913447777s]
Mar 28 11:23:47.399: INFO: Created: latency-svc-b7563
Mar 28 11:23:47.547: INFO: Got endpoints: latency-svc-d5apt [2.056966925s]
Mar 28 11:23:47.595: INFO: Created: latency-svc-8jdf0
Mar 28 11:23:47.697: INFO: Got endpoints: latency-svc-szgwn [2.159796308s]
Mar 28 11:23:47.751: INFO: Created: latency-svc-aiug3
Mar 28 11:23:47.847: INFO: Got endpoints: latency-svc-z5fe9 [2.263740573s]
Mar 28 11:23:47.908: INFO: Created: latency-svc-axq7c
Mar 28 11:23:47.997: INFO: Got endpoints: latency-svc-ly2ed [2.34070277s]
Mar 28 11:23:48.060: INFO: Created: latency-svc-z8yyl
Mar 28 11:23:48.147: INFO: Got endpoints: latency-svc-h01wj [2.165545045s]
Mar 28 11:23:48.198: INFO: Created: latency-svc-xmq9i
Mar 28 11:23:48.297: INFO: Got endpoints: latency-svc-w8w5s [2.172960546s]
Mar 28 11:23:48.346: INFO: Created: latency-svc-z9bk7
Mar 28 11:23:48.447: INFO: Got endpoints: latency-svc-6g9yy [2.277087014s]
Mar 28 11:23:48.502: INFO: Created: latency-svc-pvjrs
Mar 28 11:23:48.597: INFO: Got endpoints: latency-svc-0k2ys [2.111107517s]
Mar 28 11:23:48.654: INFO: Created: latency-svc-2rfua
Mar 28 11:23:48.747: INFO: Got endpoints: latency-svc-35z4s [2.201485831s]
Mar 28 11:23:48.797: INFO: Created: latency-svc-2hwsc
Mar 28 11:23:48.897: INFO: Got endpoints: latency-svc-1dedh [2.264409615s]
Mar 28 11:23:48.945: INFO: Created: latency-svc-axru1
Mar 28 11:23:49.047: INFO: Got endpoints: latency-svc-kx3wx [2.217154155s]
Mar 28 11:23:49.095: INFO: Created: latency-svc-aiilg
Mar 28 11:23:49.197: INFO: Got endpoints: latency-svc-37s3x [2.217660426s]
Mar 28 11:23:49.248: INFO: Created: latency-svc-ugfkp
Mar 28 11:23:49.347: INFO: Got endpoints: latency-svc-s1y7k [2.170327306s]
Mar 28 11:23:49.403: INFO: Created: latency-svc-co9qi
Mar 28 11:23:49.498: INFO: Got endpoints: latency-svc-8qdrl [2.214037041s]
Mar 28 11:23:49.546: INFO: Created: latency-svc-aawpq
Mar 28 11:23:49.648: INFO: Got endpoints: latency-svc-b7563 [2.276644604s]
Mar 28 11:23:49.703: INFO: Created: latency-svc-ml8mk
Mar 28 11:23:49.797: INFO: Got endpoints: latency-svc-8jdf0 [2.217666791s]
Mar 28 11:23:49.854: INFO: Created: latency-svc-chzco
Mar 28 11:23:49.947: INFO: Got endpoints: latency-svc-aiug3 [2.21134006s]
Mar 28 11:23:50.006: INFO: Created: latency-svc-9w1am
Mar 28 11:23:50.097: INFO: Got endpoints: latency-svc-axq7c [2.20548432s]
Mar 28 11:23:50.158: INFO: Created: latency-svc-qst5f
Mar 28 11:23:50.247: INFO: Got endpoints: latency-svc-z8yyl [2.20718617s]
Mar 28 11:23:50.295: INFO: Created: latency-svc-8x5r5
Mar 28 11:23:50.397: INFO: Got endpoints: latency-svc-xmq9i [2.214772182s]
Mar 28 11:23:50.447: INFO: Created: latency-svc-6pfds
Mar 28 11:23:50.547: INFO: Got endpoints: latency-svc-z9bk7 [2.21714356s]
Mar 28 11:23:50.615: INFO: Created: latency-svc-6uz5j
Mar 28 11:23:50.697: INFO: Got endpoints: latency-svc-pvjrs [2.209723438s]
Mar 28 11:23:50.738: INFO: Created: latency-svc-m2j5k
Mar 28 11:23:50.847: INFO: Got endpoints: latency-svc-2rfua [2.215288516s]
Mar 28 11:23:50.898: INFO: Created: latency-svc-7171n
Mar 28 11:23:50.997: INFO: Got endpoints: latency-svc-2hwsc [2.216175938s]
Mar 28 11:23:51.044: INFO: Created: latency-svc-yrtmy
Mar 28 11:23:51.147: INFO: Got endpoints: latency-svc-axru1 [2.22313973s]
Mar 28 11:23:51.186: INFO: Created: latency-svc-ljehk
Mar 28 11:23:51.297: INFO: Got endpoints: latency-svc-aiilg [2.218383583s]
Mar 28 11:23:51.347: INFO: Created: latency-svc-y1c75
Mar 28 11:23:51.447: INFO: Got endpoints: latency-svc-ugfkp [2.221958008s]
Mar 28 11:23:51.507: INFO: Created: latency-svc-ju23l
Mar 28 11:23:51.597: INFO: Got endpoints: latency-svc-co9qi [2.218685882s]
Mar 28 11:23:51.655: INFO: Created: latency-svc-xy231
Mar 28 11:23:51.747: INFO: Got endpoints: latency-svc-aawpq [2.218725926s]
Mar 28 11:23:51.797: INFO: Created: latency-svc-mut7c
Mar 28 11:23:51.897: INFO: Got endpoints: latency-svc-ml8mk [2.209780237s]
Mar 28 11:23:51.946: INFO: Created: latency-svc-pdaa6
Mar 28 11:23:52.047: INFO: Got endpoints: latency-svc-chzco [2.214900489s]
Mar 28 11:23:52.097: INFO: Created: latency-svc-tw7g2
Mar 28 11:23:52.197: INFO: Got endpoints: latency-svc-9w1am [2.206984717s]
Mar 28 11:23:52.243: INFO: Created: latency-svc-03d3e
Mar 28 11:23:52.347: INFO: Got endpoints: latency-svc-qst5f [2.215278364s]
Mar 28 11:23:52.394: INFO: Created: latency-svc-8hmvk
Mar 28 11:23:52.497: INFO: Got endpoints: latency-svc-8x5r5 [2.219392598s]
Mar 28 11:23:52.543: INFO: Created: latency-svc-dku89
Mar 28 11:23:52.647: INFO: Got endpoints: latency-svc-6pfds [2.217088469s]
Mar 28 11:23:52.695: INFO: Created: latency-svc-4blhu
Mar 28 11:23:52.797: INFO: Got endpoints: latency-svc-6uz5j [2.220794574s]
Mar 28 11:23:52.843: INFO: Created: latency-svc-2zeu8
Mar 28 11:23:52.947: INFO: Got endpoints: latency-svc-m2j5k [2.224853996s]
Mar 28 11:23:53.005: INFO: Created: latency-svc-shgve
Mar 28 11:23:53.097: INFO: Got endpoints: latency-svc-7171n [2.219280034s]
Mar 28 11:23:53.162: INFO: Created: latency-svc-o5zmx
Mar 28 11:23:53.247: INFO: Got endpoints: latency-svc-yrtmy [2.219275394s]
Mar 28 11:23:53.302: INFO: Created: latency-svc-wfkix
Mar 28 11:23:53.397: INFO: Got endpoints: latency-svc-ljehk [2.226646153s]
Mar 28 11:23:53.446: INFO: Created: latency-svc-7vgm7
Mar 28 11:23:53.547: INFO: Got endpoints: latency-svc-y1c75 [2.216144964s]
Mar 28 11:23:53.600: INFO: Created: latency-svc-7r0f5
Mar 28 11:23:53.697: INFO: Got endpoints: latency-svc-ju23l [2.20692098s]
Mar 28 11:23:53.744: INFO: Created: latency-svc-ocyh4
Mar 28 11:23:53.847: INFO: Got endpoints: latency-svc-xy231 [2.217879014s]
Mar 28 11:23:53.894: INFO: Created: latency-svc-lwcvv
Mar 28 11:23:53.997: INFO: Got endpoints: latency-svc-mut7c [2.21528734s]
Mar 28 11:23:54.048: INFO: Created: latency-svc-25bur
Mar 28 11:23:54.147: INFO: Got endpoints: latency-svc-pdaa6 [2.217012437s]
Mar 28 11:23:54.200: INFO: Created: latency-svc-bcgjk
Mar 28 11:23:54.297: INFO: Got endpoints: latency-svc-tw7g2 [2.215783019s]
Mar 28 11:23:54.345: INFO: Created: latency-svc-mcopa
Mar 28 11:23:54.447: INFO: Got endpoints: latency-svc-03d3e [2.221196795s]
Mar 28 11:23:54.499: INFO: Created: latency-svc-2gsqv
Mar 28 11:23:54.597: INFO: Got endpoints: latency-svc-8hmvk [2.218523665s]
Mar 28 11:23:54.661: INFO: Created: latency-svc-gk663
Mar 28 11:23:54.747: INFO: Got endpoints: latency-svc-dku89 [2.218810928s]
Mar 28 11:23:54.796: INFO: Created: latency-svc-z9vjn
Mar 28 11:23:54.897: INFO: Got endpoints: latency-svc-4blhu [2.219656191s]
Mar 28 11:23:54.945: INFO: Created: latency-svc-ncbod
Mar 28 11:23:55.047: INFO: Got endpoints: latency-svc-2zeu8 [2.218277824s]
Mar 28 11:23:55.100: INFO: Created: latency-svc-vaqo7
Mar 28 11:23:55.197: INFO: Got endpoints: latency-svc-shgve [2.208271075s]
Mar 28 11:23:55.245: INFO: Created: latency-svc-k3jbz
Mar 28 11:23:55.347: INFO: Got endpoints: latency-svc-o5zmx [2.20588417s]
Mar 28 11:23:55.396: INFO: Created: latency-svc-ae4m4
Mar 28 11:23:55.497: INFO: Got endpoints: latency-svc-wfkix [2.210504605s]
Mar 28 11:23:55.556: INFO: Created: latency-svc-4th93
Mar 28 11:23:55.647: INFO: Got endpoints: latency-svc-7vgm7 [2.217943438s]
Mar 28 11:23:55.693: INFO: Created: latency-svc-796gk
Mar 28 11:23:55.797: INFO: Got endpoints: latency-svc-7r0f5 [2.222944799s]
Mar 28 11:23:55.850: INFO: Created: latency-svc-s3k6j
Mar 28 11:23:55.947: INFO: Got endpoints: latency-svc-ocyh4 [2.219428572s]
Mar 28 11:23:56.008: INFO: Created: latency-svc-qek4v
Mar 28 11:23:56.097: INFO: Got endpoints: latency-svc-lwcvv [2.219727229s]
Mar 28 11:23:56.156: INFO: Created: latency-svc-m3cbg
Mar 28 11:23:56.247: INFO: Got endpoints: latency-svc-25bur [2.21349682s]
Mar 28 11:23:56.297: INFO: Created: latency-svc-fi1j6
Mar 28 11:23:56.397: INFO: Got endpoints: latency-svc-bcgjk [2.218360803s]
Mar 28 11:23:56.446: INFO: Created: latency-svc-uyqq5
Mar 28 11:23:56.547: INFO: Got endpoints: latency-svc-mcopa [2.217537892s]
Mar 28 11:23:56.597: INFO: Created: latency-svc-u8m27
Mar 28 11:23:56.697: INFO: Got endpoints: latency-svc-2gsqv [2.217346092s]
Mar 28 11:23:56.748: INFO: Created: latency-svc-xykwf
Mar 28 11:23:56.847: INFO: Got endpoints: latency-svc-gk663 [2.215724779s]
Mar 28 11:23:56.904: INFO: Created: latency-svc-9qgfw
Mar 28 11:23:56.997: INFO: Got endpoints: latency-svc-z9vjn [2.216154588s]
Mar 28 11:23:57.048: INFO: Created: latency-svc-h4ybv
Mar 28 11:23:57.147: INFO: Got endpoints: latency-svc-ncbod [2.216736729s]
Mar 28 11:23:57.201: INFO: Created: latency-svc-h1d8y
Mar 28 11:23:57.297: INFO: Got endpoints: latency-svc-vaqo7 [2.217957658s]
Mar 28 11:23:57.347: INFO: Created: latency-svc-zr7j5
Mar 28 11:23:57.447: INFO: Got endpoints: latency-svc-k3jbz [2.2172679s]
Mar 28 11:23:57.493: INFO: Created: latency-svc-qboio
Mar 28 11:23:57.597: INFO: Got endpoints: latency-svc-ae4m4 [2.214804162s]
Mar 28 11:23:57.649: INFO: Created: latency-svc-h6w4j
Mar 28 11:23:57.747: INFO: Got endpoints: latency-svc-4th93 [2.212229121s]
Mar 28 11:23:57.795: INFO: Created: latency-svc-sorfh
Mar 28 11:23:57.897: INFO: Got endpoints: latency-svc-796gk [2.219875438s]
Mar 28 11:23:57.959: INFO: Created: latency-svc-n3rgj
Mar 28 11:23:58.047: INFO: Got endpoints: latency-svc-s3k6j [2.217886767s]
Mar 28 11:23:58.098: INFO: Created: latency-svc-yrflt
Mar 28 11:23:58.198: INFO: Got endpoints: latency-svc-qek4v [2.204924123s]
Mar 28 11:23:58.260: INFO: Created: latency-svc-dtrzg
Mar 28 11:23:58.357: INFO: Got endpoints: latency-svc-m3cbg [2.222749714s]
Mar 28 11:23:58.422: INFO: Created: latency-svc-7jm3w
Mar 28 11:23:58.497: INFO: Got endpoints: latency-svc-fi1j6 [2.215715242s]
Mar 28 11:23:58.558: INFO: Created: latency-svc-9a87e
Mar 28 11:23:58.647: INFO: Got endpoints: latency-svc-uyqq5 [2.217021083s]
Mar 28 11:23:58.702: INFO: Created: latency-svc-kg1m3
Mar 28 11:23:58.797: INFO: Got endpoints: latency-svc-u8m27 [2.216321388s]
Mar 28 11:23:58.850: INFO: Created: latency-svc-6dasm
Mar 28 11:23:58.947: INFO: Got endpoints: latency-svc-xykwf [2.215044137s]
Mar 28 11:23:59.003: INFO: Created: latency-svc-c7xg6
Mar 28 11:23:59.097: INFO: Got endpoints: latency-svc-9qgfw [2.208490618s]
Mar 28 11:23:59.157: INFO: Created: latency-svc-1z1o5
Mar 28 11:23:59.247: INFO: Got endpoints: latency-svc-h4ybv [2.21526687s]
Mar 28 11:23:59.299: INFO: Created: latency-svc-snmiz
Mar 28 11:23:59.397: INFO: Got endpoints: latency-svc-h1d8y [2.218175866s]
Mar 28 11:23:59.448: INFO: Created: latency-svc-vyrjk
Mar 28 11:23:59.547: INFO: Got endpoints: latency-svc-zr7j5 [2.216730576s]
Mar 28 11:23:59.602: INFO: Created: latency-svc-0c8fr
Mar 28 11:23:59.697: INFO: Got endpoints: latency-svc-qboio [2.219462418s]
Mar 28 11:23:59.746: INFO: Created: latency-svc-t50ac
Mar 28 11:23:59.847: INFO: Got endpoints: latency-svc-h6w4j [2.223503031s]
Mar 28 11:23:59.895: INFO: Created: latency-svc-tbcbt
Mar 28 11:23:59.997: INFO: Got endpoints: latency-svc-sorfh [2.218452741s]
Mar 28 11:24:00.049: INFO: Created: latency-svc-rc6u5
Mar 28 11:24:00.147: INFO: Got endpoints: latency-svc-n3rgj [2.209460527s]
Mar 28 11:24:00.195: INFO: Created: latency-svc-2ewd9
Mar 28 11:24:00.297: INFO: Got endpoints: latency-svc-yrflt [2.215064328s]
Mar 28 11:24:00.351: INFO: Created: latency-svc-qe8x1
Mar 28 11:24:00.447: INFO: Got endpoints: latency-svc-dtrzg [2.204303275s]
Mar 28 11:24:00.497: INFO: Created: latency-svc-501vg
Mar 28 11:24:00.597: INFO: Got endpoints: latency-svc-7jm3w [2.19777181s]
Mar 28 11:24:00.662: INFO: Created: latency-svc-bax1p
Mar 28 11:24:00.747: INFO: Got endpoints: latency-svc-9a87e [2.212474162s]
Mar 28 11:24:00.804: INFO: Created: latency-svc-fvdag
Mar 28 11:24:00.897: INFO: Got endpoints: latency-svc-kg1m3 [2.210912663s]
Mar 28 11:24:00.949: INFO: Created: latency-svc-am4ep
Mar 28 11:24:01.047: INFO: Got endpoints: latency-svc-6dasm [2.214508556s]
Mar 28 11:24:01.108: INFO: Created: latency-svc-juhki
Mar 28 11:24:01.197: INFO: Got endpoints: latency-svc-c7xg6 [2.210115589s]
Mar 28 11:24:01.247: INFO: Created: latency-svc-i70u5
Mar 28 11:24:01.347: INFO: Got endpoints: latency-svc-1z1o5 [2.216438532s]
Mar 28 11:24:01.404: INFO: Created: latency-svc-qcile
Mar 28 11:24:01.498: INFO: Got endpoints: latency-svc-snmiz [2.216167781s]
Mar 28 11:24:01.550: INFO: Created: latency-svc-1qeua
Mar 28 11:24:01.648: INFO: Got endpoints: latency-svc-vyrjk [2.215713842s]
Mar 28 11:24:01.716: INFO: Created: latency-svc-m4p9j
Mar 28 11:24:01.797: INFO: Got endpoints: latency-svc-0c8fr [2.210785452s]
Mar 28 11:24:01.858: INFO: Created: latency-svc-aw4rl
Mar 28 11:24:01.947: INFO: Got endpoints: latency-svc-t50ac [2.217314902s]
Mar 28 11:24:02.012: INFO: Created: latency-svc-g7z30
Mar 28 11:24:02.098: INFO: Got endpoints: latency-svc-tbcbt [2.226619749s]
Mar 28 11:24:02.144: INFO: Created: latency-svc-fs1hw
Mar 28 11:24:02.247: INFO: Got endpoints: latency-svc-rc6u5 [2.2195927s]
Mar 28 11:24:02.313: INFO: Created: latency-svc-vqwjt
Mar 28 11:24:02.398: INFO: Got endpoints: latency-svc-2ewd9 [2.219488655s]
Mar 28 11:24:02.446: INFO: Created: latency-svc-xf8pz
Mar 28 11:24:02.547: INFO: Got endpoints: latency-svc-qe8x1 [2.215804553s]
Mar 28 11:24:02.617: INFO: Created: latency-svc-gnz2d
Mar 28 11:24:02.697: INFO: Got endpoints: latency-svc-501vg [2.216646654s]
Mar 28 11:24:02.754: INFO: Created: latency-svc-k0hs2
Mar 28 11:24:02.847: INFO: Got endpoints: latency-svc-bax1p [2.208327434s]
Mar 28 11:24:02.904: INFO: Created: latency-svc-s0qkd
Mar 28 11:24:02.997: INFO: Got endpoints: latency-svc-fvdag [2.208629563s]
Mar 28 11:24:03.062: INFO: Created: latency-svc-0awv9
Mar 28 11:24:03.148: INFO: Got endpoints: latency-svc-am4ep [2.213549216s]
Mar 28 11:24:03.194: INFO: Created: latency-svc-t0m18
Mar 28 11:24:03.297: INFO: Got endpoints: latency-svc-juhki [2.20517356s]
Mar 28 11:24:03.353: INFO: Created: latency-svc-xvh91
Mar 28 11:24:03.447: INFO: Got endpoints: latency-svc-i70u5 [2.218845118s]
Mar 28 11:24:03.507: INFO: Created: latency-svc-cci6f
Mar 28 11:24:03.597: INFO: Got endpoints: latency-svc-qcile [2.209147363s]
Mar 28 11:24:03.660: INFO: Created: latency-svc-z6n9f
Mar 28 11:24:03.747: INFO: Got endpoints: latency-svc-1qeua [2.218847517s]
Mar 28 11:24:03.804: INFO: Created: latency-svc-jntjv
Mar 28 11:24:03.897: INFO: Got endpoints: latency-svc-m4p9j [2.214785545s]
Mar 28 11:24:03.944: INFO: Created: latency-svc-rfvxw
Mar 28 11:24:04.047: INFO: Got endpoints: latency-svc-aw4rl [2.206844956s]
Mar 28 11:24:04.095: INFO: Created: latency-svc-4pqxm
Mar 28 11:24:04.197: INFO: Got endpoints: latency-svc-g7z30 [2.206133489s]
Mar 28 11:24:04.245: INFO: Created: latency-svc-mtx05
Mar 28 11:24:04.347: INFO: Got endpoints: latency-svc-fs1hw [2.22113582s]
Mar 28 11:24:04.399: INFO: Created: latency-svc-89cmi
Mar 28 11:24:04.497: INFO: Got endpoints: latency-svc-vqwjt [2.206436784s]
Mar 28 11:24:04.544: INFO: Created: latency-svc-qoxl6
Mar 28 11:24:04.648: INFO: Got endpoints: latency-svc-xf8pz [2.218614456s]
Mar 28 11:24:04.705: INFO: Created: latency-svc-v8f6e
Mar 28 11:24:04.797: INFO: Got endpoints: latency-svc-gnz2d [2.214450706s]
Mar 28 11:24:04.850: INFO: Created: latency-svc-h1f0i
Mar 28 11:24:04.947: INFO: Got endpoints: latency-svc-k0hs2 [2.210804892s]
Mar 28 11:24:05.004: INFO: Created: latency-svc-3k6c0
Mar 28 11:24:05.097: INFO: Got endpoints: latency-svc-s0qkd [2.208829459s]
Mar 28 11:24:05.153: INFO: Created: latency-svc-i3pfn
Mar 28 11:24:05.247: INFO: Got endpoints: latency-svc-0awv9 [2.221017997s]
Mar 28 11:24:05.305: INFO: Created: latency-svc-7zeoq
Mar 28 11:24:05.397: INFO: Got endpoints: latency-svc-t0m18 [2.218344629s]
Mar 28 11:24:05.453: INFO: Created: latency-svc-rixvv
Mar 28 11:24:05.547: INFO: Got endpoints: latency-svc-xvh91 [2.209597187s]
Mar 28 11:24:05.604: INFO: Created: latency-svc-ounon
Mar 28 11:24:05.697: INFO: Got endpoints: latency-svc-cci6f [2.208302538s]
Mar 28 11:24:05.744: INFO: Created: latency-svc-0r2r9
Mar 28 11:24:05.847: INFO: Got endpoints: latency-svc-z6n9f [2.212298589s]
Mar 28 11:24:05.891: INFO: Created: latency-svc-2uzg9
Mar 28 11:24:05.997: INFO: Got endpoints: latency-svc-jntjv [2.209163976s]
Mar 28 11:24:06.040: INFO: Created: latency-svc-dpxrk
Mar 28 11:24:06.147: INFO: Got endpoints: latency-svc-rfvxw [2.219034652s]
Mar 28 11:24:06.196: INFO: Created: latency-svc-4zpcf
Mar 28 11:24:06.297: INFO: Got endpoints: latency-svc-4pqxm [2.21930982s]
Mar 28 11:24:06.340: INFO: Created: latency-svc-3xg5a
Mar 28 11:24:06.447: INFO: Got endpoints: latency-svc-mtx05 [2.218015016s]
Mar 28 11:24:06.494: INFO: Created: latency-svc-6n66l
Mar 28 11:24:06.597: INFO: Got endpoints: latency-svc-89cmi [2.215301659s]
Mar 28 11:24:06.649: INFO: Created: latency-svc-w1buk
Mar 28 11:24:06.747: INFO: Got endpoints: latency-svc-qoxl6 [2.219336401s]
Mar 28 11:24:06.805: INFO: Created: latency-svc-5pm1l
Mar 28 11:24:06.897: INFO: Got endpoints: latency-svc-v8f6e [2.214489199s]
Mar 28 11:24:06.958: INFO: Created: latency-svc-m2cdg
Mar 28 11:24:07.097: INFO: Got endpoints: latency-svc-h1f0i [2.266760581s]
Mar 28 11:24:07.153: INFO: Created: latency-svc-8a304
Mar 28 11:24:07.247: INFO: Got endpoints: latency-svc-3k6c0 [2.258881988s]
Mar 28 11:24:07.302: INFO: Created: latency-svc-x7nao
Mar 28 11:24:07.397: INFO: Got endpoints: latency-svc-i3pfn [2.265224418s]
Mar 28 11:24:07.453: INFO: Created: latency-svc-dbpf8
Mar 28 11:24:07.547: INFO: Got endpoints: latency-svc-7zeoq [2.270262958s]
Mar 28 11:24:07.602: INFO: Created: latency-svc-wyiqd
Mar 28 11:24:07.697: INFO: Got endpoints: latency-svc-rixvv [2.261573179s]
Mar 28 11:24:07.739: INFO: Created: latency-svc-nr11e
Mar 28 11:24:07.847: INFO: Got endpoints: latency-svc-ounon [2.259511618s]
Mar 28 11:24:07.895: INFO: Created: latency-svc-3cj5k
Mar 28 11:24:07.997: INFO: Got endpoints: latency-svc-0r2r9 [2.269208031s]
Mar 28 11:24:08.037: INFO: Created: latency-svc-s1uhg
Mar 28 11:24:08.047: INFO: Got endpoints: latency-svc-2uzg9 [2.169599008s]
Mar 28 11:24:08.099: INFO: Created: latency-svc-fp8tj
Mar 28 11:24:08.147: INFO: Got endpoints: latency-svc-dpxrk [2.121123861s]
Mar 28 11:24:08.214: INFO: Got endpoints: latency-svc-6n66l [1.735117675s]
Mar 28 11:24:08.214: INFO: Created: latency-svc-672le
Mar 28 11:24:08.280: INFO: Created: latency-svc-8i7ua
Mar 28 11:24:08.347: INFO: Got endpoints: latency-svc-w1buk [1.717672618s]
Mar 28 11:24:08.394: INFO: Created: latency-svc-6sy0m
Mar 28 11:24:08.497: INFO: Got endpoints: latency-svc-5pm1l [1.708087025s]
Mar 28 11:24:08.548: INFO: Got endpoints: latency-svc-m2cdg [1.606027704s]
Mar 28 11:24:08.548: INFO: Created: latency-svc-5dcbz
Mar 28 11:24:08.610: INFO: Created: latency-svc-tbho5
Mar 28 11:24:08.697: INFO: Got endpoints: latency-svc-3xg5a [2.371757024s]
Mar 28 11:24:08.742: INFO: Created: latency-svc-8le2k
Mar 28 11:24:08.847: INFO: Got endpoints: latency-svc-4zpcf [2.66766375s]
Mar 28 11:24:08.892: INFO: Created: latency-svc-wst0t
Mar 28 11:24:09.447: INFO: Got endpoints: latency-svc-8a304 [2.31552544s]
Mar 28 11:24:09.508: INFO: Created: latency-svc-tpuss
Mar 28 11:24:09.547: INFO: Got endpoints: latency-svc-x7nao [2.265131537s]
Mar 28 11:24:09.594: INFO: Created: latency-svc-8k9hf
Mar 28 11:24:09.647: INFO: Got endpoints: latency-svc-dbpf8 [2.217358151s]
Mar 28 11:24:09.847: INFO: Got endpoints: latency-svc-wyiqd [2.265258297s]
Mar 28 11:24:09.997: INFO: Got endpoints: latency-svc-nr11e [2.274147133s]
Mar 28 11:24:10.147: INFO: Got endpoints: latency-svc-3cj5k [2.268087799s]
Mar 28 11:24:10.297: INFO: Got endpoints: latency-svc-s1uhg [2.276178048s]
Mar 28 11:24:10.447: INFO: Got endpoints: latency-svc-fp8tj [2.374002389s]
Mar 28 11:24:10.597: INFO: Got endpoints: latency-svc-672le [2.419680868s]
Mar 28 11:24:10.747: INFO: Got endpoints: latency-svc-8i7ua [2.48847198s]
Mar 28 11:24:10.897: INFO: Got endpoints: latency-svc-6sy0m [2.518892858s]
Mar 28 11:24:11.047: INFO: Got endpoints: latency-svc-5dcbz [2.520347028s]
Mar 28 11:24:11.197: INFO: Got endpoints: latency-svc-tbho5 [2.610979311s]
Mar 28 11:24:11.347: INFO: Got endpoints: latency-svc-8le2k [2.620183738s]
Mar 28 11:24:11.497: INFO: Got endpoints: latency-svc-wst0t [2.619836023s]
Mar 28 11:24:11.647: INFO: Got endpoints: latency-svc-tpuss [2.154370785s]
Mar 28 11:24:11.797: INFO: Got endpoints: latency-svc-8k9hf [2.219125437s]
STEP: deleting replication controller svc-latency-rc in namespace e2e-tests-svc-latency-kiz40
Mar 28 11:24:13.850: INFO: Deleting RC svc-latency-rc took: 2.014680088s
Mar 28 11:24:13.850: INFO: Terminating RC svc-latency-rc pods took: 44.043µs
Mar 28 11:24:13.851: INFO: Latencies: [36.370949ms 52.236502ms 70.025289ms 85.682602ms 107.619344ms 123.535725ms 139.82979ms 178.616603ms 199.004543ms 219.56465ms 258.60831ms 287.895992ms 616.303886ms 651.381923ms 682.308961ms 785.740858ms 887.172232ms 925.192075ms 964.092297ms 1.044997531s 1.069330061s 1.080881874s 1.364291802s 1.385968034s 1.40297957s 1.429209274s 1.454926816s 1.569125011s 1.606027704s 1.646822576s 1.651004407s 1.708087025s 1.717672618s 1.724646264s 1.735117675s 1.858330172s 1.896052713s 1.911226739s 1.913447777s 1.919521039s 2.04924907s 2.056966925s 2.105993531s 2.111107517s 2.121123861s 2.154370785s 2.159796308s 2.165545045s 2.169599008s 2.170327306s 2.172960546s 2.19777181s 2.201485831s 2.204303275s 2.204924123s 2.20517356s 2.20548432s 2.20588417s 2.206133489s 2.206436784s 2.206844956s 2.20692098s 2.206984717s 2.20718617s 2.208271075s 2.208302538s 2.208327434s 2.208490618s 2.208629563s 2.208829459s 2.209147363s 2.209163976s 2.209460527s 2.209597187s 2.209723438s 2.209780237s 2.210115589s 2.210504605s 2.210785452s 2.210804892s 2.210912663s 2.21134006s 2.212229121s 2.212298589s 2.212474162s 2.21349682s 2.213549216s 2.214037041s 2.214450706s 2.214489199s 2.214508556s 2.214772182s 2.214785545s 2.214804162s 2.214900489s 2.215044137s 2.215064328s 2.21526687s 2.215278364s 2.21528734s 2.215288516s 2.215301659s 2.215713842s 2.215715242s 2.215724779s 2.215783019s 2.215804553s 2.216144964s 2.216154588s 2.216167781s 2.216175938s 2.216321388s 2.216438532s 2.216646654s 2.216730576s 2.216736729s 2.217012437s 2.217021083s 2.217088469s 2.21714356s 2.217154155s 2.2172679s 2.217314902s 2.217346092s 2.217358151s 2.217537892s 2.217660426s 2.217666791s 2.217879014s 2.217886767s 2.217943438s 2.217957658s 2.218015016s 2.218175866s 2.218277824s 2.218344629s 2.218360803s 2.218383583s 2.218452741s 2.218523665s 2.218614456s 2.218685882s 2.218725926s 2.218810928s 2.218845118s 2.218847517s 2.219034652s 2.219125437s 2.219275394s 2.219280034s 2.21930982s 2.219336401s 2.219392598s 2.219428572s 2.219462418s 2.219488655s 2.2195927s 2.219656191s 2.219727229s 2.219875438s 2.220794574s 2.221017997s 2.22113582s 2.221196795s 2.221958008s 2.222749714s 2.222944799s 2.22313973s 2.223503031s 2.224853996s 2.226619749s 2.226646153s 2.258881988s 2.259511618s 2.261573179s 2.263740573s 2.264409615s 2.265131537s 2.265224418s 2.265258297s 2.266760581s 2.268087799s 2.269208031s 2.270262958s 2.274147133s 2.276178048s 2.276644604s 2.277087014s 2.31552544s 2.34070277s 2.371757024s 2.374002389s 2.419680868s 2.48847198s 2.518892858s 2.520347028s 2.610979311s 2.619836023s 2.620183738s 2.66766375s]
Mar 28 11:24:13.851: INFO: 50 %ile: 2.215288516s
Mar 28 11:24:13.851: INFO: 90 %ile: 2.266760581s
Mar 28 11:24:13.851: INFO: 99 %ile: 2.620183738s
Mar 28 11:24:13.851: INFO: Total sample count: 200
[AfterEach] Service endpoints latency
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:24:13.851: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-kiz40" for this suite.
• [SLOW TEST:39.362 seconds]
Service endpoints latency
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:115
should not be very high [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:114
------------------------------
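For reference, a minimal Go sketch, not the suite's actual code, of how summary lines like "50 %ile: 2.215288516s" above can be derived from the collected samples; the sample values below are stand-ins, not the 200 measurements from this run.

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns an approximate p-quantile (0 < p <= 1) of samples by
// sorting a copy and indexing into it; it assumes samples is non-empty.
func percentile(samples []time.Duration, p float64) time.Duration {
	sorted := append([]time.Duration(nil), samples...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	idx := int(float64(len(sorted))*p+0.5) - 1
	if idx < 0 {
		idx = 0
	}
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	samples := []time.Duration{ // stand-in values only
		36 * time.Millisecond,
		2215 * time.Millisecond,
		2266 * time.Millisecond,
		2620 * time.Millisecond,
	}
	for _, p := range []float64{0.50, 0.90, 0.99} {
		fmt.Printf("%.0f %%ile: %v\n", p*100, percentile(samples, p))
	}
}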
SS
------------------------------
Pods
should *not* be restarted with a /healthz http liveness probe [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:783
[BeforeEach] Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:24:18.862: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:24:18.863: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-fwomj
Mar 28 11:24:18.864: INFO: Get service account default in ns e2e-tests-pods-fwomj failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:24:20.865: INFO: Service account default in ns e2e-tests-pods-fwomj with secrets found. (2.001589891s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:24:20.865: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-fwomj
Mar 28 11:24:20.866: INFO: Service account default in ns e2e-tests-pods-fwomj with secrets found. (725.326µs)
[It] should *not* be restarted with a /healthz http liveness probe [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:783
STEP: Creating pod liveness-http in namespace e2e-tests-pods-fwomj
Mar 28 11:24:20.869: INFO: Waiting up to 5m0s for pod liveness-http status to be !pending
Mar 28 11:24:20.870: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-fwomj' status to be '!pending'(found phase: "Pending", readiness: false) (1.042962ms elapsed)
Mar 28 11:24:22.872: INFO: Saw pod 'liveness-http' in namespace 'e2e-tests-pods-fwomj' out of pending state (found '"Running"')
Mar 28 11:24:22.872: INFO: Started pod liveness-http in namespace e2e-tests-pods-fwomj
STEP: checking the pod's current state and verifying that restartCount is present
Mar 28 11:24:22.873: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:26:22.971: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-fwomj" for this suite.
• [SLOW TEST:129.124 seconds]
Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:1222
should *not* be restarted with a /healthz http liveness probe [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:783
------------------------------
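A minimal sketch of an HTTP liveness probe like the one the liveness-http pod above is built around; this assumes k8s.io/api/core/v1 from Kubernetes 1.23 or later (embedded ProbeHandler field), and the path, port, and thresholds here are illustrative rather than read from the test.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// An HTTP GET liveness probe; the kubelet only restarts the container if
	// this endpoint starts failing, which the test above expects not to happen.
	probe := &v1.Probe{
		ProbeHandler: v1.ProbeHandler{
			HTTPGet: &v1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
		},
		InitialDelaySeconds: 15, // illustrative values
		FailureThreshold:    3,
	}
	fmt.Println(probe.HTTPGet.Path) // prints: /healthz
}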
Kubectl client Kubectl expose
should create services for rc [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:740
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:26:27.986: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:26:27.988: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-nlybd
Mar 28 11:26:27.988: INFO: Get service account default in ns e2e-tests-kubectl-nlybd failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:26:29.989: INFO: Service account default in ns e2e-tests-kubectl-nlybd with secrets found. (2.00161433s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:26:29.989: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-nlybd
Mar 28 11:26:29.990: INFO: Service account default in ns e2e-tests-kubectl-nlybd with secrets found. (756.037µs)
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:120
[It] should create services for rc [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:740
STEP: creating Redis RC
Mar 28 11:26:29.990: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config create -f ../../examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-nlybd'
Mar 28 11:26:30.035: INFO: stderr: ""
Mar 28 11:26:30.035: INFO: stdout: "replicationcontroller \"redis-master\" created"
Mar 28 11:26:32.037: INFO: Waiting up to 5m0s for pod redis-master-i1vaf status to be running
Mar 28 11:26:32.038: INFO: Waiting for pod redis-master-i1vaf in namespace 'e2e-tests-kubectl-nlybd' status to be 'running'(found phase: "Pending", readiness: false) (948.314µs elapsed)
Mar 28 11:26:34.040: INFO: Found pod 'redis-master-i1vaf' on node '127.0.0.1'
Mar 28 11:26:34.040: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-i1vaf redis-master --namespace=e2e-tests-kubectl-nlybd'
Mar 28 11:26:34.055: INFO: stderr: ""
Mar 28 11:26:34.055: INFO: stdout: "1:C 28 Mar 18:26:33.532 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf\n _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.0.7 (00000000/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 28 Mar 18:26:33.533 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 28 Mar 18:26:33.533 # Server started, Redis version 3.0.7\n1:M 28 Mar 18:26:33.533 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 28 Mar 18:26:33.533 * The server is now ready to accept connections on port 6379"
STEP: exposing RC
Mar 28 11:26:34.055: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-nlybd'
Mar 28 11:26:34.080: INFO: stderr: ""
Mar 28 11:26:34.080: INFO: stdout: "service \"rm2\" exposed"
Mar 28 11:26:34.081: INFO: Service rm2 in namespace e2e-tests-kubectl-nlybd found.
STEP: exposing service
Mar 28 11:26:36.083: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-nlybd'
Mar 28 11:26:36.106: INFO: stderr: ""
Mar 28 11:26:36.106: INFO: stdout: "service \"rm3\" exposed"
Mar 28 11:26:36.107: INFO: Service rm3 in namespace e2e-tests-kubectl-nlybd found.
[AfterEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:26:38.108: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-nlybd" for this suite.
• [SLOW TEST:30.128 seconds]
Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1145
Kubectl expose
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:741
should create services for rc [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:740
------------------------------
EmptyDir volumes
should support (non-root,0644,default) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:104
[BeforeEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:26:58.114: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:26:58.116: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-2uhj7
Mar 28 11:26:58.117: INFO: Get service account default in ns e2e-tests-emptydir-2uhj7 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:27:00.118: INFO: Service account default in ns e2e-tests-emptydir-2uhj7 with secrets found. (2.001741328s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:27:00.118: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-2uhj7
Mar 28 11:27:00.118: INFO: Service account default in ns e2e-tests-emptydir-2uhj7 with secrets found. (684.979µs)
[It] should support (non-root,0644,default) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:104
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 28 11:27:00.121: INFO: Waiting up to 5m0s for pod pod-a92d19ed-f512-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:27:00.121: INFO: No Status.Info for container 'test-container' in pod 'pod-a92d19ed-f512-11e5-b1e1-0862662cf845' yet
Mar 28 11:27:00.121: INFO: Waiting for pod pod-a92d19ed-f512-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-emptydir-2uhj7' status to be 'success or failure'(found phase: "Pending", readiness: false) (866.517µs elapsed)
Mar 28 11:27:02.123: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a92d19ed-f512-11e5-b1e1-0862662cf845' in namespace 'e2e-tests-emptydir-2uhj7' so far
Mar 28 11:27:02.123: INFO: Waiting for pod pod-a92d19ed-f512-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-emptydir-2uhj7' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.002269054s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-a92d19ed-f512-11e5-b1e1-0862662cf845 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-r--r--
[AfterEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:27:04.142: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2uhj7" for this suite.
• [SLOW TEST:11.039 seconds]
EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:113
should support (non-root,0644,default) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:104
------------------------------
SS
------------------------------
Pods
should be updated [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:470
[BeforeEach] Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:27:09.153: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:27:09.154: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-gzz2q
Mar 28 11:27:09.155: INFO: Get service account default in ns e2e-tests-pods-gzz2q failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:27:11.156: INFO: Service account default in ns e2e-tests-pods-gzz2q with secrets found. (2.001890211s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:27:11.156: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-gzz2q
Mar 28 11:27:11.157: INFO: Service account default in ns e2e-tests-pods-gzz2q with secrets found. (846.75µs)
[It] should be updated [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:470
STEP: creating the pod
STEP: submitting the pod to kubernetes
Mar 28 11:27:11.159: INFO: Waiting up to 5m0s for pod pod-update-afc16a91-f512-11e5-b1e1-0862662cf845 status to be running
Mar 28 11:27:11.160: INFO: Waiting for pod pod-update-afc16a91-f512-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-pods-gzz2q' status to be 'running'(found phase: "Pending", readiness: false) (1.179137ms elapsed)
Mar 28 11:27:13.162: INFO: Waiting for pod pod-update-afc16a91-f512-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-pods-gzz2q' status to be 'running'(found phase: "Pending", readiness: false) (2.002609055s elapsed)
Mar 28 11:27:15.163: INFO: Found pod 'pod-update-afc16a91-f512-11e5-b1e1-0862662cf845' on node '127.0.0.1'
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 28 11:27:15.666: INFO: Conflicting update to pod, re-get and re-update: pods "pod-update-afc16a91-f512-11e5-b1e1-0862662cf845" cannot be updated: the object has been modified; please apply your changes to the latest version and try again
STEP: updating the pod
Mar 28 11:27:16.168: INFO: Successfully updated pod
Mar 28 11:27:16.168: INFO: Waiting up to 5m0s for pod pod-update-afc16a91-f512-11e5-b1e1-0862662cf845 status to be running
Mar 28 11:27:16.178: INFO: Found pod 'pod-update-afc16a91-f512-11e5-b1e1-0862662cf845' on node '127.0.0.1'
STEP: verifying the updated pod is in kubernetes
Mar 28 11:27:16.179: INFO: Pod update OK
STEP: deleting the pod
[AfterEach] Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:27:16.190: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-gzz2q" for this suite.
• [SLOW TEST:12.047 seconds]
Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:1222
should be updated [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:470
------------------------------
SS
------------------------------
Downward API
should provide pod name and namespace as env vars [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downward_api.go:60
[BeforeEach] Downward API
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:27:21.200: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:27:21.201: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-7am59
Mar 28 11:27:21.202: INFO: Get service account default in ns e2e-tests-downward-api-7am59 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:27:23.203: INFO: Service account default in ns e2e-tests-downward-api-7am59 with secrets found. (2.001547961s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:27:23.203: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-7am59
Mar 28 11:27:23.203: INFO: Service account default in ns e2e-tests-downward-api-7am59 with secrets found. (606.543µs)
[It] should provide pod name and namespace as env vars [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downward_api.go:60
STEP: Creating a pod to test downward api env vars
Mar 28 11:27:23.205: INFO: Waiting up to 5m0s for pod downward-api-b6ef9510-f512-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:27:23.206: INFO: No Status.Info for container 'dapi-container' in pod 'downward-api-b6ef9510-f512-11e5-b1e1-0862662cf845' yet
Mar 28 11:27:23.206: INFO: Waiting for pod downward-api-b6ef9510-f512-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-downward-api-7am59' status to be 'success or failure'(found phase: "Pending", readiness: false) (950.857µs elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod downward-api-b6ef9510-f512-11e5-b1e1-0862662cf845 container dapi-container: <nil>
STEP: Successfully fetched pod logs:KUBERNETES_PORT=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT=443
HOSTNAME=downward-api-b6ef9510-f512-11e5-b1e1-0862662cf845
SHLVL=1
HOME=/root
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
POD_NAME=downward-api-b6ef9510-f512-11e5-b1e1-0862662cf845
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
POD_NAMESPACE=e2e-tests-downward-api-7am59
PWD=/
KUBERNETES_SERVICE_HOST=10.0.0.1
[AfterEach] Downward API
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:27:25.224: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-7am59" for this suite.
• [SLOW TEST:9.033 seconds]
Downward API
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downward_api.go:82
should provide pod name and namespace as env vars [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downward_api.go:60
------------------------------
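The POD_NAME and POD_NAMESPACE values above come from the downward API; a minimal sketch of how such env vars are declared, assuming the k8s.io/api/core/v1 types (the surrounding pod spec is omitted).

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Downward API env vars as consumed by a container like dapi-container above:
	// each value is resolved from a field of the pod object at runtime.
	env := []v1.EnvVar{
		{Name: "POD_NAME", ValueFrom: &v1.EnvVarSource{
			FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.name"},
		}},
		{Name: "POD_NAMESPACE", ValueFrom: &v1.EnvVarSource{
			FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
		}},
	}
	for _, e := range env {
		fmt.Printf("%s <- fieldRef %s\n", e.Name, e.ValueFrom.FieldRef.FieldPath)
	}
}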
SSSSS
------------------------------
Networking
should provide unchanging, static URL paths for kubernetes api services [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:75
[BeforeEach] Networking
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:27:30.233: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:27:30.235: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-qn85s
Mar 28 11:27:30.235: INFO: Get service account default in ns e2e-tests-nettest-qn85s failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:27:32.236: INFO: Service account default in ns e2e-tests-nettest-qn85s with secrets found. (2.00157307s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:27:32.236: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-qn85s
Mar 28 11:27:32.237: INFO: Service account default in ns e2e-tests-nettest-qn85s with secrets found. (691.022µs)
[BeforeEach] Networking
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
STEP: Executing a successful http request from the external internet
[It] should provide unchanging, static URL paths for kubernetes api services [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:75
STEP: testing: /validate
STEP: testing: /healthz
[AfterEach] Networking
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:27:32.374: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-nettest-qn85s" for this suite.
• [SLOW TEST:7.146 seconds]
Networking
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:258
should provide unchanging, static URL paths for kubernetes api services [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:75
------------------------------
S
------------------------------
ConfigMap
should be consumable from pods in volume with mappings [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:176
[BeforeEach] ConfigMap
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:27:37.379: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:27:37.380: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-configmap-uf3r7
Mar 28 11:27:37.381: INFO: Get service account default in ns e2e-tests-configmap-uf3r7 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:27:39.382: INFO: Service account default in ns e2e-tests-configmap-uf3r7 with secrets found. (2.001811172s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:27:39.382: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-configmap-uf3r7
Mar 28 11:27:39.383: INFO: Service account default in ns e2e-tests-configmap-uf3r7 with secrets found. (734.474µs)
[It] should be consumable from pods in volume with mappings [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:176
STEP: Creating configMap with name configmap-test-volume-map-c094617a-f512-11e5-b1e1-0862662cf845
STEP: Creating a pod to test consume configMaps
Mar 28 11:27:39.394: INFO: Waiting up to 5m0s for pod pod-configmaps-c0948d7a-f512-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:27:39.395: INFO: No Status.Info for container 'configmap-volume-test' in pod 'pod-configmaps-c0948d7a-f512-11e5-b1e1-0862662cf845' yet
Mar 28 11:27:39.395: INFO: Waiting for pod pod-configmaps-c0948d7a-f512-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-configmap-uf3r7' status to be 'success or failure'(found phase: "Pending", readiness: false) (915.348µs elapsed)
Mar 28 11:27:41.397: INFO: Unexpected error occurred: pod 'pod-configmaps-c0948d7a-f512-11e5-b1e1-0862662cf845' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:2016-03-28 11:27:40 -0700 PDT FinishedAt:2016-03-28 11:27:40 -0700 PDT ContainerID:docker://8b35cb85c5d86e4184b9cd49aaba133121c9571bfee5687ea6ee20ac201a2a14}
STEP: Cleaning up the configMap
[AfterEach] ConfigMap
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Collecting events from namespace "e2e-tests-configmap-uf3r7".
Mar 28 11:27:41.420: INFO: At 2016-03-28 11:27:39 -0700 PDT - event for pod-configmaps-c0948d7a-f512-11e5-b1e1-0862662cf845: {default-scheduler } Scheduled: Successfully assigned pod-configmaps-c0948d7a-f512-11e5-b1e1-0862662cf845 to 127.0.0.1
Mar 28 11:27:41.420: INFO: At 2016-03-28 11:27:40 -0700 PDT - event for pod-configmaps-c0948d7a-f512-11e5-b1e1-0862662cf845: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/mounttest:0.6" already present on machine
Mar 28 11:27:41.420: INFO: At 2016-03-28 11:27:40 -0700 PDT - event for pod-configmaps-c0948d7a-f512-11e5-b1e1-0862662cf845: {kubelet 127.0.0.1} Created: Created container with docker id 8b35cb85c5d8
Mar 28 11:27:41.420: INFO: At 2016-03-28 11:27:40 -0700 PDT - event for pod-configmaps-c0948d7a-f512-11e5-b1e1-0862662cf845: {kubelet 127.0.0.1} Started: Started container with docker id 8b35cb85c5d8
Mar 28 11:27:41.422: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 28 11:27:41.422: INFO: k8s-etcd-127.0.0.1 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:07 -0700 PDT }]
Mar 28 11:27:41.422: INFO: k8s-master-127.0.0.1 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:09 -0700 PDT }]
Mar 28 11:27:41.422: INFO: k8s-proxy-127.0.0.1 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:06 -0700 PDT }]
Mar 28 11:27:41.422: INFO: kube-dns-v10-d8w4s 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:51 -0700 PDT }]
Mar 28 11:27:41.422: INFO:
Mar 28 11:27:41.424: INFO:
Logging node info for node 127.0.0.1
Mar 28 11:27:41.426: INFO: Node Info: &{{ } {127.0.0.1 /api/v1/nodes/127.0.0.1 137b6e8b-f50e-11e5-a3fb-0862662cf845 3781 0 2016-03-28 10:54:10 -0700 PDT <nil> <nil> map[kubernetes.io/e2e-0faa8565-f50f-11e5-b1e1-0862662cf845:42 kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1 false} {map[cpu:{4.000 DecimalSI} memory:{16767156224.000 BinarySI} pods:{110.000 DecimalSI}] map[cpu:{4.000 DecimalSI} memory:{16767156224.000 BinarySI} pods:{110.000 DecimalSI}] [{OutOfDisk False 2016-03-28 11:27:37 -0700 PDT 2016-03-28 10:54:10 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-28 11:27:37 -0700 PDT 2016-03-28 10:54:10 -0700 PDT KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {12fd47e6d4c54914a88972c6cc8d81de 44EC9C00-D7DA-11DD-9398-0862662CF845 f0f5728b-196f-47d9-9672-2a11ecdb3d77 4.4.6-300.fc23.x86_64 Debian GNU/Linux 8 (jessie) docker://1.10.3 v1.2.0 v1.2.0} [{[brs-ui:latest localhost:5000/brs-ui:latest] 605145331} {[<none>:<none>] 605144068} {[<none>:<none>] 605138643} {[ci1.brs.rzc.cudaops.com:5000/bn/brs-ui:latest] 613900067} {[kube-build:build-3df9ed65b2] 1628893568} {[<none>:<none>] 1628893569} {[<none>:<none>] 1628893559} {[<none>:<none>] 1627726538} {[<none>:<none>] 1627726538} {[<none>:<none>] 1627726538} {[<none>:<none>] 1628892949} {[localhost:5000/log-indexer-config:latest log-indexer-config:latest] 717448316} {[localhost:5000/log-indexer:latest log-indexer:latest] 383713661} {[<none>:<none>] 310976263} {[<none>:<none>] 321414548} {[<none>:<none>] 383714083} {[ci1.brs.rzc.cudaops.com:5000/bn/curator:latest] 736669380} {[<none>:<none>] 413912127} {[<none>:<none>] 717449212} {[<none>:<none>] 717449200} {[<none>:<none>] 717448730} {[<none>:<none>] 699153894} {[<none>:<none>] 413912971} {[<none>:<none>] 413912975} {[<none>:<none>] 413913005} {[<none>:<none>] 413912873} {[localhost:5000/log-indexers:latest log-indexers:latest] 717448654} {[<none>:<none>] 717448624} {[localhost:5000/log-server:latest log-server:latest] 375265658} {[<none>:<none>] 717448145} {[<none>:<none>] 717448145} {[<none>:<none>] 717448145} {[<none>:<none>] 717448145} {[<none>:<none>] 413906461} {[<none>:<none>] 413905748} {[<none>:<none>] 413905807} {[<none>:<none>] 413905790} {[<none>:<none>] 413905790} {[<none>:<none>] 605680907} {[<none>:<none>] 413905763} {[<none>:<none>] 375258621} {[<none>:<none>] 375258592} {[python:3.5] 689482475} {[php:5.6-apache] 479350972} {[ci1.brs.rzc.cudaops.com:5000/bn/ponyo:latest] 937604426} {[busybox:latest] 1112820} {[gcr.io/google_containers/hyperkube-amd64:v1.2.0] 316607821} {[postgres:9.5] 264519910} {[elasticsearch:2.2] 347050720} {[java:openjdk-8-jre] 310976263} {[ci1.brs.rzc.cudaops.com:5000/bn/notifier:latest] 736471567} {[ci1.brs.rzc.cudaops.com:5000/bn/log-server:latest] 374723147} {[nginx:latest] 190461396} {[registry:2] 165760106} {[ci1.brs.rzc.cudaops.com:5000/bn/postfix:latest] 190134010} {[ci1.brs.rzc.cudaops.com:5000/bn/log-indexer:latest] 413367511} {[ci1.brs.rzc.cudaops.com:5000/bn/kafka:latest] 437445503} {[redis:latest] 177537508} {[gcr.io/google_samples/gb-frontend:v4] 510197625} {[gcr.io/google_containers/nettest:1.7] 24051275} {[gcr.io/google_containers/kube-cross:v1.4.2-1] 1551078122} {[gcr.io/google_containers/etcd-amd64:2.2.1] 28192476} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/kube2sky:1.12] 24482187} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/example-dns-frontend:v1] 677794775} 
{[gcr.io/google_containers/example-dns-backend:v1] 675068800} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/etcd:2.2.1] 28191895} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/skydns:2015-10-13-8c72f8c] 40547562} {[gcr.io/google_containers/pause:2.0] 350164} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_samples/gb-redisslave:v1] 109462535} {[gcr.io/google_containers/exechealthz:1.0] 7095869} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190122856} {[gcr.io/google_containers/mounttest:0.2] 1752375} {[gcr.io/google_containers/dnsutils:e2e] 141873580} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/serve_hostname:1.1] 4522409} {[gcr.io/google_containers/redis:e2e] 418929769} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/update-demo:kitten] 4549069} {[gcr.io/google_containers/nginx:1.7.9] 91641000} {[kubernetes/redis:v1] 145954175} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/busybox:latest] 2429728} {[gcr.io/google_containers/liveness:e2e] 4387474}]}}
Mar 28 11:27:41.426: INFO:
Logging kubelet events for node 127.0.0.1
Mar 28 11:27:41.427: INFO:
Logging pods the kubelet thinks is on node 127.0.0.1
Mar 28 11:27:41.431: INFO: pod-configmaps-c0948d7a-f512-11e5-b1e1-0862662cf845 started at <nil> (0 container statuses recorded)
Mar 28 11:27:41.431: INFO: kube-dns-v10-d8w4s started at <nil> (0 container statuses recorded)
Mar 28 11:27:41.431: INFO: k8s-master-127.0.0.1 started at <nil> (0 container statuses recorded)
Mar 28 11:27:41.431: INFO: k8s-etcd-127.0.0.1 started at <nil> (0 container statuses recorded)
Mar 28 11:27:41.431: INFO: k8s-proxy-127.0.0.1 started at <nil> (0 container statuses recorded)
Mar 28 11:27:41.488: INFO: ERROR kubelet_docker_errors{operation_type="inspect_container"} => 51 @[0]
Mar 28 11:27:41.488: INFO: ERROR kubelet_docker_errors{operation_type="inspect_image"} => 15 @[0]
Mar 28 11:27:41.488: INFO: ERROR kubelet_docker_errors{operation_type="start_container"} => 2 @[0]
Mar 28 11:27:41.489: INFO: ERROR kubelet_docker_errors{operation_type="stop_container"} => 48 @[0]
Mar 28 11:27:41.489: INFO:
Latency metrics for node 127.0.0.1
Mar 28 11:27:41.489: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:2m13.946501s}
Mar 28 11:27:41.489: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:1m39.865401s}
Mar 28 11:27:41.489: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:1m33.46205s}
Mar 28 11:27:41.489: INFO: {Operation:SyncPod Method:container_manager_latency_microseconds Quantile:0.99 Latency:1m32.755513s}
Mar 28 11:27:41.489: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.99 Latency:1m29.886563s}
Mar 28 11:27:41.489: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:1m12.393286s}
Mar 28 11:27:41.489: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:30.144331s}
Mar 28 11:27:41.489: INFO: {Operation:pull_image Method:docker_operations_latency_microseconds Quantile:0.99 Latency:28.802765s}
Mar 28 11:27:41.489: INFO: {Operation:pull_image Method:docker_operations_latency_microseconds Quantile:0.9 Latency:28.802765s}
Mar 28 11:27:41.489: INFO: {Operation:update Method:pod_worker_latency_microseconds Quantile:0.99 Latency:12.894036s}
Mar 28 11:27:41.489: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-uf3r7" for this suite.
• Failure [9.115 seconds]
ConfigMap
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:333
should be consumable from pods in volume with mappings [Conformance] [It]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:176
Expected error:
<*errors.errorString | 0xc820900a60>: {
s: "pod 'pod-configmaps-c0948d7a-f512-11e5-b1e1-0862662cf845' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:2016-03-28 11:27:40 -0700 PDT FinishedAt:2016-03-28 11:27:40 -0700 PDT ContainerID:docker://8b35cb85c5d86e4184b9cd49aaba133121c9571bfee5687ea6ee20ac201a2a14}",
}
pod 'pod-configmaps-c0948d7a-f512-11e5-b1e1-0862662cf845' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:2016-03-28 11:27:40 -0700 PDT FinishedAt:2016-03-28 11:27:40 -0700 PDT ContainerID:docker://8b35cb85c5d86e4184b9cd49aaba133121c9571bfee5687ea6ee20ac201a2a14}
not to have occurred
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1637
------------------------------
SSS
------------------------------
Kubectl client Kubectl logs
should be able to retrieve and filter logs [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:837
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:27:46.494: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:27:46.496: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-yho2r
Mar 28 11:27:46.496: INFO: Get service account default in ns e2e-tests-kubectl-yho2r failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:27:48.497: INFO: Service account default in ns e2e-tests-kubectl-yho2r with secrets found. (2.001555188s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:27:48.497: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-yho2r
Mar 28 11:27:48.498: INFO: Service account default in ns e2e-tests-kubectl-yho2r with secrets found. (704.625µs)
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:120
[BeforeEach] Kubectl logs
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:791
STEP: creating an rc
Mar 28 11:27:48.498: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config create -f ../../examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-yho2r'
Mar 28 11:27:48.538: INFO: stderr: ""
Mar 28 11:27:48.538: INFO: stdout: "replicationcontroller \"redis-master\" created"
[It] should be able to retrieve and filter logs [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:837
Mar 28 11:27:50.541: INFO: Waiting up to 5m0s for pod redis-master-5cai4 status to be running
Mar 28 11:27:50.542: INFO: Waiting for pod redis-master-5cai4 in namespace 'e2e-tests-kubectl-yho2r' status to be 'running'(found phase: "Pending", readiness: false) (1.013734ms elapsed)
Mar 28 11:27:52.544: INFO: Found pod 'redis-master-5cai4' on node '127.0.0.1'
STEP: checking for a matching strings
Mar 28 11:27:52.544: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-5cai4 redis-master --namespace=e2e-tests-kubectl-yho2r'
Mar 28 11:27:52.561: INFO: stderr: ""
Mar 28 11:27:52.561: INFO: stdout: "1:C 28 Mar 18:27:51.430 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf\n _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.0.7 (00000000/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 28 Mar 18:27:51.430 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 28 Mar 18:27:51.430 # Server started, Redis version 3.0.7\n1:M 28 Mar 18:27:51.430 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 28 Mar 18:27:51.430 * The server is now ready to accept connections on port 6379"
STEP: limiting log lines
Mar 28 11:27:52.561: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-5cai4 redis-master --namespace=e2e-tests-kubectl-yho2r --tail=1'
Mar 28 11:27:52.576: INFO: stderr: ""
Mar 28 11:27:52.576: INFO: stdout: "1:M 28 Mar 18:27:51.430 * The server is now ready to accept connections on port 6379"
STEP: limiting log bytes
Mar 28 11:27:52.576: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-5cai4 redis-master --namespace=e2e-tests-kubectl-yho2r --limit-bytes=1'
Mar 28 11:27:52.590: INFO: stderr: ""
Mar 28 11:27:52.590: INFO: stdout: "1"
STEP: exposing timestamps
Mar 28 11:27:52.590: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-5cai4 redis-master --namespace=e2e-tests-kubectl-yho2r --tail=1 --timestamps'
Mar 28 11:27:52.606: INFO: stderr: ""
Mar 28 11:27:52.606: INFO: stdout: "2016-03-28T18:27:51.431031692Z 1:M 28 Mar 18:27:51.430 * The server is now ready to accept connections on port 6379"
STEP: restricting to a time range
Mar 28 11:27:55.106: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-5cai4 redis-master --namespace=e2e-tests-kubectl-yho2r --since=1s'
Mar 28 11:27:55.123: INFO: stderr: ""
Mar 28 11:27:55.123: INFO: stdout: ""
Mar 28 11:27:55.123: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-5cai4 redis-master --namespace=e2e-tests-kubectl-yho2r --since=24h'
Mar 28 11:27:55.138: INFO: stderr: ""
Mar 28 11:27:55.138: INFO: stdout: "1:C 28 Mar 18:27:51.430 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf\n _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.0.7 (00000000/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 28 Mar 18:27:51.430 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 28 Mar 18:27:51.430 # Server started, Redis version 3.0.7\n1:M 28 Mar 18:27:51.430 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 28 Mar 18:27:51.430 * The server is now ready to accept connections on port 6379"
[AfterEach] Kubectl logs
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:794
STEP: using delete to clean up resources
Mar 28 11:27:55.139: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 -f ../../examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-yho2r'
Mar 28 11:27:57.159: INFO: stderr: ""
Mar 28 11:27:57.159: INFO: stdout: "replicationcontroller \"redis-master\" deleted"
Mar 28 11:27:57.159: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-yho2r'
Mar 28 11:27:57.171: INFO: stderr: ""
Mar 28 11:27:57.171: INFO: stdout: ""
Mar 28 11:27:57.171: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-yho2r -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 28 11:27:57.184: INFO: stderr: ""
Mar 28 11:27:57.184: INFO: stdout: ""
[AfterEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:27:57.184: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-yho2r" for this suite.
• [SLOW TEST:15.701 seconds]
Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1145
Kubectl logs
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:838
should be able to retrieve and filter logs [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:837
------------------------------
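The --tail, --limit-bytes, --timestamps and --since flags exercised above map onto v1.PodLogOptions; a minimal client-go sketch of the --tail=1 --timestamps case, assuming a recent client-go and reusing the pod, container, and namespace names from the run above.

package main

import (
	"context"
	"io"
	"os"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Equivalent of: kubectl log redis-master-5cai4 redis-master --tail=1 --timestamps
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	tail := int64(1)
	req := client.CoreV1().Pods("e2e-tests-kubectl-yho2r").GetLogs("redis-master-5cai4",
		&v1.PodLogOptions{Container: "redis-master", TailLines: &tail, Timestamps: true})
	stream, err := req.Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	io.Copy(os.Stdout, stream) // writes the single, timestamped log line
}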
SSSSS
------------------------------
EmptyDir volumes
should support (root,0644,tmpfs) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:64
[BeforeEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:28:02.195: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:28:02.196: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-qjb8x
Mar 28 11:28:02.197: INFO: Get service account default in ns e2e-tests-emptydir-qjb8x failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:28:04.198: INFO: Service account default in ns e2e-tests-emptydir-qjb8x with secrets found. (2.001501437s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:28:04.198: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-qjb8x
Mar 28 11:28:04.198: INFO: Service account default in ns e2e-tests-emptydir-qjb8x with secrets found. (626.487µs)
[It] should support (root,0644,tmpfs) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:64
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 28 11:28:04.200: INFO: Waiting up to 5m0s for pod pod-cf5ef018-f512-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:28:04.201: INFO: No Status.Info for container 'test-container' in pod 'pod-cf5ef018-f512-11e5-b1e1-0862662cf845' yet
Mar 28 11:28:04.201: INFO: Waiting for pod pod-cf5ef018-f512-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-emptydir-qjb8x' status to be 'success or failure'(found phase: "Pending", readiness: false) (868.128µs elapsed)
Mar 28 11:28:06.203: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-cf5ef018-f512-11e5-b1e1-0862662cf845' in namespace 'e2e-tests-emptydir-qjb8x' so far
Mar 28 11:28:06.203: INFO: Waiting for pod pod-cf5ef018-f512-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-emptydir-qjb8x' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.00226846s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-cf5ef018-f512-11e5-b1e1-0862662cf845 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-r--r--
[AfterEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:28:08.220: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qjb8x" for this suite.
• [SLOW TEST:11.035 seconds]
EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:113
should support (root,0644,tmpfs) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:64
------------------------------
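The "(root,0644,tmpfs)" case above reports mount type tmpfs because the emptyDir is declared with medium Memory; a minimal sketch of that volume definition, assuming the k8s.io/api/core/v1 types.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// An emptyDir backed by RAM (tmpfs) rather than the node's default medium.
	vol := v1.Volume{
		Name: "test-volume",
		VolumeSource: v1.VolumeSource{
			EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
		},
	}
	fmt.Println(vol.Name, vol.EmptyDir.Medium) // prints: test-volume Memory
}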
S
------------------------------
Pods
should contain environment variables for services [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:637
[BeforeEach] Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:28:13.230: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:28:13.231: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-iz45q
Mar 28 11:28:13.232: INFO: Get service account default in ns e2e-tests-pods-iz45q failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:28:15.233: INFO: Service account default in ns e2e-tests-pods-iz45q with secrets found. (2.001597612s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:28:15.233: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-iz45q
Mar 28 11:28:15.234: INFO: Service account default in ns e2e-tests-pods-iz45q with secrets found. (688.107µs)
[It] should contain environment variables for services [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:637
Mar 28 11:28:15.236: INFO: Waiting up to 5m0s for pod server-envvars-d5f2bee7-f512-11e5-b1e1-0862662cf845 status to be running
Mar 28 11:28:15.237: INFO: Waiting for pod server-envvars-d5f2bee7-f512-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-pods-iz45q' status to be 'running'(found phase: "Pending", readiness: false) (1.014128ms elapsed)
Mar 28 11:28:17.238: INFO: Waiting for pod server-envvars-d5f2bee7-f512-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-pods-iz45q' status to be 'running'(found phase: "Pending", readiness: false) (2.002369594s elapsed)
Mar 28 11:28:19.239: INFO: Found pod 'server-envvars-d5f2bee7-f512-11e5-b1e1-0862662cf845' on node '127.0.0.1'
STEP: Creating a pod to test service env
Mar 28 11:28:19.259: INFO: Waiting up to 5m0s for pod client-envvars-d8579340-f512-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:28:19.260: INFO: No Status.Info for container 'env3cont' in pod 'client-envvars-d8579340-f512-11e5-b1e1-0862662cf845' yet
Mar 28 11:28:19.260: INFO: Waiting for pod client-envvars-d8579340-f512-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-pods-iz45q' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.486104ms elapsed)
Mar 28 11:28:21.262: INFO: Nil State.Terminated for container 'env3cont' in pod 'client-envvars-d8579340-f512-11e5-b1e1-0862662cf845' in namespace 'e2e-tests-pods-iz45q' so far
Mar 28 11:28:21.262: INFO: Waiting for pod client-envvars-d8579340-f512-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-pods-iz45q' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.002781243s elapsed)
Mar 28 11:28:23.263: INFO: Nil State.Terminated for container 'env3cont' in pod 'client-envvars-d8579340-f512-11e5-b1e1-0862662cf845' in namespace 'e2e-tests-pods-iz45q' so far
Mar 28 11:28:23.263: INFO: Waiting for pod client-envvars-d8579340-f512-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-pods-iz45q' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.004111479s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod client-envvars-d8579340-f512-11e5-b1e1-0862662cf845 container env3cont: <nil>
STEP: Successfully fetched pod logs:KUBERNETES_PORT=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT=443
FOOSERVICE_PORT_8765_TCP_PORT=8765
FOOSERVICE_PORT_8765_TCP_PROTO=tcp
HOSTNAME=client-envvars-d8579340-f512-11e5-b1e1-0862662cf845
SHLVL=1
HOME=/root
FOOSERVICE_PORT_8765_TCP=tcp://10.0.0.192:8765
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
FOOSERVICE_SERVICE_HOST=10.0.0.192
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
KUBERNETES_SERVICE_HOST=10.0.0.1
FOOSERVICE_SERVICE_PORT=8765
FOOSERVICE_PORT=tcp://10.0.0.192:8765
FOOSERVICE_PORT_8765_TCP_ADDR=10.0.0.192
[AfterEach] Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:28:25.334: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-iz45q" for this suite.
• [SLOW TEST:17.114 seconds]
Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:1222
should contain environment variables for services [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:637
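
The env dump above is the point of the test: when a pod starts, Kubernetes injects Docker-links-style variables (FOOSERVICE_SERVICE_HOST, FOOSERVICE_PORT_8765_TCP, ...) for every service that already exists in its namespace. A minimal way to repeat the check by hand, assuming kubectl is pointed at the same cluster and the service of interest already exists (the pod name and image below are placeholders, not the test's spec):

# One-shot pod that prints its environment, then exits.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: env-check
spec:
  restartPolicy: Never
  containers:
  - name: env-check
    image: busybox
    command: ["env"]
EOF
# Give it a moment to run to completion, then read the output.
kubectl logs env-check
kubectl delete pod env-check

Note that the variables only cover services created before the pod started; services created later require a new pod (or DNS lookups instead).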
------------------------------
Services
should provide secure master service [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:80
[BeforeEach] Services
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:28:30.344: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:28:30.346: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-y4b25
Mar 28 11:28:30.346: INFO: Get service account default in ns e2e-tests-services-y4b25 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:28:32.347: INFO: Service account default in ns e2e-tests-services-y4b25 with secrets found. (2.001709927s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:28:32.347: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-y4b25
Mar 28 11:28:32.348: INFO: Service account default in ns e2e-tests-services-y4b25 with secrets found. (686.252µs)
[BeforeEach] Services
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:73
Mar 28 11:28:32.348: INFO: >>> testContext.KubeConfig: /root/.kube/config
[It] should provide secure master service [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:80
[AfterEach] Services
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:28:32.349: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-services-y4b25" for this suite.
• [SLOW TEST:7.010 seconds]
Services
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:902
should provide secure master service [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:80
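
What this test asserts is essentially that the default "kubernetes" service exists in the default namespace and exposes the HTTPS port (443, matching the KUBERNETES_SERVICE_PORT_HTTPS=443 seen in the earlier env dump). A quick manual equivalent, assuming the same kubeconfig:

kubectl get service kubernetes --namespace=default -o wide
kubectl describe service kubernetes --namespace=default   # shows the https port and the apiserver endpoint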
------------------------------
Networking
should function for intra-pod communication [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:213
[BeforeEach] Networking
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:28:37.354: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:28:37.356: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-5dzyy
Mar 28 11:28:37.356: INFO: Get service account default in ns e2e-tests-nettest-5dzyy failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:28:39.357: INFO: Service account default in ns e2e-tests-nettest-5dzyy with secrets found. (2.001568994s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:28:39.357: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-5dzyy
Mar 28 11:28:39.358: INFO: Service account default in ns e2e-tests-nettest-5dzyy with secrets found. (656.02µs)
[BeforeEach] Networking
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
STEP: Executing a successful http request from the external internet
[It] should function for intra-pod communication [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:213
STEP: Creating a service named "nettest" in namespace "e2e-tests-nettest-5dzyy"
STEP: Creating a webserver (pending) pod on each node
Mar 28 11:28:39.474: FAIL: The test requires two Ready nodes on , but found just one.
STEP: Cleaning up the service
[AfterEach] Networking
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Collecting events from namespace "e2e-tests-nettest-5dzyy".
Mar 28 11:28:39.518: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 28 11:28:39.518: INFO: k8s-etcd-127.0.0.1 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:07 -0700 PDT }]
Mar 28 11:28:39.518: INFO: k8s-master-127.0.0.1 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:09 -0700 PDT }]
Mar 28 11:28:39.518: INFO: k8s-proxy-127.0.0.1 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:06 -0700 PDT }]
Mar 28 11:28:39.518: INFO: kube-dns-v10-d8w4s 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:51 -0700 PDT }]
Mar 28 11:28:39.518: INFO:
Mar 28 11:28:39.519: INFO:
Logging node info for node 127.0.0.1
Mar 28 11:28:39.521: INFO: Node Info: &{{ } {127.0.0.1 /api/v1/nodes/127.0.0.1 137b6e8b-f50e-11e5-a3fb-0862662cf845 3934 0 2016-03-28 10:54:10 -0700 PDT <nil> <nil> map[kubernetes.io/e2e-0faa8565-f50f-11e5-b1e1-0862662cf845:42 kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1 false} {map[cpu:{4.000 DecimalSI} memory:{16767156224.000 BinarySI} pods:{110.000 DecimalSI}] map[memory:{16767156224.000 BinarySI} pods:{110.000 DecimalSI} cpu:{4.000 DecimalSI}] [{OutOfDisk False 2016-03-28 11:28:37 -0700 PDT 2016-03-28 10:54:10 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-28 11:28:37 -0700 PDT 2016-03-28 10:54:10 -0700 PDT KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {12fd47e6d4c54914a88972c6cc8d81de 44EC9C00-D7DA-11DD-9398-0862662CF845 f0f5728b-196f-47d9-9672-2a11ecdb3d77 4.4.6-300.fc23.x86_64 Debian GNU/Linux 8 (jessie) docker://1.10.3 v1.2.0 v1.2.0} [{[brs-ui:latest localhost:5000/brs-ui:latest] 605145331} {[<none>:<none>] 605144068} {[<none>:<none>] 605138643} {[ci1.brs.rzc.cudaops.com:5000/bn/brs-ui:latest] 613900067} {[kube-build:build-3df9ed65b2] 1628893568} {[<none>:<none>] 1628893569} {[<none>:<none>] 1628893559} {[<none>:<none>] 1627726538} {[<none>:<none>] 1627726538} {[<none>:<none>] 1627726538} {[<none>:<none>] 1628892949} {[localhost:5000/log-indexer-config:latest log-indexer-config:latest] 717448316} {[localhost:5000/log-indexer:latest log-indexer:latest] 383713661} {[<none>:<none>] 310976263} {[<none>:<none>] 321414548} {[<none>:<none>] 383714083} {[ci1.brs.rzc.cudaops.com:5000/bn/curator:latest] 736669380} {[<none>:<none>] 413912127} {[<none>:<none>] 717449212} {[<none>:<none>] 717449200} {[<none>:<none>] 717448730} {[<none>:<none>] 699153894} {[<none>:<none>] 413912971} {[<none>:<none>] 413912975} {[<none>:<none>] 413913005} {[<none>:<none>] 413912873} {[localhost:5000/log-indexers:latest log-indexers:latest] 717448654} {[<none>:<none>] 717448624} {[localhost:5000/log-server:latest log-server:latest] 375265658} {[<none>:<none>] 717448145} {[<none>:<none>] 717448145} {[<none>:<none>] 717448145} {[<none>:<none>] 717448145} {[<none>:<none>] 413906461} {[<none>:<none>] 413905748} {[<none>:<none>] 413905807} {[<none>:<none>] 413905790} {[<none>:<none>] 413905790} {[<none>:<none>] 605680907} {[<none>:<none>] 413905763} {[<none>:<none>] 375258621} {[<none>:<none>] 375258592} {[python:3.5] 689482475} {[php:5.6-apache] 479350972} {[ci1.brs.rzc.cudaops.com:5000/bn/ponyo:latest] 937604426} {[busybox:latest] 1112820} {[gcr.io/google_containers/hyperkube-amd64:v1.2.0] 316607821} {[postgres:9.5] 264519910} {[elasticsearch:2.2] 347050720} {[java:openjdk-8-jre] 310976263} {[ci1.brs.rzc.cudaops.com:5000/bn/notifier:latest] 736471567} {[ci1.brs.rzc.cudaops.com:5000/bn/log-server:latest] 374723147} {[nginx:latest] 190461396} {[registry:2] 165760106} {[ci1.brs.rzc.cudaops.com:5000/bn/postfix:latest] 190134010} {[ci1.brs.rzc.cudaops.com:5000/bn/log-indexer:latest] 413367511} {[ci1.brs.rzc.cudaops.com:5000/bn/kafka:latest] 437445503} {[redis:latest] 177537508} {[gcr.io/google_samples/gb-frontend:v4] 510197625} {[gcr.io/google_containers/nettest:1.7] 24051275} {[gcr.io/google_containers/kube-cross:v1.4.2-1] 1551078122} {[gcr.io/google_containers/etcd-amd64:2.2.1] 28192476} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/kube2sky:1.12] 24482187} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/example-dns-frontend:v1] 677794775} 
{[gcr.io/google_containers/example-dns-backend:v1] 675068800} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/etcd:2.2.1] 28191895} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/skydns:2015-10-13-8c72f8c] 40547562} {[gcr.io/google_containers/pause:2.0] 350164} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_samples/gb-redisslave:v1] 109462535} {[gcr.io/google_containers/exechealthz:1.0] 7095869} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190122856} {[gcr.io/google_containers/mounttest:0.2] 1752375} {[gcr.io/google_containers/dnsutils:e2e] 141873580} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/serve_hostname:1.1] 4522409} {[gcr.io/google_containers/redis:e2e] 418929769} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/update-demo:kitten] 4549069} {[gcr.io/google_containers/nginx:1.7.9] 91641000} {[kubernetes/redis:v1] 145954175} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/busybox:latest] 2429728} {[gcr.io/google_containers/liveness:e2e] 4387474}]}}
Mar 28 11:28:39.521: INFO:
Logging kubelet events for node 127.0.0.1
Mar 28 11:28:39.523: INFO:
Logging pods the kubelet thinks are on node 127.0.0.1
Mar 28 11:28:39.529: INFO: kube-dns-v10-d8w4s started at <nil> (0 container statuses recorded)
Mar 28 11:28:39.529: INFO: k8s-master-127.0.0.1 started at <nil> (0 container statuses recorded)
Mar 28 11:28:39.529: INFO: k8s-etcd-127.0.0.1 started at <nil> (0 container statuses recorded)
Mar 28 11:28:39.529: INFO: k8s-proxy-127.0.0.1 started at <nil> (0 container statuses recorded)
Mar 28 11:28:39.580: INFO: ERROR kubelet_docker_errors{operation_type="inspect_container"} => 51 @[0]
Mar 28 11:28:39.580: INFO: ERROR kubelet_docker_errors{operation_type="inspect_image"} => 15 @[0]
Mar 28 11:28:39.580: INFO: ERROR kubelet_docker_errors{operation_type="logs"} => 1 @[0]
Mar 28 11:28:39.580: INFO: ERROR kubelet_docker_errors{operation_type="start_container"} => 2 @[0]
Mar 28 11:28:39.580: INFO: ERROR kubelet_docker_errors{operation_type="stop_container"} => 50 @[0]
Mar 28 11:28:39.580: INFO:
Latency metrics for node 127.0.0.1
Mar 28 11:28:39.580: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:2m11.9148s}
Mar 28 11:28:39.580: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:1m38.987016s}
Mar 28 11:28:39.580: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:1m33.46205s}
Mar 28 11:28:39.580: INFO: {Operation:SyncPod Method:container_manager_latency_microseconds Quantile:0.99 Latency:1m32.755513s}
Mar 28 11:28:39.580: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.99 Latency:1m29.886563s}
Mar 28 11:28:39.580: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:1m12.393286s}
Mar 28 11:28:39.580: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:30.144331s}
Mar 28 11:28:39.580: INFO: {Operation:pull_image Method:docker_operations_latency_microseconds Quantile:0.9 Latency:28.802765s}
Mar 28 11:28:39.580: INFO: {Operation:pull_image Method:docker_operations_latency_microseconds Quantile:0.99 Latency:28.802765s}
Mar 28 11:28:39.580: INFO: {Operation:update Method:pod_worker_latency_microseconds Quantile:0.99 Latency:12.894036s}
Mar 28 11:28:39.580: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-nettest-5dzyy" for this suite.
• Failure [7.231 seconds]
Networking
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:258
should function for intra-pod communication [Conformance] [It]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:213
Mar 28 11:28:39.474: The test requires two Ready nodes on , but found just one.
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:121
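
The failure above is environmental rather than a product bug: the intra-pod networking test needs at least two schedulable Ready nodes so it can place webserver pods on different machines, and this local cluster only has 127.0.0.1. To see how many usable nodes the suite will find, with the same kubeconfig:

kubectl get nodes
# Nodes count as usable when STATUS shows Ready and they are schedulable (not cordoned).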
------------------------------
SS
------------------------------
Kubectl client Kubectl patch
should add annotations for pods in rc [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:866
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:28:44.586: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:28:44.587: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-bjthc
Mar 28 11:28:44.587: INFO: Get service account default in ns e2e-tests-kubectl-bjthc failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:28:46.588: INFO: Service account default in ns e2e-tests-kubectl-bjthc with secrets found. (2.001549745s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:28:46.588: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-bjthc
Mar 28 11:28:46.589: INFO: Service account default in ns e2e-tests-kubectl-bjthc with secrets found. (580.733µs)
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:120
[It] should add annotations for pods in rc [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:866
STEP: creating Redis RC
Mar 28 11:28:46.589: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config create -f ../../examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-bjthc'
Mar 28 11:28:46.628: INFO: stderr: ""
Mar 28 11:28:46.628: INFO: stdout: "replicationcontroller \"redis-master\" created"
STEP: patching all pods
Mar 28 11:28:48.630: INFO: Waiting up to 5m0s for pod redis-master-85amr status to be running
Mar 28 11:28:48.631: INFO: Waiting for pod redis-master-85amr in namespace 'e2e-tests-kubectl-bjthc' status to be 'running'(found phase: "Pending", readiness: false) (938.228µs elapsed)
Mar 28 11:28:50.633: INFO: Found pod 'redis-master-85amr' on node '127.0.0.1'
Mar 28 11:28:50.633: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-85amr --namespace=e2e-tests-kubectl-bjthc -p {"metadata":{"annotations":{"x":"y"}}}'
Mar 28 11:28:50.648: INFO: stderr: ""
Mar 28 11:28:50.648: INFO: stdout: "\"redis-master-85amr\" patched"
STEP: checking annotations
Mar 28 11:28:50.650: INFO: Waiting up to 5m0s for pod redis-master-85amr status to be running
Mar 28 11:28:50.651: INFO: Found pod 'redis-master-85amr' on node '127.0.0.1'
[AfterEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:28:50.651: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bjthc" for this suite.
• [SLOW TEST:21.073 seconds]
Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1145
Kubectl patch
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:867
should add annotations for pods in rc [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:866
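
The patch step above is an ordinary strategic-merge patch that adds an annotation. To repeat it by hand against any pod, assuming the same kubeconfig (<pod-name> and <ns> are placeholders):

# Add the annotation, then confirm it is present on the object.
kubectl patch pod <pod-name> --namespace=<ns> -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl get pod <pod-name> --namespace=<ns> -o yaml | grep -A 2 'annotations:'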
------------------------------
S
------------------------------
EmptyDir volumes
should support (non-root,0666,default) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:108
[BeforeEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:29:05.658: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:29:05.660: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-0d6di
Mar 28 11:29:05.660: INFO: Get service account default in ns e2e-tests-emptydir-0d6di failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:29:07.661: INFO: Service account default in ns e2e-tests-emptydir-0d6di with secrets found. (2.001479199s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:29:07.661: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-0d6di
Mar 28 11:29:07.662: INFO: Service account default in ns e2e-tests-emptydir-0d6di with secrets found. (576.055µs)
[It] should support (non-root,0666,default) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:108
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 28 11:29:07.664: INFO: Waiting up to 5m0s for pod pod-f532a8ef-f512-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:29:07.665: INFO: No Status.Info for container 'test-container' in pod 'pod-f532a8ef-f512-11e5-b1e1-0862662cf845' yet
Mar 28 11:29:07.665: INFO: Waiting for pod pod-f532a8ef-f512-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-emptydir-0d6di' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.183494ms elapsed)
Mar 28 11:29:09.666: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-f532a8ef-f512-11e5-b1e1-0862662cf845' in namespace 'e2e-tests-emptydir-0d6di' so far
Mar 28 11:29:09.666: INFO: Waiting for pod pod-f532a8ef-f512-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-emptydir-0d6di' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.002463473s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-f532a8ef-f512-11e5-b1e1-0862662cf845 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-rw-rw-
[AfterEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:29:11.681: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-0d6di" for this suite.
• [SLOW TEST:11.032 seconds]
EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:113
should support (non-root,0666,default) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:108
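
The emptyDir matrix tests (this one and the tmpfs variants further down) all mount an emptyDir volume, write a file from the test container, and check the mount type and resulting permissions. A hand-rolled sketch of the same wiring, assuming kubectl on the same cluster (names and image are placeholders; the real test uses its mounttest image and also varies the user and file mode):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "touch /test-volume/test-file && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}           # default medium = node disk; use "medium: Memory" for the tmpfs variants
EOF
# Once the pod has completed:
kubectl logs emptydir-demo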
------------------------------
SSSSS
------------------------------
DNS
should provide DNS for the cluster [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:280
[BeforeEach] DNS
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:29:16.691: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:29:16.692: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-dns-wi40r
Mar 28 11:29:16.693: INFO: Get service account default in ns e2e-tests-dns-wi40r failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:29:18.694: INFO: Service account default in ns e2e-tests-dns-wi40r with secrets found. (2.00150862s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:29:18.694: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-dns-wi40r
Mar 28 11:29:18.694: INFO: Service account default in ns e2e-tests-dns-wi40r with secrets found. (679.328µs)
[It] should provide DNS for the cluster [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:280
STEP: Waiting for DNS Service to be Running
Mar 28 11:29:18.696: INFO: Waiting up to 5m0s for pod kube-dns-v10-d8w4s status to be running
Mar 28 11:29:18.698: INFO: Found pod 'kube-dns-v10-d8w4s' on node '127.0.0.1'
STEP: Running these commands on wheezy:for i in `seq 1 600`; do test -n "$$(dig +notcp +noall +answer +search kubernetes.default A)" && echo OK > /results/wheezy_udp@kubernetes.default;test -n "$$(dig +tcp +noall +answer +search kubernetes.default A)" && echo OK > /results/wheezy_tcp@kubernetes.default;test -n "$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && echo OK > /results/wheezy_udp@kubernetes.default.svc;test -n "$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;test -n "$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;test -n "$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(dig +notcp +noall +answer +search google.com A)" && echo OK > /results/wheezy_udp@google.com;test -n "$$(dig +tcp +noall +answer +search google.com A)" && echo OK > /results/wheezy_tcp@google.com;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-wi40r.pod.cluster.local"}');test -n "$$(dig +notcp +noall +answer +search $${podARec} A)" && echo OK > /results/wheezy_udp@PodARecord;test -n "$$(dig +tcp +noall +answer +search $${podARec} A)" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie:for i in `seq 1 600`; do test -n "$$(dig +notcp +noall +answer +search kubernetes.default A)" && echo OK > /results/jessie_udp@kubernetes.default;test -n "$$(dig +tcp +noall +answer +search kubernetes.default A)" && echo OK > /results/jessie_tcp@kubernetes.default;test -n "$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && echo OK > /results/jessie_udp@kubernetes.default.svc;test -n "$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && echo OK > /results/jessie_tcp@kubernetes.default.svc;test -n "$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;test -n "$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(dig +notcp +noall +answer +search google.com A)" && echo OK > /results/jessie_udp@google.com;test -n "$$(dig +tcp +noall +answer +search google.com A)" && echo OK > /results/jessie_tcp@google.com;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-wi40r.pod.cluster.local"}');test -n "$$(dig +notcp +noall +answer +search $${podARec} A)" && echo OK > /results/jessie_udp@PodARecord;test -n "$$(dig +tcp +noall +answer +search $${podARec} A)" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
Mar 28 11:29:18.700: INFO: Waiting up to 5m0s for pod dns-test-fbc69dec-f512-11e5-b1e1-0862662cf845 status to be running
Mar 28 11:29:18.703: INFO: Waiting for pod dns-test-fbc69dec-f512-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-dns-wi40r' status to be 'running'(found phase: "Pending", readiness: false) (2.375025ms elapsed)
Mar 28 11:29:20.705: INFO: Waiting for pod dns-test-fbc69dec-f512-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-dns-wi40r' status to be 'running'(found phase: "Pending", readiness: false) (2.004118528s elapsed)
Mar 28 11:29:22.706: INFO: Found pod 'dns-test-fbc69dec-f512-11e5-b1e1-0862662cf845' on node '127.0.0.1'
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 28 11:29:24.726: INFO: DNS probes using dns-test-fbc69dec-f512-11e5-b1e1-0862662cf845 succeeded
STEP: deleting the pod
[AfterEach] DNS
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:29:24.746: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-wi40r" for this suite.
• [SLOW TEST:13.063 seconds]
DNS
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:368
should provide DNS for the cluster [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:280
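
The wheezy/jessie probe pods above simply loop dig against the well-known cluster DNS names and write OK markers that the test then reads back. A lighter-weight spot check from any image that ships nslookup (names below are placeholders):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dns-check
spec:
  restartPolicy: Never
  containers:
  - name: dns-check
    image: busybox
    command: ["nslookup", "kubernetes.default"]
EOF
# Once it has completed, the lookup should resolve to the kubernetes service
# ClusterIP (10.0.0.1 on this cluster).
kubectl logs dns-check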
------------------------------
Proxy version v1
should proxy logs on node with explicit kubelet port [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:55
[BeforeEach] version v1
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:29:29.754: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:29:29.755: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-qvooz
Mar 28 11:29:29.756: INFO: Get service account default in ns e2e-tests-proxy-qvooz failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:29:31.757: INFO: Service account default in ns e2e-tests-proxy-qvooz with secrets found. (2.001958064s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:29:31.757: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-qvooz
Mar 28 11:29:31.758: INFO: Service account default in ns e2e-tests-proxy-qvooz with secrets found. (723.117µs)
[It] should proxy logs on node with explicit kubelet port [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:55
Mar 28 11:29:31.768: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 8.539698ms)
Mar 28 11:29:31.770: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 2.274202ms)
Mar 28 11:29:31.772: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 2.047311ms)
Mar 28 11:29:31.774: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 2.005141ms)
Mar 28 11:29:31.776: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.892709ms)
Mar 28 11:29:31.778: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.863985ms)
Mar 28 11:29:31.780: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.882629ms)
Mar 28 11:29:31.782: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.842038ms)
Mar 28 11:29:31.784: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.938373ms)
Mar 28 11:29:31.788: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 4.33976ms)
Mar 28 11:29:31.790: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.907254ms)
Mar 28 11:29:31.792: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.900158ms)
Mar 28 11:29:31.794: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.850958ms)
Mar 28 11:29:31.796: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.873432ms)
Mar 28 11:29:31.798: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.944635ms)
Mar 28 11:29:31.800: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.848651ms)
Mar 28 11:29:31.802: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 2.047558ms)
Mar 28 11:29:31.803: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.893342ms)
Mar 28 11:29:31.805: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.868322ms)
Mar 28 11:29:31.807: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.889518ms)
[AfterEach] version v1
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:29:31.807: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-qvooz" for this suite.
• [SLOW TEST:7.059 seconds]
Proxy
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40
version v1
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:39
should proxy logs on node with explicit kubelet port [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:55
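
The twenty requests above all hit the apiserver proxy path for the kubelet's /logs endpoint on its explicit port (10250). The same listing can be fetched by hand through a local kubectl proxy; the node proxy subresource form used by a later test, /api/v1/nodes/127.0.0.1/proxy/logs/, returns the same content:

kubectl proxy --port=8001 &
PROXY_PID=$!
sleep 2   # give the proxy a moment to start listening
curl http://127.0.0.1:8001/api/v1/proxy/nodes/127.0.0.1:10250/logs/
kill "$PROXY_PID"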
------------------------------
Probing container
with readiness probe that fails should never be ready and never restart [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:110
[BeforeEach] Probing container
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:29:36.812: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:29:36.814: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-container-probe-nbian
Mar 28 11:29:36.814: INFO: Get service account default in ns e2e-tests-container-probe-nbian failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:29:38.815: INFO: Service account default in ns e2e-tests-container-probe-nbian with secrets found. (2.001491257s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:29:38.815: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-container-probe-nbian
Mar 28 11:29:38.816: INFO: Service account default in ns e2e-tests-container-probe-nbian with secrets found. (595.263µs)
[BeforeEach] Probing container
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:45
[It] with readiness probe that fails should never be ready and never restart [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:110
[AfterEach] Probing container
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:32:38.822: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-nbian" for this suite.
• [SLOW TEST:202.014 seconds]
Probing container
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:112
with readiness probe that fails should never be ready and never restart [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:110
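
This test declares a readiness probe that can never succeed and then verifies, over several minutes, that the pod never reports Ready and is never restarted (readiness failures, unlike liveness failures, do not kill the container). A minimal sketch of such a pod (placeholder names, not the test's exact spec):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: never-ready
spec:
  containers:
  - name: never-ready
    image: busybox
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails, so the container never becomes Ready
EOF
kubectl get pod never-ready   # READY should stay 0/1 and RESTARTS should stay 0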
------------------------------
SS
------------------------------
SchedulerPredicates [Serial]
validates that NodeSelector is respected if not matching [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:393
[BeforeEach] SchedulerPredicates [Serial]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:32:58.827: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:32:58.828: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-sched-pred-rnrc6
Mar 28 11:32:58.829: INFO: Get service account default in ns e2e-tests-sched-pred-rnrc6 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:33:00.830: INFO: Service account default in ns e2e-tests-sched-pred-rnrc6 with secrets found. (2.001783045s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:33:00.830: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-sched-pred-rnrc6
Mar 28 11:33:00.831: INFO: Service account default in ns e2e-tests-sched-pred-rnrc6 with secrets found. (722.504µs)
[BeforeEach] SchedulerPredicates [Serial]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:199
Mar 28 11:33:00.832: INFO: Waiting for terminating namespaces to be deleted...
Mar 28 11:33:00.835: INFO: >>> testContext.KubeConfig: /root/.kube/config
Mar 28 11:33:00.835: INFO: Waiting up to 2m0s for all pods (need at least 1) in namespace 'kube-system' to be running and ready
Mar 28 11:33:00.838: INFO: 1 / 1 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 28 11:33:00.838: INFO: expected 1 pod replicas in namespace 'kube-system', 1 are Running and Ready.
Mar 28 11:33:00.838: INFO:
Logging pods the kubelet thinks are on node 127.0.0.1 before test
Mar 28 11:33:00.841: INFO: k8s-etcd-127.0.0.1 from default started at <nil> (0 container statuses recorded)
Mar 28 11:33:00.841: INFO: k8s-proxy-127.0.0.1 from default started at <nil> (0 container statuses recorded)
Mar 28 11:33:00.841: INFO: k8s-master-127.0.0.1 from default started at <nil> (0 container statuses recorded)
Mar 28 11:33:00.841: INFO: kube-dns-v10-d8w4s from kube-system started at 2016-03-28 10:54:41 -0700 PDT (4 container statuses recorded)
Mar 28 11:33:00.841: INFO: Container etcd ready: true, restart count 0
Mar 28 11:33:00.841: INFO: Container healthz ready: true, restart count 0
Mar 28 11:33:00.841: INFO: Container kube2sky ready: true, restart count 0
Mar 28 11:33:00.841: INFO: Container skydns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:393
STEP: Trying to schedule Pod with nonempty NodeSelector.
Mar 28 11:33:00.846: INFO: Sleeping 10 seconds and crossing our fingers that scheduler will run in that time.
STEP: Removing all pods in namespace e2e-tests-sched-pred-rnrc6
[AfterEach] SchedulerPredicates [Serial]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:158
[AfterEach] SchedulerPredicates [Serial]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:33:10.863: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-rnrc6" for this suite.
• [SLOW TEST:17.046 seconds]
SchedulerPredicates [Serial]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:729
validates that NodeSelector is respected if not matching [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:393
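
The scheduling assertion here is simply that a pod whose nodeSelector matches no node label is never bound and stays Pending (hence the "sleeping 10 seconds" wait above). A hand-rolled check, using a label key that should not exist on any node (names are placeholders; the pause image is taken from this node's image list):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    example.com/does-not-exist: "42"
  containers:
  - name: pause
    image: gcr.io/google_containers/pause:2.0
EOF
kubectl get pod nodeselector-demo    # expected to remain Pending
kubectl delete pod nodeselector-demo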
------------------------------
S
------------------------------
ConfigMap
should be consumable via environment variable [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:332
[BeforeEach] ConfigMap
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:33:15.873: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:33:15.874: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-configmap-t6tmk
Mar 28 11:33:15.875: INFO: Get service account default in ns e2e-tests-configmap-t6tmk failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:33:17.876: INFO: Service account default in ns e2e-tests-configmap-t6tmk with secrets found. (2.001363412s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:33:17.876: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-configmap-t6tmk
Mar 28 11:33:17.876: INFO: Service account default in ns e2e-tests-configmap-t6tmk with secrets found. (582.207µs)
[It] should be consumable via environment variable [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:332
STEP: Creating configMap e2e-tests-configmap-t6tmk/configmap-test-8a565d69-f513-11e5-b1e1-0862662cf845
STEP: Creating a pod to test consume configMaps
Mar 28 11:33:17.895: INFO: Waiting up to 5m0s for pod pod-configmaps-8a5683ad-f513-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:33:17.896: INFO: No Status.Info for container 'env-test' in pod 'pod-configmaps-8a5683ad-f513-11e5-b1e1-0862662cf845' yet
Mar 28 11:33:17.896: INFO: Waiting for pod pod-configmaps-8a5683ad-f513-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-configmap-t6tmk' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.252926ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-configmaps-8a5683ad-f513-11e5-b1e1-0862662cf845 container env-test: <nil>
STEP: Successfully fetched pod logs:KUBERNETES_PORT=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT=443
CONFIG_DATA_1=value-1
HOSTNAME=pod-configmaps-8a5683ad-f513-11e5-b1e1-0862662cf845
SHLVL=1
HOME=/root
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
KUBERNETES_SERVICE_HOST=10.0.0.1
STEP: Cleaning up the configMap
[AfterEach] ConfigMap
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:33:19.921: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-t6tmk" for this suite.
• [SLOW TEST:9.058 seconds]
ConfigMap
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:333
should be consumable via environment variable [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:332
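
The CONFIG_DATA_1=value-1 line in the fetched logs comes from an env var populated with valueFrom.configMapKeyRef. A sketch of the same wiring by hand (placeholder names):

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["env"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: demo-config
          key: data-1
EOF
# Once the pod has completed:
kubectl logs configmap-env-demo | grep CONFIG_DATA_1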
------------------------------
SSSSSSS
------------------------------
Pods
should be submitted and removed [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:384
[BeforeEach] Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:33:24.931: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:33:24.932: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-cf3wm
Mar 28 11:33:24.933: INFO: Get service account default in ns e2e-tests-pods-cf3wm failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:33:26.934: INFO: Service account default in ns e2e-tests-pods-cf3wm with secrets found. (2.001725716s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:33:26.934: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-cf3wm
Mar 28 11:33:26.935: INFO: Service account default in ns e2e-tests-pods-cf3wm with secrets found. (795.056µs)
[It] should be submitted and removed [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:384
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Mar 28 11:33:26.940: INFO: Waiting up to 5m0s for pod pod-update-8fbc9ef0-f513-11e5-b1e1-0862662cf845 status to be running
Mar 28 11:33:26.941: INFO: Waiting for pod pod-update-8fbc9ef0-f513-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-pods-cf3wm' status to be 'running'(found phase: "Pending", readiness: false) (760.255µs elapsed)
Mar 28 11:33:28.942: INFO: Found pod 'pod-update-8fbc9ef0-f513-11e5-b1e1-0862662cf845' on node '127.0.0.1'
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:33:31.620: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-cf3wm" for this suite.
• [SLOW TEST:11.696 seconds]
Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:1222
should be submitted and removed [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:384
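
The "setting up watch" step corresponds to watching the pod collection and asserting that both the creation and the graceful deletion are observed as events. The same behavior can be seen interactively (<ns> is a placeholder for the test namespace):

kubectl get pods --namespace=<ns> --watch
# In a second terminal, create and then delete a pod in that namespace; the
# watch prints each transition (Pending, Running, then the deletion) as it happens.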
------------------------------
SSSSSS
------------------------------
EmptyDir volumes
should support (non-root,0644,tmpfs) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:76
[BeforeEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:33:36.627: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:33:36.629: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-4dw52
Mar 28 11:33:36.629: INFO: Get service account default in ns e2e-tests-emptydir-4dw52 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:33:38.631: INFO: Service account default in ns e2e-tests-emptydir-4dw52 with secrets found. (2.002323389s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:33:38.631: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-4dw52
Mar 28 11:33:38.633: INFO: Service account default in ns e2e-tests-emptydir-4dw52 with secrets found. (1.367434ms)
[It] should support (non-root,0644,tmpfs) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:76
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 28 11:33:38.636: INFO: Waiting up to 5m0s for pod pod-96b586e1-f513-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:33:38.639: INFO: No Status.Info for container 'test-container' in pod 'pod-96b586e1-f513-11e5-b1e1-0862662cf845' yet
Mar 28 11:33:38.639: INFO: Waiting for pod pod-96b586e1-f513-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-emptydir-4dw52' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.289166ms elapsed)
Mar 28 11:33:40.640: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-96b586e1-f513-11e5-b1e1-0862662cf845' in namespace 'e2e-tests-emptydir-4dw52' so far
Mar 28 11:33:40.640: INFO: Waiting for pod pod-96b586e1-f513-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-emptydir-4dw52' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.00368327s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-96b586e1-f513-11e5-b1e1-0862662cf845 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-r--r--
[AfterEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:33:42.665: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4dw52" for this suite.
• [SLOW TEST:11.047 seconds]
EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:113
should support (non-root,0644,tmpfs) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:76
------------------------------
EmptyDir volumes
should support (root,0777,tmpfs) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:72
[BeforeEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:33:47.675: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:33:47.676: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-1fmyz
Mar 28 11:33:47.676: INFO: Get service account default in ns e2e-tests-emptydir-1fmyz failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:33:49.677: INFO: Service account default in ns e2e-tests-emptydir-1fmyz with secrets found. (2.001475537s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:33:49.677: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-1fmyz
Mar 28 11:33:49.678: INFO: Service account default in ns e2e-tests-emptydir-1fmyz with secrets found. (622.463µs)
[It] should support (root,0777,tmpfs) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:72
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 28 11:33:49.680: INFO: Waiting up to 5m0s for pod pod-9d4aed24-f513-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:33:49.681: INFO: No Status.Info for container 'test-container' in pod 'pod-9d4aed24-f513-11e5-b1e1-0862662cf845' yet
Mar 28 11:33:49.681: INFO: Waiting for pod pod-9d4aed24-f513-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-emptydir-1fmyz' status to be 'success or failure'(found phase: "Pending", readiness: false) (985.33µs elapsed)
Mar 28 11:33:51.682: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-9d4aed24-f513-11e5-b1e1-0862662cf845' in namespace 'e2e-tests-emptydir-1fmyz' so far
Mar 28 11:33:51.682: INFO: Waiting for pod pod-9d4aed24-f513-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-emptydir-1fmyz' status to be 'success or failure'(found phase: "Running", readiness: true) (2.002293427s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-9d4aed24-f513-11e5-b1e1-0862662cf845 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rwxrwxrwx
[AfterEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:33:53.702: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-1fmyz" for this suite.
E0328 11:33:56.936454 11167 iowatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
• [SLOW TEST:11.035 seconds]
EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:113
should support (root,0777,tmpfs) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:72
------------------------------
S
------------------------------
Proxy version v1
should proxy logs on node using proxy subresource [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:60
[BeforeEach] version v1
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:33:58.710: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:33:58.711: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-4sx18
Mar 28 11:33:58.712: INFO: Get service account default in ns e2e-tests-proxy-4sx18 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:34:00.713: INFO: Service account default in ns e2e-tests-proxy-4sx18 with secrets found. (2.001431142s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:34:00.713: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-4sx18
Mar 28 11:34:00.713: INFO: Service account default in ns e2e-tests-proxy-4sx18 with secrets found. (608.941µs)
[It] should proxy logs on node using proxy subresource [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:60
Mar 28 11:34:00.717: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.986785ms)
Mar 28 11:34:00.719: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.891771ms)
Mar 28 11:34:00.720: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.845782ms)
Mar 28 11:34:00.722: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.834161ms)
Mar 28 11:34:00.724: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.81441ms)
Mar 28 11:34:00.726: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 2.306964ms)
Mar 28 11:34:00.728: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.953431ms)
Mar 28 11:34:00.730: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.855223ms)
Mar 28 11:34:00.732: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.815578ms)
Mar 28 11:34:00.734: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.846493ms)
Mar 28 11:34:00.736: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.80712ms)
Mar 28 11:34:00.738: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.858112ms)
Mar 28 11:34:00.740: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.849759ms)
Mar 28 11:34:00.742: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.976796ms)
Mar 28 11:34:00.743: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.879657ms)
Mar 28 11:34:00.751: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 7.80914ms)
Mar 28 11:34:00.753: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 2.048178ms)
Mar 28 11:34:00.755: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 2.020671ms)
Mar 28 11:34:00.757: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.981932ms)
Mar 28 11:34:00.759: INFO: /api/v1/nodes/127.0.0.1/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.809887ms)
[AfterEach] version v1
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:34:00.759: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-4sx18" for this suite.
• [SLOW TEST:7.054 seconds]
Proxy
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40
version v1
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:39
should proxy logs on node using proxy subresource [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:60
------------------------------
SSSSSSSS
------------------------------
Kubectl client Proxy server
should support proxy with --port 0 [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1115
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:34:05.764: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:34:05.765: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-fuqvc
Mar 28 11:34:05.766: INFO: Get service account default in ns e2e-tests-kubectl-fuqvc failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:34:07.767: INFO: Service account default in ns e2e-tests-kubectl-fuqvc with secrets found. (2.001914423s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:34:07.767: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-fuqvc
Mar 28 11:34:07.769: INFO: Service account default in ns e2e-tests-kubectl-fuqvc with secrets found. (1.13887ms)
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:120
[It] should support proxy with --port 0 [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1115
STEP: starting the proxy server
Mar 28 11:34:07.769: INFO: Asynchronously running '/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: curling proxy /api/ output
[AfterEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:34:07.780: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fuqvc" for this suite.
• [SLOW TEST:7.021 seconds]
Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1145
Proxy server
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1144
should support proxy with --port 0 [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1115
------------------------------
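A sketch of what the test drives: -p 0 asks kubectl proxy to bind any free port and print the address it is serving on, and the test then curls /api/ through it. The port below is whatever kubectl reports, not a fixed value:

kubectl --kubeconfig=/root/.kube/config proxy -p 0 &
# kubectl prints something like "Starting to serve on 127.0.0.1:<PORT>"; use that port:
curl http://127.0.0.1:<PORT>/api/
------------------------------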
S
------------------------------
Pods
should *not* be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:691
[BeforeEach] Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:34:12.785: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:34:12.786: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-pq5n1
Mar 28 11:34:12.787: INFO: Get service account default in ns e2e-tests-pods-pq5n1 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:34:14.788: INFO: Service account default in ns e2e-tests-pods-pq5n1 with secrets found. (2.001818615s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:34:14.788: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-pq5n1
Mar 28 11:34:14.789: INFO: Service account default in ns e2e-tests-pods-pq5n1 with secrets found. (613.528µs)
[It] should *not* be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:691
STEP: Creating pod liveness-exec in namespace e2e-tests-pods-pq5n1
Mar 28 11:34:14.792: INFO: Waiting up to 5m0s for pod liveness-exec status to be !pending
Mar 28 11:34:14.792: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-pq5n1' status to be '!pending'(found phase: "Pending", readiness: false) (733.294µs elapsed)
Mar 28 11:34:16.794: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-pq5n1' status to be '!pending'(found phase: "Pending", readiness: false) (2.002034479s elapsed)
Mar 28 11:34:18.795: INFO: Saw pod 'liveness-exec' in namespace 'e2e-tests-pods-pq5n1' out of pending state (found '"Running"')
Mar 28 11:34:18.795: INFO: Started pod liveness-exec in namespace e2e-tests-pods-pq5n1
STEP: checking the pod's current state and verifying that restartCount is present
Mar 28 11:34:18.796: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:36:18.896: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-pq5n1" for this suite.
• [SLOW TEST:131.120 seconds]
Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:1222
should *not* be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:691
------------------------------
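The liveness-exec pod keeps /tmp/health in place for its whole lifetime, so the exec probe keeps succeeding and the restart count stays at 0. A rough equivalent manifest (image, args, and probe timings are assumptions, not taken from the test source):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: gcr.io/google_containers/busybox:1.24
    # create the health file once and then just sleep, so "cat /tmp/health" always succeeds
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
EOF

kubectl get pod liveness-exec should keep showing RESTARTS 0 for as long as the pod runs.
------------------------------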
SSSSS
------------------------------
Proxy version v1
should proxy logs on node [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:56
[BeforeEach] version v1
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:36:23.906: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:36:23.907: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-ny0fo
Mar 28 11:36:23.908: INFO: Get service account default in ns e2e-tests-proxy-ny0fo failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:36:25.909: INFO: Service account default in ns e2e-tests-proxy-ny0fo with secrets found. (2.001565415s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:36:25.909: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-ny0fo
Mar 28 11:36:25.909: INFO: Service account default in ns e2e-tests-proxy-ny0fo with secrets found. (727.557µs)
[It] should proxy logs on node [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:56
Mar 28 11:36:25.914: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 3.378531ms)
Mar 28 11:36:25.916: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.903586ms)
Mar 28 11:36:25.918: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.915552ms)
Mar 28 11:36:25.920: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.911967ms)
Mar 28 11:36:25.922: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 2.045793ms)
Mar 28 11:36:25.924: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.940039ms)
Mar 28 11:36:25.927: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 2.891833ms)
Mar 28 11:36:25.929: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.929289ms)
Mar 28 11:36:25.931: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.807443ms)
Mar 28 11:36:25.933: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.813409ms)
Mar 28 11:36:25.934: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.827046ms)
Mar 28 11:36:25.937: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 2.100614ms)
Mar 28 11:36:25.939: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.948231ms)
Mar 28 11:36:25.940: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.899248ms)
Mar 28 11:36:25.942: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.856048ms)
Mar 28 11:36:25.944: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 2.09714ms)
Mar 28 11:36:25.947: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 2.119447ms)
Mar 28 11:36:25.949: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 2.027907ms)
Mar 28 11:36:25.951: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.923184ms)
Mar 28 11:36:25.953: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.944378ms)
[AfterEach] version v1
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:36:25.953: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-ny0fo" for this suite.
• [SLOW TEST:7.062 seconds]
Proxy
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40
version v1
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:39
should proxy logs on node [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:56
------------------------------
Port forwarding With a server that expects a client request
should support a client that connects, sends no data, and disconnects [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:154
[BeforeEach] Port forwarding
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:36:30.967: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:36:30.969: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-port-forwarding-8hscl
Mar 28 11:36:30.969: INFO: Get service account default in ns e2e-tests-port-forwarding-8hscl failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:36:32.970: INFO: Service account default in ns e2e-tests-port-forwarding-8hscl with secrets found. (2.001696201s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:36:32.970: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-port-forwarding-8hscl
Mar 28 11:36:32.971: INFO: Service account default in ns e2e-tests-port-forwarding-8hscl with secrets found. (640.95µs)
[It] should support a client that connects, sends no data, and disconnects [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:154
STEP: creating the target pod
Mar 28 11:36:32.973: INFO: Waiting up to 5m0s for pod pfpod status to be running
Mar 28 11:36:32.974: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-8hscl' status to be 'running'(found phase: "Pending", readiness: false) (815.154µs elapsed)
Mar 28 11:36:34.975: INFO: Found pod 'pfpod' on node '127.0.0.1'
STEP: Running 'kubectl port-forward'
Mar 28 11:36:34.975: INFO: starting port-forward command and streaming output
Mar 28 11:36:34.975: INFO: Asynchronously running '/bin/kubectl kubectl --kubeconfig=/root/.kube/config port-forward --namespace=e2e-tests-port-forwarding-8hscl pfpod :80'
Mar 28 11:36:34.976: INFO: reading from `kubectl port-forward` command's stderr
STEP: Dialing the local port
STEP: Closing the connection to the local port
STEP: Waiting for the target pod to stop running
Mar 28 11:36:34.999: INFO: Waiting up to 30s for pod pfpod status to be no longer running
Mar 28 11:36:35.000: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-8hscl' status to be 'no longer running'(found phase: "Running", readiness: true) (1.238452ms elapsed)
Mar 28 11:36:37.002: INFO: Found pod 'pfpod' with status 'Failed' on node '127.0.0.1'
STEP: Retrieving logs from the target pod
STEP: Verifying logs
[AfterEach] Port forwarding
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:36:37.004: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-port-forwarding-8hscl" for this suite.
• [SLOW TEST:11.042 seconds]
Port forwarding
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:268
With a server that expects a client request
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:216
should support a client that connects, sends no data, and disconnects [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:154
------------------------------
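Reproducing the client behaviour by hand, as a sketch: ':80' makes kubectl pick a free local port for the pod's port 80, and connecting with an empty stdin opens and immediately closes the connection without sending any data (the namespace and pod name are the ones from this run and are gone after cleanup):

kubectl --kubeconfig=/root/.kube/config port-forward --namespace=e2e-tests-port-forwarding-8hscl pfpod :80 &
# kubectl prints the local port it chose; substitute it below
nc 127.0.0.1 <LOCAL_PORT> </dev/null
------------------------------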
S
------------------------------
Downward API volume
should update annotations on modification [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:132
[BeforeEach] Downward API volume
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:36:42.009: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:36:42.011: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-wp6iz
Mar 28 11:36:42.011: INFO: Get service account default in ns e2e-tests-downward-api-wp6iz failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:36:44.012: INFO: Service account default in ns e2e-tests-downward-api-wp6iz with secrets found. (2.001609287s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:36:44.012: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-wp6iz
Mar 28 11:36:44.013: INFO: Service account default in ns e2e-tests-downward-api-wp6iz with secrets found. (707.083µs)
[It] should update annotations on modification [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:132
STEP: Creating the pod
Mar 28 11:36:44.015: INFO: Waiting up to 5m0s for pod annotationupdate053450d8-f514-11e5-b1e1-0862662cf845 status to be running
Mar 28 11:36:44.016: INFO: Waiting for pod annotationupdate053450d8-f514-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-downward-api-wp6iz' status to be 'running'(found phase: "Pending", readiness: false) (911.058µs elapsed)
Mar 28 11:36:46.018: INFO: Found pod 'annotationupdate053450d8-f514-11e5-b1e1-0862662cf845' on node '127.0.0.1'
STEP: Deleting the pod
[AfterEach] Downward API volume
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Collecting events from namespace "e2e-tests-downward-api-wp6iz".
Mar 28 11:37:31.037: INFO: At 2016-03-28 11:36:44 -0700 PDT - event for annotationupdate053450d8-f514-11e5-b1e1-0862662cf845: {default-scheduler } Scheduled: Successfully assigned annotationupdate053450d8-f514-11e5-b1e1-0862662cf845 to 127.0.0.1
Mar 28 11:37:31.037: INFO: At 2016-03-28 11:36:44 -0700 PDT - event for annotationupdate053450d8-f514-11e5-b1e1-0862662cf845: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/mounttest:0.6" already present on machine
Mar 28 11:37:31.037: INFO: At 2016-03-28 11:36:45 -0700 PDT - event for annotationupdate053450d8-f514-11e5-b1e1-0862662cf845: {kubelet 127.0.0.1} Created: Created container with docker id 6068690ad6fb
Mar 28 11:37:31.037: INFO: At 2016-03-28 11:36:45 -0700 PDT - event for annotationupdate053450d8-f514-11e5-b1e1-0862662cf845: {kubelet 127.0.0.1} Started: Started container with docker id 6068690ad6fb
Mar 28 11:37:31.040: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 28 11:37:31.040: INFO: k8s-etcd-127.0.0.1 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:07 -0700 PDT }]
Mar 28 11:37:31.040: INFO: k8s-master-127.0.0.1 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:09 -0700 PDT }]
Mar 28 11:37:31.040: INFO: k8s-proxy-127.0.0.1 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:06 -0700 PDT }]
Mar 28 11:37:31.040: INFO: kube-dns-v10-d8w4s 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:51 -0700 PDT }]
Mar 28 11:37:31.040: INFO:
Mar 28 11:37:31.041: INFO:
Logging node info for node 127.0.0.1
Mar 28 11:37:31.043: INFO: Node Info: &{{ } {127.0.0.1 /api/v1/nodes/127.0.0.1 137b6e8b-f50e-11e5-a3fb-0862662cf845 4352 0 2016-03-28 10:54:10 -0700 PDT <nil> <nil> map[kubernetes.io/hostname:127.0.0.1 kubernetes.io/e2e-0faa8565-f50f-11e5-b1e1-0862662cf845:42] map[]} { 127.0.0.1 false} {map[cpu:{4.000 DecimalSI} memory:{16767156224.000 BinarySI} pods:{110.000 DecimalSI}] map[cpu:{4.000 DecimalSI} memory:{16767156224.000 BinarySI} pods:{110.000 DecimalSI}] [{OutOfDisk False 2016-03-28 11:37:21 -0700 PDT 2016-03-28 10:54:10 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-28 11:37:21 -0700 PDT 2016-03-28 10:54:10 -0700 PDT KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {12fd47e6d4c54914a88972c6cc8d81de 44EC9C00-D7DA-11DD-9398-0862662CF845 f0f5728b-196f-47d9-9672-2a11ecdb3d77 4.4.6-300.fc23.x86_64 Debian GNU/Linux 8 (jessie) docker://1.10.3 v1.2.0 v1.2.0} [{[brs-ui:latest localhost:5000/brs-ui:latest] 605145331} {[<none>:<none>] 605144068} {[<none>:<none>] 605138643} {[ci1.brs.rzc.cudaops.com:5000/bn/brs-ui:latest] 613900067} {[kube-build:build-3df9ed65b2] 1628893568} {[<none>:<none>] 1628893569} {[<none>:<none>] 1628893559} {[<none>:<none>] 1627726538} {[<none>:<none>] 1627726538} {[<none>:<none>] 1627726538} {[<none>:<none>] 1628892949} {[localhost:5000/log-indexer-config:latest log-indexer-config:latest] 717448316} {[localhost:5000/log-indexer:latest log-indexer:latest] 383713661} {[<none>:<none>] 310976263} {[<none>:<none>] 321414548} {[<none>:<none>] 383714083} {[ci1.brs.rzc.cudaops.com:5000/bn/curator:latest] 736669380} {[<none>:<none>] 413912127} {[<none>:<none>] 717449212} {[<none>:<none>] 717449200} {[<none>:<none>] 717448730} {[<none>:<none>] 699153894} {[<none>:<none>] 413912971} {[<none>:<none>] 413912975} {[<none>:<none>] 413913005} {[<none>:<none>] 413912873} {[localhost:5000/log-indexers:latest log-indexers:latest] 717448654} {[<none>:<none>] 717448624} {[localhost:5000/log-server:latest log-server:latest] 375265658} {[<none>:<none>] 717448145} {[<none>:<none>] 717448145} {[<none>:<none>] 717448145} {[<none>:<none>] 717448145} {[<none>:<none>] 413906461} {[<none>:<none>] 413905748} {[<none>:<none>] 413905807} {[<none>:<none>] 413905790} {[<none>:<none>] 413905790} {[<none>:<none>] 605680907} {[<none>:<none>] 413905763} {[<none>:<none>] 375258621} {[<none>:<none>] 375258592} {[python:3.5] 689482475} {[php:5.6-apache] 479350972} {[ci1.brs.rzc.cudaops.com:5000/bn/ponyo:latest] 937604426} {[busybox:latest] 1112820} {[gcr.io/google_containers/hyperkube-amd64:v1.2.0] 316607821} {[postgres:9.5] 264519910} {[elasticsearch:2.2] 347050720} {[java:openjdk-8-jre] 310976263} {[ci1.brs.rzc.cudaops.com:5000/bn/notifier:latest] 736471567} {[ci1.brs.rzc.cudaops.com:5000/bn/log-server:latest] 374723147} {[nginx:latest] 190461396} {[registry:2] 165760106} {[ci1.brs.rzc.cudaops.com:5000/bn/postfix:latest] 190134010} {[ci1.brs.rzc.cudaops.com:5000/bn/log-indexer:latest] 413367511} {[ci1.brs.rzc.cudaops.com:5000/bn/kafka:latest] 437445503} {[redis:latest] 177537508} {[gcr.io/google_samples/gb-frontend:v4] 510197625} {[gcr.io/google_containers/nettest:1.7] 24051275} {[gcr.io/google_containers/kube-cross:v1.4.2-1] 1551078122} {[gcr.io/google_containers/etcd-amd64:2.2.1] 28192476} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/kube2sky:1.12] 24482187} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/example-dns-frontend:v1] 677794775} 
{[gcr.io/google_containers/example-dns-backend:v1] 675068800} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/etcd:2.2.1] 28191895} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/skydns:2015-10-13-8c72f8c] 40547562} {[gcr.io/google_containers/pause:2.0] 350164} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_samples/gb-redisslave:v1] 109462535} {[gcr.io/google_containers/exechealthz:1.0] 7095869} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190122856} {[gcr.io/google_containers/mounttest:0.2] 1752375} {[gcr.io/google_containers/dnsutils:e2e] 141873580} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/serve_hostname:1.1] 4522409} {[gcr.io/google_containers/redis:e2e] 418929769} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/update-demo:kitten] 4549069} {[gcr.io/google_containers/nginx:1.7.9] 91641000} {[kubernetes/redis:v1] 145954175} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/busybox:latest] 2429728} {[gcr.io/google_containers/liveness:e2e] 4387474}]}}
Mar 28 11:37:31.043: INFO:
Logging kubelet events for node 127.0.0.1
Mar 28 11:37:31.044: INFO:
Logging pods the kubelet thinks is on node 127.0.0.1
Mar 28 11:37:31.048: INFO: k8s-master-127.0.0.1 started at <nil> (0 container statuses recorded)
Mar 28 11:37:31.048: INFO: k8s-etcd-127.0.0.1 started at <nil> (0 container statuses recorded)
Mar 28 11:37:31.048: INFO: k8s-proxy-127.0.0.1 started at <nil> (0 container statuses recorded)
Mar 28 11:37:31.048: INFO: annotationupdate053450d8-f514-11e5-b1e1-0862662cf845 started at <nil> (0 container statuses recorded)
Mar 28 11:37:31.048: INFO: kube-dns-v10-d8w4s started at <nil> (0 container statuses recorded)
Mar 28 11:37:31.107: INFO: ERROR kubelet_docker_errors{operation_type="inspect_container"} => 57 @[0]
Mar 28 11:37:31.107: INFO: ERROR kubelet_docker_errors{operation_type="inspect_image"} => 15 @[0]
Mar 28 11:37:31.107: INFO: ERROR kubelet_docker_errors{operation_type="logs"} => 1 @[0]
Mar 28 11:37:31.107: INFO: ERROR kubelet_docker_errors{operation_type="start_container"} => 2 @[0]
Mar 28 11:37:31.107: INFO: ERROR kubelet_docker_errors{operation_type="stop_container"} => 57 @[0]
Mar 28 11:37:31.107: INFO:
Latency metrics for node 127.0.0.1
Mar 28 11:37:31.107: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wp6iz" for this suite.
• Failure [54.104 seconds]
Downward API volume
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:133
should update annotations on modification [Conformance] [It]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:132
Timed out after 45.000s.
Expected
<string>: Error reading file /etc/annotations: open /etc/annotations: no such file or directory, retrying
Error reading file /etc/annotations: open /etc/annotations: no such file or directory, retrying
Error reading file /etc/annotations: open /etc/annotations: no such file or directory, retrying
Error reading file /etc/annotations: open /etc/annotations: no such file or directory, retrying
Error reading file /etc/annotations: open /etc/annotations: no such file or directory, retrying
Error reading file /etc/annotations: open /etc/annotations: no such file or directory, retrying
Error reading file /etc/annotations: open /etc/annotations: no such file or directory, retrying
Error reading file /etc/annotations: open /etc/annotations: no such file or directory, retrying
Error reading file /etc/annotations: open /etc/annotations: no such file or directory, retrying
Error reading file /etc/annotations: open /etc/annotations: no such file or directory, retrying
Error reading file /etc/annotations: open /etc/annotations: no such file or directory, retrying
Error reading file /etc/annotations: open /etc/annotations: no such file or directory, retrying
Error reading file /etc/annotations: open /etc/annotations: no such file or directory, retrying
Error reading file /etc/annotations: open /etc/annotations: no such file or directory, retrying
Error reading file /etc/annotations: open /etc/annotations: no such file or directory, retrying
Error reading file /etc/annotations: open /etc/annotations: no such file or directory, retrying
Error reading file /etc/annotations: open /etc/annotations: no such file or directory, retrying
Error reading file /etc/annotations: open /etc/annotations: no such file or directory, retrying
Error reading file /etc/annotations: open /etc/annotations: no such file or directory, retrying
Error reading file /etc/annotations: open /etc/annotations: no such file or directory, retrying
Error reading file /etc/annotations: open /etc/annotations: no such file or directory, retrying
Error reading file /etc/annotations: open /etc/annotations: no such file or directory, retrying
Error reading file /etc/annotations: open /etc/annotations: no such file or directory, retrying
to contain substring
<string>: builder="bar"
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:119
------------------------------
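The failure above means the downward API volume never showed up inside the container: it kept logging 'Error reading file /etc/annotations' instead of eventually printing the updated builder="bar" annotation. For reference, the kind of wiring the test depends on looks roughly like the following sketch (image, mount path, command, and the initial annotation value are assumptions; the real test updates the pod's annotations and expects builder="bar" to appear in the mounted file):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    builder: foo                 # hypothetical initial value
spec:
  containers:
  - name: client
    image: gcr.io/google_containers/busybox:1.24
    # keep printing the projected annotations file so updates become visible in the logs
    command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
# once the pod is Running, change the annotation and watch the file follow:
kubectl annotate pod annotationupdate-demo --overwrite builder=bar
------------------------------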
S
------------------------------
ConfigMap
should be consumable from pods in volume [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:101
[BeforeEach] ConfigMap
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:37:36.113: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:37:36.114: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-configmap-ssdx1
Mar 28 11:37:36.115: INFO: Get service account default in ns e2e-tests-configmap-ssdx1 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:37:38.116: INFO: Service account default in ns e2e-tests-configmap-ssdx1 with secrets found. (2.001510742s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:37:38.116: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-configmap-ssdx1
Mar 28 11:37:38.117: INFO: Service account default in ns e2e-tests-configmap-ssdx1 with secrets found. (730.049µs)
[It] should be consumable from pods in volume [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:101
STEP: Creating configMap with name configmap-test-volume-2573e979-f514-11e5-b1e1-0862662cf845
STEP: Creating a pod to test consume configMaps
Mar 28 11:37:38.134: INFO: Waiting up to 5m0s for pod pod-configmaps-2574130d-f514-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:37:38.136: INFO: No Status.Info for container 'configmap-volume-test' in pod 'pod-configmaps-2574130d-f514-11e5-b1e1-0862662cf845' yet
Mar 28 11:37:38.136: INFO: Waiting for pod pod-configmaps-2574130d-f514-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-configmap-ssdx1' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.642803ms elapsed)
Mar 28 11:37:40.137: INFO: Unexpected error occurred: pod 'pod-configmaps-2574130d-f514-11e5-b1e1-0862662cf845' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:2016-03-28 11:37:39 -0700 PDT FinishedAt:2016-03-28 11:37:39 -0700 PDT ContainerID:docker://87b8fe7adc5f2806f09f31ccf3fb9f29d831bc1a422a917440229080af9f7d5b}
STEP: Cleaning up the configMap
[AfterEach] ConfigMap
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
STEP: Collecting events from namespace "e2e-tests-configmap-ssdx1".
Mar 28 11:37:40.157: INFO: At 2016-03-28 11:37:38 -0700 PDT - event for pod-configmaps-2574130d-f514-11e5-b1e1-0862662cf845: {default-scheduler } Scheduled: Successfully assigned pod-configmaps-2574130d-f514-11e5-b1e1-0862662cf845 to 127.0.0.1
Mar 28 11:37:40.158: INFO: At 2016-03-28 11:37:38 -0700 PDT - event for pod-configmaps-2574130d-f514-11e5-b1e1-0862662cf845: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/mounttest:0.6" already present on machine
Mar 28 11:37:40.158: INFO: At 2016-03-28 11:37:39 -0700 PDT - event for pod-configmaps-2574130d-f514-11e5-b1e1-0862662cf845: {kubelet 127.0.0.1} Created: Created container with docker id 87b8fe7adc5f
Mar 28 11:37:40.158: INFO: At 2016-03-28 11:37:39 -0700 PDT - event for pod-configmaps-2574130d-f514-11e5-b1e1-0862662cf845: {kubelet 127.0.0.1} Started: Started container with docker id 87b8fe7adc5f
Mar 28 11:37:40.160: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 28 11:37:40.160: INFO: k8s-etcd-127.0.0.1 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:07 -0700 PDT }]
Mar 28 11:37:40.160: INFO: k8s-master-127.0.0.1 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:09 -0700 PDT }]
Mar 28 11:37:40.160: INFO: k8s-proxy-127.0.0.1 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:06 -0700 PDT }]
Mar 28 11:37:40.160: INFO: kube-dns-v10-d8w4s 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-03-28 10:54:51 -0700 PDT }]
Mar 28 11:37:40.160: INFO:
Mar 28 11:37:40.162: INFO:
Logging node info for node 127.0.0.1
Mar 28 11:37:40.163: INFO: Node Info: &{{ } {127.0.0.1 /api/v1/nodes/127.0.0.1 137b6e8b-f50e-11e5-a3fb-0862662cf845 4366 0 2016-03-28 10:54:10 -0700 PDT <nil> <nil> map[kubernetes.io/e2e-0faa8565-f50f-11e5-b1e1-0862662cf845:42 kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1 false} {map[cpu:{4.000 DecimalSI} memory:{16767156224.000 BinarySI} pods:{110.000 DecimalSI}] map[pods:{110.000 DecimalSI} cpu:{4.000 DecimalSI} memory:{16767156224.000 BinarySI}] [{OutOfDisk False 2016-03-28 11:37:31 -0700 PDT 2016-03-28 10:54:10 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {Ready True 2016-03-28 11:37:31 -0700 PDT 2016-03-28 10:54:10 -0700 PDT KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {12fd47e6d4c54914a88972c6cc8d81de 44EC9C00-D7DA-11DD-9398-0862662CF845 f0f5728b-196f-47d9-9672-2a11ecdb3d77 4.4.6-300.fc23.x86_64 Debian GNU/Linux 8 (jessie) docker://1.10.3 v1.2.0 v1.2.0} [{[brs-ui:latest localhost:5000/brs-ui:latest] 605145331} {[<none>:<none>] 605144068} {[<none>:<none>] 605138643} {[ci1.brs.rzc.cudaops.com:5000/bn/brs-ui:latest] 613900067} {[kube-build:build-3df9ed65b2] 1628893568} {[<none>:<none>] 1628893569} {[<none>:<none>] 1628893559} {[<none>:<none>] 1627726538} {[<none>:<none>] 1627726538} {[<none>:<none>] 1627726538} {[<none>:<none>] 1628892949} {[localhost:5000/log-indexer-config:latest log-indexer-config:latest] 717448316} {[localhost:5000/log-indexer:latest log-indexer:latest] 383713661} {[<none>:<none>] 310976263} {[<none>:<none>] 321414548} {[<none>:<none>] 383714083} {[ci1.brs.rzc.cudaops.com:5000/bn/curator:latest] 736669380} {[<none>:<none>] 413912127} {[<none>:<none>] 717449212} {[<none>:<none>] 717449200} {[<none>:<none>] 717448730} {[<none>:<none>] 699153894} {[<none>:<none>] 413912971} {[<none>:<none>] 413912975} {[<none>:<none>] 413913005} {[<none>:<none>] 413912873} {[localhost:5000/log-indexers:latest log-indexers:latest] 717448654} {[<none>:<none>] 717448624} {[localhost:5000/log-server:latest log-server:latest] 375265658} {[<none>:<none>] 717448145} {[<none>:<none>] 717448145} {[<none>:<none>] 717448145} {[<none>:<none>] 717448145} {[<none>:<none>] 413906461} {[<none>:<none>] 413905748} {[<none>:<none>] 413905807} {[<none>:<none>] 413905790} {[<none>:<none>] 413905790} {[<none>:<none>] 605680907} {[<none>:<none>] 413905763} {[<none>:<none>] 375258621} {[<none>:<none>] 375258592} {[python:3.5] 689482475} {[php:5.6-apache] 479350972} {[ci1.brs.rzc.cudaops.com:5000/bn/ponyo:latest] 937604426} {[busybox:latest] 1112820} {[gcr.io/google_containers/hyperkube-amd64:v1.2.0] 316607821} {[postgres:9.5] 264519910} {[elasticsearch:2.2] 347050720} {[java:openjdk-8-jre] 310976263} {[ci1.brs.rzc.cudaops.com:5000/bn/notifier:latest] 736471567} {[ci1.brs.rzc.cudaops.com:5000/bn/log-server:latest] 374723147} {[nginx:latest] 190461396} {[registry:2] 165760106} {[ci1.brs.rzc.cudaops.com:5000/bn/postfix:latest] 190134010} {[ci1.brs.rzc.cudaops.com:5000/bn/log-indexer:latest] 413367511} {[ci1.brs.rzc.cudaops.com:5000/bn/kafka:latest] 437445503} {[redis:latest] 177537508} {[gcr.io/google_samples/gb-frontend:v4] 510197625} {[gcr.io/google_containers/nettest:1.7] 24051275} {[gcr.io/google_containers/kube-cross:v1.4.2-1] 1551078122} {[gcr.io/google_containers/etcd-amd64:2.2.1] 28192476} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/kube2sky:1.12] 24482187} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/example-dns-frontend:v1] 677794775} 
{[gcr.io/google_containers/example-dns-backend:v1] 675068800} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/etcd:2.2.1] 28191895} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/skydns:2015-10-13-8c72f8c] 40547562} {[gcr.io/google_containers/pause:2.0] 350164} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_samples/gb-redisslave:v1] 109462535} {[gcr.io/google_containers/exechealthz:1.0] 7095869} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190122856} {[gcr.io/google_containers/mounttest:0.2] 1752375} {[gcr.io/google_containers/dnsutils:e2e] 141873580} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/serve_hostname:1.1] 4522409} {[gcr.io/google_containers/redis:e2e] 418929769} {[gcr.io/google_containers/update-demo:kitten] 4549069} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/nginx:1.7.9] 91641000} {[kubernetes/redis:v1] 145954175} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/busybox:latest] 2429728} {[gcr.io/google_containers/liveness:e2e] 4387474}]}}
Mar 28 11:37:40.163: INFO:
Logging kubelet events for node 127.0.0.1
Mar 28 11:37:40.165: INFO:
Logging pods the kubelet thinks is on node 127.0.0.1
Mar 28 11:37:40.170: INFO: k8s-proxy-127.0.0.1 started at <nil> (0 container statuses recorded)
Mar 28 11:37:40.170: INFO: pod-configmaps-2574130d-f514-11e5-b1e1-0862662cf845 started at <nil> (0 container statuses recorded)
Mar 28 11:37:40.170: INFO: kube-dns-v10-d8w4s started at <nil> (0 container statuses recorded)
Mar 28 11:37:40.170: INFO: k8s-master-127.0.0.1 started at <nil> (0 container statuses recorded)
Mar 28 11:37:40.170: INFO: k8s-etcd-127.0.0.1 started at <nil> (0 container statuses recorded)
Mar 28 11:37:40.227: INFO: ERROR kubelet_docker_errors{operation_type="inspect_container"} => 58 @[0]
Mar 28 11:37:40.227: INFO: ERROR kubelet_docker_errors{operation_type="inspect_image"} => 15 @[0]
Mar 28 11:37:40.227: INFO: ERROR kubelet_docker_errors{operation_type="logs"} => 1 @[0]
Mar 28 11:37:40.227: INFO: ERROR kubelet_docker_errors{operation_type="start_container"} => 2 @[0]
Mar 28 11:37:40.227: INFO: ERROR kubelet_docker_errors{operation_type="stop_container"} => 61 @[0]
Mar 28 11:37:40.227: INFO:
Latency metrics for node 127.0.0.1
Mar 28 11:37:40.227: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-ssdx1" for this suite.
• Failure [9.120 seconds]
ConfigMap
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:333
should be consumable from pods in volume [Conformance] [It]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:101
Expected error:
<*errors.errorString | 0xc82130b390>: {
s: "pod 'pod-configmaps-2574130d-f514-11e5-b1e1-0862662cf845' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:2016-03-28 11:37:39 -0700 PDT FinishedAt:2016-03-28 11:37:39 -0700 PDT ContainerID:docker://87b8fe7adc5f2806f09f31ccf3fb9f29d831bc1a422a917440229080af9f7d5b}",
}
pod 'pod-configmaps-2574130d-f514-11e5-b1e1-0862662cf845' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:2016-03-28 11:37:39 -0700 PDT FinishedAt:2016-03-28 11:37:39 -0700 PDT ContainerID:docker://87b8fe7adc5f2806f09f31ccf3fb9f29d831bc1a422a917440229080af9f7d5b}
not to have occurred
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1637
------------------------------
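For context, the shape of what this test sets up: a configMap mounted as a volume and a short-lived pod that reads a key from the mount and exits 0. The ExitCode:1 above is the test container exiting with an error after it was scheduled and started, not a scheduling problem. A minimal sketch with assumed names, contents, and image (the real test uses the mounttest image):

kubectl create configmap configmap-volume-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: gcr.io/google_containers/busybox:1.24
    # print the projected key once and exit
    command: ["/bin/sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-volume-demo
EOF
kubectl logs pod-configmap-volume-demo   # expect: value-1
------------------------------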
Networking
should provide Internet connection for containers [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:54
[BeforeEach] Networking
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:37:45.234: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:37:45.235: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-qilbo
Mar 28 11:37:45.236: INFO: Get service account default in ns e2e-tests-nettest-qilbo failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:37:47.237: INFO: Service account default in ns e2e-tests-nettest-qilbo with secrets found. (2.001693086s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:37:47.237: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-qilbo
Mar 28 11:37:47.237: INFO: Service account default in ns e2e-tests-nettest-qilbo with secrets found. (715.749µs)
[BeforeEach] Networking
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
STEP: Executing a successful http request from the external internet
[It] should provide Internet connection for containers [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:54
STEP: Running container which tries to wget google.com
Mar 28 11:37:47.356: INFO: Waiting up to 5m0s for pod wget-test status to be success or failure
Mar 28 11:37:47.357: INFO: No Status.Info for container 'wget-test-container' in pod 'wget-test' yet
Mar 28 11:37:47.357: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-qilbo' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.202742ms elapsed)
STEP: Saw pod success
[AfterEach] Networking
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:37:49.370: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-nettest-qilbo" for this suite.
• [SLOW TEST:9.147 seconds]
Networking
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:258
should provide Internet connection for containers [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:54
------------------------------
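The wget-test pod is a one-shot container that has to reach the outside world for the pod to end up Succeeded. A rough hand-rolled equivalent (image and exact wget flags are assumptions):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: wget-test
spec:
  restartPolicy: Never
  containers:
  - name: wget-test-container
    image: gcr.io/google_containers/busybox:1.24
    # fetch an external page with a 30s timeout; a non-zero exit means no egress
    command: ["wget", "-T", "30", "-O", "-", "http://google.com"]
EOF
kubectl get pod wget-test   # STATUS should reach Completed if containers have Internet access
------------------------------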
ReplicaSet
should serve a basic image on each replica with a public image [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:39
[BeforeEach] ReplicaSet
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:37:54.381: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:37:54.383: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-replicaset-61vvk
Mar 28 11:37:54.384: INFO: Get service account default in ns e2e-tests-replicaset-61vvk failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:37:56.385: INFO: Service account default in ns e2e-tests-replicaset-61vvk with secrets found. (2.001756044s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:37:56.385: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-replicaset-61vvk
Mar 28 11:37:56.385: INFO: Service account default in ns e2e-tests-replicaset-61vvk with secrets found. (634.51µs)
[It] should serve a basic image on each replica with a public image [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:39
STEP: Creating ReplicaSet my-hostname-basic-3057741d-f514-11e5-b1e1-0862662cf845
Mar 28 11:37:56.390: INFO: Pod name my-hostname-basic-3057741d-f514-11e5-b1e1-0862662cf845: Found 0 pods out of 2
Mar 28 11:38:01.391: INFO: Pod name my-hostname-basic-3057741d-f514-11e5-b1e1-0862662cf845: Found 2 pods out of 2
STEP: Ensuring each pod is running
Mar 28 11:38:01.391: INFO: Waiting up to 5m0s for pod my-hostname-basic-3057741d-f514-11e5-b1e1-0862662cf845-22mbe status to be running
Mar 28 11:38:01.393: INFO: Found pod 'my-hostname-basic-3057741d-f514-11e5-b1e1-0862662cf845-22mbe' on node '127.0.0.1'
Mar 28 11:38:01.393: INFO: Waiting up to 5m0s for pod my-hostname-basic-3057741d-f514-11e5-b1e1-0862662cf845-k87tg status to be running
Mar 28 11:38:01.394: INFO: Found pod 'my-hostname-basic-3057741d-f514-11e5-b1e1-0862662cf845-k87tg' on node '127.0.0.1'
STEP: Trying to dial each unique pod
Mar 28 11:38:06.398: INFO: Controller my-hostname-basic-3057741d-f514-11e5-b1e1-0862662cf845: Got expected result from replica 1 [my-hostname-basic-3057741d-f514-11e5-b1e1-0862662cf845-22mbe]: "my-hostname-basic-3057741d-f514-11e5-b1e1-0862662cf845-22mbe", 1 of 2 required successes so far
Mar 28 11:38:06.401: INFO: Controller my-hostname-basic-3057741d-f514-11e5-b1e1-0862662cf845: Got expected result from replica 2 [my-hostname-basic-3057741d-f514-11e5-b1e1-0862662cf845-k87tg]: "my-hostname-basic-3057741d-f514-11e5-b1e1-0862662cf845-k87tg", 2 of 2 required successes so far
STEP: deleting ReplicaSet my-hostname-basic-3057741d-f514-11e5-b1e1-0862662cf845 in namespace e2e-tests-replicaset-61vvk
Mar 28 11:38:08.416: INFO: Deleting RS my-hostname-basic-3057741d-f514-11e5-b1e1-0862662cf845 took: 2.014325742s
Mar 28 11:38:10.418: INFO: Terminating ReplicaSet my-hostname-basic-3057741d-f514-11e5-b1e1-0862662cf845 pods took: 2.002298668s
[AfterEach] ReplicaSet
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:38:10.418: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-61vvk" for this suite.
• [SLOW TEST:21.043 seconds]
ReplicaSet
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:47
should serve a basic image on each replica with a public image [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:39
------------------------------
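What the test creates is a two-replica ReplicaSet of serve_hostname pods, then dials each pod and expects it to answer with its own name (the two 'Got expected result from replica ...' lines above). A sketch of an equivalent manifest; extensions/v1beta1 is where ReplicaSet lived at the time of this run, while the labels and port are assumptions:

kubectl create -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: my-hostname-basic-demo
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: my-hostname-basic-demo
    spec:
      containers:
      - name: my-hostname-basic-demo
        image: gcr.io/google_containers/serve_hostname:1.1
        ports:
        - containerPort: 9376
EOF
kubectl get pods -l name=my-hostname-basic-demo
------------------------------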
SSSSS
------------------------------
Proxy version v1
should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:59
[BeforeEach] version v1
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:38:15.424: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:38:15.425: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-9nli8
Mar 28 11:38:15.425: INFO: Get service account default in ns e2e-tests-proxy-9nli8 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:38:17.426: INFO: Service account default in ns e2e-tests-proxy-9nli8 with secrets found. (2.001535429s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:38:17.426: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-9nli8
Mar 28 11:38:17.427: INFO: Service account default in ns e2e-tests-proxy-9nli8 with secrets found. (625.678µs)
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:59
Mar 28 11:38:17.430: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 2.003463ms)
Mar 28 11:38:17.432: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.847265ms)
Mar 28 11:38:17.434: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.817638ms)
Mar 28 11:38:17.436: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.780591ms)
Mar 28 11:38:17.438: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.802899ms)
Mar 28 11:38:17.440: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 2.010523ms)
Mar 28 11:38:17.442: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.901011ms)
Mar 28 11:38:17.444: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.829322ms)
Mar 28 11:38:17.445: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.90091ms)
Mar 28 11:38:17.447: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.822102ms)
Mar 28 11:38:17.449: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.799037ms)
Mar 28 11:38:17.451: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.791955ms)
Mar 28 11:38:17.453: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.811076ms)
Mar 28 11:38:17.455: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.958112ms)
Mar 28 11:38:17.457: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.938725ms)
Mar 28 11:38:17.459: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.949967ms)
Mar 28 11:38:17.461: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.91171ms)
Mar 28 11:38:17.462: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.879479ms)
Mar 28 11:38:17.464: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.897087ms)
Mar 28 11:38:17.466: INFO: /api/v1/nodes/127.0.0.1:10250/proxy/logs/: <pre>
<a href="alternatives.log">alternatives.log</a>
<a href="apt/">apt/</a>
<a href="bootstrap.... (200; 1.873574ms)
[AfterEach] version v1
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:38:17.466: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-9nli8" for this suite.
• [SLOW TEST:7.048 seconds]
Proxy
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40
version v1
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:39
should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:59
------------------------------
S
------------------------------
Kubectl client Kubectl run job
should create a job from an image when restart is OnFailure [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1049
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:38:22.472: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:38:22.473: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-h0o3f
Mar 28 11:38:22.474: INFO: Get service account default in ns e2e-tests-kubectl-h0o3f failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:38:24.475: INFO: Service account default in ns e2e-tests-kubectl-h0o3f with secrets found. (2.001674654s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:38:24.475: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-h0o3f
Mar 28 11:38:24.475: INFO: Service account default in ns e2e-tests-kubectl-h0o3f with secrets found. (605.617µs)
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:120
[BeforeEach] Kubectl run job
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1026
[It] should create a job from an image when restart is OnFailure [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1049
STEP: running the image gcr.io/google_containers/nginx:1.7.9
Mar 28 11:38:24.476: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --image=gcr.io/google_containers/nginx:1.7.9 --namespace=e2e-tests-kubectl-h0o3f'
Mar 28 11:38:24.492: INFO: stderr: ""
Mar 28 11:38:24.492: INFO: stdout: "job \"e2e-test-nginx-job\" created"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] Kubectl run job
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1030
Mar 28 11:38:24.493: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-h0o3f'
Mar 28 11:38:26.517: INFO: stderr: ""
Mar 28 11:38:26.517: INFO: stdout: "job \"e2e-test-nginx-job\" deleted"
[AfterEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:38:26.517: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-h0o3f" for this suite.
• [SLOW TEST:39.053 seconds]
Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1145
Kubectl run job
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1069
should create a job from an image when restart is OnFailure [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1049
------------------------------
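The commands logged above, cleaned up for reuse; with --restart=OnFailure, kubectl run creates a Job, which is what the stdout confirms and what the test then verifies and deletes:

kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job \
  --restart=OnFailure --image=gcr.io/google_containers/nginx:1.7.9
kubectl --kubeconfig=/root/.kube/config get jobs e2e-test-nginx-job
kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job
------------------------------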
SSSSSS
------------------------------
EmptyDir volumes
should support (non-root,0777,default) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:112
[BeforeEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:39:01.525: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:39:01.526: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-lzdb8
Mar 28 11:39:01.527: INFO: Get service account default in ns e2e-tests-emptydir-lzdb8 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:39:03.528: INFO: Service account default in ns e2e-tests-emptydir-lzdb8 with secrets found. (2.001529785s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:39:03.528: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-lzdb8
Mar 28 11:39:03.529: INFO: Service account default in ns e2e-tests-emptydir-lzdb8 with secrets found. (594.205µs)
[It] should support (non-root,0777,default) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:112
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 28 11:39:03.531: INFO: Waiting up to 5m0s for pod pod-585cb668-f514-11e5-b1e1-0862662cf845 status to be success or failure
Mar 28 11:39:03.532: INFO: No Status.Info for container 'test-container' in pod 'pod-585cb668-f514-11e5-b1e1-0862662cf845' yet
Mar 28 11:39:03.532: INFO: Waiting for pod pod-585cb668-f514-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-emptydir-lzdb8' status to be 'success or failure'(found phase: "Pending", readiness: false) (981.263µs elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-585cb668-f514-11e5-b1e1-0862662cf845 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rwxrwxrwx
[AfterEach] EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:39:05.547: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-lzdb8" for this suite.
• [SLOW TEST:9.033 seconds]
EmptyDir volumes
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:113
should support (non-root,0777,default) [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:112
------------------------------
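The '(non-root,0777,default)' case boils down to: run as a non-root UID, write a file into an emptyDir on the default (disk-backed) medium, chmod it 0777, and check the result; the fetched pod log above shows the expected -rwxrwxrwx. A hand-rolled sketch, with the UID, image, and commands as assumptions:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox:1.24
    securityContext:
      runAsUser: 1001
    # create a file in the emptyDir, open up its permissions, and report them
    command: ["/bin/sh", "-c", "touch /test-volume/test-file && chmod 0777 /test-volume/test-file && ls -l /test-volume/test-file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
EOF
kubectl logs emptydir-demo   # expect a -rwxrwxrwx entry for /test-volume/test-file
------------------------------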
SS
------------------------------
Kubectl client Kubectl api-versions
should check if v1 is in available api versions [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:535
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:39:10.558: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:39:10.559: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-fqpes
Mar 28 11:39:10.560: INFO: Get service account default in ns e2e-tests-kubectl-fqpes failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:39:12.561: INFO: Service account default in ns e2e-tests-kubectl-fqpes with secrets found. (2.001802151s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:39:12.561: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-fqpes
Mar 28 11:39:12.562: INFO: Service account default in ns e2e-tests-kubectl-fqpes with secrets found. (646.007µs)
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:120
[It] should check if v1 is in available api versions [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:535
STEP: validating api verions
Mar 28 11:39:12.562: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Mar 28 11:39:12.573: INFO: stderr: ""
Mar 28 11:39:12.573: INFO: stdout: "autoscaling/v1\nbatch/v1\nextensions/v1beta1\nv1"
[AfterEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:39:12.573: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fqpes" for this suite.
• [SLOW TEST:7.021 seconds]
Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1145
Kubectl api-versions
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:536
should check if v1 is in available api versions [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:535
------------------------------
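The equivalent manual check, with the groups this particular cluster reported above:

kubectl --kubeconfig=/root/.kube/config api-versions
# autoscaling/v1
# batch/v1
# extensions/v1beta1
# v1
------------------------------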
S
------------------------------
Pods
should have monotonically increasing restart count [Conformance] [Slow]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:748
[BeforeEach] Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:39:17.579: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:39:17.580: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-6k1ke
Mar 28 11:39:17.581: INFO: Get service account default in ns e2e-tests-pods-6k1ke failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:39:19.582: INFO: Service account default in ns e2e-tests-pods-6k1ke with secrets found. (2.001802965s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:39:19.582: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-6k1ke
Mar 28 11:39:19.583: INFO: Service account default in ns e2e-tests-pods-6k1ke with secrets found. (620.141µs)
[It] should have monotonically increasing restart count [Conformance] [Slow]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:748
STEP: Creating pod liveness-http in namespace e2e-tests-pods-6k1ke
Mar 28 11:39:19.585: INFO: Waiting up to 5m0s for pod liveness-http status to be !pending
Mar 28 11:39:19.586: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-6k1ke' status to be '!pending'(found phase: "Pending", readiness: false) (937.445µs elapsed)
Mar 28 11:39:21.587: INFO: Saw pod 'liveness-http' in namespace 'e2e-tests-pods-6k1ke' out of pending state (found '"Running"')
Mar 28 11:39:21.587: INFO: Started pod liveness-http in namespace e2e-tests-pods-6k1ke
STEP: checking the pod's current state and verifying that restartCount is present
Mar 28 11:39:21.588: INFO: Initial restart count of pod liveness-http is 0
Mar 28 11:39:43.607: INFO: Restart count of pod e2e-tests-pods-6k1ke/liveness-http is now 1 (22.019250959s elapsed)
Mar 28 11:40:01.620: INFO: Restart count of pod e2e-tests-pods-6k1ke/liveness-http is now 2 (40.031507955s elapsed)
Mar 28 11:40:21.634: INFO: Restart count of pod e2e-tests-pods-6k1ke/liveness-http is now 3 (1m0.045449005s elapsed)
Mar 28 11:40:41.648: INFO: Restart count of pod e2e-tests-pods-6k1ke/liveness-http is now 4 (1m20.060272865s elapsed)
Mar 28 11:41:43.703: INFO: Restart count of pod e2e-tests-pods-6k1ke/liveness-http is now 5 (2m22.115140938s elapsed)
STEP: deleting the pod
[AfterEach] Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:41:43.715: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-6k1ke" for this suite.
• [SLOW TEST:151.146 seconds]
Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:1222
should have monotonically increasing restart count [Conformance] [Slow]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:748
------------------------------
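The monotonically increasing restart count the spec asserts comes straight from the pod's container status, so it can be watched from the CLI while the spec is running. The jsonpath expression and the 10-second poll below are illustrative, and the e2e-tests-pods-6k1ke namespace only exists until the suite tears it down.
# Poll the restart count of the liveness-http pod as the kubelet keeps restarting it
# (namespace is the test's; it disappears once the suite destroys it).
while true; do
  kubectl --kubeconfig=/root/.kube/config get pod liveness-http \
    --namespace=e2e-tests-pods-6k1ke \
    -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'
  sleep 10
done
------------------------------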
S
------------------------------
Pods
should be scheduled with cpu and memory limits [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:263
[BeforeEach] Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:41:48.725: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:41:48.726: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-caci9
Mar 28 11:41:48.728: INFO: Get service account default in ns e2e-tests-pods-caci9 failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:41:50.729: INFO: Service account default in ns e2e-tests-pods-caci9 with secrets found. (2.002167107s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:41:50.729: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-caci9
Mar 28 11:41:50.729: INFO: Service account default in ns e2e-tests-pods-caci9 with secrets found. (600.534µs)
[It] should be scheduled with cpu and memory limits [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:263
STEP: creating the pod
Mar 28 11:41:50.731: INFO: Waiting up to 5m0s for pod pod-update-bc05831a-f514-11e5-b1e1-0862662cf845 status to be running
Mar 28 11:41:50.733: INFO: Waiting for pod pod-update-bc05831a-f514-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-pods-caci9' status to be 'running'(found phase: "Pending", readiness: false) (2.023606ms elapsed)
Mar 28 11:41:52.735: INFO: Waiting for pod pod-update-bc05831a-f514-11e5-b1e1-0862662cf845 in namespace 'e2e-tests-pods-caci9' status to be 'running'(found phase: "Pending", readiness: false) (2.003399109s elapsed)
Mar 28 11:41:54.736: INFO: Found pod 'pod-update-bc05831a-f514-11e5-b1e1-0862662cf845' on node '127.0.0.1'
[AfterEach] Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:41:54.739: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-caci9" for this suite.
• [SLOW TEST:26.026 seconds]
Pods
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:1222
should be scheduled with cpu and memory limits [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:263
------------------------------
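A hand-rolled equivalent of the cpu/memory-limits pod looks like the manifest below; the pod name, image tag, and the request/limit values are illustrative choices, not the values the spec itself uses.
# Illustrative pod with CPU and memory requests/limits (names and values are assumptions).
cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f -
apiVersion: v1
kind: Pod
metadata:
  name: limits-demo
spec:
  containers:
  - name: nginx
    image: gcr.io/google_containers/nginx:1.7.9
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
      limits:
        cpu: 250m
        memory: 128Mi
EOF
# Confirm the pod was scheduled despite the limits:
kubectl --kubeconfig=/root/.kube/config get pod limits-demo -o wide
------------------------------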
Services
should serve multiport endpoints from pods [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:224
[BeforeEach] Services
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:42:14.751: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:42:14.752: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-ptyli
Mar 28 11:42:14.753: INFO: Get service account default in ns e2e-tests-services-ptyli failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:42:16.754: INFO: Service account default in ns e2e-tests-services-ptyli with secrets found. (2.001624225s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:42:16.754: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-ptyli
Mar 28 11:42:16.754: INFO: Service account default in ns e2e-tests-services-ptyli with secrets found. (606.067µs)
[BeforeEach] Services
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:73
Mar 28 11:42:16.754: INFO: >>> testContext.KubeConfig: /root/.kube/config
[It] should serve multiport endpoints from pods [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:224
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-ptyli
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-ptyli to expose endpoints map[]
Mar 28 11:42:16.768: INFO: Get endpoints failed (1.107531ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Mar 28 11:42:17.769: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-ptyli exposes endpoints map[] (1.002094154s elapsed)
STEP: creating pod pod1 in namespace e2e-tests-services-ptyli
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-ptyli to expose endpoints map[pod1:[100]]
Mar 28 11:42:20.779: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-ptyli exposes endpoints map[pod1:[100]] (3.007913048s elapsed)
STEP: creating pod pod2 in namespace e2e-tests-services-ptyli
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-ptyli to expose endpoints map[pod1:[100] pod2:[101]]
Mar 28 11:42:23.792: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-ptyli exposes endpoints map[pod2:[101] pod1:[100]] (3.011897685s elapsed)
STEP: deleting pod pod1 in namespace e2e-tests-services-ptyli
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-ptyli to expose endpoints map[pod2:[101]]
Mar 28 11:42:24.799: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-ptyli exposes endpoints map[pod2:[101]] (1.004465235s elapsed)
STEP: deleting pod pod2 in namespace e2e-tests-services-ptyli
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-ptyli to expose endpoints map[]
Mar 28 11:42:25.804: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-ptyli exposes endpoints map[] (1.003476419s elapsed)
[AfterEach] Services
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:42:25.825: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-services-ptyli" for this suite.
• [SLOW TEST:31.085 seconds]
Services
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:902
should serve multiport endpoints from pods [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:224
------------------------------
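The endpoint maps the spec validates (pod1:[100], pod2:[101], and so on) live in the service's Endpoints object, which can be inspected directly while the test namespace still exists; the commands below are an illustrative way to do that by hand.
# Dump the endpoints backing the multi-endpoint-test service
# (namespace is the test's and is deleted when the suite finishes).
kubectl --kubeconfig=/root/.kube/config get endpoints multi-endpoint-test \
  --namespace=e2e-tests-services-ptyli -o yaml
kubectl --kubeconfig=/root/.kube/config describe service multi-endpoint-test \
  --namespace=e2e-tests-services-ptyli
------------------------------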
Kubectl client Kubectl run job
should create a job from an image when restart is Never [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1068
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:100
STEP: Creating a kubernetes client
Mar 28 11:42:45.836: INFO: >>> testContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
Mar 28 11:42:45.837: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-h7lwa
Mar 28 11:42:45.838: INFO: Get service account default in ns e2e-tests-kubectl-h7lwa failed, ignoring for 2s: serviceaccounts "default" not found
Mar 28 11:42:47.839: INFO: Service account default in ns e2e-tests-kubectl-h7lwa with secrets found. (2.001475543s)
STEP: Waiting for a default service account to be provisioned in namespace
Mar 28 11:42:47.839: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-h7lwa
Mar 28 11:42:47.839: INFO: Service account default in ns e2e-tests-kubectl-h7lwa with secrets found. (633.332µs)
[BeforeEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:120
[BeforeEach] Kubectl run job
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1026
[It] should create a job from an image when restart is Never [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1068
STEP: running the image gcr.io/google_containers/nginx:1.7.9
Mar 28 11:42:47.840: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=Never --image=gcr.io/google_containers/nginx:1.7.9 --namespace=e2e-tests-kubectl-h7lwa'
Mar 28 11:42:47.856: INFO: stderr: ""
Mar 28 11:42:47.856: INFO: stdout: "job \"e2e-test-nginx-job\" created"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] Kubectl run job
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1030
Mar 28 11:42:47.857: INFO: Running '/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-h7lwa'
Mar 28 11:42:49.880: INFO: stderr: ""
Mar 28 11:42:49.880: INFO: stdout: "job \"e2e-test-nginx-job\" deleted"
[AfterEach] Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Mar 28 11:42:49.880: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-h7lwa" for this suite.
• [SLOW TEST:24.054 seconds]
Kubectl client
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1145
Kubectl run job
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1069
should create a job from an image when restart is Never [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1068
------------------------------
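As the output above shows, kubectl run with --restart=Never creates a Job wrapping the given image. The same create/inspect/delete sequence can be replayed by hand; running it in the default namespace here is an arbitrary choice for illustration.
# Create, inspect, and clean up a Job via kubectl run --restart=Never.
kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job \
  --restart=Never --image=gcr.io/google_containers/nginx:1.7.9
kubectl --kubeconfig=/root/.kube/config get jobs e2e-test-nginx-job
kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job
------------------------------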
Summarizing 7 Failures:
[Fail] ConfigMap [It] updates should be reflected in volume [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:262
[Fail] Downward API volume [It] should provide podname only [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1637
[Fail] Downward API volume [It] should update labels on modification [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:82
[Fail] ConfigMap [It] should be consumable from pods in volume with mappings [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1637
[Fail] Networking [It] should function for intra-pod communication [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:121
[Fail] Downward API volume [It] should update annotations on modification [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:119
[Fail] ConfigMap [It] should be consumable from pods in volume [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1637
Ran 93 of 265 Specs in 2875.468 seconds
FAIL! -- 86 Passed | 7 Failed | 0 Pending | 172 Skipped --- FAIL: TestE2E (2875.48s)
FAIL
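------------------------------
Individual failing specs can be re-run in isolation by narrowing the Ginkgo focus regex passed to the e2e.test binary; the binary path and the example regex below are illustrative and should be adjusted to match the spec text shown in the failure summary.
# Re-run a single failing spec (regex is an example; tighten it as needed).
./e2e.test -kubeconfig /root/.kube/config \
  --ginkgo.focus='ConfigMap updates should be reflected in volume'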