@yifan-gu
Created July 10, 2018 02:51
conformance test
(This log has been truncated; the full file is available in the original gist.)
[INFO] [19:08:41-0700] Running tests against existing cluster...
[INFO] [19:08:41-0700] Running parallel tests N=<default>
I0709 19:08:41.741853 10764 test.go:86] Extended test version v3.10.0-alpha.0+e63afaa-1228-dirty
Running Suite: Extended
=======================
Random Seed: 1531188522 - Will randomize all specs
Will run 447 specs
Running in parallel across 5 nodes
I0709 19:08:43.830656 11717 e2e.go:56] The --provider flag is not set. Treating as a conformance test. Some tests may not be run.
Jul 9 19:08:43.830: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
Jul 9 19:08:43.832: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Jul 9 19:08:44.246: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jul 9 19:08:44.634: INFO: 20 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jul 9 19:08:44.634: INFO: expected 7 pod replicas in namespace 'kube-system', 7 are Running and Ready.
Jul 9 19:08:44.692: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Jul 9 19:08:44.692: INFO: Dumping network health container logs from all nodes...
Jul 9 19:08:44.761: INFO: e2e test version: v1.10.0+b81c8f8
Jul 9 19:08:44.840: INFO: kube-apiserver version: v1.11.0+d4cacc0
I0709 19:08:44.840549 11717 e2e.go:56] The --provider flag is not set. Treating as a conformance test. Some tests may not be run.
SSS
------------------------------
I0709 19:08:44.845674 11716 e2e.go:56] The --provider flag is not set. Treating as a conformance test. Some tests may not be run.
I0709 19:08:44.850342 11714 e2e.go:56] The --provider flag is not set. Treating as a conformance test. Some tests may not be run.
S
------------------------------
I0709 19:08:44.859308 11713 e2e.go:56] The --provider flag is not set. Treating as a conformance test. Some tests may not be run.
I0709 19:08:44.859324 11748 e2e.go:56] The --provider flag is not set. Treating as a conformance test. Some tests may not be run.
SSSSSS
------------------------------
[sig-storage] HostPath
should give a volume the correct mode [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] HostPath
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:08:44.852: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:08:47.067: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Jul 9 19:08:47.860: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Jul 9 19:08:48.099: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-hostpath-mrzt2
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test hostPath mode
Jul 9 19:08:48.437: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-mrzt2" to be "success or failure"
Jul 9 19:08:48.465: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 27.784025ms
Jul 9 19:08:50.532: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.095115151s
STEP: Saw pod success
Jul 9 19:08:50.532: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jul 9 19:08:50.602: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Jul 9 19:08:50.751: INFO: Waiting for pod pod-host-path-test to disappear
Jul 9 19:08:50.819: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:08:50.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-mrzt2" for this suite.
Jul 9 19:08:57.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:09:01.300: INFO: namespace: e2e-tests-hostpath-mrzt2, resource: bindings, ignored listing per whitelist
Jul 9 19:09:02.072: INFO: namespace e2e-tests-hostpath-mrzt2 deletion completed in 11.171547155s
• [SLOW TEST:17.221 seconds]
[sig-storage] HostPath
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
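For context, a minimal manifest in the spirit of this test — an illustrative sketch, not the suite's exact fixture (the e2e tests use a dedicated mount-test image; busybox and the stat command here are stand-ins, and /tmp is a placeholder host path). The pod mounts a hostPath volume and prints the mode of the mount point so the expected permissions can be asserted from the container logs:

# sketch only: hostPath volume mode check
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox
    # print the octal mode of the mount point; the suite asserts on this output
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp          # placeholder; the real test uses its own directory
      type: Directory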
------------------------------
[sig-storage] Projected
should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:08:44.861: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:08:48.672: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Jul 9 19:08:49.441: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Jul 9 19:08:49.693: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-hz5j2
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:08:51.143: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2fa0cddb-83e6-11e8-992b-28d244b00276" in namespace "e2e-tests-projected-hz5j2" to be "success or failure"
Jul 9 19:08:51.191: INFO: Pod "downwardapi-volume-2fa0cddb-83e6-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 48.433563ms
Jul 9 19:08:53.252: INFO: Pod "downwardapi-volume-2fa0cddb-83e6-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109326818s
Jul 9 19:08:55.331: INFO: Pod "downwardapi-volume-2fa0cddb-83e6-11e8-992b-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.188430871s
STEP: Saw pod success
Jul 9 19:08:55.331: INFO: Pod "downwardapi-volume-2fa0cddb-83e6-11e8-992b-28d244b00276" satisfied condition "success or failure"
Jul 9 19:08:55.414: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-2fa0cddb-83e6-11e8-992b-28d244b00276 container client-container: <nil>
STEP: delete the pod
Jul 9 19:08:55.588: INFO: Waiting for pod downwardapi-volume-2fa0cddb-83e6-11e8-992b-28d244b00276 to disappear
Jul 9 19:08:55.656: INFO: Pod downwardapi-volume-2fa0cddb-83e6-11e8-992b-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:08:55.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hz5j2" for this suite.
Jul 9 19:09:01.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:09:04.923: INFO: namespace: e2e-tests-projected-hz5j2, resource: bindings, ignored listing per whitelist
Jul 9 19:09:07.026: INFO: namespace e2e-tests-projected-hz5j2 deletion completed in 11.313794635s
• [SLOW TEST:22.165 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
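A sketch of what this test exercises (names and image are illustrative, not the suite's fixture): a projected downwardAPI volume exposes limits.cpu for a container that sets no CPU limit, so the projected value falls back to the node's allocatable CPU:

# sketch only: projected downwardAPI volume with no explicit cpu limit
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits.cpu here: the projected value defaults to node allocatable
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m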
------------------------------
[sig-storage] Secrets
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:86
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:08:44.863: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:08:47.936: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Jul 9 19:08:48.568: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Jul 9 19:08:48.738: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-secrets-z59rs
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:86
Jul 9 19:08:49.815: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-secret-namespace-txhdr
STEP: Creating secret with name secret-test-2f0ac750-83e6-11e8-bd2e-28d244b00276
STEP: Creating a pod to test consume secrets
Jul 9 19:08:51.010: INFO: Waiting up to 5m0s for pod "pod-secrets-3037ca1c-83e6-11e8-bd2e-28d244b00276" in namespace "e2e-tests-secrets-z59rs" to be "success or failure"
Jul 9 19:08:51.046: INFO: Pod "pod-secrets-3037ca1c-83e6-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 36.426674ms
Jul 9 19:08:53.109: INFO: Pod "pod-secrets-3037ca1c-83e6-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099545743s
Jul 9 19:08:55.156: INFO: Pod "pod-secrets-3037ca1c-83e6-11e8-bd2e-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.146331244s
STEP: Saw pod success
Jul 9 19:08:55.156: INFO: Pod "pod-secrets-3037ca1c-83e6-11e8-bd2e-28d244b00276" satisfied condition "success or failure"
Jul 9 19:08:55.217: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-secrets-3037ca1c-83e6-11e8-bd2e-28d244b00276 container secret-volume-test: <nil>
STEP: delete the pod
Jul 9 19:08:55.370: INFO: Waiting for pod pod-secrets-3037ca1c-83e6-11e8-bd2e-28d244b00276 to disappear
Jul 9 19:08:55.447: INFO: Pod pod-secrets-3037ca1c-83e6-11e8-bd2e-28d244b00276 no longer exists
[AfterEach] [sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:08:55.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-z59rs" for this suite.
Jul 9 19:09:01.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:09:04.411: INFO: namespace: e2e-tests-secrets-z59rs, resource: bindings, ignored listing per whitelist
Jul 9 19:09:06.079: INFO: namespace e2e-tests-secrets-z59rs deletion completed in 10.554564859s
STEP: Destroying namespace "e2e-tests-secret-namespace-txhdr" for this suite.
Jul 9 19:09:12.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:09:15.304: INFO: namespace: e2e-tests-secret-namespace-txhdr, resource: bindings, ignored listing per whitelist
Jul 9 19:09:15.666: INFO: namespace e2e-tests-secret-namespace-txhdr deletion completed in 9.58619834s
• [SLOW TEST:30.803 seconds]
[sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:86
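The shape of this scenario, sketched with placeholder names (namespace-a, namespace-b, secret-test): two secrets share a name across namespaces, and the pod must mount only the one from its own namespace:

# sketch only: same-named secrets in different namespaces
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
  namespace: namespace-a
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
  namespace: namespace-b
stringData:
  data-1: other-value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
  namespace: namespace-a
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # should print value-1, i.e. the secret from namespace-a only
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test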
------------------------------
S
------------------------------
[sig-storage] Projected
should be consumable from pods in volume with mappings [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:09:02.074: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:09:04.463: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-fpcz7
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable from pods in volume with mappings [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating projection with secret that has name projected-secret-test-map-38a11c87-83e6-11e8-8401-28d244b00276
STEP: Creating a pod to test consume secrets
Jul 9 19:09:05.132: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-38a63b92-83e6-11e8-8401-28d244b00276" in namespace "e2e-tests-projected-fpcz7" to be "success or failure"
Jul 9 19:09:05.162: INFO: Pod "pod-projected-secrets-38a63b92-83e6-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 30.038412ms
Jul 9 19:09:07.190: INFO: Pod "pod-projected-secrets-38a63b92-83e6-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058319067s
STEP: Saw pod success
Jul 9 19:09:07.190: INFO: Pod "pod-projected-secrets-38a63b92-83e6-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:09:07.220: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-projected-secrets-38a63b92-83e6-11e8-8401-28d244b00276 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jul 9 19:09:07.293: INFO: Waiting for pod pod-projected-secrets-38a63b92-83e6-11e8-8401-28d244b00276 to disappear
Jul 9 19:09:07.320: INFO: Pod pod-projected-secrets-38a63b92-83e6-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:09:07.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fpcz7" for this suite.
Jul 9 19:09:13.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:09:16.255: INFO: namespace: e2e-tests-projected-fpcz7, resource: bindings, ignored listing per whitelist
Jul 9 19:09:16.782: INFO: namespace e2e-tests-projected-fpcz7 deletion completed in 9.431912033s
• [SLOW TEST:14.708 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
should be consumable from pods in volume with mappings [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
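The "with mappings" part refers to remapping a secret key to a different path inside the projected volume. A hedged sketch (the secret name and key are placeholders, and the secret is assumed to exist):

# sketch only: projected secret volume with an items mapping
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map   # assumed to exist in the namespace
          items:
          - key: data-1                     # secret key...
            path: new-path-data-1           # ...remapped to this file name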
------------------------------
[sig-api-machinery] Downward API
should provide default limits.cpu/memory from node allocatable [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-api-machinery] Downward API
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:09:15.667: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:09:17.263: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-94pzx
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward api env vars
Jul 9 19:09:18.065: INFO: Waiting up to 5m0s for pod "downward-api-405b7c12-83e6-11e8-bd2e-28d244b00276" in namespace "e2e-tests-downward-api-94pzx" to be "success or failure"
Jul 9 19:09:18.094: INFO: Pod "downward-api-405b7c12-83e6-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 29.033879ms
Jul 9 19:09:20.138: INFO: Pod "downward-api-405b7c12-83e6-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072997211s
Jul 9 19:09:22.168: INFO: Pod "downward-api-405b7c12-83e6-11e8-bd2e-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.103381227s
STEP: Saw pod success
Jul 9 19:09:22.168: INFO: Pod "downward-api-405b7c12-83e6-11e8-bd2e-28d244b00276" satisfied condition "success or failure"
Jul 9 19:09:22.206: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downward-api-405b7c12-83e6-11e8-bd2e-28d244b00276 container dapi-container: <nil>
STEP: delete the pod
Jul 9 19:09:22.277: INFO: Waiting for pod downward-api-405b7c12-83e6-11e8-bd2e-28d244b00276 to disappear
Jul 9 19:09:22.308: INFO: Pod downward-api-405b7c12-83e6-11e8-bd2e-28d244b00276 no longer exists
[AfterEach] [sig-api-machinery] Downward API
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:09:22.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-94pzx" for this suite.
Jul 9 19:09:28.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:09:30.064: INFO: namespace: e2e-tests-downward-api-94pzx, resource: bindings, ignored listing per whitelist
Jul 9 19:09:31.863: INFO: namespace e2e-tests-downward-api-94pzx deletion completed in 9.523567764s
• [SLOW TEST:16.195 seconds]
[sig-api-machinery] Downward API
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downward_api.go:37
should provide default limits.cpu/memory from node allocatable [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
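This test covers the env-var flavor of the same fallback: resourceFieldRef values for limits.cpu and limits.memory on a container with no limits set resolve to node allocatable. A minimal sketch (image and env names are illustrative):

# sketch only: downward API env vars with no limits set on the container
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu        # falls back to node allocatable cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory     # falls back to node allocatable memory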
------------------------------
[sig-storage] ConfigMap
binary data should be reflected in volume [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:187
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:09:07.027: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:09:08.916: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-configmap-wcrw4
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:187
Jul 9 19:09:09.671: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
STEP: Creating configMap with name configmap-test-upd-3b60bc1b-83e6-11e8-992b-28d244b00276
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:09:13.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-wcrw4" for this suite.
Jul 9 19:09:36.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:09:37.910: INFO: namespace: e2e-tests-configmap-wcrw4, resource: bindings, ignored listing per whitelist
Jul 9 19:09:40.347: INFO: namespace e2e-tests-configmap-wcrw4 deletion completed in 26.324645331s
• [SLOW TEST:33.320 seconds]
[sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
binary data should be reflected in volume [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:187
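A sketch of the objects involved, assuming placeholder names and payload: a ConfigMap carrying both data (UTF-8 text) and binaryData (arbitrary base64-encoded bytes), mounted as a volume so both files can be read back:

# sketch only: ConfigMap with text and binary payloads reflected in a volume
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd
data:
  data-1: value-1
binaryData:
  dump.bin: CgEC/w==    # arbitrary bytes, base64-encoded; need not be valid UTF-8
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-binary
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1 && hexdump -C /etc/configmap-volume/dump.bin"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd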
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes when FSGroup is specified
files with FSGroup ownership should support (root,0644,tmpfs) [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:57
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:09:31.864: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:09:33.500: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-q75bf
STEP: Waiting for a default service account to be provisioned in namespace
[It] files with FSGroup ownership should support (root,0644,tmpfs) [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:57
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 9 19:09:34.110: INFO: Waiting up to 5m0s for pod "pod-49ebef96-83e6-11e8-bd2e-28d244b00276" in namespace "e2e-tests-emptydir-q75bf" to be "success or failure"
Jul 9 19:09:34.141: INFO: Pod "pod-49ebef96-83e6-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 31.003318ms
Jul 9 19:09:36.172: INFO: Pod "pod-49ebef96-83e6-11e8-bd2e-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062287969s
STEP: Saw pod success
Jul 9 19:09:36.172: INFO: Pod "pod-49ebef96-83e6-11e8-bd2e-28d244b00276" satisfied condition "success or failure"
Jul 9 19:09:36.206: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-49ebef96-83e6-11e8-bd2e-28d244b00276 container test-container: <nil>
STEP: delete the pod
Jul 9 19:09:36.273: INFO: Waiting for pod pod-49ebef96-83e6-11e8-bd2e-28d244b00276 to disappear
Jul 9 19:09:36.303: INFO: Pod pod-49ebef96-83e6-11e8-bd2e-28d244b00276 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:09:36.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-q75bf" for this suite.
Jul 9 19:09:42.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:09:44.883: INFO: namespace: e2e-tests-emptydir-q75bf, resource: bindings, ignored listing per whitelist
Jul 9 19:09:45.978: INFO: namespace e2e-tests-emptydir-q75bf deletion completed in 9.628539544s
• [SLOW TEST:14.114 seconds]
[sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
when FSGroup is specified
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:44
files with FSGroup ownership should support (root,0644,tmpfs) [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:57
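The test name "(root,0644,tmpfs)" encodes the scenario: a root-owned file created with mode 0644 on a memory-backed emptyDir, with pod-level fsGroup applied to the volume. A hedged sketch (the fsGroup value and command are illustrative stand-ins):

# sketch only: tmpfs emptyDir with fsGroup ownership
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-fsgroup-example
spec:
  restartPolicy: Never
  securityContext:
    fsGroup: 123          # illustrative; group ownership applied to the volume
  containers:
  - name: test-container
    image: busybox
    # write a 0644 file as root and show numeric uid/gid for the assertion
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -ln /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory      # tmpfs-backed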
------------------------------
[k8s.io] Pods
should be updated [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:09:16.783: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:09:18.406: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-pods-vvlc8
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:127
[It] should be updated [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul 9 19:09:21.782: INFO: Successfully updated pod "pod-update-40eff97e-83e6-11e8-8401-28d244b00276"
STEP: verifying the updated pod is in kubernetes
Jul 9 19:09:21.840: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:09:21.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-vvlc8" for this suite.
Jul 9 19:09:44.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:09:46.796: INFO: namespace: e2e-tests-pods-vvlc8, resource: bindings, ignored listing per whitelist
Jul 9 19:09:47.464: INFO: namespace e2e-tests-pods-vvlc8 deletion completed in 25.509105364s
• [SLOW TEST:30.681 seconds]
[k8s.io] Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should be updated [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
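A rough sketch of the starting object, with placeholder names and image: the suite creates a pod like this, then mutates it in place through the API (the log shows "updating the pod" followed by "Pod update OK") and re-reads it to confirm the change round-trips:

# sketch only: a pod whose metadata the test updates via the API
apiVersion: v1
kind: Pod
metadata:
  name: pod-update-example
  labels:
    name: foo
    time: "123456"        # a label like this is what gets changed on update
spec:
  containers:
  - name: main
    image: nginx          # placeholder image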
------------------------------
S
------------------------------
[Feature:Builds][Conformance] oc new-app
should fail with a --name longer than 58 characters [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:66
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds][Conformance] oc new-app
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:09:40.350: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds][Conformance] oc new-app
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:09:42.725: INFO: configPath is now "/tmp/e2e-test-new-app-dk5fm-user.kubeconfig"
Jul 9 19:09:42.725: INFO: The user is now "e2e-test-new-app-dk5fm-user"
Jul 9 19:09:42.725: INFO: Creating project "e2e-test-new-app-dk5fm"
Jul 9 19:09:42.839: INFO: Waiting on permissions in project "e2e-test-new-app-dk5fm" ...
[BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:26
Jul 9 19:09:42.921: INFO:
docker info output:
Containers: 4
Running: 0
Paused: 0
Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 20
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:30
STEP: waiting for builder service account
STEP: waiting for openshift namespace imagestreams
Jul 9 19:09:43.049: INFO: Running scan #0
Jul 9 19:09:43.049: INFO: Checking language ruby
Jul 9 19:09:43.100: INFO: Checking tag 2.0
Jul 9 19:09:43.100: INFO: Checking tag 2.2
Jul 9 19:09:43.100: INFO: Checking tag 2.3
Jul 9 19:09:43.100: INFO: Checking tag 2.4
Jul 9 19:09:43.100: INFO: Checking tag 2.5
Jul 9 19:09:43.100: INFO: Checking tag latest
Jul 9 19:09:43.100: INFO: Checking language nodejs
Jul 9 19:09:43.138: INFO: Checking tag 0.10
Jul 9 19:09:43.138: INFO: Checking tag 4
Jul 9 19:09:43.138: INFO: Checking tag 6
Jul 9 19:09:43.138: INFO: Checking tag 8
Jul 9 19:09:43.138: INFO: Checking tag latest
Jul 9 19:09:43.138: INFO: Checking language perl
Jul 9 19:09:43.171: INFO: Checking tag 5.16
Jul 9 19:09:43.171: INFO: Checking tag 5.20
Jul 9 19:09:43.171: INFO: Checking tag 5.24
Jul 9 19:09:43.171: INFO: Checking tag latest
Jul 9 19:09:43.171: INFO: Checking language php
Jul 9 19:09:43.204: INFO: Checking tag 5.6
Jul 9 19:09:43.204: INFO: Checking tag 7.0
Jul 9 19:09:43.204: INFO: Checking tag 7.1
Jul 9 19:09:43.204: INFO: Checking tag latest
Jul 9 19:09:43.204: INFO: Checking tag 5.5
Jul 9 19:09:43.204: INFO: Checking language python
Jul 9 19:09:43.238: INFO: Checking tag latest
Jul 9 19:09:43.238: INFO: Checking tag 2.7
Jul 9 19:09:43.238: INFO: Checking tag 3.3
Jul 9 19:09:43.238: INFO: Checking tag 3.4
Jul 9 19:09:43.238: INFO: Checking tag 3.5
Jul 9 19:09:43.238: INFO: Checking tag 3.6
Jul 9 19:09:43.238: INFO: Checking language wildfly
Jul 9 19:09:43.272: INFO: Checking tag latest
Jul 9 19:09:43.272: INFO: Checking tag 10.0
Jul 9 19:09:43.272: INFO: Checking tag 10.1
Jul 9 19:09:43.272: INFO: Checking tag 11.0
Jul 9 19:09:43.272: INFO: Checking tag 12.0
Jul 9 19:09:43.272: INFO: Checking tag 8.1
Jul 9 19:09:43.272: INFO: Checking tag 9.0
Jul 9 19:09:43.272: INFO: Checking language mysql
Jul 9 19:09:43.303: INFO: Checking tag 5.5
Jul 9 19:09:43.303: INFO: Checking tag 5.6
Jul 9 19:09:43.303: INFO: Checking tag 5.7
Jul 9 19:09:43.303: INFO: Checking tag latest
Jul 9 19:09:43.303: INFO: Checking language postgresql
Jul 9 19:09:43.341: INFO: Checking tag 9.5
Jul 9 19:09:43.341: INFO: Checking tag 9.6
Jul 9 19:09:43.341: INFO: Checking tag latest
Jul 9 19:09:43.341: INFO: Checking tag 9.2
Jul 9 19:09:43.341: INFO: Checking tag 9.4
Jul 9 19:09:43.341: INFO: Checking language mongodb
Jul 9 19:09:43.382: INFO: Checking tag 3.4
Jul 9 19:09:43.382: INFO: Checking tag latest
Jul 9 19:09:43.382: INFO: Checking tag 2.4
Jul 9 19:09:43.382: INFO: Checking tag 2.6
Jul 9 19:09:43.382: INFO: Checking tag 3.2
Jul 9 19:09:43.382: INFO: Checking language jenkins
Jul 9 19:09:43.417: INFO: Checking tag 1
Jul 9 19:09:43.417: INFO: Checking tag 2
Jul 9 19:09:43.417: INFO: Checking tag latest
Jul 9 19:09:43.417: INFO: Success!
[It] should fail with a --name longer than 58 characters [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:66
STEP: calling oc new-app
Jul 9 19:09:43.417: INFO: Running 'oc new-app --config=/tmp/e2e-test-new-app-dk5fm-user.kubeconfig --namespace=e2e-test-new-app-dk5fm https://github.com/openshift/nodejs-ex --name a2345678901234567890123456789012345678901234567890123456789'
Jul 9 19:09:46.048: INFO: Error running &{/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc new-app --config=/tmp/e2e-test-new-app-dk5fm-user.kubeconfig --namespace=e2e-test-new-app-dk5fm https://github.com/openshift/nodejs-ex --name a2345678901234567890123456789012345678901234567890123456789] [] error: invalid name: a2345678901234567890123456789012345678901234567890123456789. Must be an a lower case alphanumeric (a-z, and 0-9) string with a maximum length of 58 characters, where the first character is a letter (a-z), and the '-' character is allowed anywhere except the first or last character.
error: invalid name: a2345678901234567890123456789012345678901234567890123456789. Must be an a lower case alphanumeric (a-z, and 0-9) string with a maximum length of 58 characters, where the first character is a letter (a-z), and the '-' character is allowed anywhere except the first or last character.
[] <nil> 0xc42105f740 exit status 1 <nil> <nil> true [0xc4200dc888 0xc4200dc8f8 0xc4200dc8f8] [0xc4200dc888 0xc4200dc8f8] [0xc4200dc890 0xc4200dc8f0] [0x916090 0x916190] 0xc4214ecd80 <nil>}:
error: invalid name: a2345678901234567890123456789012345678901234567890123456789. Must be an a lower case alphanumeric (a-z, and 0-9) string with a maximum length of 58 characters, where the first character is a letter (a-z), and the '-' character is allowed anywhere except the first or last character.
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:40
[AfterEach] [Feature:Builds][Conformance] oc new-app
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:09:46.239: INFO: namespace : e2e-test-new-app-dk5fm api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][Conformance] oc new-app
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:09:52.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:11.979 seconds]
[Feature:Builds][Conformance] oc new-app
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:16
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:24
should fail with a --name longer than 58 characters [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:66
------------------------------
[sig-storage] ConfigMap
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:54
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:09:47.465: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:09:48.936: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-configmap-llm69
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:54
STEP: Creating configMap with name configmap-test-volume-532239eb-83e6-11e8-8401-28d244b00276
STEP: Creating a pod to test consume configMaps
Jul 9 19:09:49.596: INFO: Waiting up to 5m0s for pod "pod-configmaps-5326fb5a-83e6-11e8-8401-28d244b00276" in namespace "e2e-tests-configmap-llm69" to be "success or failure"
Jul 9 19:09:49.630: INFO: Pod "pod-configmaps-5326fb5a-83e6-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 33.792324ms
Jul 9 19:09:51.659: INFO: Pod "pod-configmaps-5326fb5a-83e6-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062907306s
STEP: Saw pod success
Jul 9 19:09:51.659: INFO: Pod "pod-configmaps-5326fb5a-83e6-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:09:51.698: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-configmaps-5326fb5a-83e6-11e8-8401-28d244b00276 container configmap-volume-test: <nil>
STEP: delete the pod
Jul 9 19:09:51.762: INFO: Waiting for pod pod-configmaps-5326fb5a-83e6-11e8-8401-28d244b00276 to disappear
Jul 9 19:09:51.799: INFO: Pod pod-configmaps-5326fb5a-83e6-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:09:51.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-llm69" for this suite.
Jul 9 19:09:57.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:10:00.793: INFO: namespace: e2e-tests-configmap-llm69, resource: bindings, ignored listing per whitelist
Jul 9 19:10:01.310: INFO: namespace e2e-tests-configmap-llm69 deletion completed in 9.47277623s
• [SLOW TEST:13.845 seconds]
[sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:54
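A sketch combining the three knobs this test exercises — a non-root runAsUser, a pod-level fsGroup, and a defaultMode on the configMap volume (all values here are illustrative, not the suite's exact fixture):

# sketch only: configMap volume consumed as non-root with defaultMode and fsGroup
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-fsgroup
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000       # non-root; illustrative value
    fsGroup: 1001         # applied as group owner of the volume; illustrative
  containers:
  - name: configmap-volume-test
    image: busybox
    # group-readable via fsGroup despite the restrictive defaultMode
    command: ["sh", "-c", "ls -ln /etc/configmap-volume && cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
      defaultMode: 0440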
------------------------------
S
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy'
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:444
Jul 9 19:10:01.576: INFO: Could not check network plugin name: exit status 1. Assuming a non-OpenShift plugin
Jul 9 19:10:01.576: INFO: Not using one of the specified plugins
[AfterEach] when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy'
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
[AfterEach] when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy'
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:10:01.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.265 seconds]
[Area:Networking] multicast
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:21
when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy'
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:442
should allow multicast traffic in namespaces where it is enabled [Suite:openshift/conformance/parallel] [BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:45
Jul 9 19:10:01.576: Not using one of the specified plugins
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
[k8s.io] InitContainer
should not start app containers if init containers fail on a RestartAlways pod [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:166
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] InitContainer
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:08:44.845: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:08:47.331: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Jul 9 19:08:48.100: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Jul 9 19:08:48.285: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-init-container-vc5jv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:40
[It] should not start app containers if init containers fail on a RestartAlways pod [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:166
STEP: creating the pod
Jul 9 19:08:48.568: INFO: PodSpec: initContainers in spec.initContainers
Jul 9 19:09:43.506: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-2ec7c3dc-83e6-11e8-8fe2-28d244b00276", GenerateName:"", Namespace:"e2e-tests-init-container-vc5jv", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-vc5jv/pods/pod-init-2ec7c3dc-83e6-11e8-8fe2-28d244b00276", UID:"2edb82bb-83e6-11e8-84c6-0af96768d57e", ResourceVersion:"69577", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63666785328, loc:(*time.Location)(0x6b11480)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"536063916"}, Annotations:map[string]string{"openshift.io/scc":"anyuid"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-2dvtq", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc4211d0e00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"busybox", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2dvtq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(0xc4211d0f80), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"busybox", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2dvtq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(0xc4211d1000), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause-amd64:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:31457280, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"31457280", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:31457280, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"31457280", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2dvtq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc4211d0e80), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc4214a4a48), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"ip-10-0-130-54.us-west-2.compute.internal", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc4211d0ec0), ImagePullSecrets:[]v1.LocalObjectReference{v1.LocalObjectReference{Name:"default-dockercfg-7gp5s"}}, Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/memory-pressure", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63666785328, loc:(*time.Location)(0x6b11480)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63666785328, loc:(*time.Location)(0x6b11480)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, 
v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63666785328, loc:(*time.Location)(0x6b11480)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.0.130.54", PodIP:"10.2.2.191", StartTime:(*v1.Time)(0xc4217b30e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc42045c700)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc42045c770)}, Ready:false, RestartCount:3, Image:"busybox:latest", ImageID:"docker-pullable://busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47", ContainerID:"docker://aad60e5897b9c2cfbf55c49ff779c85feaef0893f60edb22c46ab15ad8fae41a"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc4217b3120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"busybox", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc4217b3100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause-amd64:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:09:43.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-vc5jv" for this suite.
Jul 9 19:10:05.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:10:08.083: INFO: namespace: e2e-tests-init-container-vc5jv, resource: bindings, ignored listing per whitelist
Jul 9 19:10:09.484: INFO: namespace e2e-tests-init-container-vc5jv deletion completed in 25.913101336s
• [SLOW TEST:84.639 seconds]
[k8s.io] InitContainer
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should not start app containers if init containers fail on a RestartAlways pod [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:166
------------------------------
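Note on the test above: with RestartPolicy "Always", a failing init container is retried with backoff (the dump shows RestartCount:3 on init1) while later init containers and app containers must never start, which is exactly what the Pending status with run1 Waiting records. A minimal sketch of the pod shape the test builds, using the corev1 types this suite vendors (the function name is illustrative, not from the test source):

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildInitFailPod returns a pod whose first init container always fails;
// the kubelet keeps restarting init1, so init2 and run1 stay unstarted.
func buildInitFailPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyAlways,
            InitContainers: []corev1.Container{
                {Name: "init1", Image: "busybox", Command: []string{"/bin/false"}},
                {Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
            },
            Containers: []corev1.Container{
                {Name: "run1", Image: "k8s.gcr.io/pause-amd64:3.1"},
            },
        },
    }
}
------------------------------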
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:10:01.581: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:10:03.428: INFO: configPath is now "/tmp/e2e-test-router-stress-p27nt-user.kubeconfig"
Jul 9 19:10:03.428: INFO: The user is now "e2e-test-router-stress-p27nt-user"
Jul 9 19:10:03.428: INFO: Creating project "e2e-test-router-stress-p27nt"
Jul 9 19:10:03.577: INFO: Waiting on permissions in project "e2e-test-router-stress-p27nt" ...
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:45
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:32
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:10:03.719: INFO: namespace : e2e-test-router-stress-p27nt api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:10:09.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [8.212 seconds]
[Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:21
The HAProxy router [BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:68
should respond with 503 to unrecognized hosts [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:69
no router installed on the cluster
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:48
------------------------------
S
------------------------------
[sig-api-machinery] ConfigMap
should be consumable via the environment [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-api-machinery] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:10:09.795: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:10:11.209: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-configmap-m2zlr
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating configMap e2e-tests-configmap-m2zlr/configmap-test-606abb83-83e6-11e8-8401-28d244b00276
STEP: Creating a pod to test consume configMaps
Jul 9 19:10:11.880: INFO: Waiting up to 5m0s for pod "pod-configmaps-606f137b-83e6-11e8-8401-28d244b00276" in namespace "e2e-tests-configmap-m2zlr" to be "success or failure"
Jul 9 19:10:11.909: INFO: Pod "pod-configmaps-606f137b-83e6-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 29.02209ms
Jul 9 19:10:13.938: INFO: Pod "pod-configmaps-606f137b-83e6-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058136545s
Jul 9 19:10:15.968: INFO: Pod "pod-configmaps-606f137b-83e6-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087813715s
Jul 9 19:10:17.997: INFO: Pod "pod-configmaps-606f137b-83e6-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.117318537s
STEP: Saw pod success
Jul 9 19:10:17.997: INFO: Pod "pod-configmaps-606f137b-83e6-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:10:18.025: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-configmaps-606f137b-83e6-11e8-8401-28d244b00276 container env-test: <nil>
STEP: delete the pod
Jul 9 19:10:18.098: INFO: Waiting for pod pod-configmaps-606f137b-83e6-11e8-8401-28d244b00276 to disappear
Jul 9 19:10:18.127: INFO: Pod pod-configmaps-606f137b-83e6-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-api-machinery] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:10:18.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-m2zlr" for this suite.
Jul 9 19:10:24.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:10:26.021: INFO: namespace: e2e-tests-configmap-m2zlr, resource: bindings, ignored listing per whitelist
Jul 9 19:10:27.719: INFO: namespace e2e-tests-configmap-m2zlr deletion completed in 9.559471834s
• [SLOW TEST:17.924 seconds]
[sig-api-machinery] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap.go:29
should be consumable via the environment [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
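Note on the ConfigMap test above: it creates a ConfigMap, runs a pod whose env-test container maps ConfigMap keys into environment variables, and treats a clean exit as success. A sketch of the wiring, assuming a key named "data-1" and variable name "CONFIG_DATA_1" (the log does not print the actual names):

// envFromConfigMap wires one ConfigMap key into a container env var; the
// test container then echoes the variable and exits 0 if it matches.
env := []corev1.EnvVar{{
    Name: "CONFIG_DATA_1",
    ValueFrom: &corev1.EnvVarSource{
        ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
            LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
            Key:                  "data-1",
        },
    },
}}
------------------------------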
S
------------------------------
[sig-network] Networking Granular Checks: Pods
should function for node-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-network] Networking
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:09:52.331: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:09:54.259: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-pod-network-test-2hdld
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-2hdld
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 9 19:09:55.154: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 9 19:10:11.775: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | timeout -t 2 nc -w 1 -u 10.2.2.207 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-2hdld PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 9 19:10:11.775: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
Jul 9 19:10:13.217: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:10:13.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-2hdld" for this suite.
Jul 9 19:10:35.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:10:39.766: INFO: namespace: e2e-tests-pod-network-test-2hdld, resource: bindings, ignored listing per whitelist
Jul 9 19:10:39.884: INFO: namespace e2e-tests-pod-network-test-2hdld deletion completed in 26.620108146s
• [SLOW TEST:47.553 seconds]
[sig-network] Networking
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
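Note on the node-pod UDP check above: the test execs into a hostNetwork "hostexec" pod and fires a UDP probe at the netserver pod (the nc command logged at 19:10:11), expecting the echoed hostname back. A minimal sketch, assuming f is the suite's *framework.Framework from the vendored 1.10-era test/e2e/framework whose options struct is printed verbatim in the log:

// cmd mirrors the probe above: send 'hostName' over UDP, drop blank lines.
cmd := `echo 'hostName' | timeout -t 2 nc -w 1 -u 10.2.2.207 8081 | grep -v '^\s*$'`
stdout, stderr, err := f.ExecWithOptions(framework.ExecOptions{
    Command:       []string{"/bin/sh", "-c", cmd},
    Namespace:     "e2e-tests-pod-network-test-2hdld",
    PodName:       "host-test-container-pod",
    ContainerName: "hostexec",
    CaptureStdout: true,
    CaptureStderr: true,
})
if err != nil {
    framework.Logf("probe failed: %v (stderr: %s)", err, stderr)
}
// The UDP server echoes its hostname; the check passes once stdout
// contains the expected endpoint name ("netserver-0" above).
framework.Logf("probe output: %s", stdout)
------------------------------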
SSSS
------------------------------
[sig-storage] Projected
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:422
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:10:27.721: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:10:29.267: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-h97lg
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:422
STEP: Creating configMap with name projected-configmap-test-volume-6b2667d6-83e6-11e8-8401-28d244b00276
STEP: Creating a pod to test consume configMaps
Jul 9 19:10:29.896: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6b2b5fb4-83e6-11e8-8401-28d244b00276" in namespace "e2e-tests-projected-h97lg" to be "success or failure"
Jul 9 19:10:29.927: INFO: Pod "pod-projected-configmaps-6b2b5fb4-83e6-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 30.894207ms
Jul 9 19:10:31.958: INFO: Pod "pod-projected-configmaps-6b2b5fb4-83e6-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06175169s
STEP: Saw pod success
Jul 9 19:10:31.958: INFO: Pod "pod-projected-configmaps-6b2b5fb4-83e6-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:10:31.984: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-projected-configmaps-6b2b5fb4-83e6-11e8-8401-28d244b00276 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jul 9 19:10:32.059: INFO: Waiting for pod pod-projected-configmaps-6b2b5fb4-83e6-11e8-8401-28d244b00276 to disappear
Jul 9 19:10:32.089: INFO: Pod pod-projected-configmaps-6b2b5fb4-83e6-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:10:32.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-h97lg" for this suite.
Jul 9 19:10:38.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:10:41.402: INFO: namespace: e2e-tests-projected-h97lg, resource: bindings, ignored listing per whitelist
Jul 9 19:10:41.732: INFO: namespace e2e-tests-projected-h97lg deletion completed in 9.608230757s
• [SLOW TEST:14.011 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:422
------------------------------
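Note on the projected-volume test above: it mounts a projected ConfigMap volume with an explicit defaultMode while the pod runs as a non-root UID with an fsGroup, then verifies mode and ownership of the mounted file. A sketch of the relevant fields; the mode 0440 and the IDs are assumptions, since the log does not print them:

mode := int32(0440)
fsGroup, uid := int64(1001), int64(1000)

vol := corev1.Volume{
    Name: "projected-configmap-volume",
    VolumeSource: corev1.VolumeSource{
        Projected: &corev1.ProjectedVolumeSource{
            DefaultMode: &mode, // applied to files the projection writes
            Sources: []corev1.VolumeProjection{{
                ConfigMap: &corev1.ConfigMapProjection{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
                },
            }},
        },
    },
}
// fsGroup makes the kubelet chown the volume; RunAsUser keeps it non-root.
podSecurity := &corev1.PodSecurityContext{RunAsUser: &uid, FSGroup: &fsGroup}

The "multiple volumes in the same pod" spec later in this log (19:10:42) reuses the same projection, mounted under two different volume names in one pod.
------------------------------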
[sig-storage] Projected
updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:08:44.846: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:08:48.929: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Jul 9 19:08:49.775: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Jul 9 19:08:50.089: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-b9w9s
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
Jul 9 19:08:50.558: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
STEP: Creating projection with configMap that has name projected-configmap-test-upd-2ffc6e90-83e6-11e8-881a-28d244b00276
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-2ffc6e90-83e6-11e8-881a-28d244b00276
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:10:21.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b9w9s" for this suite.
Jul 9 19:10:43.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:10:46.903: INFO: namespace: e2e-tests-projected-b9w9s, resource: bindings, ignored listing per whitelist
Jul 9 19:10:48.827: INFO: namespace e2e-tests-projected-b9w9s deletion completed in 27.040637991s
• [SLOW TEST:123.981 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
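Note on the slow runtime above (~91s between pod creation at 19:08 and teardown at 19:10): after the ConfigMap is updated, the kubelet only refreshes projected volume content on its periodic sync, so the test polls the mounted file until the new value appears. A minimal sketch of the update step, assuming the pre-1.17 client-go Update signature that this vendored tree uses (newer client-go takes a context and options):

cm.Data = map[string]string{"data-1": "value-2"} // assumed key/value
if _, err := clientset.CoreV1().ConfigMaps(ns).Update(cm); err != nil {
    framework.Failf("failed to update configmap: %v", err)
}
// ...then poll the file inside the pod until it reads "value-2".
------------------------------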
S
------------------------------
[sig-storage] Projected
should be consumable in multiple volumes in the same pod [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:10:39.890: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:10:42.001: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-dw7b2
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable in multiple volumes in the same pod [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating configMap with name projected-configmap-test-volume-72d724e5-83e6-11e8-992b-28d244b00276
STEP: Creating a pod to test consume configMaps
Jul 9 19:10:42.805: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-72dcf3bd-83e6-11e8-992b-28d244b00276" in namespace "e2e-tests-projected-dw7b2" to be "success or failure"
Jul 9 19:10:42.846: INFO: Pod "pod-projected-configmaps-72dcf3bd-83e6-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 40.524707ms
Jul 9 19:10:44.883: INFO: Pod "pod-projected-configmaps-72dcf3bd-83e6-11e8-992b-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.077334435s
STEP: Saw pod success
Jul 9 19:10:44.883: INFO: Pod "pod-projected-configmaps-72dcf3bd-83e6-11e8-992b-28d244b00276" satisfied condition "success or failure"
Jul 9 19:10:44.933: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-projected-configmaps-72dcf3bd-83e6-11e8-992b-28d244b00276 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jul 9 19:10:45.039: INFO: Waiting for pod pod-projected-configmaps-72dcf3bd-83e6-11e8-992b-28d244b00276 to disappear
Jul 9 19:10:45.077: INFO: Pod pod-projected-configmaps-72dcf3bd-83e6-11e8-992b-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:10:45.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dw7b2" for this suite.
Jul 9 19:10:51.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:10:55.279: INFO: namespace: e2e-tests-projected-dw7b2, resource: bindings, ignored listing per whitelist
Jul 9 19:10:55.603: INFO: namespace e2e-tests-projected-dw7b2 deletion completed in 10.483606757s
• [SLOW TEST:15.713 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
should be consumable in multiple volumes in the same pod [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-storage] Downward API volume
should set mode on item file [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:10:41.733: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:10:43.292: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-t27sb
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:38
[It] should set mode on item file [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:10:43.923: INFO: Waiting up to 5m0s for pod "downwardapi-volume-73875f2c-83e6-11e8-8401-28d244b00276" in namespace "e2e-tests-downward-api-t27sb" to be "success or failure"
Jul 9 19:10:43.955: INFO: Pod "downwardapi-volume-73875f2c-83e6-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 31.758628ms
Jul 9 19:10:45.984: INFO: Pod "downwardapi-volume-73875f2c-83e6-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060640477s
STEP: Saw pod success
Jul 9 19:10:45.984: INFO: Pod "downwardapi-volume-73875f2c-83e6-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:10:46.012: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-73875f2c-83e6-11e8-8401-28d244b00276 container client-container: <nil>
STEP: delete the pod
Jul 9 19:10:46.078: INFO: Waiting for pod downwardapi-volume-73875f2c-83e6-11e8-8401-28d244b00276 to disappear
Jul 9 19:10:46.106: INFO: Pod downwardapi-volume-73875f2c-83e6-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:10:46.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-t27sb" for this suite.
Jul 9 19:10:52.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:10:55.554: INFO: namespace: e2e-tests-downward-api-t27sb, resource: bindings, ignored listing per whitelist
Jul 9 19:10:55.783: INFO: namespace e2e-tests-downward-api-t27sb deletion completed in 9.633243258s
• [SLOW TEST:14.050 seconds]
[sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:33
should set mode on item file [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
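Note on "should set mode on item file": the downward API volume above sets a per-item Mode on a DownwardAPIVolumeFile and checks the permissions of the mounted file. A sketch with an assumed mode of 0400 (the concrete value is not in the log):

mode := int32(0400)
vol := corev1.Volume{
    Name: "podinfo",
    VolumeSource: corev1.VolumeSource{
        DownwardAPI: &corev1.DownwardAPIVolumeSource{
            Items: []corev1.DownwardAPIVolumeFile{{
                Path: "podname",
                Mode: &mode, // per-item mode overrides the volume default
                FieldRef: &corev1.ObjectFieldSelector{
                    APIVersion: "v1",
                    FieldPath:  "metadata.name",
                },
            }},
        },
    },
}
------------------------------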
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:419
Jul 9 19:10:55.784: INFO: This plugin does not isolate namespaces by default.
[AfterEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:10:55.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:10:55.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[Area:Networking] services
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10
when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:418
should allow connections to services in the default namespace from a pod in another namespace on a different node [Suite:openshift/conformance/parallel] [BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:52
Jul 9 19:10:55.784: This plugin does not isolate namespaces by default.
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
SS
------------------------------
[Feature:Builds][pruning] prune builds based on settings in the buildconfig
[Conformance] buildconfigs should have a default history limit set when created via the group api [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:294
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:10:55.604: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:10:57.827: INFO: configPath is now "/tmp/e2e-test-build-pruning-hptxt-user.kubeconfig"
Jul 9 19:10:57.827: INFO: The user is now "e2e-test-build-pruning-hptxt-user"
Jul 9 19:10:57.827: INFO: Creating project "e2e-test-build-pruning-hptxt"
Jul 9 19:10:57.977: INFO: Waiting on permissions in project "e2e-test-build-pruning-hptxt" ...
[BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:37
Jul 9 19:10:58.038: INFO:
docker info output:
Containers: 4
Running: 0
Paused: 0
Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 20
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:41
STEP: waiting for builder service account
STEP: waiting for openshift namespace imagestreams
Jul 9 19:10:58.171: INFO: Running scan #0
Jul 9 19:10:58.171: INFO: Checking language ruby
Jul 9 19:10:58.202: INFO: Checking tag 2.0
Jul 9 19:10:58.202: INFO: Checking tag 2.2
Jul 9 19:10:58.202: INFO: Checking tag 2.3
Jul 9 19:10:58.202: INFO: Checking tag 2.4
Jul 9 19:10:58.202: INFO: Checking tag 2.5
Jul 9 19:10:58.202: INFO: Checking tag latest
Jul 9 19:10:58.202: INFO: Checking language nodejs
Jul 9 19:10:58.241: INFO: Checking tag 0.10
Jul 9 19:10:58.241: INFO: Checking tag 4
Jul 9 19:10:58.241: INFO: Checking tag 6
Jul 9 19:10:58.241: INFO: Checking tag 8
Jul 9 19:10:58.241: INFO: Checking tag latest
Jul 9 19:10:58.241: INFO: Checking language perl
Jul 9 19:10:58.280: INFO: Checking tag 5.16
Jul 9 19:10:58.280: INFO: Checking tag 5.20
Jul 9 19:10:58.280: INFO: Checking tag 5.24
Jul 9 19:10:58.280: INFO: Checking tag latest
Jul 9 19:10:58.280: INFO: Checking language php
Jul 9 19:10:58.318: INFO: Checking tag latest
Jul 9 19:10:58.318: INFO: Checking tag 5.5
Jul 9 19:10:58.318: INFO: Checking tag 5.6
Jul 9 19:10:58.318: INFO: Checking tag 7.0
Jul 9 19:10:58.318: INFO: Checking tag 7.1
Jul 9 19:10:58.318: INFO: Checking language python
Jul 9 19:10:58.375: INFO: Checking tag 2.7
Jul 9 19:10:58.375: INFO: Checking tag 3.3
Jul 9 19:10:58.375: INFO: Checking tag 3.4
Jul 9 19:10:58.375: INFO: Checking tag 3.5
Jul 9 19:10:58.375: INFO: Checking tag 3.6
Jul 9 19:10:58.375: INFO: Checking tag latest
Jul 9 19:10:58.375: INFO: Checking language wildfly
Jul 9 19:10:58.405: INFO: Checking tag 11.0
Jul 9 19:10:58.405: INFO: Checking tag 12.0
Jul 9 19:10:58.405: INFO: Checking tag 8.1
Jul 9 19:10:58.405: INFO: Checking tag 9.0
Jul 9 19:10:58.405: INFO: Checking tag latest
Jul 9 19:10:58.405: INFO: Checking tag 10.0
Jul 9 19:10:58.405: INFO: Checking tag 10.1
Jul 9 19:10:58.405: INFO: Checking language mysql
Jul 9 19:10:58.444: INFO: Checking tag 5.5
Jul 9 19:10:58.444: INFO: Checking tag 5.6
Jul 9 19:10:58.444: INFO: Checking tag 5.7
Jul 9 19:10:58.444: INFO: Checking tag latest
Jul 9 19:10:58.444: INFO: Checking language postgresql
Jul 9 19:10:58.476: INFO: Checking tag 9.5
Jul 9 19:10:58.476: INFO: Checking tag 9.6
Jul 9 19:10:58.476: INFO: Checking tag latest
Jul 9 19:10:58.476: INFO: Checking tag 9.2
Jul 9 19:10:58.476: INFO: Checking tag 9.4
Jul 9 19:10:58.476: INFO: Checking language mongodb
Jul 9 19:10:58.508: INFO: Checking tag 2.4
Jul 9 19:10:58.508: INFO: Checking tag 2.6
Jul 9 19:10:58.508: INFO: Checking tag 3.2
Jul 9 19:10:58.508: INFO: Checking tag 3.4
Jul 9 19:10:58.508: INFO: Checking tag latest
Jul 9 19:10:58.508: INFO: Checking language jenkins
Jul 9 19:10:58.547: INFO: Checking tag 1
Jul 9 19:10:58.547: INFO: Checking tag 2
Jul 9 19:10:58.547: INFO: Checking tag latest
Jul 9 19:10:58.547: INFO: Success!
STEP: creating test image stream
Jul 9 19:10:58.547: INFO: Running 'oc create --config=/tmp/e2e-test-build-pruning-hptxt-user.kubeconfig --namespace=e2e-test-build-pruning-hptxt -f /tmp/fixture-testdata-dir877664294/test/extended/testdata/builds/build-pruning/imagestream.yaml'
imagestream.image.openshift.io "myphp" created
[It] [Conformance] buildconfigs should have a default history limit set when created via the group api [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:294
STEP: creating a build config with the group api
Jul 9 19:10:58.824: INFO: Running 'oc create --config=/tmp/e2e-test-build-pruning-hptxt-user.kubeconfig --namespace=e2e-test-build-pruning-hptxt -f /tmp/fixture-testdata-dir877664294/test/extended/testdata/builds/build-pruning/default-group-build-config.yaml'
buildconfig.build.openshift.io "myphp" created
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:56
[AfterEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:10:59.264: INFO: namespace : e2e-test-build-pruning-hptxt api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:11:05.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:9.748 seconds]
[Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:21
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:35
[Conformance] buildconfigs should have a default history limit set when created via the group api [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:294
------------------------------
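Note on the build-pruning spec above: nothing is actually built, which is why the whole spec finishes in under ten seconds. The test only creates a BuildConfig through the build.openshift.io group API and asserts the server defaulted the pruning fields (my reading, from the spec name, is that the companion legacy-API case expects no defaulting). The fields in question, from github.com/openshift/api/build/v1:

import buildv1 "github.com/openshift/api/build/v1"

// After a create through the group API the server is expected to have
// filled in both history-limit pointers.
func historyLimitsDefaulted(bc *buildv1.BuildConfig) bool {
    return bc.Spec.SuccessfulBuildsHistoryLimit != nil &&
        bc.Spec.FailedBuildsHistoryLimit != nil
}
------------------------------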
[Feature:DeploymentConfig] deploymentconfigs
should adhere to Three Laws of Controllers [Conformance] [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1137
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:10:09.487: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:10:11.378: INFO: configPath is now "/tmp/e2e-test-cli-deployment-kj6d8-user.kubeconfig"
Jul 9 19:10:11.378: INFO: The user is now "e2e-test-cli-deployment-kj6d8-user"
Jul 9 19:10:11.378: INFO: Creating project "e2e-test-cli-deployment-kj6d8"
Jul 9 19:10:11.495: INFO: Waiting on permissions in project "e2e-test-cli-deployment-kj6d8" ...
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should adhere to Three Laws of Controllers [Conformance] [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1137
STEP: should create ControllerRef in RCs it creates
Jul 9 19:10:24.708: INFO: Latest rollout of dc/deployment-simple (rc/deployment-simple-1) is complete.
STEP: releasing RCs that no longer match its selector
STEP: adopting RCs that match its selector and have no ControllerRef
STEP: deleting owned RCs when deleted
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1132
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:11:00.476: INFO: namespace : e2e-test-cli-deployment-kj6d8 api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:11:06.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:57.069 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1130
should adhere to Three Laws of Controllers [Conformance] [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1137
------------------------------
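Note on the "Three Laws of Controllers" steps above (create ControllerRef, release on selector mismatch, adopt orphans, cascade delete on removal): each reduces to a check on the controller ownerReference of a ReplicationController. A sketch using the apimachinery helper available in this vendored tree:

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
)

// ownedBy reports whether rc carries a controller ownerReference back to
// the deployment config with the given UID. GetControllerOf returns nil
// for orphans, which is what the "releasing RCs" step looks for.
func ownedBy(rc metav1.Object, dcUID types.UID) bool {
    ref := metav1.GetControllerOf(rc)
    return ref != nil && ref.UID == dcUID
}
------------------------------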
S
------------------------------
[k8s.io] Probing container
with readiness probe that fails should never be ready and never restart [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:09:45.979: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:09:47.706: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-container-probe-s5hdx
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[AfterEach] [k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:10:48.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-s5hdx" for this suite.
Jul 9 19:11:10.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:11:13.589: INFO: namespace: e2e-tests-container-probe-s5hdx, resource: bindings, ignored listing per whitelist
Jul 9 19:11:13.871: INFO: namespace e2e-tests-container-probe-s5hdx deletion completed in 25.444058596s
• [SLOW TEST:87.892 seconds]
[k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
with readiness probe that fails should never be ready and never restart [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
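Note on the readiness-probe spec above: the minute of silence between [It] and [AfterEach] is the test watching the pod to confirm Ready never flips true and RestartCount stays 0, since readiness failures, unlike liveness failures, never restart a container. A sketch of an always-failing probe, using the corev1.Handler field as named in this vendored 1.10/1.11 API (much newer releases renamed it ProbeHandler); the delay and period are assumptions:

probe := &corev1.Probe{
    Handler: corev1.Handler{
        Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
    },
    InitialDelaySeconds: 5,
    PeriodSeconds:       5,
}
------------------------------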
S
------------------------------
[sig-storage] Downward API volume
should provide container's memory limit [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:11:06.559: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:11:08.279: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-snx9g
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:38
[It] should provide container's memory limit [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:11:08.898: INFO: Waiting up to 5m0s for pod "downwardapi-volume-826a7dd9-83e6-11e8-8fe2-28d244b00276" in namespace "e2e-tests-downward-api-snx9g" to be "success or failure"
Jul 9 19:11:08.932: INFO: Pod "downwardapi-volume-826a7dd9-83e6-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 33.604579ms
Jul 9 19:11:10.971: INFO: Pod "downwardapi-volume-826a7dd9-83e6-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.073107651s
STEP: Saw pod success
Jul 9 19:11:10.971: INFO: Pod "downwardapi-volume-826a7dd9-83e6-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:11:11.002: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-826a7dd9-83e6-11e8-8fe2-28d244b00276 container client-container: <nil>
STEP: delete the pod
Jul 9 19:11:11.087: INFO: Waiting for pod downwardapi-volume-826a7dd9-83e6-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:11:11.118: INFO: Pod downwardapi-volume-826a7dd9-83e6-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:11:11.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-snx9g" for this suite.
Jul 9 19:11:17.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:11:19.183: INFO: namespace: e2e-tests-downward-api-snx9g, resource: bindings, ignored listing per whitelist
Jul 9 19:11:21.143: INFO: namespace e2e-tests-downward-api-snx9g deletion completed in 9.977340991s
• [SLOW TEST:14.583 seconds]
[sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:33
should provide container's memory limit [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
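Note on "should provide container's memory limit": unlike the mode test earlier, this variant exposes a resource field rather than an object field, writing the client-container's limits.memory into the volume through a ResourceFieldRef. Sketch:

item := corev1.DownwardAPIVolumeFile{
    Path: "memory_limit",
    ResourceFieldRef: &corev1.ResourceFieldSelector{
        ContainerName: "client-container",
        Resource:      "limits.memory",
        // Divisor defaults to "1", so the file holds the limit in bytes;
        // the test pod declares limits.memory so the value is defined.
    },
}
------------------------------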
[k8s.io] Variable Expansion
should allow substituting values in a container's command [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Variable Expansion
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:11:05.353: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:11:07.324: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-var-expansion-7r8ws
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test substitution in container's command
Jul 9 19:11:08.163: INFO: Waiting up to 5m0s for pod "var-expansion-81f9e7f4-83e6-11e8-992b-28d244b00276" in namespace "e2e-tests-var-expansion-7r8ws" to be "success or failure"
Jul 9 19:11:08.214: INFO: Pod "var-expansion-81f9e7f4-83e6-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 50.78496ms
Jul 9 19:11:10.317: INFO: Pod "var-expansion-81f9e7f4-83e6-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153374508s
Jul 9 19:11:12.360: INFO: Pod "var-expansion-81f9e7f4-83e6-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196448037s
Jul 9 19:11:14.398: INFO: Pod "var-expansion-81f9e7f4-83e6-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 6.234583462s
Jul 9 19:11:16.495: INFO: Pod "var-expansion-81f9e7f4-83e6-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 8.331559739s
Jul 9 19:11:18.535: INFO: Pod "var-expansion-81f9e7f4-83e6-11e8-992b-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.371487095s
STEP: Saw pod success
Jul 9 19:11:18.535: INFO: Pod "var-expansion-81f9e7f4-83e6-11e8-992b-28d244b00276" satisfied condition "success or failure"
Jul 9 19:11:18.572: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod var-expansion-81f9e7f4-83e6-11e8-992b-28d244b00276 container dapi-container: <nil>
STEP: delete the pod
Jul 9 19:11:18.662: INFO: Waiting for pod var-expansion-81f9e7f4-83e6-11e8-992b-28d244b00276 to disappear
Jul 9 19:11:18.698: INFO: Pod var-expansion-81f9e7f4-83e6-11e8-992b-28d244b00276 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:11:18.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-7r8ws" for this suite.
Jul 9 19:11:24.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:11:28.792: INFO: namespace: e2e-tests-var-expansion-7r8ws, resource: bindings, ignored listing per whitelist
Jul 9 19:11:29.280: INFO: namespace e2e-tests-var-expansion-7r8ws deletion completed in 10.540215906s
• [SLOW TEST:23.927 seconds]
[k8s.io] Variable Expansion
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should allow substituting values in a container's command [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
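Note on the variable-expansion spec above: the kubelet rewrites $(VAR) references in a container's Command and Args from the container's declared Env before the process starts, so the shell never sees the $() syntax. A sketch (the variable name and message are illustrative, not from the log):

container := corev1.Container{
    Name:    "dapi-container",
    Image:   "busybox",
    // $(MESSAGE) is expanded by the kubelet, not by the shell.
    Command: []string{"/bin/sh", "-c", "echo \"$(MESSAGE)\""},
    Env:     []corev1.EnvVar{{Name: "MESSAGE", Value: "hello from substitution"}},
}

The test then reads the pod's log and treats the expected expanded output as the "success or failure" condition seen above.
------------------------------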
SSSS
------------------------------
[Conformance][Area:Networking][Feature:Router] The HAProxy router
should override the route host with a custom value [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:109
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:10:55.787: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:10:57.402: INFO: configPath is now "/tmp/e2e-test-router-scoped-smdsm-user.kubeconfig"
Jul 9 19:10:57.402: INFO: The user is now "e2e-test-router-scoped-smdsm-user"
Jul 9 19:10:57.402: INFO: Creating project "e2e-test-router-scoped-smdsm"
Jul 9 19:10:57.610: INFO: Waiting on permissions in project "e2e-test-router-scoped-smdsm" ...
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:48
Jul 9 19:10:57.705: INFO: Running 'oc new-app --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-router-scoped-smdsm -f /tmp/fixture-testdata-dir180677416/test/extended/testdata/scoped-router.yaml -p IMAGE=openshift/origin-haproxy-router'
--> Deploying template "e2e-test-router-scoped-smdsm/" for "/tmp/fixture-testdata-dir180677416/test/extended/testdata/scoped-router.yaml" to project e2e-test-router-scoped-smdsm
* With parameters:
* IMAGE=openshift/origin-haproxy-router
* SCOPE=["--name=test-scoped", "--namespace=$(POD_NAMESPACE)", "--loglevel=4", "--labels=select=first"]
--> Creating resources ...
pod "router-scoped" created
pod "router-override" created
pod "router-override-domains" created
rolebinding "system-router" created
route "route-1" created
route "route-2" created
route "route-override-domain-1" created
route "route-override-domain-2" created
service "endpoints" created
pod "endpoint-1" created
--> Success
Access your application via route 'first.example.com'
Access your application via route 'second.example.com'
Access your application via route 'y.a.null.ptr'
Access your application via route 'main.void.str'
Run 'oc status' to view your app.
[It] should override the route host with a custom value [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:109
Jul 9 19:10:58.736: INFO: Creating new exec pod
STEP: creating a scoped router from a config file "/tmp/fixture-testdata-dir180677416/test/extended/testdata/scoped-router.yaml"
STEP: waiting for the healthz endpoint to respond
Jul 9 19:11:07.875: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-router-scoped-smdsm execpod -- /bin/sh -c
set -e
for i in $(seq 1 180); do
code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: 10.2.2.224' "http://10.2.2.224:1936/healthz" ) || rc=$?
if [[ "${rc:-0}" -eq 0 ]]; then
echo $code
if [[ $code -eq 200 ]]; then
exit 0
fi
if [[ $code -ne 503 ]]; then
exit 1
fi
else
echo "error ${rc}" 1>&2
fi
sleep 1
done
'
Jul 9 19:11:08.612: INFO: stderr: ""
STEP: waiting for the valid route to respond
Jul 9 19:11:08.613: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-router-scoped-smdsm execpod -- /bin/sh -c
set -e
for i in $(seq 1 180); do
code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: route-1-e2e-test-router-scoped-smdsm.myapps.mycompany.com' "http://10.2.2.224/Letter" ) || rc=$?
if [[ "${rc:-0}" -eq 0 ]]; then
echo $code
if [[ $code -eq 200 ]]; then
exit 0
fi
if [[ $code -ne 503 ]]; then
exit 1
fi
else
echo "error ${rc}" 1>&2
fi
sleep 1
done
'
Jul 9 19:11:15.466: INFO: stderr: ""
STEP: checking that the stored domain name does not match a route
Jul 9 19:11:15.466: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-router-scoped-smdsm execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: first.example.com' "http://10.2.2.224/Letter"'
Jul 9 19:11:16.104: INFO: stderr: ""
STEP: checking that route-1-e2e-test-router-scoped-smdsm.myapps.mycompany.com matches a route
Jul 9 19:11:16.104: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-router-scoped-smdsm execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: route-1-e2e-test-router-scoped-smdsm.myapps.mycompany.com' "http://10.2.2.224/Letter"'
Jul 9 19:11:16.822: INFO: stderr: ""
STEP: checking that route-2-e2e-test-router-scoped-smdsm.myapps.mycompany.com matches a route
Jul 9 19:11:16.822: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-router-scoped-smdsm execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: route-2-e2e-test-router-scoped-smdsm.myapps.mycompany.com' "http://10.2.2.224/Letter"'
Jul 9 19:11:17.497: INFO: stderr: ""
STEP: checking that the router reported the correct ingress and override
Jul 9 19:11:17.550: INFO: Selected: &route.RouteIngress{Host:"route-1-e2e-test-router-scoped-smdsm.myapps.mycompany.com", RouterName:"test-override", Conditions:[]route.RouteIngressCondition{route.RouteIngressCondition{Type:"Admitted", Status:"True", Reason:"", Message:"", LastTransitionTime:(*v1.Time)(0xc420ac0020)}}, WildcardPolicy:"None", RouterCanonicalHostname:""}, All: []route.RouteIngress{route.RouteIngress{Host:"first.example.com", RouterName:"router", Conditions:[]route.RouteIngressCondition{route.RouteIngressCondition{Type:"Admitted", Status:"True", Reason:"", Message:"", LastTransitionTime:(*v1.Time)(0xc421923c00)}}, WildcardPolicy:"None", RouterCanonicalHostname:""}, route.RouteIngress{Host:"first.example.com", RouterName:"test-override-domains", Conditions:[]route.RouteIngressCondition{route.RouteIngressCondition{Type:"Admitted", Status:"True", Reason:"", Message:"", LastTransitionTime:(*v1.Time)(0xc421923d60)}}, WildcardPolicy:"None", RouterCanonicalHostname:""}, route.RouteIngress{Host:"first.example.com", RouterName:"test-scoped", Conditions:[]route.RouteIngressCondition{route.RouteIngressCondition{Type:"Admitted", Status:"True", Reason:"", Message:"", LastTransitionTime:(*v1.Time)(0xc421923ec0)}}, WildcardPolicy:"None", RouterCanonicalHostname:""}, route.RouteIngress{Host:"route-1-e2e-test-router-scoped-smdsm.myapps.mycompany.com", RouterName:"test-override", Conditions:[]route.RouteIngressCondition{route.RouteIngressCondition{Type:"Admitted", Status:"True", Reason:"", Message:"", LastTransitionTime:(*v1.Time)(0xc420ac0020)}}, WildcardPolicy:"None", RouterCanonicalHostname:""}}
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:36
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:11:17.645: INFO: namespace : e2e-test-router-scoped-smdsm api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:11:29.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:33.943 seconds]
[Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:26
The HAProxy router
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:67
should override the route host with a custom value [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:109
------------------------------
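The healthz and route checks above embed a retry loop in the exec'd shell. One wrinkle in the inline version: rc is only assigned when curl fails, so once an attempt errors, later successful attempts still read the stale value. A standalone sketch that resets it on each pass, assuming the router's stats port is reachable; ROUTER_IP is a placeholder:

#!/bin/sh
ROUTER_IP=10.2.2.224
for i in $(seq 1 180); do
  rc=0  # reset so an earlier curl failure is not sticky
  code=$(curl -k -s -o /dev/null -w '%{http_code}' \
    --header "Host: ${ROUTER_IP}" "http://${ROUTER_IP}:1936/healthz") || rc=$?
  if [ "$rc" -eq 0 ]; then
    [ "$code" -eq 200 ] && exit 0   # healthy
    [ "$code" -ne 503 ] && exit 1   # unexpected status, stop retrying
  else
    echo "error ${rc}" 1>&2         # curl itself failed, retry
  fi
  sleep 1
done
exit 1                               # never became healthy
------------------------------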
S
------------------------------
[sig-storage] Projected
should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:11:21.144: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:11:22.818: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-8qd2x
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:11:23.493: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b1ddb7d-83e6-11e8-8fe2-28d244b00276" in namespace "e2e-tests-projected-8qd2x" to be "success or failure"
Jul 9 19:11:23.539: INFO: Pod "downwardapi-volume-8b1ddb7d-83e6-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 46.274665ms
Jul 9 19:11:25.582: INFO: Pod "downwardapi-volume-8b1ddb7d-83e6-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08953228s
Jul 9 19:11:27.618: INFO: Pod "downwardapi-volume-8b1ddb7d-83e6-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.125776966s
STEP: Saw pod success
Jul 9 19:11:27.618: INFO: Pod "downwardapi-volume-8b1ddb7d-83e6-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:11:27.651: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-8b1ddb7d-83e6-11e8-8fe2-28d244b00276 container client-container: <nil>
STEP: delete the pod
Jul 9 19:11:27.733: INFO: Waiting for pod downwardapi-volume-8b1ddb7d-83e6-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:11:27.763: INFO: Pod downwardapi-volume-8b1ddb7d-83e6-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:11:27.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8qd2x" for this suite.
Jul 9 19:11:33.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:11:37.118: INFO: namespace: e2e-tests-projected-8qd2x, resource: bindings, ignored listing per whitelist
Jul 9 19:11:37.781: INFO: namespace e2e-tests-projected-8qd2x deletion completed in 9.97551978s
• [SLOW TEST:16.637 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
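The Projected spec above asserts that limits.memory, exposed through a downward API volume, falls back to node allocatable when the container sets no memory limit. A minimal sketch of such a pod, assuming a reachable cluster; all names are illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/podinfo/memory_limit"]
    # No resources.limits.memory is set, so the projected file reports
    # the node's allocatable memory instead.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
kubectl logs downwardapi-volume-demo   # prints the limit in bytes
------------------------------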
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:11:29.284: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:11:31.450: INFO: configPath is now "/tmp/e2e-test-router-reencrypt-rvndf-user.kubeconfig"
Jul 9 19:11:31.450: INFO: The user is now "e2e-test-router-reencrypt-rvndf-user"
Jul 9 19:11:31.450: INFO: Creating project "e2e-test-router-reencrypt-rvndf"
Jul 9 19:11:31.579: INFO: Waiting on permissions in project "e2e-test-router-reencrypt-rvndf" ...
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:41
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:29
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:11:31.726: INFO: namespace : e2e-test-router-reencrypt-rvndf api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:11:37.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [8.543 seconds]
[Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:18
The HAProxy router [BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:52
should support reencrypt to services backed by a serving certificate automatically [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:53
no router installed on the cluster
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:44
------------------------------
SSS
------------------------------
[Conformance][templates] templateinstance cross-namespace test
should create and delete objects across namespaces [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_cross_namespace.go:30
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][templates] templateinstance cross-namespace test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:11:29.732: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][templates] templateinstance cross-namespace test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:11:31.293: INFO: configPath is now "/tmp/e2e-test-templates-b585v-user.kubeconfig"
Jul 9 19:11:31.293: INFO: The user is now "e2e-test-templates-b585v-user"
Jul 9 19:11:31.293: INFO: Creating project "e2e-test-templates-b585v"
Jul 9 19:11:31.440: INFO: Waiting on permissions in project "e2e-test-templates-b585v" ...
[BeforeEach] [Conformance][templates] templateinstance cross-namespace test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:11:31.487: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][templates] templateinstance cross-namespace test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:11:33.023: INFO: configPath is now "/tmp/e2e-test-templates2-nghzw-user.kubeconfig"
Jul 9 19:11:33.023: INFO: The user is now "e2e-test-templates2-nghzw-user"
Jul 9 19:11:33.023: INFO: Creating project "e2e-test-templates2-nghzw"
Jul 9 19:11:33.263: INFO: Waiting on permissions in project "e2e-test-templates2-nghzw" ...
[It] should create and delete objects across namespaces [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_cross_namespace.go:30
Jul 9 19:11:33.304: INFO: Running 'oc adm --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-templates2-nghzw policy add-role-to-user admin e2e-test-templates-b585v-user'
role "admin" added: "e2e-test-templates-b585v-user"
STEP: creating the templateinstance
STEP: deleting the templateinstance
[AfterEach] [Conformance][templates] templateinstance cross-namespace test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:11:35.637: INFO: namespace : e2e-test-templates-b585v api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][templates] templateinstance cross-namespace test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:11:41.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [Conformance][templates] templateinstance cross-namespace test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:11:41.769: INFO: namespace : e2e-test-templates2-nghzw api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][templates] templateinstance cross-namespace test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:11:47.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:18.111 seconds]
[Conformance][templates] templateinstance cross-namespace test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_cross_namespace.go:22
should create and delete objects across namespaces [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_cross_namespace.go:30
------------------------------
[sig-storage] EmptyDir volumes when FSGroup is specified
nonexistent volume subPath should have the correct mode and owner using FSGroup [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:53
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:11:37.783: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:11:39.394: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-t499c
STEP: Waiting for a default service account to be provisioned in namespace
[It] nonexistent volume subPath should have the correct mode and owner using FSGroup [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:53
STEP: Creating a pod to test emptydir subpath on tmpfs
Jul 9 19:11:40.061: INFO: Waiting up to 5m0s for pod "pod-94fd2bd8-83e6-11e8-8fe2-28d244b00276" in namespace "e2e-tests-emptydir-t499c" to be "success or failure"
Jul 9 19:11:40.092: INFO: Pod "pod-94fd2bd8-83e6-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 31.167765ms
Jul 9 19:11:42.123: INFO: Pod "pod-94fd2bd8-83e6-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062039439s
STEP: Saw pod success
Jul 9 19:11:42.123: INFO: Pod "pod-94fd2bd8-83e6-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:11:42.162: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-94fd2bd8-83e6-11e8-8fe2-28d244b00276 container test-container: <nil>
STEP: delete the pod
Jul 9 19:11:42.262: INFO: Waiting for pod pod-94fd2bd8-83e6-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:11:42.292: INFO: Pod pod-94fd2bd8-83e6-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:11:42.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-t499c" for this suite.
Jul 9 19:11:48.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:11:50.328: INFO: namespace: e2e-tests-emptydir-t499c, resource: bindings, ignored listing per whitelist
Jul 9 19:11:52.155: INFO: namespace e2e-tests-emptydir-t499c deletion completed in 9.826706369s
• [SLOW TEST:14.372 seconds]
[sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
when FSGroup is specified
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:44
nonexistent volume subPath should have the correct mode and owner using FSGroup [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:53
------------------------------
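For reference, the shape of pod the FSGroup/subPath spec above creates: an fsGroup in the pod security context, a tmpfs-backed emptyDir, and a mount through a subPath that does not exist until the kubelet creates it. A sketch under those assumptions; the names and group id are illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-fsgroup-demo
spec:
  restartPolicy: Never
  securityContext:
    fsGroup: 123             # group ownership applied to the volume
  containers:
  - name: test-container
    image: busybox
    # Show the mode and ownership the kubelet gave the subPath dir.
    command: ["/bin/sh", "-c", "ls -ld /data && id"]
    volumeMounts:
    - name: scratch
      mountPath: /data
      subPath: newdir        # created on demand inside the volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory         # tmpfs, as in the spec above
EOF
------------------------------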
[sig-storage] EmptyDir volumes when FSGroup is specified
volume on default medium should have the correct mode using FSGroup [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:61
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:11:37.830: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:11:39.688: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-kmg6b
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode using FSGroup [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:61
STEP: Creating a pod to test emptydir volume type on node default medium
Jul 9 19:11:40.434: INFO: Waiting up to 5m0s for pod "pod-95357f3c-83e6-11e8-992b-28d244b00276" in namespace "e2e-tests-emptydir-kmg6b" to be "success or failure"
Jul 9 19:11:40.471: INFO: Pod "pod-95357f3c-83e6-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 36.789536ms
Jul 9 19:11:42.542: INFO: Pod "pod-95357f3c-83e6-11e8-992b-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.108162582s
STEP: Saw pod success
Jul 9 19:11:42.542: INFO: Pod "pod-95357f3c-83e6-11e8-992b-28d244b00276" satisfied condition "success or failure"
Jul 9 19:11:42.581: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-95357f3c-83e6-11e8-992b-28d244b00276 container test-container: <nil>
STEP: delete the pod
Jul 9 19:11:42.668: INFO: Waiting for pod pod-95357f3c-83e6-11e8-992b-28d244b00276 to disappear
Jul 9 19:11:42.704: INFO: Pod pod-95357f3c-83e6-11e8-992b-28d244b00276 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:11:42.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-kmg6b" for this suite.
Jul 9 19:11:48.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:11:52.324: INFO: namespace: e2e-tests-emptydir-kmg6b, resource: bindings, ignored listing per whitelist
Jul 9 19:11:53.097: INFO: namespace e2e-tests-emptydir-kmg6b deletion completed in 10.351115117s
• [SLOW TEST:15.267 seconds]
[sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
when FSGroup is specified
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:44
volume on default medium should have the correct mode using FSGroup [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:61
------------------------------
[sig-storage] Projected
should be consumable from pods in volume with defaultMode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:11:47.844: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:11:49.287: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-gr2vb
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable from pods in volume with defaultMode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating projection with secret that has name projected-secret-test-9ada9ad5-83e6-11e8-8401-28d244b00276
STEP: Creating a pod to test consume secrets
Jul 9 19:11:49.927: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9adfcf80-83e6-11e8-8401-28d244b00276" in namespace "e2e-tests-projected-gr2vb" to be "success or failure"
Jul 9 19:11:49.955: INFO: Pod "pod-projected-secrets-9adfcf80-83e6-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 27.730422ms
Jul 9 19:11:51.982: INFO: Pod "pod-projected-secrets-9adfcf80-83e6-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.055067008s
STEP: Saw pod success
Jul 9 19:11:51.982: INFO: Pod "pod-projected-secrets-9adfcf80-83e6-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:11:52.009: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-projected-secrets-9adfcf80-83e6-11e8-8401-28d244b00276 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jul 9 19:11:52.115: INFO: Waiting for pod pod-projected-secrets-9adfcf80-83e6-11e8-8401-28d244b00276 to disappear
Jul 9 19:11:52.148: INFO: Pod pod-projected-secrets-9adfcf80-83e6-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:11:52.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gr2vb" for this suite.
Jul 9 19:11:58.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:12:00.976: INFO: namespace: e2e-tests-projected-gr2vb, resource: bindings, ignored listing per whitelist
Jul 9 19:12:02.037: INFO: namespace e2e-tests-projected-gr2vb deletion completed in 9.85639901s
• [SLOW TEST:14.194 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
should be consumable from pods in volume with defaultMode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
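The defaultMode spec above boils down to: project a secret into a volume with a restrictive mode and confirm the mounted file carries it. A sketch, assuming a reachable cluster; the secret name and key are illustrative:

kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    # The file should be mounted r-------- (0400).
    command: ["/bin/sh", "-c", "ls -l /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0400      # YAML octal literal
      sources:
      - secret:
          name: projected-secret-demo
EOF
------------------------------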
SS
------------------------------
[Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
should fail resolving unresolvable valueFrom in docker build environment variable references [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:122
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:11:53.098: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:11:55.298: INFO: configPath is now "/tmp/e2e-test-build-valuefrom-q75fk-user.kubeconfig"
Jul 9 19:11:55.298: INFO: The user is now "e2e-test-build-valuefrom-q75fk-user"
Jul 9 19:11:55.298: INFO: Creating project "e2e-test-build-valuefrom-q75fk"
Jul 9 19:11:55.416: INFO: Waiting on permissions in project "e2e-test-build-valuefrom-q75fk" ...
[BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:27
Jul 9 19:11:55.477: INFO:
docker info output:
Containers: 4
Running: 0
Paused: 0
Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 20
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:38
STEP: waiting for builder service account
STEP: waiting for openshift namespace imagestreams
Jul 9 19:11:55.628: INFO: Running scan #0
Jul 9 19:11:55.628: INFO: Checking language ruby
Jul 9 19:11:55.683: INFO: Checking tag 2.0
Jul 9 19:11:55.683: INFO: Checking tag 2.2
Jul 9 19:11:55.683: INFO: Checking tag 2.3
Jul 9 19:11:55.683: INFO: Checking tag 2.4
Jul 9 19:11:55.683: INFO: Checking tag 2.5
Jul 9 19:11:55.683: INFO: Checking tag latest
Jul 9 19:11:55.683: INFO: Checking language nodejs
Jul 9 19:11:55.725: INFO: Checking tag 0.10
Jul 9 19:11:55.725: INFO: Checking tag 4
Jul 9 19:11:55.725: INFO: Checking tag 6
Jul 9 19:11:55.725: INFO: Checking tag 8
Jul 9 19:11:55.725: INFO: Checking tag latest
Jul 9 19:11:55.725: INFO: Checking language perl
Jul 9 19:11:55.757: INFO: Checking tag 5.16
Jul 9 19:11:55.757: INFO: Checking tag 5.20
Jul 9 19:11:55.757: INFO: Checking tag 5.24
Jul 9 19:11:55.757: INFO: Checking tag latest
Jul 9 19:11:55.757: INFO: Checking language php
Jul 9 19:11:55.789: INFO: Checking tag 7.1
Jul 9 19:11:55.789: INFO: Checking tag latest
Jul 9 19:11:55.789: INFO: Checking tag 5.5
Jul 9 19:11:55.789: INFO: Checking tag 5.6
Jul 9 19:11:55.789: INFO: Checking tag 7.0
Jul 9 19:11:55.789: INFO: Checking language python
Jul 9 19:11:55.825: INFO: Checking tag 3.4
Jul 9 19:11:55.825: INFO: Checking tag 3.5
Jul 9 19:11:55.825: INFO: Checking tag 3.6
Jul 9 19:11:55.825: INFO: Checking tag latest
Jul 9 19:11:55.825: INFO: Checking tag 2.7
Jul 9 19:11:55.825: INFO: Checking tag 3.3
Jul 9 19:11:55.825: INFO: Checking language wildfly
Jul 9 19:11:55.860: INFO: Checking tag 9.0
Jul 9 19:11:55.860: INFO: Checking tag latest
Jul 9 19:11:55.860: INFO: Checking tag 10.0
Jul 9 19:11:55.860: INFO: Checking tag 10.1
Jul 9 19:11:55.860: INFO: Checking tag 11.0
Jul 9 19:11:55.860: INFO: Checking tag 12.0
Jul 9 19:11:55.860: INFO: Checking tag 8.1
Jul 9 19:11:55.860: INFO: Checking language mysql
Jul 9 19:11:55.890: INFO: Checking tag latest
Jul 9 19:11:55.890: INFO: Checking tag 5.5
Jul 9 19:11:55.890: INFO: Checking tag 5.6
Jul 9 19:11:55.890: INFO: Checking tag 5.7
Jul 9 19:11:55.890: INFO: Checking language postgresql
Jul 9 19:11:55.924: INFO: Checking tag latest
Jul 9 19:11:55.924: INFO: Checking tag 9.2
Jul 9 19:11:55.924: INFO: Checking tag 9.4
Jul 9 19:11:55.924: INFO: Checking tag 9.5
Jul 9 19:11:55.924: INFO: Checking tag 9.6
Jul 9 19:11:55.924: INFO: Checking language mongodb
Jul 9 19:11:55.963: INFO: Checking tag 3.4
Jul 9 19:11:55.963: INFO: Checking tag latest
Jul 9 19:11:55.963: INFO: Checking tag 2.4
Jul 9 19:11:55.963: INFO: Checking tag 2.6
Jul 9 19:11:55.963: INFO: Checking tag 3.2
Jul 9 19:11:55.963: INFO: Checking language jenkins
Jul 9 19:11:55.996: INFO: Checking tag latest
Jul 9 19:11:55.996: INFO: Checking tag 1
Jul 9 19:11:55.996: INFO: Checking tag 2
Jul 9 19:11:55.996: INFO: Success!
STEP: creating test image stream
Jul 9 19:11:55.996: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-q75fk-user.kubeconfig --namespace=e2e-test-build-valuefrom-q75fk -f /tmp/fixture-testdata-dir877664294/test/extended/testdata/builds/valuefrom/test-is.json'
imagestream.image.openshift.io "test" created
STEP: creating test secret
Jul 9 19:11:56.351: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-q75fk-user.kubeconfig --namespace=e2e-test-build-valuefrom-q75fk -f /tmp/fixture-testdata-dir877664294/test/extended/testdata/builds/valuefrom/test-secret.yaml'
secret "mysecret" created
STEP: creating test configmap
Jul 9 19:11:56.911: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-q75fk-user.kubeconfig --namespace=e2e-test-build-valuefrom-q75fk -f /tmp/fixture-testdata-dir877664294/test/extended/testdata/builds/valuefrom/test-configmap.yaml'
configmap "myconfigmap" created
[It] should fail resolving unresolvable valueFrom in docker build environment variable references [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:122
STEP: creating test build config
Jul 9 19:11:57.312: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-q75fk-user.kubeconfig --namespace=e2e-test-build-valuefrom-q75fk -f /tmp/fixture-testdata-dir877664294/test/extended/testdata/builds/valuefrom/failed-docker-build-value-from-config.yaml'
buildconfig.build.openshift.io "mydockertest" created
STEP: starting test build
Jul 9 19:11:57.636: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-valuefrom-q75fk-user.kubeconfig --namespace=e2e-test-build-valuefrom-q75fk mydockertest -o=name'
Jul 9 19:11:57.927: INFO:
start-build output with args [mydockertest -o=name]:
Error><nil>
StdOut>
build/mydockertest-1
StdErr>
Jul 9 19:11:57.928: INFO: Waiting for mydockertest-1 to complete
Jul 9 19:12:04.011: INFO: WaitForABuild returning with error: The build "mydockertest-1" status is "Error"
Jul 9 19:12:04.011: INFO: Done waiting for mydockertest-1: util.BuildResult{BuildPath:"build/mydockertest-1", BuildName:"mydockertest-1", StartBuildStdErr:"", StartBuildStdOut:"build/mydockertest-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc421470300), BuildAttempt:true, BuildSuccess:false, BuildFailure:true, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc420dca1e0)}
with error: The build "mydockertest-1" status is "Error"
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:31
[AfterEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:12:04.100: INFO: namespace : e2e-test-build-valuefrom-q75fk api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:12:10.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:17.089 seconds]
[Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:13
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:26
should fail resolving unresolvable valueFrom in docker build environment variable references [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:122
------------------------------
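The valueFrom failure case above comes down to a build strategy env entry whose configMapKeyRef cannot be resolved, which leaves the build in status "Error". A sketch of a BuildConfig with that shape, not the fixture itself; the Dockerfile, names, and missing key are illustrative:

oc create -f - <<'EOF'
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: valuefrom-fail-demo
spec:
  source:
    dockerfile: "FROM busybox\nRUN echo $SOMEVALUE"
  strategy:
    dockerStrategy:
      env:
      - name: SOMEVALUE
        valueFrom:
          configMapKeyRef:
            name: myconfigmap
            key: no-such-key   # unresolvable reference
EOF
oc start-build valuefrom-fail-demo -o=name
oc get builds -w               # the build should end in Error
------------------------------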
SS
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that implements NetworkPolicy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:431
Jul 9 19:12:10.506: INFO: Could not check network plugin name: exit status 1. Assuming a non-OpenShift plugin
Jul 9 19:12:10.506: INFO: This plugin does not implement NetworkPolicy.
[AfterEach] when using a plugin that implements NetworkPolicy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:12:10.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.316 seconds]
NetworkPolicy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:48
when using a plugin that implements NetworkPolicy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:430
should enforce policy based on PodSelector [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] [BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:86
Jul 9 19:12:10.506: This plugin does not implement NetworkPolicy.
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
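Had the plugin implemented NetworkPolicy, the skipped spec would have exercised pod-selector-based isolation. A minimal policy of that shape, as a sketch with illustrative labels: only pods labeled access=true may reach pods labeled app=server in the policy's namespace.

kubectl create -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-labeled
spec:
  podSelector:
    matchLabels:
      app: server         # the pods being protected
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: "true"  # the only pods allowed in
EOF
------------------------------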
[k8s.io] Pods
should allow activeDeadlineSeconds to be updated [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:12:02.041: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:12:03.665: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-pods-wvqk5
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:127
[It] should allow activeDeadlineSeconds to be updated [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul 9 19:12:07.146: INFO: Successfully updated pod "pod-update-activedeadlineseconds-a379c7c5-83e6-11e8-8401-28d244b00276"
Jul 9 19:12:07.146: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-a379c7c5-83e6-11e8-8401-28d244b00276" in namespace "e2e-tests-pods-wvqk5" to be "terminated due to deadline exceeded"
Jul 9 19:12:07.174: INFO: Pod "pod-update-activedeadlineseconds-a379c7c5-83e6-11e8-8401-28d244b00276": Phase="Running", Reason="", readiness=true. Elapsed: 27.773489ms
Jul 9 19:12:09.205: INFO: Pod "pod-update-activedeadlineseconds-a379c7c5-83e6-11e8-8401-28d244b00276": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.058771469s
Jul 9 19:12:09.205: INFO: Pod "pod-update-activedeadlineseconds-a379c7c5-83e6-11e8-8401-28d244b00276" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:12:09.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-wvqk5" for this suite.
Jul 9 19:12:15.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:12:17.208: INFO: namespace: e2e-tests-pods-wvqk5, resource: bindings, ignored listing per whitelist
Jul 9 19:12:18.839: INFO: namespace e2e-tests-pods-wvqk5 deletion completed in 9.598590478s
• [SLOW TEST:16.798 seconds]
[k8s.io] Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should allow activeDeadlineSeconds to be updated [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
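The update step in the spec above is an ordinary pod spec patch; activeDeadlineSeconds is one of the few pod spec fields that may be changed on a running pod (it can be added, or lowered, but not raised). A sketch against a hypothetical running pod named pod-update-demo:

kubectl patch pod pod-update-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
# Once the deadline elapses the kubelet fails the pod:
kubectl get pod pod-update-demo -o jsonpath='{.status.reason}'   # DeadlineExceeded
------------------------------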
S
------------------------------
[k8s.io] InitContainer
should invoke init containers on a RestartAlways pod [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:103
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] InitContainer
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:11:13.874: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:11:15.455: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-init-container-9dmbk
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:40
[It] should invoke init containers on a RestartAlways pod [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:103
STEP: creating the pod
Jul 9 19:11:16.159: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:11:56.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-9dmbk" for this suite.
Jul 9 19:12:18.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:12:20.767: INFO: namespace: e2e-tests-init-container-9dmbk, resource: bindings, ignored listing per whitelist
Jul 9 19:12:22.186: INFO: namespace e2e-tests-init-container-9dmbk deletion completed in 25.976091167s
• [SLOW TEST:68.313 seconds]
[k8s.io] InitContainer
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should invoke init containers on a RestartAlways pod [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:103
------------------------------
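What the InitContainer spec above waits on: init containers run serially, each to completion, before any app container starts, and with RestartAlways the app container then stays up. A minimal sketch; names are illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:        # run in order, each must exit 0
  - name: init-1
    image: busybox
    command: ["/bin/true"]
  - name: init-2
    image: busybox
    command: ["/bin/true"]
  containers:
  - name: run-1
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
EOF
kubectl get pod init-demo -w   # Init:0/2 -> Init:1/2 -> Running
------------------------------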
SS
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:ImageLookup][registry] Image policy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:12:22.189: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:ImageLookup][registry] Image policy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:12:24.143: INFO: configPath is now "/tmp/e2e-test-resolve-local-names-x2lpn-user.kubeconfig"
Jul 9 19:12:24.143: INFO: The user is now "e2e-test-resolve-local-names-x2lpn-user"
Jul 9 19:12:24.143: INFO: Creating project "e2e-test-resolve-local-names-x2lpn"
Jul 9 19:12:24.351: INFO: Waiting on permissions in project "e2e-test-resolve-local-names-x2lpn" ...
[It] should perform lookup when the pod has the resolve-names annotation [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/images/resolve.go:73
Jul 9 19:12:24.400: INFO: Running 'oc import-image --config=/tmp/e2e-test-resolve-local-names-x2lpn-user.kubeconfig --namespace=e2e-test-resolve-local-names-x2lpn busybox:latest --confirm'
The import completed successfully.
Name: busybox
Namespace: e2e-test-resolve-local-names-x2lpn
Created: Less than a second ago
Labels: <none>
Annotations: openshift.io/image.dockerRepositoryCheck=2018-07-10T02:12:26Z
Docker Pull Spec: docker-registry.default.svc:5000/e2e-test-resolve-local-names-x2lpn/busybox
Image Lookup: local=false
Unique Images: 1
Tags: 1
latest
tagged from busybox:latest
* busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335
Less than a second ago
Image Name: busybox:latest
Docker Image: busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335
Name: sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335
Created: Less than a second ago
Annotations: image.openshift.io/dockerLayersOrder=ascending
Image Size: 724.6kB
Image Created: 6 weeks ago
Author: <none>
Arch: amd64
Command: sh
Working Dir: <none>
User: <none>
Exposes Ports: <none>
Docker Labels: <none>
Environment: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
[AfterEach] [Feature:ImageLookup][registry] Image policy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:12:26.312: INFO: namespace : e2e-test-resolve-local-names-x2lpn api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:ImageLookup][registry] Image policy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:12:32.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] [10.191 seconds]
[Feature:ImageLookup][registry] Image policy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/images/resolve.go:14
should perform lookup when the pod has the resolve-names annotation [Suite:openshift/conformance/parallel] [It]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/images/resolve.go:73
default image resolution is not configured, can't verify pod resolution
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/images/resolve.go:99
------------------------------
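The skip above is about cluster configuration, not the mechanism itself. Image stream lookup can be enabled by hand; the commands below are a sketch, and the per-pod annotation is the one the resolve.go fixture appears to use:

oc import-image busybox:latest --confirm
oc set image-lookup busybox      # sets imageLookupPolicy.local=true on the stream
# Per-pod alternative, as an annotation on the pod:
#   alpha.image.policy.openshift.io/resolve-names: "*"
------------------------------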
[Feature:DeploymentConfig] deploymentconfigs with multiple image change triggers [Conformance]
should run a successful deployment with multiple triggers [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:513
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:10:48.830: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:10:51.252: INFO: configPath is now "/tmp/e2e-test-cli-deployment-92vwf-user.kubeconfig"
Jul 9 19:10:51.252: INFO: The user is now "e2e-test-cli-deployment-92vwf-user"
Jul 9 19:10:51.252: INFO: Creating project "e2e-test-cli-deployment-92vwf"
Jul 9 19:10:51.410: INFO: Waiting on permissions in project "e2e-test-cli-deployment-92vwf" ...
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should run a successful deployment with multiple triggers [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:513
STEP: creating DC
STEP: verifying the deployment is marked complete
Jul 9 19:11:52.159: INFO: Latest rollout of dc/example (rc/example-1) is complete.
[AfterEach] with multiple image change triggers [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:509
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:11:54.263: INFO: namespace : e2e-test-cli-deployment-92vwf api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:12:34.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:105.552 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
with multiple image change triggers [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:507
should run a successful deployment with multiple triggers [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:513
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] HostPath
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:12:34.383: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:12:36.680: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-hostpath-tscb6
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should support existing single file subPath [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:167
Jul 9 19:12:37.452: INFO: No SSH Key for provider : 'GetSigner(...) not implemented for '
[AfterEach] [sig-storage] HostPath
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:12:37.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-tscb6" for this suite.
Jul 9 19:12:43.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:12:45.782: INFO: namespace: e2e-tests-hostpath-tscb6, resource: bindings, ignored listing per whitelist
Jul 9 19:12:48.312: INFO: namespace e2e-tests-hostpath-tscb6 deletion completed in 10.796412144s
S [SKIPPING] [13.929 seconds]
[sig-storage] HostPath
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should support existing single file subPath [Suite:openshift/conformance/parallel] [Suite:k8s] [It]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:167
Jul 9 19:12:37.452: No SSH Key for provider : 'GetSigner(...) not implemented for '
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
S
------------------------------
[k8s.io] Pods
should support remote command execution over websockets [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:470
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:12:10.508: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:12:12.627: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-pods-9775r
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:127
[It] should support remote command execution over websockets [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:470
Jul 9 19:12:13.409: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:12:17.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-9775r" for this suite.
Jul 9 19:12:56.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:12:59.715: INFO: namespace: e2e-tests-pods-9775r, resource: bindings, ignored listing per whitelist
Jul 9 19:13:00.520: INFO: namespace e2e-tests-pods-9775r deletion completed in 42.593555306s
• [SLOW TEST:50.012 seconds]
[k8s.io] Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should support remote command execution over websockets [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:470
------------------------------
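Note: the pods test above exercises the Kubernetes "exec" subresource over a websocket. A minimal client-go sketch of the same call, using the SPDY executor instead of a raw websocket (kubeconfig path, namespace, and pod name are illustrative):

package main

import (
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Build the URL for POST /api/v1/namespaces/{ns}/pods/{name}/exec.
	req := client.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("default").        // illustrative
		Name("pod-exec-websockets"). // illustrative
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Command: []string{"hostname"},
			Stdout:  true,
			Stderr:  true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	// Stream the command's output back to this process.
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: os.Stdout, Stderr: os.Stderr}); err != nil {
		panic(err)
	}
}

The e2e test drives the same URL over a websocket; only the streaming transport differs.
------------------------------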
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:419
Jul 9 19:13:00.521: INFO: This plugin does not isolate namespaces by default.
[AfterEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:00.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:00.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[Area:Networking] services
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10
when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:418
should prevent connections to pods in different namespaces on the same node via service IPs [Suite:openshift/conformance/parallel] [BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:40
Jul 9 19:13:00.521: This plugin does not isolate namespaces by default.
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
[Conformance][templates] templateinstance impersonation tests
should pass impersonation update tests [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:252
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][templates] templateinstance impersonation tests
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:12:48.316: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][templates] templateinstance impersonation tests
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:12:50.508: INFO: configPath is now "/tmp/e2e-test-templates-b5fkm-user.kubeconfig"
Jul 9 19:12:50.508: INFO: The user is now "e2e-test-templates-b5fkm-user"
Jul 9 19:12:50.508: INFO: Creating project "e2e-test-templates-b5fkm"
Jul 9 19:12:50.659: INFO: Waiting on permissions in project "e2e-test-templates-b5fkm" ...
[BeforeEach] [Conformance][templates] templateinstance impersonation tests
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:57
Jul 9 19:12:51.908: INFO: configPath is now "/tmp/e2e-test-templates-b5fkm-adminuser.kubeconfig"
Jul 9 19:12:52.180: INFO: configPath is now "/tmp/e2e-test-templates-b5fkm-impersonateuser.kubeconfig"
Jul 9 19:12:52.429: INFO: configPath is now "/tmp/e2e-test-templates-b5fkm-impersonatebygroupuser.kubeconfig"
Jul 9 19:12:52.677: INFO: configPath is now "/tmp/e2e-test-templates-b5fkm-edituser1.kubeconfig"
Jul 9 19:12:52.922: INFO: configPath is now "/tmp/e2e-test-templates-b5fkm-edituser2.kubeconfig"
Jul 9 19:12:53.178: INFO: configPath is now "/tmp/e2e-test-templates-b5fkm-viewuser.kubeconfig"
Jul 9 19:12:53.434: INFO: configPath is now "/tmp/e2e-test-templates-b5fkm-impersonatebygroupuser.kubeconfig"
[It] should pass impersonation update tests [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:252
STEP: testing as system:admin user
STEP: testing as e2e-test-templates-b5fkm-adminuser user
Jul 9 19:12:54.343: INFO: configPath is now "/tmp/e2e-test-templates-b5fkm-adminuser.kubeconfig"
[AfterEach] [Conformance][templates] templateinstance impersonation tests
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:12:54.783: INFO: namespace : e2e-test-templates-b5fkm api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][templates] templateinstance impersonation tests
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Dumping a list of prepulled images on each node...
Jul 9 19:13:00.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [Conformance][templates] templateinstance impersonation tests
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:221
• Failure [12.949 seconds]
[Conformance][templates] templateinstance impersonation tests
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:27
should pass impersonation update tests [Suite:openshift/conformance/parallel] [It]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:252
Expected an error to have occurred. Got:
<nil>: nil
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:322
------------------------------
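Note: the failure above means an update the test expected to be rejected went through; Gomega reports "Expected an error to have occurred. Got: <nil>" at templateinstance_impersonation.go:322. For context, a minimal sketch of the client-go impersonation mechanism these tests exercise (user and group names are illustrative):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	// All requests from this client carry Impersonate-* headers; RBAC on
	// the impersonated identity decides whether each call succeeds.
	config.Impersonate = rest.ImpersonationConfig{
		UserName: "e2e-test-templates-impersonateuser", // illustrative
		Groups:   []string{"system:authenticated"},
	}
	client := kubernetes.NewForConfigOrDie(config)
	_, err = client.Discovery().ServerVersion()
	fmt.Println("reachable as impersonated user:", err == nil)
}
------------------------------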
[sig-network] Networking Granular Checks: Pods
should function for intra-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-network] Networking
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:12:18.841: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:12:20.604: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-pod-network-test-7j6f8
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-7j6f8
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 9 19:12:21.211: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 9 19:12:43.788: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.2.2.242:8080/dial?request=hostName&protocol=http&host=10.2.2.238&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-7j6f8 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 9 19:12:43.789: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
Jul 9 19:12:44.203: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:12:44.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-7j6f8" for this suite.
Jul 9 19:13:06.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:13:08.366: INFO: namespace: e2e-tests-pod-network-test-7j6f8, resource: bindings, ignored listing per whitelist
Jul 9 19:13:09.624: INFO: namespace e2e-tests-pod-network-test-7j6f8 deletion completed in 25.380544553s
• [SLOW TEST:50.783 seconds]
[sig-network] Networking
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
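Note: the curl command logged above queries the netexec test container's /dial endpoint, which fans a request out to the target pod and reports what answered. A standalone sketch of the same probe; the JSON response shape is an assumption based on that test image's behavior, and the addresses are the ones from the log:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// dialResponse is the assumed shape of the /dial reply from the netexec image.
type dialResponse struct {
	Responses []string `json:"responses"`
}

func main() {
	// Test-container pod 10.2.2.242 dials target pod 10.2.2.238 over http.
	url := "http://10.2.2.242:8080/dial?request=hostName&protocol=http&host=10.2.2.238&port=8080&tries=1"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out dialResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	// The test passes once every target pod's hostname shows up in Responses.
	fmt.Println(out.Responses)
}

The udp variant later in this log is the same probe with protocol=udp against port 8081.
------------------------------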
S
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0644,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:13:01.265: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:13:03.565: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-7mdjl
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul 9 19:13:04.438: INFO: Waiting up to 5m0s for pod "pod-c740ba1e-83e6-11e8-881a-28d244b00276" in namespace "e2e-tests-emptydir-7mdjl" to be "success or failure"
Jul 9 19:13:04.481: INFO: Pod "pod-c740ba1e-83e6-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 43.665856ms
Jul 9 19:13:06.523: INFO: Pod "pod-c740ba1e-83e6-11e8-881a-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.085696002s
STEP: Saw pod success
Jul 9 19:13:06.523: INFO: Pod "pod-c740ba1e-83e6-11e8-881a-28d244b00276" satisfied condition "success or failure"
Jul 9 19:13:06.566: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-c740ba1e-83e6-11e8-881a-28d244b00276 container test-container: <nil>
STEP: delete the pod
Jul 9 19:13:06.673: INFO: Waiting for pod pod-c740ba1e-83e6-11e8-881a-28d244b00276 to disappear
Jul 9 19:13:06.717: INFO: Pod pod-c740ba1e-83e6-11e8-881a-28d244b00276 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:06.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-7mdjl" for this suite.
Jul 9 19:13:12.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:13:15.123: INFO: namespace: e2e-tests-emptydir-7mdjl, resource: bindings, ignored listing per whitelist
Jul 9 19:13:17.537: INFO: namespace e2e-tests-emptydir-7mdjl deletion completed in 10.770969057s
• [SLOW TEST:16.271 seconds]
[sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0644,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
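Note: for reference, a sketch of the pod shape behind this (root,0644,default) emptydir check: an emptyDir volume with the default (disk-backed) medium, mounted into a throwaway container that verifies a 0644 file mode. The real test uses the mounttest image; the busybox command below is an illustrative stand-in:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func emptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-default"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Empty Medium means the node's default storage, not tmpfs.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0644 /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
}

func main() { fmt.Println(emptyDirPod().Name) }
------------------------------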
S
------------------------------
[Feature:DeploymentConfig] deploymentconfigs paused [Conformance]
should disable actions on deployments [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:742
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:12:32.382: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:12:34.249: INFO: configPath is now "/tmp/e2e-test-cli-deployment-lzhmc-user.kubeconfig"
Jul 9 19:12:34.249: INFO: The user is now "e2e-test-cli-deployment-lzhmc-user"
Jul 9 19:12:34.249: INFO: Creating project "e2e-test-cli-deployment-lzhmc"
Jul 9 19:12:34.415: INFO: Waiting on permissions in project "e2e-test-cli-deployment-lzhmc" ...
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should disable actions on deployments [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:742
STEP: verifying that we cannot start a new deployment via oc deploy
Jul 9 19:12:34.793: INFO: Running 'oc deploy --config=/tmp/e2e-test-cli-deployment-lzhmc-user.kubeconfig --namespace=e2e-test-cli-deployment-lzhmc dc/paused --latest'
Jul 9 19:12:35.082: INFO: Error running &{/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc deploy --config=/tmp/e2e-test-cli-deployment-lzhmc-user.kubeconfig --namespace=e2e-test-cli-deployment-lzhmc dc/paused --latest] [] Command "deploy" is deprecated, Use the `rollout latest` and `rollout cancel` commands instead.
Flag --latest has been deprecated, use 'oc rollout latest' instead
error: cannot deploy a paused deployment config
Command "deploy" is deprecated, Use the `rollout latest` and `rollout cancel` commands instead.
Flag --latest has been deprecated, use 'oc rollout latest' instead
error: cannot deploy a paused deployment config
[] <nil> 0xc421067200 exit status 1 <nil> <nil> true [0xc420efe310 0xc420efe390 0xc420efe390] [0xc420efe310 0xc420efe390] [0xc420efe318 0xc420efe370] [0x916090 0x916190] 0xc420ea0600 <nil>}:
Command "deploy" is deprecated, Use the `rollout latest` and `rollout cancel` commands instead.
Flag --latest has been deprecated, use 'oc rollout latest' instead
error: cannot deploy a paused deployment config
STEP: verifying that we cannot start a new deployment via oc rollout
Jul 9 19:12:35.082: INFO: Running 'oc rollout --config=/tmp/e2e-test-cli-deployment-lzhmc-user.kubeconfig --namespace=e2e-test-cli-deployment-lzhmc latest dc/paused'
Jul 9 19:12:35.319: INFO: Error running &{/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc rollout --config=/tmp/e2e-test-cli-deployment-lzhmc-user.kubeconfig --namespace=e2e-test-cli-deployment-lzhmc latest dc/paused] [] error: cannot deploy a paused deployment config
error: cannot deploy a paused deployment config
[] <nil> 0xc42090aed0 exit status 1 <nil> <nil> true [0xc4219600c0 0xc4219600e8 0xc4219600e8] [0xc4219600c0 0xc4219600e8] [0xc4219600c8 0xc4219600e0] [0x916090 0x916190] 0xc42199b9e0 <nil>}:
error: cannot deploy a paused deployment config
STEP: verifying that we cannot cancel a deployment
Jul 9 19:12:35.319: INFO: Running 'oc rollout --config=/tmp/e2e-test-cli-deployment-lzhmc-user.kubeconfig --namespace=e2e-test-cli-deployment-lzhmc cancel dc/paused'
Jul 9 19:12:35.670: INFO: Error running &{/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc rollout --config=/tmp/e2e-test-cli-deployment-lzhmc-user.kubeconfig --namespace=e2e-test-cli-deployment-lzhmc cancel dc/paused] [] unable to cancel paused deployment e2e-test-cli-deployment-lzhmc/paused
there have been no replication controllers for e2e-test-cli-deployment-lzhmc/paused
unable to cancel paused deployment e2e-test-cli-deployment-lzhmc/paused
there have been no replication controllers for e2e-test-cli-deployment-lzhmc/paused
[] <nil> 0xc42090b3b0 exit status 1 <nil> <nil> true [0xc4219600f8 0xc421960128 0xc421960128] [0xc4219600f8 0xc421960128] [0xc421960100 0xc421960118] [0x916090 0x916190] 0xc42199baa0 <nil>}:
unable to cancel paused deployment e2e-test-cli-deployment-lzhmc/paused
there have been no replication controllers for e2e-test-cli-deployment-lzhmc/paused
STEP: verifying that we cannot retry a deployment
Jul 9 19:12:35.670: INFO: Running 'oc deploy --config=/tmp/e2e-test-cli-deployment-lzhmc-user.kubeconfig --namespace=e2e-test-cli-deployment-lzhmc dc/paused --retry'
Jul 9 19:12:35.890: INFO: Error running &{/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc deploy --config=/tmp/e2e-test-cli-deployment-lzhmc-user.kubeconfig --namespace=e2e-test-cli-deployment-lzhmc dc/paused --retry] [] Command "deploy" is deprecated, Use the `rollout latest` and `rollout cancel` commands instead.
error: cannot retry a paused deployment config
Command "deploy" is deprecated, Use the `rollout latest` and `rollout cancel` commands instead.
error: cannot retry a paused deployment config
[] <nil> 0xc42090b860 exit status 1 <nil> <nil> true [0xc421960130 0xc421960200 0xc421960200] [0xc421960130 0xc421960200] [0xc421960138 0xc4219601f0] [0x916090 0x916190] 0xc42199bb60 <nil>}:
Command "deploy" is deprecated, Use the `rollout latest` and `rollout cancel` commands instead.
error: cannot retry a paused deployment config
STEP: verifying that we cannot rollout retry a deployment
Jul 9 19:12:35.890: INFO: Running 'oc rollout --config=/tmp/e2e-test-cli-deployment-lzhmc-user.kubeconfig --namespace=e2e-test-cli-deployment-lzhmc retry dc/paused'
Jul 9 19:12:36.152: INFO: Error running &{/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc rollout --config=/tmp/e2e-test-cli-deployment-lzhmc-user.kubeconfig --namespace=e2e-test-cli-deployment-lzhmc retry dc/paused] [] error: unable to retry paused deployment config "paused"
error: unable to retry paused deployment config "paused"
[] <nil> 0xc42090bce0 exit status 1 <nil> <nil> true [0xc421960210 0xc421960280 0xc421960280] [0xc421960210 0xc421960280] [0xc421960220 0xc421960270] [0x916090 0x916190] 0xc42199bc20 <nil>}:
error: unable to retry paused deployment config "paused"
STEP: verifying that we cannot rollback a deployment
Jul 9 19:12:36.152: INFO: Running 'oc rollback --config=/tmp/e2e-test-cli-deployment-lzhmc-user.kubeconfig --namespace=e2e-test-cli-deployment-lzhmc dc/paused --to-version 1'
Jul 9 19:12:36.396: INFO: Error running &{/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc rollback --config=/tmp/e2e-test-cli-deployment-lzhmc-user.kubeconfig --namespace=e2e-test-cli-deployment-lzhmc dc/paused --to-version 1] [] error: cannot rollback a paused deployment config
error: cannot rollback a paused deployment config
[] <nil> 0xc4210ecb70 exit status 1 <nil> <nil> true [0xc421af61e8 0xc421af6220 0xc421af6220] [0xc421af61e8 0xc421af6220] [0xc421af61f8 0xc421af6210] [0x916090 0x916190] 0xc4215e0060 <nil>}:
error: cannot rollback a paused deployment config
Jul 9 19:12:41.132: INFO: Latest rollout of dc/paused (rc/paused-1) is complete.
STEP: making sure it updates observedGeneration after being paused
[AfterEach] paused [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:738
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:12:43.581: INFO: namespace : e2e-test-cli-deployment-lzhmc api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:23.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:51.262 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
paused [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:736
should disable actions on deployments [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:742
------------------------------
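Note: the rollout, retry, cancel, and rollback attempts above all fail by design because spec.paused is set on the deployment config. A sketch of reading that flag with the dynamic client; the GVR is apps.openshift.io/v1 deploymentconfigs, the namespace and name are illustrative, and the context-free Get signature matches client-go of this era:

package main

import (
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	gvr := schema.GroupVersionResource{Group: "apps.openshift.io", Version: "v1", Resource: "deploymentconfigs"}
	dc, err := client.Resource(gvr).Namespace("e2e-test-cli-deployment").Get("paused", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// While spec.paused is true, the deployment controller refuses new rollouts.
	paused, found, err := unstructured.NestedBool(dc.Object, "spec", "paused")
	if err != nil {
		panic(err)
	}
	fmt.Println("paused:", found && paused)
}
------------------------------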
SS
------------------------------
[k8s.io] Docker Containers
should use the image defaults if command and args are blank [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Docker Containers
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:13:09.625: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:13:11.246: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-containers-zdwzn
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test use defaults
Jul 9 19:13:11.887: INFO: Waiting up to 5m0s for pod "client-containers-cbb9f2ce-83e6-11e8-8401-28d244b00276" in namespace "e2e-tests-containers-zdwzn" to be "success or failure"
Jul 9 19:13:11.916: INFO: Pod "client-containers-cbb9f2ce-83e6-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 29.099543ms
Jul 9 19:13:13.990: INFO: Pod "client-containers-cbb9f2ce-83e6-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.103415267s
STEP: Saw pod success
Jul 9 19:13:13.990: INFO: Pod "client-containers-cbb9f2ce-83e6-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:13:14.019: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod client-containers-cbb9f2ce-83e6-11e8-8401-28d244b00276 container test-container: <nil>
STEP: delete the pod
Jul 9 19:13:14.135: INFO: Waiting for pod client-containers-cbb9f2ce-83e6-11e8-8401-28d244b00276 to disappear
Jul 9 19:13:14.172: INFO: Pod client-containers-cbb9f2ce-83e6-11e8-8401-28d244b00276 no longer exists
[AfterEach] [k8s.io] Docker Containers
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:14.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-zdwzn" for this suite.
Jul 9 19:13:20.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:13:23.147: INFO: namespace: e2e-tests-containers-zdwzn, resource: bindings, ignored listing per whitelist
Jul 9 19:13:23.659: INFO: namespace e2e-tests-containers-zdwzn deletion completed in 9.455658364s
• [SLOW TEST:14.034 seconds]
[k8s.io] Docker Containers
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should use the image defaults if command and args are blank [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
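Note: the Docker Containers test above verifies that a container with neither command nor args falls back to the image's ENTRYPOINT and CMD. In corev1 terms, a minimal sketch:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "test-container",
		Image: "busybox", // illustrative; the test pins a fixed test image
		// Command: nil -> the image's ENTRYPOINT is used
		// Args:    nil -> the image's CMD is used
		// Setting either field overrides only the corresponding half.
	}
	fmt.Printf("command=%v args=%v (both nil: image defaults apply)\n", c.Command, c.Args)
}
------------------------------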
[sig-storage] Secrets
optional updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:11:52.157: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:11:53.896: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-secrets-kqpsc
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
Jul 9 19:11:54.615: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
STEP: Creating secret with name s-test-opt-del-9db12ddf-83e6-11e8-8fe2-28d244b00276
STEP: Creating secret with name s-test-opt-upd-9db12e14-83e6-11e8-8fe2-28d244b00276
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-9db12ddf-83e6-11e8-8fe2-28d244b00276
STEP: Updating secret s-test-opt-upd-9db12e14-83e6-11e8-8fe2-28d244b00276
STEP: Creating secret with name s-test-opt-create-9db12e26-83e6-11e8-8fe2-28d244b00276
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:05.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-kqpsc" for this suite.
Jul 9 19:13:27.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:13:29.000: INFO: namespace: e2e-tests-secrets-kqpsc, resource: bindings, ignored listing per whitelist
Jul 9 19:13:31.240: INFO: namespace e2e-tests-secrets-kqpsc deletion completed in 25.968448725s
• [SLOW TEST:99.083 seconds]
[sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
optional updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
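Note: the secrets test above mounts optional secrets and watches the kubelet reflect creates, updates, and deletes into the volume. The relevant knob is the Optional field on the secret volume source; a minimal sketch with an illustrative name:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "creds",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "s-test-opt-create", // illustrative
				// Optional=true lets the pod start even if the secret is
				// absent; the kubelet populates the mount once it appears.
				Optional: &optional,
			},
		},
	}
	fmt.Println(vol.Name, *vol.VolumeSource.Secret.Optional)
}
------------------------------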
SS
------------------------------
[k8s.io] Variable Expansion
should allow composing env vars into new env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Variable Expansion
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:13:17.540: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:13:20.288: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-var-expansion-bzdhq
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test env composition
Jul 9 19:13:21.123: INFO: Waiting up to 5m0s for pod "var-expansion-d1387e25-83e6-11e8-881a-28d244b00276" in namespace "e2e-tests-var-expansion-bzdhq" to be "success or failure"
Jul 9 19:13:21.168: INFO: Pod "var-expansion-d1387e25-83e6-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 45.213081ms
Jul 9 19:13:23.210: INFO: Pod "var-expansion-d1387e25-83e6-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086529997s
Jul 9 19:13:25.269: INFO: Pod "var-expansion-d1387e25-83e6-11e8-881a-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.146297789s
STEP: Saw pod success
Jul 9 19:13:25.270: INFO: Pod "var-expansion-d1387e25-83e6-11e8-881a-28d244b00276" satisfied condition "success or failure"
Jul 9 19:13:25.313: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod var-expansion-d1387e25-83e6-11e8-881a-28d244b00276 container dapi-container: <nil>
STEP: delete the pod
Jul 9 19:13:25.417: INFO: Waiting for pod var-expansion-d1387e25-83e6-11e8-881a-28d244b00276 to disappear
Jul 9 19:13:25.458: INFO: Pod var-expansion-d1387e25-83e6-11e8-881a-28d244b00276 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:25.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-bzdhq" for this suite.
Jul 9 19:13:31.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:13:35.196: INFO: namespace: e2e-tests-var-expansion-bzdhq, resource: bindings, ignored listing per whitelist
Jul 9 19:13:36.455: INFO: namespace e2e-tests-var-expansion-bzdhq deletion completed in 10.949058678s
• [SLOW TEST:18.915 seconds]
[k8s.io] Variable Expansion
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should allow composing env vars into new env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
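Note: env composition, which the variable-expansion test above checks, works by referencing earlier entries in the same env list with $(NAME); the value is expanded before the container starts. A minimal sketch:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	env := []corev1.EnvVar{
		{Name: "FOO", Value: "foo-value"},
		// $(FOO) refers to the entry above; the container sees
		// COMPOSED=prefix-foo-value-suffix.
		{Name: "COMPOSED", Value: "prefix-$(FOO)-suffix"},
	}
	fmt.Println(env[1].Value)
}
------------------------------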
[sig-storage] Secrets
should be consumable from pods in volume with mappings [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:13:23.660: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:13:25.211: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-secrets-d6xjb
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating secret with name secret-test-map-d40d907f-83e6-11e8-8401-28d244b00276
STEP: Creating a pod to test consume secrets
Jul 9 19:13:25.896: INFO: Waiting up to 5m0s for pod "pod-secrets-d41333e3-83e6-11e8-8401-28d244b00276" in namespace "e2e-tests-secrets-d6xjb" to be "success or failure"
Jul 9 19:13:25.928: INFO: Pod "pod-secrets-d41333e3-83e6-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 31.358634ms
Jul 9 19:13:27.974: INFO: Pod "pod-secrets-d41333e3-83e6-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.077946634s
STEP: Saw pod success
Jul 9 19:13:27.974: INFO: Pod "pod-secrets-d41333e3-83e6-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:13:28.002: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-secrets-d41333e3-83e6-11e8-8401-28d244b00276 container secret-volume-test: <nil>
STEP: delete the pod
Jul 9 19:13:28.077: INFO: Waiting for pod pod-secrets-d41333e3-83e6-11e8-8401-28d244b00276 to disappear
Jul 9 19:13:28.104: INFO: Pod pod-secrets-d41333e3-83e6-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:28.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-d6xjb" for this suite.
Jul 9 19:13:34.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:13:37.397: INFO: namespace: e2e-tests-secrets-d6xjb, resource: bindings, ignored listing per whitelist
Jul 9 19:13:37.621: INFO: namespace e2e-tests-secrets-d6xjb deletion completed in 9.480839271s
• [SLOW TEST:13.961 seconds]
[sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume with mappings [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
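Note: the "with mappings" variant above remaps secret keys to custom paths inside the mount via Items, and can set the file mode via DefaultMode. A sketch with illustrative names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0444)
	src := corev1.SecretVolumeSource{
		SecretName:  "secret-test-map", // illustrative
		DefaultMode: &mode,
		Items: []corev1.KeyToPath{
			// Key "data-1" appears as new-path-data-1 under the mount point
			// instead of the default file named after the key.
			{Key: "data-1", Path: "new-path-data-1"},
		},
	}
	fmt.Println(src.Items[0].Path)
}
------------------------------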
[sig-api-machinery] Downward API
should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-api-machinery] Downward API
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:13:23.646: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:13:25.194: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-jvv2t
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward api env vars
Jul 9 19:13:25.945: INFO: Waiting up to 5m0s for pod "downward-api-d41a6e73-83e6-11e8-bd2e-28d244b00276" in namespace "e2e-tests-downward-api-jvv2t" to be "success or failure"
Jul 9 19:13:25.974: INFO: Pod "downward-api-d41a6e73-83e6-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 29.397911ms
Jul 9 19:13:28.002: INFO: Pod "downward-api-d41a6e73-83e6-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057000308s
Jul 9 19:13:30.036: INFO: Pod "downward-api-d41a6e73-83e6-11e8-bd2e-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091581003s
STEP: Saw pod success
Jul 9 19:13:30.036: INFO: Pod "downward-api-d41a6e73-83e6-11e8-bd2e-28d244b00276" satisfied condition "success or failure"
Jul 9 19:13:30.068: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downward-api-d41a6e73-83e6-11e8-bd2e-28d244b00276 container dapi-container: <nil>
STEP: delete the pod
Jul 9 19:13:30.149: INFO: Waiting for pod downward-api-d41a6e73-83e6-11e8-bd2e-28d244b00276 to disappear
Jul 9 19:13:30.177: INFO: Pod downward-api-d41a6e73-83e6-11e8-bd2e-28d244b00276 no longer exists
[AfterEach] [sig-api-machinery] Downward API
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:30.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jvv2t" for this suite.
Jul 9 19:13:36.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:13:39.008: INFO: namespace: e2e-tests-downward-api-jvv2t, resource: bindings, ignored listing per whitelist
Jul 9 19:13:39.763: INFO: namespace e2e-tests-downward-api-jvv2t deletion completed in 9.552506513s
• [SLOW TEST:16.117 seconds]
[sig-api-machinery] Downward API
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downward_api.go:37
should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
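Note: the downward API test above surfaces container limits and requests as env vars through ResourceFieldRef; Divisor controls the unit the value is reported in. A minimal sketch:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	env := []corev1.EnvVar{
		{
			Name: "CPU_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{
					Resource: "limits.cpu",
					Divisor:  resource.MustParse("1m"), // report in millicores
				},
			},
		},
		{
			Name: "MEMORY_REQUEST",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{
					Resource: "requests.memory", // bytes, with the default divisor of 1
				},
			},
		},
	}
	fmt.Println(env[0].Name, env[1].Name)
}
------------------------------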
[sig-storage] Projected
should project all components that make up the projection API [Projection] [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:13:31.244: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:13:32.967: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-c2mgg
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should project all components that make up the projection API [Projection] [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating configMap with name configmap-projected-all-test-volume-d8b8fea4-83e6-11e8-8fe2-28d244b00276
STEP: Creating secret with name secret-projected-all-test-volume-d8b8fe90-83e6-11e8-8fe2-28d244b00276
STEP: Creating a pod to test Check all projections for projected volume plugin
Jul 9 19:13:33.762: INFO: Waiting up to 5m0s for pod "projected-volume-d8b8fe5a-83e6-11e8-8fe2-28d244b00276" in namespace "e2e-tests-projected-c2mgg" to be "success or failure"
Jul 9 19:13:33.794: INFO: Pod "projected-volume-d8b8fe5a-83e6-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 31.255872ms
Jul 9 19:13:35.830: INFO: Pod "projected-volume-d8b8fe5a-83e6-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068069492s
Jul 9 19:13:37.861: INFO: Pod "projected-volume-d8b8fe5a-83e6-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099079179s
STEP: Saw pod success
Jul 9 19:13:37.862: INFO: Pod "projected-volume-d8b8fe5a-83e6-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:13:37.894: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod projected-volume-d8b8fe5a-83e6-11e8-8fe2-28d244b00276 container projected-all-volume-test: <nil>
STEP: delete the pod
Jul 9 19:13:37.981: INFO: Waiting for pod projected-volume-d8b8fe5a-83e6-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:13:38.012: INFO: Pod projected-volume-d8b8fe5a-83e6-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:38.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-c2mgg" for this suite.
Jul 9 19:13:44.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:13:47.657: INFO: namespace: e2e-tests-projected-c2mgg, resource: bindings, ignored listing per whitelist
Jul 9 19:13:47.785: INFO: namespace e2e-tests-projected-c2mgg deletion completed in 9.733539364s
• [SLOW TEST:16.541 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
should project all components that make up the projection API [Projection] [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
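Note: the projected-volume test above combines secret, configMap, and downwardAPI sources under a single mount. A sketch of that volume shape with illustrative object names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-all",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all"}, // illustrative
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all"}, // illustrative
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	fmt.Println(len(vol.VolumeSource.Projected.Sources), "sources under one mount")
}
------------------------------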
[sig-storage] EmptyDir volumes
should support (non-root,0777,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:13:36.458: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:13:38.586: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-8rb45
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul 9 19:13:39.444: INFO: Waiting up to 5m0s for pod "pod-dc233d27-83e6-11e8-881a-28d244b00276" in namespace "e2e-tests-emptydir-8rb45" to be "success or failure"
Jul 9 19:13:39.505: INFO: Pod "pod-dc233d27-83e6-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 60.863579ms
Jul 9 19:13:41.548: INFO: Pod "pod-dc233d27-83e6-11e8-881a-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.103172176s
STEP: Saw pod success
Jul 9 19:13:41.548: INFO: Pod "pod-dc233d27-83e6-11e8-881a-28d244b00276" satisfied condition "success or failure"
Jul 9 19:13:41.593: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-dc233d27-83e6-11e8-881a-28d244b00276 container test-container: <nil>
STEP: delete the pod
Jul 9 19:13:41.685: INFO: Waiting for pod pod-dc233d27-83e6-11e8-881a-28d244b00276 to disappear
Jul 9 19:13:41.727: INFO: Pod pod-dc233d27-83e6-11e8-881a-28d244b00276 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:41.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-8rb45" for this suite.
Jul 9 19:13:47.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:13:52.792: INFO: namespace: e2e-tests-emptydir-8rb45, resource: bindings, ignored listing per whitelist
Jul 9 19:13:52.924: INFO: namespace e2e-tests-emptydir-8rb45 deletion completed in 11.151615087s
• [SLOW TEST:16.467 seconds]
[sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0777,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
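Note: compared with the default-medium sketch earlier, the (non-root,0777,tmpfs) variant above differs in two fields: a memory-backed medium and a non-root UID (the UID below is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	uid := int64(1001) // any non-root UID; illustrative
	spec := corev1.PodSpec{
		SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			VolumeSource: corev1.VolumeSource{
				// Medium "Memory" backs the volume with tmpfs on the node.
				EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
			},
		}},
	}
	fmt.Println(*spec.SecurityContext.RunAsUser, spec.Volumes[0].VolumeSource.EmptyDir.Medium)
}
------------------------------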
SS
------------------------------
[sig-network] Networking Granular Checks: Pods
should function for intra-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-network] Networking
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:13:00.523: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:13:02.562: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-pod-network-test-gwdcd
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-gwdcd
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 9 19:13:03.346: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 9 19:13:25.983: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.2.2.248:8080/dial?request=hostName&protocol=udp&host=10.2.2.243&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-gwdcd PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 9 19:13:25.983: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
Jul 9 19:13:26.338: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:26.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-gwdcd" for this suite.
Jul 9 19:13:48.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:13:52.820: INFO: namespace: e2e-tests-pod-network-test-gwdcd, resource: bindings, ignored listing per whitelist
Jul 9 19:13:53.066: INFO: namespace e2e-tests-pod-network-test-gwdcd deletion completed in 26.668327647s
• [SLOW TEST:52.543 seconds]
[sig-network] Networking
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SS
------------------------------
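The intra-pod UDP check above works by exec'ing a curl against the test container's /dial endpoint, which in turn sends UDP probes to the netserver pod and reports which hostnames answered. A standalone sketch of that probe in Go, assuming the netexec-style JSON response shape ({"responses": [...]}); the two pod IPs are placeholders copied from this run:

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
    "net/url"
    "time"
)

// dialResponse mirrors the (assumed) JSON returned by the /dial endpoint.
type dialResponse struct {
    Responses []string `json:"responses"`
}

func main() {
    q := url.Values{}
    q.Set("request", "hostName") // ask each peer for its hostname
    q.Set("protocol", "udp")
    q.Set("host", "10.2.2.243") // netserver pod IP (placeholder)
    q.Set("port", "8081")
    q.Set("tries", "1")

    client := &http.Client{Timeout: 5 * time.Second}
    resp, err := client.Get("http://10.2.2.248:8080/dial?" + q.Encode())
    if err != nil {
        fmt.Println("dial failed:", err)
        return
    }
    defer resp.Body.Close()

    var dr dialResponse
    if err := json.NewDecoder(resp.Body).Decode(&dr); err != nil {
        fmt.Println("decode failed:", err)
        return
    }
    // The e2e test loops until every expected netserver hostname shows up here;
    // "Waiting for endpoints: map[]" above means the expected set drained to empty.
    fmt.Println("endpoints that answered:", dr.Responses)
}

------------------------------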
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that implements NetworkPolicy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:431
Jul 9 19:13:53.079: INFO: This plugin does not implement NetworkPolicy.
[AfterEach] when using a plugin that implements NetworkPolicy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:53.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
NetworkPolicy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:48
when using a plugin that implements NetworkPolicy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:430
should support allow-all policy [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] [BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:245
Jul 9 19:13:53.079: This plugin does not implement NetworkPolicy.
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
S
------------------------------
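The spec is skipped here because the cluster's network plugin does not implement NetworkPolicy, but the "allow-all policy" it refers to is conventionally a policy with an empty podSelector (select every pod) and one empty ingress rule (admit every source). A sketch of that object; namespace and name are placeholders:

package main

import (
    "fmt"

    networkingv1 "k8s.io/api/networking/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    policy := &networkingv1.NetworkPolicy{
        ObjectMeta: metav1.ObjectMeta{Name: "allow-all", Namespace: "demo"},
        Spec: networkingv1.NetworkPolicySpec{
            // Empty selector: the policy applies to every pod in the namespace.
            PodSelector: metav1.LabelSelector{},
            // A single empty rule allows ingress from all peers on all ports.
            Ingress: []networkingv1.NetworkPolicyIngressRule{{}},
        },
    }
    fmt.Printf("policy %q: select all pods, allow all ingress\n", policy.Name)
}

------------------------------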
[Feature:DeploymentConfig] deploymentconfigs should respect image stream tag reference policy [Conformance]
resolve the image pull spec [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:272
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:13:47.786: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:13:49.617: INFO: configPath is now "/tmp/e2e-test-cli-deployment-vz7zx-user.kubeconfig"
Jul 9 19:13:49.618: INFO: The user is now "e2e-test-cli-deployment-vz7zx-user"
Jul 9 19:13:49.618: INFO: Creating project "e2e-test-cli-deployment-vz7zx"
Jul 9 19:13:49.769: INFO: Waiting on permissions in project "e2e-test-cli-deployment-vz7zx" ...
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] resolve the image pull spec [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:272
Jul 9 19:13:49.858: INFO: Running 'oc create --config=/tmp/e2e-test-cli-deployment-vz7zx-user.kubeconfig --namespace=e2e-test-cli-deployment-vz7zx -f /tmp/fixture-testdata-dir333495585/test/extended/testdata/deployments/deployment-image-resolution-is.yaml'
imagestream.image.openshift.io "deployment-image-resolution" created
[AfterEach] should respect image stream tag reference policy [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:268
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:13:54.625: INFO: namespace : e2e-test-cli-deployment-vz7zx api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:14:00.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:12.916 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
should respect image stream tag reference policy [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:266
resolve the image pull spec [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:272
------------------------------
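The image-resolution spec above turns on the tag's referencePolicy: with type "Source" the resolved pull spec points at the upstream image, while "Local" rewrites it to pull through the integrated registry. A sketch of an ImageStream carrying both variants, using the openshift/api image types; stream and tag names here are placeholders rather than the fixture's contents:

package main

import (
    "fmt"

    imagev1 "github.com/openshift/api/image/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    stream := &imagev1.ImageStream{
        ObjectMeta: metav1.ObjectMeta{Name: "deployment-image-resolution"},
        Spec: imagev1.ImageStreamSpec{
            Tags: []imagev1.TagReference{
                {
                    Name: "direct",
                    From: &corev1.ObjectReference{Kind: "DockerImage", Name: "docker.io/library/centos:latest"},
                    // Source: consumers pull straight from the upstream location.
                    ReferencePolicy: imagev1.TagReferencePolicy{Type: imagev1.SourceTagReferencePolicy},
                },
                {
                    Name: "pullthrough",
                    From: &corev1.ObjectReference{Kind: "DockerImage", Name: "docker.io/library/centos:latest"},
                    // Local: the pull spec is rewritten to the integrated registry.
                    ReferencePolicy: imagev1.TagReferencePolicy{Type: imagev1.LocalTagReferencePolicy},
                },
            },
        },
    }
    for _, t := range stream.Spec.Tags {
        fmt.Println(t.Name, "=>", t.ReferencePolicy.Type)
    }
}

------------------------------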
[k8s.io] Pods
should be submitted and removed [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:13:39.764: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:13:41.217: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-pods-jq7dk
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:127
[It] should be submitted and removed [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jul 9 19:13:44.065: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-dd98635e-83e6-11e8-bd2e-28d244b00276", GenerateName:"", Namespace:"e2e-tests-pods-jq7dk", SelfLink:"/api/v1/namespaces/e2e-tests-pods-jq7dk/pods/pod-submit-remove-dd98635e-83e6-11e8-bd2e-28d244b00276", UID:"ddae555c-83e6-11e8-84c6-0af96768d57e", ResourceVersion:"73676", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63666785621, loc:(*time.Location)(0x6b11480)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"826854293", "name":"foo"}, Annotations:map[string]string{"openshift.io/scc":"anyuid"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-ttjmn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc421245a00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"k8s.gcr.io/nginx-slim-amd64:0.20", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ttjmn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc421245a80), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc421034538), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"ip-10-0-130-54.us-west-2.compute.internal", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc421245d40), ImagePullSecrets:[]v1.LocalObjectReference{v1.LocalObjectReference{Name:"default-dockercfg-nrch4"}}, Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63666785621, loc:(*time.Location)(0x6b11480)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63666785623, loc:(*time.Location)(0x6b11480)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63666785621, loc:(*time.Location)(0x6b11480)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.0.130.54", PodIP:"10.2.2.9", StartTime:(*v1.Time)(0xc421962340), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc421962360), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"k8s.gcr.io/nginx-slim-amd64:0.20", ImageID:"docker-pullable://k8s.gcr.io/nginx-slim-amd64@sha256:6654db6d4028756062edac466454ee5c9cf9b20ef79e35a81e3c840031eb1e2b", ContainerID:"docker://a721f4ab308b2de0f53691e6f5d4ef5382b19d4044df015d2b6a98b0ecb64ed2"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:53.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-jq7dk" for this suite.
Jul 9 19:13:59.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:14:02.828: INFO: namespace: e2e-tests-pods-jq7dk, resource: bindings, ignored listing per whitelist
Jul 9 19:14:03.321: INFO: namespace e2e-tests-pods-jq7dk deletion completed in 10.074916195s
• [SLOW TEST:23.557 seconds]
[k8s.io] Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should be submitted and removed [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
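The submit/remove spec above is careful to prove creation and deletion were observed through a watch rather than by polling, which is why the log shows "setting up watch" before the pod is ever submitted. A compressed sketch of that pattern with client-go (current client-go signatures; the kubeconfig path, namespace, and label are assumptions):

package main

import (
    "context"
    "fmt"
    "os"
    "path/filepath"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/watch"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config") // assumed path
    cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    // Watch pods carrying the label the test stamps on its pod ("name=foo" above).
    w, err := client.CoreV1().Pods("default").Watch(context.TODO(),
        metav1.ListOptions{LabelSelector: "name=foo"})
    if err != nil {
        panic(err)
    }
    defer w.Stop()

    for ev := range w.ResultChan() {
        switch ev.Type {
        case watch.Added:
            fmt.Println("pod creation observed")
        case watch.Deleted:
            fmt.Println("pod deletion observed")
            return
        }
    }
}

------------------------------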
[sig-storage] EmptyDir volumes when FSGroup is specified
new files should be created with FSGroup ownership when container is non-root [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:49
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:14:00.704: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:14:02.510: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-dt8nh
STEP: Waiting for a default service account to be provisioned in namespace
[It] new files should be created with FSGroup ownership when container is non-root [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:49
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 9 19:14:03.362: INFO: Waiting up to 5m0s for pod "pod-ea682ef8-83e6-11e8-8fe2-28d244b00276" in namespace "e2e-tests-emptydir-dt8nh" to be "success or failure"
Jul 9 19:14:03.404: INFO: Pod "pod-ea682ef8-83e6-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 41.886007ms
Jul 9 19:14:05.443: INFO: Pod "pod-ea682ef8-83e6-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.081038301s
STEP: Saw pod success
Jul 9 19:14:05.443: INFO: Pod "pod-ea682ef8-83e6-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:14:05.488: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-ea682ef8-83e6-11e8-8fe2-28d244b00276 container test-container: <nil>
STEP: delete the pod
Jul 9 19:14:05.567: INFO: Waiting for pod pod-ea682ef8-83e6-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:14:05.599: INFO: Pod pod-ea682ef8-83e6-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:14:05.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-dt8nh" for this suite.
Jul 9 19:14:11.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:14:14.948: INFO: namespace: e2e-tests-emptydir-dt8nh, resource: bindings, ignored listing per whitelist
Jul 9 19:14:15.767: INFO: namespace e2e-tests-emptydir-dt8nh deletion completed in 10.11986951s
• [SLOW TEST:15.063 seconds]
[sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
when FSGroup is specified
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:44
new files should be created with FSGroup ownership when container is non-root [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:49
------------------------------
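The FSGroup variant above asserts that files created in the emptyDir come out group-owned by the pod's fsGroup even though the container runs as a non-root user; fsGroup is set at the pod level and applied to volumes that support group ownership. A sketch of the relevant securityContext wiring, with placeholder IDs and an assumed image tag:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    uid, fsGroup := int64(1234), int64(123) // placeholder non-root UID and GID
    spec := corev1.PodSpec{
        // Pod-level fsGroup: the kubelet applies this group to supporting volumes.
        SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
        Volumes: []corev1.Volume{{
            Name: "test-volume",
            VolumeSource: corev1.VolumeSource{
                EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
            },
        }},
        Containers: []corev1.Container{{
            Name:            "test-container",
            Image:           "k8s.gcr.io/mounttest:0.8", // assumed image tag
            SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
            VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
        }},
    }
    fmt.Println("new files under /test-volume should be group", *spec.SecurityContext.FSGroup)
}

------------------------------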
[Feature:AnnotationTrigger] Annotation trigger
reconciles after the image is overwritten [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/images/trigger/annotation.go:29
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:AnnotationTrigger] Annotation trigger
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:13:37.623: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:AnnotationTrigger] Annotation trigger
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:13:39.183: INFO: configPath is now "/tmp/e2e-test-cli-deployment-2hv6q-user.kubeconfig"
Jul 9 19:13:39.183: INFO: The user is now "e2e-test-cli-deployment-2hv6q-user"
Jul 9 19:13:39.183: INFO: Creating project "e2e-test-cli-deployment-2hv6q"
Jul 9 19:13:39.325: INFO: Waiting on permissions in project "e2e-test-cli-deployment-2hv6q" ...
[It] reconciles after the image is overwritten [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/images/trigger/annotation.go:29
STEP: creating a Deployment
STEP: tagging the docker.io/library/centos:latest as test:v1 image to create ImageStream
Jul 9 19:13:39.422: INFO: Running 'oc tag --config=/tmp/e2e-test-cli-deployment-2hv6q-user.kubeconfig --namespace=e2e-test-cli-deployment-2hv6q docker.io/library/centos:latest test:v1'
Jul 9 19:13:39.670: INFO: Tag test:v1 set to docker.io/library/centos:latest.
STEP: waiting for the initial image to be replaced from ImageStream
STEP: setting Deployment image repeatedly to ' ' to fight with annotation trigger
STEP: waiting for the image to be injected by annotation trigger
[AfterEach] [Feature:AnnotationTrigger] Annotation trigger
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:13:43.532: INFO: namespace : e2e-test-cli-deployment-2hv6q api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:AnnotationTrigger] Annotation trigger
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:14:25.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:47.973 seconds]
[Feature:AnnotationTrigger] Annotation trigger
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/images/trigger/annotation.go:20
reconciles after the image is overwritten [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/images/trigger/annotation.go:29
------------------------------
SS
------------------------------
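The annotation trigger above hangs off the image.openshift.io/triggers annotation on the Deployment: a JSON array whose fieldPath names the container image field the trigger controller should overwrite whenever the referenced ImageStreamTag moves, which is why blanking the image "to fight with annotation trigger" keeps getting reconciled back. A sketch of that annotation's shape; the container name is a placeholder:

package main

import (
    "encoding/json"
    "fmt"
)

// trigger mirrors the JSON shape of one entry in image.openshift.io/triggers.
type trigger struct {
    From      map[string]string `json:"from"`
    FieldPath string            `json:"fieldPath"`
}

func main() {
    entries := []trigger{{
        From:      map[string]string{"kind": "ImageStreamTag", "name": "test:v1"},
        FieldPath: `spec.template.spec.containers[?(@.name=="test")].image`,
    }}
    raw, _ := json.Marshal(entries)
    // The controller keeps re-injecting the resolved image into this field path,
    // so repeated manual overwrites of the image are reverted.
    fmt.Printf("metadata.annotations[%q] = %s\n", "image.openshift.io/triggers", raw)
}

------------------------------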
[sig-storage] Downward API volume
should provide podname as non-root with fsgroup and defaultMode [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:100
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:14:25.600: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:14:27.153: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-hhn2q
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:38
[It] should provide podname as non-root with fsgroup and defaultMode [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:100
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:14:27.731: INFO: Waiting up to 5m0s for pod "metadata-volume-f8eebb12-83e6-11e8-8401-28d244b00276" in namespace "e2e-tests-downward-api-hhn2q" to be "success or failure"
Jul 9 19:14:27.776: INFO: Pod "metadata-volume-f8eebb12-83e6-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 45.181634ms
Jul 9 19:14:29.816: INFO: Pod "metadata-volume-f8eebb12-83e6-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085654956s
Jul 9 19:14:31.847: INFO: Pod "metadata-volume-f8eebb12-83e6-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11590508s
STEP: Saw pod success
Jul 9 19:14:31.847: INFO: Pod "metadata-volume-f8eebb12-83e6-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:14:31.881: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod metadata-volume-f8eebb12-83e6-11e8-8401-28d244b00276 container client-container: <nil>
STEP: delete the pod
Jul 9 19:14:31.949: INFO: Waiting for pod metadata-volume-f8eebb12-83e6-11e8-8401-28d244b00276 to disappear
Jul 9 19:14:31.981: INFO: Pod metadata-volume-f8eebb12-83e6-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:14:31.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hhn2q" for this suite.
Jul 9 19:14:38.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:14:39.634: INFO: namespace: e2e-tests-downward-api-hhn2q, resource: bindings, ignored listing per whitelist
Jul 9 19:14:41.402: INFO: namespace e2e-tests-downward-api-hhn2q deletion completed in 9.382069444s
• [SLOW TEST:15.803 seconds]
[sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:33
should provide podname as non-root with fsgroup and defaultMode [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:100
------------------------------
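The downward API spec above projects the pod's own name into a file through a volume, with fsGroup and a restrictive defaultMode applied so the non-root container can still read it. A sketch of that volume source; the mode value is a placeholder for whatever defaultMode the framework picked:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    mode := int32(0440) // placeholder defaultMode: owner and fsGroup may read
    vol := corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                DefaultMode: &mode,
                Items: []corev1.DownwardAPIVolumeFile{{
                    // The test container reads this file and the framework
                    // matches the pod name in its output.
                    Path:     "podname",
                    FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                }},
            },
        },
    }
    fmt.Printf("volume %q projects metadata.name with mode %o\n", vol.Name, mode)
}

------------------------------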
[sig-network] Networking Granular Checks: Pods
should function for node-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-network] Networking
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:13:53.082: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:13:55.129: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-pod-network-test-psv7q
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-psv7q
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 9 19:13:55.838: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 9 19:14:16.629: INFO: ExecWithOptions {Command:[/bin/sh -c timeout -t 15 curl -g -q -s --connect-timeout 1 http://10.2.2.11:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-psv7q PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 9 19:14:16.629: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
Jul 9 19:14:16.952: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:14:16.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-psv7q" for this suite.
Jul 9 19:14:39.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:14:41.446: INFO: namespace: e2e-tests-pod-network-test-psv7q, resource: bindings, ignored listing per whitelist
Jul 9 19:14:43.230: INFO: namespace e2e-tests-pod-network-test-psv7q deletion completed in 26.238510831s
• [SLOW TEST:50.148 seconds]
[sig-network] Networking
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SS
------------------------------
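The node-to-pod HTTP check above is just a short-timeout curl from a host-network pod to each netserver's /hostName endpoint, as the ExecWithOptions line shows. An equivalent probe in Go; the pod IP is a placeholder copied from this run:

package main

import (
    "fmt"
    "io"
    "net/http"
    "strings"
    "time"
)

func main() {
    // Mirrors the `curl --connect-timeout 1` wrapped in `timeout -t 15` above.
    client := &http.Client{Timeout: time.Second}
    resp, err := client.Get("http://10.2.2.11:8080/hostName")
    if err != nil {
        fmt.Println("probe failed:", err)
        return
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    // The spec passes once every expected netserver hostname has answered,
    // hence "Found all expected endpoints: [netserver-0]".
    fmt.Println("hostName:", strings.TrimSpace(string(body)))
}

------------------------------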
[Feature:DeploymentConfig] deploymentconfigs with multiple image change triggers [Conformance]
should run a successful deployment with a trigger used by different containers [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:522
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:14:15.770: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:14:17.604: INFO: configPath is now "/tmp/e2e-test-cli-deployment-42zb4-user.kubeconfig"
Jul 9 19:14:17.604: INFO: The user is now "e2e-test-cli-deployment-42zb4-user"
Jul 9 19:14:17.604: INFO: Creating project "e2e-test-cli-deployment-42zb4"
Jul 9 19:14:17.731: INFO: Waiting on permissions in project "e2e-test-cli-deployment-42zb4" ...
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should run a successful deployment with a trigger used by different containers [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:522
STEP: verifying the deployment is marked complete
Jul 9 19:14:25.461: INFO: Latest rollout of dc/example (rc/example-1) is complete.
[AfterEach] with multiple image change triggers [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:509
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:14:27.527: INFO: namespace : e2e-test-cli-deployment-42zb4 api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:15:07.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:51.836 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
with multiple image change triggers [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:507
should run a successful deployment with a trigger used by different containers [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:522
------------------------------
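The multi-trigger deploymentconfig above exercises a single ImageChange trigger whose containerNames lists more than one container, so one ImageStreamTag update resolves the image for both. A sketch of that trigger using the openshift/api apps types; the container and tag names are placeholders, not the fixture's:

package main

import (
    "fmt"

    appsv1 "github.com/openshift/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
)

func main() {
    trigger := appsv1.DeploymentTriggerPolicy{
        Type: appsv1.DeploymentTriggerOnImageChange,
        ImageChangeParams: &appsv1.DeploymentTriggerImageChangeParams{
            Automatic: true,
            // One trigger feeding the resolved image to two different containers.
            ContainerNames: []string{"first", "second"},
            From:           corev1.ObjectReference{Kind: "ImageStreamTag", Name: "test:v1"},
        },
    }
    fmt.Println(trigger.Type, "->", trigger.ImageChangeParams.ContainerNames)
}

------------------------------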
[Feature:Builds][Conformance] oc new-app
should succeed with a --name of 58 characters [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:49
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds][Conformance] oc new-app
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:14:03.322: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds][Conformance] oc new-app
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:14:05.052: INFO: configPath is now "/tmp/e2e-test-new-app-xn8nh-user.kubeconfig"
Jul 9 19:14:05.052: INFO: The user is now "e2e-test-new-app-xn8nh-user"
Jul 9 19:14:05.052: INFO: Creating project "e2e-test-new-app-xn8nh"
Jul 9 19:14:05.274: INFO: Waiting on permissions in project "e2e-test-new-app-xn8nh" ...
[BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:26
Jul 9 19:14:05.347: INFO:
docker info output:
Containers: 4
Running: 0
Paused: 0
Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 20
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:30
STEP: waiting for builder service account
STEP: waiting for openshift namespace imagestreams
Jul 9 19:14:05.505: INFO: Running scan #0
Jul 9 19:14:05.505: INFO: Checking language ruby
Jul 9 19:14:05.552: INFO: Checking tag 2.0
Jul 9 19:14:05.552: INFO: Checking tag 2.2
Jul 9 19:14:05.552: INFO: Checking tag 2.3
Jul 9 19:14:05.552: INFO: Checking tag 2.4
Jul 9 19:14:05.552: INFO: Checking tag 2.5
Jul 9 19:14:05.552: INFO: Checking tag latest
Jul 9 19:14:05.552: INFO: Checking language nodejs
Jul 9 19:14:05.596: INFO: Checking tag 0.10
Jul 9 19:14:05.596: INFO: Checking tag 4
Jul 9 19:14:05.596: INFO: Checking tag 6
Jul 9 19:14:05.596: INFO: Checking tag 8
Jul 9 19:14:05.596: INFO: Checking tag latest
Jul 9 19:14:05.596: INFO: Checking language perl
Jul 9 19:14:05.647: INFO: Checking tag 5.20
Jul 9 19:14:05.647: INFO: Checking tag 5.24
Jul 9 19:14:05.647: INFO: Checking tag latest
Jul 9 19:14:05.647: INFO: Checking tag 5.16
Jul 9 19:14:05.647: INFO: Checking language php
Jul 9 19:14:05.689: INFO: Checking tag 7.1
Jul 9 19:14:05.689: INFO: Checking tag latest
Jul 9 19:14:05.689: INFO: Checking tag 5.5
Jul 9 19:14:05.689: INFO: Checking tag 5.6
Jul 9 19:14:05.689: INFO: Checking tag 7.0
Jul 9 19:14:05.689: INFO: Checking language python
Jul 9 19:14:05.740: INFO: Checking tag latest
Jul 9 19:14:05.740: INFO: Checking tag 2.7
Jul 9 19:14:05.740: INFO: Checking tag 3.3
Jul 9 19:14:05.740: INFO: Checking tag 3.4
Jul 9 19:14:05.740: INFO: Checking tag 3.5
Jul 9 19:14:05.740: INFO: Checking tag 3.6
Jul 9 19:14:05.740: INFO: Checking language wildfly
Jul 9 19:14:05.783: INFO: Checking tag latest
Jul 9 19:14:05.783: INFO: Checking tag 10.0
Jul 9 19:14:05.783: INFO: Checking tag 10.1
Jul 9 19:14:05.783: INFO: Checking tag 11.0
Jul 9 19:14:05.783: INFO: Checking tag 12.0
Jul 9 19:14:05.783: INFO: Checking tag 8.1
Jul 9 19:14:05.783: INFO: Checking tag 9.0
Jul 9 19:14:05.783: INFO: Checking language mysql
Jul 9 19:14:05.830: INFO: Checking tag 5.5
Jul 9 19:14:05.830: INFO: Checking tag 5.6
Jul 9 19:14:05.830: INFO: Checking tag 5.7
Jul 9 19:14:05.830: INFO: Checking tag latest
Jul 9 19:14:05.830: INFO: Checking language postgresql
Jul 9 19:14:05.879: INFO: Checking tag 9.4
Jul 9 19:14:05.879: INFO: Checking tag 9.5
Jul 9 19:14:05.879: INFO: Checking tag 9.6
Jul 9 19:14:05.879: INFO: Checking tag latest
Jul 9 19:14:05.879: INFO: Checking tag 9.2
Jul 9 19:14:05.879: INFO: Checking language mongodb
Jul 9 19:14:05.930: INFO: Checking tag 2.6
Jul 9 19:14:05.931: INFO: Checking tag 3.2
Jul 9 19:14:05.931: INFO: Checking tag 3.4
Jul 9 19:14:05.931: INFO: Checking tag latest
Jul 9 19:14:05.931: INFO: Checking tag 2.4
Jul 9 19:14:05.931: INFO: Checking language jenkins
Jul 9 19:14:05.973: INFO: Checking tag 1
Jul 9 19:14:05.973: INFO: Checking tag 2
Jul 9 19:14:05.973: INFO: Checking tag latest
Jul 9 19:14:05.973: INFO: Success!
[It] should succeed with a --name of 58 characters [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:49
STEP: calling oc new-app
Jul 9 19:14:05.973: INFO: Running 'oc new-app --config=/tmp/e2e-test-new-app-xn8nh-user.kubeconfig --namespace=e2e-test-new-app-xn8nh https://github.com/openshift/nodejs-ex --name a234567890123456789012345678901234567890123456789012345678'
--> Found image 5c36a77 (2 weeks old) in image stream "openshift/nodejs" under tag "8" for "nodejs"
Node.js 8
---------
Node.js 8 available as container is a base platform for building and running various Node.js 8 applications and frameworks. Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.
Tags: builder, nodejs, nodejs8
* The source repository appears to match: nodejs
* A source build using source code from https://github.com/openshift/nodejs-ex will be created
* The resulting image will be pushed to image stream "a234567890123456789012345678901234567890123456789012345678:latest"
* Use 'start-build' to trigger a new build
* This image will be deployed in deployment config "a234567890123456789012345678901234567890123456789012345678"
* Port 8080/tcp will be load balanced by service "a234567890123456789012345678901234567890123456789012345678"
* Other containers can access this service through the hostname "a234567890123456789012345678901234567890123456789012345678"
--> Creating resources ...
imagestream "a234567890123456789012345678901234567890123456789012345678" created
buildconfig "a234567890123456789012345678901234567890123456789012345678" created
deploymentconfig "a234567890123456789012345678901234567890123456789012345678" created
service "a234567890123456789012345678901234567890123456789012345678" created
--> Success
Build scheduled, use 'oc logs -f bc/a234567890123456789012345678901234567890123456789012345678' to track its progress.
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose svc/a234567890123456789012345678901234567890123456789012345678'
Run 'oc status' to view your app.
STEP: waiting for the build to complete
STEP: waiting for the deployment to complete
Jul 9 19:14:45.592: INFO: waiting for deploymentconfig e2e-test-new-app-xn8nh/a234567890123456789012345678901234567890123456789012345678 to be available with version 1
Jul 9 19:14:49.680: INFO: deploymentconfig e2e-test-new-app-xn8nh/a234567890123456789012345678901234567890123456789012345678 available after 4.08804044s
pods: a23456789012345678901234567890123456789012345678901234567895p48
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:40
[AfterEach] [Feature:Builds][Conformance] oc new-app
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:14:49.742: INFO: namespace : e2e-test-new-app-xn8nh api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][Conformance] oc new-app
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:15:11.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:68.485 seconds]
[Feature:Builds][Conformance] oc new-app
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:16
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:24
should succeed with a --name of 58 characters [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:49
------------------------------
S
------------------------------
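The 58-character --name above matters because every generated resource (imagestream, buildconfig, deploymentconfig, service) reuses it verbatim, and names of this kind must stay within the 63-character DNS-1123 label limit even once suffixes are appended, as the suffixed pod name in the log illustrates. A quick sketch with the apimachinery validation helper; the "-1-build" suffix is illustrative only:

package main

import (
    "fmt"

    "k8s.io/apimachinery/pkg/util/validation"
)

func main() {
    name := "a234567890123456789012345678901234567890123456789012345678" // 58 chars
    fmt.Println("len:", len(name))

    // Valid as a DNS-1123 label (limit 63), so the base name fits everywhere.
    fmt.Println("base name errors:", validation.IsDNS1123Label(name))

    // A suffixed derivative can blow past 63 characters, which is why
    // generators keep base names comfortably short of the limit.
    fmt.Println("suffixed errors:", validation.IsDNS1123Label(name+"-1-build"))
}

------------------------------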
[Feature:DeploymentConfig] deploymentconfigs when tagging images [Conformance]
should successfully tag the deployed image [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:441
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:14:43.232: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:14:45.156: INFO: configPath is now "/tmp/e2e-test-cli-deployment-j6sbn-user.kubeconfig"
Jul 9 19:14:45.156: INFO: The user is now "e2e-test-cli-deployment-j6sbn-user"
Jul 9 19:14:45.156: INFO: Creating project "e2e-test-cli-deployment-j6sbn"
Jul 9 19:14:45.297: INFO: Waiting on permissions in project "e2e-test-cli-deployment-j6sbn" ...
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should successfully tag the deployed image [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:441
STEP: creating the deployment config fixture
STEP: verifying the deployment is marked complete
Jul 9 19:14:54.877: INFO: Latest rollout of dc/tag-images (rc/tag-images-1) is complete.
STEP: verifying the deployer service account can update imagestreamtags and user can get them
STEP: verifying the post deployment action happened: tag is set
[AfterEach] when tagging images [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:437
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:14:57.102: INFO: namespace : e2e-test-cli-deployment-j6sbn api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:15:19.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:35.977 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
when tagging images [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:435
should successfully tag the deployed image [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:441
------------------------------
S
------------------------------
[Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
should fail resolving unresolvable valueFrom in sti build environment variable references [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:105
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:15:07.607: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:15:09.399: INFO: configPath is now "/tmp/e2e-test-build-valuefrom-6mbp7-user.kubeconfig"
Jul 9 19:15:09.399: INFO: The user is now "e2e-test-build-valuefrom-6mbp7-user"
Jul 9 19:15:09.399: INFO: Creating project "e2e-test-build-valuefrom-6mbp7"
Jul 9 19:15:09.541: INFO: Waiting on permissions in project "e2e-test-build-valuefrom-6mbp7" ...
[BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:27
Jul 9 19:15:09.595: INFO:
docker info output:
Containers: 4
Running: 0
Paused: 0
Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 20
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:38
STEP: waiting for builder service account
STEP: waiting for openshift namespace imagestreams
Jul 9 19:15:09.731: INFO: Running scan #0
Jul 9 19:15:09.731: INFO: Checking language ruby
Jul 9 19:15:09.768: INFO: Checking tag 2.0
Jul 9 19:15:09.768: INFO: Checking tag 2.2
Jul 9 19:15:09.768: INFO: Checking tag 2.3
Jul 9 19:15:09.768: INFO: Checking tag 2.4
Jul 9 19:15:09.768: INFO: Checking tag 2.5
Jul 9 19:15:09.768: INFO: Checking tag latest
Jul 9 19:15:09.768: INFO: Checking language nodejs
Jul 9 19:15:09.812: INFO: Checking tag 0.10
Jul 9 19:15:09.812: INFO: Checking tag 4
Jul 9 19:15:09.812: INFO: Checking tag 6
Jul 9 19:15:09.812: INFO: Checking tag 8
Jul 9 19:15:09.812: INFO: Checking tag latest
Jul 9 19:15:09.812: INFO: Checking language perl
Jul 9 19:15:09.850: INFO: Checking tag 5.16
Jul 9 19:15:09.850: INFO: Checking tag 5.20
Jul 9 19:15:09.850: INFO: Checking tag 5.24
Jul 9 19:15:09.850: INFO: Checking tag latest
Jul 9 19:15:09.850: INFO: Checking language php
Jul 9 19:15:09.892: INFO: Checking tag latest
Jul 9 19:15:09.892: INFO: Checking tag 5.5
Jul 9 19:15:09.892: INFO: Checking tag 5.6
Jul 9 19:15:09.892: INFO: Checking tag 7.0
Jul 9 19:15:09.892: INFO: Checking tag 7.1
Jul 9 19:15:09.892: INFO: Checking language python
Jul 9 19:15:09.928: INFO: Checking tag 2.7
Jul 9 19:15:09.928: INFO: Checking tag 3.3
Jul 9 19:15:09.928: INFO: Checking tag 3.4
Jul 9 19:15:09.928: INFO: Checking tag 3.5
Jul 9 19:15:09.928: INFO: Checking tag 3.6
Jul 9 19:15:09.928: INFO: Checking tag latest
Jul 9 19:15:09.928: INFO: Checking language wildfly
Jul 9 19:15:09.966: INFO: Checking tag latest
Jul 9 19:15:09.966: INFO: Checking tag 10.0
Jul 9 19:15:09.966: INFO: Checking tag 10.1
Jul 9 19:15:09.966: INFO: Checking tag 11.0
Jul 9 19:15:09.966: INFO: Checking tag 12.0
Jul 9 19:15:09.966: INFO: Checking tag 8.1
Jul 9 19:15:09.966: INFO: Checking tag 9.0
Jul 9 19:15:09.966: INFO: Checking language mysql
Jul 9 19:15:10.002: INFO: Checking tag 5.5
Jul 9 19:15:10.002: INFO: Checking tag 5.6
Jul 9 19:15:10.002: INFO: Checking tag 5.7
Jul 9 19:15:10.002: INFO: Checking tag latest
Jul 9 19:15:10.002: INFO: Checking language postgresql
Jul 9 19:15:10.041: INFO: Checking tag 9.5
Jul 9 19:15:10.041: INFO: Checking tag 9.6
Jul 9 19:15:10.041: INFO: Checking tag latest
Jul 9 19:15:10.041: INFO: Checking tag 9.2
Jul 9 19:15:10.041: INFO: Checking tag 9.4
Jul 9 19:15:10.041: INFO: Checking language mongodb
Jul 9 19:15:10.081: INFO: Checking tag 3.2
Jul 9 19:15:10.081: INFO: Checking tag 3.4
Jul 9 19:15:10.081: INFO: Checking tag latest
Jul 9 19:15:10.081: INFO: Checking tag 2.4
Jul 9 19:15:10.081: INFO: Checking tag 2.6
Jul 9 19:15:10.081: INFO: Checking language jenkins
Jul 9 19:15:10.115: INFO: Checking tag 1
Jul 9 19:15:10.115: INFO: Checking tag 2
Jul 9 19:15:10.115: INFO: Checking tag latest
Jul 9 19:15:10.115: INFO: Success!
STEP: creating test image stream
Jul 9 19:15:10.115: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-6mbp7-user.kubeconfig --namespace=e2e-test-build-valuefrom-6mbp7 -f /tmp/fixture-testdata-dir333495585/test/extended/testdata/builds/valuefrom/test-is.json'
imagestream.image.openshift.io "test" created
STEP: creating test secret
Jul 9 19:15:10.486: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-6mbp7-user.kubeconfig --namespace=e2e-test-build-valuefrom-6mbp7 -f /tmp/fixture-testdata-dir333495585/test/extended/testdata/builds/valuefrom/test-secret.yaml'
secret "mysecret" created
STEP: creating test configmap
Jul 9 19:15:10.765: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-6mbp7-user.kubeconfig --namespace=e2e-test-build-valuefrom-6mbp7 -f /tmp/fixture-testdata-dir333495585/test/extended/testdata/builds/valuefrom/test-configmap.yaml'
configmap "myconfigmap" created
[It] should fail resolving unresolvable valueFrom in sti build environment variable references [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:105
STEP: creating test build config
Jul 9 19:15:11.083: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-6mbp7-user.kubeconfig --namespace=e2e-test-build-valuefrom-6mbp7 -f /tmp/fixture-testdata-dir333495585/test/extended/testdata/builds/valuefrom/failed-sti-build-value-from-config.yaml'
buildconfig.build.openshift.io "mys2itest" created
STEP: starting test build
Jul 9 19:15:11.382: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-valuefrom-6mbp7-user.kubeconfig --namespace=e2e-test-build-valuefrom-6mbp7 mys2itest -o=name'
Jul 9 19:15:11.722: INFO:
start-build output with args [mys2itest -o=name]:
Error><nil>
StdOut>
build/mys2itest-1
StdErr>
Jul 9 19:15:11.723: INFO: Waiting for mys2itest-1 to complete
Jul 9 19:15:17.798: INFO: WaitForABuild returning with error: The build "mys2itest-1" status is "Error"
Jul 9 19:15:17.798: INFO: Done waiting for mys2itest-1: util.BuildResult{BuildPath:"build/mys2itest-1", BuildName:"mys2itest-1", StartBuildStdErr:"", StartBuildStdOut:"build/mys2itest-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc420eb8f00), BuildAttempt:true, BuildSuccess:false, BuildFailure:true, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc42078c1e0)}
with error: The build "mys2itest-1" status is "Error"
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:31
[AfterEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:15:17.867: INFO: namespace : e2e-test-build-valuefrom-6mbp7 api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:15:23.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:16.351 seconds]
[Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:13
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:26
should fail resolving unresolvable valueFrom in sti build environment variable references [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:105
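Note: the "Error" build above is the expected outcome of this spec. The BuildConfig's strategy environment variables use valueFrom references to ConfigMap/Secret keys that do not exist, so the build controller cannot resolve them and the build terminates with status "Error" rather than "Complete"; WaitForABuild then returns exactly the failure the test asserts on. A minimal sketch of watching for that terminal phase by hand, assuming an oc session in the (since-deleted) test namespace and the build name shown in the log; the 60-second cap is arbitrary:

  # Poll the build until it reaches the expected terminal phase.
  for i in $(seq 1 60); do
    phase=$(oc get build/mys2itest-1 -o jsonpath='{.status.phase}')
    if [ "$phase" = "Error" ]; then
      echo "build failed as expected (unresolvable valueFrom)"
      break
    fi
    sleep 1
  done

The success-path specs earlier in this log use the same loop shape with "Complete" as the terminal phase.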
------------------------------
[Conformance][Area:Networking][Feature:Router] The HAProxy router
should override the route host for overridden domains with a custom value [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:169
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:15:23.960: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:15:25.857: INFO: configPath is now "/tmp/e2e-test-router-scoped-65jwn-user.kubeconfig"
Jul 9 19:15:25.857: INFO: The user is now "e2e-test-router-scoped-65jwn-user"
Jul 9 19:15:25.857: INFO: Creating project "e2e-test-router-scoped-65jwn"
Jul 9 19:15:25.994: INFO: Waiting on permissions in project "e2e-test-router-scoped-65jwn" ...
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:48
Jul 9 19:15:26.062: INFO: Running 'oc new-app --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-router-scoped-65jwn -f /tmp/fixture-testdata-dir333495585/test/extended/testdata/scoped-router.yaml -p IMAGE=openshift/origin-haproxy-router'
--> Deploying template "e2e-test-router-scoped-65jwn/" for "/tmp/fixture-testdata-dir333495585/test/extended/testdata/scoped-router.yaml" to project e2e-test-router-scoped-65jwn
* With parameters:
* IMAGE=openshift/origin-haproxy-router
* SCOPE=["--name=test-scoped", "--namespace=$(POD_NAMESPACE)", "--loglevel=4", "--labels=select=first"]
--> Creating resources ...
pod "router-scoped" created
pod "router-override" created
pod "router-override-domains" created
rolebinding "system-router" created
route "route-1" created
route "route-2" created
route "route-override-domain-1" created
route "route-override-domain-2" created
service "endpoints" created
pod "endpoint-1" created
--> Success
Access your application via route 'first.example.com'
Access your application via route 'second.example.com'
Access your application via route 'y.a.null.ptr'
Access your application via route 'main.void.str'
Run 'oc status' to view your app.
[It] should override the route host for overridden domains with a custom value [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:169
Jul 9 19:15:27.169: INFO: Creating new exec pod
STEP: creating a scoped router with overridden domains from a config file "/tmp/fixture-testdata-dir333495585/test/extended/testdata/scoped-router.yaml"
STEP: waiting for the healthz endpoint to respond
Jul 9 19:15:34.315: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-router-scoped-65jwn execpod -- /bin/sh -c
set -e
for i in $(seq 1 180); do
code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: 10.2.2.35' "http://10.2.2.35:1936/healthz" ) || rc=$?
if [[ "${rc:-0}" -eq 0 ]]; then
echo $code
if [[ $code -eq 200 ]]; then
exit 0
fi
if [[ $code -ne 503 ]]; then
exit 1
fi
else
echo "error ${rc}" 1>&2
fi
sleep 1
done
'
Jul 9 19:15:34.940: INFO: stderr: ""
STEP: waiting for the valid route to respond
Jul 9 19:15:34.940: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-router-scoped-65jwn execpod -- /bin/sh -c
set -e
for i in $(seq 1 180); do
code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: route-override-domain-1-e2e-test-router-scoped-65jwn.apps.veto.test' "http://10.2.2.35/Letter" ) || rc=$?
if [[ "${rc:-0}" -eq 0 ]]; then
echo $code
if [[ $code -eq 200 ]]; then
exit 0
fi
if [[ $code -ne 503 ]]; then
exit 1
fi
else
echo "error ${rc}" 1>&2
fi
sleep 1
done
'
Jul 9 19:15:35.607: INFO: stderr: ""
STEP: checking that the stored domain name does not match a route
Jul 9 19:15:35.607: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-router-scoped-65jwn execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: y.a.null.ptr' "http://10.2.2.35/Letter"'
Jul 9 19:15:36.246: INFO: stderr: ""
STEP: checking that route-override-domain-1-e2e-test-router-scoped-65jwn.apps.veto.test matches a route
Jul 9 19:15:36.246: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-router-scoped-65jwn execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: route-override-domain-1-e2e-test-router-scoped-65jwn.apps.veto.test' "http://10.2.2.35/Letter"'
Jul 9 19:15:36.960: INFO: stderr: ""
STEP: checking that route-override-domain-2-e2e-test-router-scoped-65jwn.apps.veto.test matches a route
Jul 9 19:15:36.960: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-router-scoped-65jwn execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: route-override-domain-2-e2e-test-router-scoped-65jwn.apps.veto.test' "http://10.2.2.35/Letter"'
Jul 9 19:15:37.613: INFO: stderr: ""
STEP: checking that the router reported the correct ingress and override
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:36
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:15:37.766: INFO: namespace : e2e-test-router-scoped-65jwn api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:15:51.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:27.904 seconds]
[Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:26
The HAProxy router
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:67
should override the route host for overridden domains with a custom value [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:169
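Note: the checks above all use one probe. The first loop polls the router's stats port (:1936/healthz) until it returns 200, so later assertions do not race router startup; the subsequent curls send requests to the router IP with a spoofed Host header and read only the HTTP status. 200 means HAProxy matched a route for that host; 503 means the router has no backend for it. The overridden-domain assertion is that the stored host (y.a.null.ptr) gets 503 while the rewritten hosts (route-override-domain-*.apps.veto.test) get 200. A one-line sketch of the same probe, with ROUTER_IP and HOST as hypothetical placeholders for the values in the log (10.2.2.35 and the hosts above):

  # 200 = route matched; 503 = host unknown to this router.
  code=$(curl -s -o /dev/null -w '%{http_code}' --header "Host: $HOST" "http://$ROUTER_IP/Letter")
  echo "$HOST -> $code"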
------------------------------
S
------------------------------
[Feature:DeploymentConfig] deploymentconfigs viewing rollout history [Conformance]
should print the rollout history [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:602
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:14:41.404: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:14:43.030: INFO: configPath is now "/tmp/e2e-test-cli-deployment-bvczx-user.kubeconfig"
Jul 9 19:14:43.030: INFO: The user is now "e2e-test-cli-deployment-bvczx-user"
Jul 9 19:14:43.030: INFO: Creating project "e2e-test-cli-deployment-bvczx"
Jul 9 19:14:43.210: INFO: Waiting on permissions in project "e2e-test-cli-deployment-bvczx" ...
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should print the rollout history [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:602
STEP: waiting for the first rollout to complete
Jul 9 19:14:57.434: INFO: Latest rollout of dc/deployment-simple (rc/deployment-simple-1) is complete.
STEP: updating the deployment config in order to trigger a new rollout
STEP: waiting for the second rollout to complete
Jul 9 19:15:12.069: INFO: Latest rollout of dc/deployment-simple (rc/deployment-simple-2) is complete.
Jul 9 19:15:12.069: INFO: Running 'oc rollout --config=/tmp/e2e-test-cli-deployment-bvczx-user.kubeconfig --namespace=e2e-test-cli-deployment-bvczx history dc/deployment-simple'
STEP: checking the history for substrings
deploymentconfigs "deployment-simple"
REVISION STATUS CAUSE
1 Complete config change
2 Complete config change
[AfterEach] viewing rollout history [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:598
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:15:14.466: INFO: namespace : e2e-test-cli-deployment-bvczx api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:15:52.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:71.121 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
viewing rollout history [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:596
should print the rollout history [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:602
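Note: the REVISION/STATUS/CAUSE table is ordinary `oc rollout history` output for a DeploymentConfig; each config-change rollout adds one revision. A short sketch of reproducing the two-revision history outside the suite, assuming a dc named deployment-simple like the fixture above (the TRIGGER env var name is arbitrary; any pod-template change fires a config-change rollout):

  oc rollout status dc/deployment-simple     # wait for the first rollout to complete
  oc set env dc/deployment-simple TRIGGER=1  # template change -> new config-change rollout
  oc rollout status dc/deployment-simple     # wait for the second rollout
  oc rollout history dc/deployment-simple    # prints the REVISION/STATUS/CAUSE table seen above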
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0666,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:15:52.526: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:15:54.037: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-vj5sj
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul 9 19:15:54.718: INFO: Waiting up to 5m0s for pod "pod-2cc7bf6f-83e7-11e8-8401-28d244b00276" in namespace "e2e-tests-emptydir-vj5sj" to be "success or failure"
Jul 9 19:15:54.748: INFO: Pod "pod-2cc7bf6f-83e7-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 29.638ms
Jul 9 19:15:56.780: INFO: Pod "pod-2cc7bf6f-83e7-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062550676s
STEP: Saw pod success
Jul 9 19:15:56.780: INFO: Pod "pod-2cc7bf6f-83e7-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:15:56.929: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-2cc7bf6f-83e7-11e8-8401-28d244b00276 container test-container: <nil>
STEP: delete the pod
Jul 9 19:15:57.118: INFO: Waiting for pod pod-2cc7bf6f-83e7-11e8-8401-28d244b00276 to disappear
Jul 9 19:15:57.146: INFO: Pod pod-2cc7bf6f-83e7-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:15:57.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-vj5sj" for this suite.
Jul 9 19:16:03.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:16:04.671: INFO: namespace: e2e-tests-emptydir-vj5sj, resource: bindings, ignored listing per whitelist
Jul 9 19:16:06.533: INFO: namespace e2e-tests-emptydir-vj5sj deletion completed in 9.355363525s
• [SLOW TEST:14.008 seconds]
[sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
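Note: the volume specs in this run share the same "success or failure" pattern: create a pod whose single container inspects the mount and exits, wait for phase Succeeded, then read the container log for the expected output (here, a tmpfs mount carrying a file with mode 0666). A sketch of the equivalent check from inside any pod with an emptyDir of medium: Memory mounted at /test-volume (the path and file name are hypothetical):

  # tmpfs is expected when the emptyDir medium is Memory.
  grep ' /test-volume tmpfs ' /proc/mounts
  # A file created with mode 0666 should report 666.
  touch /test-volume/f && chmod 0666 /test-volume/f
  stat -c '%a' /test-volume/f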
------------------------------
SS
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that implements NetworkPolicy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:431
Jul 9 19:16:06.536: INFO: This plugin does not implement NetworkPolicy.
[AfterEach] when using a plugin that implements NetworkPolicy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:16:06.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
NetworkPolicy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:48
when using a plugin that implements NetworkPolicy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:430
should enforce policy based on NamespaceSelector [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] [BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:282
Jul 9 19:16:06.536: This plugin does not implement NetworkPolicy.
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
[sig-storage] Downward API volume
should provide container's cpu limit [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:15:51.866: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:15:53.579: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-rj487
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:38
[It] should provide container's cpu limit [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:15:54.303: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2c87287f-83e7-11e8-8fe2-28d244b00276" in namespace "e2e-tests-downward-api-rj487" to be "success or failure"
Jul 9 19:15:54.377: INFO: Pod "downwardapi-volume-2c87287f-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 74.260454ms
Jul 9 19:15:56.621: INFO: Pod "downwardapi-volume-2c87287f-83e7-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.317905061s
STEP: Saw pod success
Jul 9 19:15:56.621: INFO: Pod "downwardapi-volume-2c87287f-83e7-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:15:56.683: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-2c87287f-83e7-11e8-8fe2-28d244b00276 container client-container: <nil>
STEP: delete the pod
Jul 9 19:15:56.753: INFO: Waiting for pod downwardapi-volume-2c87287f-83e7-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:15:56.784: INFO: Pod downwardapi-volume-2c87287f-83e7-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:15:56.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rj487" for this suite.
Jul 9 19:16:03.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:16:05.110: INFO: namespace: e2e-tests-downward-api-rj487, resource: bindings, ignored listing per whitelist
Jul 9 19:16:06.679: INFO: namespace e2e-tests-downward-api-rj487 deletion completed in 9.741913411s
• [SLOW TEST:14.813 seconds]
[sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:33
should provide container's cpu limit [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
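Note: the downward API volume plugin projects pod and container metadata, including resource limits, into files under the volume mount; the test container simply reads the file and the framework greps the container log for the expected value. A sketch of the read side only, assuming a pod that mounts a downwardAPI volume at /etc/podinfo with an item whose resourceFieldRef is limits.cpu (both names hypothetical):

  # The kubelet writes the container's cpu limit into the projected file;
  # the container only has to read it back.
  cat /etc/podinfo/cpu_limit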
------------------------------
[sig-storage] Projected
should be consumable from pods in volume with defaultMode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:16:06.538: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:16:08.064: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-pmhns
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable from pods in volume with defaultMode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating configMap with name projected-configmap-test-volume-35243cd3-83e7-11e8-8401-28d244b00276
STEP: Creating a pod to test consume configMaps
Jul 9 19:16:08.783: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-352a5988-83e7-11e8-8401-28d244b00276" in namespace "e2e-tests-projected-pmhns" to be "success or failure"
Jul 9 19:16:08.837: INFO: Pod "pod-projected-configmaps-352a5988-83e7-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 53.581543ms
Jul 9 19:16:10.865: INFO: Pod "pod-projected-configmaps-352a5988-83e7-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.082002351s
STEP: Saw pod success
Jul 9 19:16:10.865: INFO: Pod "pod-projected-configmaps-352a5988-83e7-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:16:10.900: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-projected-configmaps-352a5988-83e7-11e8-8401-28d244b00276 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jul 9 19:16:10.967: INFO: Waiting for pod pod-projected-configmaps-352a5988-83e7-11e8-8401-28d244b00276 to disappear
Jul 9 19:16:10.996: INFO: Pod pod-projected-configmaps-352a5988-83e7-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:16:10.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pmhns" for this suite.
Jul 9 19:16:17.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:16:19.587: INFO: namespace: e2e-tests-projected-pmhns, resource: bindings, ignored listing per whitelist
Jul 9 19:16:20.420: INFO: namespace e2e-tests-projected-pmhns deletion completed in 9.389700495s
• [SLOW TEST:13.882 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
should be consumable from pods in volume with defaultMode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
S
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:419
Jul 9 19:16:20.423: INFO: This plugin does not isolate namespaces by default.
[AfterEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:16:20.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:16:20.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[Area:Networking] services
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10
when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:418
should allow connections from pods in the default namespace to a service in another namespace on a different node [Suite:openshift/conformance/parallel] [BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:60
Jul 9 19:16:20.423: This plugin does not isolate namespaces by default.
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0777,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:16:06.680: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:16:08.428: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-pvdqq
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul 9 19:16:09.175: INFO: Waiting up to 5m0s for pod "pod-3564eb7c-83e7-11e8-8fe2-28d244b00276" in namespace "e2e-tests-emptydir-pvdqq" to be "success or failure"
Jul 9 19:16:09.206: INFO: Pod "pod-3564eb7c-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 30.751906ms
Jul 9 19:16:11.255: INFO: Pod "pod-3564eb7c-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079832714s
Jul 9 19:16:13.287: INFO: Pod "pod-3564eb7c-83e7-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111368217s
STEP: Saw pod success
Jul 9 19:16:13.287: INFO: Pod "pod-3564eb7c-83e7-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:16:13.322: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-3564eb7c-83e7-11e8-8fe2-28d244b00276 container test-container: <nil>
STEP: delete the pod
Jul 9 19:16:13.405: INFO: Waiting for pod pod-3564eb7c-83e7-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:16:13.436: INFO: Pod pod-3564eb7c-83e7-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:16:13.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-pvdqq" for this suite.
Jul 9 19:16:19.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:16:21.562: INFO: namespace: e2e-tests-emptydir-pvdqq, resource: bindings, ignored listing per whitelist
Jul 9 19:16:23.503: INFO: namespace e2e-tests-emptydir-pvdqq deletion completed in 10.027196829s
• [SLOW TEST:16.823 seconds]
[sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0777,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SS
------------------------------
[Feature:DeploymentConfig] deploymentconfigs with enhanced status [Conformance]
should include various info in status [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:539
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:15:19.211: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:15:21.149: INFO: configPath is now "/tmp/e2e-test-cli-deployment-25qpw-user.kubeconfig"
Jul 9 19:15:21.149: INFO: The user is now "e2e-test-cli-deployment-25qpw-user"
Jul 9 19:15:21.149: INFO: Creating project "e2e-test-cli-deployment-25qpw"
Jul 9 19:15:21.313: INFO: Waiting on permissions in project "e2e-test-cli-deployment-25qpw" ...
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should include various info in status [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:539
STEP: verifying the deployment is marked complete
Jul 9 19:15:34.015: INFO: Latest rollout of dc/deployment-simple (rc/deployment-simple-1) is complete.
STEP: verifying that status.replicas is set
Jul 9 19:15:34.015: INFO: Running 'oc get --config=/tmp/e2e-test-cli-deployment-25qpw-user.kubeconfig --namespace=e2e-test-cli-deployment-25qpw dc/deployment-simple --output=jsonpath="{.status.replicas}"'
STEP: verifying that status.updatedReplicas is set
Jul 9 19:15:34.240: INFO: Running 'oc get --config=/tmp/e2e-test-cli-deployment-25qpw-user.kubeconfig --namespace=e2e-test-cli-deployment-25qpw dc/deployment-simple --output=jsonpath="{.status.updatedReplicas}"'
STEP: verifying that status.availableReplicas is set
Jul 9 19:15:34.503: INFO: Running 'oc get --config=/tmp/e2e-test-cli-deployment-25qpw-user.kubeconfig --namespace=e2e-test-cli-deployment-25qpw dc/deployment-simple --output=jsonpath="{.status.availableReplicas}"'
STEP: verifying that status.unavailableReplicas is set
Jul 9 19:15:34.763: INFO: Running 'oc get --config=/tmp/e2e-test-cli-deployment-25qpw-user.kubeconfig --namespace=e2e-test-cli-deployment-25qpw dc/deployment-simple --output=jsonpath="{.status.unavailableReplicas}"'
[AfterEach] with enhanced status [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:535
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:15:37.115: INFO: namespace : e2e-test-cli-deployment-25qpw api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:16:29.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:70.063 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
with enhanced status [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:532
should include various info in status [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:539
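Note: the four `oc get ... --output=jsonpath` calls above each read one field of dc.status. The same fields can be read in a single call with a multi-expression jsonpath template, e.g. (dc name from the fixture above):

  oc get dc/deployment-simple \
    --output=jsonpath='{.status.replicas} {.status.updatedReplicas} {.status.availableReplicas} {.status.unavailableReplicas}{"\n"}'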
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:16:23.505: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:16:25.180: INFO: configPath is now "/tmp/e2e-test-router-stress-vn5rl-user.kubeconfig"
Jul 9 19:16:25.180: INFO: The user is now "e2e-test-router-stress-vn5rl-user"
Jul 9 19:16:25.180: INFO: Creating project "e2e-test-router-stress-vn5rl"
Jul 9 19:16:25.318: INFO: Waiting on permissions in project "e2e-test-router-stress-vn5rl" ...
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:52
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:40
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:16:25.456: INFO: namespace : e2e-test-router-stress-vn5rl api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:16:31.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [8.058 seconds]
[Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:30
The HAProxy router [BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:86
converges when multiple routers are writing conflicting status [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:168
no router installed on the cluster
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:57
------------------------------
[sig-storage] Secrets
should be consumable from pods in volume with defaultMode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:16:20.425: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:16:21.998: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-secrets-trb4p
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating secret with name secret-test-3d680430-83e7-11e8-8401-28d244b00276
STEP: Creating a pod to test consume secrets
Jul 9 19:16:22.645: INFO: Waiting up to 5m0s for pod "pod-secrets-3d6cff39-83e7-11e8-8401-28d244b00276" in namespace "e2e-tests-secrets-trb4p" to be "success or failure"
Jul 9 19:16:22.676: INFO: Pod "pod-secrets-3d6cff39-83e7-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 30.889356ms
Jul 9 19:16:24.706: INFO: Pod "pod-secrets-3d6cff39-83e7-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061005967s
Jul 9 19:16:26.735: INFO: Pod "pod-secrets-3d6cff39-83e7-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089959761s
STEP: Saw pod success
Jul 9 19:16:26.735: INFO: Pod "pod-secrets-3d6cff39-83e7-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:16:26.775: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-secrets-3d6cff39-83e7-11e8-8401-28d244b00276 container secret-volume-test: <nil>
STEP: delete the pod
Jul 9 19:16:26.841: INFO: Waiting for pod pod-secrets-3d6cff39-83e7-11e8-8401-28d244b00276 to disappear
Jul 9 19:16:26.872: INFO: Pod pod-secrets-3d6cff39-83e7-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:16:26.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-trb4p" for this suite.
Jul 9 19:16:33.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:16:35.481: INFO: namespace: e2e-tests-secrets-trb4p, resource: bindings, ignored listing per whitelist
Jul 9 19:16:36.427: INFO: namespace e2e-tests-secrets-trb4p deletion completed in 9.462154922s
• [SLOW TEST:16.002 seconds]
[sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume with defaultMode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SS
------------------------------
[Feature:DeploymentConfig] deploymentconfigs rolled back [Conformance]
should rollback to an older deployment [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:842
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:15:11.809: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:15:13.424: INFO: configPath is now "/tmp/e2e-test-cli-deployment-fb2n9-user.kubeconfig"
Jul 9 19:15:13.424: INFO: The user is now "e2e-test-cli-deployment-fb2n9-user"
Jul 9 19:15:13.424: INFO: Creating project "e2e-test-cli-deployment-fb2n9"
Jul 9 19:15:13.552: INFO: Waiting on permissions in project "e2e-test-cli-deployment-fb2n9" ...
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should rollback to an older deployment [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:842
Jul 9 19:15:27.148: INFO: Latest rollout of dc/deployment-simple (rc/deployment-simple-1) is complete.
Jul 9 19:15:27.148: INFO: Running 'oc rollout --config=/tmp/e2e-test-cli-deployment-fb2n9-user.kubeconfig --namespace=e2e-test-cli-deployment-fb2n9 latest deployment-simple'
STEP: verifying that we are on the second version
Jul 9 19:15:27.463: INFO: Running 'oc get --config=/tmp/e2e-test-cli-deployment-fb2n9-user.kubeconfig --namespace=e2e-test-cli-deployment-fb2n9 dc/deployment-simple --output=jsonpath="{.status.latestVersion}"'
Jul 9 19:15:42.267: INFO: Latest rollout of dc/deployment-simple (rc/deployment-simple-2) is complete.
STEP: verifying that we can rollback
Jul 9 19:15:42.267: INFO: Running 'oc rollout --config=/tmp/e2e-test-cli-deployment-fb2n9-user.kubeconfig --namespace=e2e-test-cli-deployment-fb2n9 undo dc/deployment-simple'
Jul 9 19:15:56.225: INFO: Latest rollout of dc/deployment-simple (rc/deployment-simple-3) is complete.
STEP: verifying that we are on the third version
Jul 9 19:15:56.225: INFO: Running 'oc get --config=/tmp/e2e-test-cli-deployment-fb2n9-user.kubeconfig --namespace=e2e-test-cli-deployment-fb2n9 dc/deployment-simple --output=jsonpath="{.status.latestVersion}"'
[AfterEach] rolled back [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:838
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:15:58.788: INFO: namespace : e2e-test-cli-deployment-fb2n9 api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:16:44.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:93.049 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
rolled back [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:836
should rollback to an older deployment [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:842
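Note: the sequence above shows how DeploymentConfig rollback works. `oc rollout latest` bumps status.latestVersion from 1 to 2 (rc/deployment-simple-2), and `oc rollout undo` does not reactivate the old RC; it rolls forward with a third rollout whose pod template matches revision 1, which is why the test then expects latestVersion == 3. A sketch of the same flow against the fixture dc:

  oc rollout latest dc/deployment-simple
  oc get dc/deployment-simple -o jsonpath='{.status.latestVersion}'  # 2
  oc rollout undo dc/deployment-simple
  oc get dc/deployment-simple -o jsonpath='{.status.latestVersion}'  # 3: undo rolls forward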
------------------------------
[sig-storage] Projected
should provide podname as non-root with fsgroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:907
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:16:36.429: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:16:37.910: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-78tdl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should provide podname as non-root with fsgroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:907
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:16:38.533: INFO: Waiting up to 5m0s for pod "metadata-volume-46e4ef9c-83e7-11e8-8401-28d244b00276" in namespace "e2e-tests-projected-78tdl" to be "success or failure"
Jul 9 19:16:38.566: INFO: Pod "metadata-volume-46e4ef9c-83e7-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 33.032762ms
Jul 9 19:16:40.599: INFO: Pod "metadata-volume-46e4ef9c-83e7-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.066337047s
STEP: Saw pod success
Jul 9 19:16:40.599: INFO: Pod "metadata-volume-46e4ef9c-83e7-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:16:40.628: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod metadata-volume-46e4ef9c-83e7-11e8-8401-28d244b00276 container client-container: <nil>
STEP: delete the pod
Jul 9 19:16:40.698: INFO: Waiting for pod metadata-volume-46e4ef9c-83e7-11e8-8401-28d244b00276 to disappear
Jul 9 19:16:40.726: INFO: Pod metadata-volume-46e4ef9c-83e7-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:16:40.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-78tdl" for this suite.
Jul 9 19:16:46.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:16:48.296: INFO: namespace: e2e-tests-projected-78tdl, resource: bindings, ignored listing per whitelist
Jul 9 19:16:50.296: INFO: namespace e2e-tests-projected-78tdl deletion completed in 9.537525694s
• [SLOW TEST:13.866 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
should provide podname as non-root with fsgroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:907
------------------------------
S
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] HostPath
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:16:44.859: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:16:46.729: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-hostpath-67sp9
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should support existing directory subPath [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:121
Jul 9 19:16:47.360: INFO: No SSH Key for provider : 'GetSigner(...) not implemented for '
[AfterEach] [sig-storage] HostPath
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:16:47.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-67sp9" for this suite.
Jul 9 19:16:53.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:16:55.401: INFO: namespace: e2e-tests-hostpath-67sp9, resource: bindings, ignored listing per whitelist
Jul 9 19:16:57.014: INFO: namespace e2e-tests-hostpath-67sp9 deletion completed in 9.602960881s
S [SKIPPING] [12.155 seconds]
[sig-storage] HostPath
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should support existing directory subPath [Suite:openshift/conformance/parallel] [Suite:k8s] [It]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:121
Jul 9 19:16:47.360: No SSH Key for provider : 'GetSigner(...) not implemented for '
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
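The skip above is environmental rather than a product failure: with no --provider configured, the framework has no SSH signer (hence the GetSigner(...) message), so it cannot reach the node to pre-create the host directory the test needs. For reference, a minimal Go sketch of the hostPath-plus-subPath mount such a test exercises; the host path, names, and image are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	hostPathType := v1.HostPathDirectory // require an existing directory on the node
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "hostpath-subpath-example"}, // illustrative name
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:    "test-container",
				Image:   "busybox", // assumed image
				Command: []string{"ls", "/test-volume"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
					SubPath:   "existing-dir", // only this subdirectory of the hostPath is mounted
				}},
			}},
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "test-volume",
				VolumeSource: v1.VolumeSource{
					HostPath: &v1.HostPathVolumeSource{
						Path: "/tmp/hostpath-test", // assumed path; the real test SSHes in to create it first
						Type: &hostPathType,
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}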
[sig-storage] Projected
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:16:50.298: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:16:51.762: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-sz99l
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating projection with secret that has name projected-secret-test-4f2e0a0f-83e7-11e8-8401-28d244b00276
STEP: Creating a pod to test consume secrets
Jul 9 19:16:52.463: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4f330743-83e7-11e8-8401-28d244b00276" in namespace "e2e-tests-projected-sz99l" to be "success or failure"
Jul 9 19:16:52.494: INFO: Pod "pod-projected-secrets-4f330743-83e7-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 30.28511ms
Jul 9 19:16:54.554: INFO: Pod "pod-projected-secrets-4f330743-83e7-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.090533248s
STEP: Saw pod success
Jul 9 19:16:54.554: INFO: Pod "pod-projected-secrets-4f330743-83e7-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:16:54.607: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-projected-secrets-4f330743-83e7-11e8-8401-28d244b00276 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jul 9 19:16:54.693: INFO: Waiting for pod pod-projected-secrets-4f330743-83e7-11e8-8401-28d244b00276 to disappear
Jul 9 19:16:54.721: INFO: Pod pod-projected-secrets-4f330743-83e7-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:16:54.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-sz99l" for this suite.
Jul 9 19:17:00.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:17:03.092: INFO: namespace: e2e-tests-projected-sz99l, resource: bindings, ignored listing per whitelist
Jul 9 19:17:05.032: INFO: namespace e2e-tests-projected-sz99l deletion completed in 10.271422297s
• [SLOW TEST:14.735 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
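The projected-secret test above checks both file contents and the resulting permission bits. A minimal Go sketch of that pod shape: a projected volume sourcing a secret, with DefaultMode controlling the mode bits and pod-level RunAsUser/FSGroup making the files readable to a non-root UID. The secret name, UID/GID, and mode are assumed values.

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }
func int32Ptr(i int32) *int32 { return &i }

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"}, // illustrative name
		Spec: v1.PodSpec{
			SecurityContext: &v1.PodSecurityContext{
				RunAsUser: int64Ptr(1000), // non-root UID (assumed)
				FSGroup:   int64Ptr(1001), // supplemental group owning the volume (assumed)
			},
			Containers: []v1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/*"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: v1.VolumeSource{
					Projected: &v1.ProjectedVolumeSource{
						DefaultMode: int32Ptr(0440), // mode bits the test asserts on (assumed)
						Sources: []v1.VolumeProjection{{
							Secret: &v1.SecretProjection{
								LocalObjectReference: v1.LocalObjectReference{
									Name: "projected-secret-test", // assumed secret name
								},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}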
[sig-storage] ConfigMap
should be consumable from pods in volume as non-root [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:16:57.017: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:16:58.550: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-configmap-h5zmm
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating configMap with name configmap-test-volume-5343002a-83e7-11e8-bd2e-28d244b00276
STEP: Creating a pod to test consume configMaps
Jul 9 19:16:59.308: INFO: Waiting up to 5m0s for pod "pod-configmaps-53479f06-83e7-11e8-bd2e-28d244b00276" in namespace "e2e-tests-configmap-h5zmm" to be "success or failure"
Jul 9 19:16:59.342: INFO: Pod "pod-configmaps-53479f06-83e7-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 33.851956ms
Jul 9 19:17:01.372: INFO: Pod "pod-configmaps-53479f06-83e7-11e8-bd2e-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063354557s
STEP: Saw pod success
Jul 9 19:17:01.372: INFO: Pod "pod-configmaps-53479f06-83e7-11e8-bd2e-28d244b00276" satisfied condition "success or failure"
Jul 9 19:17:01.401: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-configmaps-53479f06-83e7-11e8-bd2e-28d244b00276 container configmap-volume-test: <nil>
STEP: delete the pod
Jul 9 19:17:01.579: INFO: Waiting for pod pod-configmaps-53479f06-83e7-11e8-bd2e-28d244b00276 to disappear
Jul 9 19:17:01.636: INFO: Pod pod-configmaps-53479f06-83e7-11e8-bd2e-28d244b00276 no longer exists
[AfterEach] [sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:17:01.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-h5zmm" for this suite.
Jul 9 19:17:07.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:17:11.117: INFO: namespace: e2e-tests-configmap-h5zmm, resource: bindings, ignored listing per whitelist
Jul 9 19:17:11.562: INFO: namespace e2e-tests-configmap-h5zmm deletion completed in 9.885698637s
• [SLOW TEST:14.545 seconds]
[sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume as non-root [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
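The ConfigMap test above is the same pattern with a configMap volume source in place of a projected secret; running the container under a non-root UID verifies the mounted keys are readable without root. A minimal sketch with assumed names and UID:

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"}, // illustrative name
		Spec: v1.PodSpec{
			SecurityContext: &v1.PodSecurityContext{
				RunAsUser: int64Ptr(1000), // non-root UID (assumed)
			},
			Containers: []v1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "cat /etc/configmap-volume/*"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "configmap-volume",
				VolumeSource: v1.VolumeSource{
					ConfigMap: &v1.ConfigMapVolumeSource{
						LocalObjectReference: v1.LocalObjectReference{
							Name: "configmap-test-volume", // assumed configMap name
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}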
[k8s.io] Sysctls
should reject invalid sysctls [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:142
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Sysctls
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:16:29.275: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:16:31.583: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-sysctl-vp8nj
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:56
[It] should reject invalid sysctls [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:142
STEP: Creating a pod with one valid and two invalid sysctls
[AfterEach] [k8s.io] Sysctls
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Collecting events from namespace "e2e-tests-sysctl-vp8nj".
STEP: Found 1 events.
Jul 9 19:16:32.407: INFO: At 2018-07-09 19:16:32 -0700 PDT - event for sysctl-433830bf-83e7-11e8-992b-28d244b00276: {default-scheduler } Scheduled: Successfully assigned e2e-tests-sysctl-vp8nj/sysctl-433830bf-83e7-11e8-992b-28d244b00276 to ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:16:32.747: INFO: POD NODE PHASE GRACE CONDITIONS
Jul 9 19:16:32.747: INFO: registry-6559c8c4db-45526 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:44 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:16:32.747: INFO: deployment-simple-2-x8xwq ip-10-0-130-54.us-west-2.compute.internal Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:15:31 -0700 PDT } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:16:23 -0700 PDT ContainersNotReady containers with unready status: [myapp]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC ContainersNotReady containers with unready status: [myapp]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:15:31 -0700 PDT }]
Jul 9 19:16:32.747: INFO: deployment-simple-3-htj8x ip-10-0-130-54.us-west-2.compute.internal Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:15:45 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:15:52 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:15:45 -0700 PDT }]
Jul 9 19:16:32.747: INFO: execpod98j4h ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:15 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:16 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:15 -0700 PDT }]
Jul 9 19:16:32.747: INFO: frontend-1-build ip-10-0-130-54.us-west-2.compute.internal Succeeded [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:27 -0700 PDT PodCompleted } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:50 -0700 PDT PodCompleted } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC PodCompleted } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:21 -0700 PDT }]
Jul 9 19:16:32.747: INFO: sysctl-433830bf-83e7-11e8-992b-28d244b00276 ip-10-0-130-54.us-west-2.compute.internal Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:16:32 -0700 PDT } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:16:32 -0700 PDT ContainersNotReady containers with unready status: [test-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC ContainersNotReady containers with unready status: [test-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:16:32 -0700 PDT }]
Jul 9 19:16:32.747: INFO: kube-apiserver-cn2ps ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:45 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT }]
Jul 9 19:16:32.747: INFO: kube-controller-manager-558dc6fb98-q6vr5 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:17:34 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:16:32.747: INFO: kube-core-operator-75d546fbbb-c7ctx ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:11 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:20 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:11 -0700 PDT }]
Jul 9 19:16:32.747: INFO: kube-dns-787c975867-txmxv ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:22 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:16:32.747: INFO: kube-flannel-bgv4g ip-10-0-141-201.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:59 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT }]
Jul 9 19:16:32.747: INFO: kube-flannel-m5wph ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:58 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT }]
Jul 9 19:16:32.747: INFO: kube-flannel-xcck7 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:17 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }]
Jul 9 19:16:32.747: INFO: kube-proxy-5td7p ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:54 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }]
Jul 9 19:16:32.747: INFO: kube-proxy-l2cnn ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:22 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:53 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:22 -0700 PDT }]
Jul 9 19:16:32.747: INFO: kube-proxy-zsgcb ip-10-0-141-201.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:53 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }]
Jul 9 19:16:32.747: INFO: kube-scheduler-68f8875b5c-s5tdr ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:11 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:16:32.747: INFO: metrics-server-5767bfc576-gfbwb ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:38 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:16:32.747: INFO: openshift-apiserver-rkms5 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:19 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT }]
Jul 9 19:16:32.747: INFO: openshift-controller-manager-99d6586b-qq685 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:17:55 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:16:32.747: INFO: pod-checkpointer-4882g ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:03 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }]
Jul 9 19:16:32.747: INFO: pod-checkpointer-4882g-ip-10-0-35-213.us-west-2.compute.internal ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:14 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:08 -0700 PDT }]
Jul 9 19:16:32.747: INFO: prometheus-0 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:04 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:40 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:04 -0700 PDT }]
Jul 9 19:16:32.747: INFO: tectonic-network-operator-jwwmp ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:13 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }]
Jul 9 19:16:32.747: INFO: tectonic-node-controller-2ctqd ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:05 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:08 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:05 -0700 PDT }]
Jul 9 19:16:32.747: INFO: tectonic-node-controller-operator-648b9f5d6d-nxvlb ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:14 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:16:32.747: INFO: webconsole-6698d4fbbc-rgsw2 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:44 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:16:32.747: INFO: default-http-backend-6985d557bb-8h44n ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:38 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:16:32.747: INFO: router-6796c95fdf-2k4wk ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:37 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:16:32.747: INFO: tectonic-ingress-controller-operator-78c4d6b9fc-bknh9 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:46 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT }]
Jul 9 19:16:32.747: INFO: directory-sync-d84d84d9f-j7pr6 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:16:32.747: INFO: kube-addon-operator-675f99d7f8-c6pdt ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:29 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }]
Jul 9 19:16:32.747: INFO: tectonic-alm-operator-79b6996f74-prs9h ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:35 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }]
Jul 9 19:16:32.747: INFO: tectonic-channel-operator-5d878cd785-l66n4 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }]
Jul 9 19:16:32.747: INFO: tectonic-clu-6b8d87785f-fswbx ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:06 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:11 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:06 -0700 PDT }]
Jul 9 19:16:32.747: INFO: tectonic-node-agent-r77mj ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:20 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 18:37:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:20 -0700 PDT }]
Jul 9 19:16:32.747: INFO: tectonic-node-agent-rrwlg ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 18:12:57 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }]
Jul 9 19:16:32.747: INFO: tectonic-stats-emitter-d87f669fd-988nl ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:29 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:36 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:23 -0700 PDT }]
Jul 9 19:16:32.747: INFO: tectonic-utility-operator-786b69fc8b-4xffz ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:41 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT }]
Jul 9 19:16:32.747: INFO:
Jul 9 19:16:32.800: INFO:
Logging node info for node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:16:32.837: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-130-54.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-130-54.us-west-2.compute.internal,UID:2f71bed0-83b7-11e8-84c6-0af96768d57e,ResourceVersion:76056,Generation:0,CreationTimestamp:2018-07-09 13:32:23 -0700 PDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2b,kubernetes.io/hostname: ip-10-0-130-54,node-role.kubernetes.io/worker: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"3e:08:91:8f:b9:a5"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.130.54,node-configuration.v1.coreos.com/currentConfig: worker-2650561509,node-configuration.v1.coreos.com/desiredConfig: worker-2650561509,node-configuration.v1.coreos.com/targetConfig: worker-2650561509,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.2.0/24,ExternalID:,ProviderID:aws:///us-west-2b/i-0cb9cec2620663d39,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8365150208 0} {<nil>} 8169092Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8260292608 0} {<nil>} 8066692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:16:26 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:16:26 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:16:26 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:16:26 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:16:26 -0700 PDT 2018-07-09 13:33:23 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.130.54} {InternalDNS ip-10-0-130-54.us-west-2.compute.internal} {Hostname ip-10-0-130-54}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC283016-6CE7-ACE7-0F9A-02CE10505945,BootID:cfad64a2-03d7-403a-bd51-76866880a650,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[openshift/origin-haproxy-router@sha256:f0a71ada9e9ee48529540c2d4938b9caa55f9a0ac8a3be598e269ca5cebf70c0 openshift/origin-haproxy-router:v3.10.0-alpha.0] 1284960820} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[docker-registry.default.svc:5000/e2e-test-build-valuefrom-lxhjr/test@sha256:6daa01a6f7f0784905bf9dcbce49826d73d7c3c1d62a802f875ee7c10db02960 docker-registry.default.svc:5000/e2e-test-build-valuefrom-lxhjr/test:latest] 613134454} {[docker-registry.default.svc:5000/e2e-test-build-valuefrom-vmtzs/test@sha256:92c5e723d97318711a71afb9ee5c12c3c48b98d0f2aaa5e954095fabbcb505ee docker-registry.default.svc:5000/e2e-test-build-valuefrom-vmtzs/test:latest] 613133841} {[docker-registry.default.svc:5000/e2e-test-templates-l8j5q/cakephp-mysql-example@sha256:02d80c750d1e71afc7792f55f935c3dd6cde1788bee2b53ab554d29c903ca064 docker-registry.default.svc:5000/e2e-test-templates-l8j5q/cakephp-mysql-example:latest] 603384691} {[docker-registry.default.svc:5000/openshift/php@sha256:164fa6fc06cdd98c98be09ecff5398feef3d479652899f983cf41e644d604ff0 centos/php-70-centos7@sha256:164fa6fc06cdd98c98be09ecff5398feef3d479652899f983cf41e644d604ff0 centos/php-70-centos7:latest] 589408618} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejspass@sha256:880359284c1e0933fe5f2db29b8c4d948b70da3dfb26a0462f68b23397740b0a docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejspass:latest] 568094192} {[docker-registry.default.svc:5000/openshift/php@sha256:59c3d53372cd7097494187f5a58bab58a1d956a340b70a23c84a0d000a565cbe] 567254500} {[docker-registry.default.svc:5000/e2e-test-build-timing-bl897/test@sha256:539e80a4de02794f6126cffce75562bcb721041c6d443c5ced15ba286d70e229 docker-registry.default.svc:5000/e2e-test-build-timing-bl897/test:latest] 566117187} {[docker-registry.default.svc:5000/e2e-test-build-timing-wkh4w/test@sha256:e0eeef684e9de55219871fa9e360d73a1163cfc407c626eade862cbee5a9bbc5 docker-registry.default.svc:5000/e2e-test-build-timing-wkh4w/test:latest] 566117040} {[centos/ruby-22-centos7@sha256:a18c8706118a5c4c9f1adf045024d2abf06ba632b5674b23421019ee4d3edcae centos/ruby-22-centos7:latest] 566117040} {[centos/nodejs-6-centos7@sha256:b2867b5008d9e975b3d4710ec0f31cdc96b079b83334b17e03a60602a7a590fc] 560696751} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejsroot@sha256:0397f7e12d87d62c539356a4936348d0a8deb40e1b5e970cdd1744d3e6ffa05a docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejsroot:latest] 560696751} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-phq84/nodejsroot@sha256:4084131a9910c10780186608faf5a9643de0f18d09c27fe828499a8d180abfba docker-registry.default.svc:5000/e2e-test-s2i-build-root-phq84/nodejsroot:latest] 560696751} {[docker-registry.default.svc:5000/openshift/ruby@sha256:2e83b9e07e85960060096b6aff7ee202a5f52e0e18447641b080b1f3879e0901] 536571487} {[docker-registry.default.svc:5000/openshift/ruby@sha256:8f00b7a5789887b72db0415355830c87e18804b774a922a424736f5237a44933] 518934530} {[docker-registry.default.svc:5000/e2e-test-new-app-xn8nh/a234567890123456789012345678901234567890123456789012345678@sha256:a9ecb5931f283c598dcaf3aca9025599eb71115bd0f2cd0f1989a9f37394efad docker-registry.default.svc:5000/e2e-test-new-app-xn8nh/a234567890123456789012345678901234567890123456789012345678:latest] 511744495} {[docker-registry.default.svc:5000/e2e-test-prometheus-4fqst/origin-nodejs-sample@sha256:95a78c60dc1709c2212cd8cc48cd3fffe6cdcdd847674497d9aa5d7891551699 docker-registry.default.svc:5000/e2e-test-prometheus-4fqst/origin-nodejs-sample:latest] 511744370} {[docker-registry.default.svc:5000/openshift/mongodb@sha256:3bb2aed7578ab5b6ba2bf22993df3c73ef91bdb02e273cc0ce8e529de7ee5660] 506453985} {[docker-registry.default.svc:5000/openshift/ruby@sha256:0eaaed9fae1b0d9bc8ed73b93d581c6ab019a92277484c9acf52fa60b3269a7c] 504578679} {[docker-registry.default.svc:5000/openshift/nodejs@sha256:3f262ed6ed7ec7497576e1117ddfafacaa23fecf34693e05a0abcb652b93e653 centos/nodejs-8-centos7@sha256:3f262ed6ed7ec7497576e1117ddfafacaa23fecf34693e05a0abcb652b93e653] 504452018} {[docker-registry.default.svc:5000/openshift/mongodb@sha256:896482969cd659b419bc444c153a74d11820655c7ed19b5094b8eb041f0065d6] 487132847} {[openshift/origin-docker-builder@sha256:4fe8032f87d2f8485a711ec60a9ffb330e42a6cd8d232ad3cf63c42471cfab29 openshift/origin-docker-builder:latest] 447580928} {[docker-registry.default.svc:5000/openshift/mysql@sha256:d03537ef57d51b13e6ad4a73a382ca180a0e02d975c8237790410f45865aae3c] 429435940} {[openshift/origin-haproxy-router@sha256:485fa86ac97b0d289411b3216fb8970989cd580817ebb5fcbb0f83a6dc2466f5 openshift/origin-haproxy-router:latest] 394965919} {[openshift/origin-deployer@sha256:1295e5be56fc03d4c482194378a882f2e96a8d23eadaf6dd32d603d3e877df99 openshift/origin-deployer:latest] 371674595} {[openshift/origin-web-console@sha256:d2cbbb533d26996226add8cb327cb2060e7a03c6aa96ad94cd236d4064c094ce openshift/origin-web-console:latest] 336636057} {[openshift/prometheus@sha256:35e2e0efc874c055be60a025874256816c98b9cebc10f259d7fb806bbe68badf openshift/prometheus:v2.2.1] 317896379} {[openshift/origin-docker-registry@sha256:c40ebb707721327c3b9c79f0e8e7f02483f034355d4149479333cc134b72967c openshift/origin-docker-registry:latest] 302637209} {[openshift/origin-pod@sha256:8fbd41f21824f5981716568790c5f78a4710bb0709ce9c473eb21ad2fbc5e877 openshift/origin-pod:latest] 251747200} {[openshift/origin-base@sha256:43dd97db435025eee02606658cfcccbc0a8ac4135e0d8870e91930d6cab8d1fd openshift/origin-base:latest] 228695137} {[openshift/oauth-proxy@sha256:4b73830ee6f7447d0921eedc3946de50016eb8f048d66ea3969abc4116f1e42a openshift/oauth-proxy:v1.0.0] 228241928} {[openshift/prometheus-alertmanager@sha256:35443abf6c5cf99b080307fe0f98098334f299780537a3e61ac5604cbfe48f7e openshift/prometheus-alertmanager:v0.14.0] 221857684} {[openshift/prometheus-alert-buffer@sha256:076f8dd576806f5c2dde7e536d020c31aa7d2ec7dcea52da6cbb944895def7ba openshift/prometheus-alert-buffer:v0.0.2] 200521084} {[docker-registry.default.svc:5000/e2e-test-build-multistage-9cdjf/multi-stage@sha256:df3e69e3fe1bc86897717b020b6caa000f1f97c14dc0b3853ca0d7149412da54 docker-registry.default.svc:5000/e2e-test-build-multistage-9cdjf/multi-stage:v1] 199835207} {[centos@sha256:b67d21dfe609ddacf404589e04631d90a342921e81c40aeaf3391f6717fa5322 centos@sha256:eed5b251b615d1e70b10bcec578d64e8aa839d2785c2ffd5424e472818c42755 centos:7 centos:centos7] 199678471} {[docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-s4b2n/image1@sha256:3967cd8851952bbba0b3a4d9c038f36dc5001463c8521d6955ab0f3f4598d779 docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-s4b2n/image1:latest] 199678471} {[k8s.gcr.io/nginx-slim-amd64@sha256:6654db6d4028756062edac466454ee5c9cf9b20ef79e35a81e3c840031eb1e2b k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[gcr.io/google_containers/metrics-server-amd64@sha256:54d2cf293e01f72d9be0e7c4f2c98e31f599088a9426a6415fe62426d446f5b2 gcr.io/google_containers/metrics-server-amd64:v0.2.0] 96501893} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/directory-sync@sha256:e5e7fe901868853d89c2c0697cc88f0686c6ba1178ca045ec57bfd18e7000048 quay.io/coreos/directory-sync:v0.0.2] 38433928} {[quay.io/coreos/tectonic-node-agent@sha256:1eb929223a1ecc12b246cbe4ba366092b139067aadb809fac8189b8ca274fab4 quay.io/coreos/tectonic-node-agent:92fb13f8b2a46702b8c10f278408f37d825ee2cb] 36791814} {[k8s.gcr.io/addon-resizer@sha256:d00afd42fc267fa3275a541083cfe67d160f966c788174b44597434760e1e1eb k8s.gcr.io/addon-resizer:2.1] 26450138} {[quay.io/coreos/tectonic-error-server@sha256:aefa0a012e103bee299c17e798e5830128588b6ef5d4d1f6bc8ae5804bc4d8cd quay.io/coreos/tectonic-error-server:1.1] 12714516} {[gcr.io/google_containers/dnsutils@sha256:cd9182f6d74e616942db1cef6f25e1e54b49ba0330c2e19d3ec061f027666cc0 gcr.io/google_containers/dnsutils:e2e] 8897789} {[gcr.io/kubernetes-e2e-test-images/hostexec-amd64@sha256:bdaecec5adfa7c79e9525c0992fdab36c2d68066f5e91eff0d1d9e8d73c654ea gcr.io/kubernetes-e2e-test-images/hostexec-amd64:1.1] 8407119} {[gcr.io/kubernetes-e2e-test-images/netexec-amd64@sha256:2edfad424a541b9e024f26368d3a5b7dcc1d7cd27a4ee8c1d8c3f81d9209ab2e gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0] 6227659} {[openshift/hello-openshift@sha256:aaea76ff622d2f8bcb32e538e7b3cd0ef6d291953f3e7c9f556c1ba5baf47e2e openshift/hello-openshift:latest] 6089990}],VolumesInUse:[],VolumesAttached:[],},}
Jul 9 19:16:32.837: INFO:
Logging kubelet events for node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:16:32.875: INFO:
Logging pods the kubelet thinks are on node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:16:32.991: INFO: kube-flannel-xcck7 started at 2018-07-09 13:32:23 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:16:32.991: INFO: Container install-cni ready: true, restart count 0
Jul 9 19:16:32.991: INFO: Container kube-flannel ready: true, restart count 0
Jul 9 19:16:32.991: INFO: metrics-server-5767bfc576-gfbwb started at 2018-07-09 13:33:23 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:16:32.991: INFO: Container metrics-server ready: true, restart count 0
Jul 9 19:16:32.991: INFO: Container metrics-server-nanny ready: true, restart count 0
Jul 9 19:16:32.991: INFO: execpod98j4h started at 2018-07-09 19:14:15 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:16:32.991: INFO: Container exec ready: true, restart count 0
Jul 9 19:16:32.991: INFO: tectonic-node-agent-rrwlg started at 2018-07-09 13:32:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:16:32.991: INFO: Container node-agent ready: true, restart count 3
Jul 9 19:16:32.991: INFO: directory-sync-d84d84d9f-j7pr6 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:16:32.991: INFO: Container directory-sync ready: true, restart count 0
Jul 9 19:16:32.991: INFO: webconsole-6698d4fbbc-rgsw2 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:16:32.991: INFO: Container webconsole ready: true, restart count 0
Jul 9 19:16:32.991: INFO: default-http-backend-6985d557bb-8h44n started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:16:32.991: INFO: Container default-http-backend ready: true, restart count 0
Jul 9 19:16:32.991: INFO: router-6796c95fdf-2k4wk started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:16:32.991: INFO: Container router ready: true, restart count 0
Jul 9 19:16:32.991: INFO: deployment-simple-3-htj8x started at 2018-07-09 19:15:45 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:16:32.991: INFO: Container myapp ready: true, restart count 0
Jul 9 19:16:32.991: INFO: frontend-1-build started at 2018-07-09 19:14:21 -0700 PDT (2+1 container statuses recorded)
Jul 9 19:16:32.991: INFO: Init container git-clone ready: true, restart count 0
Jul 9 19:16:32.991: INFO: Init container manage-dockerfile ready: true, restart count 0
Jul 9 19:16:32.991: INFO: Container sti-build ready: false, restart count 0
Jul 9 19:16:32.991: INFO: kube-proxy-5td7p started at 2018-07-09 13:32:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:16:32.991: INFO: Container kube-proxy ready: true, restart count 0
Jul 9 19:16:32.991: INFO: registry-6559c8c4db-45526 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:16:32.991: INFO: Container registry ready: true, restart count 0
Jul 9 19:16:32.991: INFO: deployment-simple-2-x8xwq started at 2018-07-09 19:15:31 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:16:32.991: INFO: Container myapp ready: false, restart count 0
Jul 9 19:16:32.991: INFO: prometheus-0 started at 2018-07-09 13:50:04 -0700 PDT (0+6 container statuses recorded)
Jul 9 19:16:32.991: INFO: Container alert-buffer ready: true, restart count 0
Jul 9 19:16:32.991: INFO: Container alertmanager ready: true, restart count 0
Jul 9 19:16:32.991: INFO: Container alertmanager-proxy ready: true, restart count 0
Jul 9 19:16:32.992: INFO: Container alerts-proxy ready: true, restart count 0
Jul 9 19:16:32.992: INFO: Container prom-proxy ready: true, restart count 0
Jul 9 19:16:32.992: INFO: Container prometheus ready: true, restart count 0
Jul 9 19:16:32.992: INFO: sysctl-433830bf-83e7-11e8-992b-28d244b00276 started at 2018-07-09 19:16:32 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:16:32.992: INFO: Container test-container ready: false, restart count 0
W0709 19:16:33.039551 11713 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 9 19:16:33.169: INFO:
Latency metrics for node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:16:33.169: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:39.952369s}
Jul 9 19:16:33.169: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:30.138495s}
Jul 9 19:16:33.169: INFO: {Operation:pull_image Method:docker_operations_latency_microseconds Quantile:0.99 Latency:18.561353s}
Jul 9 19:16:33.169: INFO:
Logging node info for node ip-10-0-141-201.us-west-2.compute.internal
Jul 9 19:16:33.236: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-141-201.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-141-201.us-west-2.compute.internal,UID:ab76db34-83b4-11e8-8888-0af96768d57e,ResourceVersion:76069,Generation:0,CreationTimestamp:2018-07-09 13:14:22 -0700 PDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2b,kubernetes.io/hostname: ip-10-0-141-201,node-role.kubernetes.io/etcd: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"b6:11:a8:d0:6d:85"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.141.201,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.1.0/24,ExternalID:,ProviderID:aws:///us-west-2b/i-03457d640f9c71dd1,Unschedulable:false,Taints:[{node-role.kubernetes.io/etcd NoSchedule <nil>}],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8365146112 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8260288512 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:16:30 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:16:30 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:16:30 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:16:30 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:16:30 -0700 PDT 2018-07-09 13:16:04 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.141.201} {InternalDNS ip-10-0-141-201.us-west-2.compute.internal} {Hostname ip-10-0-141-201}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC2F6BCA-4D59-F6AA-8C7B-027F94D52D78,BootID:92773d40-1311-4ad5-b294-38db65faf16c,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/etcd@sha256:688e6c102955fe927c34db97e6352d0e0962554735b2db5f2f66f3f94cfe8fd1 quay.io/coreos/etcd:v3.2.14] 37228348} {[quay.io/coreos/kube-client-agent@sha256:8564ab65bcb1064006d2fc9c6e32a5ca3f4326cdd2da9a2efc4fb7cc0e0b6041 quay.io/coreos/kube-client-agent:678cc8e6841e2121ebfdb6e2db568fce290b67d6] 33236131} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],},}
Jul 9 19:16:33.236: INFO:
Logging kubelet events for node ip-10-0-141-201.us-west-2.compute.internal
Jul 9 19:16:33.286: INFO:
Logging pods the kubelet thinks are on node ip-10-0-141-201.us-west-2.compute.internal
Jul 9 19:17:03.379: INFO: Unable to retrieve kubelet pods for node ip-10-0-141-201.us-west-2.compute.internal: the server is currently unable to handle the request (get nodes ip-10-0-141-201.us-west-2.compute.internal:10250)
Jul 9 19:17:03.379: INFO:
Logging node info for node ip-10-0-35-213.us-west-2.compute.internal
Jul 9 19:17:03.416: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-35-213.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-35-213.us-west-2.compute.internal,UID:a83cf873-83b4-11e8-8888-0af96768d57e,ResourceVersion:76531,Generation:0,CreationTimestamp:2018-07-09 13:14:17 -0700 PDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2c,kubernetes.io/hostname: ip-10-0-35-213,node-role.kubernetes.io/master: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"5e:08:be:54:0d:9f"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.35.213,node-configuration.v1.coreos.com/currentConfig: master-2063737633,node-configuration.v1.coreos.com/desiredConfig: master-2063737633,node-configuration.v1.coreos.com/targetConfig: master-2063737633,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.0.0/24,ExternalID:,ProviderID:aws:///us-west-2c/i-0e1d36783c9705b28,Unschedulable:false,Taints:[{node-role.kubernetes.io/master NoSchedule <nil>}],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8365146112 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8260288512 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:16:59 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:16:59 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:16:59 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:16:59 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:16:59 -0700 PDT 2018-07-09 13:16:08 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.35.213} {ExternalIP 34.220.249.237} {InternalDNS ip-10-0-35-213.us-west-2.compute.internal} {ExternalDNS ec2-34-220-249-237.us-west-2.compute.amazonaws.com} {Hostname ip-10-0-35-213}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC2ED297-E036-AA0D-C4ED-9057B3EA9001,BootID:7f784e0b-09a6-495a-b787-3d8619214f8a,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[openshift/origin-hypershift@sha256:3b26011ae771a6036a7533d970052be5c04bc1f6e6812314ffefd902f40910fd openshift/origin-hypershift:latest] 518022163} {[openshift/origin-hyperkube@sha256:11a08060b48d226d64d4bb5234f2386bf22472a0835c5b91f0fb0db25b0a7e19 openshift/origin-hyperkube:latest] 498702039} {[quay.io/coreos/awscli@sha256:1d6ea2f37c248a4f4f2a70126f0b8555fd0804d4e65af3b30c3a949247ea13a6 quay.io/coreos/awscli:025a357f05242fdad6a81e8a6b520098aa65a600] 97521631} {[quay.io/coreos/bootkube@sha256:63afddd30deedff273d65607f4fcf0b331f4418838a00c69b6ab7a5754a24f5a quay.io/coreos/bootkube:v0.10.0] 84921995} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[gcr.io/google_containers/k8s-dns-kube-dns-amd64@sha256:6d8e0da4fb46e9ea2034a3f4cab0e095618a2ead78720c12e791342738e5f85d gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.8] 50456751} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/tectonic-stats@sha256:e800fe60dd1a0f89f8ae85caae9209201254e17d889d664d633ed08e274e2a39 quay.io/coreos/tectonic-stats:6e882361357fe4b773adbf279cddf48cb50164c1] 48779830} {[quay.io/coreos/pod-checkpointer@sha256:1e1e48228f872d56c8a57a5e12adb5239ae9e6206536baf2904e4bf03314c8e8 quay.io/coreos/pod-checkpointer:9dc83e1ab3bc36ca25c9f7c18ddef1b91d4a0558] 47992230} {[quay.io/coreos/tectonic-network-operator-dev@sha256:e29d797f5740cf6f5c0ccc0de2b3e606d187acbdc0bb79a4397c058d8840c8fe quay.io/coreos/tectonic-network-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44068170} {[quay.io/coreos/tectonic-node-controller-operator-dev@sha256:7a31568c6c2e398cffa7e8387cf51543e3bf1f01b4a050a5d00a9b593c3dace0 quay.io/coreos/tectonic-node-controller-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44053165} {[quay.io/coreos/kube-addon-operator-dev@sha256:e327727a93813c31f6d65f76f2998722754b8ccb5110949153e55f2adbc2374e quay.io/coreos/kube-addon-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44052211} {[quay.io/coreos/tectonic-utility-operator-dev@sha256:4fb4de52c7aa64ce124e1bf73fb27989356c414101ecc19ca4ec9ab80e00a88d quay.io/coreos/tectonic-utility-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 43818409} {[quay.io/coreos/tectonic-ingress-controller-operator-dev@sha256:5e96253c8fe8357473d4806b116fcf03fe18dcad466a88083f9b9310045821f1 quay.io/coreos/tectonic-ingress-controller-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 43808038} {[quay.io/coreos/tectonic-alm-operator@sha256:ce32e6d4745040be8807d09eb925b2b076b60fb0a93e33302b74a5cc8f294ca5 quay.io/coreos/tectonic-alm-operator:v0.3.1] 43202998} {[gcr.io/google_containers/k8s-dns-sidecar-amd64@sha256:23df717980b4aa08d2da6c4cfa327f1b730d92ec9cf740959d2d5911830d82fb gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.8] 42210862} {[gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64@sha256:93c827f018cf3322f1ff2aa80324a0306048b0a69bc274e423071fb0d2d29d8b gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.8] 40951779} {[quay.io/coreos/kube-core-operator-dev@sha256:6cc0dd2405f19014b41a0eed57c39160aeb92c2380ac8f8a067ce7dee476cba2 quay.io/coreos/kube-core-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 40849618} {[quay.io/coreos/tectonic-channel-operator-dev@sha256:6eeb84c385333755a2189c199587bc26db6c5d897e1962d7e1047dec2531e85e quay.io/coreos/tectonic-channel-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 40523592} {[quay.io/coreos/etcd@sha256:688e6c102955fe927c34db97e6352d0e0962554735b2db5f2f66f3f94cfe8fd1 quay.io/coreos/etcd:v3.2.14] 37228348} {[quay.io/coreos/tectonic-node-agent@sha256:1eb929223a1ecc12b246cbe4ba366092b139067aadb809fac8189b8ca274fab4 quay.io/coreos/tectonic-node-agent:92fb13f8b2a46702b8c10f278408f37d825ee2cb] 36791814} {[quay.io/coreos/kube-core-renderer-dev@sha256:a595dfe57b7992971563fcea8ac1858c306529a465f9b690911f4220d93d3c5c quay.io/coreos/kube-core-renderer-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 36535818} {[quay.io/coreos/kube-etcd-signer-server@sha256:c4c0becf6779523af5b644b53375d61bed9c4688d496cb2f88d4f08024ac5390 quay.io/coreos/kube-etcd-signer-server:678cc8e6841e2121ebfdb6e2db568fce290b67d6] 34655544} {[quay.io/coreos/tectonic-node-controller-dev@sha256:c9c17f7c4c738e519e36224ae8c71d3a881b92ffb86fdb75f358efebafa27d84 quay.io/coreos/tectonic-node-controller-dev:a437848532713f2fa4137e9a0f4f6a689cf554a8] 25570332} {[quay.io/coreos/tectonic-clu@sha256:4e6a907a433e741632c8f9a7d9d9009bc08ac494dce05e0a19f8fa0a440a3926 quay.io/coreos/tectonic-clu:v0.0.1] 5081911} {[quay.io/coreos/tectonic-stats-extender@sha256:6e7fe41ca2d63791c08d2cc4b4311d9e01b37fa3dc116d3e77e7306cbe29a0f1 quay.io/coreos/tectonic-stats-extender:487b3da4e175da96dabfb44fba65cdb8b823db2e] 2818916} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],},}
Jul 9 19:17:03.417: INFO:
Logging kubelet events for node ip-10-0-35-213.us-west-2.compute.internal
Jul 9 19:17:03.452: INFO:
Logging pods the kubelet thinks are on node ip-10-0-35-213.us-west-2.compute.internal
Jul 9 19:17:03.641: INFO: kube-apiserver-cn2ps started at 2018-07-09 13:14:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:03.641: INFO: Container kube-apiserver ready: true, restart count 4
Jul 9 19:17:03.641: INFO: tectonic-node-controller-2ctqd started at 2018-07-09 13:18:05 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:03.641: INFO: Container tectonic-node-controller ready: true, restart count 0
Jul 9 19:17:03.641: INFO: tectonic-alm-operator-79b6996f74-prs9h started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:03.641: INFO: Container tectonic-alm-operator ready: true, restart count 0
Jul 9 19:17:03.641: INFO: tectonic-ingress-controller-operator-78c4d6b9fc-bknh9 started at 2018-07-09 13:18:13 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:03.641: INFO: Container tectonic-ingress-controller-operator ready: true, restart count 0
Jul 9 19:17:03.641: INFO: tectonic-node-agent-r77mj started at 2018-07-09 13:19:20 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:03.641: INFO: Container node-agent ready: true, restart count 4
Jul 9 19:17:03.642: INFO: pod-checkpointer-4882g started at 2018-07-09 13:14:24 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:03.642: INFO: Container pod-checkpointer ready: true, restart count 0
Jul 9 19:17:03.642: INFO: kube-flannel-m5wph started at 2018-07-09 13:15:39 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:17:03.642: INFO: Container install-cni ready: true, restart count 0
Jul 9 19:17:03.642: INFO: Container kube-flannel ready: true, restart count 0
Jul 9 19:17:03.642: INFO: openshift-controller-manager-99d6586b-qq685 started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:03.642: INFO: Container openshift-controller-manager ready: true, restart count 3
Jul 9 19:17:03.642: INFO: tectonic-node-controller-operator-648b9f5d6d-nxvlb started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:03.642: INFO: Container tectonic-node-controller-operator ready: true, restart count 0
Jul 9 19:17:03.642: INFO: kube-core-operator-75d546fbbb-c7ctx started at 2018-07-09 13:18:11 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:03.642: INFO: Container kube-core-operator ready: true, restart count 0
Jul 9 19:17:03.642: INFO: tectonic-utility-operator-786b69fc8b-4xffz started at 2018-07-09 13:18:13 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:03.642: INFO: Container tectonic-utility-operator ready: true, restart count 0
Jul 9 19:17:03.642: INFO: kube-addon-operator-675f99d7f8-c6pdt started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:03.642: INFO: Container kube-addon-operator ready: true, restart count 0
Jul 9 19:17:03.642: INFO: pod-checkpointer-4882g-ip-10-0-35-213.us-west-2.compute.internal started at <nil> (0+0 container statuses recorded)
Jul 9 19:17:03.642: INFO: kube-controller-manager-558dc6fb98-q6vr5 started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:03.642: INFO: Container kube-controller-manager ready: true, restart count 1
Jul 9 19:17:03.642: INFO: tectonic-stats-emitter-d87f669fd-988nl started at 2018-07-09 13:19:23 -0700 PDT (1+2 container statuses recorded)
Jul 9 19:17:03.642: INFO: Init container tectonic-stats-extender-init ready: true, restart count 0
Jul 9 19:17:03.642: INFO: Container tectonic-stats-emitter ready: true, restart count 0
Jul 9 19:17:03.642: INFO: Container tectonic-stats-extender ready: true, restart count 0
Jul 9 19:17:03.642: INFO: tectonic-channel-operator-5d878cd785-l66n4 started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:03.642: INFO: Container tectonic-channel-operator ready: true, restart count 0
Jul 9 19:17:03.642: INFO: kube-proxy-l2cnn started at 2018-07-09 13:14:22 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:03.642: INFO: Container kube-proxy ready: true, restart count 0
Jul 9 19:17:03.642: INFO: openshift-apiserver-rkms5 started at 2018-07-09 13:14:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:03.642: INFO: Container openshift-apiserver ready: true, restart count 0
Jul 9 19:17:03.642: INFO: tectonic-network-operator-jwwmp started at 2018-07-09 13:14:24 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:03.642: INFO: Container tectonic-network-operator ready: true, restart count 0
Jul 9 19:17:03.642: INFO: kube-dns-787c975867-txmxv started at 2018-07-09 13:16:08 -0700 PDT (0+3 container statuses recorded)
Jul 9 19:17:03.642: INFO: Container dnsmasq ready: true, restart count 0
Jul 9 19:17:03.642: INFO: Container kubedns ready: true, restart count 0
Jul 9 19:17:03.642: INFO: Container sidecar ready: true, restart count 0
Jul 9 19:17:03.642: INFO: kube-scheduler-68f8875b5c-s5tdr started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:03.642: INFO: Container kube-scheduler ready: true, restart count 0
Jul 9 19:17:03.642: INFO: tectonic-clu-6b8d87785f-fswbx started at 2018-07-09 13:19:06 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:03.642: INFO: Container tectonic-clu ready: true, restart count 0
W0709 19:17:03.683471 11713 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 9 19:17:03.841: INFO:
Latency metrics for node ip-10-0-35-213.us-west-2.compute.internal
STEP: Dumping a list of prepulled images on each node...
Jul 9 19:17:03.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sysctl-vp8nj" for this suite.
Jul 9 19:17:10.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:17:13.714: INFO: namespace: e2e-tests-sysctl-vp8nj, resource: bindings, ignored listing per whitelist
Jul 9 19:17:14.686: INFO: namespace e2e-tests-sysctl-vp8nj deletion completed in 10.762470175s
• Failure [45.411 seconds]
[k8s.io] Sysctls
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should reject invalid sysctls [Suite:openshift/conformance/parallel] [Suite:k8s] [It]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:142
Expected
<nil>: nil
not to be nil
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:177
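
Editor's note: the "Expected <nil>: nil not to be nil" message above is Gomega's way of saying that a create which was expected to fail returned no error. A minimal sketch of the pattern this spec exercises, assuming the 1.11-era PodSecurityContext.Sysctls field (the vendored 1.10 suite may carry sysctls in pod annotations instead); the client and namespace are placeholders, not the suite's code:

    package sketch

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // rejectInvalidSysctl submits a pod whose sysctl name is malformed and
    // treats a nil error as the failure seen in the log above.
    func rejectInvalidSysctl(c kubernetes.Interface, ns string) error {
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "sysctl-reject-"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                SecurityContext: &v1.PodSecurityContext{
                    // "foo-" is not a valid sysctl name, so validation
                    // should refuse the pod at creation time.
                    Sysctls: []v1.Sysctl{{Name: "foo-", Value: "bar"}},
                },
                Containers: []v1.Container{{Name: "test", Image: "busybox"}},
            },
        }
        if _, err := c.CoreV1().Pods(ns).Create(pod); err == nil {
            return fmt.Errorf("expected invalid sysctl to be rejected, but create succeeded")
        }
        return nil
    }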
------------------------------
S
------------------------------
[Conformance][templates] templateinstance impersonation tests
should pass impersonation creation tests [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:231
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][templates] templateinstance impersonation tests
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:17:05.033: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][templates] templateinstance impersonation tests
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:17:06.734: INFO: configPath is now "/tmp/e2e-test-templates-6rmnp-user.kubeconfig"
Jul 9 19:17:06.734: INFO: The user is now "e2e-test-templates-6rmnp-user"
Jul 9 19:17:06.734: INFO: Creating project "e2e-test-templates-6rmnp"
Jul 9 19:17:06.939: INFO: Waiting on permissions in project "e2e-test-templates-6rmnp" ...
[BeforeEach] [Conformance][templates] templateinstance impersonation tests
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:57
Jul 9 19:17:08.098: INFO: configPath is now "/tmp/e2e-test-templates-6rmnp-adminuser.kubeconfig"
Jul 9 19:17:08.432: INFO: configPath is now "/tmp/e2e-test-templates-6rmnp-impersonateuser.kubeconfig"
Jul 9 19:17:08.778: INFO: configPath is now "/tmp/e2e-test-templates-6rmnp-impersonatebygroupuser.kubeconfig"
Jul 9 19:17:09.017: INFO: configPath is now "/tmp/e2e-test-templates-6rmnp-edituser1.kubeconfig"
Jul 9 19:17:09.331: INFO: configPath is now "/tmp/e2e-test-templates-6rmnp-edituser2.kubeconfig"
Jul 9 19:17:09.592: INFO: configPath is now "/tmp/e2e-test-templates-6rmnp-viewuser.kubeconfig"
Jul 9 19:17:09.870: INFO: configPath is now "/tmp/e2e-test-templates-6rmnp-impersonatebygroupuser.kubeconfig"
[It] should pass impersonation creation tests [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:231
STEP: testing as system:admin user
STEP: testing as e2e-test-templates-6rmnp-adminuser user
Jul 9 19:17:10.199: INFO: configPath is now "/tmp/e2e-test-templates-6rmnp-adminuser.kubeconfig"
[AfterEach] [Conformance][templates] templateinstance impersonation tests
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:17:10.333: INFO: namespace : e2e-test-templates-6rmnp api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][templates] templateinstance impersonation tests
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Dumping a list of prepulled images on each node...
Jul 9 19:17:16.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [Conformance][templates] templateinstance impersonation tests
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:221
• Failure [11.727 seconds]
[Conformance][templates] templateinstance impersonation tests
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:27
should pass impersonation creation tests [Suite:openshift/conformance/parallel] [It]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:231
Expected an error to have occurred. Got:
<nil>: nil
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:241
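
Editor's note: here the assertion is inverted — the test creates a TemplateInstance whose spec.requester names a different user and expects the API to refuse it for callers lacking impersonation rights, but the create came back nil. A hedged sketch of that request shape (client wiring and the template value are placeholders, not the suite's fixture):

    package sketch

    import (
        templatev1 "github.com/openshift/api/template/v1"
        templateclient "github.com/openshift/client-go/template/clientset/versioned"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // tryImpersonatedCreate asks the template API to instantiate a template on
    // behalf of another user; without the "impersonate" permission this should
    // come back Forbidden, which is the error the failing assertion expected.
    func tryImpersonatedCreate(c templateclient.Interface, ns string, t templatev1.Template) error {
        ti := &templatev1.TemplateInstance{
            ObjectMeta: metav1.ObjectMeta{Name: "impersonation-check"},
            Spec: templatev1.TemplateInstanceSpec{
                Template:  t,
                Requester: &templatev1.TemplateInstanceRequester{Username: "someone-else"},
            },
        }
        _, err := c.TemplateV1().TemplateInstances(ns).Create(ti)
        return err // expected non-nil (Forbidden) by the assertion above
    }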
------------------------------
[Conformance][templates] templateservicebroker security test
should pass security tests [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:164
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][templates] templateservicebroker security test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:17:16.761: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][templates] templateservicebroker security test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:17:18.405: INFO: configPath is now "/tmp/e2e-test-templates-m7s7v-user.kubeconfig"
Jul 9 19:17:18.405: INFO: The user is now "e2e-test-templates-m7s7v-user"
Jul 9 19:17:18.405: INFO: Creating project "e2e-test-templates-m7s7v"
Jul 9 19:17:18.548: INFO: Waiting on permissions in project "e2e-test-templates-m7s7v" ...
[BeforeEach] [Conformance][templates] templateservicebroker security test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:45
[AfterEach] [Conformance][templates] templateservicebroker security test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:17:18.846: INFO: namespace : e2e-test-templates-m7s7v api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][templates] templateservicebroker security test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Dumping a list of prepulled images on each node...
Jul 9 19:17:24.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [Conformance][templates] templateservicebroker security test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:78
• Failure in Spec Setup (BeforeEach) [8.202 seconds]
[Conformance][templates] templateservicebroker security test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:28
[BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:150
should pass security tests [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:164
Expected error:
<*errors.StatusError | 0xc42182d170>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
Status: "Failure",
Message: "services \"apiserver\" not found",
Reason: "NotFound",
Details: {Name: "apiserver", Group: "", Kind: "services", UID: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
services "apiserver" not found
not to have occurred
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:52
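
Editor's note: the error object above is a plain client-go StatusError with Reason NotFound — the suite's BeforeEach looks up the template service broker's "apiserver" Service, and this cluster simply does not run one. A minimal sketch of that lookup and the idiomatic NotFound check; the namespace argument is an assumption:

    package sketch

    import (
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // brokerService fetches the broker's Service and distinguishes "not
    // installed" (NotFound, as in the log above) from other errors.
    func brokerService(c kubernetes.Interface, ns string) (string, error) {
        svc, err := c.CoreV1().Services(ns).Get("apiserver", metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return "", fmt.Errorf("template service broker not installed: %v", err)
        }
        if err != nil {
            return "", err
        }
        return svc.Spec.ClusterIP, nil
    }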
------------------------------
SS
------------------------------
[k8s.io] Variable Expansion
should allow substituting values in a container's args [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Variable Expansion
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:17:14.689: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:17:16.748: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-var-expansion-cw78h
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test substitution in container's args
Jul 9 19:17:17.550: INFO: Waiting up to 5m0s for pod "var-expansion-5e255d23-83e7-11e8-992b-28d244b00276" in namespace "e2e-tests-var-expansion-cw78h" to be "success or failure"
Jul 9 19:17:17.594: INFO: Pod "var-expansion-5e255d23-83e7-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 43.983466ms
Jul 9 19:17:19.634: INFO: Pod "var-expansion-5e255d23-83e7-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083132661s
Jul 9 19:17:21.681: INFO: Pod "var-expansion-5e255d23-83e7-11e8-992b-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.131014186s
STEP: Saw pod success
Jul 9 19:17:21.681: INFO: Pod "var-expansion-5e255d23-83e7-11e8-992b-28d244b00276" satisfied condition "success or failure"
Jul 9 19:17:21.721: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod var-expansion-5e255d23-83e7-11e8-992b-28d244b00276 container dapi-container: <nil>
STEP: delete the pod
Jul 9 19:17:21.869: INFO: Waiting for pod var-expansion-5e255d23-83e7-11e8-992b-28d244b00276 to disappear
Jul 9 19:17:21.908: INFO: Pod var-expansion-5e255d23-83e7-11e8-992b-28d244b00276 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:17:21.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-cw78h" for this suite.
Jul 9 19:17:28.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:17:31.972: INFO: namespace: e2e-tests-var-expansion-cw78h, resource: bindings, ignored listing per whitelist
Jul 9 19:17:32.438: INFO: namespace e2e-tests-var-expansion-cw78h deletion completed in 10.478758253s
• [SLOW TEST:17.749 seconds]
[k8s.io] Variable Expansion
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should allow substituting values in a container's args [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
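
Editor's note: the passing spec above exercises variable expansion — the kubelet substitutes $(NAME) references to container environment variables inside command and args before the container starts. A sketch of the kind of pod it creates (names and values are illustrative, not the suite's exact fixture):

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // expansionPod echoes an env var referenced via $(GREETING) in args;
    // a double dollar, $$(GREETING), would escape the expansion instead.
    func expansionPod() *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Containers: []v1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c"},
                    Args:    []string{"echo $(GREETING)"},
                    Env:     []v1.EnvVar{{Name: "GREETING", Value: "hello from expansion"}},
                }},
            },
        }
    }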
------------------------------
SS
------------------------------
[Feature:DeploymentConfig] deploymentconfigs with minimum ready seconds set [Conformance]
should not transition the deployment to Complete before satisfied [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1008
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:16:31.565: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:16:33.458: INFO: configPath is now "/tmp/e2e-test-cli-deployment-gtldl-user.kubeconfig"
Jul 9 19:16:33.458: INFO: The user is now "e2e-test-cli-deployment-gtldl-user"
Jul 9 19:16:33.458: INFO: Creating project "e2e-test-cli-deployment-gtldl"
Jul 9 19:16:33.574: INFO: Waiting on permissions in project "e2e-test-cli-deployment-gtldl" ...
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should not transition the deployment to Complete before satisfied [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1008
STEP: verifying the deployment is created
STEP: verifying that all pods are ready
Jul 9 19:16:37.910: INFO: All replicas are ready.
STEP: verifying that the deployment is still running
STEP: waiting for the deployment to finish
Jul 9 19:17:38.001: INFO: Finished waiting for deployment.
[AfterEach] with minimum ready seconds set [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1004
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:17:40.444: INFO: namespace : e2e-test-cli-deployment-gtldl api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:17:46.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:74.949 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
with minimum ready seconds set [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1002
should not transition the deployment to Complete before satisfied [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1008
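
Editor's note: what this deployment spec guards is that with minReadySeconds set, a replica only counts as available after staying Ready for that long, so the rollout must not be marked Complete the moment pods turn Ready — hence the roughly one-minute gap between "All replicas are ready" and "Finished waiting for deployment" in the timestamps above. A hedged sketch of the relevant DeploymentConfig fields, not the suite's exact fixture:

    package sketch

    import (
        appsv1 "github.com/openshift/api/apps/v1"
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // minReadyDC builds a DeploymentConfig whose replicas must hold Ready for
    // 60s before the rollout may transition to Complete.
    func minReadyDC() *appsv1.DeploymentConfig {
        labels := map[string]string{"app": "minreadytest"}
        return &appsv1.DeploymentConfig{
            ObjectMeta: metav1.ObjectMeta{Name: "minreadytest"},
            Spec: appsv1.DeploymentConfigSpec{
                Replicas:        2,
                MinReadySeconds: 60, // availability is delayed by this interval
                Selector:        labels,
                Template: &v1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: v1.PodSpec{
                        Containers: []v1.Container{{
                            Name:  "app",
                            Image: "openshift/hello-openshift",
                        }},
                    },
                },
            },
        }
    }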
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0666,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:17:32.446: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:17:34.454: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-d7w4d
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul 9 19:17:35.304: INFO: Waiting up to 5m0s for pod "pod-68bac026-83e7-11e8-992b-28d244b00276" in namespace "e2e-tests-emptydir-d7w4d" to be "success or failure"
Jul 9 19:17:35.340: INFO: Pod "pod-68bac026-83e7-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 36.039848ms
Jul 9 19:17:37.380: INFO: Pod "pod-68bac026-83e7-11e8-992b-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.076062691s
STEP: Saw pod success
Jul 9 19:17:37.380: INFO: Pod "pod-68bac026-83e7-11e8-992b-28d244b00276" satisfied condition "success or failure"
Jul 9 19:17:37.422: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-68bac026-83e7-11e8-992b-28d244b00276 container test-container: <nil>
STEP: delete the pod
Jul 9 19:17:37.505: INFO: Waiting for pod pod-68bac026-83e7-11e8-992b-28d244b00276 to disappear
Jul 9 19:17:37.550: INFO: Pod pod-68bac026-83e7-11e8-992b-28d244b00276 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:17:37.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-d7w4d" for this suite.
Jul 9 19:17:43.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:17:46.551: INFO: namespace: e2e-tests-emptydir-d7w4d, resource: bindings, ignored listing per whitelist
Jul 9 19:17:47.894: INFO: namespace e2e-tests-emptydir-d7w4d deletion completed in 10.300527545s
• [SLOW TEST:15.449 seconds]
[sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
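
Editor's note: the (non-root,0666,default) label is one cell of the emptyDir test matrix — default storage medium, a file with mode 0666, accessed by a non-root UID. A sketch of a pod shaped like that cell (the UID, paths, and shell probe are illustrative):

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // emptyDirModePod writes a 0666 file on an emptyDir volume as a non-root
    // user and prints the mode back so a caller can assert on the output.
    func emptyDirModePod() *v1.Pod {
        nonRoot := int64(1001)
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
            Spec: v1.PodSpec{
                RestartPolicy:   v1.RestartPolicyNever,
                SecurityContext: &v1.PodSecurityContext{RunAsUser: &nonRoot},
                Volumes: []v1.Volume{{
                    Name: "test-volume",
                    // no Medium set, i.e. the node's default storage medium
                    VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}},
                }},
                Containers: []v1.Container{{
                    Name:         "test-container",
                    Image:        "busybox",
                    Command:      []string{"sh", "-c", "touch /mnt/f && chmod 0666 /mnt/f && stat -c %a /mnt/f"},
                    VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/mnt"}},
                }},
            },
        }
    }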
------------------------------
SSSSSS
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:17:46.516: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:17:48.338: INFO: configPath is now "/tmp/e2e-test-router-stress-lzm6c-user.kubeconfig"
Jul 9 19:17:48.338: INFO: The user is now "e2e-test-router-stress-lzm6c-user"
Jul 9 19:17:48.338: INFO: Creating project "e2e-test-router-stress-lzm6c"
Jul 9 19:17:48.461: INFO: Waiting on permissions in project "e2e-test-router-stress-lzm6c" ...
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:45
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:32
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:17:48.594: INFO: namespace : e2e-test-router-stress-lzm6c api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:17:54.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [8.153 seconds]
[Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:21
The HAProxy router [BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:68
should serve routes that were created from an ingress [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:79
no router installed on the cluster
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:48
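
Editor's note: this spec is skipped rather than failed — the router suite's BeforeEach probes the cluster for an installed router and bails out when none exists, as happens here. A hypothetical version of such a guard (the service name and namespace are assumptions, not the suite's actual lookup):

    package sketch

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // routerInstalled reports whether a "router" Service can be found; a
    // BeforeEach can Skip the spec when this returns false.
    func routerInstalled(c kubernetes.Interface) bool {
        _, err := c.CoreV1().Services("default").Get("router", metav1.GetOptions{})
        return err == nil
    }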
------------------------------
S
------------------------------
[k8s.io] Sysctls
should support sysctls [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:60
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Sysctls
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:17:11.563: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:17:13.354: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-sysctl-xg4rw
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:56
[It] should support sysctls [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:60
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
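
Editor's note: the STEP sequence above maps onto a small pod that sets a whitelisted sysctl and reads it back from /proc. A sketch assuming the 1.11-era PodSecurityContext.Sysctls field (earlier suites carried sysctls in pod annotations); the image matches the pull events logged below:

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // shmRmidPod sets kernel.shm_rmid_forced=1 for the pod and prints the live
    // value; the test then checks the container log for "1".
    func shmRmidPod() *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "sysctl-"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                SecurityContext: &v1.PodSecurityContext{
                    Sysctls: []v1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
                },
                Containers: []v1.Container{{
                    Name:    "test-container",
                    Image:   "busybox",
                    Command: []string{"cat", "/proc/sys/kernel/shm_rmid_forced"},
                }},
            },
        }
    }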
[AfterEach] [k8s.io] Sysctls
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Collecting events from namespace "e2e-tests-sysctl-xg4rw".
STEP: Found 5 events.
Jul 9 19:17:18.215: INFO: At 2018-07-09 19:17:14 -0700 PDT - event for sysctl-5c0d9804-83e7-11e8-bd2e-28d244b00276: {default-scheduler } Scheduled: Successfully assigned e2e-tests-sysctl-xg4rw/sysctl-5c0d9804-83e7-11e8-bd2e-28d244b00276 to ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:17:18.215: INFO: At 2018-07-09 19:17:14 -0700 PDT - event for sysctl-5c0d9804-83e7-11e8-bd2e-28d244b00276: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Pulling: pulling image "busybox"
Jul 9 19:17:18.215: INFO: At 2018-07-09 19:17:16 -0700 PDT - event for sysctl-5c0d9804-83e7-11e8-bd2e-28d244b00276: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Pulled: Successfully pulled image "busybox"
Jul 9 19:17:18.215: INFO: At 2018-07-09 19:17:16 -0700 PDT - event for sysctl-5c0d9804-83e7-11e8-bd2e-28d244b00276: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Created: Created container
Jul 9 19:17:18.215: INFO: At 2018-07-09 19:17:16 -0700 PDT - event for sysctl-5c0d9804-83e7-11e8-bd2e-28d244b00276: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Started: Started container
Jul 9 19:17:18.370: INFO: POD NODE PHASE GRACE CONDITIONS
Jul 9 19:17:18.370: INFO: registry-6559c8c4db-45526 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:44 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:17:18.371: INFO: minreadytest-1-chctk ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:16:35 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:16:37 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:16:35 -0700 PDT }]
Jul 9 19:17:18.371: INFO: minreadytest-1-deploy ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:16:34 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:16:35 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:16:34 -0700 PDT }]
Jul 9 19:17:18.371: INFO: minreadytest-1-fddc7 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:16:35 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:16:37 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:16:35 -0700 PDT }]
Jul 9 19:17:18.371: INFO: execpod98j4h ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:15 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:16 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:15 -0700 PDT }]
Jul 9 19:17:18.371: INFO: frontend-1-build ip-10-0-130-54.us-west-2.compute.internal Succeeded [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:27 -0700 PDT PodCompleted } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:50 -0700 PDT PodCompleted } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC PodCompleted } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:21 -0700 PDT }]
Jul 9 19:17:18.371: INFO: sysctl-5c0d9804-83e7-11e8-bd2e-28d244b00276 ip-10-0-130-54.us-west-2.compute.internal Succeeded [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:17:14 -0700 PDT PodCompleted } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:17:14 -0700 PDT PodCompleted } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC PodCompleted } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:17:14 -0700 PDT }]
Jul 9 19:17:18.371: INFO: var-expansion-5e255d23-83e7-11e8-992b-28d244b00276 ip-10-0-130-54.us-west-2.compute.internal Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:17:17 -0700 PDT } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:17:17 -0700 PDT ContainersNotReady containers with unready status: [dapi-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC ContainersNotReady containers with unready status: [dapi-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:17:17 -0700 PDT }]
Jul 9 19:17:18.371: INFO: kube-apiserver-cn2ps ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:45 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT }]
Jul 9 19:17:18.371: INFO: kube-controller-manager-558dc6fb98-q6vr5 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:17:34 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:17:18.371: INFO: kube-core-operator-75d546fbbb-c7ctx ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:11 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:20 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:11 -0700 PDT }]
Jul 9 19:17:18.371: INFO: kube-dns-787c975867-txmxv ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:22 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:17:18.371: INFO: kube-flannel-bgv4g ip-10-0-141-201.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:59 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT }]
Jul 9 19:17:18.371: INFO: kube-flannel-m5wph ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:58 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT }]
Jul 9 19:17:18.371: INFO: kube-flannel-xcck7 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:17 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }]
Jul 9 19:17:18.371: INFO: kube-proxy-5td7p ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:54 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }]
Jul 9 19:17:18.371: INFO: kube-proxy-l2cnn ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:22 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:53 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:22 -0700 PDT }]
Jul 9 19:17:18.371: INFO: kube-proxy-zsgcb ip-10-0-141-201.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:53 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }]
Jul 9 19:17:18.371: INFO: kube-scheduler-68f8875b5c-s5tdr ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:11 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:17:18.371: INFO: metrics-server-5767bfc576-gfbwb ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:38 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:17:18.371: INFO: openshift-apiserver-rkms5 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:19 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT }]
Jul 9 19:17:18.371: INFO: openshift-controller-manager-99d6586b-qq685 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:17:55 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:17:18.371: INFO: pod-checkpointer-4882g ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:03 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }]
Jul 9 19:17:18.371: INFO: pod-checkpointer-4882g-ip-10-0-35-213.us-west-2.compute.internal ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:14 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:08 -0700 PDT }]
Jul 9 19:17:18.372: INFO: prometheus-0 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:04 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:40 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:04 -0700 PDT }]
Jul 9 19:17:18.372: INFO: tectonic-network-operator-jwwmp ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:13 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }]
Jul 9 19:17:18.372: INFO: tectonic-node-controller-2ctqd ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:05 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:08 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:05 -0700 PDT }]
Jul 9 19:17:18.372: INFO: tectonic-node-controller-operator-648b9f5d6d-nxvlb ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:14 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:17:18.372: INFO: webconsole-6698d4fbbc-rgsw2 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:44 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:17:18.372: INFO: default-http-backend-6985d557bb-8h44n ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:38 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:17:18.372: INFO: router-6796c95fdf-2k4wk ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:37 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:17:18.372: INFO: tectonic-ingress-controller-operator-78c4d6b9fc-bknh9 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:46 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT }]
Jul 9 19:17:18.372: INFO: directory-sync-d84d84d9f-j7pr6 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:17:18.372: INFO: kube-addon-operator-675f99d7f8-c6pdt ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:29 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }]
Jul 9 19:17:18.372: INFO: tectonic-alm-operator-79b6996f74-prs9h ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:35 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }]
Jul 9 19:17:18.372: INFO: tectonic-channel-operator-5d878cd785-l66n4 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }]
Jul 9 19:17:18.372: INFO: tectonic-clu-6b8d87785f-fswbx ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:06 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:11 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:06 -0700 PDT }]
Jul 9 19:17:18.372: INFO: tectonic-node-agent-r77mj ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:20 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 18:37:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:20 -0700 PDT }]
Jul 9 19:17:18.372: INFO: tectonic-node-agent-rrwlg ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 18:12:57 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }]
Jul 9 19:17:18.372: INFO: tectonic-stats-emitter-d87f669fd-988nl ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:29 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:36 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:23 -0700 PDT }]
Jul 9 19:17:18.372: INFO: tectonic-utility-operator-786b69fc8b-4xffz ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:41 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT }]
Jul 9 19:17:18.372: INFO:
Jul 9 19:17:18.405: INFO:
Logging node info for node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:17:18.439: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-130-54.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-130-54.us-west-2.compute.internal,UID:2f71bed0-83b7-11e8-84c6-0af96768d57e,ResourceVersion:76832,Generation:0,CreationTimestamp:2018-07-09 13:32:23 -0700 PDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2b,kubernetes.io/hostname: ip-10-0-130-54,node-role.kubernetes.io/worker: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"3e:08:91:8f:b9:a5"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.130.54,node-configuration.v1.coreos.com/currentConfig: worker-2650561509,node-configuration.v1.coreos.com/desiredConfig: worker-2650561509,node-configuration.v1.coreos.com/targetConfig: worker-2650561509,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.2.0/24,ExternalID:,ProviderID:aws:///us-west-2b/i-0cb9cec2620663d39,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8365150208 0} {<nil>} 8169092Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8260292608 0} {<nil>} 8066692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:17:16 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:17:16 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:17:16 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:17:16 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:17:16 -0700 PDT 2018-07-09 13:33:23 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.130.54} {InternalDNS ip-10-0-130-54.us-west-2.compute.internal} {Hostname ip-10-0-130-54}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC283016-6CE7-ACE7-0F9A-02CE10505945,BootID:cfad64a2-03d7-403a-bd51-76866880a650,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[openshift/origin-haproxy-router@sha256:f0a71ada9e9ee48529540c2d4938b9caa55f9a0ac8a3be598e269ca5cebf70c0 openshift/origin-haproxy-router:v3.10.0-alpha.0] 1284960820} 
{[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[docker-registry.default.svc:5000/e2e-test-build-valuefrom-lxhjr/test@sha256:6daa01a6f7f0784905bf9dcbce49826d73d7c3c1d62a802f875ee7c10db02960 docker-registry.default.svc:5000/e2e-test-build-valuefrom-lxhjr/test:latest] 613134454} {[docker-registry.default.svc:5000/e2e-test-build-valuefrom-vmtzs/test@sha256:92c5e723d97318711a71afb9ee5c12c3c48b98d0f2aaa5e954095fabbcb505ee docker-registry.default.svc:5000/e2e-test-build-valuefrom-vmtzs/test:latest] 613133841} {[docker-registry.default.svc:5000/e2e-test-templates-l8j5q/cakephp-mysql-example@sha256:02d80c750d1e71afc7792f55f935c3dd6cde1788bee2b53ab554d29c903ca064 docker-registry.default.svc:5000/e2e-test-templates-l8j5q/cakephp-mysql-example:latest] 603384691} {[docker-registry.default.svc:5000/openshift/php@sha256:164fa6fc06cdd98c98be09ecff5398feef3d479652899f983cf41e644d604ff0 centos/php-70-centos7@sha256:164fa6fc06cdd98c98be09ecff5398feef3d479652899f983cf41e644d604ff0 centos/php-70-centos7:latest] 589408618} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejspass@sha256:880359284c1e0933fe5f2db29b8c4d948b70da3dfb26a0462f68b23397740b0a docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejspass:latest] 568094192} {[docker-registry.default.svc:5000/openshift/php@sha256:59c3d53372cd7097494187f5a58bab58a1d956a340b70a23c84a0d000a565cbe] 567254500} {[docker-registry.default.svc:5000/e2e-test-build-timing-bl897/test@sha256:539e80a4de02794f6126cffce75562bcb721041c6d443c5ced15ba286d70e229 docker-registry.default.svc:5000/e2e-test-build-timing-bl897/test:latest] 566117187} {[docker-registry.default.svc:5000/e2e-test-build-timing-wkh4w/test@sha256:e0eeef684e9de55219871fa9e360d73a1163cfc407c626eade862cbee5a9bbc5 docker-registry.default.svc:5000/e2e-test-build-timing-wkh4w/test:latest] 566117040} {[centos/ruby-22-centos7@sha256:a18c8706118a5c4c9f1adf045024d2abf06ba632b5674b23421019ee4d3edcae centos/ruby-22-centos7:latest] 566117040} {[centos/nodejs-6-centos7@sha256:b2867b5008d9e975b3d4710ec0f31cdc96b079b83334b17e03a60602a7a590fc] 560696751} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejsroot@sha256:0397f7e12d87d62c539356a4936348d0a8deb40e1b5e970cdd1744d3e6ffa05a docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejsroot:latest] 560696751} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-phq84/nodejsroot@sha256:4084131a9910c10780186608faf5a9643de0f18d09c27fe828499a8d180abfba docker-registry.default.svc:5000/e2e-test-s2i-build-root-phq84/nodejsroot:latest] 560696751} {[docker-registry.default.svc:5000/openshift/ruby@sha256:2e83b9e07e85960060096b6aff7ee202a5f52e0e18447641b080b1f3879e0901] 536571487} {[docker-registry.default.svc:5000/openshift/ruby@sha256:8f00b7a5789887b72db0415355830c87e18804b774a922a424736f5237a44933] 518934530} {[docker-registry.default.svc:5000/e2e-test-new-app-xn8nh/a234567890123456789012345678901234567890123456789012345678@sha256:a9ecb5931f283c598dcaf3aca9025599eb71115bd0f2cd0f1989a9f37394efad docker-registry.default.svc:5000/e2e-test-new-app-xn8nh/a234567890123456789012345678901234567890123456789012345678:latest] 511744495} {[docker-registry.default.svc:5000/e2e-test-prometheus-4fqst/origin-nodejs-sample@sha256:95a78c60dc1709c2212cd8cc48cd3fffe6cdcdd847674497d9aa5d7891551699 docker-registry.default.svc:5000/e2e-test-prometheus-4fqst/origin-nodejs-sample:latest] 511744370} 
{[docker-registry.default.svc:5000/openshift/mongodb@sha256:3bb2aed7578ab5b6ba2bf22993df3c73ef91bdb02e273cc0ce8e529de7ee5660] 506453985} {[docker-registry.default.svc:5000/openshift/ruby@sha256:0eaaed9fae1b0d9bc8ed73b93d581c6ab019a92277484c9acf52fa60b3269a7c] 504578679} {[docker-registry.default.svc:5000/openshift/nodejs@sha256:3f262ed6ed7ec7497576e1117ddfafacaa23fecf34693e05a0abcb652b93e653 centos/nodejs-8-centos7@sha256:3f262ed6ed7ec7497576e1117ddfafacaa23fecf34693e05a0abcb652b93e653] 504452018} {[docker-registry.default.svc:5000/openshift/mongodb@sha256:896482969cd659b419bc444c153a74d11820655c7ed19b5094b8eb041f0065d6] 487132847} {[openshift/origin-docker-builder@sha256:4fe8032f87d2f8485a711ec60a9ffb330e42a6cd8d232ad3cf63c42471cfab29 openshift/origin-docker-builder:latest] 447580928} {[docker-registry.default.svc:5000/openshift/mysql@sha256:d03537ef57d51b13e6ad4a73a382ca180a0e02d975c8237790410f45865aae3c] 429435940} {[openshift/origin-haproxy-router@sha256:485fa86ac97b0d289411b3216fb8970989cd580817ebb5fcbb0f83a6dc2466f5 openshift/origin-haproxy-router:latest] 394965919} {[openshift/origin-deployer@sha256:1295e5be56fc03d4c482194378a882f2e96a8d23eadaf6dd32d603d3e877df99 openshift/origin-deployer:latest] 371674595} {[openshift/origin-web-console@sha256:d2cbbb533d26996226add8cb327cb2060e7a03c6aa96ad94cd236d4064c094ce openshift/origin-web-console:latest] 336636057} {[openshift/prometheus@sha256:35e2e0efc874c055be60a025874256816c98b9cebc10f259d7fb806bbe68badf openshift/prometheus:v2.2.1] 317896379} {[openshift/origin-docker-registry@sha256:c40ebb707721327c3b9c79f0e8e7f02483f034355d4149479333cc134b72967c openshift/origin-docker-registry:latest] 302637209} {[openshift/origin-pod@sha256:8fbd41f21824f5981716568790c5f78a4710bb0709ce9c473eb21ad2fbc5e877 openshift/origin-pod:latest] 251747200} {[openshift/origin-base@sha256:43dd97db435025eee02606658cfcccbc0a8ac4135e0d8870e91930d6cab8d1fd openshift/origin-base:latest] 228695137} {[openshift/oauth-proxy@sha256:4b73830ee6f7447d0921eedc3946de50016eb8f048d66ea3969abc4116f1e42a openshift/oauth-proxy:v1.0.0] 228241928} {[openshift/prometheus-alertmanager@sha256:35443abf6c5cf99b080307fe0f98098334f299780537a3e61ac5604cbfe48f7e openshift/prometheus-alertmanager:v0.14.0] 221857684} {[openshift/prometheus-alert-buffer@sha256:076f8dd576806f5c2dde7e536d020c31aa7d2ec7dcea52da6cbb944895def7ba openshift/prometheus-alert-buffer:v0.0.2] 200521084} {[docker-registry.default.svc:5000/e2e-test-build-multistage-9cdjf/multi-stage@sha256:df3e69e3fe1bc86897717b020b6caa000f1f97c14dc0b3853ca0d7149412da54 docker-registry.default.svc:5000/e2e-test-build-multistage-9cdjf/multi-stage:v1] 199835207} {[centos@sha256:b67d21dfe609ddacf404589e04631d90a342921e81c40aeaf3391f6717fa5322 centos@sha256:eed5b251b615d1e70b10bcec578d64e8aa839d2785c2ffd5424e472818c42755 centos:7 centos:centos7] 199678471} {[docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-s4b2n/image1@sha256:3967cd8851952bbba0b3a4d9c038f36dc5001463c8521d6955ab0f3f4598d779 docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-s4b2n/image1:latest] 199678471} {[k8s.gcr.io/nginx-slim-amd64@sha256:6654db6d4028756062edac466454ee5c9cf9b20ef79e35a81e3c840031eb1e2b k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[gcr.io/google_containers/metrics-server-amd64@sha256:54d2cf293e01f72d9be0e7c4f2c98e31f599088a9426a6415fe62426d446f5b2 gcr.io/google_containers/metrics-server-amd64:v0.2.0] 96501893} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 
quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/directory-sync@sha256:e5e7fe901868853d89c2c0697cc88f0686c6ba1178ca045ec57bfd18e7000048 quay.io/coreos/directory-sync:v0.0.2] 38433928} {[quay.io/coreos/tectonic-node-agent@sha256:1eb929223a1ecc12b246cbe4ba366092b139067aadb809fac8189b8ca274fab4 quay.io/coreos/tectonic-node-agent:92fb13f8b2a46702b8c10f278408f37d825ee2cb] 36791814} {[k8s.gcr.io/addon-resizer@sha256:d00afd42fc267fa3275a541083cfe67d160f966c788174b44597434760e1e1eb k8s.gcr.io/addon-resizer:2.1] 26450138} {[quay.io/coreos/tectonic-error-server@sha256:aefa0a012e103bee299c17e798e5830128588b6ef5d4d1f6bc8ae5804bc4d8cd quay.io/coreos/tectonic-error-server:1.1] 12714516} {[gcr.io/google_containers/dnsutils@sha256:cd9182f6d74e616942db1cef6f25e1e54b49ba0330c2e19d3ec061f027666cc0 gcr.io/google_containers/dnsutils:e2e] 8897789} {[gcr.io/kubernetes-e2e-test-images/hostexec-amd64@sha256:bdaecec5adfa7c79e9525c0992fdab36c2d68066f5e91eff0d1d9e8d73c654ea gcr.io/kubernetes-e2e-test-images/hostexec-amd64:1.1] 8407119} {[gcr.io/kubernetes-e2e-test-images/netexec-amd64@sha256:2edfad424a541b9e024f26368d3a5b7dcc1d7cd27a4ee8c1d8c3f81d9209ab2e gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0] 6227659} {[openshift/hello-openshift@sha256:aaea76ff622d2f8bcb32e538e7b3cd0ef6d291953f3e7c9f556c1ba5baf47e2e openshift/hello-openshift:latest] 6089990}],VolumesInUse:[],VolumesAttached:[],},}
Jul 9 19:17:18.440: INFO:
Logging kubelet events for node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:17:18.472: INFO:
Logging pods the kubelet thinks are on node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:17:18.584: INFO: default-http-backend-6985d557bb-8h44n started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:18.584: INFO: Container default-http-backend ready: true, restart count 0
Jul 9 19:17:18.584: INFO: router-6796c95fdf-2k4wk started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:18.584: INFO: Container router ready: true, restart count 0
Jul 9 19:17:18.584: INFO: minreadytest-1-chctk started at 2018-07-09 19:16:35 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:18.584: INFO: Container myapp ready: true, restart count 0
Jul 9 19:17:18.584: INFO: frontend-1-build started at 2018-07-09 19:14:21 -0700 PDT (2+1 container statuses recorded)
Jul 9 19:17:18.584: INFO: Init container git-clone ready: true, restart count 0
Jul 9 19:17:18.584: INFO: Init container manage-dockerfile ready: true, restart count 0
Jul 9 19:17:18.584: INFO: Container sti-build ready: false, restart count 0
Jul 9 19:17:18.584: INFO: kube-proxy-5td7p started at 2018-07-09 13:32:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:18.584: INFO: Container kube-proxy ready: true, restart count 0
Jul 9 19:17:18.584: INFO: registry-6559c8c4db-45526 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:18.584: INFO: Container registry ready: true, restart count 0
Jul 9 19:17:18.584: INFO: minreadytest-1-deploy started at 2018-07-09 19:16:34 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:18.584: INFO: Container deployment ready: true, restart count 0
Jul 9 19:17:18.584: INFO: prometheus-0 started at 2018-07-09 13:50:04 -0700 PDT (0+6 container statuses recorded)
Jul 9 19:17:18.584: INFO: Container alert-buffer ready: true, restart count 0
Jul 9 19:17:18.584: INFO: Container alertmanager ready: true, restart count 0
Jul 9 19:17:18.584: INFO: Container alertmanager-proxy ready: true, restart count 0
Jul 9 19:17:18.584: INFO: Container alerts-proxy ready: true, restart count 0
Jul 9 19:17:18.584: INFO: Container prom-proxy ready: true, restart count 0
Jul 9 19:17:18.584: INFO: Container prometheus ready: true, restart count 0
Jul 9 19:17:18.584: INFO: kube-flannel-xcck7 started at 2018-07-09 13:32:23 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:17:18.584: INFO: Container install-cni ready: true, restart count 0
Jul 9 19:17:18.584: INFO: Container kube-flannel ready: true, restart count 0
Jul 9 19:17:18.584: INFO: metrics-server-5767bfc576-gfbwb started at 2018-07-09 13:33:23 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:17:18.584: INFO: Container metrics-server ready: true, restart count 0
Jul 9 19:17:18.584: INFO: Container metrics-server-nanny ready: true, restart count 0
Jul 9 19:17:18.584: INFO: sysctl-5c0d9804-83e7-11e8-bd2e-28d244b00276 started at 2018-07-09 19:17:14 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:18.584: INFO: Container test-container ready: false, restart count 0
Jul 9 19:17:18.584: INFO: execpod98j4h started at 2018-07-09 19:14:15 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:18.584: INFO: Container exec ready: true, restart count 0
Jul 9 19:17:18.584: INFO: tectonic-node-agent-rrwlg started at 2018-07-09 13:32:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:18.584: INFO: Container node-agent ready: true, restart count 3
Jul 9 19:17:18.584: INFO: directory-sync-d84d84d9f-j7pr6 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:18.584: INFO: Container directory-sync ready: true, restart count 0
Jul 9 19:17:18.584: INFO: webconsole-6698d4fbbc-rgsw2 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:18.584: INFO: Container webconsole ready: true, restart count 0
Jul 9 19:17:18.584: INFO: minreadytest-1-fddc7 started at 2018-07-09 19:16:35 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:18.584: INFO: Container myapp ready: true, restart count 0
Jul 9 19:17:18.584: INFO: var-expansion-5e255d23-83e7-11e8-992b-28d244b00276 started at 2018-07-09 19:17:17 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:18.584: INFO: Container dapi-container ready: false, restart count 0
W0709 19:17:18.623181 11748 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 9 19:17:18.703: INFO:
Latency metrics for node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:17:18.703: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:39.952369s}
Jul 9 19:17:18.703: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:30.138495s}
Jul 9 19:17:18.703: INFO: {Operation:pull_image Method:docker_operations_latency_microseconds Quantile:0.99 Latency:18.561353s}
Jul 9 19:17:18.703: INFO:
Logging node info for node ip-10-0-141-201.us-west-2.compute.internal
Jul 9 19:17:18.737: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-141-201.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-141-201.us-west-2.compute.internal,UID:ab76db34-83b4-11e8-8888-0af96768d57e,ResourceVersion:76717,Generation:0,CreationTimestamp:2018-07-09 13:14:22 -0700 PDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2b,kubernetes.io/hostname: ip-10-0-141-201,node-role.kubernetes.io/etcd: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"b6:11:a8:d0:6d:85"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.141.201,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.1.0/24,ExternalID:,ProviderID:aws:///us-west-2b/i-03457d640f9c71dd1,Unschedulable:false,Taints:[{node-role.kubernetes.io/etcd NoSchedule <nil>}],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8365146112 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8260288512 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:17:10 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:17:10 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:17:10 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:17:10 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:17:10 -0700 PDT 2018-07-09 13:16:04 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.141.201} {InternalDNS ip-10-0-141-201.us-west-2.compute.internal} {Hostname ip-10-0-141-201}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC2F6BCA-4D59-F6AA-8C7B-027F94D52D78,BootID:92773d40-1311-4ad5-b294-38db65faf16c,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 
quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/etcd@sha256:688e6c102955fe927c34db97e6352d0e0962554735b2db5f2f66f3f94cfe8fd1 quay.io/coreos/etcd:v3.2.14] 37228348} {[quay.io/coreos/kube-client-agent@sha256:8564ab65bcb1064006d2fc9c6e32a5ca3f4326cdd2da9a2efc4fb7cc0e0b6041 quay.io/coreos/kube-client-agent:678cc8e6841e2121ebfdb6e2db568fce290b67d6] 33236131} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],},}
Jul 9 19:17:18.737: INFO:
Logging kubelet events for node ip-10-0-141-201.us-west-2.compute.internal
Jul 9 19:17:18.767: INFO:
Logging pods the kubelet thinks are on node ip-10-0-141-201.us-west-2.compute.internal
Jul 9 19:17:48.799: INFO: Unable to retrieve kubelet pods for node ip-10-0-141-201.us-west-2.compute.internal: the server is currently unable to handle the request (get nodes ip-10-0-141-201.us-west-2.compute.internal:10250)
Jul 9 19:17:48.799: INFO:
Logging node info for node ip-10-0-35-213.us-west-2.compute.internal
Jul 9 19:17:48.838: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-35-213.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-35-213.us-west-2.compute.internal,UID:a83cf873-83b4-11e8-8888-0af96768d57e,ResourceVersion:77099,Generation:0,CreationTimestamp:2018-07-09 13:14:17 -0700 PDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2c,kubernetes.io/hostname: ip-10-0-35-213,node-role.kubernetes.io/master: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"5e:08:be:54:0d:9f"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.35.213,node-configuration.v1.coreos.com/currentConfig: master-2063737633,node-configuration.v1.coreos.com/desiredConfig: master-2063737633,node-configuration.v1.coreos.com/targetConfig: master-2063737633,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.0.0/24,ExternalID:,ProviderID:aws:///us-west-2c/i-0e1d36783c9705b28,Unschedulable:false,Taints:[{node-role.kubernetes.io/master NoSchedule <nil>}],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8365146112 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8260288512 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:17:39 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:17:39 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:17:39 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:17:39 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:17:39 -0700 PDT 2018-07-09 13:16:08 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.35.213} {ExternalIP 34.220.249.237} {InternalDNS ip-10-0-35-213.us-west-2.compute.internal} {ExternalDNS ec2-34-220-249-237.us-west-2.compute.amazonaws.com} {Hostname ip-10-0-35-213}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC2ED297-E036-AA0D-C4ED-9057B3EA9001,BootID:7f784e0b-09a6-495a-b787-3d8619214f8a,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 
quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[openshift/origin-hypershift@sha256:3b26011ae771a6036a7533d970052be5c04bc1f6e6812314ffefd902f40910fd openshift/origin-hypershift:latest] 518022163} {[openshift/origin-hyperkube@sha256:11a08060b48d226d64d4bb5234f2386bf22472a0835c5b91f0fb0db25b0a7e19 openshift/origin-hyperkube:latest] 498702039} {[quay.io/coreos/awscli@sha256:1d6ea2f37c248a4f4f2a70126f0b8555fd0804d4e65af3b30c3a949247ea13a6 quay.io/coreos/awscli:025a357f05242fdad6a81e8a6b520098aa65a600] 97521631} {[quay.io/coreos/bootkube@sha256:63afddd30deedff273d65607f4fcf0b331f4418838a00c69b6ab7a5754a24f5a quay.io/coreos/bootkube:v0.10.0] 84921995} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[gcr.io/google_containers/k8s-dns-kube-dns-amd64@sha256:6d8e0da4fb46e9ea2034a3f4cab0e095618a2ead78720c12e791342738e5f85d gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.8] 50456751} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/tectonic-stats@sha256:e800fe60dd1a0f89f8ae85caae9209201254e17d889d664d633ed08e274e2a39 quay.io/coreos/tectonic-stats:6e882361357fe4b773adbf279cddf48cb50164c1] 48779830} {[quay.io/coreos/pod-checkpointer@sha256:1e1e48228f872d56c8a57a5e12adb5239ae9e6206536baf2904e4bf03314c8e8 quay.io/coreos/pod-checkpointer:9dc83e1ab3bc36ca25c9f7c18ddef1b91d4a0558] 47992230} {[quay.io/coreos/tectonic-network-operator-dev@sha256:e29d797f5740cf6f5c0ccc0de2b3e606d187acbdc0bb79a4397c058d8840c8fe quay.io/coreos/tectonic-network-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44068170} {[quay.io/coreos/tectonic-node-controller-operator-dev@sha256:7a31568c6c2e398cffa7e8387cf51543e3bf1f01b4a050a5d00a9b593c3dace0 quay.io/coreos/tectonic-node-controller-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44053165} {[quay.io/coreos/kube-addon-operator-dev@sha256:e327727a93813c31f6d65f76f2998722754b8ccb5110949153e55f2adbc2374e quay.io/coreos/kube-addon-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44052211} {[quay.io/coreos/tectonic-utility-operator-dev@sha256:4fb4de52c7aa64ce124e1bf73fb27989356c414101ecc19ca4ec9ab80e00a88d quay.io/coreos/tectonic-utility-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 43818409} {[quay.io/coreos/tectonic-ingress-controller-operator-dev@sha256:5e96253c8fe8357473d4806b116fcf03fe18dcad466a88083f9b9310045821f1 quay.io/coreos/tectonic-ingress-controller-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 43808038} {[quay.io/coreos/tectonic-alm-operator@sha256:ce32e6d4745040be8807d09eb925b2b076b60fb0a93e33302b74a5cc8f294ca5 quay.io/coreos/tectonic-alm-operator:v0.3.1] 43202998} {[gcr.io/google_containers/k8s-dns-sidecar-amd64@sha256:23df717980b4aa08d2da6c4cfa327f1b730d92ec9cf740959d2d5911830d82fb gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.8] 42210862} {[gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64@sha256:93c827f018cf3322f1ff2aa80324a0306048b0a69bc274e423071fb0d2d29d8b gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.8] 40951779} {[quay.io/coreos/kube-core-operator-dev@sha256:6cc0dd2405f19014b41a0eed57c39160aeb92c2380ac8f8a067ce7dee476cba2 quay.io/coreos/kube-core-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 40849618} {[quay.io/coreos/tectonic-channel-operator-dev@sha256:6eeb84c385333755a2189c199587bc26db6c5d897e1962d7e1047dec2531e85e 
quay.io/coreos/tectonic-channel-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 40523592} {[quay.io/coreos/etcd@sha256:688e6c102955fe927c34db97e6352d0e0962554735b2db5f2f66f3f94cfe8fd1 quay.io/coreos/etcd:v3.2.14] 37228348} {[quay.io/coreos/tectonic-node-agent@sha256:1eb929223a1ecc12b246cbe4ba366092b139067aadb809fac8189b8ca274fab4 quay.io/coreos/tectonic-node-agent:92fb13f8b2a46702b8c10f278408f37d825ee2cb] 36791814} {[quay.io/coreos/kube-core-renderer-dev@sha256:a595dfe57b7992971563fcea8ac1858c306529a465f9b690911f4220d93d3c5c quay.io/coreos/kube-core-renderer-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 36535818} {[quay.io/coreos/kube-etcd-signer-server@sha256:c4c0becf6779523af5b644b53375d61bed9c4688d496cb2f88d4f08024ac5390 quay.io/coreos/kube-etcd-signer-server:678cc8e6841e2121ebfdb6e2db568fce290b67d6] 34655544} {[quay.io/coreos/tectonic-node-controller-dev@sha256:c9c17f7c4c738e519e36224ae8c71d3a881b92ffb86fdb75f358efebafa27d84 quay.io/coreos/tectonic-node-controller-dev:a437848532713f2fa4137e9a0f4f6a689cf554a8] 25570332} {[quay.io/coreos/tectonic-clu@sha256:4e6a907a433e741632c8f9a7d9d9009bc08ac494dce05e0a19f8fa0a440a3926 quay.io/coreos/tectonic-clu:v0.0.1] 5081911} {[quay.io/coreos/tectonic-stats-extender@sha256:6e7fe41ca2d63791c08d2cc4b4311d9e01b37fa3dc116d3e77e7306cbe29a0f1 quay.io/coreos/tectonic-stats-extender:487b3da4e175da96dabfb44fba65cdb8b823db2e] 2818916} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],},}
Jul 9 19:17:48.838: INFO:
Logging kubelet events for node ip-10-0-35-213.us-west-2.compute.internal
Jul 9 19:17:48.875: INFO:
Logging pods the kubelet thinks are on node ip-10-0-35-213.us-west-2.compute.internal
Jul 9 19:17:49.001: INFO: tectonic-clu-6b8d87785f-fswbx started at 2018-07-09 13:19:06 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:49.001: INFO: Container tectonic-clu ready: true, restart count 0
Jul 9 19:17:49.001: INFO: tectonic-stats-emitter-d87f669fd-988nl started at 2018-07-09 13:19:23 -0700 PDT (1+2 container statuses recorded)
Jul 9 19:17:49.001: INFO: Init container tectonic-stats-extender-init ready: true, restart count 0
Jul 9 19:17:49.001: INFO: Container tectonic-stats-emitter ready: true, restart count 0
Jul 9 19:17:49.001: INFO: Container tectonic-stats-extender ready: true, restart count 0
Jul 9 19:17:49.001: INFO: tectonic-channel-operator-5d878cd785-l66n4 started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:49.001: INFO: Container tectonic-channel-operator ready: true, restart count 0
Jul 9 19:17:49.001: INFO: kube-proxy-l2cnn started at 2018-07-09 13:14:22 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:49.001: INFO: Container kube-proxy ready: true, restart count 0
Jul 9 19:17:49.001: INFO: openshift-apiserver-rkms5 started at 2018-07-09 13:14:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:49.001: INFO: Container openshift-apiserver ready: true, restart count 0
Jul 9 19:17:49.001: INFO: tectonic-network-operator-jwwmp started at 2018-07-09 13:14:24 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:49.001: INFO: Container tectonic-network-operator ready: true, restart count 0
Jul 9 19:17:49.001: INFO: kube-dns-787c975867-txmxv started at 2018-07-09 13:16:08 -0700 PDT (0+3 container statuses recorded)
Jul 9 19:17:49.001: INFO: Container dnsmasq ready: true, restart count 0
Jul 9 19:17:49.001: INFO: Container kubedns ready: true, restart count 0
Jul 9 19:17:49.001: INFO: Container sidecar ready: true, restart count 0
Jul 9 19:17:49.001: INFO: kube-scheduler-68f8875b5c-s5tdr started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:49.001: INFO: Container kube-scheduler ready: true, restart count 0
Jul 9 19:17:49.001: INFO: kube-apiserver-cn2ps started at 2018-07-09 13:14:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:49.001: INFO: Container kube-apiserver ready: true, restart count 4
Jul 9 19:17:49.001: INFO: tectonic-node-controller-2ctqd started at 2018-07-09 13:18:05 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:49.001: INFO: Container tectonic-node-controller ready: true, restart count 0
Jul 9 19:17:49.001: INFO: tectonic-alm-operator-79b6996f74-prs9h started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:49.001: INFO: Container tectonic-alm-operator ready: true, restart count 0
Jul 9 19:17:49.001: INFO: tectonic-ingress-controller-operator-78c4d6b9fc-bknh9 started at 2018-07-09 13:18:13 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:49.001: INFO: Container tectonic-ingress-controller-operator ready: true, restart count 0
Jul 9 19:17:49.001: INFO: tectonic-node-agent-r77mj started at 2018-07-09 13:19:20 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:49.002: INFO: Container node-agent ready: true, restart count 4
Jul 9 19:17:49.002: INFO: pod-checkpointer-4882g started at 2018-07-09 13:14:24 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:49.002: INFO: Container pod-checkpointer ready: true, restart count 0
Jul 9 19:17:49.002: INFO: kube-flannel-m5wph started at 2018-07-09 13:15:39 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:17:49.002: INFO: Container install-cni ready: true, restart count 0
Jul 9 19:17:49.002: INFO: Container kube-flannel ready: true, restart count 0
Jul 9 19:17:49.002: INFO: openshift-controller-manager-99d6586b-qq685 started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:49.002: INFO: Container openshift-controller-manager ready: true, restart count 3
Jul 9 19:17:49.002: INFO: tectonic-node-controller-operator-648b9f5d6d-nxvlb started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:49.002: INFO: Container tectonic-node-controller-operator ready: true, restart count 0
Jul 9 19:17:49.002: INFO: kube-core-operator-75d546fbbb-c7ctx started at 2018-07-09 13:18:11 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:49.002: INFO: Container kube-core-operator ready: true, restart count 0
Jul 9 19:17:49.002: INFO: tectonic-utility-operator-786b69fc8b-4xffz started at 2018-07-09 13:18:13 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:49.002: INFO: Container tectonic-utility-operator ready: true, restart count 0
Jul 9 19:17:49.002: INFO: kube-addon-operator-675f99d7f8-c6pdt started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:49.002: INFO: Container kube-addon-operator ready: true, restart count 0
Jul 9 19:17:49.002: INFO: pod-checkpointer-4882g-ip-10-0-35-213.us-west-2.compute.internal started at <nil> (0+0 container statuses recorded)
Jul 9 19:17:49.002: INFO: kube-controller-manager-558dc6fb98-q6vr5 started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:49.002: INFO: Container kube-controller-manager ready: true, restart count 1
W0709 19:17:49.037552 11748 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 9 19:17:49.151: INFO:
Latency metrics for node ip-10-0-35-213.us-west-2.compute.internal
STEP: Dumping a list of prepulled images on each node...
Jul 9 19:17:49.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sysctl-xg4rw" for this suite.
Jul 9 19:17:55.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:17:58.581: INFO: namespace: e2e-tests-sysctl-xg4rw, resource: bindings, ignored listing per whitelist
Jul 9 19:17:59.396: INFO: namespace e2e-tests-sysctl-xg4rw deletion completed in 10.163694124s
• Failure [47.833 seconds]
[k8s.io] Sysctls
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should support sysctls [Suite:openshift/conformance/parallel] [Suite:k8s] [It]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:60
Expected
<string>: kernel.shm_rmid_forced = 0
to contain substring
<string>: kernel.shm_rmid_forced = 1
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:98
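The failure above is a plain value mismatch: the pod asked the kubelet for kernel.shm_rmid_forced = 1, but the container read back the node default of 0, meaning the requested sysctl was never applied inside the pod. A minimal sketch of the kind of pod this test creates, assuming the Kubernetes 1.11-era client-go types where pod-level sysctls live on the pod security context (the pod name and image here are illustrative, not the test's exact fixture):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Request a safe sysctl at the pod level; the kubelet should apply it in
	// the pod's namespaces before any container starts. The container then
	// prints the value so the test can compare stdout against the request.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-shm-rmid-forced"}, // illustrative name
		Spec: v1.PodSpec{
			SecurityContext: &v1.PodSecurityContext{
				Sysctls: []v1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
			},
			Containers: []v1.Container{{
				Name:    "test-container",
				Image:   "busybox", // illustrative; the suite ships its own test image
				Command: []string{"/bin/sysctl", "kernel.shm_rmid_forced"},
			}},
			RestartPolicy: v1.RestartPolicyNever,
		},
	}
	fmt.Printf("requested: %+v\n", pod.Spec.SecurityContext.Sysctls)
}

One plausible reading, given that this run pairs a v1.10.0 test binary with a v1.11.0 kubelet, is skew between the older annotation-based sysctl API and the newer field-based one, so the kubelet silently ignores the request and the container sees the node default.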
------------------------------
S
------------------------------
[sig-storage] Downward API volume
should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:17:47.900: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:17:49.996: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-gd54g
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:38
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:17:50.746: INFO: Waiting up to 5m0s for pod "downwardapi-volume-71eeb0f9-83e7-11e8-992b-28d244b00276" in namespace "e2e-tests-downward-api-gd54g" to be "success or failure"
Jul 9 19:17:50.785: INFO: Pod "downwardapi-volume-71eeb0f9-83e7-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 39.183911ms
Jul 9 19:17:52.822: INFO: Pod "downwardapi-volume-71eeb0f9-83e7-11e8-992b-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.07606692s
STEP: Saw pod success
Jul 9 19:17:52.822: INFO: Pod "downwardapi-volume-71eeb0f9-83e7-11e8-992b-28d244b00276" satisfied condition "success or failure"
Jul 9 19:17:52.862: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-71eeb0f9-83e7-11e8-992b-28d244b00276 container client-container: <nil>
STEP: delete the pod
Jul 9 19:17:52.960: INFO: Waiting for pod downwardapi-volume-71eeb0f9-83e7-11e8-992b-28d244b00276 to disappear
Jul 9 19:17:52.996: INFO: Pod downwardapi-volume-71eeb0f9-83e7-11e8-992b-28d244b00276 no longer exists
[AfterEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:17:52.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-gd54g" for this suite.
Jul 9 19:17:59.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:18:01.251: INFO: namespace: e2e-tests-downward-api-gd54g, resource: bindings, ignored listing per whitelist
Jul 9 19:18:03.494: INFO: namespace e2e-tests-downward-api-gd54g deletion completed in 10.449564971s
• [SLOW TEST:15.594 seconds]
[sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:33
should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
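For reference, the mechanism this test exercises: a downward API volume file backed by a resourceFieldRef on limits.memory. Because the container declares no memory limit, the kubelet falls back to the node's allocatable memory when rendering the file, and that is the value the test asserts on. A sketch of the volume such a pod would carry, assuming current client-go types (names are illustrative):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Downward API volume: expose the container's effective memory limit as a
	// file. With no limit set on the container, the rendered value is the
	// node's allocatable memory.
	vol := v1.Volume{
		Name: "podinfo",
		VolumeSource: v1.VolumeSource{
			DownwardAPI: &v1.DownwardAPIVolumeSource{
				Items: []v1.DownwardAPIVolumeFile{{
					Path: "memory_limit",
					ResourceFieldRef: &v1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.memory",
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}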
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that implements NetworkPolicy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:431
Jul 9 19:18:03.496: INFO: This plugin does not implement NetworkPolicy.
[AfterEach] when using a plugin that implements NetworkPolicy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:18:03.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
NetworkPolicy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:48
when using a plugin that implements NetworkPolicy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:430
should enforce multiple, stacked policies with overlapping podSelectors [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] [BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:177
Jul 9 19:18:03.496: This plugin does not implement NetworkPolicy.
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
[sig-storage] Secrets
should be consumable from pods in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:17:59.398: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:18:01.024: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-secrets-qlgsc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating secret with name secret-test-7871b510-83e7-11e8-bd2e-28d244b00276
STEP: Creating a pod to test consume secrets
Jul 9 19:18:01.696: INFO: Waiting up to 5m0s for pod "pod-secrets-78768152-83e7-11e8-bd2e-28d244b00276" in namespace "e2e-tests-secrets-qlgsc" to be "success or failure"
Jul 9 19:18:01.727: INFO: Pod "pod-secrets-78768152-83e7-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 30.880909ms
Jul 9 19:18:03.756: INFO: Pod "pod-secrets-78768152-83e7-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060054186s
Jul 9 19:18:05.790: INFO: Pod "pod-secrets-78768152-83e7-11e8-bd2e-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09359757s
STEP: Saw pod success
Jul 9 19:18:05.790: INFO: Pod "pod-secrets-78768152-83e7-11e8-bd2e-28d244b00276" satisfied condition "success or failure"
Jul 9 19:18:05.821: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-secrets-78768152-83e7-11e8-bd2e-28d244b00276 container secret-volume-test: <nil>
STEP: delete the pod
Jul 9 19:18:05.887: INFO: Waiting for pod pod-secrets-78768152-83e7-11e8-bd2e-28d244b00276 to disappear
Jul 9 19:18:05.925: INFO: Pod pod-secrets-78768152-83e7-11e8-bd2e-28d244b00276 no longer exists
[AfterEach] [sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:18:05.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-qlgsc" for this suite.
Jul 9 19:18:12.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:18:15.286: INFO: namespace: e2e-tests-secrets-qlgsc, resource: bindings, ignored listing per whitelist
Jul 9 19:18:15.316: INFO: namespace e2e-tests-secrets-qlgsc deletion completed in 9.35852938s
• [SLOW TEST:15.917 seconds]
[sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
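The pattern under test here: a Secret projected into the pod as a volume, with the container simply printing the mounted file so the test can check its mode and contents. A sketch under the same client-go assumptions (secret name, key, value, and image are illustrative):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A Secret and a pod that mounts it; the container cats the projected key
	// so the test can assert on the file's contents from the pod logs.
	secret := &v1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test"}, // illustrative
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
		Spec: v1.PodSpec{
			Volumes: []v1.Volume{{
				Name: "secret-volume",
				VolumeSource: v1.VolumeSource{
					Secret: &v1.SecretVolumeSource{SecretName: secret.Name},
				},
			}},
			Containers: []v1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox", // illustrative
				Command:      []string{"cat", "/etc/secret-volume/data-1"},
				VolumeMounts: []v1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
			RestartPolicy: v1.RestartPolicyNever,
		},
	}
	fmt.Println(secret.Name, pod.Name)
}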
------------------------------
S
------------------------------
[sig-storage] Downward API volume
should provide podname only [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:18:15.317: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:18:16.827: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-g87bq
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:38
[It] should provide podname only [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:18:17.558: INFO: Waiting up to 5m0s for pod "downwardapi-volume-81ebe892-83e7-11e8-bd2e-28d244b00276" in namespace "e2e-tests-downward-api-g87bq" to be "success or failure"
Jul 9 19:18:17.591: INFO: Pod "downwardapi-volume-81ebe892-83e7-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 32.518953ms
Jul 9 19:18:19.623: INFO: Pod "downwardapi-volume-81ebe892-83e7-11e8-bd2e-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064857681s
STEP: Saw pod success
Jul 9 19:18:19.623: INFO: Pod "downwardapi-volume-81ebe892-83e7-11e8-bd2e-28d244b00276" satisfied condition "success or failure"
Jul 9 19:18:19.654: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-81ebe892-83e7-11e8-bd2e-28d244b00276 container client-container: <nil>
STEP: delete the pod
Jul 9 19:18:19.721: INFO: Waiting for pod downwardapi-volume-81ebe892-83e7-11e8-bd2e-28d244b00276 to disappear
Jul 9 19:18:19.755: INFO: Pod downwardapi-volume-81ebe892-83e7-11e8-bd2e-28d244b00276 no longer exists
[AfterEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:18:19.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-g87bq" for this suite.
Jul 9 19:18:25.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:18:28.874: INFO: namespace: e2e-tests-downward-api-g87bq, resource: bindings, ignored listing per whitelist
Jul 9 19:18:29.199: INFO: namespace e2e-tests-downward-api-g87bq deletion completed in 9.409267552s
• [SLOW TEST:13.882 seconds]
[sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:33
should provide podname only [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
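The "podname only" case has the same downward API volume shape as the memory-limit case above, except the file is backed by a fieldRef on metadata.name rather than a resourceFieldRef. Only the volume item changes (same assumptions as the earlier sketch):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// fieldRef variant: the rendered file contains the pod's own name.
	item := v1.DownwardAPIVolumeFile{
		Path:     "podname",
		FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.name"},
	}
	fmt.Printf("%+v\n", item)
}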
------------------------------
[image_ecosystem][mongodb] openshift mongodb image creating from a template
should instantiate the template [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mongodb_ephemeral.go:34
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [image_ecosystem][mongodb] openshift mongodb image
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:17:54.671: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [image_ecosystem][mongodb] openshift mongodb image
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:17:56.971: INFO: configPath is now "/tmp/e2e-test-mongodb-create-mxp59-user.kubeconfig"
Jul 9 19:17:56.971: INFO: The user is now "e2e-test-mongodb-create-mxp59-user"
Jul 9 19:17:56.971: INFO: Creating project "e2e-test-mongodb-create-mxp59"
Jul 9 19:17:57.125: INFO: Waiting on permissions in project "e2e-test-mongodb-create-mxp59" ...
[BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mongodb_ephemeral.go:22
Jul 9 19:17:57.190: INFO:
docker info output:
Containers: 4
Running: 0
Paused: 0
Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 20
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
[It] should instantiate the template [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mongodb_ephemeral.go:34
openshift namespace image streams OK
STEP: creating a new app
Jul 9 19:17:57.626: INFO: Running 'oc new-app --config=/tmp/e2e-test-mongodb-create-mxp59-user.kubeconfig --namespace=e2e-test-mongodb-create-mxp59 -f /tmp/fixture-testdata-dir333495585/examples/db-templates/mongodb-ephemeral-template.json'
--> Deploying template "e2e-test-mongodb-create-mxp59/mongodb-ephemeral" for "/tmp/fixture-testdata-dir333495585/examples/db-templates/mongodb-ephemeral-template.json" to project e2e-test-mongodb-create-mxp59
MongoDB (Ephemeral)
---------
MongoDB database service, without persistent storage. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/mongodb-container/blob/master/3.2/README.md.
WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing.
The following service(s) have been created in your project: mongodb.
Username: userSH5
Password: rYXxAfAPgqyge1eS
Database Name: sampledb
Connection URL: mongodb://userSH5:rYXxAfAPgqyge1eS@mongodb/sampledb
For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/mongodb-container/blob/master/3.2/README.md.
* With parameters:
* Memory Limit=512Mi
* Namespace=openshift
* Database Service Name=mongodb
* MongoDB Connection Username=userSH5 # generated
* MongoDB Connection Password=rYXxAfAPgqyge1eS # generated
* MongoDB Database Name=sampledb
* MongoDB Admin Password=b2glYaLmorpxNjYS # generated
* Version of MongoDB Image=3.2
--> Creating resources ...
secret "mongodb" created
service "mongodb" created
deploymentconfig "mongodb" created
--> Success
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose svc/mongodb'
Run 'oc status' to view your app.
STEP: waiting for the deployment to complete
Jul 9 19:17:59.934: INFO: waiting for deploymentconfig e2e-test-mongodb-create-mxp59/mongodb to be available with version 1
Jul 9 19:18:21.005: INFO: deploymentconfig e2e-test-mongodb-create-mxp59/mongodb available after 21.071302293s
pods: mongodb-1-zkx79
STEP: expecting the mongodb pod to be running
STEP: expecting the mongodb service to answer a ping
Jul 9 19:18:22.044: INFO: Running 'oc exec --config=/tmp/e2e-test-mongodb-create-mxp59-user.kubeconfig --namespace=e2e-test-mongodb-create-mxp59 mongodb-1-zkx79 -- bash -c mongo --quiet --eval '{"ping", 1}''
STEP: expecting that we can insert a new record
Jul 9 19:18:22.755: INFO: Running 'oc exec --config=/tmp/e2e-test-mongodb-create-mxp59-user.kubeconfig --namespace=e2e-test-mongodb-create-mxp59 mongodb-1-zkx79 -- bash -c mongo --quiet "$MONGODB_DATABASE" --username "$MONGODB_USER" --password "$MONGODB_PASSWORD" --eval 'db.foo.save({ "status": "passed" })''
STEP: expecting that we can read a record
Jul 9 19:18:23.426: INFO: Running 'oc exec --config=/tmp/e2e-test-mongodb-create-mxp59-user.kubeconfig --namespace=e2e-test-mongodb-create-mxp59 mongodb-1-zkx79 -- bash -c mongo --quiet "$MONGODB_DATABASE" --username "$MONGODB_USER" --password "$MONGODB_PASSWORD" --eval 'printjson(db.foo.find({}, {_id: 0}).toArray())''
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mongodb_ephemeral.go:26
[AfterEach] [image_ecosystem][mongodb] openshift mongodb image
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:18:24.226: INFO: namespace : e2e-test-mongodb-create-mxp59 api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [image_ecosystem][mongodb] openshift mongodb image
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:18:46.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:51.631 seconds]
[image_ecosystem][mongodb] openshift mongodb image
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mongodb_ephemeral.go:15
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mongodb_ephemeral.go:21
creating from a template
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mongodb_ephemeral.go:33
should instantiate the template [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mongodb_ephemeral.go:34
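The verification steps above all shell out through oc exec. A sketch of replaying the final read-back check by hand, assuming oc is on PATH and the kubeconfig, namespace, and pod name from this run still exist (the MONGODB_* variables are injected into the container by the template):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Read back the record the test inserted; a passing run prints
	// [ { "status" : "passed" } ].
	cmd := exec.Command("oc", "exec",
		"--config=/tmp/e2e-test-mongodb-create-mxp59-user.kubeconfig",
		"--namespace=e2e-test-mongodb-create-mxp59",
		"mongodb-1-zkx79", "--", "bash", "-c",
		`mongo --quiet "$MONGODB_DATABASE" --username "$MONGODB_USER" --password "$MONGODB_PASSWORD" --eval 'printjson(db.foo.find({}, {_id: 0}).toArray())'`)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}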
------------------------------
[k8s.io] InitContainer
should invoke init containers on a RestartNever pod [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] InitContainer
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:18:29.200: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:18:30.691: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-init-container-ljvgk
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:40
[It] should invoke init containers on a RestartNever pod [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:44
STEP: creating the pod
Jul 9 19:18:31.273: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:18:40.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-ljvgk" for this suite.
Jul 9 19:18:46.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:18:50.027: INFO: namespace: e2e-tests-init-container-ljvgk, resource: bindings, ignored listing per whitelist
Jul 9 19:18:50.057: INFO: namespace e2e-tests-init-container-ljvgk deletion completed in 9.488643845s
• [SLOW TEST:20.857 seconds]
[k8s.io] InitContainer
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should invoke init containers on a RestartNever pod [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:44
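What this test exercises: with restartPolicy: Never, the kubelet runs each init container to completion, strictly in order, before starting any app container, and the pod only reaches Succeeded once all of them have exited 0. A sketch of such a pod under the same client-go assumptions (names and image are illustrative):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"}, // illustrative
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			// Runs strictly in order: init-1, then init-2, then run-1.
			InitContainers: []v1.Container{
				{Name: "init-1", Image: "busybox", Command: []string{"/bin/true"}},
				{Name: "init-2", Image: "busybox", Command: []string{"/bin/true"}},
			},
			Containers: []v1.Container{
				{Name: "run-1", Image: "busybox", Command: []string{"/bin/true"}},
			},
		},
	}
	fmt.Println(pod.Name)
}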
------------------------------
[sig-storage] Secrets
should be consumable in multiple volumes in a pod [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:18:50.060: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:18:51.647: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-secrets-mtlqm
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating secret with name secret-test-96a7f343-83e7-11e8-bd2e-28d244b00276
STEP: Creating a pod to test consume secrets
Jul 9 19:18:52.378: INFO: Waiting up to 5m0s for pod "pod-secrets-96ac7b2f-83e7-11e8-bd2e-28d244b00276" in namespace "e2e-tests-secrets-mtlqm" to be "success or failure"
Jul 9 19:18:52.411: INFO: Pod "pod-secrets-96ac7b2f-83e7-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 32.682627ms
Jul 9 19:18:54.449: INFO: Pod "pod-secrets-96ac7b2f-83e7-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07052138s
Jul 9 19:18:56.479: INFO: Pod "pod-secrets-96ac7b2f-83e7-11e8-bd2e-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10078221s
STEP: Saw pod success
Jul 9 19:18:56.479: INFO: Pod "pod-secrets-96ac7b2f-83e7-11e8-bd2e-28d244b00276" satisfied condition "success or failure"
Jul 9 19:18:56.530: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-secrets-96ac7b2f-83e7-11e8-bd2e-28d244b00276 container secret-volume-test: <nil>
STEP: delete the pod
Jul 9 19:18:56.599: INFO: Waiting for pod pod-secrets-96ac7b2f-83e7-11e8-bd2e-28d244b00276 to disappear
Jul 9 19:18:56.627: INFO: Pod pod-secrets-96ac7b2f-83e7-11e8-bd2e-28d244b00276 no longer exists
[AfterEach] [sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:18:56.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-mtlqm" for this suite.
Jul 9 19:19:02.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:19:05.783: INFO: namespace: e2e-tests-secrets-mtlqm, resource: bindings, ignored listing per whitelist
Jul 9 19:19:06.308: INFO: namespace e2e-tests-secrets-mtlqm deletion completed in 9.638494379s
• [SLOW TEST:16.248 seconds]
[sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable in multiple volumes in a pod [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
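For context, the test above mounts one secret into a pod through two separate volumes and verifies both mounts are readable. A minimal sketch under illustrative names (the secret-test-... names in the log are generated per run):

apiVersion: v1
kind: Secret
metadata:
  name: secret-test            # illustrative; the suite generates a unique name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test
  - name: secret-volume-2
    secret:
      secretName: secret-test  # same secret, exposed through a second volume
  containers:
  - name: secret-volume-test
    image: busybox
    command: ['sh', '-c', 'cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1']
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
------------------------------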
[Area:Networking] network isolation when using a plugin that does not isolate namespaces by default
should allow communication between pods in different namespaces on the same node [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:15
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that does not isolate namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:407
Jul 9 19:18:46.575: INFO: Could not check network plugin name: exit status 1. Assuming a non-OpenShift plugin
[BeforeEach] when using a plugin that does not isolate namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:18:46.575: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-net-isolation1-f5qp8
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when using a plugin that does not isolate namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:18:48.506: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-net-isolation2-fvc9k
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow communication between pods in different namespaces on the same node [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:15
Jul 9 19:18:50.646: INFO: Using ip-10-0-130-54.us-west-2.compute.internal for test ([ip-10-0-130-54.us-west-2.compute.internal] out of [ip-10-0-130-54.us-west-2.compute.internal])
Jul 9 19:18:52.794: INFO: Target pod IP:port is 10.2.2.72:8080
Jul 9 19:18:52.794: INFO: Creating an exec pod on node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:18:52.794: INFO: Creating new exec pod
Jul 9 19:18:56.956: INFO: Waiting up to 10s to wget 10.2.2.72:8080
Jul 9 19:18:56.956: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-tests-net-isolation2-fvc9k execpod-sourceip-ip-10-0-130-54.us-west-2.compute.internaltjbwd -- /bin/sh -c wget -T 30 -qO- 10.2.2.72:8080'
Jul 9 19:18:57.616: INFO: stderr: ""
Jul 9 19:18:57.616: INFO: Cleaning up the exec pod
[AfterEach] when using a plugin that does not isolate namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:18:57.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-net-isolation1-f5qp8" for this suite.
Jul 9 19:19:03.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:19:05.852: INFO: namespace: e2e-tests-net-isolation1-f5qp8, resource: bindings, ignored listing per whitelist
Jul 9 19:19:07.599: INFO: namespace e2e-tests-net-isolation1-f5qp8 deletion completed in 9.836500252s
[AfterEach] when using a plugin that does not isolate namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:19:07.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-net-isolation2-fvc9k" for this suite.
Jul 9 19:19:13.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:19:15.952: INFO: namespace: e2e-tests-net-isolation2-fvc9k, resource: bindings, ignored listing per whitelist
Jul 9 19:19:17.605: INFO: namespace e2e-tests-net-isolation2-fvc9k deletion completed in 9.968094137s
• [SLOW TEST:31.301 seconds]
[Area:Networking] network isolation
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10
when using a plugin that does not isolate namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:406
should allow communication between pods in different namespaces on the same node [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:15
------------------------------
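The isolation check above reduces to: run a server pod in one namespace, an exec pod in a second namespace on the same node, then wget the server's pod IP from the exec pod. A minimal sketch, assuming two pre-created namespaces net-isolation1 and net-isolation2 (the suite generates its namespace names and resolves the target pod IP at runtime):

apiVersion: v1
kind: Pod
metadata:
  name: target-pod
  namespace: net-isolation1    # assumed namespace name
spec:
  containers:
  - name: webserver
    image: gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0
    # netexec serves HTTP on port 8080 by default, the port probed in the log
---
apiVersion: v1
kind: Pod
metadata:
  name: execpod
  namespace: net-isolation2    # assumed namespace name
spec:
  containers:
  - name: exec
    image: busybox
    command: ['sleep', '3600'] # kept alive so the test can exec into it

With the target pod's IP in hand (10.2.2.72 in this run), the probe is the same command the log shows:
kubectl exec -n net-isolation2 execpod -- wget -T 30 -qO- 10.2.2.72:8080
------------------------------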
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:19:17.606: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:19:19.438: INFO: configPath is now "/tmp/e2e-test-unprivileged-router-q8gbw-user.kubeconfig"
Jul 9 19:19:19.438: INFO: The user is now "e2e-test-unprivileged-router-q8gbw-user"
Jul 9 19:19:19.438: INFO: Creating project "e2e-test-unprivileged-router-q8gbw"
Jul 9 19:19:19.591: INFO: Waiting on permissions in project "e2e-test-unprivileged-router-q8gbw" ...
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:41
Jul 9 19:19:19.625: INFO: Running 'oc new-app --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-unprivileged-router-q8gbw -f /tmp/fixture-testdata-dir333495585/test/extended/testdata/scoped-router.yaml -p=IMAGE=openshift/origin-haproxy-router -p=SCOPE=["--name=test-unprivileged", "--namespace=$(POD_NAMESPACE)", "--loglevel=4", "--labels=select=first", "--update-status=false"]'
warning: --param no longer accepts comma-separated lists of values. "SCOPE=[\"--name=test-unprivileged\", \"--namespace=$(POD_NAMESPACE)\", \"--loglevel=4\", \"--labels=select=first\", \"--update-status=false\"]" will be treated as a single key-value pair.
--> Deploying template "e2e-test-unprivileged-router-q8gbw/" for "/tmp/fixture-testdata-dir333495585/test/extended/testdata/scoped-router.yaml" to project e2e-test-unprivileged-router-q8gbw
* With parameters:
* IMAGE=openshift/origin-haproxy-router
* SCOPE=["--name=test-unprivileged", "--namespace=$(POD_NAMESPACE)", "--loglevel=4", "--labels=select=first", "--update-status=false"]
--> Creating resources ...
pod "router-scoped" created
pod "router-override" created
pod "router-override-domains" created
rolebinding "system-router" created
route "route-1" created
route "route-2" created
route "route-override-domain-1" created
route "route-override-domain-2" created
service "endpoints" created
pod "endpoint-1" created
--> Success
Access your application via route 'first.example.com'
Access your application via route 'second.example.com'
Access your application via route 'y.a.null.ptr'
Access your application via route 'main.void.str'
Run 'oc status' to view your app.
[It] should run even if it has no access to update status [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:55
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:29
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:19:20.742: INFO: namespace : e2e-test-unprivileged-router-q8gbw api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:19:40.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] [23.219 seconds]
[Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:19
The HAProxy router
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:54
should run even if it has no access to update status [Suite:openshift/conformance/parallel] [It]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:55
test temporarily disabled
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:56
------------------------------
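The --param warning above is the point of the exercise: SCOPE is a JSON array handed over as a single key-value pair. OpenShift templates can expand a parameter as structured JSON using the ${{PARAM}} form, which is how one string parameter becomes the router pod's args list. A minimal sketch of that pattern (not the actual scoped-router.yaml, which defines several pods, routes, a service, and a rolebinding):

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: scoped-router-sketch   # illustrative name
parameters:
- name: IMAGE
- name: SCOPE
objects:
- apiVersion: v1
  kind: Pod
  metadata:
    name: router-scoped
  spec:
    containers:
    - name: router
      image: "${IMAGE}"        # ${PARAM} substitutes as a plain string
      args: "${{SCOPE}}"       # ${{PARAM}} substitutes as JSON, yielding a list of flags
------------------------------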
[Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
should successfully resolve valueFrom in s2i build environment variables [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:61
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:19:06.309: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:19:07.882: INFO: configPath is now "/tmp/e2e-test-build-valuefrom-frlkw-user.kubeconfig"
Jul 9 19:19:07.882: INFO: The user is now "e2e-test-build-valuefrom-frlkw-user"
Jul 9 19:19:07.882: INFO: Creating project "e2e-test-build-valuefrom-frlkw"
Jul 9 19:19:08.044: INFO: Waiting on permissions in project "e2e-test-build-valuefrom-frlkw" ...
[BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:27
Jul 9 19:19:08.105: INFO:
docker info output:
Containers: 4
Running: 0
Paused: 0
Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 20
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:38
STEP: waiting for builder service account
STEP: waiting for openshift namespace imagestreams
Jul 9 19:19:08.242: INFO: Running scan #0
Jul 9 19:19:08.242: INFO: Checking language ruby
Jul 9 19:19:08.287: INFO: Checking tag latest
Jul 9 19:19:08.287: INFO: Checking tag 2.0
Jul 9 19:19:08.287: INFO: Checking tag 2.2
Jul 9 19:19:08.287: INFO: Checking tag 2.3
Jul 9 19:19:08.287: INFO: Checking tag 2.4
Jul 9 19:19:08.287: INFO: Checking tag 2.5
Jul 9 19:19:08.287: INFO: Checking language nodejs
Jul 9 19:19:08.327: INFO: Checking tag 4
Jul 9 19:19:08.327: INFO: Checking tag 6
Jul 9 19:19:08.327: INFO: Checking tag 8
Jul 9 19:19:08.327: INFO: Checking tag latest
Jul 9 19:19:08.327: INFO: Checking tag 0.10
Jul 9 19:19:08.327: INFO: Checking language perl
Jul 9 19:19:08.376: INFO: Checking tag 5.16
Jul 9 19:19:08.376: INFO: Checking tag 5.20
Jul 9 19:19:08.376: INFO: Checking tag 5.24
Jul 9 19:19:08.376: INFO: Checking tag latest
Jul 9 19:19:08.376: INFO: Checking language php
Jul 9 19:19:08.415: INFO: Checking tag 7.1
Jul 9 19:19:08.415: INFO: Checking tag latest
Jul 9 19:19:08.415: INFO: Checking tag 5.5
Jul 9 19:19:08.415: INFO: Checking tag 5.6
Jul 9 19:19:08.415: INFO: Checking tag 7.0
Jul 9 19:19:08.415: INFO: Checking language python
Jul 9 19:19:08.455: INFO: Checking tag latest
Jul 9 19:19:08.455: INFO: Checking tag 2.7
Jul 9 19:19:08.455: INFO: Checking tag 3.3
Jul 9 19:19:08.455: INFO: Checking tag 3.4
Jul 9 19:19:08.456: INFO: Checking tag 3.5
Jul 9 19:19:08.456: INFO: Checking tag 3.6
Jul 9 19:19:08.456: INFO: Checking language wildfly
Jul 9 19:19:08.510: INFO: Checking tag 10.1
Jul 9 19:19:08.510: INFO: Checking tag 11.0
Jul 9 19:19:08.510: INFO: Checking tag 12.0
Jul 9 19:19:08.510: INFO: Checking tag 8.1
Jul 9 19:19:08.510: INFO: Checking tag 9.0
Jul 9 19:19:08.510: INFO: Checking tag latest
Jul 9 19:19:08.510: INFO: Checking tag 10.0
Jul 9 19:19:08.510: INFO: Checking language mysql
Jul 9 19:19:08.550: INFO: Checking tag 5.6
Jul 9 19:19:08.550: INFO: Checking tag 5.7
Jul 9 19:19:08.550: INFO: Checking tag latest
Jul 9 19:19:08.550: INFO: Checking tag 5.5
Jul 9 19:19:08.550: INFO: Checking language postgresql
Jul 9 19:19:08.614: INFO: Checking tag 9.2
Jul 9 19:19:08.615: INFO: Checking tag 9.4
Jul 9 19:19:08.615: INFO: Checking tag 9.5
Jul 9 19:19:08.615: INFO: Checking tag 9.6
Jul 9 19:19:08.615: INFO: Checking tag latest
Jul 9 19:19:08.615: INFO: Checking language mongodb
Jul 9 19:19:08.657: INFO: Checking tag 2.6
Jul 9 19:19:08.657: INFO: Checking tag 3.2
Jul 9 19:19:08.657: INFO: Checking tag 3.4
Jul 9 19:19:08.657: INFO: Checking tag latest
Jul 9 19:19:08.657: INFO: Checking tag 2.4
Jul 9 19:19:08.657: INFO: Checking language jenkins
Jul 9 19:19:08.711: INFO: Checking tag 2
Jul 9 19:19:08.711: INFO: Checking tag latest
Jul 9 19:19:08.711: INFO: Checking tag 1
Jul 9 19:19:08.711: INFO: Success!
STEP: creating test image stream
Jul 9 19:19:08.711: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-frlkw-user.kubeconfig --namespace=e2e-test-build-valuefrom-frlkw -f /tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/valuefrom/test-is.json'
imagestream.image.openshift.io "test" created
STEP: creating test secret
Jul 9 19:19:08.959: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-frlkw-user.kubeconfig --namespace=e2e-test-build-valuefrom-frlkw -f /tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/valuefrom/test-secret.yaml'
secret "mysecret" created
STEP: creating test configmap
Jul 9 19:19:09.260: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-frlkw-user.kubeconfig --namespace=e2e-test-build-valuefrom-frlkw -f /tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/valuefrom/test-configmap.yaml'
configmap "myconfigmap" created
[It] should successfully resolve valueFrom in s2i build environment variables [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:61
STEP: creating test successful build config
Jul 9 19:19:09.551: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-frlkw-user.kubeconfig --namespace=e2e-test-build-valuefrom-frlkw -f /tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/valuefrom/successful-sti-build-value-from-config.yaml'
buildconfig.build.openshift.io "mys2itest" created
STEP: starting test build
Jul 9 19:19:09.872: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-valuefrom-frlkw-user.kubeconfig --namespace=e2e-test-build-valuefrom-frlkw mys2itest -o=name'
Jul 9 19:19:10.143: INFO:
start-build output with args [mys2itest -o=name]:
Error><nil>
StdOut>
build/mys2itest-1
StdErr>
Jul 9 19:19:10.144: INFO: Waiting for mys2itest-1 to complete
Jul 9 19:19:36.260: INFO: Done waiting for mys2itest-1: util.BuildResult{BuildPath:"build/mys2itest-1", BuildName:"mys2itest-1", StartBuildStdErr:"", StartBuildStdOut:"build/mys2itest-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc421e63b00), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc42096a1e0)}
with error: <nil>
Jul 9 19:19:36.260: INFO: Running 'oc logs --config=/tmp/e2e-test-build-valuefrom-frlkw-user.kubeconfig --namespace=e2e-test-build-valuefrom-frlkw -f build/mys2itest-1 --timestamps'
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:31
[AfterEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:19:36.858: INFO: namespace : e2e-test-build-valuefrom-frlkw api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:19:42.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:36.620 seconds]
[Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:13
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:26
should successfully resolve valueFrom in s2i build environment variables [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:61
------------------------------
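The build above exercises valueFrom resolution in sourceStrategy environment variables: references to the mysecret and myconfigmap objects created earlier are resolved before the s2i build runs. A minimal sketch of the BuildConfig shape (key names and the git URI are illustrative; the suite's fixture is successful-sti-build-value-from-config.yaml):

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: mys2itest
spec:
  source:
    git:
      uri: https://example.com/app.git   # illustrative source repository
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: test:latest                # the "test" image stream created in the log
      env:
      - name: SECRET_ENV
        valueFrom:
          secretKeyRef:
            name: mysecret               # created from test-secret.yaml above
            key: secretkey               # illustrative key name
      - name: CONFIGMAP_ENV
        valueFrom:
          configMapKeyRef:
            name: myconfigmap            # created from test-configmap.yaml above
            key: configmapkey            # illustrative key name
------------------------------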
SS
------------------------------
[k8s.io] KubeletManagedEtcHosts
should test kubelet managed /etc/hosts file [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:18:03.500: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:18:05.498: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-e2e-kubelet-etc-hosts-mt788
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jul 9 19:18:14.519: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-mt788 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 9 19:18:14.520: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
Jul 9 19:18:14.844: INFO: Exec stderr: ""
Jul 9 19:18:14.844: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-mt788 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 9 19:18:14.844: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
Jul 9 19:18:15.158: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jul 9 19:18:15.159: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-mt788 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 9 19:18:15.159: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
Jul 9 19:18:15.361: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Collecting events from namespace "e2e-tests-e2e-kubelet-etc-hosts-mt788".
STEP: Found 19 events.
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:06 -0700 PDT - event for test-pod: {default-scheduler } Scheduled: Successfully assigned e2e-tests-e2e-kubelet-etc-hosts-mt788/test-pod to ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:07 -0700 PDT - event for test-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0" already present on machine
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:08 -0700 PDT - event for test-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0" already present on machine
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:08 -0700 PDT - event for test-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Created: Created container
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:08 -0700 PDT - event for test-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0" already present on machine
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:08 -0700 PDT - event for test-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Started: Started container
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:08 -0700 PDT - event for test-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Created: Created container
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:08 -0700 PDT - event for test-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Started: Started container
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:09 -0700 PDT - event for test-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Failed: Error: failed to start container "busybox-3": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58: mounting \\\"/tmp/etc-hosts291332576\\\" to rootfs \\\"/var/lib/docker/overlay2/28b4fd916cf3ee847aa8b641cf8791ffc74146925446e90f4b63b4739457a8a4/merged\\\" at \\\"/var/lib/docker/overlay2/28b4fd916cf3ee847aa8b641cf8791ffc74146925446e90f4b63b4739457a8a4/merged/etc/hosts\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:09 -0700 PDT - event for test-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Created: Created container
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:11 -0700 PDT - event for test-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Failed: Error: failed to start container "busybox-3": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58: mounting \\\"/tmp/etc-hosts291332576\\\" to rootfs \\\"/var/lib/docker/overlay2/fb1e65fec82cd9b220a6b4c104980f69c2be4b9ca18b8032df78bc0a3c65cac6/merged\\\" at \\\"/var/lib/docker/overlay2/fb1e65fec82cd9b220a6b4c104980f69c2be4b9ca18b8032df78bc0a3c65cac6/merged/etc/hosts\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:11 -0700 PDT - event for test-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} BackOff: Back-off restarting failed container
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:12 -0700 PDT - event for test-host-network-pod: {default-scheduler } Scheduled: Successfully assigned e2e-tests-e2e-kubelet-etc-hosts-mt788/test-host-network-pod to ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:13 -0700 PDT - event for test-host-network-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Created: Created container
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:13 -0700 PDT - event for test-host-network-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0" already present on machine
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:13 -0700 PDT - event for test-host-network-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Started: Started container
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:13 -0700 PDT - event for test-host-network-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Created: Created container
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:13 -0700 PDT - event for test-host-network-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0" already present on machine
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:13 -0700 PDT - event for test-host-network-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Started: Started container
Jul 9 19:18:15.580: INFO: POD NODE PHASE GRACE CONDITIONS
Jul 9 19:18:15.580: INFO: registry-6559c8c4db-45526 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:44 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:18:15.580: INFO: mongodb-1-deploy ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:18:01 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:18:02 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:18:01 -0700 PDT }]
Jul 9 19:18:15.580: INFO: mongodb-1-zkx79 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:18:02 -0700 PDT } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:18:02 -0700 PDT ContainersNotReady containers with unready status: [mongodb]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC ContainersNotReady containers with unready status: [mongodb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:18:02 -0700 PDT }]
Jul 9 19:18:15.580: INFO: execpod98j4h ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:15 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:16 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:15 -0700 PDT }]
Jul 9 19:18:15.580: INFO: frontend-1-build ip-10-0-130-54.us-west-2.compute.internal Succeeded [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:27 -0700 PDT PodCompleted } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:50 -0700 PDT PodCompleted } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC PodCompleted } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:21 -0700 PDT }]
Jul 9 19:18:15.580: INFO: test-host-network-pod ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:18:13 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:18:12 -0700 PDT }]
Jul 9 19:18:15.580: INFO: test-pod ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:18:06 -0700 PDT } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:18:06 -0700 PDT ContainersNotReady containers with unready status: [busybox-3]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC ContainersNotReady containers with unready status: [busybox-3]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:18:06 -0700 PDT }]
Jul 9 19:18:15.580: INFO: pod-host-path-test ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:17:27 -0700 PDT } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:17:27 -0700 PDT ContainersNotReady containers with unready status: [test-container-1]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC ContainersNotReady containers with unready status: [test-container-1]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:17:27 -0700 PDT }]
Jul 9 19:18:15.580: INFO: kube-apiserver-cn2ps ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:45 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT }]
Jul 9 19:18:15.580: INFO: kube-controller-manager-558dc6fb98-q6vr5 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:17:34 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:18:15.580: INFO: kube-core-operator-75d546fbbb-c7ctx ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:11 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:20 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:11 -0700 PDT }]
Jul 9 19:18:15.580: INFO: kube-dns-787c975867-txmxv ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:22 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:18:15.580: INFO: kube-flannel-bgv4g ip-10-0-141-201.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:59 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT }]
Jul 9 19:18:15.580: INFO: kube-flannel-m5wph ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:58 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT }]
Jul 9 19:18:15.580: INFO: kube-flannel-xcck7 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:17 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }]
Jul 9 19:18:15.580: INFO: kube-proxy-5td7p ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:54 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }]
Jul 9 19:18:15.580: INFO: kube-proxy-l2cnn ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:22 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:53 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:22 -0700 PDT }]
Jul 9 19:18:15.580: INFO: kube-proxy-zsgcb ip-10-0-141-201.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:53 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }]
Jul 9 19:18:15.580: INFO: kube-scheduler-68f8875b5c-s5tdr ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:11 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:18:15.580: INFO: metrics-server-5767bfc576-gfbwb ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:38 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:18:15.580: INFO: openshift-apiserver-rkms5 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:19 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT }]
Jul 9 19:18:15.580: INFO: openshift-controller-manager-99d6586b-qq685 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:17:55 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:18:15.580: INFO: pod-checkpointer-4882g ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:03 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }]
Jul 9 19:18:15.580: INFO: pod-checkpointer-4882g-ip-10-0-35-213.us-west-2.compute.internal ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:14 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:08 -0700 PDT }]
Jul 9 19:18:15.580: INFO: prometheus-0 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:04 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:40 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:04 -0700 PDT }]
Jul 9 19:18:15.580: INFO: tectonic-network-operator-jwwmp ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:13 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }]
Jul 9 19:18:15.580: INFO: tectonic-node-controller-2ctqd ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:05 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:08 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:05 -0700 PDT }]
Jul 9 19:18:15.580: INFO: tectonic-node-controller-operator-648b9f5d6d-nxvlb ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:14 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:18:15.580: INFO: webconsole-6698d4fbbc-rgsw2 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:44 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:18:15.580: INFO: default-http-backend-6985d557bb-8h44n ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:38 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:18:15.580: INFO: router-6796c95fdf-2k4wk ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:37 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:18:15.580: INFO: tectonic-ingress-controller-operator-78c4d6b9fc-bknh9 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:46 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT }]
Jul 9 19:18:15.580: INFO: directory-sync-d84d84d9f-j7pr6 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:18:15.580: INFO: kube-addon-operator-675f99d7f8-c6pdt ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:29 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }]
Jul 9 19:18:15.580: INFO: tectonic-alm-operator-79b6996f74-prs9h ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:35 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }]
Jul 9 19:18:15.580: INFO: tectonic-channel-operator-5d878cd785-l66n4 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }]
Jul 9 19:18:15.580: INFO: tectonic-clu-6b8d87785f-fswbx ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:06 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:11 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:06 -0700 PDT }]
Jul 9 19:18:15.580: INFO: tectonic-node-agent-r77mj ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:20 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 18:37:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:20 -0700 PDT }]
Jul 9 19:18:15.580: INFO: tectonic-node-agent-rrwlg ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 18:12:57 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }]
Jul 9 19:18:15.580: INFO: tectonic-stats-emitter-d87f669fd-988nl ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:29 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:36 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:23 -0700 PDT }]
Jul 9 19:18:15.580: INFO: tectonic-utility-operator-786b69fc8b-4xffz ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:41 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT }]
Jul 9 19:18:15.580: INFO:
Jul 9 19:18:15.618: INFO:
Logging node info for node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:18:15.658: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-130-54.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-130-54.us-west-2.compute.internal,UID:2f71bed0-83b7-11e8-84c6-0af96768d57e,ResourceVersion:77546,Generation:0,CreationTimestamp:2018-07-09 13:32:23 -0700 PDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2b,kubernetes.io/hostname: ip-10-0-130-54,node-role.kubernetes.io/worker: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"3e:08:91:8f:b9:a5"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.130.54,node-configuration.v1.coreos.com/currentConfig: worker-2650561509,node-configuration.v1.coreos.com/desiredConfig: worker-2650561509,node-configuration.v1.coreos.com/targetConfig: worker-2650561509,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.2.0/24,ExternalID:,ProviderID:aws:///us-west-2b/i-0cb9cec2620663d39,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8365150208 0} {<nil>} 8169092Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8260292608 0} {<nil>} 8066692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:18:06 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:18:06 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:18:06 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:18:06 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:18:06 -0700 PDT 2018-07-09 13:33:23 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.130.54} {InternalDNS ip-10-0-130-54.us-west-2.compute.internal} {Hostname ip-10-0-130-54}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC283016-6CE7-ACE7-0F9A-02CE10505945,BootID:cfad64a2-03d7-403a-bd51-76866880a650,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[openshift/origin-haproxy-router@sha256:f0a71ada9e9ee48529540c2d4938b9caa55f9a0ac8a3be598e269ca5cebf70c0 openshift/origin-haproxy-router:v3.10.0-alpha.0] 1284960820} 
{[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[docker-registry.default.svc:5000/e2e-test-build-valuefrom-lxhjr/test@sha256:6daa01a6f7f0784905bf9dcbce49826d73d7c3c1d62a802f875ee7c10db02960 docker-registry.default.svc:5000/e2e-test-build-valuefrom-lxhjr/test:latest] 613134454} {[docker-registry.default.svc:5000/e2e-test-build-valuefrom-vmtzs/test@sha256:92c5e723d97318711a71afb9ee5c12c3c48b98d0f2aaa5e954095fabbcb505ee docker-registry.default.svc:5000/e2e-test-build-valuefrom-vmtzs/test:latest] 613133841} {[docker-registry.default.svc:5000/e2e-test-templates-l8j5q/cakephp-mysql-example@sha256:02d80c750d1e71afc7792f55f935c3dd6cde1788bee2b53ab554d29c903ca064 docker-registry.default.svc:5000/e2e-test-templates-l8j5q/cakephp-mysql-example:latest] 603384691} {[docker-registry.default.svc:5000/openshift/php@sha256:164fa6fc06cdd98c98be09ecff5398feef3d479652899f983cf41e644d604ff0 centos/php-70-centos7@sha256:164fa6fc06cdd98c98be09ecff5398feef3d479652899f983cf41e644d604ff0 centos/php-70-centos7:latest] 589408618} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejspass@sha256:880359284c1e0933fe5f2db29b8c4d948b70da3dfb26a0462f68b23397740b0a docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejspass:latest] 568094192} {[docker-registry.default.svc:5000/openshift/php@sha256:59c3d53372cd7097494187f5a58bab58a1d956a340b70a23c84a0d000a565cbe] 567254500} {[docker-registry.default.svc:5000/e2e-test-build-timing-bl897/test@sha256:539e80a4de02794f6126cffce75562bcb721041c6d443c5ced15ba286d70e229 docker-registry.default.svc:5000/e2e-test-build-timing-bl897/test:latest] 566117187} {[docker-registry.default.svc:5000/e2e-test-build-timing-wkh4w/test@sha256:e0eeef684e9de55219871fa9e360d73a1163cfc407c626eade862cbee5a9bbc5 docker-registry.default.svc:5000/e2e-test-build-timing-wkh4w/test:latest] 566117040} {[centos/ruby-22-centos7@sha256:a18c8706118a5c4c9f1adf045024d2abf06ba632b5674b23421019ee4d3edcae centos/ruby-22-centos7:latest] 566117040} {[centos/nodejs-6-centos7@sha256:b2867b5008d9e975b3d4710ec0f31cdc96b079b83334b17e03a60602a7a590fc] 560696751} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejsroot@sha256:0397f7e12d87d62c539356a4936348d0a8deb40e1b5e970cdd1744d3e6ffa05a docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejsroot:latest] 560696751} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-phq84/nodejsroot@sha256:4084131a9910c10780186608faf5a9643de0f18d09c27fe828499a8d180abfba docker-registry.default.svc:5000/e2e-test-s2i-build-root-phq84/nodejsroot:latest] 560696751} {[docker-registry.default.svc:5000/openshift/ruby@sha256:2e83b9e07e85960060096b6aff7ee202a5f52e0e18447641b080b1f3879e0901] 536571487} {[docker-registry.default.svc:5000/openshift/ruby@sha256:8f00b7a5789887b72db0415355830c87e18804b774a922a424736f5237a44933] 518934530} {[docker-registry.default.svc:5000/e2e-test-new-app-xn8nh/a234567890123456789012345678901234567890123456789012345678@sha256:a9ecb5931f283c598dcaf3aca9025599eb71115bd0f2cd0f1989a9f37394efad docker-registry.default.svc:5000/e2e-test-new-app-xn8nh/a234567890123456789012345678901234567890123456789012345678:latest] 511744495} {[docker-registry.default.svc:5000/e2e-test-prometheus-4fqst/origin-nodejs-sample@sha256:95a78c60dc1709c2212cd8cc48cd3fffe6cdcdd847674497d9aa5d7891551699 docker-registry.default.svc:5000/e2e-test-prometheus-4fqst/origin-nodejs-sample:latest] 511744370} 
{[docker-registry.default.svc:5000/openshift/mongodb@sha256:3bb2aed7578ab5b6ba2bf22993df3c73ef91bdb02e273cc0ce8e529de7ee5660] 506453985} {[docker-registry.default.svc:5000/openshift/ruby@sha256:0eaaed9fae1b0d9bc8ed73b93d581c6ab019a92277484c9acf52fa60b3269a7c] 504578679} {[docker-registry.default.svc:5000/openshift/nodejs@sha256:3f262ed6ed7ec7497576e1117ddfafacaa23fecf34693e05a0abcb652b93e653 centos/nodejs-8-centos7@sha256:3f262ed6ed7ec7497576e1117ddfafacaa23fecf34693e05a0abcb652b93e653] 504452018} {[docker-registry.default.svc:5000/openshift/mongodb@sha256:896482969cd659b419bc444c153a74d11820655c7ed19b5094b8eb041f0065d6] 487132847} {[openshift/origin-docker-builder@sha256:4fe8032f87d2f8485a711ec60a9ffb330e42a6cd8d232ad3cf63c42471cfab29 openshift/origin-docker-builder:latest] 447580928} {[docker-registry.default.svc:5000/openshift/mysql@sha256:d03537ef57d51b13e6ad4a73a382ca180a0e02d975c8237790410f45865aae3c] 429435940} {[openshift/origin-haproxy-router@sha256:485fa86ac97b0d289411b3216fb8970989cd580817ebb5fcbb0f83a6dc2466f5 openshift/origin-haproxy-router:latest] 394965919} {[openshift/origin-deployer@sha256:1295e5be56fc03d4c482194378a882f2e96a8d23eadaf6dd32d603d3e877df99 openshift/origin-deployer:latest] 371674595} {[openshift/origin-web-console@sha256:d2cbbb533d26996226add8cb327cb2060e7a03c6aa96ad94cd236d4064c094ce openshift/origin-web-console:latest] 336636057} {[openshift/prometheus@sha256:35e2e0efc874c055be60a025874256816c98b9cebc10f259d7fb806bbe68badf openshift/prometheus:v2.2.1] 317896379} {[openshift/origin-docker-registry@sha256:c40ebb707721327c3b9c79f0e8e7f02483f034355d4149479333cc134b72967c openshift/origin-docker-registry:latest] 302637209} {[openshift/origin-pod@sha256:8fbd41f21824f5981716568790c5f78a4710bb0709ce9c473eb21ad2fbc5e877 openshift/origin-pod:latest] 251747200} {[openshift/origin-base@sha256:43dd97db435025eee02606658cfcccbc0a8ac4135e0d8870e91930d6cab8d1fd openshift/origin-base:latest] 228695137} {[openshift/oauth-proxy@sha256:4b73830ee6f7447d0921eedc3946de50016eb8f048d66ea3969abc4116f1e42a openshift/oauth-proxy:v1.0.0] 228241928} {[openshift/prometheus-alertmanager@sha256:35443abf6c5cf99b080307fe0f98098334f299780537a3e61ac5604cbfe48f7e openshift/prometheus-alertmanager:v0.14.0] 221857684} {[openshift/prometheus-alert-buffer@sha256:076f8dd576806f5c2dde7e536d020c31aa7d2ec7dcea52da6cbb944895def7ba openshift/prometheus-alert-buffer:v0.0.2] 200521084} {[docker-registry.default.svc:5000/e2e-test-build-multistage-9cdjf/multi-stage@sha256:df3e69e3fe1bc86897717b020b6caa000f1f97c14dc0b3853ca0d7149412da54 docker-registry.default.svc:5000/e2e-test-build-multistage-9cdjf/multi-stage:v1] 199835207} {[centos@sha256:b67d21dfe609ddacf404589e04631d90a342921e81c40aeaf3391f6717fa5322 centos@sha256:eed5b251b615d1e70b10bcec578d64e8aa839d2785c2ffd5424e472818c42755 centos:7 centos:centos7] 199678471} {[docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-s4b2n/image1@sha256:3967cd8851952bbba0b3a4d9c038f36dc5001463c8521d6955ab0f3f4598d779 docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-s4b2n/image1:latest] 199678471} {[k8s.gcr.io/nginx-slim-amd64@sha256:6654db6d4028756062edac466454ee5c9cf9b20ef79e35a81e3c840031eb1e2b k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[gcr.io/google_containers/metrics-server-amd64@sha256:54d2cf293e01f72d9be0e7c4f2c98e31f599088a9426a6415fe62426d446f5b2 gcr.io/google_containers/metrics-server-amd64:v0.2.0] 96501893} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 
quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/directory-sync@sha256:e5e7fe901868853d89c2c0697cc88f0686c6ba1178ca045ec57bfd18e7000048 quay.io/coreos/directory-sync:v0.0.2] 38433928} {[quay.io/coreos/tectonic-node-agent@sha256:1eb929223a1ecc12b246cbe4ba366092b139067aadb809fac8189b8ca274fab4 quay.io/coreos/tectonic-node-agent:92fb13f8b2a46702b8c10f278408f37d825ee2cb] 36791814} {[k8s.gcr.io/addon-resizer@sha256:d00afd42fc267fa3275a541083cfe67d160f966c788174b44597434760e1e1eb k8s.gcr.io/addon-resizer:2.1] 26450138} {[quay.io/coreos/tectonic-error-server@sha256:aefa0a012e103bee299c17e798e5830128588b6ef5d4d1f6bc8ae5804bc4d8cd quay.io/coreos/tectonic-error-server:1.1] 12714516} {[gcr.io/google_containers/dnsutils@sha256:cd9182f6d74e616942db1cef6f25e1e54b49ba0330c2e19d3ec061f027666cc0 gcr.io/google_containers/dnsutils:e2e] 8897789} {[gcr.io/kubernetes-e2e-test-images/hostexec-amd64@sha256:bdaecec5adfa7c79e9525c0992fdab36c2d68066f5e91eff0d1d9e8d73c654ea gcr.io/kubernetes-e2e-test-images/hostexec-amd64:1.1] 8407119} {[gcr.io/kubernetes-e2e-test-images/netexec-amd64@sha256:2edfad424a541b9e024f26368d3a5b7dcc1d7cd27a4ee8c1d8c3f81d9209ab2e gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0] 6227659} {[openshift/hello-openshift@sha256:aaea76ff622d2f8bcb32e538e7b3cd0ef6d291953f3e7c9f556c1ba5baf47e2e openshift/hello-openshift:latest] 6089990}],VolumesInUse:[],VolumesAttached:[],},}
Jul 9 19:18:15.658: INFO:
Logging kubelet events for node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:18:15.703: INFO:
Logging pods the kubelet thinks are on node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:18:15.838: INFO: router-6796c95fdf-2k4wk started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:15.838: INFO: Container router ready: true, restart count 0
Jul 9 19:18:15.838: INFO: mongodb-1-deploy started at 2018-07-09 19:18:01 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:15.838: INFO: Container deployment ready: true, restart count 0
Jul 9 19:18:15.838: INFO: pod-host-path-test started at 2018-07-09 19:17:27 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:18:15.838: INFO: Container test-container-1 ready: false, restart count 0
Jul 9 19:18:15.838: INFO: Container test-container-2 ready: true, restart count 0
Jul 9 19:18:15.838: INFO: mongodb-1-zkx79 started at 2018-07-09 19:18:02 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:15.838: INFO: Container mongodb ready: false, restart count 0
Jul 9 19:18:15.838: INFO: frontend-1-build started at 2018-07-09 19:14:21 -0700 PDT (2+1 container statuses recorded)
Jul 9 19:18:15.838: INFO: Init container git-clone ready: true, restart count 0
Jul 9 19:18:15.838: INFO: Init container manage-dockerfile ready: true, restart count 0
Jul 9 19:18:15.838: INFO: Container sti-build ready: false, restart count 0
Jul 9 19:18:15.838: INFO: default-http-backend-6985d557bb-8h44n started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:15.838: INFO: Container default-http-backend ready: true, restart count 0
Jul 9 19:18:15.838: INFO: registry-6559c8c4db-45526 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:15.838: INFO: Container registry ready: true, restart count 0
Jul 9 19:18:15.838: INFO: test-pod started at 2018-07-09 19:18:06 -0700 PDT (0+3 container statuses recorded)
Jul 9 19:18:15.838: INFO: Container busybox-1 ready: true, restart count 0
Jul 9 19:18:15.838: INFO: Container busybox-2 ready: true, restart count 0
Jul 9 19:18:15.838: INFO: Container busybox-3 ready: false, restart count 1
Jul 9 19:18:15.838: INFO: prometheus-0 started at 2018-07-09 13:50:04 -0700 PDT (0+6 container statuses recorded)
Jul 9 19:18:15.838: INFO: Container alert-buffer ready: true, restart count 0
Jul 9 19:18:15.838: INFO: Container alertmanager ready: true, restart count 0
Jul 9 19:18:15.838: INFO: Container alertmanager-proxy ready: true, restart count 0
Jul 9 19:18:15.838: INFO: Container alerts-proxy ready: true, restart count 0
Jul 9 19:18:15.838: INFO: Container prom-proxy ready: true, restart count 0
Jul 9 19:18:15.838: INFO: Container prometheus ready: true, restart count 0
Jul 9 19:18:15.838: INFO: kube-proxy-5td7p started at 2018-07-09 13:32:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:15.838: INFO: Container kube-proxy ready: true, restart count 0
Jul 9 19:18:15.838: INFO: metrics-server-5767bfc576-gfbwb started at 2018-07-09 13:33:23 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:18:15.838: INFO: Container metrics-server ready: true, restart count 0
Jul 9 19:18:15.838: INFO: Container metrics-server-nanny ready: true, restart count 0
Jul 9 19:18:15.838: INFO: test-host-network-pod started at 2018-07-09 19:18:12 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:18:15.838: INFO: Container busybox-1 ready: true, restart count 0
Jul 9 19:18:15.839: INFO: Container busybox-2 ready: true, restart count 0
Jul 9 19:18:15.839: INFO: execpod98j4h started at 2018-07-09 19:14:15 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:15.839: INFO: Container exec ready: true, restart count 0
Jul 9 19:18:15.839: INFO: kube-flannel-xcck7 started at 2018-07-09 13:32:23 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:18:15.839: INFO: Container install-cni ready: true, restart count 0
Jul 9 19:18:15.839: INFO: Container kube-flannel ready: true, restart count 0
Jul 9 19:18:15.839: INFO: directory-sync-d84d84d9f-j7pr6 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:15.839: INFO: Container directory-sync ready: true, restart count 0
Jul 9 19:18:15.839: INFO: webconsole-6698d4fbbc-rgsw2 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:15.839: INFO: Container webconsole ready: true, restart count 0
Jul 9 19:18:15.839: INFO: tectonic-node-agent-rrwlg started at 2018-07-09 13:32:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:15.839: INFO: Container node-agent ready: true, restart count 3
W0709 19:18:15.878394 11713 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 9 19:18:16.036: INFO:
Latency metrics for node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:18:16.036: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:39.952369s}
Jul 9 19:18:16.036: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:30.138495s}
Jul 9 19:18:16.036: INFO: {Operation:pull_image Method:docker_operations_latency_microseconds Quantile:0.99 Latency:18.561353s}
Jul 9 19:18:16.036: INFO:
Logging node info for node ip-10-0-141-201.us-west-2.compute.internal
Jul 9 19:18:16.083: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-141-201.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-141-201.us-west-2.compute.internal,UID:ab76db34-83b4-11e8-8888-0af96768d57e,ResourceVersion:77561,Generation:0,CreationTimestamp:2018-07-09 13:14:22 -0700 PDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2b,kubernetes.io/hostname: ip-10-0-141-201,node-role.kubernetes.io/etcd: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"b6:11:a8:d0:6d:85"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.141.201,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.1.0/24,ExternalID:,ProviderID:aws:///us-west-2b/i-03457d640f9c71dd1,Unschedulable:false,Taints:[{node-role.kubernetes.io/etcd NoSchedule <nil>}],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8365146112 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8260288512 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:18:10 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:18:10 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:18:10 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:18:10 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:18:10 -0700 PDT 2018-07-09 13:16:04 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.141.201} {InternalDNS ip-10-0-141-201.us-west-2.compute.internal} {Hostname ip-10-0-141-201}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC2F6BCA-4D59-F6AA-8C7B-027F94D52D78,BootID:92773d40-1311-4ad5-b294-38db65faf16c,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 
quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/etcd@sha256:688e6c102955fe927c34db97e6352d0e0962554735b2db5f2f66f3f94cfe8fd1 quay.io/coreos/etcd:v3.2.14] 37228348} {[quay.io/coreos/kube-client-agent@sha256:8564ab65bcb1064006d2fc9c6e32a5ca3f4326cdd2da9a2efc4fb7cc0e0b6041 quay.io/coreos/kube-client-agent:678cc8e6841e2121ebfdb6e2db568fce290b67d6] 33236131} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],},}
Jul 9 19:18:16.083: INFO:
Logging kubelet events for node ip-10-0-141-201.us-west-2.compute.internal
Jul 9 19:18:16.120: INFO:
Logging pods the kubelet thinks are on node ip-10-0-141-201.us-west-2.compute.internal
Jul 9 19:18:46.159: INFO: Unable to retrieve kubelet pods for node ip-10-0-141-201.us-west-2.compute.internal: the server is currently unable to handle the request (get nodes ip-10-0-141-201.us-west-2.compute.internal:10250)
Jul 9 19:18:46.159: INFO:
Logging node info for node ip-10-0-35-213.us-west-2.compute.internal
Jul 9 19:18:46.202: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-35-213.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-35-213.us-west-2.compute.internal,UID:a83cf873-83b4-11e8-8888-0af96768d57e,ResourceVersion:77848,Generation:0,CreationTimestamp:2018-07-09 13:14:17 -0700 PDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2c,kubernetes.io/hostname: ip-10-0-35-213,node-role.kubernetes.io/master: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"5e:08:be:54:0d:9f"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.35.213,node-configuration.v1.coreos.com/currentConfig: master-2063737633,node-configuration.v1.coreos.com/desiredConfig: master-2063737633,node-configuration.v1.coreos.com/targetConfig: master-2063737633,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.0.0/24,ExternalID:,ProviderID:aws:///us-west-2c/i-0e1d36783c9705b28,Unschedulable:false,Taints:[{node-role.kubernetes.io/master NoSchedule <nil>}],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8365146112 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8260288512 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:18:39 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:18:39 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:18:39 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:18:39 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:18:39 -0700 PDT 2018-07-09 13:16:08 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.35.213} {ExternalIP 34.220.249.237} {InternalDNS ip-10-0-35-213.us-west-2.compute.internal} {ExternalDNS ec2-34-220-249-237.us-west-2.compute.amazonaws.com} {Hostname ip-10-0-35-213}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC2ED297-E036-AA0D-C4ED-9057B3EA9001,BootID:7f784e0b-09a6-495a-b787-3d8619214f8a,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 
quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[openshift/origin-hypershift@sha256:3b26011ae771a6036a7533d970052be5c04bc1f6e6812314ffefd902f40910fd openshift/origin-hypershift:latest] 518022163} {[openshift/origin-hyperkube@sha256:11a08060b48d226d64d4bb5234f2386bf22472a0835c5b91f0fb0db25b0a7e19 openshift/origin-hyperkube:latest] 498702039} {[quay.io/coreos/awscli@sha256:1d6ea2f37c248a4f4f2a70126f0b8555fd0804d4e65af3b30c3a949247ea13a6 quay.io/coreos/awscli:025a357f05242fdad6a81e8a6b520098aa65a600] 97521631} {[quay.io/coreos/bootkube@sha256:63afddd30deedff273d65607f4fcf0b331f4418838a00c69b6ab7a5754a24f5a quay.io/coreos/bootkube:v0.10.0] 84921995} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[gcr.io/google_containers/k8s-dns-kube-dns-amd64@sha256:6d8e0da4fb46e9ea2034a3f4cab0e095618a2ead78720c12e791342738e5f85d gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.8] 50456751} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/tectonic-stats@sha256:e800fe60dd1a0f89f8ae85caae9209201254e17d889d664d633ed08e274e2a39 quay.io/coreos/tectonic-stats:6e882361357fe4b773adbf279cddf48cb50164c1] 48779830} {[quay.io/coreos/pod-checkpointer@sha256:1e1e48228f872d56c8a57a5e12adb5239ae9e6206536baf2904e4bf03314c8e8 quay.io/coreos/pod-checkpointer:9dc83e1ab3bc36ca25c9f7c18ddef1b91d4a0558] 47992230} {[quay.io/coreos/tectonic-network-operator-dev@sha256:e29d797f5740cf6f5c0ccc0de2b3e606d187acbdc0bb79a4397c058d8840c8fe quay.io/coreos/tectonic-network-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44068170} {[quay.io/coreos/tectonic-node-controller-operator-dev@sha256:7a31568c6c2e398cffa7e8387cf51543e3bf1f01b4a050a5d00a9b593c3dace0 quay.io/coreos/tectonic-node-controller-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44053165} {[quay.io/coreos/kube-addon-operator-dev@sha256:e327727a93813c31f6d65f76f2998722754b8ccb5110949153e55f2adbc2374e quay.io/coreos/kube-addon-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44052211} {[quay.io/coreos/tectonic-utility-operator-dev@sha256:4fb4de52c7aa64ce124e1bf73fb27989356c414101ecc19ca4ec9ab80e00a88d quay.io/coreos/tectonic-utility-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 43818409} {[quay.io/coreos/tectonic-ingress-controller-operator-dev@sha256:5e96253c8fe8357473d4806b116fcf03fe18dcad466a88083f9b9310045821f1 quay.io/coreos/tectonic-ingress-controller-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 43808038} {[quay.io/coreos/tectonic-alm-operator@sha256:ce32e6d4745040be8807d09eb925b2b076b60fb0a93e33302b74a5cc8f294ca5 quay.io/coreos/tectonic-alm-operator:v0.3.1] 43202998} {[gcr.io/google_containers/k8s-dns-sidecar-amd64@sha256:23df717980b4aa08d2da6c4cfa327f1b730d92ec9cf740959d2d5911830d82fb gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.8] 42210862} {[gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64@sha256:93c827f018cf3322f1ff2aa80324a0306048b0a69bc274e423071fb0d2d29d8b gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.8] 40951779} {[quay.io/coreos/kube-core-operator-dev@sha256:6cc0dd2405f19014b41a0eed57c39160aeb92c2380ac8f8a067ce7dee476cba2 quay.io/coreos/kube-core-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 40849618} {[quay.io/coreos/tectonic-channel-operator-dev@sha256:6eeb84c385333755a2189c199587bc26db6c5d897e1962d7e1047dec2531e85e 
quay.io/coreos/tectonic-channel-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 40523592} {[quay.io/coreos/etcd@sha256:688e6c102955fe927c34db97e6352d0e0962554735b2db5f2f66f3f94cfe8fd1 quay.io/coreos/etcd:v3.2.14] 37228348} {[quay.io/coreos/tectonic-node-agent@sha256:1eb929223a1ecc12b246cbe4ba366092b139067aadb809fac8189b8ca274fab4 quay.io/coreos/tectonic-node-agent:92fb13f8b2a46702b8c10f278408f37d825ee2cb] 36791814} {[quay.io/coreos/kube-core-renderer-dev@sha256:a595dfe57b7992971563fcea8ac1858c306529a465f9b690911f4220d93d3c5c quay.io/coreos/kube-core-renderer-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 36535818} {[quay.io/coreos/kube-etcd-signer-server@sha256:c4c0becf6779523af5b644b53375d61bed9c4688d496cb2f88d4f08024ac5390 quay.io/coreos/kube-etcd-signer-server:678cc8e6841e2121ebfdb6e2db568fce290b67d6] 34655544} {[quay.io/coreos/tectonic-node-controller-dev@sha256:c9c17f7c4c738e519e36224ae8c71d3a881b92ffb86fdb75f358efebafa27d84 quay.io/coreos/tectonic-node-controller-dev:a437848532713f2fa4137e9a0f4f6a689cf554a8] 25570332} {[quay.io/coreos/tectonic-clu@sha256:4e6a907a433e741632c8f9a7d9d9009bc08ac494dce05e0a19f8fa0a440a3926 quay.io/coreos/tectonic-clu:v0.0.1] 5081911} {[quay.io/coreos/tectonic-stats-extender@sha256:6e7fe41ca2d63791c08d2cc4b4311d9e01b37fa3dc116d3e77e7306cbe29a0f1 quay.io/coreos/tectonic-stats-extender:487b3da4e175da96dabfb44fba65cdb8b823db2e] 2818916} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],},}
Jul 9 19:18:46.202: INFO:
Logging kubelet events for node ip-10-0-35-213.us-west-2.compute.internal
Jul 9 19:18:46.238: INFO:
Logging pods the kubelet thinks are on node ip-10-0-35-213.us-west-2.compute.internal
Jul 9 19:18:46.385: INFO: kube-addon-operator-675f99d7f8-c6pdt started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container kube-addon-operator ready: true, restart count 0
Jul 9 19:18:46.385: INFO: pod-checkpointer-4882g-ip-10-0-35-213.us-west-2.compute.internal started at <nil> (0+0 container statuses recorded)
Jul 9 19:18:46.385: INFO: kube-controller-manager-558dc6fb98-q6vr5 started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container kube-controller-manager ready: true, restart count 1
Jul 9 19:18:46.385: INFO: kube-scheduler-68f8875b5c-s5tdr started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container kube-scheduler ready: true, restart count 0
Jul 9 19:18:46.385: INFO: tectonic-clu-6b8d87785f-fswbx started at 2018-07-09 13:19:06 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container tectonic-clu ready: true, restart count 0
Jul 9 19:18:46.385: INFO: tectonic-stats-emitter-d87f669fd-988nl started at 2018-07-09 13:19:23 -0700 PDT (1+2 container statuses recorded)
Jul 9 19:18:46.385: INFO: Init container tectonic-stats-extender-init ready: true, restart count 0
Jul 9 19:18:46.385: INFO: Container tectonic-stats-emitter ready: true, restart count 0
Jul 9 19:18:46.385: INFO: Container tectonic-stats-extender ready: true, restart count 0
Jul 9 19:18:46.385: INFO: tectonic-channel-operator-5d878cd785-l66n4 started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container tectonic-channel-operator ready: true, restart count 0
Jul 9 19:18:46.385: INFO: kube-proxy-l2cnn started at 2018-07-09 13:14:22 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container kube-proxy ready: true, restart count 0
Jul 9 19:18:46.385: INFO: openshift-apiserver-rkms5 started at 2018-07-09 13:14:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container openshift-apiserver ready: true, restart count 0
Jul 9 19:18:46.385: INFO: tectonic-network-operator-jwwmp started at 2018-07-09 13:14:24 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container tectonic-network-operator ready: true, restart count 0
Jul 9 19:18:46.385: INFO: kube-dns-787c975867-txmxv started at 2018-07-09 13:16:08 -0700 PDT (0+3 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container dnsmasq ready: true, restart count 0
Jul 9 19:18:46.385: INFO: Container kubedns ready: true, restart count 0
Jul 9 19:18:46.385: INFO: Container sidecar ready: true, restart count 0
Jul 9 19:18:46.385: INFO: kube-apiserver-cn2ps started at 2018-07-09 13:14:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container kube-apiserver ready: true, restart count 4
Jul 9 19:18:46.385: INFO: tectonic-node-controller-2ctqd started at 2018-07-09 13:18:05 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container tectonic-node-controller ready: true, restart count 0
Jul 9 19:18:46.385: INFO: tectonic-alm-operator-79b6996f74-prs9h started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container tectonic-alm-operator ready: true, restart count 0
Jul 9 19:18:46.385: INFO: tectonic-ingress-controller-operator-78c4d6b9fc-bknh9 started at 2018-07-09 13:18:13 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container tectonic-ingress-controller-operator ready: true, restart count 0
Jul 9 19:18:46.385: INFO: tectonic-node-agent-r77mj started at 2018-07-09 13:19:20 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container node-agent ready: true, restart count 4
Jul 9 19:18:46.385: INFO: tectonic-utility-operator-786b69fc8b-4xffz started at 2018-07-09 13:18:13 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container tectonic-utility-operator ready: true, restart count 0
Jul 9 19:18:46.385: INFO: pod-checkpointer-4882g started at 2018-07-09 13:14:24 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container pod-checkpointer ready: true, restart count 0
Jul 9 19:18:46.385: INFO: kube-flannel-m5wph started at 2018-07-09 13:15:39 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container install-cni ready: true, restart count 0
Jul 9 19:18:46.385: INFO: Container kube-flannel ready: true, restart count 0
Jul 9 19:18:46.385: INFO: openshift-controller-manager-99d6586b-qq685 started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container openshift-controller-manager ready: true, restart count 3
Jul 9 19:18:46.385: INFO: tectonic-node-controller-operator-648b9f5d6d-nxvlb started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container tectonic-node-controller-operator ready: true, restart count 0
Jul 9 19:18:46.385: INFO: kube-core-operator-75d546fbbb-c7ctx started at 2018-07-09 13:18:11 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container kube-core-operator ready: true, restart count 0
W0709 19:18:46.427641 11713 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 9 19:18:46.567: INFO:
Latency metrics for node ip-10-0-35-213.us-west-2.compute.internal
STEP: Dumping a list of prepulled images on each node...
Jul 9 19:18:46.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-mt788" for this suite.
Jul 9 19:19:44.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:19:47.721: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-mt788, resource: bindings, ignored listing per whitelist
Jul 9 19:19:48.939: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-mt788 deletion completed in 1m2.271310319s
• Failure [105.439 seconds]
[k8s.io] KubeletManagedEtcHosts
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should test kubelet managed /etc/hosts file [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] [It]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
failed to execute command in pod test-pod, container busybox-3: unable to upgrade connection: container not found ("busybox-3")
Expected error:
<*errors.errorString | 0xc421007490>: {
s: "unable to upgrade connection: container not found (\"busybox-3\")",
}
unable to upgrade connection: container not found ("busybox-3")
not to have occurred
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/exec_util.go:104
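The failure above comes from the framework exec'ing into busybox-3, which the kubelet dump earlier shows as not ready after a restart (restart count 1). A minimal sketch of the pod-exec call path the framework relies on, assuming client-go and illustrative namespace/pod names:

// Sketch only: mirrors the failing exec into "busybox-3"; namespace and
// kubeconfig handling here are illustrative, not taken from the test code.
package main

import (
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Builds POST .../namespaces/default/pods/test-pod/exec?container=busybox-3&...
	req := client.CoreV1().RESTClient().Post().
		Resource("pods").Namespace("default").Name("test-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "busybox-3",
			Command:   []string{"cat", "/etc/hosts"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	// The kubelet upgrades this connection to SPDY; if the named container is
	// not currently running, the upgrade is refused with the error seen above:
	// `unable to upgrade connection: container not found ("busybox-3")`.
	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: os.Stdout, Stderr: os.Stderr}); err != nil {
		panic(err)
	}
}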
------------------------------
[sig-storage] ConfigMap
should be consumable from pods in volume with mappings as non-root [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:19:40.826: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:19:42.560: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-configmap-mq4jz
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating configMap with name configmap-test-volume-map-b4fc7a45-83e7-11e8-8fe2-28d244b00276
STEP: Creating a pod to test consume configMaps
Jul 9 19:19:43.269: INFO: Waiting up to 5m0s for pod "pod-configmaps-b50214c9-83e7-11e8-8fe2-28d244b00276" in namespace "e2e-tests-configmap-mq4jz" to be "success or failure"
Jul 9 19:19:43.308: INFO: Pod "pod-configmaps-b50214c9-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 39.425503ms
Jul 9 19:19:45.341: INFO: Pod "pod-configmaps-b50214c9-83e7-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.071906859s
STEP: Saw pod success
Jul 9 19:19:45.341: INFO: Pod "pod-configmaps-b50214c9-83e7-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:19:45.374: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-configmaps-b50214c9-83e7-11e8-8fe2-28d244b00276 container configmap-volume-test: <nil>
STEP: delete the pod
Jul 9 19:19:45.459: INFO: Waiting for pod pod-configmaps-b50214c9-83e7-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:19:45.490: INFO: Pod pod-configmaps-b50214c9-83e7-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:19:45.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-mq4jz" for this suite.
Jul 9 19:19:51.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:19:55.501: INFO: namespace: e2e-tests-configmap-mq4jz, resource: bindings, ignored listing per whitelist
Jul 9 19:19:55.605: INFO: namespace e2e-tests-configmap-mq4jz deletion completed in 10.039097537s
• [SLOW TEST:14.779 seconds]
[sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings as non-root [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
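The test above creates a short-lived pod that mounts a configMap through a volume with key-to-path mappings and runs the container as a non-root UID. A minimal sketch of that shape of pod spec, assuming k8s.io/api types; the names, key, and mount path are illustrative:

// Sketch only: prints the pod manifest rather than creating it.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001) // non-root user the container runs as
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:            "configmap-volume-test",
				Image:           "busybox",
				Command:         []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "configmap-volume", MountPath: "/etc/configmap-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
						// The "mappings" part: key "data-2" lands at a nested path.
						Items: []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}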
------------------------------
S
------------------------------
[sig-storage] Projected
should be consumable from pods in volume with mappings as non-root with FSGroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:469
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:19:55.608: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:19:57.646: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-zd4xr
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:469
STEP: Creating configMap with name projected-configmap-test-volume-map-bdf766a8-83e7-11e8-8fe2-28d244b00276
STEP: Creating a pod to test consume configMaps
Jul 9 19:19:58.338: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bdfcfa30-83e7-11e8-8fe2-28d244b00276" in namespace "e2e-tests-projected-zd4xr" to be "success or failure"
Jul 9 19:19:58.369: INFO: Pod "pod-projected-configmaps-bdfcfa30-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 31.096945ms
Jul 9 19:20:00.402: INFO: Pod "pod-projected-configmaps-bdfcfa30-83e7-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063701216s
STEP: Saw pod success
Jul 9 19:20:00.402: INFO: Pod "pod-projected-configmaps-bdfcfa30-83e7-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:20:00.441: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-projected-configmaps-bdfcfa30-83e7-11e8-8fe2-28d244b00276 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jul 9 19:20:00.515: INFO: Waiting for pod pod-projected-configmaps-bdfcfa30-83e7-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:20:00.548: INFO: Pod pod-projected-configmaps-bdfcfa30-83e7-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:20:00.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zd4xr" for this suite.
Jul 9 19:20:06.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:20:08.966: INFO: namespace: e2e-tests-projected-zd4xr, resource: bindings, ignored listing per whitelist
Jul 9 19:20:10.513: INFO: namespace e2e-tests-projected-zd4xr deletion completed in 9.924000537s
• [SLOW TEST:14.905 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
should be consumable from pods in volume with mappings as non-root with FSGroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:469
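This variant delivers the configMap through a projected volume and sets an fsGroup on the pod, so the projected files end up group-owned by that GID. A minimal sketch of the two pieces involved; names and IDs are illustrative:

// Sketch only: shows the podSecurityContext and projected volume in isolation.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	fsGroup := int64(1001)
	podSecurity := corev1.PodSecurityContext{FSGroup: &fsGroup} // applied to volume files

	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
						Items:                []corev1.KeyToPath{{Key: "data-2", Path: "data-2"}},
					},
				}},
			},
		},
	}
	for _, v := range []interface{}{podSecurity, vol} {
		b, _ := json.MarshalIndent(v, "", "  ")
		fmt.Println(string(b))
	}
}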
------------------------------
S
------------------------------
[Feature:DeploymentConfig] deploymentconfigs with failing hook [Conformance]
should get all logs from retried hooks [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:819
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:19:48.942: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:19:50.938: INFO: configPath is now "/tmp/e2e-test-cli-deployment-rhvgs-user.kubeconfig"
Jul 9 19:19:50.938: INFO: The user is now "e2e-test-cli-deployment-rhvgs-user"
Jul 9 19:19:50.938: INFO: Creating project "e2e-test-cli-deployment-rhvgs"
Jul 9 19:19:51.216: INFO: Waiting on permissions in project "e2e-test-cli-deployment-rhvgs" ...
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should get all logs from retried hooks [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:819
Jul 9 19:19:55.728: INFO: Running 'oc logs --config=/tmp/e2e-test-cli-deployment-rhvgs-user.kubeconfig --namespace=e2e-test-cli-deployment-rhvgs dc/hook'
STEP: checking the logs for substrings
--> pre: Running hook pod ...
pre hook logs
--> pre: Retrying hook pod (retry #1)
pre hook logs
[AfterEach] with failing hook [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:815
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:19:58.080: INFO: namespace : e2e-test-cli-deployment-rhvgs api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:20:16.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:27.219 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
with failing hook [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:813
should get all logs from retried hooks [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:819
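The retried-hook behavior checked above is driven by a pre lifecycle hook with failurePolicy Retry on the deployment config: when the hook pod fails, it is re-run, and `oc logs dc/hook` is expected to show each attempt's output, as the substrings above confirm. A minimal sketch, assuming github.com/openshift/api/apps/v1 and an illustrative container/command:

// Sketch only: prints the strategy stanza a retried pre hook would use.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "github.com/openshift/api/apps/v1"
)

func main() {
	pre := appsv1.LifecycleHook{
		// Retry means a failed hook pod is re-run instead of aborting the rollout.
		FailurePolicy: appsv1.LifecycleHookFailurePolicyRetry,
		ExecNewPod: &appsv1.ExecNewPodHook{
			ContainerName: "myapp", // illustrative container name
			Command:       []string{"sh", "-c", "echo pre hook logs && exit 1"},
		},
	}
	strategy := appsv1.DeploymentStrategy{
		Type:           appsv1.DeploymentStrategyTypeRecreate,
		RecreateParams: &appsv1.RecreateDeploymentStrategyParams{Pre: &pre},
	}
	b, _ := json.MarshalIndent(strategy, "", "  ")
	fmt.Println(string(b))
}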
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:ImageLookup][registry] Image policy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:20:16.165: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:ImageLookup][registry] Image policy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:20:18.277: INFO: configPath is now "/tmp/e2e-test-resolve-local-names-m7cbv-user.kubeconfig"
Jul 9 19:20:18.277: INFO: The user is now "e2e-test-resolve-local-names-m7cbv-user"
Jul 9 19:20:18.277: INFO: Creating project "e2e-test-resolve-local-names-m7cbv"
Jul 9 19:20:18.629: INFO: Waiting on permissions in project "e2e-test-resolve-local-names-m7cbv" ...
[It] should update standard Kube object image fields when local names are on [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/images/resolve.go:19
Jul 9 19:20:18.671: INFO: Running 'oc import-image --config=/tmp/e2e-test-resolve-local-names-m7cbv-user.kubeconfig --namespace=e2e-test-resolve-local-names-m7cbv busybox:latest --confirm'
The import completed successfully.
Name: busybox
Namespace: e2e-test-resolve-local-names-m7cbv
Created: Less than a second ago
Labels: <none>
Annotations: openshift.io/image.dockerRepositoryCheck=2018-07-10T02:20:20Z
Docker Pull Spec: docker-registry.default.svc:5000/e2e-test-resolve-local-names-m7cbv/busybox
Image Lookup: local=false
Unique Images: 1
Tags: 1
latest
tagged from busybox:latest
* busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335
Less than a second ago
Image Name: busybox:latest
Docker Image: busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335
Name: sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335
Created: Less than a second ago
Annotations: image.openshift.io/dockerLayersOrder=ascending
Image Size: 724.6kB
Image Created: 6 weeks ago
Author: <none>
Arch: amd64
Command: sh
Working Dir: <none>
User: <none>
Exposes Ports: <none>
Docker Labels: <none>
Environment: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Jul 9 19:20:20.358: INFO: Running 'oc set image-lookup --config=/tmp/e2e-test-resolve-local-names-m7cbv-user.kubeconfig --namespace=e2e-test-resolve-local-names-m7cbv busybox'
imagestream "busybox" updated
[AfterEach] [Feature:ImageLookup][registry] Image policy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:20:20.894: INFO: namespace : e2e-test-resolve-local-names-m7cbv api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:ImageLookup][registry] Image policy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:20:26.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] [10.827 seconds]
[Feature:ImageLookup][registry] Image policy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/images/resolve.go:14
should update standard Kube object image fields when local names are on [Suite:openshift/conformance/parallel] [It]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/images/resolve.go:19
default image resolution is not configured, can't verify pod resolution
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/images/resolve.go:43
------------------------------
[sig-api-machinery] Secrets
should be consumable from pods in env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-api-machinery] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:20:10.518: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:20:12.297: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-secrets-gp26s
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating secret with name secret-test-c6b47a70-83e7-11e8-8fe2-28d244b00276
STEP: Creating a pod to test consume secrets
Jul 9 19:20:12.999: INFO: Waiting up to 5m0s for pod "pod-secrets-c6b9ab6b-83e7-11e8-8fe2-28d244b00276" in namespace "e2e-tests-secrets-gp26s" to be "success or failure"
Jul 9 19:20:13.038: INFO: Pod "pod-secrets-c6b9ab6b-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 39.646084ms
Jul 9 19:20:15.071: INFO: Pod "pod-secrets-c6b9ab6b-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072119091s
Jul 9 19:20:17.105: INFO: Pod "pod-secrets-c6b9ab6b-83e7-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10677299s
STEP: Saw pod success
Jul 9 19:20:17.105: INFO: Pod "pod-secrets-c6b9ab6b-83e7-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:20:17.157: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-secrets-c6b9ab6b-83e7-11e8-8fe2-28d244b00276 container secret-env-test: <nil>
STEP: delete the pod
Jul 9 19:20:17.233: INFO: Waiting for pod pod-secrets-c6b9ab6b-83e7-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:20:17.276: INFO: Pod pod-secrets-c6b9ab6b-83e7-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:20:17.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-gp26s" for this suite.
Jul 9 19:20:23.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:20:26.387: INFO: namespace: e2e-tests-secrets-gp26s, resource: bindings, ignored listing per whitelist
Jul 9 19:20:27.380: INFO: namespace e2e-tests-secrets-gp26s deletion completed in 10.067981102s
• [SLOW TEST:16.862 seconds]
[sig-api-machinery] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets.go:30
should be consumable from pods in env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
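The test injects a secret key into the container's environment via valueFrom/secretKeyRef and then checks the container output. A minimal sketch of that container spec, with illustrative secret and key names:

// Sketch only: the container dumps its environment so the injected value is visible.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	container := corev1.Container{
		Name:    "secret-env-test",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{{
			Name: "SECRET_DATA",
			ValueFrom: &corev1.EnvVarSource{
				SecretKeyRef: &corev1.SecretKeySelector{
					LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
					Key:                  "data-1",
				},
			},
		}},
	}
	b, _ := json.MarshalIndent(container, "", "  ")
	fmt.Println(string(b))
}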
------------------------------
SS
------------------------------
[sig-storage] Projected
should be consumable in multiple volumes in a pod [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:20:26.993: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:20:28.974: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-lknkb
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable in multiple volumes in a pod [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating secret with name projected-secret-test-d0c46100-83e7-11e8-992b-28d244b00276
STEP: Creating a pod to test consume secrets
Jul 9 19:20:29.903: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d0caac8f-83e7-11e8-992b-28d244b00276" in namespace "e2e-tests-projected-lknkb" to be "success or failure"
Jul 9 19:20:29.944: INFO: Pod "pod-projected-secrets-d0caac8f-83e7-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 40.806839ms
Jul 9 19:20:32.008: INFO: Pod "pod-projected-secrets-d0caac8f-83e7-11e8-992b-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.104830832s
STEP: Saw pod success
Jul 9 19:20:32.008: INFO: Pod "pod-projected-secrets-d0caac8f-83e7-11e8-992b-28d244b00276" satisfied condition "success or failure"
Jul 9 19:20:32.058: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-projected-secrets-d0caac8f-83e7-11e8-992b-28d244b00276 container secret-volume-test: <nil>
STEP: delete the pod
Jul 9 19:20:32.142: INFO: Waiting for pod pod-projected-secrets-d0caac8f-83e7-11e8-992b-28d244b00276 to disappear
Jul 9 19:20:32.178: INFO: Pod pod-projected-secrets-d0caac8f-83e7-11e8-992b-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:20:32.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lknkb" for this suite.
Jul 9 19:20:38.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:20:40.743: INFO: namespace: e2e-tests-projected-lknkb, resource: bindings, ignored listing per whitelist
Jul 9 19:20:42.537: INFO: namespace e2e-tests-projected-lknkb deletion completed in 10.317801375s
• [SLOW TEST:15.544 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
should be consumable in multiple volumes in a pod [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
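Here a single projected secret is consumed through two separate volumes mounted at different paths in the same pod. A minimal sketch of that pod spec, with illustrative names and mount paths:

// Sketch only: two volumes project the same secret to two mount points.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	secretVol := func(volName string) corev1.Volume {
		return corev1.Volume{
			Name: volName,
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
						},
					}},
				},
			},
		}
	}
	spec := corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "secret-volume-test",
			Image: "busybox",
			VolumeMounts: []corev1.VolumeMount{
				{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
				{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
			},
		}},
		Volumes: []corev1.Volume{secretVol("secret-volume-1"), secretVol("secret-volume-2")},
	}
	b, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(b))
}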
------------------------------
[Feature:Builds][pullsecret][Conformance] docker build using a pull secret Building from a template
should create a docker build that pulls using a secret run it [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_pullsecret.go:44
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds][pullsecret][Conformance] docker build using a pull secret
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:20:27.383: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds][pullsecret][Conformance] docker build using a pull secret
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:20:29.233: INFO: configPath is now "/tmp/e2e-test-docker-build-pullsecret-r2mt4-user.kubeconfig"
Jul 9 19:20:29.233: INFO: The user is now "e2e-test-docker-build-pullsecret-r2mt4-user"
Jul 9 19:20:29.233: INFO: Creating project "e2e-test-docker-build-pullsecret-r2mt4"
Jul 9 19:20:29.392: INFO: Waiting on permissions in project "e2e-test-docker-build-pullsecret-r2mt4" ...
[BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_pullsecret.go:26
Jul 9 19:20:29.469: INFO:
docker info output:
Containers: 4
Running: 0
Paused: 0
Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 20
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_pullsecret.go:30
STEP: waiting for builder service account
[It] should create a docker build that pulls using a secret run it [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_pullsecret.go:44
STEP: calling oc create -f "/tmp/fixture-testdata-dir333495585/test/extended/testdata/builds/test-docker-build-pullsecret.json"
Jul 9 19:20:29.604: INFO: Running 'oc create --config=/tmp/e2e-test-docker-build-pullsecret-r2mt4-user.kubeconfig --namespace=e2e-test-docker-build-pullsecret-r2mt4 -f /tmp/fixture-testdata-dir333495585/test/extended/testdata/builds/test-docker-build-pullsecret.json'
imagestream.image.openshift.io "image1" created
buildconfig.build.openshift.io "docker-build" created
buildconfig.build.openshift.io "docker-build-pull" created
STEP: starting a build
Jul 9 19:20:29.987: INFO: Running 'oc start-build --config=/tmp/e2e-test-docker-build-pullsecret-r2mt4-user.kubeconfig --namespace=e2e-test-docker-build-pullsecret-r2mt4 docker-build -o=name'
Jul 9 19:20:30.249: INFO:
start-build output with args [docker-build -o=name]:
Error><nil>
StdOut>
build/docker-build-1
StdErr>
Jul 9 19:20:30.251: INFO: Waiting for docker-build-1 to complete
Jul 9 19:20:36.331: INFO: Done waiting for docker-build-1: util.BuildResult{BuildPath:"build/docker-build-1", BuildName:"docker-build-1", StartBuildStdErr:"", StartBuildStdOut:"build/docker-build-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc421140900), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc42004c5a0)}
with error: <nil>
STEP: starting a second build that pulls the image from the first build
Jul 9 19:20:36.331: INFO: Running 'oc start-build --config=/tmp/e2e-test-docker-build-pullsecret-r2mt4-user.kubeconfig --namespace=e2e-test-docker-build-pullsecret-r2mt4 docker-build-pull -o=name'
Jul 9 19:20:36.645: INFO:
start-build output with args [docker-build-pull -o=name]:
Error><nil>
StdOut>
build/docker-build-pull-1
StdErr>
Jul 9 19:20:36.646: INFO: Waiting for docker-build-pull-1 to complete
Jul 9 19:20:42.742: INFO: Done waiting for docker-build-pull-1: util.BuildResult{BuildPath:"build/docker-build-pull-1", BuildName:"docker-build-pull-1", StartBuildStdErr:"", StartBuildStdOut:"build/docker-build-pull-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc420bb4f00), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc42004c5a0)}
with error: <nil>
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_pullsecret.go:36
[AfterEach] [Feature:Builds][pullsecret][Conformance] docker build using a pull secret
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:20:42.812: INFO: namespace : e2e-test-docker-build-pullsecret-r2mt4 api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][pullsecret][Conformance] docker build using a pull secret
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:20:48.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:21.558 seconds]
[Feature:Builds][pullsecret][Conformance] docker build using a pull secret
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_pullsecret.go:12
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_pullsecret.go:24
Building from a template
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_pullsecret.go:43
should create a docker build that pulls using a secret run it [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_pullsecret.go:44
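For reference, the flow this spec exercises boils down to: the fixture creates an image stream plus two docker-strategy BuildConfigs; "docker-build" pushes image1 into the internal registry, and "docker-build-pull" then pulls that image back as its base, authenticating with a pull secret. A minimal sketch of the pull side in Go, assuming the openshift/api build/v1 types (the Dockerfile content and the secret name "pull-secret" are illustrative, not the fixture's exact contents):

package main

import (
	"fmt"

	buildv1 "github.com/openshift/api/build/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// pullSecretBuildConfig sketches a docker-strategy BuildConfig whose build pod
// authenticates its base-image pull with the named secret.
func pullSecretBuildConfig() *buildv1.BuildConfig {
	dockerfile := "FROM image1:latest\nRUN echo built-from-pulled-base"
	return &buildv1.BuildConfig{
		ObjectMeta: metav1.ObjectMeta{Name: "docker-build-pull"},
		Spec: buildv1.BuildConfigSpec{
			CommonSpec: buildv1.CommonSpec{
				Source: buildv1.BuildSource{
					Type:       buildv1.BuildSourceDockerfile,
					Dockerfile: &dockerfile,
				},
				Strategy: buildv1.BuildStrategy{
					Type: buildv1.DockerBuildStrategyType,
					DockerStrategy: &buildv1.DockerBuildStrategy{
						// Credentials used when pulling the FROM image.
						PullSecret: &corev1.LocalObjectReference{Name: "pull-secret"},
					},
				},
			},
		},
	}
}

func main() {
	fmt.Println(pullSecretBuildConfig().Name)
}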
------------------------------
SS
------------------------------
[sig-storage] Downward API volume
should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:20:48.947: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:20:50.734: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-hfpvx
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:38
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:20:51.453: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dda26113-83e7-11e8-8fe2-28d244b00276" in namespace "e2e-tests-downward-api-hfpvx" to be "success or failure"
Jul 9 19:20:51.493: INFO: Pod "downwardapi-volume-dda26113-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 39.998508ms
Jul 9 19:20:53.529: INFO: Pod "downwardapi-volume-dda26113-83e7-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.075876986s
STEP: Saw pod success
Jul 9 19:20:53.529: INFO: Pod "downwardapi-volume-dda26113-83e7-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:20:53.565: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-dda26113-83e7-11e8-8fe2-28d244b00276 container client-container: <nil>
STEP: delete the pod
Jul 9 19:20:53.637: INFO: Waiting for pod downwardapi-volume-dda26113-83e7-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:20:53.669: INFO: Pod downwardapi-volume-dda26113-83e7-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:20:53.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hfpvx" for this suite.
Jul 9 19:20:59.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:21:03.187: INFO: namespace: e2e-tests-downward-api-hfpvx, resource: bindings, ignored listing per whitelist
Jul 9 19:21:03.809: INFO: namespace e2e-tests-downward-api-hfpvx deletion completed in 10.105937921s
• [SLOW TEST:14.862 seconds]
[sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:33
should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
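What the spec above checks, in short: a downwardAPI volume file that projects limits.cpu for a container with no CPU limit set resolves to the node's allocatable CPU, and the mounttest container prints that value for the framework to match. A minimal sketch of such a pod in Go, using the k8s.io/api core/v1 types (pod name, mount path, and command are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPod sketches a pod whose volume exposes the container's CPU limit
// as a file. With no resources.limits.cpu on the container, the kubelet writes
// the node-allocatable default instead, which is what the test asserts.
func downwardAPIPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0",
				Command: []string{"/mounttest", "--file_content=/etc/podinfo/cpu_limit"},
				// Deliberately no resources.limits here.
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Println(downwardAPIPod().Name)
}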
------------------------------
[sig-storage] HostPath
should support subPath [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:89
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] HostPath
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:17:24.970: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:17:26.667: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-hostpath-2j5jw
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should support subPath [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:89
STEP: Creating a pod to test hostPath subPath
Jul 9 19:17:27.363: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-2j5jw" to be "success or failure"
Jul 9 19:17:27.412: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 48.766725ms
Jul 9 19:17:29.443: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2.080036384s
Jul 9 19:17:31.472: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 4.10831245s
Jul 9 19:17:33.500: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 6.137065362s
Jul 9 19:17:35.529: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 8.165850876s
Jul 9 19:17:37.584: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 10.220718947s
Jul 9 19:17:39.613: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 12.250143104s
Jul 9 19:17:41.652: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 14.288646069s
Jul 9 19:17:43.682: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 16.318657765s
Jul 9 19:17:45.716: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 18.353057718s
Jul 9 19:17:47.748: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 20.384946151s
Jul 9 19:17:49.778: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 22.415270934s
Jul 9 19:17:51.875: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 24.511994921s
Jul 9 19:17:53.910: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 26.546765122s
Jul 9 19:17:55.966: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 28.602672218s
Jul 9 19:17:57.994: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 30.63105058s
Jul 9 19:18:00.022: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 32.658880066s
Jul 9 19:18:02.060: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 34.696653256s
Jul 9 19:18:04.090: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 36.726588775s
Jul 9 19:18:06.125: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 38.761327977s
Jul 9 19:18:08.157: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 40.793418512s
Jul 9 19:18:10.187: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 42.823558149s
Jul 9 19:18:12.219: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 44.855600786s
Jul 9 19:18:14.247: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 46.883459371s
Jul 9 19:18:16.279: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 48.916003037s
Jul 9 19:18:18.308: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 50.945136203s
Jul 9 19:18:20.346: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 52.982891108s
Jul 9 19:18:22.375: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 55.012244154s
Jul 9 19:18:24.405: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 57.04207056s
Jul 9 19:18:26.441: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 59.077526327s
Jul 9 19:18:28.481: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m1.118107634s
Jul 9 19:18:30.517: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m3.153982095s
Jul 9 19:18:32.568: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m5.204918284s
Jul 9 19:18:34.606: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m7.242487448s
Jul 9 19:18:36.649: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m9.285458494s
Jul 9 19:18:38.696: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m11.332911416s
Jul 9 19:18:40.743: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m13.380168508s
Jul 9 19:18:42.772: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m15.409187535s
Jul 9 19:18:44.810: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m17.447200031s
Jul 9 19:18:46.872: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m19.508415647s
Jul 9 19:18:48.899: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m21.536103758s
Jul 9 19:18:50.930: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m23.566458835s
Jul 9 19:18:52.958: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m25.594760198s
Jul 9 19:18:54.994: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m27.630472752s
Jul 9 19:18:57.024: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m29.660624298s
Jul 9 19:18:59.056: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m31.692742004s
Jul 9 19:19:01.087: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m33.72369698s
Jul 9 19:19:03.138: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m35.775192157s
Jul 9 19:19:05.168: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m37.805126824s
Jul 9 19:19:07.198: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m39.83462179s
Jul 9 19:19:09.245: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m41.881375253s
Jul 9 19:19:11.274: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m43.910492024s
Jul 9 19:19:13.308: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m45.944920471s
Jul 9 19:19:15.359: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m47.996208568s
Jul 9 19:19:17.413: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m50.049810858s
Jul 9 19:19:19.456: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m52.092516178s
Jul 9 19:19:21.485: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m54.121965085s
Jul 9 19:19:23.516: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m56.153138837s
Jul 9 19:19:25.545: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m58.181907531s
Jul 9 19:19:27.578: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m0.215177602s
Jul 9 19:19:29.623: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m2.259343602s
Jul 9 19:19:31.653: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m4.289689214s
Jul 9 19:19:33.683: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m6.319717386s
Jul 9 19:19:35.718: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m8.354924772s
Jul 9 19:19:37.754: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m10.390980101s
Jul 9 19:19:39.785: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m12.422225392s
Jul 9 19:19:41.816: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m14.452619505s
Jul 9 19:19:43.846: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m16.482444513s
Jul 9 19:19:45.885: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m18.521624607s
Jul 9 19:19:47.915: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m20.551836216s
Jul 9 19:19:49.946: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m22.583179984s
Jul 9 19:19:51.975: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m24.611292454s
Jul 9 19:19:54.004: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m26.640859662s
Jul 9 19:19:56.033: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m28.66970714s
Jul 9 19:19:58.061: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m30.698201213s
Jul 9 19:20:00.092: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m32.72898788s
Jul 9 19:20:02.125: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m34.761317261s
Jul 9 19:20:04.155: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m36.791796274s
Jul 9 19:20:06.184: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m38.820527217s
Jul 9 19:20:08.223: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m40.85949819s
Jul 9 19:20:10.254: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m42.890730071s
Jul 9 19:20:12.289: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m44.92582505s
Jul 9 19:20:14.323: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m46.959632845s
Jul 9 19:20:16.355: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m48.991470927s
Jul 9 19:20:18.388: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m51.024328865s
Jul 9 19:20:20.423: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m53.059502388s
Jul 9 19:20:22.470: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m55.106788934s
Jul 9 19:20:24.508: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m57.144505221s
Jul 9 19:20:26.537: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m59.173553392s
Jul 9 19:20:28.572: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 3m1.209025858s
Jul 9 19:20:30.605: INFO: Pod "pod-host-path-test": Phase="Failed", Reason="", readiness=false. Elapsed: 3m3.242171498s
Jul 9 19:20:30.665: INFO: Output of node "ip-10-0-130-54.us-west-2.compute.internal" pod "pod-host-path-test" container "test-container-1": content of file "/test-volume/test-file": mount-tester new file
mode of file "/test-volume/test-file": -rw-r--r--
Jul 9 19:20:30.734: INFO: Output of node "ip-10-0-130-54.us-west-2.compute.internal" pod "pod-host-path-test" container "test-container-2": Error reading file /test-volume/sub-path/test-file: open /test-volume/sub-path/test-file: no such file or directory, retrying
[the preceding "Error reading file /test-volume/sub-path/test-file: ... no such file or directory, retrying" line repeated 89 more times while the reader container retried]
STEP: delete the pod
Jul 9 19:20:30.894: INFO: Waiting for pod pod-host-path-test to disappear
Jul 9 19:20:30.929: INFO: Pod pod-host-path-test no longer exists
Jul 9 19:20:30.929: INFO: Unexpected error occurred: expected pod "pod-host-path-test" success: pod "pod-host-path-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-07-09 19:17:27 -0700 PDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-07-09 19:17:27 -0700 PDT Reason:ContainersNotReady Message:containers with unready status: [test-container-1 test-container-2]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:0001-01-01 00:00:00 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [test-container-1 test-container-2]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-07-09 19:17:27 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.0.130.54 PodIP:10.2.2.61 StartTime:2018-07-09 19:17:27 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:test-container-1 State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2018-07-09 19:17:28 -0700 PDT,FinishedAt:2018-07-09 19:17:28 -0700 PDT,ContainerID:docker://2cd77b44fb6fdc32e044424e85163cc9d9a912bcc3ab095019a727af01cab8f8,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest-amd64@sha256:dc4e2dcfbde16249c4662de673295d00778577bc2e2ca7013a1b85d4f47398ca ContainerID:docker://2cd77b44fb6fdc32e044424e85163cc9d9a912bcc3ab095019a727af01cab8f8} {Name:test-container-2 State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2018-07-09 19:17:28 -0700 PDT,FinishedAt:2018-07-09 19:20:28 -0700 PDT,ContainerID:docker://f08d07a6f13f69f8f0450200ecff44ec62016d3cee4a8dc6778b39ab9588becd,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest-amd64@sha256:dc4e2dcfbde16249c4662de673295d00778577bc2e2ca7013a1b85d4f47398ca ContainerID:docker://f08d07a6f13f69f8f0450200ecff44ec62016d3cee4a8dc6778b39ab9588becd}] QOSClass:BestEffort}
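Reading the failure above: test-container-1 (the writer) terminated with exit code 0 after creating /test-volume/test-file, while test-container-2 (the reader) never found /test-volume/sub-path/test-file and exited 1 once its ~3-minute retry window ran out, so the pod ends up Failed. The sketch below shows one way such a pod is wired, in Go with the k8s.io/api core/v1 types, assuming the upstream test's usual shape (writer mounted behind a "sub-path" SubPath, reader on the full volume; host path, image, and mounttest args are illustrative, not the framework's exact helper):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPathSubPathPod sketches a writer/reader pair sharing one hostPath
// volume: the writer's mount is rooted at <hostPath>/sub-path via SubPath, so
// its /test-volume/test-file should surface to the reader as
// /test-volume/sub-path/test-file.
func hostPathSubPathPod() *corev1.Pod {
	fullMount := corev1.VolumeMount{Name: "test-volume", MountPath: "/test-volume"}
	subPathMount := fullMount
	subPathMount.SubPath = "sub-path"
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/tmp"},
				},
			}},
			Containers: []corev1.Container{
				{
					Name:         "test-container-1",
					Image:        "gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0",
					Args:         []string{"--new_file_0644=/test-volume/test-file", "--file_mode=/test-volume/test-file"},
					VolumeMounts: []corev1.VolumeMount{subPathMount},
				},
				{
					Name:  "test-container-2",
					Image: "gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0",
					// Poll for the writer's file through the un-subPathed mount.
					Args:         []string{"--file_content_in_loop=/test-volume/sub-path/test-file", "--retry_time=180"},
					VolumeMounts: []corev1.VolumeMount{fullMount},
				},
			},
		},
	}
}

func main() {
	fmt.Println(hostPathSubPathPod().Name)
}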
[AfterEach] [sig-storage] HostPath
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Collecting events from namespace "e2e-tests-hostpath-2j5jw".
STEP: Found 7 events.
Jul 9 19:20:30.965: INFO: At 2018-07-09 19:17:27 -0700 PDT - event for pod-host-path-test: {default-scheduler } Scheduled: Successfully assigned e2e-tests-hostpath-2j5jw/pod-host-path-test to ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:20:30.965: INFO: At 2018-07-09 19:17:28 -0700 PDT - event for pod-host-path-test: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0" already present on machine
Jul 9 19:20:30.965: INFO: At 2018-07-09 19:17:28 -0700 PDT - event for pod-host-path-test: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Created: Created container
Jul 9 19:20:30.965: INFO: At 2018-07-09 19:17:28 -0700 PDT - event for pod-host-path-test: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Started: Started container
Jul 9 19:20:30.965: INFO: At 2018-07-09 19:17:28 -0700 PDT - event for pod-host-path-test: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0" already present on machine
Jul 9 19:20:30.965: INFO: At 2018-07-09 19:17:28 -0700 PDT - event for pod-host-path-test: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Created: Created container
Jul 9 19:20:30.965: INFO: At 2018-07-09 19:17:28 -0700 PDT - event for pod-host-path-test: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Started: Started container
Jul 9 19:20:31.095: INFO: POD NODE PHASE GRACE CONDITIONS
Jul 9 19:20:31.095: INFO: registry-6559c8c4db-45526 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:44 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:20:31.095: INFO: docker-build-1-build ip-10-0-130-54.us-west-2.compute.internal Pending [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:20:30 -0700 PDT ContainersNotInitialized containers with incomplete status: [manage-dockerfile]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:20:30 -0700 PDT ContainersNotReady containers with unready status: [docker-build]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC ContainersNotReady containers with unready status: [docker-build]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:20:30 -0700 PDT }]
Jul 9 19:20:31.095: INFO: execpod98j4h ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:15 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:16 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:15 -0700 PDT }]
Jul 9 19:20:31.095: INFO: frontend-1-build ip-10-0-130-54.us-west-2.compute.internal Succeeded [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:27 -0700 PDT PodCompleted } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:50 -0700 PDT PodCompleted } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC PodCompleted } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:21 -0700 PDT }]
Jul 9 19:20:31.095: INFO: pod-configmaps-b625b422-83e7-11e8-bd2e-28d244b00276 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:19:45 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:19:46 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:19:45 -0700 PDT }]
Jul 9 19:20:31.095: INFO: pod-projected-secrets-d0caac8f-83e7-11e8-992b-28d244b00276 ip-10-0-130-54.us-west-2.compute.internal Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:20:29 -0700 PDT } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:20:29 -0700 PDT ContainersNotReady containers with unready status: [secret-volume-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC ContainersNotReady containers with unready status: [secret-volume-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:20:29 -0700 PDT }]
Jul 9 19:20:31.095: INFO: kube-apiserver-cn2ps ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:45 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT }]
Jul 9 19:20:31.095: INFO: kube-controller-manager-558dc6fb98-q6vr5 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:17:34 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:20:31.095: INFO: kube-core-operator-75d546fbbb-c7ctx ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:11 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:20 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:11 -0700 PDT }]
Jul 9 19:20:31.095: INFO: kube-dns-787c975867-txmxv ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:22 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:20:31.095: INFO: kube-flannel-bgv4g ip-10-0-141-201.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:59 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT }]
Jul 9 19:20:31.095: INFO: kube-flannel-m5wph ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:58 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT }]
Jul 9 19:20:31.095: INFO: kube-flannel-xcck7 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:17 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }]
Jul 9 19:20:31.095: INFO: kube-proxy-5td7p ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:54 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }]
Jul 9 19:20:31.095: INFO: kube-proxy-l2cnn ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:22 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:53 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:22 -0700 PDT }]
Jul 9 19:20:31.095: INFO: kube-proxy-zsgcb ip-10-0-141-201.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:53 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }]
Jul 9 19:20:31.095: INFO: kube-scheduler-68f8875b5c-s5tdr ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:11 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:20:31.095: INFO: metrics-server-5767bfc576-gfbwb ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:38 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:20:31.095: INFO: openshift-apiserver-rkms5 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:19 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT }]
Jul 9 19:20:31.095: INFO: openshift-controller-manager-99d6586b-qq685 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:17:55 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:20:31.095: INFO: pod-checkpointer-4882g ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:03 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }]
Jul 9 19:20:31.095: INFO: pod-checkpointer-4882g-ip-10-0-35-213.us-west-2.compute.internal ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:14 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:08 -0700 PDT }]
Jul 9 19:20:31.095: INFO: prometheus-0 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:04 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:40 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:04 -0700 PDT }]
Jul 9 19:20:31.095: INFO: tectonic-network-operator-jwwmp ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:13 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }]
Jul 9 19:20:31.095: INFO: tectonic-node-controller-2ctqd ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:05 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:08 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:05 -0700 PDT }]
Jul 9 19:20:31.095: INFO: tectonic-node-controller-operator-648b9f5d6d-nxvlb ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:14 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:20:31.095: INFO: webconsole-6698d4fbbc-rgsw2 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:44 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:20:31.095: INFO: default-http-backend-6985d557bb-8h44n ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:38 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:20:31.095: INFO: router-6796c95fdf-2k4wk ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:37 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:20:31.095: INFO: tectonic-ingress-controller-operator-78c4d6b9fc-bknh9 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:46 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT }]
Jul 9 19:20:31.095: INFO: directory-sync-d84d84d9f-j7pr6 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:20:31.095: INFO: kube-addon-operator-675f99d7f8-c6pdt ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:29 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }]
Jul 9 19:20:31.095: INFO: tectonic-alm-operator-79b6996f74-prs9h ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:35 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }]
Jul 9 19:20:31.095: INFO: tectonic-channel-operator-5d878cd785-l66n4 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }]
Jul 9 19:20:31.095: INFO: tectonic-clu-6b8d87785f-fswbx ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:06 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:11 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:06 -0700 PDT }]
Jul 9 19:20:31.095: INFO: tectonic-node-agent-r77mj ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:20 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 18:37:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:20 -0700 PDT }]
Jul 9 19:20:31.095: INFO: tectonic-node-agent-rrwlg ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 18:12:57 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }]
Jul 9 19:20:31.095: INFO: tectonic-stats-emitter-d87f669fd-988nl ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:29 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:36 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:23 -0700 PDT }]
Jul 9 19:20:31.095: INFO: tectonic-utility-operator-786b69fc8b-4xffz ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:41 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT }]
Jul 9 19:20:31.095: INFO:
Jul 9 19:20:31.137: INFO:
Logging node info for node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:20:31.173: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-130-54.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-130-54.us-west-2.compute.internal,UID:2f71bed0-83b7-11e8-84c6-0af96768d57e,ResourceVersion:79048,Generation:0,CreationTimestamp:2018-07-09 13:32:23 -0700 PDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2b,kubernetes.io/hostname: ip-10-0-130-54,node-role.kubernetes.io/worker: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"3e:08:91:8f:b9:a5"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.130.54,node-configuration.v1.coreos.com/currentConfig: worker-2650561509,node-configuration.v1.coreos.com/desiredConfig: worker-2650561509,node-configuration.v1.coreos.com/targetConfig: worker-2650561509,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.2.0/24,ExternalID:,ProviderID:aws:///us-west-2b/i-0cb9cec2620663d39,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8365150208 0} {<nil>} 8169092Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8260292608 0} {<nil>} 8066692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:20:27 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:20:27 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:20:27 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:20:27 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:20:27 -0700 PDT 2018-07-09 13:33:23 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.130.54} {InternalDNS ip-10-0-130-54.us-west-2.compute.internal} {Hostname ip-10-0-130-54}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC283016-6CE7-ACE7-0F9A-02CE10505945,BootID:cfad64a2-03d7-403a-bd51-76866880a650,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[openshift/origin-haproxy-router@sha256:f0a71ada9e9ee48529540c2d4938b9caa55f9a0ac8a3be598e269ca5cebf70c0 openshift/origin-haproxy-router:v3.10.0-alpha.0] 1284960820} 
{[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[docker-registry.default.svc:5000/e2e-test-build-valuefrom-frlkw/test@sha256:ee11e7c7dbb2d609aaa42c8806ef1bf5663df95dd925e6ab424b4439dbaf75fd docker-registry.default.svc:5000/e2e-test-build-valuefrom-frlkw/test:latest] 613134548} {[docker-registry.default.svc:5000/e2e-test-build-valuefrom-lxhjr/test@sha256:6daa01a6f7f0784905bf9dcbce49826d73d7c3c1d62a802f875ee7c10db02960 docker-registry.default.svc:5000/e2e-test-build-valuefrom-lxhjr/test:latest] 613134454} {[docker-registry.default.svc:5000/e2e-test-build-valuefrom-vmtzs/test@sha256:92c5e723d97318711a71afb9ee5c12c3c48b98d0f2aaa5e954095fabbcb505ee docker-registry.default.svc:5000/e2e-test-build-valuefrom-vmtzs/test:latest] 613133841} {[docker-registry.default.svc:5000/e2e-test-templates-l8j5q/cakephp-mysql-example@sha256:02d80c750d1e71afc7792f55f935c3dd6cde1788bee2b53ab554d29c903ca064 docker-registry.default.svc:5000/e2e-test-templates-l8j5q/cakephp-mysql-example:latest] 603384691} {[docker-registry.default.svc:5000/openshift/php@sha256:164fa6fc06cdd98c98be09ecff5398feef3d479652899f983cf41e644d604ff0 centos/php-70-centos7@sha256:164fa6fc06cdd98c98be09ecff5398feef3d479652899f983cf41e644d604ff0 centos/php-70-centos7:latest] 589408618} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejspass@sha256:880359284c1e0933fe5f2db29b8c4d948b70da3dfb26a0462f68b23397740b0a docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejspass:latest] 568094192} {[docker-registry.default.svc:5000/openshift/php@sha256:59c3d53372cd7097494187f5a58bab58a1d956a340b70a23c84a0d000a565cbe] 567254500} {[docker-registry.default.svc:5000/e2e-test-build-timing-bl897/test@sha256:539e80a4de02794f6126cffce75562bcb721041c6d443c5ced15ba286d70e229 docker-registry.default.svc:5000/e2e-test-build-timing-bl897/test:latest] 566117187} {[docker-registry.default.svc:5000/e2e-test-build-timing-wkh4w/test@sha256:e0eeef684e9de55219871fa9e360d73a1163cfc407c626eade862cbee5a9bbc5 docker-registry.default.svc:5000/e2e-test-build-timing-wkh4w/test:latest] 566117040} {[centos/ruby-22-centos7@sha256:a18c8706118a5c4c9f1adf045024d2abf06ba632b5674b23421019ee4d3edcae centos/ruby-22-centos7:latest] 566117040} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-phq84/nodejsroot@sha256:4084131a9910c10780186608faf5a9643de0f18d09c27fe828499a8d180abfba docker-registry.default.svc:5000/e2e-test-s2i-build-root-phq84/nodejsroot:latest] 560696751} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejsroot@sha256:0397f7e12d87d62c539356a4936348d0a8deb40e1b5e970cdd1744d3e6ffa05a docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejsroot:latest] 560696751} {[centos/nodejs-6-centos7@sha256:b2867b5008d9e975b3d4710ec0f31cdc96b079b83334b17e03a60602a7a590fc] 560696751} {[docker-registry.default.svc:5000/openshift/ruby@sha256:2e83b9e07e85960060096b6aff7ee202a5f52e0e18447641b080b1f3879e0901] 536571487} {[docker-registry.default.svc:5000/openshift/ruby@sha256:8f00b7a5789887b72db0415355830c87e18804b774a922a424736f5237a44933] 518934530} {[docker-registry.default.svc:5000/e2e-test-new-app-xn8nh/a234567890123456789012345678901234567890123456789012345678@sha256:a9ecb5931f283c598dcaf3aca9025599eb71115bd0f2cd0f1989a9f37394efad docker-registry.default.svc:5000/e2e-test-new-app-xn8nh/a234567890123456789012345678901234567890123456789012345678:latest] 511744495} 
{[docker-registry.default.svc:5000/e2e-test-prometheus-4fqst/origin-nodejs-sample@sha256:95a78c60dc1709c2212cd8cc48cd3fffe6cdcdd847674497d9aa5d7891551699 docker-registry.default.svc:5000/e2e-test-prometheus-4fqst/origin-nodejs-sample:latest] 511744370} {[docker-registry.default.svc:5000/openshift/mongodb@sha256:3bb2aed7578ab5b6ba2bf22993df3c73ef91bdb02e273cc0ce8e529de7ee5660] 506453985} {[docker-registry.default.svc:5000/openshift/ruby@sha256:0eaaed9fae1b0d9bc8ed73b93d581c6ab019a92277484c9acf52fa60b3269a7c] 504578679} {[docker-registry.default.svc:5000/openshift/nodejs@sha256:3f262ed6ed7ec7497576e1117ddfafacaa23fecf34693e05a0abcb652b93e653 centos/nodejs-8-centos7@sha256:3f262ed6ed7ec7497576e1117ddfafacaa23fecf34693e05a0abcb652b93e653] 504452018} {[docker-registry.default.svc:5000/openshift/mongodb@sha256:896482969cd659b419bc444c153a74d11820655c7ed19b5094b8eb041f0065d6] 487132847} {[docker-registry.default.svc:5000/openshift/mongodb@sha256:91955c14f978a0f48918eecc8b3772faf1615e943daccf9bb051a51cba30422f] 465041680} {[openshift/origin-docker-builder@sha256:4fe8032f87d2f8485a711ec60a9ffb330e42a6cd8d232ad3cf63c42471cfab29 openshift/origin-docker-builder:latest] 447580928} {[docker-registry.default.svc:5000/openshift/mysql@sha256:d03537ef57d51b13e6ad4a73a382ca180a0e02d975c8237790410f45865aae3c] 429435940} {[openshift/origin-haproxy-router@sha256:485fa86ac97b0d289411b3216fb8970989cd580817ebb5fcbb0f83a6dc2466f5 openshift/origin-haproxy-router:latest] 394965919} {[openshift/origin-deployer@sha256:1295e5be56fc03d4c482194378a882f2e96a8d23eadaf6dd32d603d3e877df99 openshift/origin-deployer:latest] 371674595} {[openshift/origin-web-console@sha256:d2cbbb533d26996226add8cb327cb2060e7a03c6aa96ad94cd236d4064c094ce openshift/origin-web-console:latest] 336636057} {[openshift/prometheus@sha256:35e2e0efc874c055be60a025874256816c98b9cebc10f259d7fb806bbe68badf openshift/prometheus:v2.2.1] 317896379} {[openshift/origin-docker-registry@sha256:c40ebb707721327c3b9c79f0e8e7f02483f034355d4149479333cc134b72967c openshift/origin-docker-registry:latest] 302637209} {[openshift/origin-pod@sha256:8fbd41f21824f5981716568790c5f78a4710bb0709ce9c473eb21ad2fbc5e877 openshift/origin-pod:latest] 251747200} {[openshift/origin-base@sha256:43dd97db435025eee02606658cfcccbc0a8ac4135e0d8870e91930d6cab8d1fd openshift/origin-base:latest] 228695137} {[openshift/oauth-proxy@sha256:4b73830ee6f7447d0921eedc3946de50016eb8f048d66ea3969abc4116f1e42a openshift/oauth-proxy:v1.0.0] 228241928} {[openshift/prometheus-alertmanager@sha256:35443abf6c5cf99b080307fe0f98098334f299780537a3e61ac5604cbfe48f7e openshift/prometheus-alertmanager:v0.14.0] 221857684} {[openshift/prometheus-alert-buffer@sha256:076f8dd576806f5c2dde7e536d020c31aa7d2ec7dcea52da6cbb944895def7ba openshift/prometheus-alert-buffer:v0.0.2] 200521084} {[docker-registry.default.svc:5000/e2e-test-build-multistage-9cdjf/multi-stage@sha256:df3e69e3fe1bc86897717b020b6caa000f1f97c14dc0b3853ca0d7149412da54 docker-registry.default.svc:5000/e2e-test-build-multistage-9cdjf/multi-stage:v1] 199835207} {[centos@sha256:b67d21dfe609ddacf404589e04631d90a342921e81c40aeaf3391f6717fa5322 centos@sha256:eed5b251b615d1e70b10bcec578d64e8aa839d2785c2ffd5424e472818c42755 centos:7 centos:centos7] 199678471} {[docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-s4b2n/image1@sha256:3967cd8851952bbba0b3a4d9c038f36dc5001463c8521d6955ab0f3f4598d779 docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-s4b2n/image1:latest] 199678471} 
{[k8s.gcr.io/nginx-slim-amd64@sha256:6654db6d4028756062edac466454ee5c9cf9b20ef79e35a81e3c840031eb1e2b k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[gcr.io/google_containers/metrics-server-amd64@sha256:54d2cf293e01f72d9be0e7c4f2c98e31f599088a9426a6415fe62426d446f5b2 gcr.io/google_containers/metrics-server-amd64:v0.2.0] 96501893} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/directory-sync@sha256:e5e7fe901868853d89c2c0697cc88f0686c6ba1178ca045ec57bfd18e7000048 quay.io/coreos/directory-sync:v0.0.2] 38433928} {[quay.io/coreos/tectonic-node-agent@sha256:1eb929223a1ecc12b246cbe4ba366092b139067aadb809fac8189b8ca274fab4 quay.io/coreos/tectonic-node-agent:92fb13f8b2a46702b8c10f278408f37d825ee2cb] 36791814} {[k8s.gcr.io/addon-resizer@sha256:d00afd42fc267fa3275a541083cfe67d160f966c788174b44597434760e1e1eb k8s.gcr.io/addon-resizer:2.1] 26450138} {[quay.io/coreos/tectonic-error-server@sha256:aefa0a012e103bee299c17e798e5830128588b6ef5d4d1f6bc8ae5804bc4d8cd quay.io/coreos/tectonic-error-server:1.1] 12714516} {[gcr.io/google_containers/dnsutils@sha256:cd9182f6d74e616942db1cef6f25e1e54b49ba0330c2e19d3ec061f027666cc0 gcr.io/google_containers/dnsutils:e2e] 8897789} {[gcr.io/kubernetes-e2e-test-images/hostexec-amd64@sha256:bdaecec5adfa7c79e9525c0992fdab36c2d68066f5e91eff0d1d9e8d73c654ea gcr.io/kubernetes-e2e-test-images/hostexec-amd64:1.1] 8407119}],VolumesInUse:[],VolumesAttached:[],},}
Jul 9 19:20:31.173: INFO:
Logging kubelet events for node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:20:31.202: INFO:
Logging pods the kubelet thinks are on node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:20:31.311: INFO: prometheus-0 started at 2018-07-09 13:50:04 -0700 PDT (0+6 container statuses recorded)
Jul 9 19:20:31.311: INFO: Container alert-buffer ready: true, restart count 0
Jul 9 19:20:31.311: INFO: Container alertmanager ready: true, restart count 0
Jul 9 19:20:31.311: INFO: Container alertmanager-proxy ready: true, restart count 0
Jul 9 19:20:31.311: INFO: Container alerts-proxy ready: true, restart count 0
Jul 9 19:20:31.311: INFO: Container prom-proxy ready: true, restart count 0
Jul 9 19:20:31.311: INFO: Container prometheus ready: true, restart count 0
Jul 9 19:20:31.311: INFO: kube-proxy-5td7p started at 2018-07-09 13:32:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:20:31.311: INFO: Container kube-proxy ready: true, restart count 0
Jul 9 19:20:31.311: INFO: registry-6559c8c4db-45526 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:20:31.311: INFO: Container registry ready: true, restart count 0
Jul 9 19:20:31.311: INFO: docker-build-1-build started at 2018-07-09 19:20:30 -0700 PDT (1+1 container statuses recorded)
Jul 9 19:20:31.311: INFO: Init container manage-dockerfile ready: false, restart count 0
Jul 9 19:20:31.311: INFO: Container docker-build ready: false, restart count 0
Jul 9 19:20:31.311: INFO: execpod98j4h started at 2018-07-09 19:14:15 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:20:31.311: INFO: Container exec ready: true, restart count 0
Jul 9 19:20:31.311: INFO: kube-flannel-xcck7 started at 2018-07-09 13:32:23 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:20:31.311: INFO: Container install-cni ready: true, restart count 0
Jul 9 19:20:31.311: INFO: Container kube-flannel ready: true, restart count 0
Jul 9 19:20:31.311: INFO: metrics-server-5767bfc576-gfbwb started at 2018-07-09 13:33:23 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:20:31.311: INFO: Container metrics-server ready: true, restart count 0
Jul 9 19:20:31.311: INFO: Container metrics-server-nanny ready: true, restart count 0
Jul 9 19:20:31.311: INFO: webconsole-6698d4fbbc-rgsw2 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:20:31.311: INFO: Container webconsole ready: true, restart count 0
Jul 9 19:20:31.311: INFO: tectonic-node-agent-rrwlg started at 2018-07-09 13:32:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:20:31.311: INFO: Container node-agent ready: true, restart count 3
Jul 9 19:20:31.311: INFO: directory-sync-d84d84d9f-j7pr6 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:20:31.311: INFO: Container directory-sync ready: true, restart count 0
Jul 9 19:20:31.311: INFO: pod-projected-secrets-d0caac8f-83e7-11e8-992b-28d244b00276 started at 2018-07-09 19:20:29 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:20:31.311: INFO: Container secret-volume-test ready: false, restart count 0
Jul 9 19:20:31.311: INFO: pod-configmaps-b625b422-83e7-11e8-bd2e-28d244b00276 started at 2018-07-09 19:19:45 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:20:31.311: INFO: Container configmap-volume-test ready: true, restart count 0
Jul 9 19:20:31.311: INFO: frontend-1-build started at 2018-07-09 19:14:21 -0700 PDT (2+1 container statuses recorded)
Jul 9 19:20:31.311: INFO: Init container git-clone ready: true, restart count 0
Jul 9 19:20:31.311: INFO: Init container manage-dockerfile ready: true, restart count 0
Jul 9 19:20:31.311: INFO: Container sti-build ready: false, restart count 0
Jul 9 19:20:31.311: INFO: default-http-backend-6985d557bb-8h44n started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:20:31.311: INFO: Container default-http-backend ready: true, restart count 0
Jul 9 19:20:31.311: INFO: router-6796c95fdf-2k4wk started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:20:31.311: INFO: Container router ready: true, restart count 0
W0709 19:20:31.345070 11714 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 9 19:20:31.473: INFO:
Latency metrics for node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:20:31.473: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:30.274794s}
Jul 9 19:20:31.473: INFO:
Logging node info for node ip-10-0-141-201.us-west-2.compute.internal
Jul 9 19:20:31.505: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-141-201.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-141-201.us-west-2.compute.internal,UID:ab76db34-83b4-11e8-8888-0af96768d57e,ResourceVersion:79139,Generation:0,CreationTimestamp:2018-07-09 13:14:22 -0700 PDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2b,kubernetes.io/hostname: ip-10-0-141-201,node-role.kubernetes.io/etcd: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"b6:11:a8:d0:6d:85"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.141.201,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.1.0/24,ExternalID:,ProviderID:aws:///us-west-2b/i-03457d640f9c71dd1,Unschedulable:false,Taints:[{node-role.kubernetes.io/etcd NoSchedule <nil>}],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8365146112 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8260288512 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:20:30 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:20:30 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:20:30 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:20:30 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:20:30 -0700 PDT 2018-07-09 13:16:04 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.141.201} {InternalDNS ip-10-0-141-201.us-west-2.compute.internal} {Hostname ip-10-0-141-201}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC2F6BCA-4D59-F6AA-8C7B-027F94D52D78,BootID:92773d40-1311-4ad5-b294-38db65faf16c,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 
quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/etcd@sha256:688e6c102955fe927c34db97e6352d0e0962554735b2db5f2f66f3f94cfe8fd1 quay.io/coreos/etcd:v3.2.14] 37228348} {[quay.io/coreos/kube-client-agent@sha256:8564ab65bcb1064006d2fc9c6e32a5ca3f4326cdd2da9a2efc4fb7cc0e0b6041 quay.io/coreos/kube-client-agent:678cc8e6841e2121ebfdb6e2db568fce290b67d6] 33236131} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],},}
Jul 9 19:20:31.505: INFO:
Logging kubelet events for node ip-10-0-141-201.us-west-2.compute.internal
Jul 9 19:20:31.534: INFO:
Logging pods the kubelet thinks are on node ip-10-0-141-201.us-west-2.compute.internal
Jul 9 19:21:01.575: INFO: Unable to retrieve kubelet pods for node ip-10-0-141-201.us-west-2.compute.internal: the server is currently unable to handle the request (get nodes ip-10-0-141-201.us-west-2.compute.internal:10250)
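The lookup above goes through the apiserver's node proxy to the kubelet on port 10250; this node carries the node-role.kubernetes.io/etcd taint, and its kubelet did not answer within the 30s window, so the framework gives up and moves on to the next node. A minimal client-go sketch of that proxy call, assuming the context-free request API of the client-go vintage vendored here (the names are illustrative, not the framework's own helper):

    package diagnostics

    import (
        "fmt"

        "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
    )

    // kubeletPods lists the pods a node's kubelet reports by proxying
    // through the apiserver to the kubelet's read endpoint, which is
    // what the failed request above was doing.
    func kubeletPods(c kubernetes.Interface, node string) (*v1.PodList, error) {
        pods := &v1.PodList{}
        err := c.CoreV1().RESTClient().Get().
            Resource("nodes").
            Name(fmt.Sprintf("%s:%d", node, 10250)). // node name + kubelet port
            SubResource("proxy").
            Suffix("pods").
            Do(). // client-go of this era takes no context argument
            Into(pods)
        return pods, err
    }

The same data is reachable from a shell as roughly "kubectl get --raw /api/v1/nodes/<node>:10250/proxy/pods", which makes it easy to tell a kubelet that is down apart from an apiserver that cannot reach it.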
Jul 9 19:21:01.575: INFO:
Logging node info for node ip-10-0-35-213.us-west-2.compute.internal
Jul 9 19:21:01.605: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-35-213.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-35-213.us-west-2.compute.internal,UID:a83cf873-83b4-11e8-8888-0af96768d57e,ResourceVersion:79463,Generation:0,CreationTimestamp:2018-07-09 13:14:17 -0700 PDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2c,kubernetes.io/hostname: ip-10-0-35-213,node-role.kubernetes.io/master: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"5e:08:be:54:0d:9f"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.35.213,node-configuration.v1.coreos.com/currentConfig: master-2063737633,node-configuration.v1.coreos.com/desiredConfig: master-2063737633,node-configuration.v1.coreos.com/targetConfig: master-2063737633,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.0.0/24,ExternalID:,ProviderID:aws:///us-west-2c/i-0e1d36783c9705b28,Unschedulable:false,Taints:[{node-role.kubernetes.io/master NoSchedule <nil>}],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8365146112 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8260288512 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:20:59 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:20:59 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:20:59 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:20:59 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:20:59 -0700 PDT 2018-07-09 13:16:08 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.35.213} {ExternalIP 34.220.249.237} {InternalDNS ip-10-0-35-213.us-west-2.compute.internal} {ExternalDNS ec2-34-220-249-237.us-west-2.compute.amazonaws.com} {Hostname ip-10-0-35-213}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC2ED297-E036-AA0D-C4ED-9057B3EA9001,BootID:7f784e0b-09a6-495a-b787-3d8619214f8a,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 
quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[openshift/origin-hypershift@sha256:3b26011ae771a6036a7533d970052be5c04bc1f6e6812314ffefd902f40910fd openshift/origin-hypershift:latest] 518022163} {[openshift/origin-hyperkube@sha256:11a08060b48d226d64d4bb5234f2386bf22472a0835c5b91f0fb0db25b0a7e19 openshift/origin-hyperkube:latest] 498702039} {[quay.io/coreos/awscli@sha256:1d6ea2f37c248a4f4f2a70126f0b8555fd0804d4e65af3b30c3a949247ea13a6 quay.io/coreos/awscli:025a357f05242fdad6a81e8a6b520098aa65a600] 97521631} {[quay.io/coreos/bootkube@sha256:63afddd30deedff273d65607f4fcf0b331f4418838a00c69b6ab7a5754a24f5a quay.io/coreos/bootkube:v0.10.0] 84921995} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[gcr.io/google_containers/k8s-dns-kube-dns-amd64@sha256:6d8e0da4fb46e9ea2034a3f4cab0e095618a2ead78720c12e791342738e5f85d gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.8] 50456751} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/tectonic-stats@sha256:e800fe60dd1a0f89f8ae85caae9209201254e17d889d664d633ed08e274e2a39 quay.io/coreos/tectonic-stats:6e882361357fe4b773adbf279cddf48cb50164c1] 48779830} {[quay.io/coreos/pod-checkpointer@sha256:1e1e48228f872d56c8a57a5e12adb5239ae9e6206536baf2904e4bf03314c8e8 quay.io/coreos/pod-checkpointer:9dc83e1ab3bc36ca25c9f7c18ddef1b91d4a0558] 47992230} {[quay.io/coreos/tectonic-network-operator-dev@sha256:e29d797f5740cf6f5c0ccc0de2b3e606d187acbdc0bb79a4397c058d8840c8fe quay.io/coreos/tectonic-network-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44068170} {[quay.io/coreos/tectonic-node-controller-operator-dev@sha256:7a31568c6c2e398cffa7e8387cf51543e3bf1f01b4a050a5d00a9b593c3dace0 quay.io/coreos/tectonic-node-controller-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44053165} {[quay.io/coreos/kube-addon-operator-dev@sha256:e327727a93813c31f6d65f76f2998722754b8ccb5110949153e55f2adbc2374e quay.io/coreos/kube-addon-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44052211} {[quay.io/coreos/tectonic-utility-operator-dev@sha256:4fb4de52c7aa64ce124e1bf73fb27989356c414101ecc19ca4ec9ab80e00a88d quay.io/coreos/tectonic-utility-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 43818409} {[quay.io/coreos/tectonic-ingress-controller-operator-dev@sha256:5e96253c8fe8357473d4806b116fcf03fe18dcad466a88083f9b9310045821f1 quay.io/coreos/tectonic-ingress-controller-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 43808038} {[quay.io/coreos/tectonic-alm-operator@sha256:ce32e6d4745040be8807d09eb925b2b076b60fb0a93e33302b74a5cc8f294ca5 quay.io/coreos/tectonic-alm-operator:v0.3.1] 43202998} {[gcr.io/google_containers/k8s-dns-sidecar-amd64@sha256:23df717980b4aa08d2da6c4cfa327f1b730d92ec9cf740959d2d5911830d82fb gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.8] 42210862} {[gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64@sha256:93c827f018cf3322f1ff2aa80324a0306048b0a69bc274e423071fb0d2d29d8b gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.8] 40951779} {[quay.io/coreos/kube-core-operator-dev@sha256:6cc0dd2405f19014b41a0eed57c39160aeb92c2380ac8f8a067ce7dee476cba2 quay.io/coreos/kube-core-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 40849618} {[quay.io/coreos/tectonic-channel-operator-dev@sha256:6eeb84c385333755a2189c199587bc26db6c5d897e1962d7e1047dec2531e85e 
quay.io/coreos/tectonic-channel-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 40523592} {[quay.io/coreos/etcd@sha256:688e6c102955fe927c34db97e6352d0e0962554735b2db5f2f66f3f94cfe8fd1 quay.io/coreos/etcd:v3.2.14] 37228348} {[quay.io/coreos/tectonic-node-agent@sha256:1eb929223a1ecc12b246cbe4ba366092b139067aadb809fac8189b8ca274fab4 quay.io/coreos/tectonic-node-agent:92fb13f8b2a46702b8c10f278408f37d825ee2cb] 36791814} {[quay.io/coreos/kube-core-renderer-dev@sha256:a595dfe57b7992971563fcea8ac1858c306529a465f9b690911f4220d93d3c5c quay.io/coreos/kube-core-renderer-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 36535818} {[quay.io/coreos/kube-etcd-signer-server@sha256:c4c0becf6779523af5b644b53375d61bed9c4688d496cb2f88d4f08024ac5390 quay.io/coreos/kube-etcd-signer-server:678cc8e6841e2121ebfdb6e2db568fce290b67d6] 34655544} {[quay.io/coreos/tectonic-node-controller-dev@sha256:c9c17f7c4c738e519e36224ae8c71d3a881b92ffb86fdb75f358efebafa27d84 quay.io/coreos/tectonic-node-controller-dev:a437848532713f2fa4137e9a0f4f6a689cf554a8] 25570332} {[quay.io/coreos/tectonic-clu@sha256:4e6a907a433e741632c8f9a7d9d9009bc08ac494dce05e0a19f8fa0a440a3926 quay.io/coreos/tectonic-clu:v0.0.1] 5081911} {[quay.io/coreos/tectonic-stats-extender@sha256:6e7fe41ca2d63791c08d2cc4b4311d9e01b37fa3dc116d3e77e7306cbe29a0f1 quay.io/coreos/tectonic-stats-extender:487b3da4e175da96dabfb44fba65cdb8b823db2e] 2818916} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],},}
Jul 9 19:21:01.605: INFO:
Logging kubelet events for node ip-10-0-35-213.us-west-2.compute.internal
Jul 9 19:21:01.638: INFO:
Logging pods the kubelet thinks are on node ip-10-0-35-213.us-west-2.compute.internal
Jul 9 19:21:01.741: INFO: tectonic-stats-emitter-d87f669fd-988nl started at 2018-07-09 13:19:23 -0700 PDT (1+2 container statuses recorded)
Jul 9 19:21:01.741: INFO: Init container tectonic-stats-extender-init ready: true, restart count 0
Jul 9 19:21:01.741: INFO: Container tectonic-stats-emitter ready: true, restart count 0
Jul 9 19:21:01.741: INFO: Container tectonic-stats-extender ready: true, restart count 0
Jul 9 19:21:01.741: INFO: tectonic-channel-operator-5d878cd785-l66n4 started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:21:01.741: INFO: Container tectonic-channel-operator ready: true, restart count 0
Jul 9 19:21:01.741: INFO: kube-proxy-l2cnn started at 2018-07-09 13:14:22 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:21:01.741: INFO: Container kube-proxy ready: true, restart count 0
Jul 9 19:21:01.741: INFO: openshift-apiserver-rkms5 started at 2018-07-09 13:14:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:21:01.741: INFO: Container openshift-apiserver ready: true, restart count 0
Jul 9 19:21:01.741: INFO: tectonic-network-operator-jwwmp started at 2018-07-09 13:14:24 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:21:01.741: INFO: Container tectonic-network-operator ready: true, restart count 0
Jul 9 19:21:01.741: INFO: kube-dns-787c975867-txmxv started at 2018-07-09 13:16:08 -0700 PDT (0+3 container statuses recorded)
Jul 9 19:21:01.741: INFO: Container dnsmasq ready: true, restart count 0
Jul 9 19:21:01.741: INFO: Container kubedns ready: true, restart count 0
Jul 9 19:21:01.741: INFO: Container sidecar ready: true, restart count 0
Jul 9 19:21:01.741: INFO: kube-scheduler-68f8875b5c-s5tdr started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:21:01.741: INFO: Container kube-scheduler ready: true, restart count 0
Jul 9 19:21:01.741: INFO: tectonic-clu-6b8d87785f-fswbx started at 2018-07-09 13:19:06 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:21:01.741: INFO: Container tectonic-clu ready: true, restart count 0
Jul 9 19:21:01.741: INFO: kube-apiserver-cn2ps started at 2018-07-09 13:14:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:21:01.741: INFO: Container kube-apiserver ready: true, restart count 4
Jul 9 19:21:01.741: INFO: tectonic-node-controller-2ctqd started at 2018-07-09 13:18:05 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:21:01.741: INFO: Container tectonic-node-controller ready: true, restart count 0
Jul 9 19:21:01.741: INFO: tectonic-alm-operator-79b6996f74-prs9h started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:21:01.741: INFO: Container tectonic-alm-operator ready: true, restart count 0
Jul 9 19:21:01.741: INFO: tectonic-ingress-controller-operator-78c4d6b9fc-bknh9 started at 2018-07-09 13:18:13 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:21:01.741: INFO: Container tectonic-ingress-controller-operator ready: true, restart count 0
Jul 9 19:21:01.741: INFO: tectonic-node-agent-r77mj started at 2018-07-09 13:19:20 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:21:01.741: INFO: Container node-agent ready: true, restart count 4
Jul 9 19:21:01.741: INFO: pod-checkpointer-4882g started at 2018-07-09 13:14:24 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:21:01.741: INFO: Container pod-checkpointer ready: true, restart count 0
Jul 9 19:21:01.741: INFO: kube-flannel-m5wph started at 2018-07-09 13:15:39 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:21:01.741: INFO: Container install-cni ready: true, restart count 0
Jul 9 19:21:01.741: INFO: Container kube-flannel ready: true, restart count 0
Jul 9 19:21:01.741: INFO: openshift-controller-manager-99d6586b-qq685 started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:21:01.741: INFO: Container openshift-controller-manager ready: true, restart count 3
Jul 9 19:21:01.741: INFO: tectonic-node-controller-operator-648b9f5d6d-nxvlb started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:21:01.741: INFO: Container tectonic-node-controller-operator ready: true, restart count 0
Jul 9 19:21:01.741: INFO: kube-core-operator-75d546fbbb-c7ctx started at 2018-07-09 13:18:11 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:21:01.741: INFO: Container kube-core-operator ready: true, restart count 0
Jul 9 19:21:01.741: INFO: tectonic-utility-operator-786b69fc8b-4xffz started at 2018-07-09 13:18:13 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:21:01.741: INFO: Container tectonic-utility-operator ready: true, restart count 0
Jul 9 19:21:01.741: INFO: kube-addon-operator-675f99d7f8-c6pdt started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:21:01.741: INFO: Container kube-addon-operator ready: true, restart count 0
Jul 9 19:21:01.741: INFO: pod-checkpointer-4882g-ip-10-0-35-213.us-west-2.compute.internal started at <nil> (0+0 container statuses recorded)
Jul 9 19:21:01.741: INFO: kube-controller-manager-558dc6fb98-q6vr5 started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:21:01.741: INFO: Container kube-controller-manager ready: true, restart count 1
W0709 19:21:01.775908 11714 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 9 19:21:01.867: INFO:
Latency metrics for node ip-10-0-35-213.us-west-2.compute.internal
STEP: Dumping a list of prepulled images on each node...
Jul 9 19:21:01.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-2j5jw" for this suite.
Jul 9 19:21:08.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:21:10.794: INFO: namespace: e2e-tests-hostpath-2j5jw, resource: bindings, ignored listing per whitelist
Jul 9 19:21:11.468: INFO: namespace e2e-tests-hostpath-2j5jw deletion completed in 9.534952564s
• Failure [226.498 seconds]
[sig-storage] HostPath
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should support subPath [Suite:openshift/conformance/parallel] [Suite:k8s] [It]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:89
Expected error:
<*errors.errorString | 0xc422109f10>: {
s: "expected pod \"pod-host-path-test\" success: pod \"pod-host-path-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-07-09 19:17:27 -0700 PDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-07-09 19:17:27 -0700 PDT Reason:ContainersNotReady Message:containers with unready status: [test-container-1 test-container-2]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:0001-01-01 00:00:00 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [test-container-1 test-container-2]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-07-09 19:17:27 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.0.130.54 PodIP:10.2.2.61 StartTime:2018-07-09 19:17:27 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:test-container-1 State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2018-07-09 19:17:28 -0700 PDT,FinishedAt:2018-07-09 19:17:28 -0700 PDT,ContainerID:docker://2cd77b44fb6fdc32e044424e85163cc9d9a912bcc3ab095019a727af01cab8f8,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest-amd64@sha256:dc4e2dcfbde16249c4662de673295d00778577bc2e2ca7013a1b85d4f47398ca ContainerID:docker://2cd77b44fb6fdc32e044424e85163cc9d9a912bcc3ab095019a727af01cab8f8} {Name:test-container-2 State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2018-07-09 19:17:28 -0700 PDT,FinishedAt:2018-07-09 19:20:28 -0700 PDT,ContainerID:docker://f08d07a6f13f69f8f0450200ecff44ec62016d3cee4a8dc6778b39ab9588becd,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest-amd64@sha256:dc4e2dcfbde16249c4662de673295d00778577bc2e2ca7013a1b85d4f47398ca ContainerID:docker://f08d07a6f13f69f8f0450200ecff44ec62016d3cee4a8dc6778b39ab9588becd}] QOSClass:BestEffort}",
}
expected pod "pod-host-path-test" success: pod "pod-host-path-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-07-09 19:17:27 -0700 PDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-07-09 19:17:27 -0700 PDT Reason:ContainersNotReady Message:containers with unready status: [test-container-1 test-container-2]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:0001-01-01 00:00:00 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [test-container-1 test-container-2]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-07-09 19:17:27 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.0.130.54 PodIP:10.2.2.61 StartTime:2018-07-09 19:17:27 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:test-container-1 State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2018-07-09 19:17:28 -0700 PDT,FinishedAt:2018-07-09 19:17:28 -0700 PDT,ContainerID:docker://2cd77b44fb6fdc32e044424e85163cc9d9a912bcc3ab095019a727af01cab8f8,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest-amd64@sha256:dc4e2dcfbde16249c4662de673295d00778577bc2e2ca7013a1b85d4f47398ca ContainerID:docker://2cd77b44fb6fdc32e044424e85163cc9d9a912bcc3ab095019a727af01cab8f8} {Name:test-container-2 State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2018-07-09 19:17:28 -0700 PDT,FinishedAt:2018-07-09 19:20:28 -0700 PDT,ContainerID:docker://f08d07a6f13f69f8f0450200ecff44ec62016d3cee4a8dc6778b39ab9588becd,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest-amd64@sha256:dc4e2dcfbde16249c4662de673295d00778577bc2e2ca7013a1b85d4f47398ca ContainerID:docker://f08d07a6f13f69f8f0450200ecff44ec62016d3cee4a8dc6778b39ab9588becd}] QOSClass:BestEffort}
not to have occurred
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:2290
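The status dump above pins the failure down: test-container-1 wrote its file and exited 0 immediately, while test-container-2 polled for the content for the full three-minute retry window (started 19:17:28, finished 19:20:28) and exited 1; with RestartPolicy Never that flips the pod to Phase=Failed, which is what the "expected pod success" assertion reports. (The 30s stop_container p99 in the latency metrics above is most likely just the default termination grace period during cleanup.) A sketch of the pod's shape, close to the framework's fixture but with abridged values:

    package diagnostics

    import (
        "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // hostPathSubPathPod mirrors "pod-host-path-test": two containers
    // share one hostPath volume; the writer mounts a subPath of it and
    // the reader retries until the file shows up under that subPath.
    // A reader timeout is exactly the exit-1-after-180s seen above.
    func hostPathSubPathPod() *v1.Pod {
        image := "gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0"
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-test"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Volumes: []v1.Volume{{
                    Name:         "test-volume",
                    VolumeSource: v1.VolumeSource{HostPath: &v1.HostPathVolumeSource{Path: "/tmp"}},
                }},
                Containers: []v1.Container{
                    {
                        Name:  "test-container-1",
                        Image: image,
                        Args:  []string{"--new_file_0644=/test-volume/test-file"},
                        VolumeMounts: []v1.VolumeMount{{
                            Name: "test-volume", MountPath: "/test-volume", SubPath: "sub-path",
                        }},
                    },
                    {
                        Name:  "test-container-2",
                        Image: image,
                        Args:  []string{"--file_content_in_loop=/test-volume/sub-path/test-file", "--retry_time=180"},
                        VolumeMounts: []v1.VolumeMount{{
                            Name: "test-volume", MountPath: "/test-volume",
                        }},
                    },
                },
            },
        }
    }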
------------------------------
[sig-storage] Downward API volume
should update labels on modification [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:20:42.538: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:20:44.601: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-qpnsn
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:38
[It] should update labels on modification [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating the pod
Jul 9 19:20:48.175: INFO: Successfully updated pod "labelsupdateda06d733-83e7-11e8-992b-28d244b00276"
[AfterEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:20:50.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qpnsn" for this suite.
Jul 9 19:21:12.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:21:15.398: INFO: namespace: e2e-tests-downward-api-qpnsn, resource: bindings, ignored listing per whitelist
Jul 9 19:21:16.709: INFO: namespace e2e-tests-downward-api-qpnsn deletion completed in 26.411067994s
• [SLOW TEST:34.170 seconds]
[sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:33
should update labels on modification [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
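What this test exercises: a downwardAPI volume maps the pod's own labels to a file, and after the labels are updated through the API (the "Successfully updated pod" line at 19:20:48) the kubelet rewrites the file on its next sync, which the test observes from the container's output. A minimal sketch of that volume shape, with illustrative names:

    package diagnostics

    import "k8s.io/api/core/v1"

    // labelsVolume projects the pod's labels into a file; the kubelet
    // refreshes the file after the labels change through the API.
    func labelsVolume() v1.Volume {
        return v1.Volume{
            Name: "podinfo", // illustrative
            VolumeSource: v1.VolumeSource{
                DownwardAPI: &v1.DownwardAPIVolumeSource{
                    Items: []v1.DownwardAPIVolumeFile{{
                        Path:     "labels",
                        FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                    }},
                },
            },
        }
    }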
------------------------------
[sig-api-machinery] Downward API
should provide pod UID as env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-api-machinery] Downward API
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:21:03.810: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:21:05.487: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-bcvkn
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward api env vars
Jul 9 19:21:06.209: INFO: Waiting up to 5m0s for pod "downward-api-e67058f8-83e7-11e8-8fe2-28d244b00276" in namespace "e2e-tests-downward-api-bcvkn" to be "success or failure"
Jul 9 19:21:06.241: INFO: Pod "downward-api-e67058f8-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 32.021781ms
Jul 9 19:21:08.278: INFO: Pod "downward-api-e67058f8-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068938105s
Jul 9 19:21:10.317: INFO: Pod "downward-api-e67058f8-83e7-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.107594978s
STEP: Saw pod success
Jul 9 19:21:10.317: INFO: Pod "downward-api-e67058f8-83e7-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:21:10.352: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downward-api-e67058f8-83e7-11e8-8fe2-28d244b00276 container dapi-container: <nil>
STEP: delete the pod
Jul 9 19:21:10.434: INFO: Waiting for pod downward-api-e67058f8-83e7-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:21:10.471: INFO: Pod downward-api-e67058f8-83e7-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-api-machinery] Downward API
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:21:10.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bcvkn" for this suite.
Jul 9 19:21:16.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:21:19.200: INFO: namespace: e2e-tests-downward-api-bcvkn, resource: bindings, ignored listing per whitelist
Jul 9 19:21:20.501: INFO: namespace e2e-tests-downward-api-bcvkn deletion completed in 9.989426573s
• [SLOW TEST:16.691 seconds]
[sig-api-machinery] Downward API
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downward_api.go:37
should provide pod UID as env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
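The pod UID only exists once the apiserver has created the pod object, so the container cannot know it ahead of time; the downward API injects it as an environment variable at container start. A sketch of the wiring (the variable name is illustrative):

    package diagnostics

    import "k8s.io/api/core/v1"

    // podUIDEnv exposes the pod's own UID to the container via a
    // downward-API fieldRef; the test then checks the echoed value.
    func podUIDEnv() v1.EnvVar {
        return v1.EnvVar{
            Name: "POD_UID", // illustrative
            ValueFrom: &v1.EnvVarSource{
                FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.uid"},
            },
        }
    }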
------------------------------
S
------------------------------
[sig-storage] ConfigMap
updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:19:42.933: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:19:44.473: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-configmap-bc6g9
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
Jul 9 19:19:45.108: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
STEP: Creating configMap with name configmap-test-upd-b620d07a-83e7-11e8-bd2e-28d244b00276
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-b620d07a-83e7-11e8-bd2e-28d244b00276
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:20:58.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-bc6g9" for this suite.
Jul 9 19:21:20.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:21:23.203: INFO: namespace: e2e-tests-configmap-bc6g9, resource: bindings, ignored listing per whitelist
Jul 9 19:21:24.227: INFO: namespace e2e-tests-configmap-bc6g9 deletion completed in 25.372270298s
• [SLOW TEST:101.294 seconds]
[sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
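Note the timing: the configmap was updated right after the pod came up, but the change was not observed in the volume until shortly before 19:20:58, roughly a minute later. That is expected behavior rather than slowness in the test: configmap volumes are refreshed by the kubelet's periodic sync (the "No TTL annotation found" line just means the node sets no custom cache TTL), so updates propagate eventually, not instantly. The observation step is a plain poll; a sketch with illustrative interval, timeout, and log source:

    package diagnostics

    import (
        "strings"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForUpdatedValue polls the container's output (supplied by the
    // caller, e.g. a pod-logs fetch) until the updated value appears.
    func waitForUpdatedValue(getLogs func() (string, error), want string) error {
        return wait.Poll(2*time.Second, 2*time.Minute, func() (bool, error) {
            out, err := getLogs()
            if err != nil {
                return false, nil // tolerate transient errors; keep polling
            }
            return strings.Contains(out, want), nil
        })
    }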
------------------------------
SSS
------------------------------
[k8s.io] Docker Containers
should be able to override the image's default command and arguments [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Docker Containers
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:21:20.503: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:21:22.197: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-containers-f9hmf
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test override all
Jul 9 19:21:22.891: INFO: Waiting up to 5m0s for pod "client-containers-f06198fc-83e7-11e8-8fe2-28d244b00276" in namespace "e2e-tests-containers-f9hmf" to be "success or failure"
Jul 9 19:21:22.922: INFO: Pod "client-containers-f06198fc-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 30.522108ms
Jul 9 19:21:24.953: INFO: Pod "client-containers-f06198fc-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061966293s
Jul 9 19:21:26.984: INFO: Pod "client-containers-f06198fc-83e7-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093040223s
STEP: Saw pod success
Jul 9 19:21:26.984: INFO: Pod "client-containers-f06198fc-83e7-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:21:27.020: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod client-containers-f06198fc-83e7-11e8-8fe2-28d244b00276 container test-container: <nil>
STEP: delete the pod
Jul 9 19:21:27.092: INFO: Waiting for pod client-containers-f06198fc-83e7-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:21:27.124: INFO: Pod client-containers-f06198fc-83e7-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [k8s.io] Docker Containers
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:21:27.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-f9hmf" for this suite.
Jul 9 19:21:33.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:21:36.736: INFO: namespace: e2e-tests-containers-f9hmf, resource: bindings, ignored listing per whitelist
Jul 9 19:21:37.289: INFO: namespace e2e-tests-containers-f9hmf deletion completed in 10.128596748s
• [SLOW TEST:16.786 seconds]
[k8s.io] Docker Containers
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should be able to override the image's default command and arguments [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
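"Override all" here means both halves of the image contract: in a container spec, Command replaces the image's ENTRYPOINT and Args replaces its CMD; setting only one of them leaves the other half to whatever the image declares. A sketch (image and strings are illustrative):

    package diagnostics

    import "k8s.io/api/core/v1"

    // overrideAll replaces both ENTRYPOINT (Command) and CMD (Args),
    // which is the combination this test asserts on.
    func overrideAll() v1.Container {
        return v1.Container{
            Name:    "test-container",
            Image:   "docker.io/library/busybox", // illustrative
            Command: []string{"/bin/sh", "-c"},
            Args:    []string{"echo overridden"},
        }
    }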
------------------------------
[sig-storage] Projected
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:87
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:21:24.230: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:21:25.848: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-pjp6w
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:87
Jul 9 19:21:26.743: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-secret-namespace-gr7ll
STEP: Creating projection with secret that has name projected-secret-test-f29dc6de-83e7-11e8-bd2e-28d244b00276
STEP: Creating a pod to test consume secrets
Jul 9 19:21:27.344: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f30ac01d-83e7-11e8-bd2e-28d244b00276" in namespace "e2e-tests-projected-pjp6w" to be "success or failure"
Jul 9 19:21:27.373: INFO: Pod "pod-projected-secrets-f30ac01d-83e7-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 28.949029ms
Jul 9 19:21:29.402: INFO: Pod "pod-projected-secrets-f30ac01d-83e7-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058308202s
Jul 9 19:21:31.439: INFO: Pod "pod-projected-secrets-f30ac01d-83e7-11e8-bd2e-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095428007s
STEP: Saw pod success
Jul 9 19:21:31.439: INFO: Pod "pod-projected-secrets-f30ac01d-83e7-11e8-bd2e-28d244b00276" satisfied condition "success or failure"
Jul 9 19:21:31.470: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-projected-secrets-f30ac01d-83e7-11e8-bd2e-28d244b00276 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jul 9 19:21:31.543: INFO: Waiting for pod pod-projected-secrets-f30ac01d-83e7-11e8-bd2e-28d244b00276 to disappear
Jul 9 19:21:31.572: INFO: Pod pod-projected-secrets-f30ac01d-83e7-11e8-bd2e-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:21:31.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pjp6w" for this suite.
Jul 9 19:21:37.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:21:39.688: INFO: namespace: e2e-tests-projected-pjp6w, resource: bindings, ignored listing per whitelist
Jul 9 19:21:41.290: INFO: namespace e2e-tests-projected-pjp6w deletion completed in 9.679346612s
STEP: Destroying namespace "e2e-tests-secret-namespace-gr7ll" for this suite.
Jul 9 19:21:47.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:21:49.741: INFO: namespace: e2e-tests-secret-namespace-gr7ll, resource: bindings, ignored listing per whitelist
Jul 9 19:21:50.910: INFO: namespace e2e-tests-secret-namespace-gr7ll deletion completed in 9.619386792s
• [SLOW TEST:26.680 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:87
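The property under test: a pod's secret references are LocalObjectReferences, resolved only within the pod's own namespace, so the same-named secret created in e2e-tests-secret-namespace-gr7ll cannot shadow or break the mount in e2e-tests-projected-pjp6w. A sketch of the projected volume involved (the secret name is illustrative):

    package diagnostics

    import "k8s.io/api/core/v1"

    // projectedSecretVolume mounts a secret through the projected
    // volume API; Name can only resolve inside the pod's namespace.
    func projectedSecretVolume() v1.Volume {
        return v1.Volume{
            Name: "projected-secret-volume",
            VolumeSource: v1.VolumeSource{
                Projected: &v1.ProjectedVolumeSource{
                    Sources: []v1.VolumeProjection{{
                        Secret: &v1.SecretProjection{
                            LocalObjectReference: v1.LocalObjectReference{Name: "projected-secret-test"},
                        },
                    }},
                },
            },
        }
    }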
------------------------------
[sig-storage] Projected
should provide podname only [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:21:37.290: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:21:39.041: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-k98n9
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should provide podname only [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:21:39.759: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa70d37c-83e7-11e8-8fe2-28d244b00276" in namespace "e2e-tests-projected-k98n9" to be "success or failure"
Jul 9 19:21:39.790: INFO: Pod "downwardapi-volume-fa70d37c-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 30.60253ms
Jul 9 19:21:41.824: INFO: Pod "downwardapi-volume-fa70d37c-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064754136s
Jul 9 19:21:43.862: INFO: Pod "downwardapi-volume-fa70d37c-83e7-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.102430482s
STEP: Saw pod success
Jul 9 19:21:43.862: INFO: Pod "downwardapi-volume-fa70d37c-83e7-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:21:43.893: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-fa70d37c-83e7-11e8-8fe2-28d244b00276 container client-container: <nil>
STEP: delete the pod
Jul 9 19:21:43.966: INFO: Waiting for pod downwardapi-volume-fa70d37c-83e7-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:21:44.000: INFO: Pod downwardapi-volume-fa70d37c-83e7-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:21:44.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-k98n9" for this suite.
Jul 9 19:21:50.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:21:53.641: INFO: namespace: e2e-tests-projected-k98n9, resource: bindings, ignored listing per whitelist
Jul 9 19:21:54.031: INFO: namespace e2e-tests-projected-k98n9 deletion completed in 9.979733746s
• [SLOW TEST:16.741 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
should provide podname only [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
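For reference, a minimal sketch of the kind of pod the "should provide podname only" test creates: a projected downwardAPI volume exposing just metadata.name, read back by the client container. Names and image here are illustrative, not the suite's exact fixture.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF

The suite then waits for the pod to reach "success or failure" and compares the container log against the expected pod name, as in the Waiting/Saw pod success lines above.
------------------------------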
[Feature:Builds] build have source revision metadata started build
should contain source revision information [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/revision.go:41
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds] build have source revision metadata
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:21:11.475: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds] build have source revision metadata
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:21:13.068: INFO: configPath is now "/tmp/e2e-test-cli-build-revision-fnrv9-user.kubeconfig"
Jul 9 19:21:13.068: INFO: The user is now "e2e-test-cli-build-revision-fnrv9-user"
Jul 9 19:21:13.068: INFO: Creating project "e2e-test-cli-build-revision-fnrv9"
Jul 9 19:21:13.247: INFO: Waiting on permissions in project "e2e-test-cli-build-revision-fnrv9" ...
[BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/revision.go:22
Jul 9 19:21:13.309: INFO:
docker info output:
Containers: 4
Running: 0
Paused: 0
Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 20
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/revision.go:26
STEP: waiting for builder service account
Jul 9 19:21:13.452: INFO: Running 'oc create --config=/tmp/e2e-test-cli-build-revision-fnrv9-user.kubeconfig --namespace=e2e-test-cli-build-revision-fnrv9 -f /tmp/fixture-testdata-dir180677416/test/extended/testdata/builds/test-build-revision.json'
buildconfig.build.openshift.io "sample-build" created
[It] should contain source revision information [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/revision.go:41
STEP: starting the build
Jul 9 19:21:13.718: INFO: Running 'oc start-build --config=/tmp/e2e-test-cli-build-revision-fnrv9-user.kubeconfig --namespace=e2e-test-cli-build-revision-fnrv9 sample-build -o=name'
Jul 9 19:21:13.999: INFO:
start-build output with args [sample-build -o=name]:
Error><nil>
StdOut>
build/sample-build-1
StdErr>
Jul 9 19:21:14.000: INFO: Waiting for sample-build-1 to complete
Jul 9 19:21:50.092: INFO: Done waiting for sample-build-1: util.BuildResult{BuildPath:"build/sample-build-1", BuildName:"sample-build-1", StartBuildStdErr:"", StartBuildStdOut:"build/sample-build-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc4214f4c00), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc4200e3860)}
with error: <nil>
STEP: verifying the status of "build/sample-build-1"
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/revision.go:33
[AfterEach] [Feature:Builds] build have source revision metadata
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:21:50.244: INFO: namespace : e2e-test-cli-build-revision-fnrv9 api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds] build have source revision metadata
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:21:56.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:44.870 seconds]
[Feature:Builds] build have source revision metadata
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/revision.go:14
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/revision.go:21
started build
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/revision.go:40
should contain source revision information [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/revision.go:41
------------------------------
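Once sample-build-1 completes, the source revision metadata this test verifies lives on the Build object itself. A hedged way to inspect it by hand (jsonpath fields per the OpenShift build API; build name as in the log above):

oc get build sample-build-1 -o jsonpath='{.spec.revision.git.commit}{"\n"}'
oc get build sample-build-1 -o jsonpath='{.spec.revision.git.author.name}{"\n"}'
------------------------------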
S
------------------------------
[Feature:Builds][Conformance] imagechangetriggers
imagechangetriggers should trigger builds of all types [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/imagechangetriggers.go:42
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds][Conformance] imagechangetriggers
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:21:54.032: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds][Conformance] imagechangetriggers
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:21:55.831: INFO: configPath is now "/tmp/e2e-test-imagechangetriggers-bdpwn-user.kubeconfig"
Jul 9 19:21:55.831: INFO: The user is now "e2e-test-imagechangetriggers-bdpwn-user"
Jul 9 19:21:55.831: INFO: Creating project "e2e-test-imagechangetriggers-bdpwn"
Jul 9 19:21:56.015: INFO: Waiting on permissions in project "e2e-test-imagechangetriggers-bdpwn" ...
[BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/imagechangetriggers.go:25
Jul 9 19:21:56.062: INFO:
docker info output:
Containers: 4
Running: 0
Paused: 0
Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 20
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/imagechangetriggers.go:29
STEP: waiting for builder service account
[It] imagechangetriggers should trigger builds of all types [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/imagechangetriggers.go:42
Jul 9 19:21:56.207: INFO: Running 'oc create --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-imagechangetriggers-bdpwn -f /tmp/fixture-testdata-dir333495585/test/extended/testdata/builds/test-imagechangetriggers.yaml'
Jul 9 19:21:57.056: INFO: Error running &{/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc create --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-imagechangetriggers-bdpwn -f /tmp/fixture-testdata-dir333495585/test/extended/testdata/builds/test-imagechangetriggers.yaml] [] imagestream.image.openshift.io "nodejs-ex" created
buildconfig.build.openshift.io "bc-source" created
buildconfig.build.openshift.io "bc-docker" created
buildconfig.build.openshift.io "bc-custom" created
Error from server: Jenkins pipeline template openshift/jenkins-ephemeral not found
imagestream.image.openshift.io "nodejs-ex" created
buildconfig.build.openshift.io "bc-source" created
buildconfig.build.openshift.io "bc-docker" created
buildconfig.build.openshift.io "bc-custom" created
Error from server: Jenkins pipeline template openshift/jenkins-ephemeral not found
[] <nil> 0xc42157e6c0 exit status 1 <nil> <nil> true [0xc42113e058 0xc42113e080 0xc42113e080] [0xc42113e058 0xc42113e080] [0xc42113e060 0xc42113e078] [0x916090 0x916190] 0xc4210f51a0 <nil>}:
imagestream.image.openshift.io "nodejs-ex" created
buildconfig.build.openshift.io "bc-source" created
buildconfig.build.openshift.io "bc-docker" created
buildconfig.build.openshift.io "bc-custom" created
Error from server: Jenkins pipeline template openshift/jenkins-ephemeral not found
imagestream.image.openshift.io "nodejs-ex" created
buildconfig.build.openshift.io "bc-source" created
buildconfig.build.openshift.io "bc-docker" created
buildconfig.build.openshift.io "bc-custom" created
Error from server: Jenkins pipeline template openshift/jenkins-ephemeral not found
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/imagechangetriggers.go:35
Jul 9 19:21:57.091: INFO: Dumping pod state for namespace e2e-test-imagechangetriggers-bdpwn
Jul 9 19:21:57.091: INFO: Running 'oc get --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-imagechangetriggers-bdpwn pods -o yaml'
Jul 9 19:21:57.369: INFO: apiVersion: v1
items: []
kind: List
metadata:
resourceVersion: ""
selfLink: ""
[AfterEach] [Feature:Builds][Conformance] imagechangetriggers
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:21:57.477: INFO: namespace : e2e-test-imagechangetriggers-bdpwn api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][Conformance] imagechangetriggers
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Dumping a list of prepulled images on each node...
Jul 9 19:22:03.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• Failure [9.575 seconds]
[Feature:Builds][Conformance] imagechangetriggers
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/imagechangetriggers.go:16
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/imagechangetriggers.go:24
imagechangetriggers should trigger builds of all types [Suite:openshift/conformance/parallel] [It]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/imagechangetriggers.go:42
Expected error:
<*util.ExitError | 0xc4220cff20>: {
Cmd: "oc create --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-imagechangetriggers-bdpwn -f /tmp/fixture-testdata-dir333495585/test/extended/testdata/builds/test-imagechangetriggers.yaml",
StdErr: "imagestream.image.openshift.io \"nodejs-ex\" created\nbuildconfig.build.openshift.io \"bc-source\" created\nbuildconfig.build.openshift.io \"bc-docker\" created\nbuildconfig.build.openshift.io \"bc-custom\" created\nError from server: Jenkins pipeline template openshift/jenkins-ephemeral not found",
ExitError: {
ProcessState: {
pid: 16527,
status: 256,
rusage: {
Utime: {Sec: 0, Usec: 136000},
Stime: {Sec: 0, Usec: 4000},
Maxrss: 97516,
Ixrss: 0,
Idrss: 0,
Isrss: 0,
Minflt: 6969,
Majflt: 0,
Nswap: 0,
Inblock: 0,
Oublock: 0,
Msgsnd: 0,
Msgrcv: 0,
Nsignals: 0,
Nvcsw: 736,
Nivcsw: 58,
},
},
Stderr: nil,
},
}
exit status 1
not to have occurred
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/imagechangetriggers.go:44
------------------------------
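The failure above is environmental rather than a triggers bug: the fixture instantiates a pipeline BuildConfig that references the openshift/jenkins-ephemeral template, which is not installed in this cluster, so 'oc create -f' exits non-zero even though the imagestream and the other three BuildConfigs were created. A quick pre-flight check (sketch, assuming access to the openshift namespace):

oc get template jenkins-ephemeral -n openshift
------------------------------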
SS
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:419
Jul 9 19:22:03.610: INFO: This plugin does not isolate namespaces by default.
[AfterEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:22:03.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:22:03.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[Area:Networking] network isolation
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10
when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:418
should allow communication from non-default to default namespace on a different node [Suite:openshift/conformance/parallel] [BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:53
Jul 9 19:22:03.610: This plugin does not isolate namespaces by default.
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
SS
------------------------------
[sig-storage] Secrets
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:21:56.347: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:21:58.562: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-secrets-tmdcm
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating secret with name secret-test-0618e1eb-83e8-11e8-8401-28d244b00276
STEP: Creating a pod to test consume secrets
Jul 9 19:21:59.352: INFO: Waiting up to 5m0s for pod "pod-secrets-061de3b2-83e8-11e8-8401-28d244b00276" in namespace "e2e-tests-secrets-tmdcm" to be "success or failure"
Jul 9 19:21:59.388: INFO: Pod "pod-secrets-061de3b2-83e8-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 35.321902ms
Jul 9 19:22:01.417: INFO: Pod "pod-secrets-061de3b2-83e8-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.065057597s
STEP: Saw pod success
Jul 9 19:22:01.417: INFO: Pod "pod-secrets-061de3b2-83e8-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:22:01.455: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-secrets-061de3b2-83e8-11e8-8401-28d244b00276 container secret-volume-test: <nil>
STEP: delete the pod
Jul 9 19:22:01.536: INFO: Waiting for pod pod-secrets-061de3b2-83e8-11e8-8401-28d244b00276 to disappear
Jul 9 19:22:01.575: INFO: Pod pod-secrets-061de3b2-83e8-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:22:01.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-tmdcm" for this suite.
Jul 9 19:22:07.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:22:10.011: INFO: namespace: e2e-tests-secrets-tmdcm, resource: bindings, ignored listing per whitelist
Jul 9 19:22:11.419: INFO: namespace e2e-tests-secrets-tmdcm deletion completed in 9.808353935s
• [SLOW TEST:15.072 seconds]
[sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
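For reference, a minimal sketch of the consume-secrets pod shape this test exercises: a secret volume with defaultMode, mounted into a non-root pod with fsGroup set. The secret, names, and image are illustrative; the suite generates its own fixture.

kubectl create secret generic secret-test-example --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
    fsGroup: 1001
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example
      defaultMode: 0440
EOF
------------------------------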
SS
------------------------------
[sig-storage] EmptyDir volumes when FSGroup is specified
volume on tmpfs should have the correct mode using FSGroup [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:65
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:22:03.613: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:22:05.347: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-qbpr9
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode using FSGroup [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:65
STEP: Creating a pod to test emptydir volume type on tmpfs
Jul 9 19:22:06.225: INFO: Waiting up to 5m0s for pod "pod-0a34e8ac-83e8-11e8-8fe2-28d244b00276" in namespace "e2e-tests-emptydir-qbpr9" to be "success or failure"
Jul 9 19:22:06.258: INFO: Pod "pod-0a34e8ac-83e8-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 33.066381ms
Jul 9 19:22:08.296: INFO: Pod "pod-0a34e8ac-83e8-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.071268095s
STEP: Saw pod success
Jul 9 19:22:08.296: INFO: Pod "pod-0a34e8ac-83e8-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:22:08.333: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-0a34e8ac-83e8-11e8-8fe2-28d244b00276 container test-container: <nil>
STEP: delete the pod
Jul 9 19:22:08.409: INFO: Waiting for pod pod-0a34e8ac-83e8-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:22:08.439: INFO: Pod pod-0a34e8ac-83e8-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:22:08.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qbpr9" for this suite.
Jul 9 19:22:14.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:22:16.785: INFO: namespace: e2e-tests-emptydir-qbpr9, resource: bindings, ignored listing per whitelist
Jul 9 19:22:18.670: INFO: namespace e2e-tests-emptydir-qbpr9 deletion completed in 10.197573968s
• [SLOW TEST:15.057 seconds]
[sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
when FSGroup is specified
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:44
volume on tmpfs should have the correct mode using FSGroup [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:65
------------------------------
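For reference, the emptydir-on-tmpfs shape this test checks, sketched by hand: an emptyDir with medium Memory plus a pod-level fsGroup, after which the mount should be group-owned by the fsGroup GID with the setgid bit set. Names and image are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example
spec:
  restartPolicy: Never
  securityContext:
    fsGroup: 123
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
EOF
------------------------------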
[Feature:Prometheus][Feature:Builds] Prometheus when installed to the cluster
should start and expose a secured proxy and verify build metrics [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/prometheus/prometheus_builds.go:36
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Prometheus][Feature:Builds] Prometheus
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:13:52.928: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Prometheus][Feature:Builds] Prometheus
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:13:55.279: INFO: configPath is now "/tmp/e2e-test-prometheus-4fqst-user.kubeconfig"
Jul 9 19:13:55.279: INFO: The user is now "e2e-test-prometheus-4fqst-user"
Jul 9 19:13:55.279: INFO: Creating project "e2e-test-prometheus-4fqst"
Jul 9 19:13:55.406: INFO: Waiting on permissions in project "e2e-test-prometheus-4fqst" ...
[BeforeEach] [Feature:Prometheus][Feature:Builds] Prometheus
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/prometheus/prometheus_builds.go:31
[It] should start and expose a secured proxy and verify build metrics [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/prometheus/prometheus_builds.go:36
Jul 9 19:14:15.291: INFO: Creating new exec pod
STEP: verifying the oauth-proxy reports a 403 on the root URL
Jul 9 19:14:17.440: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -k -s -o /dev/null -w '%{http_code}' "https://prometheus.kube-system.svc:443"'
Jul 9 19:14:18.221: INFO: stderr: ""
STEP: verifying a service account token is able to authenticate
Jul 9 19:14:18.221: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -k -s -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' -o /dev/null -w '%{http_code}' "https://prometheus.kube-system.svc:443/graph"'
Jul 9 19:14:18.987: INFO: stderr: ""
STEP: waiting for builder service account
STEP: calling oc new-app /tmp/fixture-testdata-dir574852015/examples/jenkins/application-template.json
Jul 9 19:14:19.193: INFO: Running 'oc new-app --config=/tmp/e2e-test-prometheus-4fqst-user.kubeconfig --namespace=e2e-test-prometheus-4fqst /tmp/fixture-testdata-dir574852015/examples/jenkins/application-template.json'
--> Deploying template "e2e-test-prometheus-4fqst/nodejs-helloworld-sample" for "/tmp/fixture-testdata-dir574852015/examples/jenkins/application-template.json" to project e2e-test-prometheus-4fqst
nodejs-helloworld-sample
---------
This example shows how to create a simple nodejs application in openshift origin v3
* With parameters:
* Memory Limit=512Mi
* Namespace=openshift
* Administrator Username=adminFGD # generated
* Administrator Password=D1026mG7 # generated
--> Creating resources ...
service "frontend-prod" created
route "frontend" created
deploymentconfig "frontend-prod" created
service "frontend" created
imagestream "origin-nodejs-sample" created
imagestream "origin-nodejs-sample2" created
imagestream "origin-nodejs-sample3" created
imagestream "nodejs-010-centos7" created
buildconfig "frontend" created
deploymentconfig "frontend" created
--> Success
Access your application via route 'frontend-e2e-test-prometheus-4fqst.yifan-test-cluster.coreservices.team.coreos.systems'
Use 'oc start-build frontend' to start a build.
Run 'oc status' to view your app.
STEP: wait on imagestreams used by build
Jul 9 19:14:20.197: INFO: Running scan #0
Jul 9 19:14:20.197: INFO: Checking language ruby
Jul 9 19:14:20.231: INFO: Checking tag 2.0
Jul 9 19:14:20.231: INFO: Checking tag 2.2
Jul 9 19:14:20.231: INFO: Checking tag 2.3
Jul 9 19:14:20.231: INFO: Checking tag 2.4
Jul 9 19:14:20.231: INFO: Checking tag 2.5
Jul 9 19:14:20.231: INFO: Checking tag latest
Jul 9 19:14:20.231: INFO: Checking language nodejs
Jul 9 19:14:20.263: INFO: Checking tag 6
Jul 9 19:14:20.263: INFO: Checking tag 8
Jul 9 19:14:20.263: INFO: Checking tag latest
Jul 9 19:14:20.263: INFO: Checking tag 0.10
Jul 9 19:14:20.263: INFO: Checking tag 4
Jul 9 19:14:20.263: INFO: Checking language perl
Jul 9 19:14:20.298: INFO: Checking tag 5.24
Jul 9 19:14:20.299: INFO: Checking tag latest
Jul 9 19:14:20.299: INFO: Checking tag 5.16
Jul 9 19:14:20.299: INFO: Checking tag 5.20
Jul 9 19:14:20.299: INFO: Checking language php
Jul 9 19:14:20.345: INFO: Checking tag 5.5
Jul 9 19:14:20.345: INFO: Checking tag 5.6
Jul 9 19:14:20.345: INFO: Checking tag 7.0
Jul 9 19:14:20.345: INFO: Checking tag 7.1
Jul 9 19:14:20.345: INFO: Checking tag latest
Jul 9 19:14:20.345: INFO: Checking language python
Jul 9 19:14:20.384: INFO: Checking tag 3.6
Jul 9 19:14:20.384: INFO: Checking tag latest
Jul 9 19:14:20.384: INFO: Checking tag 2.7
Jul 9 19:14:20.384: INFO: Checking tag 3.3
Jul 9 19:14:20.384: INFO: Checking tag 3.4
Jul 9 19:14:20.384: INFO: Checking tag 3.5
Jul 9 19:14:20.384: INFO: Checking language wildfly
Jul 9 19:14:20.421: INFO: Checking tag 12.0
Jul 9 19:14:20.421: INFO: Checking tag 8.1
Jul 9 19:14:20.421: INFO: Checking tag 9.0
Jul 9 19:14:20.421: INFO: Checking tag latest
Jul 9 19:14:20.421: INFO: Checking tag 10.0
Jul 9 19:14:20.421: INFO: Checking tag 10.1
Jul 9 19:14:20.421: INFO: Checking tag 11.0
Jul 9 19:14:20.421: INFO: Checking language mysql
Jul 9 19:14:20.457: INFO: Checking tag 5.5
Jul 9 19:14:20.457: INFO: Checking tag 5.6
Jul 9 19:14:20.457: INFO: Checking tag 5.7
Jul 9 19:14:20.457: INFO: Checking tag latest
Jul 9 19:14:20.457: INFO: Checking language postgresql
Jul 9 19:14:20.499: INFO: Checking tag 9.2
Jul 9 19:14:20.499: INFO: Checking tag 9.4
Jul 9 19:14:20.499: INFO: Checking tag 9.5
Jul 9 19:14:20.499: INFO: Checking tag 9.6
Jul 9 19:14:20.499: INFO: Checking tag latest
Jul 9 19:14:20.499: INFO: Checking language mongodb
Jul 9 19:14:20.536: INFO: Checking tag 2.4
Jul 9 19:14:20.536: INFO: Checking tag 2.6
Jul 9 19:14:20.536: INFO: Checking tag 3.2
Jul 9 19:14:20.536: INFO: Checking tag 3.4
Jul 9 19:14:20.536: INFO: Checking tag latest
Jul 9 19:14:20.536: INFO: Checking language jenkins
Jul 9 19:14:20.572: INFO: Checking tag 1
Jul 9 19:14:20.572: INFO: Checking tag 2
Jul 9 19:14:20.572: INFO: Checking tag latest
Jul 9 19:14:20.572: INFO: Success!
STEP: explicitly set up image stream tag, avoid timing window
Jul 9 19:14:20.572: INFO: Running 'oc tag --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-prometheus-4fqst openshift/nodejs:latest e2e-test-prometheus-4fqst/nodejs-010-centos7:latest'
Tag nodejs-010-centos7:latest set to openshift/nodejs@sha256:3f262ed6ed7ec7497576e1117ddfafacaa23fecf34693e05a0abcb652b93e653.
STEP: start build
Jul 9 19:14:20.995: INFO: Running 'oc start-build --config=/tmp/e2e-test-prometheus-4fqst-user.kubeconfig --namespace=e2e-test-prometheus-4fqst frontend -o=name'
Jul 9 19:14:21.289: INFO:
start-build output with args [frontend -o=name]:
Error><nil>
StdOut>
build/frontend-1
StdErr>
STEP: verifying build completed successfully
Jul 9 19:14:21.289: INFO: Waiting for frontend-1 to complete
Jul 9 19:14:52.358: INFO: Done waiting for frontend-1: util.BuildResult{BuildPath:"build/frontend-1", BuildName:"frontend-1", StartBuildStdErr:"", StartBuildStdOut:"build/frontend-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc421143b00), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc4213a0f00)}
with error: <nil>
STEP: verifying a service account token is able to query terminal build metrics from the Prometheus API
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:14:52.358: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:14:53.120: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:14:54.120: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:14:54.934: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:14:55.934: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:14:56.718: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:14:57.719: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:14:58.476: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:14:59.476: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:00.218: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:01.218: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:02.077: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:03.078: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:03.776: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:04.777: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:05.641: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:06.642: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:07.364: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:08.364: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:09.095: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:10.095: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:10.990: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:11.991: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:12.773: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:13.773: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:14.625: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:15.625: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:16.414: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:17.418: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:18.229: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:19.229: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:20.069: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:21.069: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:21.826: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:22.826: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:23.812: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:24.812: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:25.761: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:26.762: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:28.055: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:29.055: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:30.219: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:31.220: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:32.062: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:33.062: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:33.842: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:34.842: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:35.569: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:36.569: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:37.322: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:38.323: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:39.067: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:40.067: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:40.882: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:41.883: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:42.809: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:43.809: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:44.699: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:45.699: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:46.455: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:47.456: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:48.256: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:49.256: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:50.301: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:51.302: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:52.189: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:53.190: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:53.968: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:54.968: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:55.783: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:56.783: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:57.758: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:15:58.759: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:15:59.653: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:00.653: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:01.398: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:02.398: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:03.192: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:04.192: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:04.957: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:05.958: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:06.765: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:07.766: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:08.582: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:09.583: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:10.604: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:11.604: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:12.413: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:13.414: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:14.499: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:15.499: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:16.253: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:17.253: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:18.003: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:19.003: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:19.824: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:20.824: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:21.662: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:22.662: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:23.635: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:24.635: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:25.388: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:26.388: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:27.262: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:28.262: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:29.065: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:30.065: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:30.786: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:31.786: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:32.580: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:33.580: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:34.487: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:35.488: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:36.475: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:37.476: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:38.330: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:39.331: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:40.115: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:41.116: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:41.920: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:42.921: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:43.671: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:44.671: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:45.433: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:46.433: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:47.207: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:48.208: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:48.930: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:49.930: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:50.801: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:51.801: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:52.660: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:53.660: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:54.407: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:55.408: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:56.151: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:57.152: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:57.934: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:16:58.934: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:16:59.832: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:17:00.833: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:17:01.884: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
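The timestamps show a fresh query roughly every two seconds, so the harness behaves like a poll-until-true loop: it re-queries until a sample with phase="Complete" satisfies the greaterThanEqual comparison, or a timeout expires. A hedged sketch of that retry shape using the apimachinery wait helper; checkMetric is a hypothetical stand-in for one query/compare cycle (as in the previous sketch), and the interval and timeout are illustrative assumptions, not the suite's actual values.

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// checkMetric is a hypothetical stand-in for one query/compare cycle:
// it should return true once openshift_build_total has a sample with
// phase="Complete" whose value meets the expected threshold.
func checkMetric() (bool, error) {
	// ... issue the query and compare the returned vector ...
	return false, nil
}

func main() {
	// Poll once per second for up to four minutes; the one-second
	// interval matches the cadence visible in the log's timestamps,
	// while the timeout is an assumption for this sketch.
	if err := wait.Poll(1*time.Second, 4*time.Minute, checkMetric); err != nil {
		fmt.Println("metric never reached the expected value:", err)
	}
}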
Jul 9 19:17:02.884: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy