@codevulture
Created May 24, 2016 05:03
root@janonymous-virtual-machine:~/etcd-v2.3.5-linux-amd64# git clone https://github.com/kubernetes/kubernetes.git
Cloning into 'kubernetes'...
remote: Counting objects: 271049, done.
remote: Compressing objects: 100% (24/24), done.
remote: Total 271049 (delta 12), reused 4 (delta 4), pack-reused 271021
Receiving objects: 100% (271049/271049), 235.50 MiB | 695.00 KiB/s, done.
Resolving deltas: 100% (177741/177741), done.
Checking connectivity... done.
root@janonymous-virtual-machine:~/etcd-v2.3.5-linux-amd64# cd kubernetes
root@janonymous-virtual-machine:~/etcd-v2.3.5-linux-amd64/kubernetes# make release
build/release.sh
+++ [0523 16:46:50] Verifying Prerequisites....
+++ [0523 16:47:04] Building Docker image kube-build:build-573d825ce9.
INFO[4130] No non-localhost DNS nameservers are left in resolv.conf. Using default external servers : [nameserver 8.8.8.8 nameserver 8.8.4.4]
INFO[4130] IPv6 enabled; Adding default IPv6 external servers : [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]
INFO[4131] Layer sha256:bafb45ee3eb760dc8dfda5871db3680d4453a78cf7e8ac5deba24ac6fdbe811b cleaned up
INFO[4131] Layer sha256:bafb45ee3eb760dc8dfda5871db3680d4453a78cf7e8ac5deba24ac6fdbe811b cleaned up
+++ [0523 16:53:46] Running build command....
ERRO[4170] Handler for GET /v1.23/containers/kube-build-data-573d825ce9/json returned error: No such container: kube-build-data-573d825ce9
ERRO[4170] Handler for GET /v1.23/images/kube-build-data-573d825ce9/json returned error: No such image: kube-build-data-573d825ce9
Error: No such image or container: kube-build-data-573d825ce9
ERRO[4170] Handler for GET /v1.23/containers/kube-build-data-573d825ce9/json returned error: No such container: kube-build-data-573d825ce9
ERRO[4170] Handler for GET /v1.23/images/kube-build-data-573d825ce9/json returned error: No such image: kube-build-data-573d825ce9
+++ [0523 16:53:46] Creating data container
INFO[4171] No non-localhost DNS nameservers are left in resolv.conf. Using default external servers : [nameserver 8.8.8.8 nameserver 8.8.4.4]
INFO[4171] IPv6 enabled; Adding default IPv6 external servers : [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]
ERRO[4171] Handler for POST /v1.23/containers/kube-build-573d825ce9/kill returned error: Cannot kill container kube-build-573d825ce9: No such container: kube-build-573d825ce9
ERRO[4171] Handler for POST /v1.23/containers/kube-build-573d825ce9/wait returned error: No such container: kube-build-573d825ce9
ERRO[4171] Handler for DELETE /v1.23/containers/kube-build-573d825ce9 returned error: No such container: kube-build-573d825ce9
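The `ERRO ... No such container` lines above are benign: before creating the data container, the build scripts ask the Docker daemon whether a container or image with that name is left over from a previous run, and "No such container" simply means a clean slate (hence the `Creating data container` step that follows). A manual equivalent of that probe might look like the sketch below; the name is copied from this log, the `573d825ce9` hash will differ per checkout, and this is an illustration, not the script's actual code:

```shell
# Probe for a leftover build data container by name; on a fresh build
# the normal result is "missing". Name taken from the log above.
name="kube-build-data-573d825ce9"
if docker inspect "$name" >/dev/null 2>&1; then
  echo "exists"
else
  echo "missing"
fi
```

`docker inspect` exits non-zero when no container or image matches, which is exactly the condition the daemon reports as an ERRO line even though the build proceeds normally.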
INFO[4171] No non-localhost DNS nameservers are left in resolv.conf. Using default external servers : [nameserver 8.8.8.8 nameserver 8.8.4.4]
INFO[4171] IPv6 enabled; Adding default IPv6 external servers : [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]
Go version: go version go1.6.2 linux/amd64
+++ [0523 16:53:48] Multiple platforms requested and available 11G >= threshold 11G, building platforms in parallel
+++ [0523 16:53:48] Building the toolchain targets:
k8s.io/kubernetes/hack/cmd/teststale
+++ [0523 16:53:49] Building go targets for linux/amd64
linux/arm
linux/arm64 in parallel (output will appear in a burst when complete):
cmd/kube-dns
cmd/kube-proxy
cmd/kube-apiserver
cmd/kube-controller-manager
cmd/kubelet
cmd/kubemark
cmd/hyperkube
federation/cmd/federated-apiserver
federation/cmd/federation-controller-manager
plugin/cmd/kube-scheduler
2016-05-23 18:01:32.341626 I | etcdserver: start to snapshot (applied: 10001, lastsnap: 0)
2016-05-23 18:01:32.366933 I | etcdserver: saved snapshot at index 10001
2016-05-23 18:01:32.367378 I | etcdserver: compacted raft log at 5001
+++ [0523 16:53:49] linux/amd64: go build started
+++ [0523 18:18:36] linux/amd64: go build finished
+++ [0523 16:53:49] linux/arm: go build started
+++ [0523 18:19:36] linux/arm: go build finished
+++ [0523 16:53:49] linux/arm64: go build started
+++ [0523 18:19:33] linux/arm64: go build finished
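The `+++ [MMDD HH:MM:SS]` markers make it straightforward to compute wall-clock durations for each pass; for example, the linux/amd64 build above ran from 16:53:49 to 18:18:36. A small sketch of that arithmetic, assuming GNU `date` and same-day timestamps:

```shell
# Elapsed time between two same-day HH:MM:SS timestamps from the log,
# computed via GNU date's -d parsing against a fixed epoch day.
# Times taken from the linux/amd64 build lines above.
start="16:53:49"; end="18:18:36"
s=$(( $(date -u -d "1970-01-01 $end" +%s) - $(date -u -d "1970-01-01 $start" +%s) ))
printf '%dh %dm %ds\n' $((s/3600)) $((s/60%60)) $((s%60))
# → 1h 24m 47s
```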
Go version: go version go1.6.2 linux/amd64
+++ [0523 18:19:36] Multiple platforms requested and available 11G >= threshold 11G, building platforms in parallel
+++ [0523 18:19:36] Building the toolchain targets:
k8s.io/kubernetes/hack/cmd/teststale
+++ [0523 18:19:36] Building go targets for linux/amd64
linux/386
linux/arm
linux/arm64
darwin/amd64
darwin/386
windows/amd64
windows/386 in parallel (output will appear in a burst when complete):
cmd/kubectl
+++ [0523 18:19:36] linux/amd64: go build started
+++ [0523 18:28:18] linux/amd64: go build finished
+++ [0523 18:19:36] linux/386: go build started
+++ [0523 18:47:11] linux/386: go build finished
+++ [0523 18:19:36] linux/arm: go build started
+++ [0523 18:28:27] linux/arm: go build finished
+++ [0523 18:19:36] linux/arm64: go build started
+++ [0523 18:28:23] linux/arm64: go build finished
+++ [0523 18:19:36] darwin/amd64: go build started
+++ [0523 18:46:42] darwin/amd64: go build finished
+++ [0523 18:19:36] darwin/386: go build started
+++ [0523 18:47:08] darwin/386: go build finished
+++ [0523 18:19:36] windows/amd64: go build started
+++ [0523 18:46:49] windows/amd64: go build finished
+++ [0523 18:19:36] windows/386: go build started
+++ [0523 18:47:12] windows/386: go build finished
Go version: go version go1.6.2 linux/amd64
+++ [0523 18:47:13] Multiple platforms requested and available 11G >= threshold 11G, building platforms in parallel
+++ [0523 18:47:13] Building the toolchain targets:
k8s.io/kubernetes/hack/cmd/teststale
+++ [0523 18:47:13] Building go targets for linux/amd64
darwin/amd64
windows/amd64
linux/arm in parallel (output will appear in a burst when complete):
cmd/integration
cmd/gendocs
cmd/genkubedocs
cmd/genman
cmd/genyaml
cmd/mungedocs
cmd/genbashcomp
cmd/genswaggertypedocs
cmd/linkcheck
examples/k8petstore/web-server/src
vendor/github.com/onsi/ginkgo/ginkgo
test/e2e/e2e.test
test/e2e_node/e2e_node.test
2016-05-23 19:25:34.831917 I | etcdserver: start to snapshot (applied: 20002, lastsnap: 10001)
2016-05-23 19:25:34.863834 I | etcdserver: saved snapshot at index 20002
2016-05-23 19:25:34.864331 I | etcdserver: compacted raft log at 15002
+++ [0523 18:47:13] linux/amd64: go build started
+++ [0523 19:12:55] linux/amd64: go build finished
+++ [0523 18:47:13] darwin/amd64: go build started
+++ [0523 19:34:15] darwin/amd64: go build finished
+++ [0523 18:47:13] windows/amd64: go build started
+++ [0523 19:34:22] windows/amd64: go build finished
+++ [0523 18:47:13] linux/arm: go build started
+++ [0523 19:13:07] linux/arm: go build finished
+++ [0523 19:34:22] Placing binaries
ERRO[14093] Handler for POST /v1.23/containers/kube-build-573d825ce9/kill returned error: Cannot kill container kube-build-573d825ce9: Container 98e949d5da3794765292fe8d710cc1223b7605088375622502c870625768ce0b is not running
+++ [0523 19:39:10] Running build command....
ERRO[14094] Handler for POST /v1.23/containers/kube-build-573d825ce9/kill returned error: Cannot kill container kube-build-573d825ce9: No such container: kube-build-573d825ce9
ERRO[14094] Handler for POST /v1.23/containers/kube-build-573d825ce9/wait returned error: No such container: kube-build-573d825ce9
ERRO[14094] Handler for DELETE /v1.23/containers/kube-build-573d825ce9 returned error: No such container: kube-build-573d825ce9
INFO[14095] No non-localhost DNS nameservers are left in resolv.conf. Using default external servers : [nameserver 8.8.8.8 nameserver 8.8.4.4]
INFO[14095] IPv6 enabled; Adding default IPv6 external servers : [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]
Running tests for APIVersion: v1,extensions/v1beta1,metrics/v1alpha1,federation/v1alpha1 with etcdPrefix: registry
+++ [0523 19:39:12] Running tests without code coverage
ok k8s.io/kubernetes/cluster/addons/dns/kube2sky 0.024s
ok k8s.io/kubernetes/cmd/genutils 0.004s
ok k8s.io/kubernetes/cmd/hyperkube 0.085s
ok k8s.io/kubernetes/cmd/kube-apiserver/app 0.041s
ok k8s.io/kubernetes/cmd/kube-apiserver/app/options 0.022s
ok k8s.io/kubernetes/cmd/kube-proxy/app 0.049s
ok k8s.io/kubernetes/cmd/kubelet/app 0.041s
ok k8s.io/kubernetes/cmd/kubernetes-discovery/discoverysummarizer 5.078s
ok k8s.io/kubernetes/cmd/libs/go2idl/client-gen/testoutput/clientset_generated/test_internalclientset 0.017s
ok k8s.io/kubernetes/cmd/libs/go2idl/client-gen/testoutput/clientset_generated/test_internalclientset/typed/testgroup.k8s.io/unversioned 0.037s
ok k8s.io/kubernetes/cmd/libs/go2idl/generator 0.012s
ok k8s.io/kubernetes/cmd/libs/go2idl/import-boss/generators 0.011s
ok k8s.io/kubernetes/cmd/libs/go2idl/namer 0.004s
ok k8s.io/kubernetes/cmd/libs/go2idl/parser 0.048s
ok k8s.io/kubernetes/cmd/libs/go2idl/types 0.003s
ok k8s.io/kubernetes/cmd/mungedocs 0.019s
ok k8s.io/kubernetes/contrib/mesos/cmd/km 0.057s
ok k8s.io/kubernetes/contrib/mesos/pkg/election 1.767s
ok k8s.io/kubernetes/contrib/mesos/pkg/executor 3.288s
ok k8s.io/kubernetes/contrib/mesos/pkg/minion/tasks 0.009s
ok k8s.io/kubernetes/contrib/mesos/pkg/node 0.031s
ok k8s.io/kubernetes/contrib/mesos/pkg/offers 4.144s
ok k8s.io/kubernetes/contrib/mesos/pkg/podutil 0.021s
ok k8s.io/kubernetes/contrib/mesos/pkg/proc 0.064s
ok k8s.io/kubernetes/contrib/mesos/pkg/queue 9.122s
ok k8s.io/kubernetes/contrib/mesos/pkg/redirfd 0.003s
ok k8s.io/kubernetes/contrib/mesos/pkg/runtime 0.208s
ok k8s.io/kubernetes/contrib/mesos/pkg/scheduler/components/deleter 0.034s
ok k8s.io/kubernetes/contrib/mesos/pkg/scheduler/components/framework 0.033s
ok k8s.io/kubernetes/contrib/mesos/pkg/scheduler/config 0.006s
ok k8s.io/kubernetes/contrib/mesos/pkg/scheduler/constraint 0.004s
ok k8s.io/kubernetes/contrib/mesos/pkg/scheduler/executorinfo 0.074s
ok k8s.io/kubernetes/contrib/mesos/pkg/scheduler/integration 8.077s
ok k8s.io/kubernetes/contrib/mesos/pkg/scheduler/podtask 0.070s
ok k8s.io/kubernetes/contrib/mesos/pkg/scheduler/podtask/hostport 0.011s
ok k8s.io/kubernetes/contrib/mesos/pkg/scheduler/resources 0.021s
ok k8s.io/kubernetes/contrib/mesos/pkg/scheduler/service 0.040s
ok k8s.io/kubernetes/contrib/mesos/pkg/service 0.020s
ok k8s.io/kubernetes/examples 0.548s
ok k8s.io/kubernetes/examples/apiserver 0.087s
ok k8s.io/kubernetes/federation/apis/federation/install 0.018s
ok k8s.io/kubernetes/federation/apis/federation/validation 0.017s
ok k8s.io/kubernetes/federation/cmd/federated-apiserver/app 0.073s
ok k8s.io/kubernetes/federation/pkg/federation-controller/cluster 0.027s
ok k8s.io/kubernetes/federation/registry/cluster 0.041s
ok k8s.io/kubernetes/federation/registry/cluster/etcd 7.514s
ok k8s.io/kubernetes/hack/cmd/teststale 0.023s
ok k8s.io/kubernetes/pkg/admission 0.019s
ok k8s.io/kubernetes/pkg/api 3.215s
ok k8s.io/kubernetes/pkg/api/endpoints 0.025s
ok k8s.io/kubernetes/pkg/api/errors 0.011s
ok k8s.io/kubernetes/pkg/api/install 0.019s
ok k8s.io/kubernetes/pkg/api/meta 0.070s
ok k8s.io/kubernetes/pkg/api/pod 0.009s
ok k8s.io/kubernetes/pkg/api/resource 0.021s
ok k8s.io/kubernetes/pkg/api/service 0.005s
ok k8s.io/kubernetes/pkg/api/testapi 0.020s
ok k8s.io/kubernetes/pkg/api/unversioned 0.012s
ok k8s.io/kubernetes/pkg/api/unversioned/validation 0.005s
ok k8s.io/kubernetes/pkg/api/util 0.004s
ok k8s.io/kubernetes/pkg/api/v1 0.033s
ok k8s.io/kubernetes/pkg/api/validation 0.741s
ok k8s.io/kubernetes/pkg/apimachinery/registered 0.006s
ok k8s.io/kubernetes/pkg/apis/abac/v0 0.009s
ok k8s.io/kubernetes/pkg/apis/apps/validation 0.016s
ok k8s.io/kubernetes/pkg/apis/authorization/validation 0.010s
ok k8s.io/kubernetes/pkg/apis/autoscaling/validation 0.020s
ok k8s.io/kubernetes/pkg/apis/batch/validation 0.017s
ok k8s.io/kubernetes/pkg/apis/componentconfig 0.010s
ok k8s.io/kubernetes/pkg/apis/componentconfig/install 0.010s
ok k8s.io/kubernetes/pkg/apis/extensions/install 0.018s
ok k8s.io/kubernetes/pkg/apis/extensions/v1beta1 0.023s
ok k8s.io/kubernetes/pkg/apis/extensions/validation 0.048s
ok k8s.io/kubernetes/pkg/apis/policy/validation 0.017s
ok k8s.io/kubernetes/pkg/apis/rbac/validation 0.017s
ok k8s.io/kubernetes/pkg/apiserver 1.553s
ok k8s.io/kubernetes/pkg/auth/authenticator/bearertoken 0.005s
ok k8s.io/kubernetes/pkg/auth/authorizer/abac 0.089s
ok k8s.io/kubernetes/pkg/auth/authorizer/union 0.005s
ok k8s.io/kubernetes/pkg/auth/handlers 0.011s
ok k8s.io/kubernetes/pkg/client/cache 0.248s
ok k8s.io/kubernetes/pkg/client/chaosclient 0.007s
ok k8s.io/kubernetes/pkg/client/leaderelection 0.020s
ok k8s.io/kubernetes/pkg/client/record 0.230s
ok k8s.io/kubernetes/pkg/client/restclient 0.067s
ok k8s.io/kubernetes/pkg/client/transport 0.008s
ok k8s.io/kubernetes/pkg/client/typed/discovery 0.065s
ok k8s.io/kubernetes/pkg/client/typed/dynamic 0.078s
ok k8s.io/kubernetes/pkg/client/unversioned 0.240s
ok k8s.io/kubernetes/pkg/client/unversioned/auth 0.064s
ok k8s.io/kubernetes/pkg/client/unversioned/clientcmd 0.065s
ok k8s.io/kubernetes/pkg/client/unversioned/clientcmd/api 0.038s
ok k8s.io/kubernetes/pkg/client/unversioned/portforward 0.075s
ok k8s.io/kubernetes/pkg/client/unversioned/remotecommand 0.113s
ok k8s.io/kubernetes/pkg/client/unversioned/testclient 0.028s
ok k8s.io/kubernetes/pkg/cloudprovider/providers/aws 0.016s
ok k8s.io/kubernetes/pkg/cloudprovider/providers/gce 0.024s
ok k8s.io/kubernetes/pkg/cloudprovider/providers/mesos 0.931s
ok k8s.io/kubernetes/pkg/cloudprovider/providers/openstack 0.010s
ok k8s.io/kubernetes/pkg/cloudprovider/providers/ovirt 0.013s
ok k8s.io/kubernetes/pkg/cloudprovider/providers/rackspace 0.010s
ok k8s.io/kubernetes/pkg/cloudprovider/providers/vsphere 0.116s
ok k8s.io/kubernetes/pkg/controller 0.071s
ok k8s.io/kubernetes/pkg/controller/daemon 0.180s
ok k8s.io/kubernetes/pkg/controller/deployment 0.030s
ok k8s.io/kubernetes/pkg/controller/endpoint 0.046s
ok k8s.io/kubernetes/pkg/controller/framework 0.301s
ok k8s.io/kubernetes/pkg/controller/garbagecollector 0.024s
ok k8s.io/kubernetes/pkg/controller/gc 0.042s
ok k8s.io/kubernetes/pkg/controller/job 0.053s
ok k8s.io/kubernetes/pkg/controller/namespace 0.030s
ok k8s.io/kubernetes/pkg/controller/node 0.089s
ok k8s.io/kubernetes/pkg/controller/persistentvolume 2.513s
ok k8s.io/kubernetes/pkg/controller/petset 0.088s
ok k8s.io/kubernetes/pkg/controller/podautoscaler 4.052s
ok k8s.io/kubernetes/pkg/controller/podautoscaler/metrics 0.027s
ok k8s.io/kubernetes/pkg/controller/replicaset 9.782s
ok k8s.io/kubernetes/pkg/controller/replication 9.805s
ok k8s.io/kubernetes/pkg/controller/resourcequota 0.052s
ok k8s.io/kubernetes/pkg/controller/route 0.082s
ok k8s.io/kubernetes/pkg/controller/service 0.028s
ok k8s.io/kubernetes/pkg/controller/serviceaccount 0.037s
ok k8s.io/kubernetes/pkg/controller/volume/cache 0.004s
ok k8s.io/kubernetes/pkg/conversion 0.408s
ok k8s.io/kubernetes/pkg/conversion/queryparams 0.005s
ok k8s.io/kubernetes/pkg/credentialprovider 2.013s
ok k8s.io/kubernetes/pkg/credentialprovider/aws 0.018s
ok k8s.io/kubernetes/pkg/credentialprovider/gcp 0.029s
ok k8s.io/kubernetes/pkg/dns 0.068s
ok k8s.io/kubernetes/pkg/fieldpath 0.018s
ok k8s.io/kubernetes/pkg/fields 0.007s
ok k8s.io/kubernetes/pkg/genericapiserver 7.102s
ok k8s.io/kubernetes/pkg/healthz 0.010s
ok k8s.io/kubernetes/pkg/httplog 0.006s
ok k8s.io/kubernetes/pkg/kubectl 2.248s
ok k8s.io/kubernetes/pkg/kubectl/cmd 0.240s
ok k8s.io/kubernetes/pkg/kubectl/cmd/config 0.706s
ok k8s.io/kubernetes/pkg/kubectl/cmd/util 0.327s
ok k8s.io/kubernetes/pkg/kubectl/cmd/util/editor 0.025s
ok k8s.io/kubernetes/pkg/kubectl/resource 0.146s
ok k8s.io/kubernetes/pkg/kubelet 0.434s
ok k8s.io/kubernetes/pkg/kubelet/client 0.057s
ok k8s.io/kubernetes/pkg/kubelet/cm 0.025s
ok k8s.io/kubernetes/pkg/kubelet/config 0.114s
ok k8s.io/kubernetes/pkg/kubelet/container 0.028s
ok k8s.io/kubernetes/pkg/kubelet/custommetrics 0.011s
ok k8s.io/kubernetes/pkg/kubelet/dockertools 0.132s
ok k8s.io/kubernetes/pkg/kubelet/envvars 0.010s
ok k8s.io/kubernetes/pkg/kubelet/eviction 0.035s
ok k8s.io/kubernetes/pkg/kubelet/lifecycle 0.020s
ok k8s.io/kubernetes/pkg/kubelet/network 0.020s
ok k8s.io/kubernetes/pkg/kubelet/network/cni 0.198s
ok k8s.io/kubernetes/pkg/kubelet/network/exec 0.132s
ok k8s.io/kubernetes/pkg/kubelet/network/hairpin 0.005s
ok k8s.io/kubernetes/pkg/kubelet/network/kubenet 0.023s
ok k8s.io/kubernetes/pkg/kubelet/pleg 0.026s
ok k8s.io/kubernetes/pkg/kubelet/pod 0.019s
ok k8s.io/kubernetes/pkg/kubelet/prober 10.471s
ok k8s.io/kubernetes/pkg/kubelet/prober/results 0.035s
ok k8s.io/kubernetes/pkg/kubelet/qos 0.010s
ok k8s.io/kubernetes/pkg/kubelet/qos/util 0.010s
ok k8s.io/kubernetes/pkg/kubelet/rkt 1.055s
ok k8s.io/kubernetes/pkg/kubelet/server 0.571s
ok k8s.io/kubernetes/pkg/kubelet/server/stats 0.040s
ok k8s.io/kubernetes/pkg/kubelet/status 0.039s
ok k8s.io/kubernetes/pkg/kubelet/types 0.027s
ok k8s.io/kubernetes/pkg/kubelet/util/cache 0.016s
ok k8s.io/kubernetes/pkg/kubelet/util/format 0.078s
ok k8s.io/kubernetes/pkg/kubelet/util/queue 0.005s
ok k8s.io/kubernetes/pkg/labels 0.006s
ok k8s.io/kubernetes/pkg/master 19.330s
ok k8s.io/kubernetes/pkg/probe/exec 0.004s
ok k8s.io/kubernetes/pkg/probe/http 3.009s
ok k8s.io/kubernetes/pkg/probe/tcp 0.006s
ok k8s.io/kubernetes/pkg/proxy/config 0.027s
ok k8s.io/kubernetes/pkg/proxy/iptables 0.011s
ok k8s.io/kubernetes/pkg/proxy/userspace 5.682s
ok k8s.io/kubernetes/pkg/quota 0.039s
ok k8s.io/kubernetes/pkg/registry/componentstatus 0.021s
ok k8s.io/kubernetes/pkg/registry/configmap 0.061s
ok k8s.io/kubernetes/pkg/registry/configmap/etcd 7.404s
ok k8s.io/kubernetes/pkg/registry/controller 0.040s
ok k8s.io/kubernetes/pkg/registry/controller/etcd 13.491s
ok k8s.io/kubernetes/pkg/registry/daemonset 0.021s
ok k8s.io/kubernetes/pkg/registry/daemonset/etcd 8.324s
ok k8s.io/kubernetes/pkg/registry/deployment 0.021s
ok k8s.io/kubernetes/pkg/registry/deployment/etcd 12.089s
ok k8s.io/kubernetes/pkg/registry/endpoint 0.024s
ok k8s.io/kubernetes/pkg/registry/endpoint/etcd 7.045s
ok k8s.io/kubernetes/pkg/registry/event 0.021s
ok k8s.io/kubernetes/pkg/registry/event/etcd 1.911s
ok k8s.io/kubernetes/pkg/registry/experimental/controller/etcd 1.105s
ok k8s.io/kubernetes/pkg/registry/generic 0.037s
ok k8s.io/kubernetes/pkg/registry/generic/registry 11.498s
ok k8s.io/kubernetes/pkg/registry/generic/rest 0.098s
ok k8s.io/kubernetes/pkg/registry/horizontalpodautoscaler 0.021s
ok k8s.io/kubernetes/pkg/registry/horizontalpodautoscaler/etcd 6.874s
ok k8s.io/kubernetes/pkg/registry/ingress 0.025s
ok k8s.io/kubernetes/pkg/registry/ingress/etcd 8.628s
ok k8s.io/kubernetes/pkg/registry/job 0.022s
ok k8s.io/kubernetes/pkg/registry/job/etcd 7.303s
ok k8s.io/kubernetes/pkg/registry/limitrange 0.057s
ok k8s.io/kubernetes/pkg/registry/limitrange/etcd 7.335s
ok k8s.io/kubernetes/pkg/registry/namespace 0.023s
ok k8s.io/kubernetes/pkg/registry/namespace/etcd 7.259s
ok k8s.io/kubernetes/pkg/registry/node 0.021s
ok k8s.io/kubernetes/pkg/registry/node/etcd 7.308s
ok k8s.io/kubernetes/pkg/registry/persistentvolume 0.022s
ok k8s.io/kubernetes/pkg/registry/persistentvolume/etcd 6.931s
ok k8s.io/kubernetes/pkg/registry/persistentvolumeclaim 0.021s
ok k8s.io/kubernetes/pkg/registry/persistentvolumeclaim/etcd 6.914s
ok k8s.io/kubernetes/pkg/registry/petset 0.018s
ok k8s.io/kubernetes/pkg/registry/petset/etcd 7.203s
ok k8s.io/kubernetes/pkg/registry/pod 0.026s
ok k8s.io/kubernetes/pkg/registry/pod/etcd 19.304s
ok k8s.io/kubernetes/pkg/registry/pod/rest 0.384s
ok k8s.io/kubernetes/pkg/registry/poddisruptionbudget 0.018s
ok k8s.io/kubernetes/pkg/registry/poddisruptionbudget/etcd 7.057s
ok k8s.io/kubernetes/pkg/registry/podsecuritypolicy/etcd 7.470s
ok k8s.io/kubernetes/pkg/registry/podtemplate 0.022s
ok k8s.io/kubernetes/pkg/registry/podtemplate/etcd 7.230s
ok k8s.io/kubernetes/pkg/registry/replicaset 0.019s
ok k8s.io/kubernetes/pkg/registry/replicaset/etcd 14.193s
ok k8s.io/kubernetes/pkg/registry/resourcequota 0.023s
ok k8s.io/kubernetes/pkg/registry/resourcequota/etcd 6.589s
ok k8s.io/kubernetes/pkg/registry/scheduledjob 0.022s
ok k8s.io/kubernetes/pkg/registry/secret 0.021s
ok k8s.io/kubernetes/pkg/registry/secret/etcd 7.094s
ok k8s.io/kubernetes/pkg/registry/service 0.040s
ok k8s.io/kubernetes/pkg/registry/service/allocator 0.004s
ok k8s.io/kubernetes/pkg/registry/service/allocator/etcd 1.238s
ok k8s.io/kubernetes/pkg/registry/service/etcd 6.318s
ok k8s.io/kubernetes/pkg/registry/service/ipallocator 0.011s
ok k8s.io/kubernetes/pkg/registry/service/ipallocator/controller 0.035s
ok k8s.io/kubernetes/pkg/registry/service/ipallocator/etcd 1.972s
ok k8s.io/kubernetes/pkg/registry/service/portallocator 0.010s
ok k8s.io/kubernetes/pkg/registry/serviceaccount 0.021s
ok k8s.io/kubernetes/pkg/registry/serviceaccount/etcd 7.398s
ok k8s.io/kubernetes/pkg/registry/thirdpartyresource 0.021s
ok k8s.io/kubernetes/pkg/registry/thirdpartyresource/etcd 7.300s
ok k8s.io/kubernetes/pkg/registry/thirdpartyresourcedata 0.026s
ok k8s.io/kubernetes/pkg/registry/thirdpartyresourcedata/etcd 7.200s
ok k8s.io/kubernetes/pkg/runtime 0.033s
ok k8s.io/kubernetes/pkg/runtime/serializer 0.522s
ok k8s.io/kubernetes/pkg/runtime/serializer/json 0.010s
ok k8s.io/kubernetes/pkg/runtime/serializer/protobuf 0.017s
ok k8s.io/kubernetes/pkg/runtime/serializer/recognizer 0.025s
ok k8s.io/kubernetes/pkg/runtime/serializer/streaming 0.011s
ok k8s.io/kubernetes/pkg/runtime/serializer/versioning 0.006s
ok k8s.io/kubernetes/pkg/security/podsecuritypolicy 0.012s
ok k8s.io/kubernetes/pkg/security/podsecuritypolicy/capabilities 0.011s
ok k8s.io/kubernetes/pkg/security/podsecuritypolicy/group 0.011s
ok k8s.io/kubernetes/pkg/security/podsecuritypolicy/selinux 0.012s
ok k8s.io/kubernetes/pkg/security/podsecuritypolicy/user 0.012s
ok k8s.io/kubernetes/pkg/security/podsecuritypolicy/util 0.057s
ok k8s.io/kubernetes/pkg/securitycontext 0.022s
ok k8s.io/kubernetes/pkg/serviceaccount 0.043s
ok k8s.io/kubernetes/pkg/ssh 1.262s
ok k8s.io/kubernetes/pkg/storage 9.721s
ok k8s.io/kubernetes/pkg/storage/etcd 13.259s
ok k8s.io/kubernetes/pkg/storage/etcd/util 0.010s
ok k8s.io/kubernetes/pkg/storage/etcd3 18.010s
ok k8s.io/kubernetes/pkg/util 0.012s
ok k8s.io/kubernetes/pkg/util/bandwidth 0.006s
ok k8s.io/kubernetes/pkg/util/cache 0.003s
ok k8s.io/kubernetes/pkg/util/config 0.005s
ok k8s.io/kubernetes/pkg/util/configz 0.007s
ok k8s.io/kubernetes/pkg/util/dbus 0.013s
ok k8s.io/kubernetes/pkg/util/deployment 0.023s
ok k8s.io/kubernetes/pkg/util/env 0.005s
ok k8s.io/kubernetes/pkg/util/errors 0.004s
ok k8s.io/kubernetes/pkg/util/exec 0.041s
ok k8s.io/kubernetes/pkg/util/flowcontrol 4.112s
ok k8s.io/kubernetes/pkg/util/flushwriter 0.006s
ok k8s.io/kubernetes/pkg/util/framer 0.004s
ok k8s.io/kubernetes/pkg/util/goroutinemap 0.126s
ok k8s.io/kubernetes/pkg/util/hash 0.032s
ok k8s.io/kubernetes/pkg/util/httpstream 0.010s
ok k8s.io/kubernetes/pkg/util/httpstream/spdy 0.231s
ok k8s.io/kubernetes/pkg/util/integer 0.004s
ok k8s.io/kubernetes/pkg/util/intstr 0.004s
ok k8s.io/kubernetes/pkg/util/io 0.024s
ok k8s.io/kubernetes/pkg/util/iptables 0.017s
ok k8s.io/kubernetes/pkg/util/json 0.004s
ok k8s.io/kubernetes/pkg/util/jsonpath 0.008s
ok k8s.io/kubernetes/pkg/util/keymutex 1.006s
ok k8s.io/kubernetes/pkg/util/labels 0.005s
ok k8s.io/kubernetes/pkg/util/mount 0.036s
ok k8s.io/kubernetes/pkg/util/net 0.006s
ok k8s.io/kubernetes/pkg/util/net/sets 0.004s
ok k8s.io/kubernetes/pkg/util/oom 0.016s
ok k8s.io/kubernetes/pkg/util/parsers 0.007s
ok k8s.io/kubernetes/pkg/util/procfs 0.020s
ok k8s.io/kubernetes/pkg/util/proxy 0.015s
ok k8s.io/kubernetes/pkg/util/rand 0.006s
ok k8s.io/kubernetes/pkg/util/runtime 0.005s
ok k8s.io/kubernetes/pkg/util/sets 0.003s
ok k8s.io/kubernetes/pkg/util/slice 0.003s
ok k8s.io/kubernetes/pkg/util/strategicpatch 0.103s
ok k8s.io/kubernetes/pkg/util/strings 0.003s
ok k8s.io/kubernetes/pkg/util/testing 0.009s
ok k8s.io/kubernetes/pkg/util/threading 0.005s
ok k8s.io/kubernetes/pkg/util/validation 0.005s
ok k8s.io/kubernetes/pkg/util/validation/field 0.004s
ok k8s.io/kubernetes/pkg/util/wait 1.023s
ok k8s.io/kubernetes/pkg/util/workqueue 0.123s
ok k8s.io/kubernetes/pkg/util/wsstream 0.031s
ok k8s.io/kubernetes/pkg/util/yaml 0.006s
ok k8s.io/kubernetes/pkg/version 0.023s
ok k8s.io/kubernetes/pkg/volume 0.084s
ok k8s.io/kubernetes/pkg/volume/aws_ebs 0.026s
ok k8s.io/kubernetes/pkg/volume/azure_file 0.022s
ok k8s.io/kubernetes/pkg/volume/cephfs 0.046s
ok k8s.io/kubernetes/pkg/volume/cinder 2.029s
ok k8s.io/kubernetes/pkg/volume/configmap 0.029s
ok k8s.io/kubernetes/pkg/volume/downwardapi 0.052s
ok k8s.io/kubernetes/pkg/volume/empty_dir 0.048s
ok k8s.io/kubernetes/pkg/volume/fc 0.087s
ok k8s.io/kubernetes/pkg/volume/flexvolume 0.125s
ok k8s.io/kubernetes/pkg/volume/flocker 0.023s
ok k8s.io/kubernetes/pkg/volume/gce_pd 0.041s
ok k8s.io/kubernetes/pkg/volume/git_repo 0.028s
ok k8s.io/kubernetes/pkg/volume/glusterfs 0.026s
ok k8s.io/kubernetes/pkg/volume/host_path 0.047s
ok k8s.io/kubernetes/pkg/volume/iscsi 2.026s
ok k8s.io/kubernetes/pkg/volume/nfs 0.052s
ok k8s.io/kubernetes/pkg/volume/rbd 0.024s
ok k8s.io/kubernetes/pkg/volume/secret 0.035s
ok k8s.io/kubernetes/pkg/volume/util 0.100s
ok k8s.io/kubernetes/pkg/volume/vsphere_volume 0.034s
ok k8s.io/kubernetes/pkg/watch 0.011s
ok k8s.io/kubernetes/pkg/watch/versioned 0.023s
ok k8s.io/kubernetes/plugin/pkg/admission/admit 0.022s
ok k8s.io/kubernetes/plugin/pkg/admission/alwayspullimages 0.020s
ok k8s.io/kubernetes/plugin/pkg/admission/antiaffinity 0.021s
ok k8s.io/kubernetes/plugin/pkg/admission/deny 0.020s
ok k8s.io/kubernetes/plugin/pkg/admission/exec 0.020s
ok k8s.io/kubernetes/plugin/pkg/admission/initialresources 0.057s
ok k8s.io/kubernetes/plugin/pkg/admission/limitranger 0.022s
ok k8s.io/kubernetes/plugin/pkg/admission/namespace/autoprovision 0.022s
ok k8s.io/kubernetes/plugin/pkg/admission/namespace/exists 0.020s
ok k8s.io/kubernetes/plugin/pkg/admission/namespace/lifecycle 0.022s
ok k8s.io/kubernetes/plugin/pkg/admission/persistentvolume/label 0.020s
ok k8s.io/kubernetes/plugin/pkg/admission/resourcequota 0.028s
ok k8s.io/kubernetes/plugin/pkg/admission/security/podsecuritypolicy 0.046s
ok k8s.io/kubernetes/plugin/pkg/admission/securitycontext/scdeny 0.020s
ok k8s.io/kubernetes/plugin/pkg/admission/serviceaccount 1.414s
ok k8s.io/kubernetes/plugin/pkg/auth/authenticator/password/allow 0.011s
ok k8s.io/kubernetes/plugin/pkg/auth/authenticator/password/passwordfile 0.080s
ok k8s.io/kubernetes/plugin/pkg/auth/authenticator/request/basicauth 0.005s
ok k8s.io/kubernetes/plugin/pkg/auth/authenticator/request/union 0.005s
ok k8s.io/kubernetes/plugin/pkg/auth/authenticator/request/x509 0.027s
ok k8s.io/kubernetes/plugin/pkg/auth/authenticator/token/oidc 3.883s
ok k8s.io/kubernetes/plugin/pkg/auth/authenticator/token/tokenfile 0.025s
ok k8s.io/kubernetes/plugin/pkg/auth/authenticator/token/webhook 0.286s
ok k8s.io/kubernetes/plugin/pkg/auth/authorizer/webhook 0.298s
ok k8s.io/kubernetes/plugin/pkg/client/auth/oidc 2.690s
ok k8s.io/kubernetes/plugin/pkg/scheduler 2.028s
ok k8s.io/kubernetes/plugin/pkg/scheduler/algorithm 0.019s
ok k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates 1.505s
ok k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/priorities 0.078s
2016-05-23 20:50:36.305177 I | etcdserver: start to snapshot (applied: 30003, lastsnap: 20002)
2016-05-23 20:50:36.465492 I | etcdserver: saved snapshot at index 30003
2016-05-23 20:50:36.465898 I | etcdserver: compacted raft log at 25003
ok k8s.io/kubernetes/plugin/pkg/scheduler/algorithmprovider 0.059s
ok k8s.io/kubernetes/plugin/pkg/scheduler/algorithmprovider/defaults 0.033s
ok k8s.io/kubernetes/plugin/pkg/scheduler/api/validation 0.019s
ok k8s.io/kubernetes/plugin/pkg/scheduler/factory 0.051s
ok k8s.io/kubernetes/plugin/pkg/scheduler/schedulercache 0.017s
Running tests for APIVersion: v1,autoscaling/v1,batch/v1,batch/v2alpha1,extensions/v1beta1,apps/v1alpha1,metrics/v1alpha1,federation/v1alpha1,policy/v1alpha1 with etcdPrefix: kubernetes.io/registry
+++ [0523 20:51:39] Running tests without code coverage
ok k8s.io/kubernetes/cluster/addons/dns/kube2sky 0.037s
ok k8s.io/kubernetes/cmd/genutils 0.004s
ok k8s.io/kubernetes/cmd/hyperkube 0.056s
ok k8s.io/kubernetes/cmd/kube-apiserver/app 0.041s
ok k8s.io/kubernetes/cmd/kube-apiserver/app/options 0.022s
ok k8s.io/kubernetes/cmd/kube-proxy/app 0.022s
ok k8s.io/kubernetes/cmd/kubelet/app 0.040s
ok k8s.io/kubernetes/cmd/kubernetes-discovery/discoverysummarizer 0.049s
ok k8s.io/kubernetes/cmd/libs/go2idl/client-gen/testoutput/clientset_generated/test_internalclientset 0.036s
ok k8s.io/kubernetes/cmd/libs/go2idl/client-gen/testoutput/clientset_generated/test_internalclientset/typed/testgroup.k8s.io/unversioned 0.038s
ok k8s.io/kubernetes/cmd/libs/go2idl/generator 0.006s
ok k8s.io/kubernetes/cmd/libs/go2idl/import-boss/generators 0.006s
ok k8s.io/kubernetes/cmd/libs/go2idl/namer 0.004s
ok k8s.io/kubernetes/cmd/libs/go2idl/parser 0.012s
ok k8s.io/kubernetes/cmd/libs/go2idl/types 0.006s
ok k8s.io/kubernetes/cmd/mungedocs 0.011s
ok k8s.io/kubernetes/contrib/mesos/cmd/km 0.116s
ok k8s.io/kubernetes/contrib/mesos/pkg/election 1.807s
ok k8s.io/kubernetes/contrib/mesos/pkg/executor 3.282s
ok k8s.io/kubernetes/contrib/mesos/pkg/minion/tasks 0.009s
ok k8s.io/kubernetes/contrib/mesos/pkg/node 0.043s
ok k8s.io/kubernetes/contrib/mesos/pkg/offers 4.144s
ok k8s.io/kubernetes/contrib/mesos/pkg/podutil 0.024s
ok k8s.io/kubernetes/contrib/mesos/pkg/proc 0.064s
ok k8s.io/kubernetes/contrib/mesos/pkg/queue 9.126s
ok k8s.io/kubernetes/contrib/mesos/pkg/redirfd 0.003s
ok k8s.io/kubernetes/contrib/mesos/pkg/runtime 0.209s
ok k8s.io/kubernetes/contrib/mesos/pkg/scheduler/components/deleter 0.031s
ok k8s.io/kubernetes/contrib/mesos/pkg/scheduler/components/framework 0.036s
ok k8s.io/kubernetes/contrib/mesos/pkg/scheduler/config 0.006s
ok k8s.io/kubernetes/contrib/mesos/pkg/scheduler/constraint 0.008s
ok k8s.io/kubernetes/contrib/mesos/pkg/scheduler/executorinfo 0.065s
ok k8s.io/kubernetes/contrib/mesos/pkg/scheduler/integration 8.081s
ok k8s.io/kubernetes/contrib/mesos/pkg/scheduler/podtask 0.043s
ok k8s.io/kubernetes/contrib/mesos/pkg/scheduler/podtask/hostport 0.010s
ok k8s.io/kubernetes/contrib/mesos/pkg/scheduler/resources 0.011s
ok k8s.io/kubernetes/contrib/mesos/pkg/scheduler/service 0.035s
ok k8s.io/kubernetes/contrib/mesos/pkg/service 0.020s
ok k8s.io/kubernetes/examples 0.112s
ok k8s.io/kubernetes/examples/apiserver 0.107s
ok k8s.io/kubernetes/federation/apis/federation/install 0.017s
ok k8s.io/kubernetes/federation/apis/federation/validation 0.016s
ok k8s.io/kubernetes/federation/cmd/federated-apiserver/app 0.072s
ok k8s.io/kubernetes/federation/pkg/federation-controller/cluster 0.025s
ok k8s.io/kubernetes/federation/registry/cluster 0.020s
ok k8s.io/kubernetes/federation/registry/cluster/etcd 7.509s
ok k8s.io/kubernetes/hack/cmd/teststale 0.043s
ok k8s.io/kubernetes/pkg/admission 0.020s
ok k8s.io/kubernetes/pkg/api 3.436s
ok k8s.io/kubernetes/pkg/api/endpoints 0.011s
ok k8s.io/kubernetes/pkg/api/errors 0.011s
ok k8s.io/kubernetes/pkg/api/install 0.018s
ok k8s.io/kubernetes/pkg/api/meta 0.047s
ok k8s.io/kubernetes/pkg/api/pod 0.010s
ok k8s.io/kubernetes/pkg/api/resource 0.021s
ok k8s.io/kubernetes/pkg/api/service 0.004s
ok k8s.io/kubernetes/pkg/api/testapi 0.026s
ok k8s.io/kubernetes/pkg/api/unversioned 0.012s
ok k8s.io/kubernetes/pkg/api/unversioned/validation 0.006s
ok k8s.io/kubernetes/pkg/api/util 0.008s
ok k8s.io/kubernetes/pkg/api/v1 0.034s
ok k8s.io/kubernetes/pkg/api/validation 0.726s
ok k8s.io/kubernetes/pkg/apimachinery/registered 0.006s
ok k8s.io/kubernetes/pkg/apis/abac/v0 0.009s
ok k8s.io/kubernetes/pkg/apis/apps/validation 0.016s
ok k8s.io/kubernetes/pkg/apis/authorization/validation 0.011s
ok k8s.io/kubernetes/pkg/apis/autoscaling/validation 0.022s
ok k8s.io/kubernetes/pkg/apis/batch/validation 0.024s
ok k8s.io/kubernetes/pkg/apis/componentconfig 0.011s
ok k8s.io/kubernetes/pkg/apis/componentconfig/install 0.010s
ok k8s.io/kubernetes/pkg/apis/extensions/install 0.019s
ok k8s.io/kubernetes/pkg/apis/extensions/v1beta1 0.023s
ok k8s.io/kubernetes/pkg/apis/extensions/validation 0.020s
ok k8s.io/kubernetes/pkg/apis/policy/validation 0.016s
ok k8s.io/kubernetes/pkg/apis/rbac/validation 0.016s
ok k8s.io/kubernetes/pkg/apiserver 1.401s
ok k8s.io/kubernetes/pkg/auth/authenticator/bearertoken 0.015s
ok k8s.io/kubernetes/pkg/auth/authorizer/abac 0.051s
ok k8s.io/kubernetes/pkg/auth/authorizer/union 0.005s
ok k8s.io/kubernetes/pkg/auth/handlers 0.010s
ok k8s.io/kubernetes/pkg/client/cache 0.253s
ok k8s.io/kubernetes/pkg/client/chaosclient 0.014s
ok k8s.io/kubernetes/pkg/client/leaderelection 0.046s
ok k8s.io/kubernetes/pkg/client/record 0.246s
ok k8s.io/kubernetes/pkg/client/restclient 0.046s
ok k8s.io/kubernetes/pkg/client/transport 0.008s
ok k8s.io/kubernetes/pkg/client/typed/discovery 0.030s
ok k8s.io/kubernetes/pkg/client/typed/dynamic 0.034s
ok k8s.io/kubernetes/pkg/client/unversioned 0.334s
ok k8s.io/kubernetes/pkg/client/unversioned/auth 0.116s
ok k8s.io/kubernetes/pkg/client/unversioned/clientcmd 0.079s
ok k8s.io/kubernetes/pkg/client/unversioned/clientcmd/api 0.051s
ok k8s.io/kubernetes/pkg/client/unversioned/portforward 0.051s
ok k8s.io/kubernetes/pkg/client/unversioned/remotecommand 0.207s
ok k8s.io/kubernetes/pkg/client/unversioned/testclient 0.043s
ok k8s.io/kubernetes/pkg/cloudprovider/providers/aws 0.019s
ok k8s.io/kubernetes/pkg/cloudprovider/providers/gce 0.011s
ok k8s.io/kubernetes/pkg/cloudprovider/providers/mesos 0.504s
ok k8s.io/kubernetes/pkg/cloudprovider/providers/openstack 0.010s
ok k8s.io/kubernetes/pkg/cloudprovider/providers/ovirt 0.014s
ok k8s.io/kubernetes/pkg/cloudprovider/providers/rackspace 0.010s
ok k8s.io/kubernetes/pkg/cloudprovider/providers/vsphere 0.066s
ok k8s.io/kubernetes/pkg/controller 0.029s
ok k8s.io/kubernetes/pkg/controller/daemon 0.209s
ok k8s.io/kubernetes/pkg/controller/deployment 0.030s
ok k8s.io/kubernetes/pkg/controller/endpoint 0.042s
ok k8s.io/kubernetes/pkg/controller/framework 0.281s
ok k8s.io/kubernetes/pkg/controller/garbagecollector 0.041s
ok k8s.io/kubernetes/pkg/controller/gc 0.020s
ok k8s.io/kubernetes/pkg/controller/job 0.146s
ok k8s.io/kubernetes/pkg/controller/namespace 0.030s
ok k8s.io/kubernetes/pkg/controller/node 0.086s
ok k8s.io/kubernetes/pkg/controller/persistentvolume 2.396s
ok k8s.io/kubernetes/pkg/controller/petset 0.094s
ok k8s.io/kubernetes/pkg/controller/podautoscaler 4.046s
ok k8s.io/kubernetes/pkg/controller/podautoscaler/metrics 0.027s
ok k8s.io/kubernetes/pkg/controller/replicaset 9.759s
ok k8s.io/kubernetes/pkg/controller/replication 9.758s
ok k8s.io/kubernetes/pkg/controller/resourcequota 0.027s
ok k8s.io/kubernetes/pkg/controller/route 0.081s
ok k8s.io/kubernetes/pkg/controller/service 0.026s
ok k8s.io/kubernetes/pkg/controller/serviceaccount 0.069s
ok k8s.io/kubernetes/pkg/controller/volume/cache 0.004s
ok k8s.io/kubernetes/pkg/conversion 0.366s
ok k8s.io/kubernetes/pkg/conversion/queryparams 0.005s
ok k8s.io/kubernetes/pkg/credentialprovider 2.012s
ok k8s.io/kubernetes/pkg/credentialprovider/aws 0.010s
ok k8s.io/kubernetes/pkg/credentialprovider/gcp 0.035s
ok k8s.io/kubernetes/pkg/dns 0.067s
ok k8s.io/kubernetes/pkg/fieldpath 0.010s
ok k8s.io/kubernetes/pkg/fields 0.003s
ok k8s.io/kubernetes/pkg/genericapiserver 7.003s
ok k8s.io/kubernetes/pkg/healthz 0.006s
ok k8s.io/kubernetes/pkg/httplog 0.006s
ok k8s.io/kubernetes/pkg/kubectl 2.263s
ok k8s.io/kubernetes/pkg/kubectl/cmd 0.327s
ok k8s.io/kubernetes/pkg/kubectl/cmd/config 0.736s
ok k8s.io/kubernetes/pkg/kubectl/cmd/util 0.206s
ok k8s.io/kubernetes/pkg/kubectl/cmd/util/editor 0.006s
ok k8s.io/kubernetes/pkg/kubectl/resource 0.102s
ok k8s.io/kubernetes/pkg/kubelet 0.476s
ok k8s.io/kubernetes/pkg/kubelet/client 0.018s
ok k8s.io/kubernetes/pkg/kubelet/cm 0.023s
ok k8s.io/kubernetes/pkg/kubelet/config 0.092s
ok k8s.io/kubernetes/pkg/kubelet/container 0.028s
ok k8s.io/kubernetes/pkg/kubelet/custommetrics 0.009s
ok k8s.io/kubernetes/pkg/kubelet/dockertools 0.080s
ok k8s.io/kubernetes/pkg/kubelet/envvars 0.009s
ok k8s.io/kubernetes/pkg/kubelet/eviction 0.055s
ok k8s.io/kubernetes/pkg/kubelet/lifecycle 0.020s
ok k8s.io/kubernetes/pkg/kubelet/network 0.021s
ok k8s.io/kubernetes/pkg/kubelet/network/cni 0.121s
ok k8s.io/kubernetes/pkg/kubelet/network/exec 0.120s
ok k8s.io/kubernetes/pkg/kubelet/network/hairpin 0.005s
ok k8s.io/kubernetes/pkg/kubelet/network/kubenet 0.021s
ok k8s.io/kubernetes/pkg/kubelet/pleg 0.025s
ok k8s.io/kubernetes/pkg/kubelet/pod 0.063s
ok k8s.io/kubernetes/pkg/kubelet/prober 10.468s
ok k8s.io/kubernetes/pkg/kubelet/prober/results 0.040s
ok k8s.io/kubernetes/pkg/kubelet/qos 0.010s
ok k8s.io/kubernetes/pkg/kubelet/qos/util 0.010s
ok k8s.io/kubernetes/pkg/kubelet/rkt 1.041s
ok k8s.io/kubernetes/pkg/kubelet/server 0.645s
ok k8s.io/kubernetes/pkg/kubelet/server/stats 0.075s
ok k8s.io/kubernetes/pkg/kubelet/status 0.035s
ok k8s.io/kubernetes/pkg/kubelet/types 0.062s
ok k8s.io/kubernetes/pkg/kubelet/util/cache 0.019s
ok k8s.io/kubernetes/pkg/kubelet/util/format 0.010s
ok k8s.io/kubernetes/pkg/kubelet/util/queue 0.006s
ok k8s.io/kubernetes/pkg/labels 0.006s
ok k8s.io/kubernetes/pkg/master 18.471s
ok k8s.io/kubernetes/pkg/probe/exec 0.005s
ok k8s.io/kubernetes/pkg/probe/http 3.010s
ok k8s.io/kubernetes/pkg/probe/tcp 0.006s
ok k8s.io/kubernetes/pkg/proxy/config 0.019s
ok k8s.io/kubernetes/pkg/proxy/iptables 0.011s
ok k8s.io/kubernetes/pkg/proxy/userspace 5.687s
ok k8s.io/kubernetes/pkg/quota 0.022s
ok k8s.io/kubernetes/pkg/registry/componentstatus 0.104s
ok k8s.io/kubernetes/pkg/registry/configmap 0.020s
ok k8s.io/kubernetes/pkg/registry/configmap/etcd 7.381s
ok k8s.io/kubernetes/pkg/registry/controller 0.020s
ok k8s.io/kubernetes/pkg/registry/controller/etcd 13.829s
ok k8s.io/kubernetes/pkg/registry/daemonset 0.020s
ok k8s.io/kubernetes/pkg/registry/daemonset/etcd 8.550s
ok k8s.io/kubernetes/pkg/registry/deployment 0.054s
ok k8s.io/kubernetes/pkg/registry/deployment/etcd 12.167s
ok k8s.io/kubernetes/pkg/registry/endpoint 0.022s
ok k8s.io/kubernetes/pkg/registry/endpoint/etcd 6.914s
ok k8s.io/kubernetes/pkg/registry/event 0.020s
ok k8s.io/kubernetes/pkg/registry/event/etcd 1.923s
ok k8s.io/kubernetes/pkg/registry/experimental/controller/etcd 1.153s
ok k8s.io/kubernetes/pkg/registry/generic 0.044s
ok k8s.io/kubernetes/pkg/registry/generic/registry 12.351s
ok k8s.io/kubernetes/pkg/registry/generic/rest 0.097s
ok k8s.io/kubernetes/pkg/registry/horizontalpodautoscaler 0.025s
ok k8s.io/kubernetes/pkg/registry/horizontalpodautoscaler/etcd 7.120s
ok k8s.io/kubernetes/pkg/registry/ingress 0.022s
ok k8s.io/kubernetes/pkg/registry/ingress/etcd 8.238s
ok k8s.io/kubernetes/pkg/registry/job 0.021s
ok k8s.io/kubernetes/pkg/registry/job/etcd 7.014s
ok k8s.io/kubernetes/pkg/registry/limitrange 0.041s
ok k8s.io/kubernetes/pkg/registry/limitrange/etcd 7.578s
ok k8s.io/kubernetes/pkg/registry/namespace 0.021s
ok k8s.io/kubernetes/pkg/registry/namespace/etcd 7.514s
ok k8s.io/kubernetes/pkg/registry/node 0.021s
ok k8s.io/kubernetes/pkg/registry/node/etcd 7.988s
ok k8s.io/kubernetes/pkg/registry/persistentvolume 0.020s
ok k8s.io/kubernetes/pkg/registry/persistentvolume/etcd 7.016s
ok k8s.io/kubernetes/pkg/registry/persistentvolumeclaim 0.020s
ok k8s.io/kubernetes/pkg/registry/persistentvolumeclaim/etcd 7.071s
ok k8s.io/kubernetes/pkg/registry/petset 0.028s
ok k8s.io/kubernetes/pkg/registry/petset/etcd 7.424s
ok k8s.io/kubernetes/pkg/registry/pod 0.030s
ok k8s.io/kubernetes/pkg/registry/pod/etcd 18.776s
ok k8s.io/kubernetes/pkg/registry/pod/rest 0.359s
ok k8s.io/kubernetes/pkg/registry/poddisruptionbudget 0.022s
ok k8s.io/kubernetes/pkg/registry/poddisruptionbudget/etcd 6.964s
ok k8s.io/kubernetes/pkg/registry/podsecuritypolicy/etcd 7.061s
ok k8s.io/kubernetes/pkg/registry/podtemplate 0.020s
ok k8s.io/kubernetes/pkg/registry/podtemplate/etcd 7.063s
ok k8s.io/kubernetes/pkg/registry/replicaset 0.018s
ok k8s.io/kubernetes/pkg/registry/replicaset/etcd 14.544s
ok k8s.io/kubernetes/pkg/registry/resourcequota 0.045s
ok k8s.io/kubernetes/pkg/registry/resourcequota/etcd 6.794s
ok k8s.io/kubernetes/pkg/registry/scheduledjob 0.022s
ok k8s.io/kubernetes/pkg/registry/secret 0.022s
ok k8s.io/kubernetes/pkg/registry/secret/etcd 7.114s
ok k8s.io/kubernetes/pkg/registry/service 0.073s
ok k8s.io/kubernetes/pkg/registry/service/allocator 0.004s
ok k8s.io/kubernetes/pkg/registry/service/allocator/etcd 1.130s
ok k8s.io/kubernetes/pkg/registry/service/etcd 6.155s
ok k8s.io/kubernetes/pkg/registry/service/ipallocator 0.011s
ok k8s.io/kubernetes/pkg/registry/service/ipallocator/controller 0.033s
ok k8s.io/kubernetes/pkg/registry/service/ipallocator/etcd 1.596s
ok k8s.io/kubernetes/pkg/registry/service/portallocator 0.013s
ok k8s.io/kubernetes/pkg/registry/serviceaccount 0.022s
ok k8s.io/kubernetes/pkg/registry/serviceaccount/etcd 7.432s
ok k8s.io/kubernetes/pkg/registry/thirdpartyresource 0.037s
ok k8s.io/kubernetes/pkg/registry/thirdpartyresource/etcd 6.667s
ok k8s.io/kubernetes/pkg/registry/thirdpartyresourcedata 0.061s
ok k8s.io/kubernetes/pkg/registry/thirdpartyresourcedata/etcd 7.144s
ok k8s.io/kubernetes/pkg/runtime 0.029s
ok k8s.io/kubernetes/pkg/runtime/serializer 0.479s
ok k8s.io/kubernetes/pkg/runtime/serializer/json 0.011s
ok k8s.io/kubernetes/pkg/runtime/serializer/protobuf 0.020s
ok k8s.io/kubernetes/pkg/runtime/serializer/recognizer 0.009s
ok k8s.io/kubernetes/pkg/runtime/serializer/streaming 0.010s
ok k8s.io/kubernetes/pkg/runtime/serializer/versioning 0.006s
ok k8s.io/kubernetes/pkg/security/podsecuritypolicy 0.024s
ok k8s.io/kubernetes/pkg/security/podsecuritypolicy/capabilities 0.016s
ok k8s.io/kubernetes/pkg/security/podsecuritypolicy/group 0.010s
ok k8s.io/kubernetes/pkg/security/podsecuritypolicy/selinux 0.010s
ok k8s.io/kubernetes/pkg/security/podsecuritypolicy/user 0.024s
ok k8s.io/kubernetes/pkg/security/podsecuritypolicy/util 0.009s
ok k8s.io/kubernetes/pkg/securitycontext 0.042s
ok k8s.io/kubernetes/pkg/serviceaccount 0.075s
ok k8s.io/kubernetes/pkg/ssh 2.365s
ok k8s.io/kubernetes/pkg/storage 9.665s
ok k8s.io/kubernetes/pkg/storage/etcd 13.680s
ok k8s.io/kubernetes/pkg/storage/etcd/util 0.009s
ok k8s.io/kubernetes/pkg/storage/etcd3 18.186s
ok k8s.io/kubernetes/pkg/util 0.018s
ok k8s.io/kubernetes/pkg/util/bandwidth 0.009s
ok k8s.io/kubernetes/pkg/util/cache 0.004s
ok k8s.io/kubernetes/pkg/util/config 0.005s
ok k8s.io/kubernetes/pkg/util/configz 0.007s
ok k8s.io/kubernetes/pkg/util/dbus 0.005s
ok k8s.io/kubernetes/pkg/util/deployment 0.022s
ok k8s.io/kubernetes/pkg/util/env 0.005s
ok k8s.io/kubernetes/pkg/util/errors 0.004s
ok k8s.io/kubernetes/pkg/util/exec 0.008s
ok k8s.io/kubernetes/pkg/util/flowcontrol 4.106s
ok k8s.io/kubernetes/pkg/util/flushwriter 0.005s
ok k8s.io/kubernetes/pkg/util/framer 0.004s
ok k8s.io/kubernetes/pkg/util/goroutinemap 0.125s
ok k8s.io/kubernetes/pkg/util/hash 0.031s
ok k8s.io/kubernetes/pkg/util/httpstream 0.010s
ok k8s.io/kubernetes/pkg/util/httpstream/spdy 0.194s
ok k8s.io/kubernetes/pkg/util/integer 0.003s
ok k8s.io/kubernetes/pkg/util/intstr 0.004s
ok k8s.io/kubernetes/pkg/util/io 0.038s
ok k8s.io/kubernetes/pkg/util/iptables 0.017s
ok k8s.io/kubernetes/pkg/util/json 0.005s
ok k8s.io/kubernetes/pkg/util/jsonpath 0.008s
ok k8s.io/kubernetes/pkg/util/keymutex 1.005s
ok k8s.io/kubernetes/pkg/util/labels 0.006s
ok k8s.io/kubernetes/pkg/util/mount 0.005s
ok k8s.io/kubernetes/pkg/util/net 0.006s
ok k8s.io/kubernetes/pkg/util/net/sets 0.005s
ok k8s.io/kubernetes/pkg/util/oom 0.006s
ok k8s.io/kubernetes/pkg/util/parsers 0.007s
ok k8s.io/kubernetes/pkg/util/procfs 0.003s
ok k8s.io/kubernetes/pkg/util/proxy 0.015s
ok k8s.io/kubernetes/pkg/util/rand 0.004s
ok k8s.io/kubernetes/pkg/util/runtime 0.006s
ok k8s.io/kubernetes/pkg/util/sets 0.004s
ok k8s.io/kubernetes/pkg/util/slice 0.004s
ok k8s.io/kubernetes/pkg/util/strategicpatch 0.120s
ok k8s.io/kubernetes/pkg/util/strings 0.007s
ok k8s.io/kubernetes/pkg/util/testing 0.019s
ok k8s.io/kubernetes/pkg/util/threading 0.005s
ok k8s.io/kubernetes/pkg/util/validation 0.012s
ok k8s.io/kubernetes/pkg/util/validation/field 0.007s
ok k8s.io/kubernetes/pkg/util/wait 1.027s
ok k8s.io/kubernetes/pkg/util/workqueue 0.126s
ok k8s.io/kubernetes/pkg/util/wsstream 0.028s
ok k8s.io/kubernetes/pkg/util/yaml 0.006s
ok k8s.io/kubernetes/pkg/version 0.007s
ok k8s.io/kubernetes/pkg/volume 0.168s
ok k8s.io/kubernetes/pkg/volume/aws_ebs 0.025s
ok k8s.io/kubernetes/pkg/volume/azure_file 0.022s
ok k8s.io/kubernetes/pkg/volume/cephfs 0.021s
ok k8s.io/kubernetes/pkg/volume/cinder 2.027s
ok k8s.io/kubernetes/pkg/volume/configmap 0.027s
ok k8s.io/kubernetes/pkg/volume/downwardapi 0.072s
ok k8s.io/kubernetes/pkg/volume/empty_dir 0.030s
ok k8s.io/kubernetes/pkg/volume/fc 0.025s
ok k8s.io/kubernetes/pkg/volume/flexvolume 0.110s
ok k8s.io/kubernetes/pkg/volume/flocker 0.022s
ok k8s.io/kubernetes/pkg/volume/gce_pd 0.025s
ok k8s.io/kubernetes/pkg/volume/git_repo 0.027s
ok k8s.io/kubernetes/pkg/volume/glusterfs 0.026s
ok k8s.io/kubernetes/pkg/volume/host_path 0.022s
ok k8s.io/kubernetes/pkg/volume/iscsi 2.025s
ok k8s.io/kubernetes/pkg/volume/nfs 0.023s
ok k8s.io/kubernetes/pkg/volume/rbd 0.054s
ok k8s.io/kubernetes/pkg/volume/secret 0.154s
ok k8s.io/kubernetes/pkg/volume/util 0.096s
ok k8s.io/kubernetes/pkg/volume/vsphere_volume 0.030s
ok k8s.io/kubernetes/pkg/watch 0.010s
ok k8s.io/kubernetes/pkg/watch/versioned 0.025s
ok k8s.io/kubernetes/plugin/pkg/admission/admit 0.025s
ok k8s.io/kubernetes/plugin/pkg/admission/alwayspullimages 0.022s
ok k8s.io/kubernetes/plugin/pkg/admission/antiaffinity 0.062s
ok k8s.io/kubernetes/plugin/pkg/admission/deny 0.023s
ok k8s.io/kubernetes/plugin/pkg/admission/exec 0.028s
ok k8s.io/kubernetes/plugin/pkg/admission/initialresources 0.048s
ok k8s.io/kubernetes/plugin/pkg/admission/limitranger 0.107s
ok k8s.io/kubernetes/plugin/pkg/admission/namespace/autoprovision 0.022s
ok k8s.io/kubernetes/plugin/pkg/admission/namespace/exists 0.020s
ok k8s.io/kubernetes/plugin/pkg/admission/namespace/lifecycle 0.020s
ok k8s.io/kubernetes/plugin/pkg/admission/persistentvolume/label 0.024s
ok k8s.io/kubernetes/plugin/pkg/admission/resourcequota 0.030s
ok k8s.io/kubernetes/plugin/pkg/admission/security/podsecuritypolicy 0.049s
ok k8s.io/kubernetes/plugin/pkg/admission/securitycontext/scdeny 0.033s
ok k8s.io/kubernetes/plugin/pkg/admission/serviceaccount 1.403s
ok k8s.io/kubernetes/plugin/pkg/auth/authenticator/password/allow 0.005s
ok k8s.io/kubernetes/plugin/pkg/auth/authenticator/password/passwordfile 0.086s
ok k8s.io/kubernetes/plugin/pkg/auth/authenticator/request/basicauth 0.010s
ok k8s.io/kubernetes/plugin/pkg/auth/authenticator/request/union 0.012s
ok k8s.io/kubernetes/plugin/pkg/auth/authenticator/request/x509 0.011s
ok k8s.io/kubernetes/plugin/pkg/auth/authenticator/token/oidc 6.333s
ok k8s.io/kubernetes/plugin/pkg/auth/authenticator/token/tokenfile 0.057s
ok k8s.io/kubernetes/plugin/pkg/auth/authenticator/token/webhook 0.222s
ok k8s.io/kubernetes/plugin/pkg/auth/authorizer/webhook 0.212s
ok k8s.io/kubernetes/plugin/pkg/client/auth/oidc 1.788s
ok k8s.io/kubernetes/plugin/pkg/scheduler 1.026s
ok k8s.io/kubernetes/plugin/pkg/scheduler/algorithm 0.018s
ok k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates 1.002s
ok k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/priorities 0.046s
ok k8s.io/kubernetes/plugin/pkg/scheduler/algorithmprovider 0.020s
ok k8s.io/kubernetes/plugin/pkg/scheduler/algorithmprovider/defaults 0.081s
ok k8s.io/kubernetes/plugin/pkg/scheduler/api/validation 0.017s
ok k8s.io/kubernetes/plugin/pkg/scheduler/factory 0.047s
ok k8s.io/kubernetes/plugin/pkg/scheduler/schedulercache 0.017s
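Each `ok <package> <time>` line above is emitted by the Go test runner as the build script runs the unit tests inside the build container. A single package from the list can be re-run in isolation; this is a sketch only, assuming a Go toolchain and the kubernetes checkout on the GOPATH, as in the session this log comes from:

```shell
# Re-run one unit-test package from the list above in isolation.
# Assumes `go` is installed and the kubernetes repo is on the GOPATH,
# as in the build environment shown in this log.
if command -v go >/dev/null 2>&1; then
  go test k8s.io/kubernetes/pkg/api/resource \
    || echo "package not found (is the repo on your GOPATH?)"
else
  echo "go toolchain not installed"
fi
```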
ERRO[22757] Handler for POST /v1.23/containers/kube-build-573d825ce9/kill returned error: Cannot kill container kube-build-573d825ce9: Container a8d679a8f2b669741f31a9c48014590b69518a3e3344ea6d5578cfda360cc412 is not running
+++ [0523 22:03:34] Running build command....
ERRO[22758] Handler for POST /v1.23/containers/kube-build-573d825ce9/kill returned error: Cannot kill container kube-build-573d825ce9: No such container: kube-build-573d825ce9
ERRO[22758] Handler for POST /v1.23/containers/kube-build-573d825ce9/wait returned error: No such container: kube-build-573d825ce9
ERRO[22758] Handler for DELETE /v1.23/containers/kube-build-573d825ce9 returned error: No such container: kube-build-573d825ce9
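The `Cannot kill container` / `No such container` errors above are the Docker daemon reporting that the per-build data container the release script expects (`kube-build-data-573d825ce9`) is missing or no longer running; the script recreates it on the next "Running build command" pass. When a build dies part-way, a stale container can also linger and block the retry. A minimal cleanup sketch, assuming Docker is installed and the hash suffix matches your own build (here `573d825ce9`):

```shell
# Remove a stale kube-build data container so the release script can
# recreate it. NAME is taken from this log; substitute your own hash.
NAME="kube-build-data-573d825ce9"
if command -v docker >/dev/null 2>&1; then
  # -f force-removes the container even if it is stopped or half-created.
  docker rm -f "$NAME" 2>/dev/null || echo "no container named $NAME"
else
  echo "docker not installed; nothing to clean"
fi
```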
INFO[22758] No non-localhost DNS nameservers are left in resolv.conf. Using default external servers : [nameserver 8.8.8.8 nameserver 8.8.4.4]
INFO[22758] IPv6 enabled; Adding default IPv6 external servers : [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]
+++ [0523 22:03:35] Checking etcd is on PATH
/usr/local/bin/etcd
Go version: go version go1.6.2 linux/amd64
+++ [0523 22:03:35] Building the toolchain targets:
k8s.io/kubernetes/hack/cmd/teststale
+++ [0523 22:03:35] Building go targets for linux/amd64:
cmd/integration
+++ [0523 22:03:38] Placing binaries
etcd -data-dir /tmp.k8s/tmp.p9gJUpHJTH --bind-addr 127.0.0.1:4001 >/dev/null 2>/dev/null
Waiting for etcd to come up.
+++ [0523 22:03:43] On try 1, etcd: :
{"action":"set","node":{"key":"/_test","value":"","modifiedIndex":3,"createdIndex":3}}
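The JSON response above is the etcd 2.x keys API acknowledging a test write to `/_test`; this is how the script confirms etcd is up on the legacy client port 4001 that it bound earlier. The same probe can be made by hand; a sketch assuming etcd is listening locally as in this session:

```shell
# Probe etcd the way the build script does: PUT a test key via the
# v2 keys API on the legacy client port (4001) and print the response.
curl -fsS -L -X PUT "http://127.0.0.1:4001/v2/keys/_test" -d value="" \
  || echo "etcd not reachable on 127.0.0.1:4001"
```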
+++ [0523 22:03:43] Running integration test cases
Running tests for APIVersion: v1,extensions/v1beta1 with etcdPrefix: registry
+++ [0523 22:03:43] Running tests without code coverage
ok k8s.io/kubernetes/test/integration 295.810s
Running tests for APIVersion: v1,extensions/v1beta1 with etcdPrefix: kubernetes.io/registry
+++ [0523 22:09:18] Running tests without code coverage
2016-05-23 22:13:57.807365 I | etcdserver: start to snapshot (applied: 40004, lastsnap: 30003)
2016-05-23 22:13:57.906133 I | etcdserver: saved snapshot at index 40004
2016-05-23 22:13:57.906704 I | etcdserver: compacted raft log at 35004
ok k8s.io/kubernetes/test/integration 317.029s
+++ [0523 22:15:11] Running integration test scenario with watch cache on
I0523 22:15:13.796320 518 integration.go:766] Running tests for APIVersion:
I0523 22:15:13.796904 518 integration.go:113] Creating etcd client pointing to [http://127.0.0.1:4001]
W0523 22:15:13.871857 518 genericapiserver.go:257] Network range for service cluster IPs is unspecified. Defaulting to 10.0.0.0/24.
I0523 22:15:13.871952 518 genericapiserver.go:286] Node port range unspecified. Defaulting to 30000-32767.
W0523 22:15:14.504969 518 controller.go:277] Resetting endpoints for master service "kubernetes" to kind:"" apiVersion:""
I0523 22:15:14.508320 518 factory.go:256] Creating scheduler from algorithm provider 'DefaultProvider'
I0523 22:15:14.508535 518 factory.go:302] creating scheduler with fit predicates 'map[NoDiskConflict:{} NoVolumeZoneConflict:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} GeneralPredicates:{} PodToleratesNodeTaints:{} CheckNodeMemoryPressure:{}]' and priority functions 'map[BalancedResourceAllocation:{} SelectorSpreadPriority:{} NodeAffinityPriority:{} TaintTolerationPriority:{} LeastRequestedPriority:{}]
I0523 22:15:14.509793 518 nodecontroller.go:157] Sending events to api server.
I0523 22:15:14.510639 518 integration.go:216] Using /tmp/kubelet_integ_1.735272699 as root dir for kubelet #1
W0523 22:15:14.510962 518 server.go:645] No api server defined - no events will be sent to API server.
I0523 22:15:14.511224 518 server.go:707] Adding manifest file: /tmp/kubelet_integ_1.735272699/config102408222
I0523 22:15:14.579656 518 replication_controller.go:240] Starting RC Manager
I0523 22:15:14.583647 518 file.go:47] Watching path "/tmp/kubelet_integ_1.735272699/config102408222"
I0523 22:15:14.583941 518 server.go:713] Adding manifest url "http://127.0.0.1:54489/manifest" with HTTP header map[]
I0523 22:15:14.584478 518 http.go:55] Watching URL http://127.0.0.1:54489/manifest
I0523 22:15:14.584720 518 server.go:717] Watching apiserver
W0523 22:15:14.624434 518 iptables.go:144] Error checking iptables version, assuming version at least 1.4.11: executable file not found in $PATH
I0523 22:15:14.624592 518 iptables.go:177] Could not connect to D-Bus system bus: dial unix /var/run/dbus/system_bus_socket: connect: no such file or directory
I0523 22:15:14.624638 518 kubelet.go:378] Hairpin mode set to "hairpin-veth"
W0523 22:15:14.624799 518 plugins.go:170] can't set sysctl net/bridge/bridge-nf-call-iptables: open /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory
I0523 22:15:14.859472 518 manager.go:230] Setting dockerRoot to
I0523 22:15:14.971621 518 nodecontroller.go:636] NodeController is entering network segmentation mode.
I0523 22:15:15.151966 518 plugins.go:294] Loaded volume plugin "kubernetes.io/empty-dir"
I0523 22:15:15.152100 518 server.go:679] Started kubelet v1.3.0-alpha.4.392+8f104a7b0f2298
I0523 22:15:15.152305 518 integration.go:249] Using /tmp/kubelet_integ_2.145652965 as root dir for kubelet #2
W0523 22:15:15.152406 518 server.go:645] No api server defined - no events will be sent to API server.
I0523 22:15:15.152446 518 server.go:713] Adding manifest url "http://127.0.0.1:38499/manifest" with HTTP header map[]
I0523 22:15:15.152482 518 http.go:55] Watching URL http://127.0.0.1:38499/manifest
I0523 22:15:15.152498 518 server.go:717] Watching apiserver
W0523 22:15:15.152678 518 iptables.go:144] Error checking iptables version, assuming version at least 1.4.11: executable file not found in $PATH
I0523 22:15:15.152769 518 iptables.go:177] Could not connect to D-Bus system bus: dial unix /var/run/dbus/system_bus_socket: connect: no such file or directory
I0523 22:15:15.152801 518 kubelet.go:378] Hairpin mode set to "hairpin-veth"
W0523 22:15:15.152924 518 plugins.go:170] can't set sysctl net/bridge/bridge-nf-call-iptables: open /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory
I0523 22:15:15.152975 518 manager.go:230] Setting dockerRoot to
E0523 22:15:15.153752 518 kubelet.go:905] Image garbage collection failed: invalid capacity 0 on device "" at mount point ""
I0523 22:15:15.154115 518 container_manager_stub.go:29] Starting stub container manager
I0523 22:15:15.154165 518 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I0523 22:15:15.154191 518 manager.go:123] Starting to sync pod status with apiserver
I0523 22:15:15.154268 518 kubelet.go:2492] Starting kubelet main sync loop.
I0523 22:15:15.154291 518 kubelet.go:2501] skipping pod synchronization - [network state unknown container runtime is down]
I0523 22:15:15.154781 518 server.go:117] Starting to listen on 127.0.0.1:10250
I0523 22:15:15.260186 518 kubelet.go:2904] Recording NodeHasSufficientDisk event message for node localhost
I0523 22:15:15.260609 518 kubelet.go:2904] Recording NodeHasSufficientMemory event message for node localhost
I0523 22:15:15.260868 518 kubelet.go:1099] Attempting to register node localhost
I0523 22:15:15.269422 518 kubelet.go:1130] Successfully registered node localhost
I0523 22:15:15.321587 518 plugins.go:294] Loaded volume plugin "kubernetes.io/empty-dir"
I0523 22:15:15.322047 518 server.go:679] Started kubelet v1.3.0-alpha.4.392+8f104a7b0f2298
I0523 22:15:15.322279 518 integration.go:773] API Server started on http://127.0.0.1:38715
I0523 22:15:15.322586 518 server.go:117] Starting to listen on 127.0.0.1:10251
E0523 22:15:15.325267 518 kubelet.go:905] Image garbage collection failed: invalid capacity 0 on device "" at mount point ""
I0523 22:15:15.325861 518 container_manager_stub.go:29] Starting stub container manager
I0523 22:15:15.326100 518 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I0523 22:15:15.326362 518 manager.go:123] Starting to sync pod status with apiserver
I0523 22:15:15.326606 518 kubelet.go:2492] Starting kubelet main sync loop.
I0523 22:15:15.326863 518 kubelet.go:2501] skipping pod synchronization - [network state unknown container runtime is down]
I0523 22:15:15.427617 518 kubelet.go:2904] Recording NodeHasSufficientDisk event message for node 127.0.0.1
I0523 22:15:15.428088 518 kubelet.go:2904] Recording NodeHasSufficientMemory event message for node 127.0.0.1
I0523 22:15:15.428378 518 kubelet.go:1099] Attempting to register node 127.0.0.1
I0523 22:15:15.434161 518 kubelet.go:1130] Successfully registered node 127.0.0.1
I0523 22:15:19.975848 518 nodecontroller.go:525] NodeController observed a new Node: api.Node{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"127.0.0.1", GenerateName:"", Namespace:"", SelfLink:"/api/v1/nodes/127.0.0.1", UID:"b9a17537-2105-11e6-92dc-0242ac110002", ResourceVersion:"3186", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:63599618715, nsec:0, loc:(*time.Location)(0x557e540)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"beta.kubernetes.io/os":"linux", "kubernetes.io/hostname":"127.0.0.1", "beta.kubernetes.io/arch":"amd64"}, Annotations:map[string]string(nil), OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil)}, Spec:api.NodeSpec{PodCIDR:"", ExternalID:"127.0.0.1", ProviderID:"", Unschedulable:false}, Status:api.NodeStatus{Capacity:api.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:4026531840, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x33, 0x38, 0x34, 0x30, 0x4d, 0x69}, Format:"BinarySI"}, "pods":resource.Quantity{i:resource.int64Amount{value:40, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x34, 0x30}, Format:"DecimalSI"}, "alpha.kubernetes.io/nvidia-gpu":resource.Quantity{i:resource.int64Amount{value:0, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x30}, Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:1, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x31}, Format:"DecimalSI"}}, Allocatable:api.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:4026531840, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x33, 0x38, 0x34, 0x30, 0x4d, 0x69}, Format:"BinarySI"}, "pods":resource.Quantity{i:resource.int64Amount{value:40, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x34, 0x30}, Format:"DecimalSI"}, 
"alpha.kubernetes.io/nvidia-gpu":resource.Quantity{i:resource.int64Amount{value:0, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x30}, Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:1, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x31}, Format:"DecimalSI"}}, Phase:"", Conditions:[]api.NodeCondition{api.NodeCondition{Type:"OutOfDisk", Status:"False", LastHeartbeatTime:unversioned.Time{Time:time.Time{sec:63599618718, nsec:0, loc:(*time.Location)(0x557e540)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63599618715, nsec:0, loc:(*time.Location)(0x557e540)}}, Reason:"KubeletHasSufficientDisk", Message:"kubelet has sufficient disk space available"}, api.NodeCondition{Type:"MemoryPressure", Status:"False", LastHeartbeatTime:unversioned.Time{Time:time.Time{sec:63599618718, nsec:0, loc:(*time.Location)(0x557e540)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63599618715, nsec:0, loc:(*time.Location)(0x557e540)}}, Reason:"KubeletHasSufficientMemory", Message:"kubelet has sufficient memory available"}, api.NodeCondition{Type:"Ready", Status:"True", LastHeartbeatTime:unversioned.Time{Time:time.Time{sec:63599618718, nsec:0, loc:(*time.Location)(0x557e540)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63599618715, nsec:0, loc:(*time.Location)(0x557e540)}}, Reason:"KubeletReady", Message:"kubelet is posting ready status"}}, Addresses:[]api.NodeAddress{api.NodeAddress{Type:"LegacyHostIP", Address:"127.0.0.1"}, api.NodeAddress{Type:"InternalIP", Address:"127.0.0.1"}}, DaemonEndpoints:api.NodeDaemonEndpoints{KubeletEndpoint:api.DaemonEndpoint{Port:10251}}, NodeInfo:api.NodeSystemInfo{MachineID:"", SystemUUID:"", BootID:"", KernelVersion:"", OSImage:"", ContainerRuntimeVersion:"docker://1.8.1", KubeletVersion:"v1.3.0-alpha.4.392+8f104a7b0f2298", KubeProxyVersion:"v1.3.0-alpha.4.392+8f104a7b0f2298", OperatingSystem:"linux", Architecture:"amd64"}, Images:[]api.ContainerImage(nil)}}
I0523 22:15:19.976898 518 nodecontroller.go:691] Recording Registered Node 127.0.0.1 in NodeController event message for node 127.0.0.1
I0523 22:15:19.977195 518 nodecontroller.go:525] NodeController observed a new Node: api.Node{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"localhost", GenerateName:"", Namespace:"", SelfLink:"/api/v1/nodes/localhost", UID:"b987f92d-2105-11e6-92dc-0242ac110002", ResourceVersion:"3185", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:63599618715, nsec:0, loc:(*time.Location)(0x557e540)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"beta.kubernetes.io/arch":"amd64", "beta.kubernetes.io/os":"linux", "kubernetes.io/hostname":"localhost"}, Annotations:map[string]string(nil), OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil)}, Spec:api.NodeSpec{PodCIDR:"", ExternalID:"localhost", ProviderID:"", Unschedulable:false}, Status:api.NodeStatus{Capacity:api.ResourceList{"pods":resource.Quantity{i:resource.int64Amount{value:40, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x34, 0x30}, Format:"DecimalSI"}, "alpha.kubernetes.io/nvidia-gpu":resource.Quantity{i:resource.int64Amount{value:0, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x30}, Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:1, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x31}, Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:4026531840, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x33, 0x38, 0x34, 0x30, 0x4d, 0x69}, Format:"BinarySI"}}, Allocatable:api.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:4026531840, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x33, 0x38, 0x34, 0x30, 0x4d, 0x69}, Format:"BinarySI"}, "pods":resource.Quantity{i:resource.int64Amount{value:40, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x34, 0x30}, Format:"DecimalSI"}, 
"alpha.kubernetes.io/nvidia-gpu":resource.Quantity{i:resource.int64Amount{value:0, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x30}, Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:1, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x31}, Format:"DecimalSI"}}, Phase:"", Conditions:[]api.NodeCondition{api.NodeCondition{Type:"OutOfDisk", Status:"False", LastHeartbeatTime:unversioned.Time{Time:time.Time{sec:63599618718, nsec:0, loc:(*time.Location)(0x557e540)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63599618715, nsec:0, loc:(*time.Location)(0x557e540)}}, Reason:"KubeletHasSufficientDisk", Message:"kubelet has sufficient disk space available"}, api.NodeCondition{Type:"MemoryPressure", Status:"False", LastHeartbeatTime:unversioned.Time{Time:time.Time{sec:63599618718, nsec:0, loc:(*time.Location)(0x557e540)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63599618715, nsec:0, loc:(*time.Location)(0x557e540)}}, Reason:"KubeletHasSufficientMemory", Message:"kubelet has sufficient memory available"}, api.NodeCondition{Type:"Ready", Status:"True", LastHeartbeatTime:unversioned.Time{Time:time.Time{sec:63599618718, nsec:0, loc:(*time.Location)(0x557e540)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63599618715, nsec:0, loc:(*time.Location)(0x557e540)}}, Reason:"KubeletReady", Message:"kubelet is posting ready status"}}, Addresses:[]api.NodeAddress{api.NodeAddress{Type:"LegacyHostIP", Address:"172.17.0.2"}, api.NodeAddress{Type:"InternalIP", Address:"172.17.0.2"}}, DaemonEndpoints:api.NodeDaemonEndpoints{KubeletEndpoint:api.DaemonEndpoint{Port:10250}}, NodeInfo:api.NodeSystemInfo{MachineID:"", SystemUUID:"", BootID:"", KernelVersion:"", OSImage:"", ContainerRuntimeVersion:"docker://1.8.1", KubeletVersion:"v1.3.0-alpha.4.392+8f104a7b0f2298", KubeProxyVersion:"v1.3.0-alpha.4.392+8f104a7b0f2298", OperatingSystem:"linux", Architecture:"amd64"}, Images:[]api.ContainerImage(nil)}}
I0523 22:15:19.979851 518 nodecontroller.go:691] Recording Registered Node localhost in NodeController event message for node localhost
W0523 22:15:19.980279 518 nodecontroller.go:758] Missing timestamp for Node 127.0.0.1. Assuming now as a timestamp.
W0523 22:15:19.980520 518 nodecontroller.go:758] Missing timestamp for Node localhost. Assuming now as a timestamp.
I0523 22:15:19.980832 518 nodecontroller.go:641] NodeController exited network segmentation mode.
I0523 22:15:19.978348 518 event.go:216] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"127.0.0.1", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node 127.0.0.1 event: Registered Node 127.0.0.1 in NodeController
I0523 22:15:19.985844 518 event.go:216] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node localhost event: Registered Node localhost in NodeController
I0523 22:15:20.154983 518 kubelet.go:2558] SyncLoop (ADD, "file"): ""
I0523 22:15:20.155452 518 kubelet.go:2558] SyncLoop (ADD, "http"): "container-vm-guestbook-pod-spec-localhost_default(f3d5be8fac656a02fc266a2ed8e419c1)"
I0523 22:15:20.156144 518 kubelet.go:2558] SyncLoop (ADD, "api"): ""
I0523 22:15:20.165365 518 kubelet.go:2558] SyncLoop (ADD, "api"): "container-vm-guestbook-pod-spec-localhost_default(bc731f9c-2105-11e6-92dc-0242ac110002)"
I0523 22:15:20.177721 518 manager.go:1706] Need to restart pod infra container for "container-vm-guestbook-pod-spec-localhost_default(f3d5be8fac656a02fc266a2ed8e419c1)" because it is not found
I0523 22:15:20.179151 518 manager.go:1455] Generating ref for container POD: &api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"container-vm-guestbook-pod-spec-localhost", UID:"f3d5be8fac656a02fc266a2ed8e419c1", APIVersion:"v1", ResourceVersion:"", FieldPath:"implicitly required container POD"}
E0523 22:15:20.180176 518 manager.go:1584] ResolvConfPath is empty.
I0523 22:15:20.181166 518 hairpin.go:61] Unable to find pair interface, setting up all interfaces: exec: "ethtool": executable file not found in $PATH
I0523 22:15:20.181757 518 manager.go:1455] Generating ref for container redis: &api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"container-vm-guestbook-pod-spec-localhost", UID:"f3d5be8fac656a02fc266a2ed8e419c1", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{redis}"}
I0523 22:15:20.182896 518 manager.go:1455] Generating ref for container guestbook: &api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"container-vm-guestbook-pod-spec-localhost", UID:"f3d5be8fac656a02fc266a2ed8e419c1", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{guestbook}"}
I0523 22:15:20.327498 518 kubelet.go:2558] SyncLoop (ADD, "api"): ""
I0523 22:15:20.327981 518 kubelet.go:2558] SyncLoop (ADD, "http"): "container-vm-guestbook-pod-spec-127.0.0.1_default(f3d5be8fac656a02fc266a2ed8e419c1)"
I0523 22:15:20.335577 518 kubelet.go:2558] SyncLoop (ADD, "api"): "container-vm-guestbook-pod-spec-127.0.0.1_default(bc8d4973-2105-11e6-92dc-0242ac110002)"
I0523 22:15:20.338016 518 manager.go:1706] Need to restart pod infra container for "container-vm-guestbook-pod-spec-127.0.0.1_default(f3d5be8fac656a02fc266a2ed8e419c1)" because it is not found
I0523 22:15:20.339585 518 manager.go:1455] Generating ref for container POD: &api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"container-vm-guestbook-pod-spec-127.0.0.1", UID:"f3d5be8fac656a02fc266a2ed8e419c1", APIVersion:"v1", ResourceVersion:"", FieldPath:"implicitly required container POD"}
E0523 22:15:20.340456 518 manager.go:1584] ResolvConfPath is empty.
I0523 22:15:20.340886 518 hairpin.go:61] Unable to find pair interface, setting up all interfaces: exec: "ethtool": executable file not found in $PATH
I0523 22:15:20.341365 518 manager.go:1455] Generating ref for container redis: &api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"container-vm-guestbook-pod-spec-127.0.0.1", UID:"f3d5be8fac656a02fc266a2ed8e419c1", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{redis}"}
I0523 22:15:20.342621 518 manager.go:1455] Generating ref for container guestbook: &api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"container-vm-guestbook-pod-spec-127.0.0.1", UID:"f3d5be8fac656a02fc266a2ed8e419c1", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{guestbook}"}
I0523 22:15:21.156289 518 kubelet.go:2585] SyncLoop (PLEG): "container-vm-guestbook-pod-spec-localhost_default(f3d5be8fac656a02fc266a2ed8e419c1)", event: &pleg.PodLifecycleEvent{ID:"f3d5be8fac656a02fc266a2ed8e419c1", Type:"ContainerStarted", Data:"/k8s_guestbook.1c010e8a_container-vm-guestbook-pod-spec-localhost_default_f3d5be8fac656a02fc266a2ed8e419c1_bde9d03d"}
I0523 22:15:21.156409 518 kubelet.go:2585] SyncLoop (PLEG): "container-vm-guestbook-pod-spec-localhost_default(f3d5be8fac656a02fc266a2ed8e419c1)", event: &pleg.PodLifecycleEvent{ID:"f3d5be8fac656a02fc266a2ed8e419c1", Type:"ContainerStarted", Data:"/k8s_redis.dbcc0798_container-vm-guestbook-pod-spec-localhost_default_f3d5be8fac656a02fc266a2ed8e419c1_eecf0606"}
I0523 22:15:21.156450 518 kubelet.go:2585] SyncLoop (PLEG): "container-vm-guestbook-pod-spec-localhost_default(f3d5be8fac656a02fc266a2ed8e419c1)", event: &pleg.PodLifecycleEvent{ID:"f3d5be8fac656a02fc266a2ed8e419c1", Type:"ContainerStarted", Data:"/k8s_POD.1f83049e_container-vm-guestbook-pod-spec-localhost_default_f3d5be8fac656a02fc266a2ed8e419c1_4fbfef56"}
I0523 22:15:21.330453 518 kubelet.go:2585] SyncLoop (PLEG): "container-vm-guestbook-pod-spec-127.0.0.1_default(f3d5be8fac656a02fc266a2ed8e419c1)", event: &pleg.PodLifecycleEvent{ID:"f3d5be8fac656a02fc266a2ed8e419c1", Type:"ContainerStarted", Data:"/k8s_guestbook.1c010e8a_container-vm-guestbook-pod-spec-127.0.0.1_default_f3d5be8fac656a02fc266a2ed8e419c1_6da8cf49"}
I0523 22:15:21.334404 518 kubelet.go:2585] SyncLoop (PLEG): "container-vm-guestbook-pod-spec-127.0.0.1_default(f3d5be8fac656a02fc266a2ed8e419c1)", event: &pleg.PodLifecycleEvent{ID:"f3d5be8fac656a02fc266a2ed8e419c1", Type:"ContainerStarted", Data:"/k8s_redis.dbcc0798_container-vm-guestbook-pod-spec-127.0.0.1_default_f3d5be8fac656a02fc266a2ed8e419c1_2ed42ad5"}
I0523 22:15:21.334688 518 kubelet.go:2585] SyncLoop (PLEG): "container-vm-guestbook-pod-spec-127.0.0.1_default(f3d5be8fac656a02fc266a2ed8e419c1)", event: &pleg.PodLifecycleEvent{ID:"f3d5be8fac656a02fc266a2ed8e419c1", Type:"ContainerStarted", Data:"/k8s_POD.1f83049e_container-vm-guestbook-pod-spec-127.0.0.1_default_f3d5be8fac656a02fc266a2ed8e419c1_baf07339"}
I0523 22:15:25.322972 518 integration.go:810] Running 5 tests in parallel.
I0523 22:15:25.329567 518 integration.go:396] Version test passed
I0523 22:15:25.359035 518 integration.go:493] Created atomicService
I0523 22:15:25.359441 518 integration.go:506] Starting to update (e, v)
I0523 22:15:25.359937 518 integration.go:506] Starting to update (foo, bar)
I0523 22:15:25.360482 518 integration.go:506] Starting to update (a, z)
I0523 22:15:25.361033 518 integration.go:506] Starting to update (b, y)
I0523 22:15:25.361545 518 integration.go:506] Starting to update (c, x)
I0523 22:15:25.362019 518 integration.go:506] Starting to update (d, w)
I0523 22:15:25.370790 518 integration.go:517] Posting update (d, w)
I0523 22:15:25.372368 518 integration.go:517] Posting update (c, x)
I0523 22:15:25.374502 518 integration.go:517] Posting update (b, y)
I0523 22:15:25.375374 518 integration.go:517] Posting update (a, z)
I0523 22:15:25.376216 518 integration.go:517] Posting update (foo, bar)
I0523 22:15:25.377022 518 integration.go:517] Posting update (e, v)
I0523 22:15:25.469650 518 integration.go:530] Done update (d, w)
I0523 22:15:25.471451 518 integration.go:521] Conflict: (foo, bar)
I0523 22:15:25.471824 518 integration.go:506] Starting to update (foo, bar)
I0523 22:15:25.473093 518 integration.go:521] Conflict: (e, v)
I0523 22:15:25.475870 518 integration.go:506] Starting to update (e, v)
I0523 22:15:25.476132 518 integration.go:521] Conflict: (c, x)
I0523 22:15:25.476155 518 integration.go:506] Starting to update (c, x)
I0523 22:15:25.476371 518 integration.go:521] Conflict: (b, y)
I0523 22:15:25.476394 518 integration.go:506] Starting to update (b, y)
I0523 22:15:25.476607 518 integration.go:521] Conflict: (a, z)
I0523 22:15:25.476630 518 integration.go:506] Starting to update (a, z)
I0523 22:15:25.502488 518 integration.go:517] Posting update (a, z)
I0523 22:15:25.503308 518 integration.go:517] Posting update (b, y)
I0523 22:15:25.508293 518 integration.go:517] Posting update (c, x)
I0523 22:15:25.508802 518 integration.go:517] Posting update (e, v)
I0523 22:15:25.509588 518 integration.go:517] Posting update (foo, bar)
I0523 22:15:25.512417 518 integration.go:460] Self link test passed in namespace default
I0523 22:15:25.549680 518 integration.go:530] Done update (foo, bar)
I0523 22:15:25.554212 518 integration.go:530] Done update (e, v)
I0523 22:15:25.554695 518 integration.go:521] Conflict: (b, y)
I0523 22:15:25.555878 518 integration.go:506] Starting to update (b, y)
I0523 22:15:25.556114 518 integration.go:521] Conflict: (c, x)
I0523 22:15:25.556137 518 integration.go:506] Starting to update (c, x)
I0523 22:15:25.556427 518 integration.go:521] Conflict: (a, z)
I0523 22:15:25.556449 518 integration.go:506] Starting to update (a, z)
I0523 22:15:25.566549 518 integration.go:517] Posting update (a, z)
I0523 22:15:25.573443 518 integration.go:517] Posting update (c, x)
I0523 22:15:25.575248 518 integration.go:517] Posting update (b, y)
I0523 22:15:25.594932 518 integration.go:530] Done update (c, x)
I0523 22:15:25.597635 518 integration.go:521] Conflict: (a, z)
I0523 22:15:25.597760 518 integration.go:506] Starting to update (a, z)
I0523 22:15:25.600673 518 integration.go:517] Posting update (a, z)
I0523 22:15:25.605585 518 integration.go:460] Self link test passed in namespace other
I0523 22:15:25.612981 518 integration.go:530] Done update (a, z)
I0523 22:15:25.620178 518 integration.go:521] Conflict: (b, y)
I0523 22:15:25.620448 518 integration.go:506] Starting to update (b, y)
I0523 22:15:25.622608 518 integration.go:517] Posting update (b, y)
I0523 22:15:25.638052 518 integration.go:530] Done update (b, y)
I0523 22:15:25.645647 518 integration.go:542] Atomic PUTs work.
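The Starting/Posting/Conflict/Done sequence above traces the apiserver's optimistic concurrency: each writer re-reads the object, applies its change, and re-posts; a stale resourceVersion yields a Conflict and another round. A minimal sketch of that retry pattern, using a hypothetical in-memory store rather than the real client API:

```python
class Conflict(Exception):
    """Raised when a PUT carries a stale version (HTTP 409 in the real API)."""

class Store:
    # Toy stand-in for the apiserver's compare-and-swap on resourceVersion.
    def __init__(self):
        self.labels = {}
        self.version = 0

    def get(self):
        # Return a snapshot plus the version it was read at.
        return dict(self.labels), self.version

    def put(self, labels, version):
        if version != self.version:   # write based on a stale read
            raise Conflict()
        self.labels = labels
        self.version += 1

def atomic_update(store, key, value):
    # "Starting to update" -> "Posting update" -> retry on "Conflict".
    while True:
        labels, version = store.get()
        labels[key] = value
        try:
            store.put(labels, version)
            return                    # "Done update"
        except Conflict:
            continue                  # re-read and try again

store = Store()
for k, v in [("foo", "bar"), ("a", "z"), ("b", "y")]:
    atomic_update(store, k, v)
print(store.labels)
```

In the log the updaters run in parallel, so most rounds lose the race at least once before their PUT lands; the loop here is the same shape, just driven sequentially.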
I0523 22:15:25.974643 518 integration.go:650] PATCHs work.
I0523 22:15:37.329132 518 integration.go:679] Master service test passed.
I0523 22:15:37.329586 518 integration.go:851] OK - found created containers: []string{"/k8s_POD.1f83049e_container-vm-guestbook-pod-spec-127.0.0.1_default_f3d5be8fac656a02fc266a2ed8e419c1_", "/k8s_POD.1f83049e_container-vm-guestbook-pod-spec-localhost_default_f3d5be8fac656a02fc266a2ed8e419c1_", "/k8s_guestbook.1c010e8a_container-vm-guestbook-pod-spec-127.0.0.1_default_f3d5be8fac656a02fc266a2ed8e419c1_", "/k8s_guestbook.1c010e8a_container-vm-guestbook-pod-spec-localhost_default_f3d5be8fac656a02fc266a2ed8e419c1_", "/k8s_redis.dbcc0798_container-vm-guestbook-pod-spec-127.0.0.1_default_f3d5be8fac656a02fc266a2ed8e419c1_", "/k8s_redis.dbcc0798_container-vm-guestbook-pod-spec-localhost_default_f3d5be8fac656a02fc266a2ed8e419c1_"}
I0523 22:15:37.366179 518 event.go:216] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"phantom.foo", UID:"c6af3efc-2105-11e6-92dc-0242ac110002", APIVersion:"v1", ResourceVersion:"3234", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned phantom.foo to localhost
I0523 22:15:37.369015 518 kubelet.go:2558] SyncLoop (ADD, "api"): "phantom.foo_default(c6af3efc-2105-11e6-92dc-0242ac110002)"
I0523 22:15:37.371720 518 manager.go:1706] Need to restart pod infra container for "phantom.foo_default(c6af3efc-2105-11e6-92dc-0242ac110002)" because it is not found
I0523 22:15:37.372648 518 manager.go:1455] Generating ref for container POD: &api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"phantom.foo", UID:"c6af3efc-2105-11e6-92dc-0242ac110002", APIVersion:"v1", ResourceVersion:"3235", FieldPath:"implicitly required container POD"}
E0523 22:15:37.373980 518 manager.go:1584] ResolvConfPath is empty.
I0523 22:15:37.374459 518 hairpin.go:61] Unable to find pair interface, setting up all interfaces: exec: "ethtool": executable file not found in $PATH
I0523 22:15:37.374967 518 manager.go:1455] Generating ref for container c1: &api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"phantom.foo", UID:"c6af3efc-2105-11e6-92dc-0242ac110002", APIVersion:"v1", ResourceVersion:"3235", FieldPath:"spec.containers{c1}"}
I0523 22:15:38.159984 518 kubelet.go:2585] SyncLoop (PLEG): "phantom.foo_default(c6af3efc-2105-11e6-92dc-0242ac110002)", event: &pleg.PodLifecycleEvent{ID:"c6af3efc-2105-11e6-92dc-0242ac110002", Type:"ContainerStarted", Data:"/k8s_c1.4efa0b7e_phantom.foo_default_c6af3efc-2105-11e6-92dc-0242ac110002_8d407968"}
I0523 22:15:38.160089 518 kubelet.go:2585] SyncLoop (PLEG): "phantom.foo_default(c6af3efc-2105-11e6-92dc-0242ac110002)", event: &pleg.PodLifecycleEvent{ID:"c6af3efc-2105-11e6-92dc-0242ac110002", Type:"ContainerStarted", Data:"/k8s_POD.5d120417_phantom.foo_default_c6af3efc-2105-11e6-92dc-0242ac110002_a644f0f8"}
I0523 22:15:38.366597 518 event.go:216] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"phantom.bar", UID:"c748c31d-2105-11e6-92dc-0242ac110002", APIVersion:"v1", ResourceVersion:"3239", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned phantom.bar to 127.0.0.1
I0523 22:15:38.370860 518 kubelet.go:2558] SyncLoop (ADD, "api"): "phantom.bar_default(c748c31d-2105-11e6-92dc-0242ac110002)"
I0523 22:15:38.372012 518 manager.go:1706] Need to restart pod infra container for "phantom.bar_default(c748c31d-2105-11e6-92dc-0242ac110002)" because it is not found
I0523 22:15:38.372876 518 manager.go:1455] Generating ref for container POD: &api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"phantom.bar", UID:"c748c31d-2105-11e6-92dc-0242ac110002", APIVersion:"v1", ResourceVersion:"3240", FieldPath:"implicitly required container POD"}
E0523 22:15:38.373800 518 manager.go:1584] ResolvConfPath is empty.
I0523 22:15:38.374268 518 hairpin.go:61] Unable to find pair interface, setting up all interfaces: exec: "ethtool": executable file not found in $PATH
I0523 22:15:38.374776 518 manager.go:1455] Generating ref for container c1: &api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"phantom.bar", UID:"c748c31d-2105-11e6-92dc-0242ac110002", APIVersion:"v1", ResourceVersion:"3240", FieldPath:"spec.containers{c1}"}
I0523 22:15:39.342534 518 integration.go:378] Pod default/phantom.bar is not running. In phase "Pending"
I0523 22:15:39.347585 518 kubelet.go:2585] SyncLoop (PLEG): "phantom.bar_default(c748c31d-2105-11e6-92dc-0242ac110002)", event: &pleg.PodLifecycleEvent{ID:"c748c31d-2105-11e6-92dc-0242ac110002", Type:"ContainerStarted", Data:"/k8s_c1.4efa0b7e_phantom.bar_default_c748c31d-2105-11e6-92dc-0242ac110002_fc682f3b"}
I0523 22:15:39.347680 518 kubelet.go:2585] SyncLoop (PLEG): "phantom.bar_default(c748c31d-2105-11e6-92dc-0242ac110002)", event: &pleg.PodLifecycleEvent{ID:"c748c31d-2105-11e6-92dc-0242ac110002", Type:"ContainerStarted", Data:"/k8s_POD.5d120417_phantom.bar_default_c748c31d-2105-11e6-92dc-0242ac110002_21725a02"}
I0523 22:15:40.343020 518 integration.go:720] Deleting pod phantom.bar
I0523 22:15:40.353132 518 kubelet.go:2565] SyncLoop (UPDATE, "api"): "phantom.bar_default(c748c31d-2105-11e6-92dc-0242ac110002)"
I0523 22:15:40.380541 518 kubelet.go:2568] SyncLoop (REMOVE, "api"): "phantom.bar_default(c748c31d-2105-11e6-92dc-0242ac110002)"
I0523 22:15:40.381159 518 kubelet.go:2338] Killing unwanted pod "phantom.bar"
I0523 22:15:40.381774 518 manager.go:1309] Killing container "/k8s_c1.4efa0b7e_phantom.bar_default_c748c31d-2105-11e6-92dc-0242ac110002_fc682f3b c1 default/phantom.bar" with 0 second grace period
I0523 22:15:40.382091 518 manager.go:1347] Container "/k8s_c1.4efa0b7e_phantom.bar_default_c748c31d-2105-11e6-92dc-0242ac110002_fc682f3b c1 default/phantom.bar" exited after 80.017µs
I0523 22:15:40.382411 518 manager.go:1309] Killing container "/k8s_POD.5d120417_phantom.bar_default_c748c31d-2105-11e6-92dc-0242ac110002_21725a02 default/phantom.bar" with 0 second grace period
I0523 22:15:40.382754 518 manager.go:1347] Container "/k8s_POD.5d120417_phantom.bar_default_c748c31d-2105-11e6-92dc-0242ac110002_21725a02 default/phantom.bar" exited after 60.653µs
I0523 22:15:40.426064 518 event.go:216] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"phantom.baz", UID:"c87ffac1-2105-11e6-92dc-0242ac110002", APIVersion:"v1", ResourceVersion:"3248", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned phantom.baz to 127.0.0.1
I0523 22:15:40.429455 518 kubelet.go:2558] SyncLoop (ADD, "api"): "phantom.baz_default(c87ffac1-2105-11e6-92dc-0242ac110002)"
I0523 22:15:40.431609 518 manager.go:1706] Need to restart pod infra container for "phantom.baz_default(c87ffac1-2105-11e6-92dc-0242ac110002)" because it is not found
I0523 22:15:40.432616 518 manager.go:1455] Generating ref for container POD: &api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"phantom.baz", UID:"c87ffac1-2105-11e6-92dc-0242ac110002", APIVersion:"v1", ResourceVersion:"3249", FieldPath:"implicitly required container POD"}
E0523 22:15:40.433690 518 manager.go:1584] ResolvConfPath is empty.
I0523 22:15:40.433988 518 hairpin.go:61] Unable to find pair interface, setting up all interfaces: exec: "ethtool": executable file not found in $PATH
I0523 22:15:40.434447 518 manager.go:1455] Generating ref for container c1: &api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"phantom.baz", UID:"c87ffac1-2105-11e6-92dc-0242ac110002", APIVersion:"v1", ResourceVersion:"3249", FieldPath:"spec.containers{c1}"}
I0523 22:15:41.328656 518 kubelet.go:2338] Killing unwanted pod "phantom.bar"
I0523 22:15:41.328843 518 manager.go:1309] Killing container "/k8s_c1.4efa0b7e_phantom.bar_default_c748c31d-2105-11e6-92dc-0242ac110002_fc682f3b /" with 30 second grace period
I0523 22:15:41.328928 518 manager.go:1347] Container "/k8s_c1.4efa0b7e_phantom.bar_default_c748c31d-2105-11e6-92dc-0242ac110002_fc682f3b /" exited after 55.711µs
W0523 22:15:41.328957 518 manager.go:1353] No ref for pod '"/k8s_c1.4efa0b7e_phantom.bar_default_c748c31d-2105-11e6-92dc-0242ac110002_fc682f3b /"'
I0523 22:15:41.329010 518 manager.go:1309] Killing container "/k8s_POD.5d120417_phantom.bar_default_c748c31d-2105-11e6-92dc-0242ac110002_21725a02 /" with 30 second grace period
I0523 22:15:41.329045 518 manager.go:1347] Container "/k8s_POD.5d120417_phantom.bar_default_c748c31d-2105-11e6-92dc-0242ac110002_21725a02 /" exited after 15.509µs
W0523 22:15:41.329066 518 manager.go:1353] No ref for pod '"/k8s_POD.5d120417_phantom.bar_default_c748c31d-2105-11e6-92dc-0242ac110002_21725a02 /"'
I0523 22:15:41.352288 518 kubelet.go:2585] SyncLoop (PLEG): "phantom.baz_default(c87ffac1-2105-11e6-92dc-0242ac110002)", event: &pleg.PodLifecycleEvent{ID:"c87ffac1-2105-11e6-92dc-0242ac110002", Type:"ContainerStarted", Data:"/k8s_c1.4efa0b7e_phantom.baz_default_c87ffac1-2105-11e6-92dc-0242ac110002_ab694965"}
I0523 22:15:41.352444 518 kubelet.go:2585] SyncLoop (PLEG): "phantom.baz_default(c87ffac1-2105-11e6-92dc-0242ac110002)", event: &pleg.PodLifecycleEvent{ID:"c87ffac1-2105-11e6-92dc-0242ac110002", Type:"ContainerStarted", Data:"/k8s_POD.5d120417_phantom.baz_default_c87ffac1-2105-11e6-92dc-0242ac110002_15e4ac07"}
I0523 22:15:41.362923 518 manager.go:399] Status for pod "c748c31d-2105-11e6-92dc-0242ac110002" is up-to-date; skipping
I0523 22:15:41.403556 518 integration.go:739] Scheduler doesn't make phantom pods: test passed.
I0523 22:15:41.403911 518 integration.go:858]
Logging high latency metrics from the 10250 kubelet
I0523 22:15:41.457233 518 server.go:959] GET /metrics: (51.640832ms) 200 [[Go-http-client/1.1] 127.0.0.1:58289]
May 23 22:15:41.484: INFO:
Latency metrics for node localhost:10250
May 23 22:15:41.484: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:6.165881s}
May 23 22:15:41.484: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:5.193218s}
May 23 22:15:41.484: INFO: {Operation: Method:pod_worker_start_latency_microseconds Quantile:0.9 Latency:5.163868s}
May 23 22:15:41.484: INFO: {Operation: Method:pod_worker_start_latency_microseconds Quantile:0.99 Latency:5.163868s}
I0523 22:15:41.484202 518 integration.go:860]
Logging high latency metrics from the 10251 kubelet
I0523 22:15:41.505932 518 server.go:959] GET /metrics: (20.978456ms) 200 [[Go-http-client/1.1] 127.0.0.1:48245]
May 23 22:15:41.536: INFO:
Latency metrics for node localhost:10251
May 23 22:15:41.536: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:6.165881s}
May 23 22:15:41.536: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:5.193218s}
May 23 22:15:41.536: INFO: {Operation: Method:pod_worker_start_latency_microseconds Quantile:0.9 Latency:5.163868s}
May 23 22:15:41.536: INFO: {Operation: Method:pod_worker_start_latency_microseconds Quantile:0.99 Latency:5.163868s}
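Records like `{Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:6.165881s}` are the harness's dump of kubelet latency summary quantiles scraped from `/metrics`. A small sketch for pulling the fields back out of such a line (the format is assumed from the log above, not from any documented schema):

```python
import re

# Matches the harness's one-line latency records, e.g.
# {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:6.165881s}
PATTERN = re.compile(
    r"Method:(?P<method>\S+)\s+Quantile:(?P<q>[\d.]+)\s+Latency:(?P<lat>[\d.]+)s"
)

def parse_latency(line):
    """Return (metric name, quantile, latency in seconds), or None."""
    m = PATTERN.search(line)
    if not m:
        return None
    return m.group("method"), float(m.group("q")), float(m.group("lat"))

line = "{Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:6.165881s}"
result = parse_latency(line)
print(result)
```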
+++ [0523 22:15:41] Integration test cleanup complete
etcd -data-dir /tmp.k8s/tmp.s3EPWp6iyO --bind-addr 127.0.0.1:4001 >/dev/null 2>/dev/null
Waiting for etcd to come up.
+++ [0523 22:15:41] On try 2, etcd: :
{"action":"set","node":{"key":"/_test","value":"","modifiedIndex":3,"createdIndex":3}}
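"Waiting for etcd to come up" / "On try 2" comes from the hack scripts' readiness poll: they repeatedly attempt a test write (setting `/_test` via the v2 keys API) until one succeeds. A sketch of that poll loop, with a simulated write function standing in for the real etcd call:

```python
import time

def wait_for_ready(try_write, max_tries=10, delay=0.0):
    # Poll until a test write succeeds; return the attempt that worked.
    for attempt in range(1, max_tries + 1):
        if try_write():
            return attempt            # e.g. "On try 2"
        time.sleep(delay)
    raise RuntimeError("etcd did not come up in %d tries" % max_tries)

# Simulated server that only accepts writes from the second attempt on.
state = {"calls": 0}
def fake_write():
    state["calls"] += 1
    return state["calls"] >= 2

tries = wait_for_ready(fake_write)
print(tries)  # → 2
```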
+++ [0523 22:15:42] Running integration test cases
Running tests for APIVersion: v1,autoscaling/v1,batch/v1,apps/v1alpha1,policy/v1alpha1,extensions/v1beta1 with etcdPrefix: registry
+++ [0523 22:15:42] Running tests without code coverage
ok k8s.io/kubernetes/test/integration 305.610s
Running tests for APIVersion: v1,autoscaling/v1,batch/v1,apps/v1alpha1,policy/v1alpha1,extensions/v1beta1 with etcdPrefix: kubernetes.io/registry
+++ [0523 22:21:25] Running tests without code coverage
ok k8s.io/kubernetes/test/integration 305.235s
+++ [0523 22:27:08] Running integration test scenario with watch cache on
I0523 22:27:08.291929 668 integration.go:766] Running tests for APIVersion:
I0523 22:27:08.292641 668 integration.go:113] Creating etcd client pointing to [http://127.0.0.1:4001]
W0523 22:27:08.295882 668 genericapiserver.go:257] Network range for service cluster IPs is unspecified. Defaulting to 10.0.0.0/24.
I0523 22:27:08.296103 668 genericapiserver.go:286] Node port range unspecified. Defaulting to 30000-32767.
W0523 22:27:08.597216 668 controller.go:277] Resetting endpoints for master service "kubernetes" to kind:"" apiVersion:""
I0523 22:27:08.621230 668 factory.go:256] Creating scheduler from algorithm provider 'DefaultProvider'
I0523 22:27:08.624727 668 factory.go:302] creating scheduler with fit predicates 'map[MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} GeneralPredicates:{} PodToleratesNodeTaints:{} CheckNodeMemoryPressure:{} NoDiskConflict:{} NoVolumeZoneConflict:{}]' and priority functions 'map[TaintTolerationPriority:{} LeastRequestedPriority:{} BalancedResourceAllocation:{} SelectorSpreadPriority:{} NodeAffinityPriority:{}]
I0523 22:27:08.625793 668 nodecontroller.go:157] Sending events to api server.
I0523 22:27:08.626517 668 integration.go:216] Using /tmp/kubelet_integ_1.524475105 as root dir for kubelet #1
W0523 22:27:08.626882 668 server.go:645] No api server defined - no events will be sent to API server.
I0523 22:27:08.627148 668 server.go:707] Adding manifest file: /tmp/kubelet_integ_1.524475105/config590502092
I0523 22:27:08.627413 668 file.go:47] Watching path "/tmp/kubelet_integ_1.524475105/config590502092"
I0523 22:27:08.627681 668 server.go:713] Adding manifest url "http://127.0.0.1:55041/manifest" with HTTP header map[]
I0523 22:27:08.627921 668 http.go:55] Watching URL http://127.0.0.1:55041/manifest
I0523 22:27:08.628227 668 server.go:717] Watching apiserver
W0523 22:27:08.628871 668 iptables.go:144] Error checking iptables version, assuming version at least 1.4.11: executable file not found in $PATH
I0523 22:27:08.629267 668 iptables.go:177] Could not connect to D-Bus system bus: dial unix /var/run/dbus/system_bus_socket: connect: no such file or directory
I0523 22:27:08.629589 668 kubelet.go:378] Hairpin mode set to "hairpin-veth"
W0523 22:27:08.629943 668 plugins.go:170] can't set sysctl net/bridge/bridge-nf-call-iptables: open /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory
I0523 22:27:08.633438 668 manager.go:230] Setting dockerRoot to
I0523 22:27:08.635731 668 replication_controller.go:240] Starting RC Manager
I0523 22:27:08.676964 668 nodecontroller.go:636] NodeController is entering network segmentation mode.
I0523 22:27:08.755143 668 endpoints_controller.go:322] Waiting for pods controller to sync, requeuing rc default/kubernetes
I0523 22:27:08.933668 668 plugins.go:294] Loaded volume plugin "kubernetes.io/empty-dir"
I0523 22:27:08.933806 668 server.go:679] Started kubelet v1.3.0-alpha.4.392+8f104a7b0f2298
I0523 22:27:08.934046 668 integration.go:249] Using /tmp/kubelet_integ_2.156468155 as root dir for kubelet #2
W0523 22:27:08.934113 668 server.go:645] No api server defined - no events will be sent to API server.
I0523 22:27:08.934138 668 server.go:713] Adding manifest url "http://127.0.0.1:60386/manifest" with HTTP header map[]
I0523 22:27:08.934195 668 http.go:55] Watching URL http://127.0.0.1:60386/manifest
I0523 22:27:08.934212 668 server.go:717] Watching apiserver
W0523 22:27:08.934364 668 iptables.go:144] Error checking iptables version, assuming version at least 1.4.11: executable file not found in $PATH
I0523 22:27:08.934486 668 iptables.go:177] Could not connect to D-Bus system bus: dial unix /var/run/dbus/system_bus_socket: connect: no such file or directory
I0523 22:27:08.934519 668 kubelet.go:378] Hairpin mode set to "hairpin-veth"
W0523 22:27:08.934644 668 plugins.go:170] can't set sysctl net/bridge/bridge-nf-call-iptables: open /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory
I0523 22:27:08.934703 668 manager.go:230] Setting dockerRoot to
E0523 22:27:08.935367 668 kubelet.go:905] Image garbage collection failed: invalid capacity 0 on device "" at mount point ""
I0523 22:27:08.935778 668 container_manager_stub.go:29] Starting stub container manager
I0523 22:27:08.940313 668 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I0523 22:27:08.940605 668 manager.go:123] Starting to sync pod status with apiserver
I0523 22:27:08.940887 668 kubelet.go:2492] Starting kubelet main sync loop.
I0523 22:27:08.941156 668 kubelet.go:2501] skipping pod synchronization - [network state unknown container runtime is down]
I0523 22:27:08.936525 668 server.go:117] Starting to listen on 127.0.0.1:10250
I0523 22:27:09.048579 668 kubelet.go:2904] Recording NodeHasSufficientDisk event message for node localhost
I0523 22:27:09.049262 668 kubelet.go:2904] Recording NodeHasSufficientMemory event message for node localhost
I0523 22:27:09.049871 668 kubelet.go:1099] Attempting to register node localhost
I0523 22:27:09.057140 668 kubelet.go:1130] Successfully registered node localhost
I0523 22:27:09.068268 668 plugins.go:294] Loaded volume plugin "kubernetes.io/empty-dir"
I0523 22:27:09.068479 668 server.go:679] Started kubelet v1.3.0-alpha.4.392+8f104a7b0f2298
I0523 22:27:09.068519 668 integration.go:773] API Server started on http://127.0.0.1:35757
I0523 22:27:09.068571 668 server.go:117] Starting to listen on 127.0.0.1:10251
E0523 22:27:09.096195 668 kubelet.go:905] Image garbage collection failed: invalid capacity 0 on device "" at mount point ""
I0523 22:27:09.096936 668 container_manager_stub.go:29] Starting stub container manager
I0523 22:27:09.097152 668 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I0523 22:27:09.097380 668 manager.go:123] Starting to sync pod status with apiserver
I0523 22:27:09.098453 668 kubelet.go:2492] Starting kubelet main sync loop.
I0523 22:27:09.098651 668 kubelet.go:2501] skipping pod synchronization - [network state unknown container runtime is down]
I0523 22:27:09.201707 668 kubelet.go:2904] Recording NodeHasSufficientDisk event message for node 127.0.0.1
I0523 22:27:09.202219 668 kubelet.go:2904] Recording NodeHasSufficientMemory event message for node 127.0.0.1
I0523 22:27:09.202491 668 kubelet.go:1099] Attempting to register node 127.0.0.1
I0523 22:27:09.501028 668 kubelet.go:1130] Successfully registered node 127.0.0.1
I0523 22:27:13.680689 668 nodecontroller.go:525] NodeController observed a new Node: api.Node{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"127.0.0.1", GenerateName:"", Namespace:"", SelfLink:"/api/v1/nodes/127.0.0.1", UID:"633f66da-2107-11e6-b148-0242ac110002", ResourceVersion:"3201", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:63599619429, nsec:0, loc:(*time.Location)(0x557e540)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"beta.kubernetes.io/arch":"amd64", "beta.kubernetes.io/os":"linux", "kubernetes.io/hostname":"127.0.0.1"}, Annotations:map[string]string(nil), OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil)}, Spec:api.NodeSpec{PodCIDR:"", ExternalID:"127.0.0.1", ProviderID:"", Unschedulable:false}, Status:api.NodeStatus{Capacity:api.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:1, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x31}, Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:4026531840, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x33, 0x38, 0x34, 0x30, 0x4d, 0x69}, Format:"BinarySI"}, "pods":resource.Quantity{i:resource.int64Amount{value:40, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x34, 0x30}, Format:"DecimalSI"}, "alpha.kubernetes.io/nvidia-gpu":resource.Quantity{i:resource.int64Amount{value:0, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x30}, Format:"DecimalSI"}}, Allocatable:api.ResourceList{"alpha.kubernetes.io/nvidia-gpu":resource.Quantity{i:resource.int64Amount{value:0, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x30}, Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:1, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x31}, Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:4026531840, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x33, 0x38, 0x34, 0x30, 0x4d, 0x69}, Format:"BinarySI"}, "pods":resource.Quantity{i:resource.int64Amount{value:40, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x34, 0x30}, Format:"DecimalSI"}}, Phase:"", Conditions:[]api.NodeCondition{api.NodeCondition{Type:"OutOfDisk", Status:"False", LastHeartbeatTime:unversioned.Time{Time:time.Time{sec:63599619432, nsec:0, loc:(*time.Location)(0x557e540)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63599619429, nsec:0, loc:(*time.Location)(0x557e540)}}, Reason:"KubeletHasSufficientDisk", Message:"kubelet has sufficient disk space available"}, api.NodeCondition{Type:"MemoryPressure", Status:"False", LastHeartbeatTime:unversioned.Time{Time:time.Time{sec:63599619432, nsec:0, loc:(*time.Location)(0x557e540)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63599619429, nsec:0, loc:(*time.Location)(0x557e540)}}, Reason:"KubeletHasSufficientMemory", Message:"kubelet has sufficient memory available"}, api.NodeCondition{Type:"Ready", Status:"True", LastHeartbeatTime:unversioned.Time{Time:time.Time{sec:63599619432, nsec:0, loc:(*time.Location)(0x557e540)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63599619429, nsec:0, loc:(*time.Location)(0x557e540)}}, Reason:"KubeletReady", Message:"kubelet is posting ready status"}}, Addresses:[]api.NodeAddress{api.NodeAddress{Type:"LegacyHostIP", Address:"127.0.0.1"}, api.NodeAddress{Type:"InternalIP", Address:"127.0.0.1"}}, DaemonEndpoints:api.NodeDaemonEndpoints{KubeletEndpoint:api.DaemonEndpoint{Port:10251}}, NodeInfo:api.NodeSystemInfo{MachineID:"", SystemUUID:"", BootID:"", KernelVersion:"", OSImage:"", ContainerRuntimeVersion:"docker://1.8.1", KubeletVersion:"v1.3.0-alpha.4.392+8f104a7b0f2298", KubeProxyVersion:"v1.3.0-alpha.4.392+8f104a7b0f2298", OperatingSystem:"linux", Architecture:"amd64"}, Images:[]api.ContainerImage(nil)}}
I0523 22:27:13.681713 668 nodecontroller.go:691] Recording Registered Node 127.0.0.1 in NodeController event message for node 127.0.0.1
I0523 22:27:13.681904 668 nodecontroller.go:525] NodeController observed a new Node: api.Node{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"localhost", GenerateName:"", Namespace:"", SelfLink:"/api/v1/nodes/localhost", UID:"62fb8d87-2107-11e6-b148-0242ac110002", ResourceVersion:"3200", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:63599619429, nsec:0, loc:(*time.Location)(0x557e540)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"beta.kubernetes.io/arch":"amd64", "beta.kubernetes.io/os":"linux", "kubernetes.io/hostname":"localhost"}, Annotations:map[string]string(nil), OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil)}, Spec:api.NodeSpec{PodCIDR:"", ExternalID:"localhost", ProviderID:"", Unschedulable:false}, Status:api.NodeStatus{Capacity:api.ResourceList{"alpha.kubernetes.io/nvidia-gpu":resource.Quantity{i:resource.int64Amount{value:0, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x30}, Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:1, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x31}, Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:4026531840, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x33, 0x38, 0x34, 0x30, 0x4d, 0x69}, Format:"BinarySI"}, "pods":resource.Quantity{i:resource.int64Amount{value:40, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x34, 0x30}, Format:"DecimalSI"}}, Allocatable:api.ResourceList{"alpha.kubernetes.io/nvidia-gpu":resource.Quantity{i:resource.int64Amount{value:0, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x30}, Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:1, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x31}, Format:"DecimalSI"}, 
"memory":resource.Quantity{i:resource.int64Amount{value:4026531840, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x33, 0x38, 0x34, 0x30, 0x4d, 0x69}, Format:"BinarySI"}, "pods":resource.Quantity{i:resource.int64Amount{value:40, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:[]uint8{0x34, 0x30}, Format:"DecimalSI"}}, Phase:"", Conditions:[]api.NodeCondition{api.NodeCondition{Type:"OutOfDisk", Status:"False", LastHeartbeatTime:unversioned.Time{Time:time.Time{sec:63599619432, nsec:0, loc:(*time.Location)(0x557e540)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63599619429, nsec:0, loc:(*time.Location)(0x557e540)}}, Reason:"KubeletHasSufficientDisk", Message:"kubelet has sufficient disk space available"}, api.NodeCondition{Type:"MemoryPressure", Status:"False", LastHeartbeatTime:unversioned.Time{Time:time.Time{sec:63599619432, nsec:0, loc:(*time.Location)(0x557e540)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63599619429, nsec:0, loc:(*time.Location)(0x557e540)}}, Reason:"KubeletHasSufficientMemory", Message:"kubelet has sufficient memory available"}, api.NodeCondition{Type:"Ready", Status:"True", LastHeartbeatTime:unversioned.Time{Time:time.Time{sec:63599619432, nsec:0, loc:(*time.Location)(0x557e540)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63599619429, nsec:0, loc:(*time.Location)(0x557e540)}}, Reason:"KubeletReady", Message:"kubelet is posting ready status"}}, Addresses:[]api.NodeAddress{api.NodeAddress{Type:"LegacyHostIP", Address:"172.17.0.2"}, api.NodeAddress{Type:"InternalIP", Address:"172.17.0.2"}}, DaemonEndpoints:api.NodeDaemonEndpoints{KubeletEndpoint:api.DaemonEndpoint{Port:10250}}, NodeInfo:api.NodeSystemInfo{MachineID:"", SystemUUID:"", BootID:"", KernelVersion:"", OSImage:"", ContainerRuntimeVersion:"docker://1.8.1", KubeletVersion:"v1.3.0-alpha.4.392+8f104a7b0f2298", KubeProxyVersion:"v1.3.0-alpha.4.392+8f104a7b0f2298", OperatingSystem:"linux", Architecture:"amd64"}, 
Images:[]api.ContainerImage(nil)}}
I0523 22:27:13.684835 668 nodecontroller.go:691] Recording Registered Node localhost in NodeController event message for node localhost
W0523 22:27:13.685393 668 nodecontroller.go:758] Missing timestamp for Node 127.0.0.1. Assuming now as a timestamp.
W0523 22:27:13.685620 668 nodecontroller.go:758] Missing timestamp for Node localhost. Assuming now as a timestamp.
I0523 22:27:13.685886 668 nodecontroller.go:641] NodeController exited network segmentation mode.
I0523 22:27:13.683022 668 event.go:216] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"127.0.0.1", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node 127.0.0.1 event: Registered Node 127.0.0.1 in NodeController
I0523 22:27:13.686380 668 event.go:216] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node localhost event: Registered Node localhost in NodeController
I0523 22:27:13.941732 668 kubelet.go:2558] SyncLoop (ADD, "file"): ""
I0523 22:27:13.942188 668 kubelet.go:2558] SyncLoop (ADD, "api"): ""
I0523 22:27:13.942485 668 kubelet.go:2558] SyncLoop (ADD, "http"): "container-vm-guestbook-pod-spec-localhost_default(f3d5be8fac656a02fc266a2ed8e419c1)"
I0523 22:27:13.954416 668 manager.go:1706] Need to restart pod infra container for "container-vm-guestbook-pod-spec-localhost_default(f3d5be8fac656a02fc266a2ed8e419c1)" because it is not found
I0523 22:27:13.955783 668 manager.go:1455] Generating ref for container POD: &api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"container-vm-guestbook-pod-spec-localhost", UID:"f3d5be8fac656a02fc266a2ed8e419c1", APIVersion:"v1", ResourceVersion:"", FieldPath:"implicitly required container POD"}
E0523 22:27:13.956849 668 manager.go:1584] ResolvConfPath is empty.
I0523 22:27:13.957417 668 hairpin.go:61] Unable to find pair interface, setting up all interfaces: exec: "ethtool": executable file not found in $PATH
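The `hairpin.go:61` message above (it recurs for every pod in this log) means the kubelet shells out to `ethtool` when setting up hairpin mode and cannot find it on `$PATH`, so it falls back to configuring all interfaces. A minimal sketch of reproducing that lookup on the build host; the `apt-get` hint is an assumption based on the Ubuntu virtual machine in this transcript:

```shell
# Reproduce the kubelet's PATH lookup for ethtool; if it is missing,
# installing it silences the hairpin-mode warning.
if command -v ethtool >/dev/null 2>&1; then
  echo "ethtool found"
else
  echo "ethtool missing: install it (e.g. apt-get install ethtool) to silence the hairpin warning"
fi
```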
I0523 22:27:13.957942 668 manager.go:1455] Generating ref for container redis: &api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"container-vm-guestbook-pod-spec-localhost", UID:"f3d5be8fac656a02fc266a2ed8e419c1", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{redis}"}
I0523 22:27:13.959164 668 manager.go:1455] Generating ref for container guestbook: &api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"container-vm-guestbook-pod-spec-localhost", UID:"f3d5be8fac656a02fc266a2ed8e419c1", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{guestbook}"}
I0523 22:27:13.984413 668 kubelet.go:2558] SyncLoop (ADD, "api"): "container-vm-guestbook-pod-spec-localhost_default(65e641ab-2107-11e6-b148-0242ac110002)"
I0523 22:27:14.099059 668 kubelet.go:2558] SyncLoop (ADD, "api"): ""
I0523 22:27:14.099578 668 kubelet.go:2558] SyncLoop (ADD, "http"): "container-vm-guestbook-pod-spec-127.0.0.1_default(f3d5be8fac656a02fc266a2ed8e419c1)"
I0523 22:27:14.107381 668 manager.go:1706] Need to restart pod infra container for "container-vm-guestbook-pod-spec-127.0.0.1_default(f3d5be8fac656a02fc266a2ed8e419c1)" because it is not found
I0523 22:27:14.108465 668 manager.go:1455] Generating ref for container POD: &api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"container-vm-guestbook-pod-spec-127.0.0.1", UID:"f3d5be8fac656a02fc266a2ed8e419c1", APIVersion:"v1", ResourceVersion:"", FieldPath:"implicitly required container POD"}
E0523 22:27:14.109340 668 manager.go:1584] ResolvConfPath is empty.
I0523 22:27:14.111561 668 hairpin.go:61] Unable to find pair interface, setting up all interfaces: exec: "ethtool": executable file not found in $PATH
I0523 22:27:14.111865 668 manager.go:1455] Generating ref for container redis: &api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"container-vm-guestbook-pod-spec-127.0.0.1", UID:"f3d5be8fac656a02fc266a2ed8e419c1", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{redis}"}
I0523 22:27:14.110451 668 kubelet.go:2558] SyncLoop (ADD, "api"): "container-vm-guestbook-pod-spec-127.0.0.1_default(65fe0d3e-2107-11e6-b148-0242ac110002)"
I0523 22:27:14.123271 668 manager.go:1455] Generating ref for container guestbook: &api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"container-vm-guestbook-pod-spec-127.0.0.1", UID:"f3d5be8fac656a02fc266a2ed8e419c1", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{guestbook}"}
I0523 22:27:14.943812 668 kubelet.go:2585] SyncLoop (PLEG): "container-vm-guestbook-pod-spec-localhost_default(f3d5be8fac656a02fc266a2ed8e419c1)", event: &pleg.PodLifecycleEvent{ID:"f3d5be8fac656a02fc266a2ed8e419c1", Type:"ContainerStarted", Data:"/k8s_guestbook.1c010e8a_container-vm-guestbook-pod-spec-localhost_default_f3d5be8fac656a02fc266a2ed8e419c1_bde9d03d"}
I0523 22:27:14.943916 668 kubelet.go:2585] SyncLoop (PLEG): "container-vm-guestbook-pod-spec-localhost_default(f3d5be8fac656a02fc266a2ed8e419c1)", event: &pleg.PodLifecycleEvent{ID:"f3d5be8fac656a02fc266a2ed8e419c1", Type:"ContainerStarted", Data:"/k8s_redis.dbcc0798_container-vm-guestbook-pod-spec-localhost_default_f3d5be8fac656a02fc266a2ed8e419c1_eecf0606"}
I0523 22:27:14.944004 668 kubelet.go:2585] SyncLoop (PLEG): "container-vm-guestbook-pod-spec-localhost_default(f3d5be8fac656a02fc266a2ed8e419c1)", event: &pleg.PodLifecycleEvent{ID:"f3d5be8fac656a02fc266a2ed8e419c1", Type:"ContainerStarted", Data:"/k8s_POD.1f83049e_container-vm-guestbook-pod-spec-localhost_default_f3d5be8fac656a02fc266a2ed8e419c1_4fbfef56"}
I0523 22:27:15.102796 668 kubelet.go:2585] SyncLoop (PLEG): "container-vm-guestbook-pod-spec-127.0.0.1_default(f3d5be8fac656a02fc266a2ed8e419c1)", event: &pleg.PodLifecycleEvent{ID:"f3d5be8fac656a02fc266a2ed8e419c1", Type:"ContainerStarted", Data:"/k8s_guestbook.1c010e8a_container-vm-guestbook-pod-spec-127.0.0.1_default_f3d5be8fac656a02fc266a2ed8e419c1_6da8cf49"}
I0523 22:27:15.106724 668 kubelet.go:2585] SyncLoop (PLEG): "container-vm-guestbook-pod-spec-127.0.0.1_default(f3d5be8fac656a02fc266a2ed8e419c1)", event: &pleg.PodLifecycleEvent{ID:"f3d5be8fac656a02fc266a2ed8e419c1", Type:"ContainerStarted", Data:"/k8s_redis.dbcc0798_container-vm-guestbook-pod-spec-127.0.0.1_default_f3d5be8fac656a02fc266a2ed8e419c1_2ed42ad5"}
I0523 22:27:15.107009 668 kubelet.go:2585] SyncLoop (PLEG): "container-vm-guestbook-pod-spec-127.0.0.1_default(f3d5be8fac656a02fc266a2ed8e419c1)", event: &pleg.PodLifecycleEvent{ID:"f3d5be8fac656a02fc266a2ed8e419c1", Type:"ContainerStarted", Data:"/k8s_POD.1f83049e_container-vm-guestbook-pod-spec-127.0.0.1_default_f3d5be8fac656a02fc266a2ed8e419c1_baf07339"}
I0523 22:27:19.069023 668 integration.go:810] Running 5 tests in parallel.
I0523 22:27:19.077233 668 integration.go:396] Version test passed
I0523 22:27:19.121618 668 integration.go:493] Created atomicService
I0523 22:27:19.122004 668 integration.go:506] Starting to update (e, v)
I0523 22:27:19.122473 668 integration.go:506] Starting to update (foo, bar)
I0523 22:27:19.122980 668 integration.go:506] Starting to update (a, z)
I0523 22:27:19.123478 668 integration.go:506] Starting to update (b, y)
I0523 22:27:19.124016 668 integration.go:506] Starting to update (c, x)
I0523 22:27:19.124736 668 integration.go:506] Starting to update (d, w)
I0523 22:27:19.141411 668 integration.go:517] Posting update (d, w)
I0523 22:27:19.142997 668 integration.go:517] Posting update (e, v)
I0523 22:27:19.143986 668 integration.go:517] Posting update (b, y)
I0523 22:27:19.153386 668 integration.go:517] Posting update (a, z)
I0523 22:27:19.155135 668 integration.go:517] Posting update (foo, bar)
I0523 22:27:19.165156 668 integration.go:517] Posting update (c, x)
I0523 22:27:19.278023 668 integration.go:530] Done update (d, w)
I0523 22:27:19.299765 668 integration.go:521] Conflict: (b, y)
I0523 22:27:19.300023 668 integration.go:506] Starting to update (b, y)
I0523 22:27:19.300477 668 integration.go:521] Conflict: (foo, bar)
I0523 22:27:19.300773 668 integration.go:506] Starting to update (foo, bar)
I0523 22:27:19.301464 668 integration.go:521] Conflict: (a, z)
I0523 22:27:19.301739 668 integration.go:506] Starting to update (a, z)
I0523 22:27:19.302427 668 integration.go:521] Conflict: (e, v)
I0523 22:27:19.302668 668 integration.go:506] Starting to update (e, v)
I0523 22:27:19.303321 668 integration.go:521] Conflict: (c, x)
I0523 22:27:19.303596 668 integration.go:506] Starting to update (c, x)
I0523 22:27:19.313690 668 integration.go:517] Posting update (c, x)
I0523 22:27:19.314558 668 integration.go:517] Posting update (foo, bar)
I0523 22:27:19.315326 668 integration.go:517] Posting update (b, y)
I0523 22:27:19.318001 668 integration.go:517] Posting update (a, z)
I0523 22:27:19.318799 668 integration.go:517] Posting update (e, v)
I0523 22:27:19.319910 668 integration.go:460] Self link test passed in namespace default
I0523 22:27:19.334966 668 integration.go:530] Done update (foo, bar)
I0523 22:27:19.349271 668 integration.go:530] Done update (b, y)
I0523 22:27:19.352522 668 integration.go:521] Conflict: (a, z)
I0523 22:27:19.355327 668 integration.go:506] Starting to update (a, z)
I0523 22:27:19.353401 668 integration.go:521] Conflict: (c, x)
I0523 22:27:19.355540 668 integration.go:506] Starting to update (c, x)
I0523 22:27:19.353471 668 integration.go:521] Conflict: (e, v)
I0523 22:27:19.356190 668 integration.go:506] Starting to update (e, v)
I0523 22:27:19.360993 668 integration.go:517] Posting update (e, v)
I0523 22:27:19.361833 668 integration.go:517] Posting update (c, x)
I0523 22:27:19.362683 668 integration.go:517] Posting update (a, z)
I0523 22:27:19.378677 668 integration.go:460] Self link test passed in namespace other
I0523 22:27:19.397220 668 integration.go:530] Done update (a, z)
I0523 22:27:19.397625 668 integration.go:521] Conflict: (e, v)
I0523 22:27:19.397868 668 integration.go:506] Starting to update (e, v)
I0523 22:27:19.399777 668 integration.go:521] Conflict: (c, x)
I0523 22:27:19.399900 668 integration.go:506] Starting to update (c, x)
I0523 22:27:19.403930 668 integration.go:517] Posting update (e, v)
I0523 22:27:19.405852 668 integration.go:517] Posting update (c, x)
I0523 22:27:19.421839 668 integration.go:530] Done update (e, v)
I0523 22:27:19.422891 668 integration.go:521] Conflict: (c, x)
I0523 22:27:19.426162 668 integration.go:506] Starting to update (c, x)
I0523 22:27:19.429123 668 integration.go:517] Posting update (c, x)
I0523 22:27:19.469602 668 integration.go:530] Done update (c, x)
I0523 22:27:19.477203 668 integration.go:542] Atomic PUTs work.
I0523 22:27:19.820830 668 integration.go:650] PATCHs work.
I0523 22:27:31.074560 668 integration.go:679] Master service test passed.
I0523 22:27:31.075008 668 integration.go:851] OK - found created containers: []string{"/k8s_POD.1f83049e_container-vm-guestbook-pod-spec-127.0.0.1_default_f3d5be8fac656a02fc266a2ed8e419c1_", "/k8s_POD.1f83049e_container-vm-guestbook-pod-spec-localhost_default_f3d5be8fac656a02fc266a2ed8e419c1_", "/k8s_guestbook.1c010e8a_container-vm-guestbook-pod-spec-127.0.0.1_default_f3d5be8fac656a02fc266a2ed8e419c1_", "/k8s_guestbook.1c010e8a_container-vm-guestbook-pod-spec-localhost_default_f3d5be8fac656a02fc266a2ed8e419c1_", "/k8s_redis.dbcc0798_container-vm-guestbook-pod-spec-127.0.0.1_default_f3d5be8fac656a02fc266a2ed8e419c1_", "/k8s_redis.dbcc0798_container-vm-guestbook-pod-spec-localhost_default_f3d5be8fac656a02fc266a2ed8e419c1_"}
I0523 22:27:31.103229 668 event.go:216] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"phantom.foo", UID:"701c2730-2107-11e6-b148-0242ac110002", APIVersion:"v1", ResourceVersion:"3249", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned phantom.foo to localhost
I0523 22:27:31.107874 668 kubelet.go:2558] SyncLoop (ADD, "api"): "phantom.foo_default(701c2730-2107-11e6-b148-0242ac110002)"
I0523 22:27:31.108875 668 manager.go:1706] Need to restart pod infra container for "phantom.foo_default(701c2730-2107-11e6-b148-0242ac110002)" because it is not found
I0523 22:27:31.109755 668 manager.go:1455] Generating ref for container POD: &api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"phantom.foo", UID:"701c2730-2107-11e6-b148-0242ac110002", APIVersion:"v1", ResourceVersion:"3250", FieldPath:"implicitly required container POD"}
E0523 22:27:31.110675 668 manager.go:1584] ResolvConfPath is empty.
I0523 22:27:31.111101 668 hairpin.go:61] Unable to find pair interface, setting up all interfaces: exec: "ethtool": executable file not found in $PATH
I0523 22:27:31.111648 668 manager.go:1455] Generating ref for container c1: &api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"phantom.foo", UID:"701c2730-2107-11e6-b148-0242ac110002", APIVersion:"v1", ResourceVersion:"3250", FieldPath:"spec.containers{c1}"}
I0523 22:27:31.955159 668 kubelet.go:2585] SyncLoop (PLEG): "phantom.foo_default(701c2730-2107-11e6-b148-0242ac110002)", event: &pleg.PodLifecycleEvent{ID:"701c2730-2107-11e6-b148-0242ac110002", Type:"ContainerStarted", Data:"/k8s_c1.4efa0b7e_phantom.foo_default_701c2730-2107-11e6-b148-0242ac110002_8d407968"}
I0523 22:27:31.955278 668 kubelet.go:2585] SyncLoop (PLEG): "phantom.foo_default(701c2730-2107-11e6-b148-0242ac110002)", event: &pleg.PodLifecycleEvent{ID:"701c2730-2107-11e6-b148-0242ac110002", Type:"ContainerStarted", Data:"/k8s_POD.5d120417_phantom.foo_default_701c2730-2107-11e6-b148-0242ac110002_a644f0f8"}
I0523 22:27:32.103955 668 event.go:216] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"phantom.bar", UID:"70b592bb-2107-11e6-b148-0242ac110002", APIVersion:"v1", ResourceVersion:"3254", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned phantom.bar to 127.0.0.1
I0523 22:27:32.107168 668 kubelet.go:2558] SyncLoop (ADD, "api"): "phantom.bar_default(70b592bb-2107-11e6-b148-0242ac110002)"
I0523 22:27:32.110552 668 manager.go:1706] Need to restart pod infra container for "phantom.bar_default(70b592bb-2107-11e6-b148-0242ac110002)" because it is not found
I0523 22:27:32.111468 668 manager.go:1455] Generating ref for container POD: &api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"phantom.bar", UID:"70b592bb-2107-11e6-b148-0242ac110002", APIVersion:"v1", ResourceVersion:"3255", FieldPath:"implicitly required container POD"}
E0523 22:27:32.115758 668 manager.go:1584] ResolvConfPath is empty.
I0523 22:27:32.116128 668 hairpin.go:61] Unable to find pair interface, setting up all interfaces: exec: "ethtool": executable file not found in $PATH
I0523 22:27:32.116649 668 manager.go:1455] Generating ref for container c1: &api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"phantom.bar", UID:"70b592bb-2107-11e6-b148-0242ac110002", APIVersion:"v1", ResourceVersion:"3255", FieldPath:"spec.containers{c1}"}
I0523 22:27:33.087376 668 integration.go:378] Pod default/phantom.bar is not running. In phase "Pending"
I0523 22:27:33.114639 668 kubelet.go:2585] SyncLoop (PLEG): "phantom.bar_default(70b592bb-2107-11e6-b148-0242ac110002)", event: &pleg.PodLifecycleEvent{ID:"70b592bb-2107-11e6-b148-0242ac110002", Type:"ContainerStarted", Data:"/k8s_c1.4efa0b7e_phantom.bar_default_70b592bb-2107-11e6-b148-0242ac110002_fc682f3b"}
I0523 22:27:33.115106 668 kubelet.go:2585] SyncLoop (PLEG): "phantom.bar_default(70b592bb-2107-11e6-b148-0242ac110002)", event: &pleg.PodLifecycleEvent{ID:"70b592bb-2107-11e6-b148-0242ac110002", Type:"ContainerStarted", Data:"/k8s_POD.5d120417_phantom.bar_default_70b592bb-2107-11e6-b148-0242ac110002_21725a02"}
I0523 22:27:34.092800 668 integration.go:720] Deleting pod phantom.bar
I0523 22:27:34.102236 668 kubelet.go:2565] SyncLoop (UPDATE, "api"): "phantom.bar_default(70b592bb-2107-11e6-b148-0242ac110002)"
I0523 22:27:34.146555 668 kubelet.go:2568] SyncLoop (REMOVE, "api"): "phantom.bar_default(70b592bb-2107-11e6-b148-0242ac110002)"
I0523 22:27:34.147178 668 kubelet.go:2338] Killing unwanted pod "phantom.bar"
I0523 22:27:34.148637 668 manager.go:1309] Killing container "/k8s_c1.4efa0b7e_phantom.bar_default_70b592bb-2107-11e6-b148-0242ac110002_fc682f3b c1 default/phantom.bar" with 0 second grace period
I0523 22:27:34.148940 668 manager.go:1347] Container "/k8s_c1.4efa0b7e_phantom.bar_default_70b592bb-2107-11e6-b148-0242ac110002_fc682f3b c1 default/phantom.bar" exited after 59.589µs
I0523 22:27:34.149326 668 manager.go:1309] Killing container "/k8s_POD.5d120417_phantom.bar_default_70b592bb-2107-11e6-b148-0242ac110002_21725a02 default/phantom.bar" with 0 second grace period
I0523 22:27:34.149595 668 manager.go:1347] Container "/k8s_POD.5d120417_phantom.bar_default_70b592bb-2107-11e6-b148-0242ac110002_21725a02 default/phantom.bar" exited after 20.777µs
I0523 22:27:34.196707 668 event.go:216] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"phantom.baz", UID:"71f0ea00-2107-11e6-b148-0242ac110002", APIVersion:"v1", ResourceVersion:"3263", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned phantom.baz to 127.0.0.1
I0523 22:27:34.199885 668 kubelet.go:2558] SyncLoop (ADD, "api"): "phantom.baz_default(71f0ea00-2107-11e6-b148-0242ac110002)"
I0523 22:27:34.202179 668 manager.go:1706] Need to restart pod infra container for "phantom.baz_default(71f0ea00-2107-11e6-b148-0242ac110002)" because it is not found
I0523 22:27:34.203159 668 manager.go:1455] Generating ref for container POD: &api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"phantom.baz", UID:"71f0ea00-2107-11e6-b148-0242ac110002", APIVersion:"v1", ResourceVersion:"3264", FieldPath:"implicitly required container POD"}
E0523 22:27:34.204059 668 manager.go:1584] ResolvConfPath is empty.
I0523 22:27:34.204659 668 hairpin.go:61] Unable to find pair interface, setting up all interfaces: exec: "ethtool": executable file not found in $PATH
I0523 22:27:34.205135 668 manager.go:1455] Generating ref for container c1: &api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"phantom.baz", UID:"71f0ea00-2107-11e6-b148-0242ac110002", APIVersion:"v1", ResourceVersion:"3264", FieldPath:"spec.containers{c1}"}
I0523 22:27:35.100480 668 kubelet.go:2338] Killing unwanted pod "phantom.bar"
I0523 22:27:35.100935 668 manager.go:1309] Killing container "/k8s_c1.4efa0b7e_phantom.bar_default_70b592bb-2107-11e6-b148-0242ac110002_fc682f3b /" with 30 second grace period
I0523 22:27:35.101433 668 manager.go:1347] Container "/k8s_c1.4efa0b7e_phantom.bar_default_70b592bb-2107-11e6-b148-0242ac110002_fc682f3b /" exited after 31.718µs
W0523 22:27:35.101594 668 manager.go:1353] No ref for pod '"/k8s_c1.4efa0b7e_phantom.bar_default_70b592bb-2107-11e6-b148-0242ac110002_fc682f3b /"'
I0523 22:27:35.101741 668 manager.go:1309] Killing container "/k8s_POD.5d120417_phantom.bar_default_70b592bb-2107-11e6-b148-0242ac110002_21725a02 /" with 30 second grace period
I0523 22:27:35.101932 668 manager.go:1347] Container "/k8s_POD.5d120417_phantom.bar_default_70b592bb-2107-11e6-b148-0242ac110002_21725a02 /" exited after 21.879µs
W0523 22:27:35.101959 668 manager.go:1353] No ref for pod '"/k8s_POD.5d120417_phantom.bar_default_70b592bb-2107-11e6-b148-0242ac110002_21725a02 /"'
I0523 22:27:35.115298 668 kubelet.go:2585] SyncLoop (PLEG): "phantom.baz_default(71f0ea00-2107-11e6-b148-0242ac110002)", event: &pleg.PodLifecycleEvent{ID:"71f0ea00-2107-11e6-b148-0242ac110002", Type:"ContainerStarted", Data:"/k8s_c1.4efa0b7e_phantom.baz_default_71f0ea00-2107-11e6-b148-0242ac110002_ab694965"}
I0523 22:27:35.115590 668 kubelet.go:2585] SyncLoop (PLEG): "phantom.baz_default(71f0ea00-2107-11e6-b148-0242ac110002)", event: &pleg.PodLifecycleEvent{ID:"71f0ea00-2107-11e6-b148-0242ac110002", Type:"ContainerStarted", Data:"/k8s_POD.5d120417_phantom.baz_default_71f0ea00-2107-11e6-b148-0242ac110002_15e4ac07"}
I0523 22:27:35.126173 668 manager.go:399] Status for pod "70b592bb-2107-11e6-b148-0242ac110002" is up-to-date; skipping
I0523 22:27:35.173746 668 integration.go:739] Scheduler doesn't make phantom pods: test passed.
I0523 22:27:35.173813 668 integration.go:858]
Logging high latency metrics from the 10250 kubelet
I0523 22:27:35.210151 668 server.go:959] GET /metrics: (35.570507ms) 200 [[Go-http-client/1.1] 127.0.0.1:33297]
May 23 22:27:35.226: INFO:
Latency metrics for node localhost:10250
May 23 22:27:35.231: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:6.156166s}
May 23 22:27:35.231: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:5.272117s}
May 23 22:27:35.231: INFO: {Operation: Method:pod_worker_start_latency_microseconds Quantile:0.9 Latency:5.153004s}
May 23 22:27:35.231: INFO: {Operation: Method:pod_worker_start_latency_microseconds Quantile:0.99 Latency:5.153004s}
I0523 22:27:35.231620 668 integration.go:860]
Logging high latency metrics from the 10251 kubelet
I0523 22:27:35.261784 668 server.go:959] GET /metrics: (29.40841ms) 200 [[Go-http-client/1.1] 127.0.0.1:51486]
May 23 22:27:35.281: INFO:
Latency metrics for node localhost:10251
May 23 22:27:35.281: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:6.156166s}
May 23 22:27:35.281: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:5.272117s}
May 23 22:27:35.281: INFO: {Operation: Method:pod_worker_start_latency_microseconds Quantile:0.99 Latency:5.153004s}
May 23 22:27:35.281: INFO: {Operation: Method:pod_worker_start_latency_microseconds Quantile:0.9 Latency:5.153004s}
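The latency summaries above follow a fixed `{Operation: Method:… Quantile:… Latency:…}` shape. A small sketch of pulling them into structured values for comparison across runs; the regex and helper name are my own, not part of the test harness:

```python
import re

# Hypothetical helper: extract (method, quantile, latency) from one
# "Latency metrics" line of the integration-test output.
LINE_RE = re.compile(
    r"Method:(?P<method>\S+)\s+Quantile:(?P<quantile>[\d.]+)\s+Latency:(?P<latency>\S+)}"
)


def parse_latency(line):
    m = LINE_RE.search(line)
    if not m:
        return None
    return m.group("method"), float(m.group("quantile")), m.group("latency")


sample = (
    "May 23 22:27:35.231: INFO: {Operation: "
    "Method:pod_start_latency_microseconds Quantile:0.99 Latency:6.156166s}"
)
print(parse_latency(sample))
# → ('pod_start_latency_microseconds', 0.99, '6.156166s')
```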
+++ [0523 22:27:35] Integration test cleanup complete
+++ [0523 22:27:35] Integration test cleanup complete
ERRO[24199] Handler for POST /v1.23/containers/kube-build-573d825ce9/kill returned error: Cannot kill container kube-build-573d825ce9: Container a2cf172a9b8e9d60533f1204c2cec44c646d4bc7f8139c257fd81875f3c1ee5a is not running
+++ [0523 22:27:35] Running build command....
ERRO[24199] Handler for POST /v1.23/containers/kube-build-573d825ce9/kill returned error: Cannot kill container kube-build-573d825ce9: No such container: kube-build-573d825ce9
ERRO[24199] Handler for POST /v1.23/containers/kube-build-573d825ce9/wait returned error: No such container: kube-build-573d825ce9
ERRO[24199] Handler for DELETE /v1.23/containers/kube-build-573d825ce9 returned error: No such container: kube-build-573d825ce9
INFO[24199] No non-localhost DNS nameservers are left in resolv.conf. Using default external servers : [nameserver 8.8.8.8 nameserver 8.8.4.4]
INFO[24199] IPv6 enabled; Adding default IPv6 external servers : [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]
ERRO[24200] Handler for POST /v1.23/containers/kube-build-573d825ce9/kill returned error: Cannot kill container kube-build-573d825ce9: Container 3eb565ce9f6a612b4edb9cae9f336736945c4cacf0b54e670f3f122073fd7f72 is not running
+++ [0523 22:27:36] Output directory is local. No need to copy results out.
+++ [0523 22:27:36] Building tarball: manifests
+++ [0523 22:27:36] Building tarball: salt
+++ [0523 22:27:36] Building tarball: src
+++ [0523 22:27:36] Starting tarball: client darwin-386
+++ [0523 22:27:36] Starting tarball: client darwin-amd64
+++ [0523 22:27:36] Starting tarball: client linux-386
+++ [0523 22:27:36] Starting tarball: client linux-amd64
+++ [0523 22:27:36] Starting tarball: client linux-arm
+++ [0523 22:27:36] Starting tarball: client linux-arm64
+++ [0523 22:27:36] Starting tarball: client windows-386
+++ [0523 22:27:36] Starting tarball: client windows-amd64
+++ [0523 22:27:36] Waiting on tarballs
+++ [0523 22:27:36] Building tarball: server linux-amd64
+++ [0523 22:29:01] Starting Docker build for image: kube-apiserver
+++ [0523 22:29:01] Starting Docker build for image: kube-controller-manager
+++ [0523 22:29:01] Starting Docker build for image: kube-scheduler
+++ [0523 22:29:01] Starting Docker build for image: kube-proxy
+++ [0523 22:30:11] Deleting docker image gcr.io/google_containers/kube-scheduler:4d7adce042af64c205c4415a40d92fd0
Untagged: gcr.io/google_containers/kube-scheduler:4d7adce042af64c205c4415a40d92fd0
Deleted: sha256:3d60a617420e6f02caa8d53c0fe73705cf0097fe2101823e40b55d2cae659dba
Deleted: sha256:1522225de35720a630dff9885d395dba5ce86e8992484bcdb673a7b6fffdf311
+++ [0523 22:30:20] Deleting docker image gcr.io/google_containers/kube-controller-manager:6e4aadd8b09dcca1f6de4ddeb4b979cc
Untagged: gcr.io/google_containers/kube-controller-manager:6e4aadd8b09dcca1f6de4ddeb4b979cc
Deleted: sha256:bd153a5261432d5f05ca9208cd864ae11dde5677867f3f85a14074553d9ef235
Deleted: sha256:afb82b8d6a6f23b2693f0885efdcf301331867327a81fc331e04176b637fb0a1
+++ [0523 22:30:21] Deleting docker image gcr.io/google_containers/kube-apiserver:bc49bb83976b5c5b9aa961ff54c60190
Untagged: gcr.io/google_containers/kube-apiserver:bc49bb83976b5c5b9aa961ff54c60190
Deleted: sha256:0d885df27cfc0ed61e4d1492c3d059c1a1254e99a26a1ea4bcf10d89a8e48355
Deleted: sha256:8d7e85771d9e1fd002c525ab212ce5f5321f1ffb88dce9eba307149454ec1966
+++ [0523 22:30:35] Deleting docker image gcr.io/google_containers/kube-proxy:63abacf7a3925b8ba77830ea4e4bd775
Untagged: gcr.io/google_containers/kube-proxy:63abacf7a3925b8ba77830ea4e4bd775
Deleted: sha256:c6cc3755abdbf44792d76a63cecfab0841c355e26ddeba48e3a887f510c21794
Deleted: sha256:5065d82959532d0d2003200cc75b4ceedfc4fe1ba1f988799a5623f97ea0b21c
+++ [0523 22:30:35] Docker builds done
+++ [0523 22:32:13] Building tarball: server linux-arm
+++ [0523 22:32:48] Starting Docker build for image: kube-apiserver
+++ [0523 22:32:48] Starting Docker build for image: kube-controller-manager
+++ [0523 22:32:48] Starting Docker build for image: kube-scheduler
+++ [0523 22:32:48] Starting Docker build for image: kube-proxy
+++ [0523 22:33:48] Deleting docker image gcr.io/google_containers/kube-scheduler-arm:364bfec65b13933710f23ff73fa8ca2b
Untagged: gcr.io/google_containers/kube-scheduler-arm:364bfec65b13933710f23ff73fa8ca2b
Deleted: sha256:c73aa14054f77a07a3c6f13ced499a160dbeb16956b1bdad881d824f0331b8cf
Deleted: sha256:4c27da6a0a5678cc01c4db59f21f36984bd594a687eea39146d1aadf1ca512e6
+++ [0523 22:33:55] Deleting docker image gcr.io/google_containers/kube-controller-manager-arm:e0178f41759c8b177aecea386887a2c0
Untagged: gcr.io/google_containers/kube-controller-manager-arm:e0178f41759c8b177aecea386887a2c0
Deleted: sha256:b5cf26782b8cdf22e5baedaa55544a80893c0787f357525ee3f081d8c04a1a3f
Deleted: sha256:6252de5d7d8e622bcd4f5015d85c7106bfd944aa4cfa11883f1ee39f39e4bd5c
+++ [0523 22:33:55] Deleting docker image gcr.io/google_containers/kube-apiserver-arm:9bbfc9fb93ec4a8fe4126f36b0ca8d24
Untagged: gcr.io/google_containers/kube-apiserver-arm:9bbfc9fb93ec4a8fe4126f36b0ca8d24
Deleted: sha256:5d22eea326e126cb771bb5af000c602f70ed1fcfbc32716fc4c32e22b316eb9e
Deleted: sha256:eb02d06a7bdd440c47b41aa877fb6d6213a7d6cf43b71bbec04de098ab05db1f
+++ [0523 22:34:11] Deleting docker image gcr.io/google_containers/kube-proxy-arm:42c7e1781b4eda91c2b8eae495f94f1e
Untagged: gcr.io/google_containers/kube-proxy-arm:42c7e1781b4eda91c2b8eae495f94f1e
Deleted: sha256:003596a792f5813f4bf7b68996e963bda23ea6abaa7d3d0aa2564976ad07f13b
Deleted: sha256:5ad38216082c46d8e8678b5c855d50881d18fa3786ff7768b8a68b235184fdf0
+++ [0523 22:34:11] Docker builds done
+++ [0523 22:35:36] Building tarball: server linux-arm64
+++ [0523 22:36:09] Starting Docker build for image: kube-apiserver
+++ [0523 22:36:09] Starting Docker build for image: kube-controller-manager
+++ [0523 22:36:09] Starting Docker build for image: kube-scheduler
+++ [0523 22:36:09] Starting Docker build for image: kube-proxy
+++ [0523 22:37:19] Deleting docker image gcr.io/google_containers/kube-scheduler-arm64:4f01b0d532cd6acef8e2abaaf4a78f71
Untagged: gcr.io/google_containers/kube-scheduler-arm64:4f01b0d532cd6acef8e2abaaf4a78f71
Deleted: sha256:9ba862baf11eb73fb39569971be1fac6dfe557761e77373f72d450695c480cc5
Deleted: sha256:009ef5a347469d978c874c55c1d2768dc60757c6f2959f6ad823bda0f9728b93
+++ [0523 22:37:31] Deleting docker image gcr.io/google_containers/kube-apiserver-arm64:f71c83e2c59329c88de21e494ca82a52
+++ [0523 22:37:32] Deleting docker image gcr.io/google_containers/kube-controller-manager-arm64:8d2706e247b7980f7ea215d3ce032861
Untagged: gcr.io/google_containers/kube-apiserver-arm64:f71c83e2c59329c88de21e494ca82a52
Deleted: sha256:2753d75c17773799197f54a8e76df7b0cfa626e9687804cee19bb8f1b9498841
Deleted: sha256:867f79f236a8cd04341bc9a2ee9c3fd46582fa82fb297457ce14bcfd6cfd4d11
Untagged: gcr.io/google_containers/kube-controller-manager-arm64:8d2706e247b7980f7ea215d3ce032861
Deleted: sha256:0a002cc106e87ba98cf2e8b768982d2d8ddb2d8488469d607fe17179037ef3df
Deleted: sha256:68a3b1f37f276339a315eb5cec8e0cf65e92354658b45768c44da639a814089b
+++ [0523 22:37:38] Deleting docker image gcr.io/google_containers/kube-proxy-arm64:4c57f12147715a45b04fbbc508aae447
Untagged: gcr.io/google_containers/kube-proxy-arm64:4c57f12147715a45b04fbbc508aae447
Deleted: sha256:c34eef113a4f3f91259164108ff247825260c277b4e45fc0008eedaa6c824743
Deleted: sha256:9720f6db360d822896141e8e815ffa25ace2d199de84b7637b825e785beb11aa
+++ [0523 22:37:38] Docker builds done
+++ [0523 22:39:16] Building tarball: test
+++ [0523 22:39:16] Building tarball: full
root@janonymous-virtual-machine:~/etcd-v2.3.5-linux-amd64/kubernetes# ./hack/local-up-cluster.sh