@enisoc
Created December 2, 2016 20:31
E1202 19:49:05.967029 6 leaderelection.go:228] error retrieving resource lock kube-system/kube-controller-manager: Get http://127.0.0.1:8080/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E1202 19:49:09.420800 6 leaderelection.go:228] error retrieving resource lock kube-system/kube-controller-manager: Get http://127.0.0.1:8080/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 127.0.0.1:8080: getsockopt: connection refused
I1202 19:49:13.690215 6 leaderelection.go:188] sucessfully acquired lease kube-system/kube-controller-manager
I1202 19:49:13.695214 6 event.go:217] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"3190e9ed-b8c5-11e6-bee7-42010a800002", APIVersion:"v1", ResourceVersion:"3877", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-master became leader
I1202 19:49:13.764769 6 replication_controller.go:218] Starting RC Manager
I1202 19:49:13.764837 6 gc_controller.go:117] PodGCController is waiting for informer sync...
I1202 19:49:13.790668 6 gce.go:291] Using GCE provider config {Global:{TokenURL: TokenBody: ProjectID: NetworkName: NodeTags:[kubernetes-minion] NodeInstancePrefix:kubernetes-minion Multizone:false}}
I1202 19:49:13.790715 6 gce.go:331] Using existing Token Source &oauth2.reuseTokenSource{new:google.computeSource{account:""}, mu:sync.Mutex{state:0, sema:0x0}, t:(*oauth2.Token)(nil)}
I1202 19:49:13.800852 6 nodecontroller.go:190] Sending events to api server.
I1202 19:49:13.879950 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:13.880010 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:13.880029 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:13.880042 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:13.982563 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:13.982649 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:13.982673 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:13.982687 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:14.085538 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:14.085600 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:14.085617 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:14.085631 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:14.187831 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:14.187884 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:14.187903 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:14.187917 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:14.288741 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:14.288806 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:14.288823 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:14.288836 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:14.391106 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:14.391169 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:14.391191 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:14.391206 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:14.494543 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:14.494604 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:14.494621 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:14.494635 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:14.594953 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:14.595008 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:14.595025 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:14.595039 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:14.697247 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:14.697315 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:14.697334 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:14.697348 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:14.798032 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:14.798095 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:14.798113 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:14.798127 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:14.900306 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:14.900375 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:14.900394 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:14.900409 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:15.002671 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:15.002746 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:15.002764 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:15.002778 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:15.104976 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:15.105037 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:15.105059 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:15.105074 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:15.207435 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:15.207518 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:15.207538 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:15.207552 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:15.310226 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:15.310289 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:15.310311 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:15.310325 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:15.412539 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:15.412611 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:15.412630 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:15.412643 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:15.514550 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:15.514621 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:15.514640 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:15.514654 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:15.616454 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:15.616526 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:15.616545 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:15.616559 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:15.719538 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:15.719612 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:15.719630 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:15.719644 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:15.819734 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:15.819885 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:15.819898 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:15.819905 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:15.924563 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:15.924622 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:15.924641 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:15.924656 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:16.024779 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:16.024884 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:16.024902 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:16.024916 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:16.125587 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:16.125671 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:16.125702 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:16.125744 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:16.225905 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:16.225983 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:16.226010 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:16.226031 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:16.326553 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:16.326615 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:16.326633 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:16.326649 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:16.427560 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:16.427628 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:16.427646 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:16.427660 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:16.528565 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:16.528629 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:16.528649 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:16.528664 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:16.628725 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:16.628790 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:16.628811 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:16.628827 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:16.729560 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:16.729628 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:16.729647 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:16.729662 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:16.830053 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:16.830306 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:16.830315 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:16.830322 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:17.072419 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:17.072486 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:17.072514 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:17.072530 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:17.172543 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:17.172611 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:17.172634 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:17.172650 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:17.272857 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:17.272917 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:17.272935 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:17.272949 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:17.375548 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:17.375613 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:17.375630 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:17.375651 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:17.475916 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:17.475978 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:17.476005 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:17.476022 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:17.576108 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:17.576273 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:17.576282 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:17.576289 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:17.867618 6 trace.go:61] Trace "syncReplicationController: kube-system/kubernetes-dashboard-v1.4.0" (started 2016-12-02 19:49:17.476034653 +0000 UTC):
[391.525322ms] [391.525322ms] END
I1202 19:49:17.867661 6 trace.go:61] Trace "syncReplicationController: kube-system/kube-dns-v20" (started 2016-12-02 19:49:17.4759652 +0000 UTC):
[391.685972ms] [391.685972ms] END
I1202 19:49:17.867688 6 trace.go:61] Trace "syncReplicationController: kube-system/monitoring-influxdb-grafana-v4" (started 2016-12-02 19:49:17.475989109 +0000 UTC):
[391.685565ms] [391.685565ms] END
I1202 19:49:17.867719 6 trace.go:61] Trace "syncReplicationController: kube-system/l7-default-backend-v1.0" (started 2016-12-02 19:49:17.476014027 +0000 UTC):
[391.693771ms] [391.693771ms] END
I1202 19:49:17.970546 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:17.970606 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:17.970624 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:17.970638 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:18.070741 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:18.070804 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:18.070822 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:18.070843 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:18.172964 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:18.173033 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:18.173051 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:18.173065 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:18.275465 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:18.275529 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:18.275547 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:18.275561 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:18.378540 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:18.378607 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:18.378626 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:18.378640 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:18.478735 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:18.478893 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:18.478903 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:18.478910 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:18.745628 6 trace.go:61] Trace "syncReplicationController: kube-system/monitoring-influxdb-grafana-v4" (started 2016-12-02 19:49:18.378647313 +0000 UTC):
[366.947191ms] [366.947191ms] END
I1202 19:49:18.745692 6 trace.go:61] Trace "syncReplicationController: kube-system/l7-default-backend-v1.0" (started 2016-12-02 19:49:18.378588937 +0000 UTC):
[367.090084ms] [367.090084ms] END
I1202 19:49:18.745714 6 trace.go:61] Trace "syncReplicationController: kube-system/kubernetes-dashboard-v1.4.0" (started 2016-12-02 19:49:18.378619436 +0000 UTC):
[367.086718ms] [367.086718ms] END
I1202 19:49:18.745748 6 trace.go:61] Trace "syncReplicationController: kube-system/kube-dns-v20" (started 2016-12-02 19:49:18.378633591 +0000 UTC):
[367.105828ms] [367.105828ms] END
I1202 19:49:18.846957 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:18.847026 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:18.847044 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:18.847058 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:18.949554 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:18.949621 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:18.949646 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:18.949661 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:19.052551 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:19.052630 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:19.052649 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:19.052663 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:19.154873 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:19.154936 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:19.154949 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:19.154957 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:19.257510 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:19.257564 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:19.257592 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:19.257609 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:19.360152 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:19.360218 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:19.360237 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:19.360251 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:19.461034 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:19.461101 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:19.461129 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:19.461144 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:19.561238 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:19.561394 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:19.561405 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:19.561411 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:19.913056 6 trace.go:61] Trace "syncReplicationController: kube-system/monitoring-influxdb-grafana-v4" (started 2016-12-02 19:49:19.461162326 +0000 UTC):
[451.860996ms] [451.860996ms] END
I1202 19:49:19.913104 6 trace.go:61] Trace "syncReplicationController: kube-system/l7-default-backend-v1.0" (started 2016-12-02 19:49:19.461088827 +0000 UTC):
[452.002268ms] [452.002268ms] END
I1202 19:49:19.913142 6 trace.go:61] Trace "syncReplicationController: kube-system/kubernetes-dashboard-v1.4.0" (started 2016-12-02 19:49:19.461121939 +0000 UTC):
[452.012467ms] [452.012467ms] END
I1202 19:49:19.913193 6 trace.go:61] Trace "syncReplicationController: kube-system/kube-dns-v20" (started 2016-12-02 19:49:19.46113748 +0000 UTC):
[452.031982ms] [452.031982ms] END
I1202 19:49:20.015549 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:20.015612 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:20.015629 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:20.015643 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:20.118540 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:20.118607 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:20.118624 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:20.118639 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:20.221007 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:20.221071 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:20.221087 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:20.221102 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:20.323274 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:20.323340 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:20.323358 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:20.323372 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:20.423638 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:20.423698 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:20.423716 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:20.423729 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:20.525985 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:20.526045 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:20.526062 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:20.526076 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:20.628359 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:20.628419 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:20.628436 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:20.628450 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:20.730741 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:20.730804 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:20.730822 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:20.730835 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:20.833615 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:20.833669 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:20.833686 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:20.833700 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:20.934724 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:20.934783 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:20.934801 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:20.934815 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:21.034921       6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:21.035071       6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:21.035080       6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:21.035087       6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:21.628137       6 trace.go:61] Trace "syncReplicationController: kube-system/kubernetes-dashboard-v1.4.0" (started 2016-12-02 19:49:20.934826946 +0000 UTC):
[693.266452ms] [693.266452ms] END
I1202 19:49:21.628183       6 trace.go:61] Trace "syncReplicationController: kube-system/kube-dns-v20" (started 2016-12-02 19:49:20.9347705 +0000 UTC):
[693.402945ms] [693.402945ms] END
I1202 19:49:21.628206       6 trace.go:61] Trace "syncReplicationController: kube-system/monitoring-influxdb-grafana-v4" (started 2016-12-02 19:49:20.934793837 +0000 UTC):
[693.403692ms] [693.403692ms] END
I1202 19:49:21.628225       6 trace.go:61] Trace "syncReplicationController: kube-system/l7-default-backend-v1.0" (started 2016-12-02 19:49:20.934808412 +0000 UTC):
[693.409525ms] [693.409525ms] END
I1202 19:49:21.728438 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:21.728507 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:21.728533 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:21.728549 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:21.829085 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:21.829146 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:21.829163 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:21.829177 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:21.929735 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:21.929819 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:21.929863 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:21.929887 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:22.030284 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:22.030345 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:22.030363 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:22.030378 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:22.132541 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:22.132607 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:22.132626 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:22.132640 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:22.235061 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:22.235128 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:22.235146 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:22.235160 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:22.337550 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:22.337617 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:22.337636 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:22.337650 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:22.440545 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:22.440610 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:22.440635 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:22.440650 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:22.542854 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:22.542909 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:22.542927 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:22.542945 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:22.645216 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:22.645284 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:22.645303 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:22.645316 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:22.746321 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:22.746387 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:22.746405 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:22.746419 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:22.848730 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:22.848791 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:22.848812 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:22.848826 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:22.950709 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:22.950776 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:22.950795 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:22.950809 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:23.053072 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:23.053153 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:23.053174 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:23.053188 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:23.154792 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:23.154868 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:23.154892 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:23.154906 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:23.259574 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:23.259631 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:23.259643 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:23.259652 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:23.361911 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:23.361978 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:23.361996 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:23.362010 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:23.464179 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:23.464250 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:23.464269 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:23.464283 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:23.566995 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:23.567057 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:23.567074 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:23.567088 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:23.669428 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:23.669508 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:23.669528 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:23.669542 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:23.771651 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:23.771711 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:23.771731 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:23.771746 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:23.807119 6 cidr_allocator.go:98] Node kubernetes-master has no CIDR, ignoring
I1202 19:49:23.807135 6 cidr_allocator.go:101] Node kubernetes-minion-group-2g4f has CIDR 10.244.3.0/24, occupying it in CIDR map
I1202 19:49:23.807148 6 cidr_allocator.go:101] Node kubernetes-minion-group-c4x6 has CIDR 10.244.4.0/24, occupying it in CIDR map
I1202 19:49:23.807165 6 cidr_allocator.go:101] Node kubernetes-minion-group-ofhq has CIDR 10.244.2.0/24, occupying it in CIDR map
I1202 19:49:23.807171 6 cidr_allocator.go:101] Node kubernetes-minion-group-rfzh has CIDR 10.244.1.0/24, occupying it in CIDR map
I1202 19:49:23.807176 6 cidr_allocator.go:101] Node kubernetes-minion-group-yl0d has CIDR 10.244.5.0/24, occupying it in CIDR map
E1202 19:49:23.815571 6 util.go:45] Metric for replenishment_controller already registered
E1202 19:49:23.815603 6 util.go:45] Metric for replenishment_controller already registered
E1202 19:49:23.815617 6 util.go:45] Metric for replenishment_controller already registered
E1202 19:49:23.815631 6 util.go:45] Metric for replenishment_controller already registered
E1202 19:49:23.815638 6 util.go:45] Metric for replenishment_controller already registered
I1202 19:49:23.885375 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:23.885576 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:23.885600 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:23.885614 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:23.997792 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:24.035329 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:24.035371 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:24.035387 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:24.112193 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
I1202 19:49:24.143284 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:24.148639 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:24.148667 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:24.150078 6 controllermanager.go:429] Starting job controller
I1202 19:49:24.150334 6 controllermanager.go:436] Starting deployment controller
I1202 19:49:24.150702 6 controllermanager.go:443] Starting ReplicaSet controller
I1202 19:49:24.150944 6 controllermanager.go:450] Starting horizontal pod autoscaler controller.
I1202 19:49:24.151117 6 controllermanager.go:466] Starting disruption controller
I1202 19:49:24.151405 6 controllermanager.go:472] Starting StatefulSet controller
I1202 19:49:24.151814 6 plugins.go:344] Loaded volume plugin "kubernetes.io/host-path"
I1202 19:49:24.151829 6 plugins.go:344] Loaded volume plugin "kubernetes.io/nfs"
I1202 19:49:24.151837 6 plugins.go:344] Loaded volume plugin "kubernetes.io/glusterfs"
I1202 19:49:24.151843 6 plugins.go:344] Loaded volume plugin "kubernetes.io/rbd"
I1202 19:49:24.151850 6 plugins.go:344] Loaded volume plugin "kubernetes.io/quobyte"
I1202 19:49:24.151857 6 plugins.go:344] Loaded volume plugin "kubernetes.io/flocker"
I1202 19:49:24.151864 6 plugins.go:344] Loaded volume plugin "kubernetes.io/gce-pd"
I1202 19:49:24.152019 6 daemoncontroller.go:192] Starting Daemon Sets controller manager
I1202 19:49:24.152092 6 deployment_controller.go:133] Starting deployment controller
I1202 19:49:24.152126 6 replica_set.go:163] Starting ReplicaSet controller
I1202 19:49:24.152137 6 horizontal.go:132] Starting HPA Controller
I1202 19:49:24.153041 6 disruption.go:275] Starting disruption controller
I1202 19:49:24.153052 6 disruption.go:277] Sending events to api server.
I1202 19:49:24.153111 6 pet_set.go:146] Starting statefulset controller
I1202 19:49:24.165075 6 plugins.go:344] Loaded volume plugin "kubernetes.io/aws-ebs"
I1202 19:49:24.165107 6 plugins.go:344] Loaded volume plugin "kubernetes.io/gce-pd"
I1202 19:49:24.165115 6 plugins.go:344] Loaded volume plugin "kubernetes.io/cinder"
I1202 19:49:24.165250 6 plugins.go:344] Loaded volume plugin "kubernetes.io/vsphere-volume"
I1202 19:49:24.165269 6 plugins.go:344] Loaded volume plugin "kubernetes.io/azure-disk"
I1202 19:49:24.165277 6 plugins.go:344] Loaded volume plugin "kubernetes.io/photon-pd"
I1202 19:49:24.165371 6 controllermanager.go:524] Starting certificate request controller
E1202 19:49:24.167582 6 controllermanager.go:534] Failed to start certificate controller: open /etc/kubernetes/ca/ca.pem: no such file or directory
E1202 19:49:24.168009 6 util.go:45] Metric for serviceaccount_controller already registered
I1202 19:49:24.168984 6 attach_detach_controller.go:209] Starting Attach Detach Controller
I1202 19:49:24.169030 6 serviceaccounts_controller.go:120] Starting ServiceAccount controller
I1202 19:49:24.191259 6 garbagecollector.go:759] Garbage Collector: Initializing
I1202 19:49:24.217491 6 servicecontroller.go:305] Not persisting unchanged LoadBalancerStatus to registry.
I1202 19:49:24.222201 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/l7-default-backend-v1.0
E1202 19:49:24.241909 6 actual_state_of_world.go:462] Failed to set statusUpdateNeeded to needed true because nodeName="kubernetes-minion-group-yl0d" does not exist
E1202 19:49:24.241936 6 actual_state_of_world.go:462] Failed to set statusUpdateNeeded to needed true because nodeName="kubernetes-master" does not exist
E1202 19:49:24.241946 6 actual_state_of_world.go:462] Failed to set statusUpdateNeeded to needed true because nodeName="kubernetes-minion-group-2g4f" does not exist
E1202 19:49:24.241952 6 actual_state_of_world.go:462] Failed to set statusUpdateNeeded to needed true because nodeName="kubernetes-minion-group-c4x6" does not exist
E1202 19:49:24.241959 6 actual_state_of_world.go:462] Failed to set statusUpdateNeeded to needed true because nodeName="kubernetes-minion-group-ofhq" does not exist
E1202 19:49:24.241965 6 actual_state_of_world.go:462] Failed to set statusUpdateNeeded to needed true because nodeName="kubernetes-minion-group-rfzh" does not exist
I1202 19:49:24.259900 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/monitoring-influxdb-grafana-v4
I1202 19:49:24.260156 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard-v1.4.0
I1202 19:49:24.260364 6 replication_controller.go:624] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1202 19:49:24.277837 6 servicecontroller.go:305] Not persisting unchanged LoadBalancerStatus to registry.
I1202 19:49:24.301495 6 routecontroller.go:183] Deleting route kubernetes-817ac98e-b8c4-11e6-bee7-42010a800002 10.244.0.0/24
I1202 19:49:24.322973 6 servicecontroller.go:305] Not persisting unchanged LoadBalancerStatus to registry.
I1202 19:49:24.335576 6 nodecontroller.go:420] NodeController observed a new Node: "kubernetes-minion-group-yl0d"
I1202 19:49:24.335592 6 controller_utils.go:275] Recording Registered Node kubernetes-minion-group-yl0d in NodeController event message for node kubernetes-minion-group-yl0d
I1202 19:49:24.335614 6 nodecontroller.go:430] Initializing eviction metric for zone: us-central1:\00:us-central1-b
I1202 19:49:24.335631 6 nodecontroller.go:420] NodeController observed a new Node: "kubernetes-master"
I1202 19:49:24.335636 6 controller_utils.go:275] Recording Registered Node kubernetes-master in NodeController event message for node kubernetes-master
I1202 19:49:24.335655 6 nodecontroller.go:420] NodeController observed a new Node: "kubernetes-minion-group-2g4f"
I1202 19:49:24.335661 6 controller_utils.go:275] Recording Registered Node kubernetes-minion-group-2g4f in NodeController event message for node kubernetes-minion-group-2g4f
I1202 19:49:24.335667 6 nodecontroller.go:420] NodeController observed a new Node: "kubernetes-minion-group-c4x6"
I1202 19:49:24.335671 6 controller_utils.go:275] Recording Registered Node kubernetes-minion-group-c4x6 in NodeController event message for node kubernetes-minion-group-c4x6
I1202 19:49:24.335679 6 nodecontroller.go:420] NodeController observed a new Node: "kubernetes-minion-group-ofhq"
I1202 19:49:24.335684 6 controller_utils.go:275] Recording Registered Node kubernetes-minion-group-ofhq in NodeController event message for node kubernetes-minion-group-ofhq
I1202 19:49:24.335689 6 nodecontroller.go:420] NodeController observed a new Node: "kubernetes-minion-group-rfzh"
I1202 19:49:24.335693 6 controller_utils.go:275] Recording Registered Node kubernetes-minion-group-rfzh in NodeController event message for node kubernetes-minion-group-rfzh
W1202 19:49:24.335707 6 nodecontroller.go:679] Missing timestamp for Node kubernetes-minion-group-yl0d. Assuming now as a timestamp.
W1202 19:49:24.335790 6 nodecontroller.go:679] Missing timestamp for Node kubernetes-master. Assuming now as a timestamp.
I1202 19:49:24.336112 6 event.go:217] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kubernetes-minion-group-yl0d", UID:"9f76479f-b8c4-11e6-bee7-42010a800002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node kubernetes-minion-group-yl0d event: Registered Node kubernetes-minion-group-yl0d in NodeController
I1202 19:49:24.336171 6 event.go:217] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kubernetes-master", UID:"6adb9ab4-b8c8-11e6-aa17-42010a800002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node kubernetes-master event: Registered Node kubernetes-master in NodeController
I1202 19:49:24.336179 6 event.go:217] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kubernetes-minion-group-2g4f", UID:"9ca9f4cb-b8c4-11e6-bee7-42010a800002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node kubernetes-minion-group-2g4f event: Registered Node kubernetes-minion-group-2g4f in NodeController
I1202 19:49:24.336187 6 event.go:217] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kubernetes-minion-group-c4x6", UID:"9dd4837f-b8c4-11e6-bee7-42010a800002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node kubernetes-minion-group-c4x6 event: Registered Node kubernetes-minion-group-c4x6 in NodeController
I1202 19:49:24.336196 6 event.go:217] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kubernetes-minion-group-ofhq", UID:"9ca528cf-b8c4-11e6-bee7-42010a800002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node kubernetes-minion-group-ofhq event: Registered Node kubernetes-minion-group-ofhq in NodeController
I1202 19:49:24.336204 6 event.go:217] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kubernetes-minion-group-rfzh", UID:"9c18b0b0-b8c4-11e6-bee7-42010a800002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node kubernetes-minion-group-rfzh event: Registered Node kubernetes-minion-group-rfzh in NodeController
W1202 19:49:24.337069 6 nodecontroller.go:679] Missing timestamp for Node kubernetes-minion-group-2g4f. Assuming now as a timestamp.
W1202 19:49:24.337116 6 nodecontroller.go:679] Missing timestamp for Node kubernetes-minion-group-c4x6. Assuming now as a timestamp.
W1202 19:49:24.337164 6 nodecontroller.go:679] Missing timestamp for Node kubernetes-minion-group-ofhq. Assuming now as a timestamp.
W1202 19:49:24.337185 6 nodecontroller.go:679] Missing timestamp for Node kubernetes-minion-group-rfzh. Assuming now as a timestamp.
I1202 19:49:24.337232 6 nodecontroller.go:609] NodeController detected that zone us-central1:\00:us-central1-b is now in state Normal.
I1202 19:49:24.371855 6 servicecontroller.go:305] Not persisting unchanged LoadBalancerStatus to registry.
I1202 19:49:24.413872 6 servicecontroller.go:305] Not persisting unchanged LoadBalancerStatus to registry.
I1202 19:49:24.460770 6 servicecontroller.go:305] Not persisting unchanged LoadBalancerStatus to registry.
I1202 19:49:24.509779 6 servicecontroller.go:305] Not persisting unchanged LoadBalancerStatus to registry.
I1202 19:49:24.511644 6 trace.go:61] Trace "syncReplicationController: kube-system/l7-default-backend-v1.0" (started 2016-12-02 19:49:24.222223731 +0000 UTC):
[5.905µs] [5.905µs] ReplicationController restored
[7.225µs] [1.32µs] Expectations restored
[45.144µs] [37.919µs] manageReplicas done
[289.372662ms] [289.327518ms] END
I1202 19:49:24.524689 6 replication_controller.go:321] Observed updated replication controller l7-default-backend-v1.0. Desired pod count change: 1->1
I1202 19:49:24.529755 6 trace.go:61] Trace "syncReplicationController: kube-system/monitoring-influxdb-grafana-v4" (started 2016-12-02 19:49:24.259927517 +0000 UTC):
[5.049µs] [5.049µs] ReplicationController restored
[6.038µs] [989ns] Expectations restored
[43.244µs] [37.206µs] manageReplicas done
[269.798664ms] [269.75542ms] END
I1202 19:49:24.531752 6 trace.go:61] Trace "syncReplicationController: kube-system/kube-dns-v20" (started 2016-12-02 19:49:24.260379184 +0000 UTC):
[3.023µs] [3.023µs] ReplicationController restored
[3.635µs] [612ns] Expectations restored
[37.429µs] [33.794µs] manageReplicas done
[271.355028ms] [271.317599ms] END
I1202 19:49:24.539203 6 replication_controller.go:321] Observed updated replication controller kubernetes-dashboard-v1.4.0. Desired pod count change: 1->1
I1202 19:49:24.562946 6 servicecontroller.go:305] Not persisting unchanged LoadBalancerStatus to registry.
I1202 19:49:24.572980 6 replication_controller.go:321] Observed updated replication controller kube-dns-v20. Desired pod count change: 1->1
I1202 19:49:24.586904 6 replication_controller.go:321] Observed updated replication controller monitoring-influxdb-grafana-v4. Desired pod count change: 1->1
I1202 19:49:30.273173 6 replication_controller.go:321] Observed updated replication controller kube-dns-v20. Desired pod count change: 1->1
I1202 19:49:30.283007 6 replication_controller.go:321] Observed updated replication controller kubernetes-dashboard-v1.4.0. Desired pod count change: 1->1
I1202 19:49:30.291100 6 replication_controller.go:321] Observed updated replication controller l7-default-backend-v1.0. Desired pod count change: 1->1
I1202 19:49:30.300777 6 replication_controller.go:321] Observed updated replication controller monitoring-influxdb-grafana-v4. Desired pod count change: 1->1
I1202 19:49:30.653033 6 event.go:217] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"l7-default-backend", UID:"70c77771-b8c8-11e6-aa17-42010a800002", APIVersion:"extensions", ResourceVersion:"3962", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set l7-default-backend-1869959889 to 1
I1202 19:49:30.656367 6 replica_set.go:452] Too few "kube-system"/"l7-default-backend-1869959889" replicas, need 1, creating 1
I1202 19:49:30.665542 6 event.go:217] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"l7-default-backend-1869959889", UID:"70c81593-b8c8-11e6-aa17-42010a800002", APIVersion:"extensions", ResourceVersion:"3963", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: l7-default-backend-1869959889-n53x9
I1202 19:49:30.674974 6 deployment_controller.go:299] Error syncing deployment kube-system/l7-default-backend: Operation cannot be fulfilled on deployments.extensions "l7-default-backend": the object has been modified; please apply your changes to the latest version and try again
I1202 19:49:30.979442 6 replication_controller.go:321] Observed updated replication controller monitoring-influxdb-grafana-v4. Desired pod count change: 1->1
I1202 19:49:31.073246 6 replica_set.go:452] Too few "kube-system"/"kubernetes-dashboard-1283361852" replicas, need 1, creating 1
I1202 19:49:31.074917 6 event.go:217] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"kubernetes-dashboard", UID:"7107677f-b8c8-11e6-aa17-42010a800002", APIVersion:"extensions", ResourceVersion:"3981", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-1283361852 to 1
I1202 19:49:31.080273 6 event.go:217] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"kubernetes-dashboard-1283361852", UID:"7108226e-b8c8-11e6-aa17-42010a800002", APIVersion:"extensions", ResourceVersion:"3982", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-1283361852-7t203
I1202 19:49:31.095545 6 deployment_controller.go:299] Error syncing deployment kube-system/kubernetes-dashboard: Operation cannot be fulfilled on deployments.extensions "kubernetes-dashboard": the object has been modified; please apply your changes to the latest version and try again
I1202 19:49:31.112825 6 deployment_controller.go:299] Error syncing deployment kube-system/kubernetes-dashboard: Operation cannot be fulfilled on deployments.extensions "kubernetes-dashboard": the object has been modified; please apply your changes to the latest version and try again
I1202 19:49:31.132571 6 deployment_controller.go:299] Error syncing deployment kube-system/kubernetes-dashboard: Operation cannot be fulfilled on deployments.extensions "kubernetes-dashboard": the object has been modified; please apply your changes to the latest version and try again
I1202 19:49:31.220650 6 replica_set.go:452] Too few "kube-system"/"kube-dns-4009328302" replicas, need 1, creating 1
I1202 19:49:31.223472 6 event.go:217] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"kube-dns", UID:"711d5998-b8c8-11e6-aa17-42010a800002", APIVersion:"extensions", ResourceVersion:"3992", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kube-dns-4009328302 to 1
I1202 19:49:31.230325 6 event.go:217] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"kube-dns-4009328302", UID:"711e8b13-b8c8-11e6-aa17-42010a800002", APIVersion:"extensions", ResourceVersion:"3993", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-dns-4009328302-4wvnw
I1202 19:49:31.257606 6 deployment_controller.go:299] Error syncing deployment kube-system/kube-dns: Operation cannot be fulfilled on deployments.extensions "kube-dns": the object has been modified; please apply your changes to the latest version and try again
I1202 19:49:31.295137 6 deployment_controller.go:299] Error syncing deployment kube-system/kube-dns: Operation cannot be fulfilled on deployments.extensions "kube-dns": the object has been modified; please apply your changes to the latest version and try again
I1202 19:49:31.362272 6 deployment_controller.go:299] Error syncing deployment kube-system/kube-dns: Operation cannot be fulfilled on deployments.extensions "kube-dns": the object has been modified; please apply your changes to the latest version and try again
I1202 19:49:31.460647 6 event.go:217] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"kube-dns-autoscaler", UID:"7142747c-b8c8-11e6-aa17-42010a800002", APIVersion:"extensions", ResourceVersion:"4003", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kube-dns-autoscaler-2895493484 to 1
I1202 19:49:31.460842 6 replica_set.go:452] Too few "kube-system"/"kube-dns-autoscaler-2895493484" replicas, need 1, creating 1
I1202 19:49:31.466928 6 event.go:217] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"kube-dns-autoscaler-2895493484", UID:"71433413-b8c8-11e6-aa17-42010a800002", APIVersion:"extensions", ResourceVersion:"4004", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-dns-autoscaler-2895493484-202hs
I1202 19:49:31.483083 6 deployment_controller.go:299] Error syncing deployment kube-system/kube-dns-autoscaler: Operation cannot be fulfilled on deployments.extensions "kube-dns-autoscaler": the object has been modified; please apply your changes to the latest version and try again
I1202 19:49:31.502006 6 deployment_controller.go:299] Error syncing deployment kube-system/kube-dns-autoscaler: Operation cannot be fulfilled on deployments.extensions "kube-dns-autoscaler": the object has been modified; please apply your changes to the latest version and try again
I1202 19:49:31.516677 6 deployment_controller.go:299] Error syncing deployment kube-system/kube-dns-autoscaler: Operation cannot be fulfilled on deployments.extensions "kube-dns-autoscaler": the object has been modified; please apply your changes to the latest version and try again
I1202 19:49:34.191474 6 garbagecollector.go:773] Garbage Collector: All monitored resources synced. Proceeding to collect garbage
I1202 19:49:48.702293 6 routecontroller.go:187] Deleted route kubernetes-817ac98e-b8c4-11e6-bee7-42010a800002 10.244.0.0/24 after 24.400809842s
I1202 19:49:48.839087 6 routecontroller.go:154] Creating route for node kubernetes-master 10.244.0.0/24 with hint 6adb9ab4-b8c8-11e6-aa17-42010a800002, throttled 495ns
I1202 19:50:10.475822 6 routecontroller.go:162] Created route for node kubernetes-master 10.244.0.0/24 with hint 6adb9ab4-b8c8-11e6-aa17-42010a800002 after 21.636748892s
I1202 19:50:12.787265 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:50:12.841873 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:50:24.155774 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:50:32.225229 6 replication_controller.go:321] Observed updated replication controller kube-dns-v20. Desired pod count change: 1->0
I1202 19:50:32.225385 6 replication_controller.go:556] Too many "kube-system"/"kube-dns-v20" replicas, need 0, deleting 1
I1202 19:50:32.225431 6 controller_utils.go:528] Controller kube-dns-v20 deleting pod kube-system/kube-dns-v20-xmq5v
I1202 19:50:32.238700 6 event.go:217] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"kube-system", Name:"kube-dns-v20", UID:"a21bf820-b8c4-11e6-bee7-42010a800002", APIVersion:"v1", ResourceVersion:"4146", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: kube-dns-v20-xmq5v
I1202 19:50:32.248541 6 replication_controller.go:321] Observed updated replication controller kube-dns-v20. Desired pod count change: 0->0
I1202 19:50:32.262977 6 replication_controller.go:321] Observed updated replication controller kube-dns-v20. Desired pod count change: 0->0
I1202 19:50:32.275933 6 replication_controller.go:631] Replication Controller has been deleted kube-system/kube-dns-v20
I1202 19:50:32.280214 6 garbagecollector.go:754] none of object [v1/Pod, namespace: kube-system, name: kube-dns-v20-xmq5v, uid: a21dceea-b8c4-11e6-bee7-42010a800002]'s owners exist any more, will garbage collect it
I1202 19:50:32.294697 6 replication_controller.go:321] Observed updated replication controller kubernetes-dashboard-v1.4.0. Desired pod count change: 1->0
I1202 19:50:32.294847 6 replication_controller.go:556] Too many "kube-system"/"kubernetes-dashboard-v1.4.0" replicas, need 0, deleting 1
I1202 19:50:32.294894 6 controller_utils.go:528] Controller kubernetes-dashboard-v1.4.0 deleting pod kube-system/kubernetes-dashboard-v1.4.0-xhth2
I1202 19:50:32.307604 6 event.go:217] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"kube-system", Name:"kubernetes-dashboard-v1.4.0", UID:"a18f6c4b-b8c4-11e6-bee7-42010a800002", APIVersion:"v1", ResourceVersion:"4152", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: kubernetes-dashboard-v1.4.0-xhth2
I1202 19:50:32.323686 6 replication_controller.go:321] Observed updated replication controller kubernetes-dashboard-v1.4.0. Desired pod count change: 0->0
I1202 19:50:32.354270 6 replication_controller.go:321] Observed updated replication controller kubernetes-dashboard-v1.4.0. Desired pod count change: 0->0
I1202 19:50:32.408217 6 replication_controller.go:631] Replication Controller has been deleted kube-system/kubernetes-dashboard-v1.4.0
I1202 19:50:32.412391 6 garbagecollector.go:754] none of object [v1/Pod, namespace: kube-system, name: kubernetes-dashboard-v1.4.0-xhth2, uid: a19097eb-b8c4-11e6-bee7-42010a800002]'s owners exist any more, will garbage collect it
I1202 19:50:32.425933 6 replication_controller.go:321] Observed updated replication controller l7-default-backend-v1.0. Desired pod count change: 1->0
I1202 19:50:32.426060 6 replication_controller.go:556] Too many "kube-system"/"l7-default-backend-v1.0" replicas, need 0, deleting 1
I1202 19:50:32.426096 6 controller_utils.go:528] Controller l7-default-backend-v1.0 deleting pod kube-system/l7-default-backend-v1.0-gs4ch
I1202 19:50:32.434160 6 event.go:217] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"kube-system", Name:"l7-default-backend-v1.0", UID:"a1bc3ed3-b8c4-11e6-bee7-42010a800002", APIVersion:"v1", ResourceVersion:"4158", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: l7-default-backend-v1.0-gs4ch
I1202 19:50:32.451832 6 replication_controller.go:321] Observed updated replication controller l7-default-backend-v1.0. Desired pod count change: 0->0
I1202 19:50:32.457685 6 replication_controller.go:321] Observed updated replication controller l7-default-backend-v1.0. Desired pod count change: 0->0
I1202 19:50:32.465816 6 replication_controller.go:631] Replication Controller has been deleted kube-system/l7-default-backend-v1.0
I1202 19:50:32.468603 6 garbagecollector.go:754] none of object [v1/Pod, namespace: kube-system, name: l7-default-backend-v1.0-gs4ch, uid: a1bd5f7c-b8c4-11e6-bee7-42010a800002]'s owners exist any more, will garbage collect it
I1202 19:50:54.155897 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:51:03.819717 6 servicecontroller.go:642] Detected change in list of current cluster nodes. New node set: [kubernetes-minion-group-2g4f kubernetes-minion-group-c4x6 kubernetes-minion-group-ofhq kubernetes-minion-group-rfzh kubernetes-minion-group-yl0d]
I1202 19:51:03.819784 6 servicecontroller.go:650] Successfully updated 8 out of 8 load balancers to direct traffic to the updated set of nodes
I1202 19:51:24.156106 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:51:39.553748 6 controller_utils.go:286] Recording status change NodeNotReady event message for node kubernetes-minion-group-2g4f
I1202 19:51:39.553791 6 controller_utils.go:204] Update ready status of pods on node [kubernetes-minion-group-2g4f]
I1202 19:51:39.554042 6 event.go:217] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kubernetes-minion-group-2g4f", UID:"9ca9f4cb-b8c4-11e6-bee7-42010a800002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node kubernetes-minion-group-2g4f status is now: NodeNotReady
I1202 19:51:39.562347 6 controller_utils.go:221] Updating ready status of pod web-1 to false
I1202 19:51:39.569543 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:51:39.569595 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-1
I1202 19:51:39.571932 6 controller_utils.go:221] Updating ready status of pod fluentd-cloud-logging-kubernetes-minion-group-2g4f to false
I1202 19:51:39.584220 6 controller_utils.go:221] Updating ready status of pod kube-dns-autoscaler-2895493484-202hs to false
I1202 19:51:39.597140 6 controller_utils.go:221] Updating ready status of pod kube-proxy-kubernetes-minion-group-2g4f to false
I1202 19:51:39.605463 6 reconciler.go:202] Started AttachVolume for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-6101dbf7-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-2g4f"
I1202 19:51:39.611612 6 controller_utils.go:221] Updating ready status of pod node-problem-detector-v0.1-6wtjw to false
I1202 19:51:39.703311 6 attacher.go:88] Attach operation is successful. PD "kubernetes-dynamic-pvc-6101dbf7-b8c7-11e6-bee7-42010a800002" is already attached to node "kubernetes-minion-group-2g4f".
I1202 19:51:39.703363 6 operation_executor.go:619] AttachVolume.Attach succeeded for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-6101dbf7-b8c7-11e6-bee7-42010a800002" (spec.Name: "pvc-6101dbf7-b8c7-11e6-bee7-42010a800002") from node "kubernetes-minion-group-2g4f".
I1202 19:51:39.709576 6 node_status_updater.go:135] Updating status for node "kubernetes-minion-group-2g4f" succeeded. patchBytes: "{}" VolumesAttached: [{kubernetes.io/gce-pd/kubernetes-dynamic-pvc-6101dbf7-b8c7-11e6-bee7-42010a800002 /dev/disk/by-id/google-kubernetes-dynamic-pvc-6101dbf7-b8c7-11e6-bee7-42010a800002}]
I1202 19:51:54.156239 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:51:54.158385 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-1
I1202 19:51:59.927249 6 attacher.go:127] VolumesAreAttached: check volume "kubernetes-dynamic-pvc-6101dbf7-b8c7-11e6-bee7-42010a800002" (specName: "pvc-6101dbf7-b8c7-11e6-bee7-42010a800002") is no longer attached
I1202 19:51:59.927294 6 operation_executor.go:564] VerifyVolumesAreAttached determined volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-6101dbf7-b8c7-11e6-bee7-42010a800002" (spec.Name: "pvc-6101dbf7-b8c7-11e6-bee7-42010a800002") is no longer attached to node "kubernetes-minion-group-2g4f", therefore it was marked as detached.
I1202 19:51:59.942578 6 reconciler.go:202] Started AttachVolume for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-6101dbf7-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-2g4f"
I1202 19:51:59.950527 6 node_status_updater.go:135] Updating status for node "kubernetes-minion-group-2g4f" succeeded. patchBytes: "{\"status\":{\"volumesAttached\":null}}" VolumesAttached: []
I1202 19:52:18.573957 6 operation_executor.go:619] AttachVolume.Attach succeeded for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-6101dbf7-b8c7-11e6-bee7-42010a800002" (spec.Name: "pvc-6101dbf7-b8c7-11e6-bee7-42010a800002") from node "kubernetes-minion-group-2g4f".
I1202 19:52:18.592688 6 node_status_updater.go:135] Updating status for node "kubernetes-minion-group-2g4f" succeeded. patchBytes: "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"/dev/disk/by-id/google-kubernetes-dynamic-pvc-6101dbf7-b8c7-11e6-bee7-42010a800002\",\"name\":\"kubernetes.io/gce-pd/kubernetes-dynamic-pvc-6101dbf7-b8c7-11e6-bee7-42010a800002\"}]}}" VolumesAttached: [{kubernetes.io/gce-pd/kubernetes-dynamic-pvc-6101dbf7-b8c7-11e6-bee7-42010a800002 /dev/disk/by-id/google-kubernetes-dynamic-pvc-6101dbf7-b8c7-11e6-bee7-42010a800002}]
I1202 19:52:24.156390 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:52:24.158732 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-1
I1202 19:52:43.819942 6 servicecontroller.go:642] Detected change in list of current cluster nodes. New node set: [kubernetes-minion-group-yl0d kubernetes-minion-group-c4x6 kubernetes-minion-group-ofhq kubernetes-minion-group-rfzh]
I1202 19:52:43.819976 6 servicecontroller.go:650] Successfully updated 8 out of 8 load balancers to direct traffic to the updated set of nodes
I1202 19:52:54.156546 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:52:54.158748 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-1
I1202 19:53:01.388408 6 controller_utils.go:286] Recording status change NodeNotReady event message for node kubernetes-minion-group-c4x6
I1202 19:53:01.388452 6 controller_utils.go:204] Update ready status of pods on node [kubernetes-minion-group-c4x6]
I1202 19:53:01.388597 6 event.go:217] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kubernetes-minion-group-c4x6", UID:"9dd4837f-b8c4-11e6-bee7-42010a800002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node kubernetes-minion-group-c4x6 status is now: NodeNotReady
I1202 19:53:01.397784 6 controller_utils.go:221] Updating ready status of pod web-2 to false
I1202 19:53:01.418568 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:53:01.419753 6 controller_utils.go:221] Updating ready status of pod fluentd-cloud-logging-kubernetes-minion-group-c4x6 to false
I1202 19:53:01.423221 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-1
I1202 19:53:01.443941 6 controller_utils.go:221] Updating ready status of pod heapster-v1.2.0-2691653277-gj4z7 to false
I1202 19:53:01.455201 6 controller_utils.go:221] Updating ready status of pod kube-dns-4009328302-4wvnw to false
I1202 19:53:01.471332 6 controller_utils.go:221] Updating ready status of pod kube-proxy-kubernetes-minion-group-c4x6 to false
I1202 19:53:01.482688 6 controller_utils.go:221] Updating ready status of pod node-problem-detector-v0.1-pybjh to false
I1202 19:53:01.510589 6 reconciler.go:202] Started AttachVolume for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-c4x6"
I1202 19:53:01.604522 6 attacher.go:88] Attach operation is successful. PD "kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" is already attached to node "kubernetes-minion-group-c4x6".
I1202 19:53:01.604575 6 operation_executor.go:619] AttachVolume.Attach succeeded for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" (spec.Name: "pvc-61045ddc-b8c7-11e6-bee7-42010a800002") from node "kubernetes-minion-group-c4x6".
I1202 19:53:01.614876 6 node_status_updater.go:135] Updating status for node "kubernetes-minion-group-c4x6" succeeded. patchBytes: "{}" VolumesAttached: [{kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002 /dev/disk/by-id/google-kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002}]
I1202 19:53:13.792030 6 cidr_allocator.go:172] Node kubernetes-minion-group-2g4f is already in a process of CIDR assignment.
E1202 19:53:13.795667 6 cidr_allocator.go:248] Failed while updating Node.Spec.PodCIDR (4 retries left): Operation cannot be fulfilled on nodes "kubernetes-minion-group-2g4f": the object has been modified; please apply your changes to the latest version and try again
I1202 19:53:13.846580 6 node_status_updater.go:135] Updating status for node "kubernetes-minion-group-2g4f" succeeded. patchBytes: "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"/dev/disk/by-id/google-kubernetes-dynamic-pvc-6101dbf7-b8c7-11e6-bee7-42010a800002\",\"name\":\"kubernetes.io/gce-pd/kubernetes-dynamic-pvc-6101dbf7-b8c7-11e6-bee7-42010a800002\"}]}}" VolumesAttached: [{kubernetes.io/gce-pd/kubernetes-dynamic-pvc-6101dbf7-b8c7-11e6-bee7-42010a800002 /dev/disk/by-id/google-kubernetes-dynamic-pvc-6101dbf7-b8c7-11e6-bee7-42010a800002}]
I1202 19:53:18.458895 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:53:18.463400 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-1
E1202 19:53:22.594849 6 gce.go:2913] getInstanceByName/single-zone: failed to get instance kubernetes-minion-group-c4x6; err: googleapi: Error 404: The resource 'projects/enisoc-kubernetes-dev/zones/us-central1-b/instances/kubernetes-minion-group-c4x6' was not found, notFound
I1202 19:53:22.594916 6 nodecontroller.go:509] Deleting node (no longer present in cloud provider): kubernetes-minion-group-c4x6
I1202 19:53:22.594932 6 controller_utils.go:275] Recording Deleting Node kubernetes-minion-group-c4x6 because it's not present according to cloud provider event message for node kubernetes-minion-group-c4x6
I1202 19:53:22.595352 6 event.go:217] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kubernetes-minion-group-c4x6", UID:"9dd4837f-b8c4-11e6-bee7-42010a800002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeletingNode' Node kubernetes-minion-group-c4x6 event: Deleting Node kubernetes-minion-group-c4x6 because it's not present according to cloud provider
I1202 19:53:24.156717 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:53:24.159021 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-1
I1202 19:53:25.165683 6 attacher.go:127] VolumesAreAttached: check volume "kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" (specName: "pvc-61045ddc-b8c7-11e6-bee7-42010a800002") is no longer attached
I1202 19:53:25.165731 6 operation_executor.go:564] VerifyVolumesAreAttached determined volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" (spec.Name: "pvc-61045ddc-b8c7-11e6-bee7-42010a800002") is no longer attached to node "kubernetes-minion-group-c4x6", therefore it was marked as detached.
I1202 19:53:25.166567 6 reconciler.go:202] Started AttachVolume for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-c4x6"
I1202 19:53:25.166642 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:25.266807 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:25.366992 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:25.467194 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:25.567944 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:25.668050 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:25.769226 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:25.869352 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:25.969614 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:26.069816 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:26.169990 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:26.270238 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:26.370429 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:26.470633 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:26.571649 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:26.671898 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:26.772191 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:26.872464 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:26.972742 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:27.073034 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:27.175622 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:27.275856 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:27.376609 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:27.476892 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:27.577074 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:27.597360 6 nodecontroller.go:437] NodeController observed a Node deletion: kubernetes-minion-group-c4x6
I1202 19:53:27.597379 6 controller_utils.go:275] Recording Removing Node kubernetes-minion-group-c4x6 from NodeController event message for node kubernetes-minion-group-c4x6
I1202 19:53:27.597771 6 event.go:217] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kubernetes-minion-group-c4x6", UID:"9dd4837f-b8c4-11e6-bee7-42010a800002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node kubernetes-minion-group-c4x6 event: Removing Node kubernetes-minion-group-c4x6 from NodeController
I1202 19:53:27.677293 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:27.778146 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:27.878436 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:27.978659 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:28.078896 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:28.179152 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:28.279429 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:28.379674 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:28.479955 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:28.580827 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:28.681024 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:28.781187 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:28.881349 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:28.981480 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:29.081701 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:29.181908 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:29.282105 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:29.382300 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:29.482485 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:29.582698 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:29.682916 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:29.783142 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:29.883343 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:29.983572 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:30.083779 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:30.184011 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:30.284222 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:30.384467 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:30.484645 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:30.584877 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:30.595942 6 routecontroller.go:183] Deleting route kubernetes-9dd4837f-b8c4-11e6-bee7-42010a800002 10.244.4.0/24
I1202 19:53:30.685068 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:30.785271 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:30.885476 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:30.985718 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:31.085955 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:31.186223 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:31.286391 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:31.386602 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:31.486816 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:31.587023 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:31.687245 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:31.787451 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:31.887997 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:31.988467 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:32.091978 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:32.192580 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:32.293580 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:32.394095 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:32.494289 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:32.594875 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:32.695744 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:32.795930 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:32.896118 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:32.996333 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:33.096561 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:33.196761 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:33.296994 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:33.397193 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:33.497424 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:33.597645 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:33.697854 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:33.773426 6 gc_controller.go:184] Found orphaned Pod web-2 assigned to the Node kubernetes-minion-group-c4x6. Deleting.
I1202 19:53:33.773467 6 gc_controller.go:71] PodGC is force deleting Pod: default:web-2
I1202 19:53:33.778603 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:53:33.783858 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-1
I1202 19:53:33.787099 6 gc_controller.go:188] Forced deletion of orphaned Pod web-2 succeeded
I1202 19:53:33.787125 6 gc_controller.go:184] Found orphaned Pod fluentd-cloud-logging-kubernetes-minion-group-c4x6 assigned to the Node kubernetes-minion-group-c4x6. Deleting.
I1202 19:53:33.787130 6 gc_controller.go:71] PodGC is force deleting Pod: kube-system:fluentd-cloud-logging-kubernetes-minion-group-c4x6
I1202 19:53:33.798523 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:33.804937 6 gc_controller.go:188] Forced deletion of orphaned Pod fluentd-cloud-logging-kubernetes-minion-group-c4x6 succeeded
I1202 19:53:33.804955 6 gc_controller.go:184] Found orphaned Pod kube-proxy-kubernetes-minion-group-c4x6 assigned to the Node kubernetes-minion-group-c4x6. Deleting.
I1202 19:53:33.804960 6 gc_controller.go:71] PodGC is force deleting Pod: kube-system:kube-proxy-kubernetes-minion-group-c4x6
I1202 19:53:33.818926 6 gc_controller.go:188] Forced deletion of orphaned Pod kube-proxy-kubernetes-minion-group-c4x6 succeeded
I1202 19:53:33.818943 6 gc_controller.go:184] Found orphaned Pod kube-dns-4009328302-4wvnw assigned to the Node kubernetes-minion-group-c4x6. Deleting.
I1202 19:53:33.818948 6 gc_controller.go:71] PodGC is force deleting Pod: kube-system:kube-dns-4009328302-4wvnw
I1202 19:53:33.827307 6 replica_set.go:452] Too few "kube-system"/"kube-dns-4009328302" replicas, need 1, creating 1
I1202 19:53:33.834151 6 pet_set.go:324] Syncing StatefulSet default/web with 4 pods
I1202 19:53:33.839704 6 gc_controller.go:188] Forced deletion of orphaned Pod kube-dns-4009328302-4wvnw succeeded
I1202 19:53:33.839725 6 gc_controller.go:184] Found orphaned Pod node-problem-detector-v0.1-pybjh assigned to the Node kubernetes-minion-group-c4x6. Deleting.
I1202 19:53:33.839730 6 gc_controller.go:71] PodGC is force deleting Pod: kube-system:node-problem-detector-v0.1-pybjh
I1202 19:53:33.842339 6 event.go:217] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"kube-dns-4009328302", UID:"711e8b13-b8c8-11e6-aa17-42010a800002", APIVersion:"extensions", ResourceVersion:"4437", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-dns-4009328302-zr1rj
I1202 19:53:33.848588 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-1
I1202 19:53:33.887220 6 gc_controller.go:188] Forced deletion of orphaned Pod node-problem-detector-v0.1-pybjh succeeded
I1202 19:53:33.887241 6 gc_controller.go:184] Found orphaned Pod heapster-v1.2.0-2691653277-gj4z7 assigned to the Node kubernetes-minion-group-c4x6. Deleting.
I1202 19:53:33.887247 6 gc_controller.go:71] PodGC is force deleting Pod: kube-system:heapster-v1.2.0-2691653277-gj4z7
I1202 19:53:33.901871 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:33.920321 6 replica_set.go:452] Too few "kube-system"/"heapster-v1.2.0-2691653277" replicas, need 1, creating 1
I1202 19:53:33.941907 6 gc_controller.go:188] Forced deletion of orphaned Pod heapster-v1.2.0-2691653277-gj4z7 succeeded
I1202 19:53:33.944771 6 event.go:217] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"heapster-v1.2.0-2691653277", UID:"c1b12bf1-b8c4-11e6-bee7-42010a800002", APIVersion:"extensions", ResourceVersion:"4435", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: heapster-v1.2.0-2691653277-0g7ct
I1202 19:53:33.944873 6 deployment_controller.go:299] Error syncing deployment kube-system/kube-dns: Operation cannot be fulfilled on deployments.extensions "kube-dns": the object has been modified; please apply your changes to the latest version and try again
I1202 19:53:34.007582 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:34.034155 6 pet_set.go:324] Syncing StatefulSet default/web with 4 pods
I1202 19:53:34.035575 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-1
I1202 19:53:34.107752 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:34.207980 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:34.308182 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:34.408375 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:34.508575 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:34.608799 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:34.709012 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:34.809248 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:34.909524 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:35.009746 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:35.109963 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:35.210178 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:35.310427 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:35.410640 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:35.510859 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:35.611063 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:35.711276 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:35.811491 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:35.911738 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:36.011940 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:36.112151 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:36.212375 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:36.312572 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:36.412766 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:36.512953 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:36.613215 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:36.713416 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:36.813651 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:36.913861 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:37.014082 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:37.114286 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:37.214519 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:37.315562 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:37.416164 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:37.516308 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:37.616475 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:37.716648 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:37.816833 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:37.916990 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:38.017205 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:38.117304 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:38.217511 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:38.317778 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:38.417951 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:38.518172 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:38.618350 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:38.722569 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:38.822778 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:38.922947 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:39.023117 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:39.123295 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:39.223469 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:39.323652 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:39.423826 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:39.523980 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:39.624163 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:39.724370 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:39.824569 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:39.924751 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:40.024965 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:40.125198 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:40.225418 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:40.325629 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:40.425813 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:40.526007 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:40.626195 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:40.726466 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:40.826663 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:40.926999 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:41.027216 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:41.127407 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:41.227595 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:41.327829 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:41.428047 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:41.531462 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:41.631663 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:41.731852 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:41.832046 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:41.932233 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:42.032401 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:42.132713 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:42.232899 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:42.333092 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:42.433284 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:42.533461 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:42.633666 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:42.733905 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:42.834144 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:42.934338 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:43.034534 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:43.134718 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:43.234906 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:43.335096 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:43.435272 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:43.535441 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:43.635664 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:43.735843 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:43.836026 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:43.839374 6 operation_executor.go:619] AttachVolume.Attach succeeded for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" (spec.Name: "pvc-61045ddc-b8c7-11e6-bee7-42010a800002") from node "kubernetes-minion-group-c4x6".
I1202 19:53:43.936250 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:44.036424 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:44.136621 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:44.236820 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:44.337029 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:44.437275 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:44.537506 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:44.637725 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:44.737952 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:44.838178 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:44.938372 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:45.038619 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:45.138840 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:45.239058 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:45.339281 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:45.439568 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
[... identical "Could not update node status" line repeated every ~100ms, 66 entries from 19:53:45.439568 through 19:53:51.955849 ...]
I1202 19:53:51.992980 6 routecontroller.go:187] Deleted route kubernetes-9dd4837f-b8c4-11e6-bee7-42010a800002 10.244.4.0/24 after 21.397045321s
[... identical "Could not update node status" line repeated every ~100ms, 21 entries from 19:53:52.056087 through 19:54:54.061390, timestamps 19:53:52.056087-19:53:54.061390 ...]
I1202 19:53:54.156917 6 pet_set.go:324] Syncing StatefulSet default/web with 4 pods
I1202 19:53:54.159265 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-1
[... identical "Could not update node status" line repeated every ~100ms, 45 entries from 19:53:54.162108 through 19:53:58.573583 ...]
I1202 19:53:58.635262 6 pet_set.go:324] Syncing StatefulSet default/web with 4 pods
I1202 19:53:58.660362 6 event.go:217] Event(v1.ObjectReference{Kind:"StatefulSet", Namespace:"default", Name:"web", UID:"89b57df0-b8c8-11e6-aa17-42010a800002", APIVersion:"apps", ResourceVersion:"4525", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' pet: web-2
I1202 19:53:58.674716 6 reconciler.go:202] Started AttachVolume for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-yl0d"
I1202 19:53:58.674750 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:53:58.690912 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:53:58.692482 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-2
[... identical "Could not update node status" line repeated every ~100ms, 37 entries from 19:53:58.774948 through 19:54:02.385230 ...]
E1202 19:54:02.411870 6 gce.go:504] GCE operation failed: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
E1202 19:54:02.411916 6 attacher.go:91] Error attaching PD "kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-yl0d": googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
E1202 19:54:02.411999 6 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002\"" failed. No retries permitted until 2016-12-02 19:54:02.911970254 +0000 UTC (durationBeforeRetry 500ms). Error: Failed to attach volume "pvc-61045ddc-b8c7-11e6-bee7-42010a800002" on node "kubernetes-minion-group-yl0d" with: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
I1202 19:54:02.412208 6 event.go:217] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"web-2", UID:"10863ca9-b8c9-11e6-aa17-42010a800002", APIVersion:"v1", ResourceVersion:"4599", FieldPath:""}): type: 'Warning' reason: 'FailedMount' Failed to attach volume "pvc-61045ddc-b8c7-11e6-bee7-42010a800002" on node "kubernetes-minion-group-yl0d" with: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
[... identical "Could not update node status" line repeated every ~100ms, 5 entries from 19:54:02.485471 through 19:54:02.886610 ...]
I1202 19:54:02.986820 6 reconciler.go:202] Started AttachVolume for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-yl0d"
I1202 19:54:02.986872 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:03.092899 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:03.193104 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:03.293321 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:03.393549 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:03.493809 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:03.594024 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:03.694259 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
[... previous "Could not update node status" line repeated every ~100ms, 29 more times, through 19:54:06.603943 ...]
E1202 19:54:06.644648 6 gce.go:504] GCE operation failed: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
E1202 19:54:06.644704 6 attacher.go:91] Error attaching PD "kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-yl0d": googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
E1202 19:54:06.644781 6 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002\"" failed. No retries permitted until 2016-12-02 19:54:07.644759475 +0000 UTC (durationBeforeRetry 1s). Error: Failed to attach volume "pvc-61045ddc-b8c7-11e6-bee7-42010a800002" on node "kubernetes-minion-group-yl0d" with: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
I1202 19:54:06.646069 6 event.go:217] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"web-2", UID:"10863ca9-b8c9-11e6-aa17-42010a800002", APIVersion:"v1", ResourceVersion:"4599", FieldPath:""}): type: 'Warning' reason: 'FailedMount' Failed to attach volume "pvc-61045ddc-b8c7-11e6-bee7-42010a800002" on node "kubernetes-minion-group-yl0d" with: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
I1202 19:54:06.704172 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
[... previous "Could not update node status" line repeated every ~100ms, 9 more times, through 19:54:07.606346 ...]
I1202 19:54:07.706596 6 reconciler.go:202] Started AttachVolume for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-yl0d"
I1202 19:54:07.706644 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
[... previous "Could not update node status" line repeated every ~100ms, 35 more times, through 19:54:11.218979 ...]
E1202 19:54:11.306689 6 gce.go:504] GCE operation failed: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
E1202 19:54:11.306719 6 attacher.go:91] Error attaching PD "kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-yl0d": googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
E1202 19:54:11.306791 6 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002\"" failed. No retries permitted until 2016-12-02 19:54:13.306771043 +0000 UTC (durationBeforeRetry 2s). Error: Failed to attach volume "pvc-61045ddc-b8c7-11e6-bee7-42010a800002" on node "kubernetes-minion-group-yl0d" with: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
I1202 19:54:11.306813 6 event.go:217] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"web-2", UID:"10863ca9-b8c9-11e6-aa17-42010a800002", APIVersion:"v1", ResourceVersion:"4599", FieldPath:""}): type: 'Warning' reason: 'FailedMount' Failed to attach volume "pvc-61045ddc-b8c7-11e6-bee7-42010a800002" on node "kubernetes-minion-group-yl0d" with: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
I1202 19:54:11.319191 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
[... previous "Could not update node status" line repeated every ~100ms, 19 more times, through 19:54:13.224039 ...]
I1202 19:54:13.324279 6 reconciler.go:202] Started AttachVolume for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-yl0d"
I1202 19:54:13.324352 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
[... previous "Could not update node status" line repeated every ~100ms, 36 more times, through 19:54:16.936794 ...]
E1202 19:54:16.938624 6 gce.go:504] GCE operation failed: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
E1202 19:54:16.938651 6 attacher.go:91] Error attaching PD "kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-yl0d": googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
E1202 19:54:16.938730 6 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002\"" failed. No retries permitted until 2016-12-02 19:54:20.93870667 +0000 UTC (durationBeforeRetry 4s). Error: Failed to attach volume "pvc-61045ddc-b8c7-11e6-bee7-42010a800002" on node "kubernetes-minion-group-yl0d" with: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
I1202 19:54:16.940458 6 event.go:217] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"web-2", UID:"10863ca9-b8c9-11e6-aa17-42010a800002", APIVersion:"v1", ResourceVersion:"4599", FieldPath:""}): type: 'Warning' reason: 'FailedMount' Failed to attach volume "pvc-61045ddc-b8c7-11e6-bee7-42010a800002" on node "kubernetes-minion-group-yl0d" with: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
I1202 19:54:17.037089 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
[... previous "Could not update node status" line repeated every ~100ms, 38 more times, through 19:54:20.849978 ...]
I1202 19:54:20.950296 6 reconciler.go:202] Started AttachVolume for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-yl0d"
I1202 19:54:20.950373 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:23.759071 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:23.820208 6 servicecontroller.go:642] Detected change in list of current cluster nodes. New node set: [kubernetes-minion-group-2g4f kubernetes-minion-group-ofhq kubernetes-minion-group-rfzh kubernetes-minion-group-yl0d]
I1202 19:54:23.820275 6 servicecontroller.go:650] Successfully updated 8 out of 8 load balancers to direct traffic to the updated set of nodes
I1202 19:54:23.859362 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:23.959685 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:24.059970 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:24.157232 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:54:24.159850 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-2
I1202 19:54:24.160874 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:24.561827 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
E1202 19:54:24.603784 6 gce.go:504] GCE operation failed: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
E1202 19:54:24.603824 6 attacher.go:91] Error attaching PD "kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-yl0d": googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
E1202 19:54:24.603891 6 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002\"" failed. No retries permitted until 2016-12-02 19:54:32.603870034 +0000 UTC (durationBeforeRetry 8s). Error: Failed to attach volume "pvc-61045ddc-b8c7-11e6-bee7-42010a800002" on node "kubernetes-minion-group-yl0d" with: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
I1202 19:54:24.604158 6 event.go:217] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"web-2", UID:"10863ca9-b8c9-11e6-aa17-42010a800002", APIVersion:"v1", ResourceVersion:"4599", FieldPath:""}): type: 'Warning' reason: 'FailedMount' Failed to attach volume "pvc-61045ddc-b8c7-11e6-bee7-42010a800002" on node "kubernetes-minion-group-yl0d" with: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
I1202 19:54:24.662080 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:28.171458 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:28.212075 6 controller_utils.go:286] Recording status change NodeNotReady event message for node kubernetes-minion-group-ofhq
I1202 19:54:28.212119 6 controller_utils.go:204] Update ready status of pods on node [kubernetes-minion-group-ofhq]
I1202 19:54:28.212377 6 event.go:217] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kubernetes-minion-group-ofhq", UID:"9ca528cf-b8c4-11e6-bee7-42010a800002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node kubernetes-minion-group-ofhq status is now: NodeNotReady
I1202 19:54:28.217917 6 controller_utils.go:221] Updating ready status of pod web-3 to false
I1202 19:54:28.231367 6 controller_utils.go:221] Updating ready status of pod fluentd-cloud-logging-kubernetes-minion-group-ofhq to false
I1202 19:54:28.234680 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:54:28.239092 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-2
I1202 19:54:28.244769 6 controller_utils.go:221] Updating ready status of pod heapster-v1.2.0-2691653277-0g7ct to false
I1202 19:54:28.259631 6 controller_utils.go:221] Updating ready status of pod kube-proxy-kubernetes-minion-group-ofhq to false
I1202 19:54:28.277611 6 reconciler.go:202] Started AttachVolume for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-6105f52a-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-ofhq"
I1202 19:54:28.277956 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:28.282834 6 controller_utils.go:221] Updating ready status of pod kubernetes-dashboard-1283361852-7t203 to false
I1202 19:54:28.302392 6 controller_utils.go:221] Updating ready status of pod node-problem-detector-v0.1-7fx2p to false
I1202 19:54:28.374059 6 attacher.go:88] Attach operation is successful. PD "kubernetes-dynamic-pvc-6105f52a-b8c7-11e6-bee7-42010a800002" is already attached to node "kubernetes-minion-group-ofhq".
I1202 19:54:28.374136 6 operation_executor.go:619] AttachVolume.Attach succeeded for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-6105f52a-b8c7-11e6-bee7-42010a800002" (spec.Name: "pvc-6105f52a-b8c7-11e6-bee7-42010a800002") from node "kubernetes-minion-group-ofhq".
I1202 19:54:28.378126 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:28.382582 6 node_status_updater.go:135] Updating status for node "kubernetes-minion-group-ofhq" succeeded. patchBytes: "{}" VolumesAttached: [{kubernetes.io/gce-pd/kubernetes-dynamic-pvc-6105f52a-b8c7-11e6-bee7-42010a800002 /dev/disk/by-id/google-kubernetes-dynamic-pvc-6105f52a-b8c7-11e6-bee7-42010a800002}]
I1202 19:54:28.482794 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:32.596373 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:32.696596 6 reconciler.go:202] Started AttachVolume for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-yl0d"
I1202 19:54:32.696643 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:36.307308 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
E1202 19:54:36.344643 6 gce.go:504] GCE operation failed: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
E1202 19:54:36.344701 6 attacher.go:91] Error attaching PD "kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-yl0d": googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
E1202 19:54:36.344785 6 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002\"" failed. No retries permitted until 2016-12-02 19:54:52.344753527 +0000 UTC (durationBeforeRetry 16s). Error: Failed to attach volume "pvc-61045ddc-b8c7-11e6-bee7-42010a800002" on node "kubernetes-minion-group-yl0d" with: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
I1202 19:54:36.346376 6 event.go:217] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"web-2", UID:"10863ca9-b8c9-11e6-aa17-42010a800002", APIVersion:"v1", ResourceVersion:"4599", FieldPath:""}): type: 'Warning' reason: 'FailedMount' Failed to attach volume "pvc-61045ddc-b8c7-11e6-bee7-42010a800002" on node "kubernetes-minion-group-yl0d" with: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
I1202 19:54:36.407612 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:38.920914 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:39.021157 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:39.121410 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:39.221688 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:39.321936 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:39.422179 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:39.522440 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:39.622692 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:39.722914 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:39.823141 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:39.923362 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:40.023603 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:40.123840 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:40.224599 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:40.324821 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:40.425078 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:40.431023 6 attacher.go:127] VolumesAreAttached: check volume "kubernetes-dynamic-pvc-6105f52a-b8c7-11e6-bee7-42010a800002" (specName: "pvc-6105f52a-b8c7-11e6-bee7-42010a800002") is no longer attached
I1202 19:54:40.431083 6 operation_executor.go:564] VerifyVolumesAreAttached determined volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-6105f52a-b8c7-11e6-bee7-42010a800002" (spec.Name: "pvc-6105f52a-b8c7-11e6-bee7-42010a800002") is no longer attached to node "kubernetes-minion-group-ofhq", therefore it was marked as detached.
I1202 19:54:40.525285 6 reconciler.go:202] Started AttachVolume for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-6105f52a-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-ofhq"
I1202 19:54:40.535140 6 node_status_updater.go:135] Updating status for node "kubernetes-minion-group-ofhq" succeeded. patchBytes: "{\"status\":{\"volumesAttached\":null}}" VolumesAttached: []
I1202 19:54:40.535166 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:40.638105 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:40.738425 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:40.838717 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:40.938974 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:41.039199 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:41.139444 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:41.239885 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:41.340121 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:41.440380 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:41.540625 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:41.641044 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:41.741301 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:41.841573 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:41.941866 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:42.042117 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:42.142422 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:42.242699 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:42.343620 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:42.444492 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:42.545619 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:42.645890 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:42.746143 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:42.849058 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:42.951304 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:43.051605 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:43.151911 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:43.252203 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:43.352420 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:43.452640 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:43.552877 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:43.653123 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:43.753351 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:43.853610 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:43.953905 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:44.054173 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:44.154392 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:44.254652 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:44.354916 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:44.455149 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:44.555418 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:44.655659 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:44.755900 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:44.856186 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:44.956452 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:45.056701 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:45.156949 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:45.257207 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:45.357466 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:45.457787 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:45.558104 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:45.658386 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:45.758661 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:45.858917 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:45.959171 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:46.059425 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:46.159680 6 node_status_updater.go:74] Could not update node status. Failed to find node "kubernetes-minion-group-c4x6" in NodeInformer cache. <nil>
I1202 19:54:46.276032 6 node_status_updater.go:135] Updating status for node "kubernetes-minion-group-c4x6" succeeded. patchBytes: "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"/dev/disk/by-id/google-kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002\",\"name\":\"kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002\"}]}}" VolumesAttached: [{kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002 /dev/disk/by-id/google-kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002}]
I1202 19:54:46.281132 6 cidr_allocator.go:172] Node kubernetes-minion-group-c4x6 is already in a process of CIDR assignment.
E1202 19:54:46.282682 6 cidr_allocator.go:248] Failed while updating Node.Spec.PodCIDR (4 retries left): Operation cannot be fulfilled on nodes "kubernetes-minion-group-c4x6": the object has been modified; please apply your changes to the latest version and try again
I1202 19:54:46.286372 6 event.go:217] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"node-problem-detector-v0.1", UID:"a1a9327d-b8c4-11e6-bee7-42010a800002", APIVersion:"extensions", ResourceVersion:"4668", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: node-problem-detector-v0.1-53cqf
E1202 19:54:46.320900 6 daemoncontroller.go:225] kube-system/node-problem-detector-v0.1 failed with : error storing status for daemon set &v1beta1.DaemonSet{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"node-problem-detector-v0.1", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/extensions/v1beta1/namespaces/kube-system/daemonsets/node-problem-detector-v0.1", UID:"a1a9327d-b8c4-11e6-bee7-42010a800002", ResourceVersion:"4668", Generation:1, CreationTimestamp:unversioned.Time{Time:time.Time{sec:63616303334, nsec:0, loc:(*time.Location)(0x3955820)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"kubernetes.io/cluster-service":"true", "version":"v0.1", "k8s-app":"node-problem-detector"}, Annotations:map[string]string{"kubectl.kubernetes.io/last-applied-configuration":"{\"kind\":\"DaemonSet\",\"apiVersion\":\"extensions/v1beta1\",\"metadata\":{\"name\":\"node-problem-detector-v0.1\",\"namespace\":\"kube-system\",\"creationTimestamp\":null,\"labels\":{\"k8s-app\":\"node-problem-detector\",\"kubernetes.io/cluster-service\":\"true\",\"version\":\"v0.1\"}},\"spec\":{\"template\":{\"metadata\":{\"creationTimestamp\":null,\"labels\":{\"k8s-app\":\"node-problem-detector\",\"kubernetes.io/cluster-service\":\"true\",\"version\":\"v0.1\"}},\"spec\":{\"volumes\":[{\"name\":\"log\",\"hostPath\":{\"path\":\"/var/log/\"}}],\"containers\":[{\"name\":\"node-problem-detector\",\"image\":\"gcr.io/google_containers/node-problem-detector:v0.1\",\"resources\":{\"limits\":{\"cpu\":\"200m\",\"memory\":\"100Mi\"},\"requests\":{\"cpu\":\"20m\",\"memory\":\"20Mi\"}},\"volumeMounts\":[{\"name\":\"log\",\"readOnly\":true,\"mountPath\":\"/log\"}],\"securityContext\":{\"privileged\":true}}],\"hostNetwork\":true}}},\"status\":{\"currentNumberScheduled\":0,\"numberMisscheduled\":0,\"desiredNumberScheduled\":0,\"numberReady\":0}}"}, OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ClusterName:""}, Spec:v1beta1.DaemonSetSpec{Selector:(*unversioned.LabelSelector)(0xc4218ad9a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"node-problem-detector", "kubernetes.io/cluster-service":"true", "version":"v0.1"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"log", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc4216c5c60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"node-problem-detector", Image:"gcr.io/google_containers/node-problem-detector:v0.1", 
Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:200, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"200m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:20, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"log", ReadOnly:true, MountPath:"/log", SubPath:""}}, LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc421dcb890), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc4216c5cc0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, SecurityContext:(*v1.PodSecurityContext)(0xc421a27d00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:""}}}, Status:v1beta1.DaemonSetStatus{CurrentNumberScheduled:6, NumberMisscheduled:0, DesiredNumberScheduled:6, NumberReady:4}}: Operation cannot be fulfilled on daemonsets.extensions "node-problem-detector-v0.1": the object has been modified; please apply your changes to the latest version and try again
I1202 19:54:48.709913 6 nodecontroller.go:420] NodeController observed a new Node: "kubernetes-minion-group-c4x6"
I1202 19:54:48.709943 6 controller_utils.go:275] Recording Registered Node kubernetes-minion-group-c4x6 in NodeController event message for node kubernetes-minion-group-c4x6
I1202 19:54:48.710266 6 event.go:217] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kubernetes-minion-group-c4x6", UID:"2ce5003a-b8c9-11e6-aa17-42010a800002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node kubernetes-minion-group-c4x6 event: Registered Node kubernetes-minion-group-c4x6 in NodeController
I1202 19:54:52.092042 6 routecontroller.go:154] Creating route for node kubernetes-minion-group-c4x6 10.244.4.0/24 with hint 2ce5003a-b8c9-11e6-aa17-42010a800002, throttled 751ns
I1202 19:54:52.390396 6 reconciler.go:202] Started AttachVolume for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-yl0d"
I1202 19:54:54.157316 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:54:54.159893 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-2
E1202 19:54:56.026823 6 gce.go:504] GCE operation failed: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
E1202 19:54:56.026893 6 attacher.go:91] Error attaching PD "kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-yl0d": googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
E1202 19:54:56.026983 6 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002\"" failed. No retries permitted until 2016-12-02 19:55:28.026947868 +0000 UTC (durationBeforeRetry 32s). Error: Failed to attach volume "pvc-61045ddc-b8c7-11e6-bee7-42010a800002" on node "kubernetes-minion-group-yl0d" with: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
I1202 19:54:56.027386 6 event.go:217] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"web-2", UID:"10863ca9-b8c9-11e6-aa17-42010a800002", APIVersion:"v1", ResourceVersion:"4599", FieldPath:""}): type: 'Warning' reason: 'FailedMount' Failed to attach volume "pvc-61045ddc-b8c7-11e6-bee7-42010a800002" on node "kubernetes-minion-group-yl0d" with: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
I1202 19:54:59.184400 6 operation_executor.go:619] AttachVolume.Attach succeeded for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-6105f52a-b8c7-11e6-bee7-42010a800002" (spec.Name: "pvc-6105f52a-b8c7-11e6-bee7-42010a800002") from node "kubernetes-minion-group-ofhq".
I1202 19:54:59.214386 6 node_status_updater.go:135] Updating status for node "kubernetes-minion-group-ofhq" succeeded. patchBytes: "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"/dev/disk/by-id/google-kubernetes-dynamic-pvc-6105f52a-b8c7-11e6-bee7-42010a800002\",\"name\":\"kubernetes.io/gce-pd/kubernetes-dynamic-pvc-6105f52a-b8c7-11e6-bee7-42010a800002\"}]}}" VolumesAttached: [{kubernetes.io/gce-pd/kubernetes-dynamic-pvc-6105f52a-b8c7-11e6-bee7-42010a800002 /dev/disk/by-id/google-kubernetes-dynamic-pvc-6105f52a-b8c7-11e6-bee7-42010a800002}]
I1202 19:55:16.722315 6 routecontroller.go:162] Created route for node kubernetes-minion-group-c4x6 10.244.4.0/24 with hint 2ce5003a-b8c9-11e6-aa17-42010a800002 after 24.630287094s
I1202 19:55:24.157425 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:55:24.159480 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-2
I1202 19:55:28.091730 6 reconciler.go:202] Started AttachVolume for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-yl0d"
E1202 19:55:31.720561 6 gce.go:504] GCE operation failed: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
E1202 19:55:31.720608 6 attacher.go:91] Error attaching PD "kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-yl0d": googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
E1202 19:55:31.720678 6 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002\"" failed. No retries permitted until 2016-12-02 19:56:35.720647126 +0000 UTC (durationBeforeRetry 1m4s). Error: Failed to attach volume "pvc-61045ddc-b8c7-11e6-bee7-42010a800002" on node "kubernetes-minion-group-yl0d" with: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
I1202 19:55:31.721012 6 event.go:217] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"web-2", UID:"10863ca9-b8c9-11e6-aa17-42010a800002", APIVersion:"v1", ResourceVersion:"4599", FieldPath:""}): type: 'Warning' reason: 'FailedMount' Failed to attach volume "pvc-61045ddc-b8c7-11e6-bee7-42010a800002" on node "kubernetes-minion-group-yl0d" with: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
I1202 19:55:51.416556 6 controller_utils.go:286] Recording status change NodeNotReady event message for node kubernetes-minion-group-rfzh
I1202 19:55:51.416607 6 controller_utils.go:204] Update ready status of pods on node [kubernetes-minion-group-rfzh]
I1202 19:55:51.416975 6 event.go:217] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kubernetes-minion-group-rfzh", UID:"9c18b0b0-b8c4-11e6-bee7-42010a800002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node kubernetes-minion-group-rfzh status is now: NodeNotReady
I1202 19:55:51.423040 6 controller_utils.go:221] Updating ready status of pod web-4 to false
I1202 19:55:51.427473 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:55:51.429855 6 controller_utils.go:221] Updating ready status of pod fluentd-cloud-logging-kubernetes-minion-group-rfzh to false
I1202 19:55:51.433077 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-2
I1202 19:55:51.439329 6 controller_utils.go:221] Updating ready status of pod kube-dns-4009328302-zr1rj to false
I1202 19:55:51.453050 6 controller_utils.go:221] Updating ready status of pod kube-proxy-kubernetes-minion-group-rfzh to false
I1202 19:55:51.460681 6 reconciler.go:202] Started AttachVolume for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61084680-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-rfzh"
I1202 19:55:51.463168 6 controller_utils.go:221] Updating ready status of pod monitoring-influxdb-grafana-v4-fpl9e to false
I1202 19:55:51.483200 6 controller_utils.go:221] Updating ready status of pod node-problem-detector-v0.1-6zbr4 to false
I1202 19:55:51.506667 6 replication_controller.go:321] Observed updated replication controller monitoring-influxdb-grafana-v4. Desired pod count change: 1->1
I1202 19:55:51.560090 6 attacher.go:88] Attach operation is successful. PD "kubernetes-dynamic-pvc-61084680-b8c7-11e6-bee7-42010a800002" is already attached to node "kubernetes-minion-group-rfzh".
I1202 19:55:51.560142 6 operation_executor.go:619] AttachVolume.Attach succeeded for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61084680-b8c7-11e6-bee7-42010a800002" (spec.Name: "pvc-61084680-b8c7-11e6-bee7-42010a800002") from node "kubernetes-minion-group-rfzh".
I1202 19:55:51.564603 6 node_status_updater.go:135] Updating status for node "kubernetes-minion-group-rfzh" succeeded. patchBytes: "{}" VolumesAttached: [{kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61084680-b8c7-11e6-bee7-42010a800002 /dev/disk/by-id/google-kubernetes-dynamic-pvc-61084680-b8c7-11e6-bee7-42010a800002}]
I1202 19:55:54.157596 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:55:54.159839 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-2
I1202 19:56:00.516799 6 cidr_allocator.go:172] Node kubernetes-minion-group-ofhq is already in a process of CIDR assignment.
E1202 19:56:00.527978 6 cidr_allocator.go:248] Failed while updating Node.Spec.PodCIDR (4 retries left): Operation cannot be fulfilled on nodes "kubernetes-minion-group-ofhq": the object has been modified; please apply your changes to the latest version and try again
I1202 19:56:00.593538 6 node_status_updater.go:135] Updating status for node "kubernetes-minion-group-ofhq" succeeded. patchBytes: "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"/dev/disk/by-id/google-kubernetes-dynamic-pvc-6105f52a-b8c7-11e6-bee7-42010a800002\",\"name\":\"kubernetes.io/gce-pd/kubernetes-dynamic-pvc-6105f52a-b8c7-11e6-bee7-42010a800002\"}]}}" VolumesAttached: [{kubernetes.io/gce-pd/kubernetes-dynamic-pvc-6105f52a-b8c7-11e6-bee7-42010a800002 /dev/disk/by-id/google-kubernetes-dynamic-pvc-6105f52a-b8c7-11e6-bee7-42010a800002}]
I1202 19:56:03.820416 6 servicecontroller.go:642] Detected change in list of current cluster nodes. New node set: [kubernetes-minion-group-2g4f kubernetes-minion-group-c4x6 kubernetes-minion-group-yl0d]
I1202 19:56:03.820481 6 servicecontroller.go:650] Successfully updated 8 out of 8 load balancers to direct traffic to the updated set of nodes
I1202 19:56:06.854451 6 routecontroller.go:183] Deleting route kubernetes-9ca528cf-b8c4-11e6-bee7-42010a800002 10.244.2.0/24
I1202 19:56:06.854700 6 routecontroller.go:154] Creating route for node kubernetes-minion-group-ofhq 10.244.6.0/24 with hint 5925a97b-b8c9-11e6-aa17-42010a800002, throttled 475ns
I1202 19:56:07.585549 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:56:07.590211 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-2
I1202 19:56:10.746998 6 attacher.go:127] VolumesAreAttached: check volume "kubernetes-dynamic-pvc-61084680-b8c7-11e6-bee7-42010a800002" (specName: "pvc-61084680-b8c7-11e6-bee7-42010a800002") is no longer attached
I1202 19:56:10.747071 6 operation_executor.go:564] VerifyVolumesAreAttached determined volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61084680-b8c7-11e6-bee7-42010a800002" (spec.Name: "pvc-61084680-b8c7-11e6-bee7-42010a800002") is no longer attached to node "kubernetes-minion-group-rfzh", therefore it was marked as detached.
I1202 19:56:10.833400 6 reconciler.go:202] Started AttachVolume for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61084680-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-rfzh"
I1202 19:56:10.842273 6 node_status_updater.go:135] Updating status for node "kubernetes-minion-group-rfzh" succeeded. patchBytes: "{\"status\":{\"volumesAttached\":null}}" VolumesAttached: []
I1202 19:56:22.196362 6 routecontroller.go:187] Deleted route kubernetes-9ca528cf-b8c4-11e6-bee7-42010a800002 10.244.2.0/24 after 15.341919256s
I1202 19:56:24.157777 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:56:24.160074 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-2
I1202 19:56:29.604177 6 operation_executor.go:619] AttachVolume.Attach succeeded for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61084680-b8c7-11e6-bee7-42010a800002" (spec.Name: "pvc-61084680-b8c7-11e6-bee7-42010a800002") from node "kubernetes-minion-group-rfzh".
I1202 19:56:29.693211 6 node_status_updater.go:135] Updating status for node "kubernetes-minion-group-rfzh" succeeded. patchBytes: "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"/dev/disk/by-id/google-kubernetes-dynamic-pvc-61084680-b8c7-11e6-bee7-42010a800002\",\"name\":\"kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61084680-b8c7-11e6-bee7-42010a800002\"}]}}" VolumesAttached: [{kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61084680-b8c7-11e6-bee7-42010a800002 /dev/disk/by-id/google-kubernetes-dynamic-pvc-61084680-b8c7-11e6-bee7-42010a800002}]
I1202 19:56:35.811461 6 reconciler.go:202] Started AttachVolume for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-yl0d"
E1202 19:56:39.456576 6 gce.go:504] GCE operation failed: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
E1202 19:56:39.456629 6 attacher.go:91] Error attaching PD "kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-yl0d": googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
E1202 19:56:39.456713 6 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002\"" failed. No retries permitted until 2016-12-02 19:58:39.456692585 +0000 UTC (durationBeforeRetry 2m0s). Error: Failed to attach volume "pvc-61045ddc-b8c7-11e6-bee7-42010a800002" on node "kubernetes-minion-group-yl0d" with: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
I1202 19:56:39.458246 6 event.go:217] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"web-2", UID:"10863ca9-b8c9-11e6-aa17-42010a800002", APIVersion:"v1", ResourceVersion:"4599", FieldPath:""}): type: 'Warning' reason: 'FailedMount' Failed to attach volume "pvc-61045ddc-b8c7-11e6-bee7-42010a800002" on node "kubernetes-minion-group-yl0d" with: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
I1202 19:56:43.422607 6 routecontroller.go:162] Created route for node kubernetes-minion-group-ofhq 10.244.6.0/24 with hint 5925a97b-b8c9-11e6-aa17-42010a800002 after 36.567907279s
I1202 19:56:54.157963 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:56:54.160720 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-2
I1202 19:57:03.383030 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:57:03.388091 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-2
I1202 19:57:24.158074 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:57:24.160835 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-2
I1202 19:57:24.272056 6 controller_utils.go:286] Recording status change NodeNotReady event message for node kubernetes-minion-group-yl0d
I1202 19:57:24.272090 6 controller_utils.go:204] Update ready status of pods on node [kubernetes-minion-group-yl0d]
I1202 19:57:24.272340 6 event.go:217] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kubernetes-minion-group-yl0d", UID:"9f76479f-b8c4-11e6-bee7-42010a800002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node kubernetes-minion-group-yl0d status is now: NodeNotReady
I1202 19:57:24.279297 6 controller_utils.go:221] Updating ready status of pod web-0 to false
I1202 19:57:24.285305 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:57:24.287624 6 controller_utils.go:221] Updating ready status of pod web-2 to false
I1202 19:57:24.291680 6 controller_utils.go:221] Updating ready status of pod fluentd-cloud-logging-kubernetes-minion-group-yl0d to false
I1202 19:57:24.291904 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-2
I1202 19:57:24.301093 6 controller_utils.go:221] Updating ready status of pod kube-proxy-kubernetes-minion-group-yl0d to false
I1202 19:57:24.309965 6 controller_utils.go:221] Updating ready status of pod l7-default-backend-1869959889-n53x9 to false
I1202 19:57:24.317958 6 controller_utils.go:221] Updating ready status of pod node-problem-detector-v0.1-jh0rp to false
I1202 19:57:24.349105 6 reconciler.go:202] Started AttachVolume for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-60f59472-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-yl0d"
I1202 19:57:24.437102 6 attacher.go:88] Attach operation is successful. PD "kubernetes-dynamic-pvc-60f59472-b8c7-11e6-bee7-42010a800002" is already attached to node "kubernetes-minion-group-yl0d".
I1202 19:57:24.437154 6 operation_executor.go:619] AttachVolume.Attach succeeded for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-60f59472-b8c7-11e6-bee7-42010a800002" (spec.Name: "pvc-60f59472-b8c7-11e6-bee7-42010a800002") from node "kubernetes-minion-group-yl0d".
I1202 19:57:24.453568 6 node_status_updater.go:135] Updating status for node "kubernetes-minion-group-yl0d" succeeded. patchBytes: "{}" VolumesAttached: [{kubernetes.io/gce-pd/kubernetes-dynamic-pvc-60f59472-b8c7-11e6-bee7-42010a800002 /dev/disk/by-id/google-kubernetes-dynamic-pvc-60f59472-b8c7-11e6-bee7-42010a800002}]
I1202 19:57:29.071622 6 node_status_updater.go:135] Updating status for node "kubernetes-minion-group-rfzh" succeeded. patchBytes: "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"/dev/disk/by-id/google-kubernetes-dynamic-pvc-61084680-b8c7-11e6-bee7-42010a800002\",\"name\":\"kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61084680-b8c7-11e6-bee7-42010a800002\"}]}}" VolumesAttached: [{kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61084680-b8c7-11e6-bee7-42010a800002 /dev/disk/by-id/google-kubernetes-dynamic-pvc-61084680-b8c7-11e6-bee7-42010a800002}]
W1202 19:57:30.289844 6 reflector.go:315] pkg/controller/garbagecollector/garbagecollector.go:761: watch of <nil> ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [4041/3923]) [5040]
I1202 19:57:33.552840 6 routecontroller.go:183] Deleting route kubernetes-9c18b0b0-b8c4-11e6-bee7-42010a800002 10.244.1.0/24
I1202 19:57:33.553043 6 routecontroller.go:154] Creating route for node kubernetes-minion-group-rfzh 10.244.7.0/24 with hint 8de5731f-b8c9-11e6-aa17-42010a800002, throttled 475ns
I1202 19:57:33.787842 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:57:33.792533 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-2
I1202 19:57:43.820646 6 servicecontroller.go:642] Detected change in list of current cluster nodes. New node set: [kubernetes-minion-group-2g4f kubernetes-minion-group-c4x6 kubernetes-minion-group-ofhq]
I1202 19:57:43.820739 6 servicecontroller.go:650] Successfully updated 8 out of 8 load balancers to direct traffic to the updated set of nodes
I1202 19:57:51.036072 6 attacher.go:127] VolumesAreAttached: check volume "kubernetes-dynamic-pvc-60f59472-b8c7-11e6-bee7-42010a800002" (specName: "pvc-60f59472-b8c7-11e6-bee7-42010a800002") is no longer attached
I1202 19:57:51.036120 6 operation_executor.go:564] VerifyVolumesAreAttached determined volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-60f59472-b8c7-11e6-bee7-42010a800002" (spec.Name: "pvc-60f59472-b8c7-11e6-bee7-42010a800002") is no longer attached to node "kubernetes-minion-group-yl0d", therefore it was marked as detached.
I1202 19:57:51.047395 6 reconciler.go:202] Started AttachVolume for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-60f59472-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-yl0d"
I1202 19:57:51.059086 6 node_status_updater.go:135] Updating status for node "kubernetes-minion-group-yl0d" succeeded. patchBytes: "{\"status\":{\"volumesAttached\":null}}" VolumesAttached: []
I1202 19:57:51.893230 6 routecontroller.go:187] Deleted route kubernetes-9c18b0b0-b8c4-11e6-bee7-42010a800002 10.244.1.0/24 after 18.340415568s
I1202 19:57:54.158210 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:57:54.160458 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-2
I1202 19:57:58.158338 6 routecontroller.go:162] Created route for node kubernetes-minion-group-rfzh 10.244.7.0/24 with hint 8de5731f-b8c9-11e6-aa17-42010a800002 after 24.605295302s
I1202 19:58:09.683929 6 operation_executor.go:619] AttachVolume.Attach succeeded for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-60f59472-b8c7-11e6-bee7-42010a800002" (spec.Name: "pvc-60f59472-b8c7-11e6-bee7-42010a800002") from node "kubernetes-minion-group-yl0d".
I1202 19:58:09.732011 6 node_status_updater.go:135] Updating status for node "kubernetes-minion-group-yl0d" succeeded. patchBytes: "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"/dev/disk/by-id/google-kubernetes-dynamic-pvc-60f59472-b8c7-11e6-bee7-42010a800002\",\"name\":\"kubernetes.io/gce-pd/kubernetes-dynamic-pvc-60f59472-b8c7-11e6-bee7-42010a800002\"}]}}" VolumesAttached: [{kubernetes.io/gce-pd/kubernetes-dynamic-pvc-60f59472-b8c7-11e6-bee7-42010a800002 /dev/disk/by-id/google-kubernetes-dynamic-pvc-60f59472-b8c7-11e6-bee7-42010a800002}]
I1202 19:58:11.499799 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:58:11.506283 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-2
I1202 19:58:24.158332 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:58:24.160528 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-2
I1202 19:58:39.508187 6 reconciler.go:202] Started AttachVolume for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-yl0d"
E1202 19:58:43.158177 6 gce.go:504] GCE operation failed: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
E1202 19:58:43.158242 6 attacher.go:91] Error attaching PD "kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-yl0d": googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
E1202 19:58:43.158328 6 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002\"" failed. No retries permitted until 2016-12-02 20:00:43.158291459 +0000 UTC (durationBeforeRetry 2m0s). Error: Failed to attach volume "pvc-61045ddc-b8c7-11e6-bee7-42010a800002" on node "kubernetes-minion-group-yl0d" with: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
I1202 19:58:43.160303 6 event.go:217] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"web-2", UID:"10863ca9-b8c9-11e6-aa17-42010a800002", APIVersion:"v1", ResourceVersion:"4599", FieldPath:""}): type: 'Warning' reason: 'FailedMount' Failed to attach volume "pvc-61045ddc-b8c7-11e6-bee7-42010a800002" on node "kubernetes-minion-group-yl0d" with: googleapi: Error 400: The disk resource 'kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002' is already being used by 'kubernetes-minion-group-c4x6'
I1202 19:58:47.667110 6 replication_controller.go:321] Observed updated replication controller monitoring-influxdb-grafana-v4. Desired pod count change: 1->1
I1202 19:58:54.158534 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:58:54.160658 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-2
I1202 19:59:09.291494 6 node_status_updater.go:135] Updating status for node "kubernetes-minion-group-yl0d" succeeded. patchBytes: "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"/dev/disk/by-id/google-kubernetes-dynamic-pvc-60f59472-b8c7-11e6-bee7-42010a800002\",\"name\":\"kubernetes.io/gce-pd/kubernetes-dynamic-pvc-60f59472-b8c7-11e6-bee7-42010a800002\"}]}}" VolumesAttached: [{kubernetes.io/gce-pd/kubernetes-dynamic-pvc-60f59472-b8c7-11e6-bee7-42010a800002 /dev/disk/by-id/google-kubernetes-dynamic-pvc-60f59472-b8c7-11e6-bee7-42010a800002}]
I1202 19:59:13.769810 6 replication_controller.go:321] Observed updated replication controller monitoring-influxdb-grafana-v4. Desired pod count change: 1->1
I1202 19:59:18.283342 6 routecontroller.go:183] Deleting route kubernetes-9f76479f-b8c4-11e6-bee7-42010a800002 10.244.5.0/24
I1202 19:59:18.283599 6 routecontroller.go:154] Creating route for node kubernetes-minion-group-yl0d 10.244.8.0/24 with hint c9a6c0fa-b8c9-11e6-aa17-42010a800002, throttled 585ns
I1202 19:59:20.750787 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:59:20.757861 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-2
I1202 19:59:23.820954 6 servicecontroller.go:642] Detected change in list of current cluster nodes. New node set: [kubernetes-minion-group-rfzh kubernetes-minion-group-2g4f kubernetes-minion-group-c4x6 kubernetes-minion-group-ofhq]
I1202 19:59:23.821022 6 servicecontroller.go:650] Successfully updated 8 out of 8 load balancers to direct traffic to the updated set of nodes
I1202 19:59:24.158740 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:59:24.162061 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-2
I1202 19:59:33.617527 6 routecontroller.go:187] Deleted route kubernetes-9f76479f-b8c4-11e6-bee7-42010a800002 10.244.5.0/24 after 15.334203334s
I1202 19:59:44.032075 6 node_status_updater.go:135] Updating status for node "kubernetes-minion-group-c4x6" succeeded. patchBytes: "{\"status\":{\"volumesAttached\":null}}" VolumesAttached: []
I1202 19:59:48.864467 6 routecontroller.go:162] Created route for node kubernetes-minion-group-yl0d 10.244.8.0/24 with hint c9a6c0fa-b8c9-11e6-aa17-42010a800002 after 30.580881822s
I1202 19:59:51.835367 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:59:51.839854 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-2
I1202 19:59:54.158808 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 19:59:54.161061 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-2
I1202 20:00:24.158960 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 20:00:24.161724 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-2
I1202 20:00:43.238185 6 reconciler.go:172] Started DetachVolume for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" from node "kubernetes-minion-group-c4x6". This volume is not safe to detach, but maxWaitForUnmountDuration 6m0s expired, force detaching
I1202 20:00:46.630718 6 attacher.go:127] VolumesAreAttached: check volume "kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" (specName: "pvc-61045ddc-b8c7-11e6-bee7-42010a800002") is no longer attached
I1202 20:00:46.630769 6 operation_executor.go:564] VerifyVolumesAreAttached determined volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" (spec.Name: "pvc-61045ddc-b8c7-11e6-bee7-42010a800002") is no longer attached to node "kubernetes-minion-group-c4x6", therefore it was marked as detached.
I1202 20:00:46.784601 6 operation_executor.go:699] DetachVolume.Detach succeeded for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" (spec.Name: "pvc-61045ddc-b8c7-11e6-bee7-42010a800002") from node "kubernetes-minion-group-c4x6".
I1202 20:00:46.847328 6 reconciler.go:202] Started AttachVolume for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" to node "kubernetes-minion-group-yl0d"
I1202 20:00:50.639151 6 operation_executor.go:619] AttachVolume.Attach succeeded for volume "kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002" (spec.Name: "pvc-61045ddc-b8c7-11e6-bee7-42010a800002") from node "kubernetes-minion-group-yl0d".
I1202 20:00:50.665902 6 node_status_updater.go:135] Updating status for node "kubernetes-minion-group-yl0d" succeeded. patchBytes: "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"/dev/disk/by-id/google-kubernetes-dynamic-pvc-60f59472-b8c7-11e6-bee7-42010a800002\",\"name\":\"kubernetes.io/gce-pd/kubernetes-dynamic-pvc-60f59472-b8c7-11e6-bee7-42010a800002\"},{\"devicePath\":\"/dev/disk/by-id/google-kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002\",\"name\":\"kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002\"}]}}" VolumesAttached: [{kubernetes.io/gce-pd/kubernetes-dynamic-pvc-60f59472-b8c7-11e6-bee7-42010a800002 /dev/disk/by-id/google-kubernetes-dynamic-pvc-60f59472-b8c7-11e6-bee7-42010a800002} {kubernetes.io/gce-pd/kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002 /dev/disk/by-id/google-kubernetes-dynamic-pvc-61045ddc-b8c7-11e6-bee7-42010a800002}]
I1202 20:00:54.159142 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 20:00:54.162114 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-2
I1202 20:01:03.821208 6 servicecontroller.go:642] Detected change in list of current cluster nodes. New node set: [kubernetes-minion-group-ofhq kubernetes-minion-group-rfzh kubernetes-minion-group-yl0d kubernetes-minion-group-2g4f kubernetes-minion-group-c4x6]
I1202 20:01:03.821278 6 servicecontroller.go:650] Successfully updated 8 out of 8 load balancers to direct traffic to the updated set of nodes
I1202 20:01:24.159274 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 20:01:24.162017 6 pet_set.go:332] StatefulSet web blocked from scaling on pod web-2
I1202 20:01:33.058403 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 20:01:54.159399 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 20:02:24.159553 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 20:02:54.159718 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 20:03:24.159955 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 20:03:54.160107 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
I1202 20:04:24.160248 6 pet_set.go:324] Syncing StatefulSet default/web with 5 pods
@kgrvamsi

kgrvamsi commented Apr 4, 2017

Do you still have this issue?

error retrieving resource lock kube-system/kube-controller-manager: 

@herozeng

I have the same issue. Is it caused by etcd? I only have one etcd, so there is no option to do leader election.
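
Note that leader election here is between kube-controller-manager replicas, not etcd members, so a single etcd does not prevent it. As a minimal sketch (flag names as they existed around the Kubernetes 1.4/1.5 era of this log; verify against your version), a single-master setup can skip the lease entirely by disabling the flag:

```shell
# Hedged example: on a cluster with only one kube-controller-manager,
# leader election can be turned off so the process never needs to
# acquire the kube-system/kube-controller-manager lock.
# --leader-elect defaults to true; set it to false for a single replica.
kube-controller-manager \
  --master=http://127.0.0.1:8080 \
  --leader-elect=false
```

The "connection refused" errors at the top of this log are the controller-manager failing to reach the apiserver at 127.0.0.1:8080 while retrying the lock, not a fault in election itself; once the apiserver came up, the lease was acquired.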
