@tsuna · Created July 12, 2018 23:20
kops failed upgrade — kube-apiserver v1.10.3 startup log
Flag --etcd-quorum-read has been deprecated, This flag is deprecated and the ability to switch off quorum read will be removed in a future release.
Flag --insecure-bind-address has been deprecated, This flag will be removed in a future version.
Flag --insecure-port has been deprecated, This flag will be removed in a future version.
I0712 22:54:43.196267 1 flags.go:27] FLAG: --address="127.0.0.1"
I0712 22:54:43.196388 1 flags.go:27] FLAG: --admission-control="[]"
I0712 22:54:43.196435 1 flags.go:27] FLAG: --admission-control-config-file=""
I0712 22:54:43.196476 1 flags.go:27] FLAG: --advertise-address="<nil>"
I0712 22:54:43.196541 1 flags.go:27] FLAG: --allow-privileged="true"
I0712 22:54:43.196582 1 flags.go:27] FLAG: --alsologtostderr="false"
I0712 22:54:43.196622 1 flags.go:27] FLAG: --anonymous-auth="false"
I0712 22:54:43.196683 1 flags.go:27] FLAG: --apiserver-count="1"
I0712 22:54:43.196727 1 flags.go:27] FLAG: --audit-log-batch-buffer-size="10000"
I0712 22:54:43.196763 1 flags.go:27] FLAG: --audit-log-batch-max-size="400"
I0712 22:54:43.196798 1 flags.go:27] FLAG: --audit-log-batch-max-wait="30s"
I0712 22:54:43.196861 1 flags.go:27] FLAG: --audit-log-batch-throttle-burst="15"
I0712 22:54:43.196898 1 flags.go:27] FLAG: --audit-log-batch-throttle-enable="false"
I0712 22:54:43.196936 1 flags.go:27] FLAG: --audit-log-batch-throttle-qps="10"
I0712 22:54:43.196978 1 flags.go:27] FLAG: --audit-log-format="json"
I0712 22:54:43.205189 1 flags.go:27] FLAG: --audit-log-maxage="0"
I0712 22:54:43.205235 1 flags.go:27] FLAG: --audit-log-maxbackup="0"
I0712 22:54:43.205279 1 flags.go:27] FLAG: --audit-log-maxsize="0"
I0712 22:54:43.205315 1 flags.go:27] FLAG: --audit-log-mode="blocking"
I0712 22:54:43.205378 1 flags.go:27] FLAG: --audit-log-path=""
I0712 22:54:43.205414 1 flags.go:27] FLAG: --audit-log-truncate-enabled="false"
I0712 22:54:43.205448 1 flags.go:27] FLAG: --audit-log-truncate-max-batch-size="10485760"
I0712 22:54:43.205487 1 flags.go:27] FLAG: --audit-log-truncate-max-event-size="102400"
I0712 22:54:43.205552 1 flags.go:27] FLAG: --audit-policy-file=""
I0712 22:54:43.205587 1 flags.go:27] FLAG: --audit-webhook-batch-buffer-size="10000"
I0712 22:54:43.205623 1 flags.go:27] FLAG: --audit-webhook-batch-initial-backoff="10s"
I0712 22:54:43.205683 1 flags.go:27] FLAG: --audit-webhook-batch-max-size="400"
I0712 22:54:43.205722 1 flags.go:27] FLAG: --audit-webhook-batch-max-wait="30s"
I0712 22:54:43.205758 1 flags.go:27] FLAG: --audit-webhook-batch-throttle-burst="15"
I0712 22:54:43.205792 1 flags.go:27] FLAG: --audit-webhook-batch-throttle-enable="true"
I0712 22:54:43.205852 1 flags.go:27] FLAG: --audit-webhook-batch-throttle-qps="10"
I0712 22:54:43.205892 1 flags.go:27] FLAG: --audit-webhook-config-file=""
I0712 22:54:43.205926 1 flags.go:27] FLAG: --audit-webhook-initial-backoff="10s"
I0712 22:54:43.205960 1 flags.go:27] FLAG: --audit-webhook-mode="batch"
I0712 22:54:43.206020 1 flags.go:27] FLAG: --audit-webhook-truncate-enabled="false"
I0712 22:54:43.206058 1 flags.go:27] FLAG: --audit-webhook-truncate-max-batch-size="10485760"
I0712 22:54:43.206093 1 flags.go:27] FLAG: --audit-webhook-truncate-max-event-size="102400"
I0712 22:54:43.206128 1 flags.go:27] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
I0712 22:54:43.206189 1 flags.go:27] FLAG: --authentication-token-webhook-config-file=""
I0712 22:54:43.206226 1 flags.go:27] FLAG: --authorization-mode="RBAC"
I0712 22:54:43.206260 1 flags.go:27] FLAG: --authorization-policy-file=""
I0712 22:54:43.206294 1 flags.go:27] FLAG: --authorization-rbac-super-user=""
I0712 22:54:43.206355 1 flags.go:27] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
I0712 22:54:43.206392 1 flags.go:27] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
I0712 22:54:43.206427 1 flags.go:27] FLAG: --authorization-webhook-config-file=""
I0712 22:54:43.206461 1 flags.go:27] FLAG: --basic-auth-file="/srv/kubernetes/basic_auth.csv"
I0712 22:54:43.206522 1 flags.go:27] FLAG: --bind-address="0.0.0.0"
I0712 22:54:43.206560 1 flags.go:27] FLAG: --cert-dir="/var/run/kubernetes"
I0712 22:54:43.206597 1 flags.go:27] FLAG: --client-ca-file="/srv/kubernetes/ca.crt"
I0712 22:54:43.206633 1 flags.go:27] FLAG: --cloud-config=""
I0712 22:54:43.206776 1 flags.go:27] FLAG: --cloud-provider="aws"
I0712 22:54:43.206816 1 flags.go:27] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16"
I0712 22:54:43.206896 1 flags.go:27] FLAG: --contention-profiling="false"
I0712 22:54:43.206931 1 flags.go:27] FLAG: --cors-allowed-origins="[]"
I0712 22:54:43.206974 1 flags.go:27] FLAG: --default-not-ready-toleration-seconds="300"
I0712 22:54:43.207035 1 flags.go:27] FLAG: --default-unreachable-toleration-seconds="300"
I0712 22:54:43.207072 1 flags.go:27] FLAG: --default-watch-cache-size="100"
I0712 22:54:43.207107 1 flags.go:27] FLAG: --delete-collection-workers="1"
I0712 22:54:43.207141 1 flags.go:27] FLAG: --deserialization-cache-size="0"
I0712 22:54:43.207198 1 flags.go:27] FLAG: --disable-admission-plugins="[]"
I0712 22:54:43.207237 1 flags.go:27] FLAG: --enable-admission-plugins="[Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NodeRestriction,ResourceQuota]"
I0712 22:54:43.207293 1 flags.go:27] FLAG: --enable-aggregator-routing="false"
I0712 22:54:43.207362 1 flags.go:27] FLAG: --enable-bootstrap-token-auth="false"
I0712 22:54:43.207399 1 flags.go:27] FLAG: --enable-garbage-collector="true"
I0712 22:54:43.207433 1 flags.go:27] FLAG: --enable-logs-handler="true"
I0712 22:54:43.207467 1 flags.go:27] FLAG: --enable-swagger-ui="false"
I0712 22:54:43.207537 1 flags.go:27] FLAG: --endpoint-reconciler-type="master-count"
I0712 22:54:43.207578 1 flags.go:27] FLAG: --etcd-cafile=""
I0712 22:54:43.207612 1 flags.go:27] FLAG: --etcd-certfile=""
I0712 22:54:43.207646 1 flags.go:27] FLAG: --etcd-compaction-interval="5m0s"
I0712 22:54:43.207706 1 flags.go:27] FLAG: --etcd-count-metric-poll-period="1m0s"
I0712 22:54:43.207743 1 flags.go:27] FLAG: --etcd-keyfile=""
I0712 22:54:43.207776 1 flags.go:27] FLAG: --etcd-prefix="/registry"
I0712 22:54:43.207811 1 flags.go:27] FLAG: --etcd-quorum-read="false"
I0712 22:54:43.207926 1 flags.go:27] FLAG: --etcd-servers="[http://127.0.0.1:4001]"
I0712 22:54:43.207969 1 flags.go:27] FLAG: --etcd-servers-overrides="[/events#http://127.0.0.1:4002]"
I0712 22:54:43.208048 1 flags.go:27] FLAG: --event-ttl="1h0m0s"
I0712 22:54:43.208086 1 flags.go:27] FLAG: --experimental-encryption-provider-config=""
I0712 22:54:43.208120 1 flags.go:27] FLAG: --external-hostname=""
I0712 22:54:43.208153 1 flags.go:27] FLAG: --feature-gates=""
I0712 22:54:43.208219 1 flags.go:27] FLAG: --help="false"
I0712 22:54:43.208255 1 flags.go:27] FLAG: --http2-max-streams-per-connection="0"
I0712 22:54:43.208289 1 flags.go:27] FLAG: --insecure-bind-address="127.0.0.1"
I0712 22:54:43.208325 1 flags.go:27] FLAG: --insecure-port="8080"
I0712 22:54:43.208387 1 flags.go:27] FLAG: --ir-data-source="influxdb"
I0712 22:54:43.208423 1 flags.go:27] FLAG: --ir-dbname="k8s"
I0712 22:54:43.208457 1 flags.go:27] FLAG: --ir-hawkular=""
I0712 22:54:43.208491 1 flags.go:27] FLAG: --ir-influxdb-host="localhost:8080/api/v1/namespaces/kube-system/services/monitoring-influxdb:api/proxy"
I0712 22:54:43.208555 1 flags.go:27] FLAG: --ir-namespace-only="false"
I0712 22:54:43.208591 1 flags.go:27] FLAG: --ir-password="root"
I0712 22:54:43.208625 1 flags.go:27] FLAG: --ir-percentile="90"
I0712 22:54:43.208659 1 flags.go:27] FLAG: --ir-user="root"
I0712 22:54:43.208721 1 flags.go:27] FLAG: --kubelet-certificate-authority=""
I0712 22:54:43.208756 1 flags.go:27] FLAG: --kubelet-client-certificate=""
I0712 22:54:43.208790 1 flags.go:27] FLAG: --kubelet-client-key=""
I0712 22:54:43.208823 1 flags.go:27] FLAG: --kubelet-https="true"
I0712 22:54:43.208886 1 flags.go:27] FLAG: --kubelet-port="10250"
I0712 22:54:43.208926 1 flags.go:27] FLAG: --kubelet-preferred-address-types="[InternalIP,Hostname,ExternalIP]"
I0712 22:54:43.208965 1 flags.go:27] FLAG: --kubelet-read-only-port="10255"
I0712 22:54:43.209026 1 flags.go:27] FLAG: --kubelet-timeout="5s"
I0712 22:54:43.209065 1 flags.go:27] FLAG: --kubernetes-service-node-port="0"
I0712 22:54:43.209100 1 flags.go:27] FLAG: --log-backtrace-at=":0"
I0712 22:54:43.209139 1 flags.go:27] FLAG: --log-dir=""
I0712 22:54:43.209201 1 flags.go:27] FLAG: --log-flush-frequency="5s"
I0712 22:54:43.209239 1 flags.go:27] FLAG: --logtostderr="true"
I0712 22:54:43.209274 1 flags.go:27] FLAG: --master-service-namespace="default"
I0712 22:54:43.209309 1 flags.go:27] FLAG: --max-connection-bytes-per-sec="0"
I0712 22:54:43.209366 1 flags.go:27] FLAG: --max-mutating-requests-inflight="200"
I0712 22:54:43.209404 1 flags.go:27] FLAG: --max-requests-inflight="400"
I0712 22:54:43.209439 1 flags.go:27] FLAG: --min-request-timeout="1800"
I0712 22:54:43.209474 1 flags.go:27] FLAG: --oidc-ca-file=""
I0712 22:54:43.209531 1 flags.go:27] FLAG: --oidc-client-id=""
I0712 22:54:43.209567 1 flags.go:27] FLAG: --oidc-groups-claim=""
I0712 22:54:43.209607 1 flags.go:27] FLAG: --oidc-groups-prefix=""
I0712 22:54:43.209642 1 flags.go:27] FLAG: --oidc-issuer-url=""
I0712 22:54:43.209701 1 flags.go:27] FLAG: --oidc-signing-algs="[RS256]"
I0712 22:54:43.209746 1 flags.go:27] FLAG: --oidc-username-claim="sub"
I0712 22:54:43.209780 1 flags.go:27] FLAG: --oidc-username-prefix=""
I0712 22:54:43.209814 1 flags.go:27] FLAG: --port="8080"
I0712 22:54:43.209874 1 flags.go:27] FLAG: --profiling="true"
I0712 22:54:43.209911 1 flags.go:27] FLAG: --proxy-client-cert-file="/srv/kubernetes/apiserver-aggregator.cert"
I0712 22:54:43.209946 1 flags.go:27] FLAG: --proxy-client-key-file="/srv/kubernetes/apiserver-aggregator.key"
I0712 22:54:43.209981 1 flags.go:27] FLAG: --repair-malformed-updates="true"
I0712 22:54:43.210039 1 flags.go:27] FLAG: --request-timeout="1m0s"
I0712 22:54:43.210076 1 flags.go:27] FLAG: --requestheader-allowed-names="[aggregator]"
I0712 22:54:43.210114 1 flags.go:27] FLAG: --requestheader-client-ca-file="/srv/kubernetes/apiserver-aggregator-ca.cert"
I0712 22:54:43.210149 1 flags.go:27] FLAG: --requestheader-extra-headers-prefix="[X-Remote-Extra-]"
I0712 22:54:43.210217 1 flags.go:27] FLAG: --requestheader-group-headers="[X-Remote-Group]"
I0712 22:54:43.210256 1 flags.go:27] FLAG: --requestheader-username-headers="[X-Remote-User]"
I0712 22:54:43.210299 1 flags.go:27] FLAG: --runtime-config=""
I0712 22:54:43.210375 1 flags.go:27] FLAG: --secure-port="443"
I0712 22:54:43.210415 1 flags.go:27] FLAG: --service-account-api-audiences="[]"
I0712 22:54:43.210453 1 flags.go:27] FLAG: --service-account-issuer=""
I0712 22:54:43.210486 1 flags.go:27] FLAG: --service-account-key-file="[]"
I0712 22:54:43.210568 1 flags.go:27] FLAG: --service-account-lookup="true"
I0712 22:54:43.210603 1 flags.go:27] FLAG: --service-account-signing-key-file=""
I0712 22:54:43.210637 1 flags.go:27] FLAG: --service-cluster-ip-range="100.64.0.0/13"
I0712 22:54:43.210702 1 flags.go:27] FLAG: --service-node-port-range="30000-32767"
I0712 22:54:43.210744 1 flags.go:27] FLAG: --ssh-keyfile=""
I0712 22:54:43.210779 1 flags.go:27] FLAG: --ssh-user=""
I0712 22:54:43.210812 1 flags.go:27] FLAG: --stderrthreshold="2"
I0712 22:54:43.210872 1 flags.go:27] FLAG: --storage-backend="etcd2"
I0712 22:54:43.210908 1 flags.go:27] FLAG: --storage-media-type="application/vnd.kubernetes.protobuf"
I0712 22:54:43.210943 1 flags.go:27] FLAG: --storage-version=""
I0712 22:54:43.210977 1 flags.go:27] FLAG: --storage-versions="admission.k8s.io/v1beta1,admissionregistration.k8s.io/v1beta1,apps/v1,authentication.k8s.io/v1,authorization.k8s.io/v1,autoscaling/v1,batch/v1,certificates.k8s.io/v1beta1,componentconfig/v1alpha1,events.k8s.io/v1beta1,extensions/v1beta1,imagepolicy.k8s.io/v1alpha1,networking.k8s.io/v1,policy/v1beta1,rbac.authorization.k8s.io/v1,scheduling.k8s.io/v1alpha1,settings.k8s.io/v1alpha1,storage.k8s.io/v1,v1"
I0712 22:54:43.211052 1 flags.go:27] FLAG: --target-ram-mb="0"
I0712 22:54:43.211090 1 flags.go:27] FLAG: --tls-ca-file=""
I0712 22:54:43.211123 1 flags.go:27] FLAG: --tls-cert-file="/srv/kubernetes/server.cert"
I0712 22:54:43.211158 1 flags.go:27] FLAG: --tls-cipher-suites="[]"
I0712 22:54:43.212147 1 flags.go:27] FLAG: --tls-min-version=""
I0712 22:54:43.212187 1 flags.go:27] FLAG: --tls-private-key-file="/srv/kubernetes/server.key"
I0712 22:54:43.212223 1 flags.go:27] FLAG: --tls-sni-cert-key="[]"
I0712 22:54:43.212291 1 flags.go:27] FLAG: --token-auth-file="/srv/kubernetes/known_tokens.csv"
I0712 22:54:43.212330 1 flags.go:27] FLAG: --v="2"
I0712 22:54:43.212366 1 flags.go:27] FLAG: --version="false"
I0712 22:54:43.212405 1 flags.go:27] FLAG: --vmodule=""
I0712 22:54:43.212465 1 flags.go:27] FLAG: --watch-cache="true"
I0712 22:54:43.212502 1 flags.go:27] FLAG: --watch-cache-sizes="[]"
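
The flag dump above (flags.go:27) records the exact configuration this apiserver was started with, including the deprecated --etcd-quorum-read, --insecure-bind-address and --insecure-port flags warned about at the top. When chasing a failed upgrade, diffing this dump against the pre-upgrade master's log shows exactly what kops changed. A minimal parsing sketch (the filenames are illustrative, not part of the gist):

```python
import re
import sys

# Matches lines like:
#   I0712 22:54:43.196267  1 flags.go:27] FLAG: --address="127.0.0.1"
FLAG_RE = re.compile(r'flags\.go:\d+\] FLAG: (--[\w-]+)="(.*)"$')

def parse_flags(path):
    """Return {flag: value} from a kube-apiserver startup log."""
    flags = {}
    with open(path) as f:
        for line in f:
            m = FLAG_RE.search(line.rstrip("\n"))
            if m:
                flags[m.group(1)] = m.group(2)
    return flags

if __name__ == "__main__":
    # Usage: python diff_flags.py old-apiserver.log new-apiserver.log
    old, new = parse_flags(sys.argv[1]), parse_flags(sys.argv[2])
    for flag in sorted(old.keys() | new.keys()):
        if old.get(flag) != new.get(flag):
            print(f"{flag}: {old.get(flag)!r} -> {new.get(flag)!r}")
```
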
I0712 22:54:43.212579 1 server.go:135] Version: v1.10.3
I0712 22:54:43.213145 1 server.go:724] external host was not specified, using 172.20.57.206
I0712 22:54:43.221894 1 server.go:748] Initializing deserialization cache size based on 0MB limit
I0712 22:54:43.221961 1 server.go:767] Initializing cache sizes based on 0MB limit
W0712 22:54:44.628344 1 admission.go:68] PersistentVolumeLabel admission controller is deprecated. Please remove this controller from your configuration files and scripts.
I0712 22:54:44.628862 1 feature_gate.go:190] feature gates: map[Initializers:true]
I0712 22:54:44.628944 1 initialization.go:90] enabled Initializers feature as part of admission plugin setup
I0712 22:54:44.629378 1 plugins.go:149] Loaded 11 admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,DefaultTolerationSeconds,PersistentVolumeLabel,DefaultStorageClass,MutatingAdmissionWebhook,Initializers,ValidatingAdmissionWebhook,ResourceQuota.
W0712 22:54:44.629994 1 admission.go:68] PersistentVolumeLabel admission controller is deprecated. Please remove this controller from your configuration files and scripts.
I0712 22:54:44.630470 1 plugins.go:149] Loaded 11 admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,DefaultTolerationSeconds,PersistentVolumeLabel,DefaultStorageClass,MutatingAdmissionWebhook,Initializers,ValidatingAdmissionWebhook,ResourceQuota.
I0712 22:54:44.640772 1 store.go:1391] Monitoring customresourcedefinitions.apiextensions.k8s.io count at <storage-prefix>//apiextensions.k8s.io/customresourcedefinitions
I0712 22:54:44.656376 1 master.go:228] Using reconciler: master-count
W0712 22:54:44.658194 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.660858 1 store.go:1391] Monitoring podtemplates count at <storage-prefix>//podtemplates
W0712 22:54:44.660948 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.661152 1 store.go:1391] Monitoring events count at <storage-prefix>//events
W0712 22:54:44.661209 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.661828 1 store.go:1391] Monitoring limitranges count at <storage-prefix>//limitranges
W0712 22:54:44.661911 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.662880 1 store.go:1391] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
W0712 22:54:44.663026 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.663713 1 store.go:1391] Monitoring secrets count at <storage-prefix>//secrets
W0712 22:54:44.672095 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.673051 1 store.go:1391] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
W0712 22:54:44.673146 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.673517 1 store.go:1391] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
W0712 22:54:44.673645 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.673893 1 store.go:1391] Monitoring configmaps count at <storage-prefix>//configmaps
W0712 22:54:44.674003 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.674344 1 store.go:1391] Monitoring namespaces count at <storage-prefix>//namespaces
W0712 22:54:44.674449 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.674836 1 store.go:1391] Monitoring endpoints count at <storage-prefix>//services/endpoints
W0712 22:54:44.674940 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.675699 1 store.go:1391] Monitoring nodes count at <storage-prefix>//minions
W0712 22:54:44.675841 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.676435 1 store.go:1391] Monitoring pods count at <storage-prefix>//pods
W0712 22:54:44.676549 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.676810 1 store.go:1391] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
W0712 22:54:44.676919 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.678783 1 store.go:1391] Monitoring services count at <storage-prefix>//services/specs
W0712 22:54:44.678935 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
W0712 22:54:44.694344 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.695285 1 store.go:1391] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0712 22:54:44.736784 1 master.go:417] Enabling API group "authentication.k8s.io".
I0712 22:54:44.736968 1 master.go:417] Enabling API group "authorization.k8s.io".
W0712 22:54:44.737273 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.737911 1 store.go:1391] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
W0712 22:54:44.738069 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.738422 1 store.go:1391] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0712 22:54:44.738519 1 master.go:417] Enabling API group "autoscaling".
W0712 22:54:44.738687 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.739276 1 store.go:1391] Monitoring jobs.batch count at <storage-prefix>//jobs
W0712 22:54:44.739433 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.740042 1 store.go:1391] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0712 22:54:44.740122 1 master.go:417] Enabling API group "batch".
W0712 22:54:44.740293 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.740946 1 store.go:1391] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0712 22:54:44.741024 1 master.go:417] Enabling API group "certificates.k8s.io".
W0712 22:54:44.741204 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.741430 1 store.go:1391] Monitoring replicationcontrollers count at <storage-prefix>//controllers
W0712 22:54:44.741613 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.742378 1 store.go:1391] Monitoring daemonsets.extensions count at <storage-prefix>//daemonsets
W0712 22:54:44.742529 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.743245 1 store.go:1391] Monitoring deployments.extensions count at <storage-prefix>//deployments
W0712 22:54:44.743409 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.744199 1 store.go:1391] Monitoring ingresses.extensions count at <storage-prefix>//ingress
W0712 22:54:44.744386 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.756560 1 store.go:1391] Monitoring podsecuritypolicies.extensions count at <storage-prefix>//podsecuritypolicy
W0712 22:54:44.756823 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.757518 1 store.go:1391] Monitoring replicasets.extensions count at <storage-prefix>//replicasets
W0712 22:54:44.757697 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.758602 1 store.go:1391] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0712 22:54:44.758698 1 master.go:417] Enabling API group "extensions".
W0712 22:54:44.758859 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.759215 1 store.go:1391] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0712 22:54:44.759270 1 master.go:417] Enabling API group "networking.k8s.io".
W0712 22:54:44.759429 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.773637 1 store.go:1391] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
W0712 22:54:44.774473 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.808494 1 store.go:1391] Monitoring podsecuritypolicies.extensions count at <storage-prefix>//podsecuritypolicy
I0712 22:54:44.808776 1 master.go:417] Enabling API group "policy".
W0712 22:54:44.808846 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.824577 1 store.go:1391] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
W0712 22:54:44.824832 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.825432 1 store.go:1391] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
W0712 22:54:44.825519 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.834483 1 store.go:1391] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
W0712 22:54:44.834611 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.834962 1 store.go:1391] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
W0712 22:54:44.835046 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.835291 1 store.go:1391] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
W0712 22:54:44.835435 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.835703 1 store.go:1391] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
W0712 22:54:44.835827 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.836055 1 store.go:1391] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
W0712 22:54:44.836192 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.836399 1 store.go:1391] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0712 22:54:44.836440 1 master.go:417] Enabling API group "rbac.authorization.k8s.io".
I0712 22:54:44.838460 1 master.go:409] Skipping disabled API group "scheduling.k8s.io".
I0712 22:54:44.838521 1 master.go:409] Skipping disabled API group "settings.k8s.io".
W0712 22:54:44.838721 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.839139 1 store.go:1391] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
W0712 22:54:44.839190 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.839892 1 store.go:1391] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
W0712 22:54:44.840046 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.840449 1 store.go:1391] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0712 22:54:44.840520 1 master.go:417] Enabling API group "storage.k8s.io".
W0712 22:54:44.840695 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.840944 1 store.go:1391] Monitoring deployments.extensions count at <storage-prefix>//deployments
W0712 22:54:44.841172 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.842025 1 store.go:1391] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
W0712 22:54:44.850273 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.850562 1 store.go:1391] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
W0712 22:54:44.850672 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.850859 1 store.go:1391] Monitoring deployments.extensions count at <storage-prefix>//deployments
W0712 22:54:44.850972 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.851152 1 store.go:1391] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
W0712 22:54:44.851273 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.851505 1 store.go:1391] Monitoring daemonsets.extensions count at <storage-prefix>//daemonsets
W0712 22:54:44.851625 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.851822 1 store.go:1391] Monitoring replicasets.extensions count at <storage-prefix>//replicasets
W0712 22:54:44.851931 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.852102 1 store.go:1391] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
W0712 22:54:44.852211 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.852386 1 store.go:1391] Monitoring deployments.extensions count at <storage-prefix>//deployments
W0712 22:54:44.852489 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.852686 1 store.go:1391] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
W0712 22:54:44.852788 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.852996 1 store.go:1391] Monitoring daemonsets.extensions count at <storage-prefix>//daemonsets
W0712 22:54:44.853106 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.853341 1 store.go:1391] Monitoring replicasets.extensions count at <storage-prefix>//replicasets
W0712 22:54:44.853461 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.853673 1 store.go:1391] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0712 22:54:44.853702 1 master.go:417] Enabling API group "apps".
W0712 22:54:44.853740 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.854379 1 store.go:1391] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
W0712 22:54:44.854418 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.854694 1 store.go:1391] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0712 22:54:44.854725 1 master.go:417] Enabling API group "admissionregistration.k8s.io".
W0712 22:54:44.854763 1 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
I0712 22:54:44.854899 1 store.go:1391] Monitoring events count at <storage-prefix>//events
I0712 22:54:44.854927 1 master.go:417] Enabling API group "events.k8s.io".
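
Every resource registration above is paired with a storage_codec.go warning because the server runs with --storage-backend=etcd2 while --storage-media-type is set to application/vnd.kubernetes.protobuf; the etcd2 backend cannot store protobuf, so the apiserver silently falls back to JSON. That is expected noise with etcd2, but during an upgrade it is worth confirming what is actually listening behind the --etcd-servers endpoints (http://127.0.0.1:4001, with /events overridden to :4002 per the flags above). A small probe, assuming those ports are reachable from the master (a sketch, not part of the original gist):

```python
import urllib.request

# Endpoints taken from --etcd-servers and --etcd-servers-overrides above.
ENDPOINTS = ["http://127.0.0.1:4001", "http://127.0.0.1:4002"]

for ep in ENDPOINTS:
    try:
        # etcd serves /version on the client port; recent v2 and v3
        # releases return JSON, older v2 returns plain text.
        with urllib.request.urlopen(f"{ep}/version", timeout=3) as resp:
            print(ep, resp.read().decode().strip())
    except Exception as exc:  # connection refused, timeout, etc.
        print(ep, "unreachable:", exc)
```
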
W0712 22:54:44.997792 1 genericapiserver.go:342] Skipping API batch/v2alpha1 because it has no resources.
W0712 22:54:45.147485 1 genericapiserver.go:342] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0712 22:54:45.149248 1 genericapiserver.go:342] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0712 22:54:45.248297 1 trace.go:76] Trace[200412]: "List *networking.NetworkPolicyList" (started: 2018-07-12 22:54:44.80711489 +0000 UTC m=+1.793206810) (total time: 441.135294ms):
Trace[200412]: [441.037514ms] [441.025036ms] Etcd node listed
I0712 22:54:45.249299 1 trace.go:76] Trace[244765603]: "List *networking.NetworkPolicyList" (started: 2018-07-12 22:54:44.806923152 +0000 UTC m=+1.793015046) (total time: 442.356042ms):
Trace[244765603]: [442.341843ms] [442.323791ms] Etcd node listed
I0712 22:54:45.254155 1 trace.go:76] Trace[2504308]: "List *extensions.PodSecurityPolicyList" (started: 2018-07-12 22:54:44.80645731 +0000 UTC m=+1.792549196) (total time: 447.68018ms):
Trace[2504308]: [447.612154ms] [447.588062ms] Etcd node listed
I0712 22:54:45.298859 1 trace.go:76] Trace[1511072914]: "List *rbac.RoleList" (started: 2018-07-12 22:54:44.897805999 +0000 UTC m=+1.883897893) (total time: 401.018935ms):
Trace[1511072914]: [400.546518ms] [400.532814ms] Etcd node listed
I0712 22:54:45.299402 1 trace.go:76] Trace[1837001089]: "List *extensions.PodSecurityPolicyList" (started: 2018-07-12 22:54:44.895322342 +0000 UTC m=+1.881414236) (total time: 404.061961ms):
Trace[1837001089]: [404.044705ms] [404.027642ms] Etcd node listed
I0712 22:54:45.316111 1 trace.go:76] Trace[2099417894]: "List *policy.PodDisruptionBudgetList" (started: 2018-07-12 22:54:44.807277251 +0000 UTC m=+1.793369140) (total time: 508.816021ms):
Trace[2099417894]: [508.770117ms] [508.752367ms] Etcd node listed
W0712 22:54:45.347047 1 genericapiserver.go:342] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
I0712 22:54:45.403615 1 trace.go:76] Trace[1185236329]: "List *admissionregistration.MutatingWebhookConfigurationList" (started: 2018-07-12 22:54:44.960694745 +0000 UTC m=+1.946786633) (total time: 442.888181ms):
Trace[1185236329]: [442.834018ms] [442.814195ms] Etcd node listed
I0712 22:54:45.403920 1 trace.go:76] Trace[916414682]: "List *admissionregistration.ValidatingWebhookConfigurationList" (started: 2018-07-12 22:54:44.960497646 +0000 UTC m=+1.946589546) (total time: 443.405096ms):
Trace[916414682]: [443.368983ms] [443.343216ms] Etcd node listed
I0712 22:54:45.405441 1 trace.go:76] Trace[257104441]: "List *rbac.ClusterRoleBindingList" (started: 2018-07-12 22:54:44.899874451 +0000 UTC m=+1.885966345) (total time: 505.547464ms):
Trace[257104441]: [357.285003ms] [357.248879ms] Etcd node listed
Trace[257104441]: [505.543723ms] [148.25872ms] Node list decoded
I0712 22:54:45.571940 1 trace.go:76] Trace[398300400]: "decodeNodeList *[]core.Pod" (started: 2018-07-12 22:54:44.877583705 +0000 UTC m=+1.863675594) (total time: 694.318173ms):
Trace[398300400]: [694.313922ms] [694.313922ms] Decoded 4 nodes
I0712 22:54:45.589301 1 trace.go:76] Trace[2056467410]: "decodeNodeList *[]core.Pod" (started: 2018-07-12 22:54:44.87758198 +0000 UTC m=+1.863673886) (total time: 711.70065ms):
Trace[2056467410]: [711.697535ms] [711.697535ms] Decoded 5 nodes
I0712 22:54:45.589421 1 trace.go:76] Trace[45071193]: "List *core.PodList" (started: 2018-07-12 22:54:44.723167224 +0000 UTC m=+1.709259111) (total time: 866.232859ms):
Trace[45071193]: [154.393059ms] [154.37486ms] Etcd node listed
Trace[45071193]: [866.223856ms] [711.830797ms] Node list decoded
I0712 22:54:45.590374 1 trace.go:76] Trace[1821278552]: "decodeNodeList *[]extensions.Deployment" (started: 2018-07-12 22:54:45.070771512 +0000 UTC m=+2.056863401) (total time: 519.584359ms):
Trace[1821278552]: [519.581952ms] [519.581952ms] Decoded 2 nodes
I0712 22:54:45.594836 1 trace.go:76] Trace[1539435081]: "decodeNodeList *[]extensions.Deployment" (started: 2018-07-12 22:54:45.070769792 +0000 UTC m=+2.056861688) (total time: 524.048084ms):
Trace[1539435081]: [524.04574ms] [524.04574ms] Decoded 5 nodes
I0712 22:54:45.594959 1 trace.go:76] Trace[1575925873]: "List *extensions.DeploymentList" (started: 2018-07-12 22:54:44.754572001 +0000 UTC m=+1.740663888) (total time: 840.367518ms):
Trace[1575925873]: [316.17662ms] [316.151131ms] Etcd node listed
Trace[1575925873]: [840.350808ms] [524.174188ms] Node list decoded
I0712 22:54:45.634051 1 trace.go:76] Trace[1901491825]: "List *extensions.ReplicaSetList" (started: 2018-07-12 22:54:44.806738933 +0000 UTC m=+1.792830824) (total time: 827.282029ms):
Trace[1901491825]: [445.124318ms] [445.104068ms] Etcd node listed
Trace[1901491825]: [827.275186ms] [382.150868ms] Node list decoded
I0712 22:54:45.651644 1 trace.go:76] Trace[1475808219]: "List *extensions.DeploymentList" (started: 2018-07-12 22:54:44.943422879 +0000 UTC m=+1.929514770) (total time: 708.201433ms):
Trace[1475808219]: [369.908044ms] [369.88315ms] Etcd node listed
Trace[1475808219]: [708.198047ms] [338.290003ms] Node list decoded
I0712 22:54:45.656058 1 trace.go:76] Trace[1198032628]: "List *extensions.DeploymentList" (started: 2018-07-12 22:54:44.937346446 +0000 UTC m=+1.923438340) (total time: 718.691732ms):
Trace[1198032628]: [377.106725ms] [377.0788ms] Etcd node listed
Trace[1198032628]: [718.688494ms] [341.581769ms] Node list decoded
I0712 22:54:45.668584 1 trace.go:76] Trace[1174344164]: "List *extensions.DeploymentList" (started: 2018-07-12 22:54:44.940060162 +0000 UTC m=+1.926152053) (total time: 728.505429ms):
Trace[1174344164]: [375.516925ms] [375.498175ms] Etcd node listed
Trace[1174344164]: [728.502149ms] [352.985224ms] Node list decoded
I0712 22:54:45.673376 1 trace.go:76] Trace[1285113115]: "List *extensions.ReplicaSetList" (started: 2018-07-12 22:54:44.945933966 +0000 UTC m=+1.932025897) (total time: 727.422999ms):
Trace[1285113115]: [372.357886ms] [372.341427ms] Etcd node listed
Trace[1285113115]: [727.419746ms] [355.06186ms] Node list decoded
I0712 22:54:45.690710 1 trace.go:76] Trace[1932444272]: "List *extensions.ReplicaSetList" (started: 2018-07-12 22:54:44.942121558 +0000 UTC m=+1.928213478) (total time: 748.569216ms):
Trace[1932444272]: [387.866ms] [387.837875ms] Etcd node listed
Trace[1932444272]: [748.565812ms] [360.699812ms] Node list decoded
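
The trace.go entries above show the initial list requests taking roughly 400-870ms each, with a large share of each total spent in the "Node list decoded" step, i.e. decoding the JSON payloads that etcd2 returns. A quick way to see which lists are slowest is to sum the "(total time: ...)" headers over the same saved log used in the earlier sketch (again an illustrative filename):

```python
import re
import sys

# Matches trace headers such as:
#   Trace[45071193]: "List *core.PodList" (started: ...) (total time: 866.232859ms):
TRACE_RE = re.compile(r'Trace\[\d+\]: "(?P<op>[^"]+)".*\(total time: (?P<ms>[\d.]+)ms\)')

totals = {}
with open(sys.argv[1]) as f:
    for line in f:
        m = TRACE_RE.search(line)
        if m:
            totals.setdefault(m.group("op"), []).append(float(m.group("ms")))

for op, times in sorted(totals.items(), key=lambda kv: -max(kv[1])):
    print(f"{op}: n={len(times)} max={max(times):.0f}ms avg={sum(times)/len(times):.0f}ms")
```
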
[restful] 2018/07/12 22:54:45 log.go:33: [restful/swagger] listing is available at https://172.20.57.206:443/swaggerapi
[restful] 2018/07/12 22:54:45 log.go:33: [restful/swagger] https://172.20.57.206:443/swaggerui/ is mapped to folder /swagger-ui/
[restful] 2018/07/12 22:54:50 log.go:33: [restful/swagger] listing is available at https://172.20.57.206:443/swaggerapi
[restful] 2018/07/12 22:54:50 log.go:33: [restful/swagger] https://172.20.57.206:443/swaggerui/ is mapped to folder /swagger-ui/
W0712 22:54:50.176677 1 admission.go:68] PersistentVolumeLabel admission controller is deprecated. Please remove this controller from your configuration files and scripts.
I0712 22:54:50.177412 1 plugins.go:149] Loaded 11 admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,DefaultTolerationSeconds,PersistentVolumeLabel,DefaultStorageClass,MutatingAdmissionWebhook,Initializers,ValidatingAdmissionWebhook,ResourceQuota.
I0712 22:54:50.187809 1 store.go:1391] Monitoring apiservices.apiregistration.k8s.io count at <storage-prefix>//apiregistration.k8s.io/apiservices
I0712 22:54:50.188017 1 store.go:1391] Monitoring apiservices.apiregistration.k8s.io count at <storage-prefix>//apiregistration.k8s.io/apiservices
I0712 22:55:00.301113 1 insecure_handler.go:121] Serving insecurely on 127.0.0.1:8080
I0712 22:55:00.302333 1 serve.go:96] Serving securely on [::]:443
I0712 22:55:00.304094 1 crd_finalizer.go:242] Starting CRDFinalizer
I0712 22:55:00.304218 1 apiservice_controller.go:90] Starting APIServiceRegistrationController
I0712 22:55:00.304262 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0712 22:55:00.304352 1 available_controller.go:262] Starting AvailableConditionController
I0712 22:55:00.304391 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0712 22:55:00.304517 1 controller.go:84] Starting OpenAPI AggregationController
I0712 22:55:00.321249 1 logs.go:49] http: TLS handshake error from 127.0.0.1:28734: EOF
I0712 22:55:00.333651 1 logs.go:49] http: TLS handshake error from 127.0.0.1:28736: EOF
I0712 22:55:00.337899 1 logs.go:49] http: TLS handshake error from 127.0.0.1:28738: EOF
I0712 22:55:00.350472 1 logs.go:49] http: TLS handshake error from 127.0.0.1:28740: EOF
I0712 22:55:00.354598 1 logs.go:49] http: TLS handshake error from 127.0.0.1:28742: EOF
I0712 22:55:00.367026 1 logs.go:49] http: TLS handshake error from 127.0.0.1:28744: EOF
I0712 22:55:00.371138 1 logs.go:49] http: TLS handshake error from 127.0.0.1:28746: EOF
I0712 22:55:00.383882 1 logs.go:49] http: TLS handshake error from 127.0.0.1:28748: EOF
I0712 22:55:00.388222 1 logs.go:49] http: TLS handshake error from 127.0.0.1:28750: EOF
I0712 22:55:00.401324 1 logs.go:49] http: TLS handshake error from 127.0.0.1:28752: EOF
I0712 22:55:00.405440 1 logs.go:49] http: TLS handshake error from 127.0.0.1:28754: EOF
I0712 22:55:00.418294 1 logs.go:49] http: TLS handshake error from 127.0.0.1:28756: EOF
I0712 22:55:00.430993 1 logs.go:49] http: TLS handshake error from 127.0.0.1:28758: EOF
I0712 22:55:00.435194 1 logs.go:49] http: TLS handshake error from 127.0.0.1:28760: EOF
I0712 22:55:00.448244 1 logs.go:49] http: TLS handshake error from 127.0.0.1:28762: EOF
I0712 22:55:00.452507 1 logs.go:49] http: TLS handshake error from 127.0.0.1:28764: EOF
I0712 22:55:00.465168 1 logs.go:49] http: TLS handshake error from 127.0.0.1:28766: EOF
I0712 22:55:00.486916 1 logs.go:49] http: TLS handshake error from 127.0.0.1:28768: EOF
I0712 22:55:00.504060 1 logs.go:49] http: TLS handshake error from 127.0.0.1:28774: EOF
I0712 22:55:00.508291 1 logs.go:49] http: TLS handshake error from 127.0.0.1:28776: EOF
I0712 22:55:00.525125 1 logs.go:49] http: TLS handshake error from 127.0.0.1:28890: EOF
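
The burst of "TLS handshake error ... EOF" entries begins the moment the secure port starts serving; connections from 127.0.0.1 that close before completing the handshake are typically local health checks or load-balancer TCP probes rather than clients rejecting the certificate. If there is any doubt about the serving certificate itself (/srv/kubernetes/server.cert per the flags above), a local probe that completes a real handshake and prints the presented certificate settles it (a diagnostic sketch, not from the original gist):

```python
import socket
import ssl

HOST, PORT = "127.0.0.1", 443  # secure port, from --secure-port above

# Skip verification: we only want to inspect what certificate is served.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname="kubernetes") as tls:
        der = tls.getpeercert(binary_form=True)
        print("TLS version:", tls.version())
        print(ssl.DER_cert_to_PEM_cert(der))
```
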
I0712 22:55:00.900825 1 customresource_discovery_controller.go:174] Starting DiscoveryController
I0712 22:55:00.900987 1 naming_controller.go:276] Starting NamingConditionController
I0712 22:55:00.901081 1 crdregistration_controller.go:110] Starting crd-autoregister controller
I0712 22:55:00.901129 1 controller_utils.go:1019] Waiting for caches to sync for crd-autoregister controller
I0712 22:55:01.030717 1 wrap.go:42] POST /api/v1/namespaces/default/events: (432.933µs) 403 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28900]
I0712 22:55:01.033925 1 wrap.go:42] GET /api/v1/pods?fieldSelector=spec.schedulerName%3Ddefault-scheduler%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (223.063µs) 403 [[kube-scheduler/v1.10.3 (linux/amd64) kubernetes/2bba012/scheduler] 127.0.0.1:28926]
I0712 22:55:01.034845 1 wrap.go:42] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (558.713µs) 403 [[kube-scheduler/v1.10.3 (linux/amd64) kubernetes/2bba012/scheduler] 127.0.0.1:28926]
I0712 22:55:01.035333 1 wrap.go:42] GET /apis/apps/v1beta1/statefulsets?limit=500&resourceVersion=0: (314.199µs) 403 [[kube-scheduler/v1.10.3 (linux/amd64) kubernetes/2bba012/scheduler] 127.0.0.1:28926]
I0712 22:55:01.035944 1 wrap.go:42] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (231.302µs) 403 [[kube-scheduler/v1.10.3 (linux/amd64) kubernetes/2bba012/scheduler] 127.0.0.1:28926]
I0712 22:55:01.036399 1 wrap.go:42] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (293.699µs) 403 [[kube-scheduler/v1.10.3 (linux/amd64) kubernetes/2bba012/scheduler] 127.0.0.1:28926]
I0712 22:55:01.037034 1 wrap.go:42] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (311.031µs) 403 [[kube-scheduler/v1.10.3 (linux/amd64) kubernetes/2bba012/scheduler] 127.0.0.1:28926]
I0712 22:55:01.037486 1 wrap.go:42] GET /apis/extensions/v1beta1/replicasets?limit=500&resourceVersion=0: (294.188µs) 403 [[kube-scheduler/v1.10.3 (linux/amd64) kubernetes/2bba012/scheduler] 127.0.0.1:28926]
I0712 22:55:01.046315 1 wrap.go:42] GET /api/v1/services?limit=500&resourceVersion=0: (8.67407ms) 403 [[kube-scheduler/v1.10.3 (linux/amd64) kubernetes/2bba012/scheduler] 127.0.0.1:28926]
I0712 22:55:01.046794 1 wrap.go:42] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (308.074µs) 403 [[kube-scheduler/v1.10.3 (linux/amd64) kubernetes/2bba012/scheduler] 127.0.0.1:28926]
I0712 22:55:01.047334 1 wrap.go:42] GET /api/v1/nodes?limit=500&resourceVersion=0: (11.846107ms) 403 [[kube-scheduler/v1.10.3 (linux/amd64) kubernetes/2bba012/scheduler] 127.0.0.1:28926]
I0712 22:55:01.100145 1 wrap.go:42] GET /api/v1/namespaces/kube-system/endpoints/kube-controller-manager: (337.319µs) 403 [[kube-controller-manager/v1.10.3 (linux/amd64) kubernetes/2bba012/leader-election] 127.0.0.1:28944]
I0712 22:55:01.100948 1 wrap.go:42] GET /api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal: (227.202µs) 403 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28900]
I0712 22:55:01.101797 1 wrap.go:42] GET /api/v1/services?limit=500&resourceVersion=0: (232.917µs) 403 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28900]
I0712 22:55:01.102359 1 wrap.go:42] GET /api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: (244.709µs) 403 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28900]
I0712 22:55:01.111249 1 wrap.go:42] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: (8.573825ms) 403 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28900]
I0712 22:55:01.111858 1 wrap.go:42] POST /api/v1/namespaces/default/events: (253.674µs) 403 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28900]
I0712 22:55:01.112385 1 wrap.go:42] GET /api/v1/nodes/ip-172-x-y-z.us-west-2.compute.internal?timeout=10s: (240.697µs) 403 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28900]
I0712 22:55:01.113132 1 wrap.go:42] GET /api/v1/services?limit=500&resourceVersion=0: (199.588µs) 403 [[kube-proxy/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28910]
I0712 22:55:01.114043 1 wrap.go:42] GET /api/v1/endpoints?limit=500&resourceVersion=0: (235.717µs) 403 [[kube-proxy/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28910]
I0712 22:55:01.114492 1 wrap.go:42] POST /api/v1/namespaces/default/events: (291.821µs) 403 [[kube-proxy/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28910]
I0712 22:55:01.213606 1 wrap.go:42] GET /api/v1/namespaces/kube-system/pods/kube-proxy-ip-172-x-y-z.us-west-2.compute.internal: (361.147µs) 403 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28900]
I0712 22:55:01.214108 1 wrap.go:42] POST /api/v1/namespaces/default/events: (318.809µs) 403 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28900]
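
From here on, requests from kubelet, kube-scheduler, kube-proxy, and kube-controller-manager are all rejected with 403, while the apiserver's own loopback client (kube-apiserver/v1.10.3 at 127.0.0.1:28940) gets 200s. With --authorization-mode=RBAC plus the token and basic-auth files configured above, one common cause after an upgrade is that the component credentials are not yet bound to the expected RBAC roles. Tallying the 403s by user agent and path over the saved log makes the pattern obvious (same illustrative filename as the earlier sketches):

```python
import re
import sys
from collections import Counter

# Matches wrap.go access-log lines, e.g.:
#   wrap.go:42] GET /api/v1/nodes?...: (11.8ms) 403 [[kubelet/v1.10.3 (...)] 127.0.0.1:28900]
ACCESS_RE = re.compile(
    r'wrap\.go:\d+\] (?P<verb>[A-Z]+) (?P<path>\S+): \([^)]*\) (?P<code>\d{3}) \[\[(?P<agent>[^/\]]+)'
)

denied = Counter()
with open(sys.argv[1]) as f:
    for line in f:
        m = ACCESS_RE.search(line)
        if m and m.group("code") == "403":
            key = (m.group("agent"), m.group("verb"), m.group("path").split("?")[0])
            denied[key] += 1

for (agent, verb, path), n in denied.most_common():
    print(f"{n:4d}  {agent:<24} {verb:<5} {path}")
```
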
I0712 22:55:01.228234 1 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations: (194.138µs) 404 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.228671 1 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations: (181.161µs) 404 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.229894 1 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1beta1/mutatingwebhookconfigurations?limit=500&resourceVersion=0: (318.026µs) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.230522 1 wrap.go:42] GET /api/v1/namespaces?limit=500&resourceVersion=0: (469.658µs) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.231447 1 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1beta1/validatingwebhookconfigurations?limit=500&resourceVersion=0: (262.424µs) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.232073 1 wrap.go:42] GET /api/v1/namespaces/kube-system/pods/kube-proxy-ip-172-x-y-z.us-west-2.compute.internal: (272.17µs) 403 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28990]
I0712 22:55:01.240994 1 wrap.go:42] GET /api/v1/serviceaccounts?limit=500&resourceVersion=0: (12.162418ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.241270 1 wrap.go:42] GET /apis/apiregistration.k8s.io/v1/apiservices?limit=500&resourceVersion=0: (10.514431ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.246471 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/roles?limit=500&resourceVersion=0: (392.816µs) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.263216 1 wrap.go:42] GET /api/v1/endpoints?limit=500&resourceVersion=0: (18.300951ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.263607 1 wrap.go:42] GET /api/v1/secrets?limit=500&resourceVersion=0: (18.14958ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.263892 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings?limit=500&resourceVersion=0: (16.113347ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.264938 1 wrap.go:42] GET /api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal: (304.189µs) 403 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28990]
I0712 22:55:01.278380 1 wrap.go:42] GET /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions?limit=500&resourceVersion=0: (559.97µs) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.278911 1 get.go:238] Starting watch for /apis/admissionregistration.k8s.io/v1beta1/validatingwebhookconfigurations, rv=3258306 labels= fields= timeout=7m33s
I0712 22:55:01.279461 1 get.go:238] Starting watch for /api/v1/serviceaccounts, rv=3258306 labels= fields= timeout=6m8s
I0712 22:55:01.279911 1 get.go:238] Starting watch for /apis/admissionregistration.k8s.io/v1beta1/mutatingwebhookconfigurations, rv=3258306 labels= fields= timeout=5m36s
I0712 22:55:01.280341 1 get.go:238] Starting watch for /api/v1/namespaces, rv=3258306 labels= fields= timeout=9m15s
I0712 22:55:01.280745 1 get.go:238] Starting watch for /apis/apiregistration.k8s.io/v1/apiservices, rv=3258306 labels= fields= timeout=6m55s
I0712 22:55:01.286406 1 wrap.go:42] GET /api/v1/resourcequotas?limit=500&resourceVersion=0: (399.534µs) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.287171 1 wrap.go:42] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (572.696µs) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.287885 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/rolebindings?limit=500&resourceVersion=0: (551.462µs) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.297737 1 wrap.go:42] GET /api/v1/limitranges?limit=500&resourceVersion=0: (341.957µs) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.298197 1 get.go:238] Starting watch for /apis/rbac.authorization.k8s.io/v1/clusterrolebindings, rv=3258306 labels= fields= timeout=5m22s
I0712 22:55:01.298648 1 get.go:238] Starting watch for /api/v1/endpoints, rv=3258306 labels= fields= timeout=7m15s
I0712 22:55:01.299074 1 get.go:238] Starting watch for /api/v1/secrets, rv=3258306 labels= fields= timeout=9m40s
I0712 22:55:01.299479 1 get.go:238] Starting watch for /apis/rbac.authorization.k8s.io/v1/roles, rv=3258306 labels= fields= timeout=8m54s
I0712 22:55:01.299891 1 get.go:238] Starting watch for /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions, rv=3258306 labels= fields= timeout=9m27s
I0712 22:55:01.300831 1 wrap.go:42] GET /api/v1/services?limit=500&resourceVersion=0: (4.80984ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.301271 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterroles?limit=500&resourceVersion=0: (4.633609ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.318041 1 get.go:238] Starting watch for /api/v1/resourcequotas, rv=3258306 labels= fields= timeout=7m34s
I0712 22:55:01.318839 1 get.go:238] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=3258306 labels= fields= timeout=9m51s
I0712 22:55:01.319123 1 controller_utils.go:1026] Caches are synced for crd-autoregister controller
I0712 22:55:01.319347 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0712 22:55:01.349949 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0712 22:55:01.351161 1 get.go:238] Starting watch for /apis/rbac.authorization.k8s.io/v1/rolebindings, rv=3258306 labels= fields= timeout=5m39s
I0712 22:55:01.351442 1 autoregister_controller.go:136] Starting autoregister controller
I0712 22:55:01.351526 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0712 22:55:01.352809 1 get.go:238] Starting watch for /api/v1/services, rv=3258306 labels= fields= timeout=7m58s
I0712 22:55:01.353268 1 get.go:238] Starting watch for /apis/rbac.authorization.k8s.io/v1/clusterroles, rv=3258306 labels= fields= timeout=9m11s
I0712 22:55:01.353769 1 get.go:238] Starting watch for /api/v1/limitranges, rv=3258306 labels= fields= timeout=9m45s
I0712 22:55:01.354900 1 wrap.go:42] GET /api/v1/services: (54.802351ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.481587 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1.networking.k8s.io/status: (125.709775ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.483138 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1beta1.authorization.k8s.io/status: (127.763988ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.484091 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1beta1.apps/status: (129.315851ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.484903 1 wrap.go:42] GET /api/v1/namespaces/kube-system: (130.445437ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.559777 1 cache.go:39] Caches are synced for autoregister controller
I0712 22:55:01.602725 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (248.648043ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.603982 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1.rbac.authorization.k8s.io/status: (247.378953ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.604868 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1.batch/status: (248.637374ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.744181 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1beta1.admissionregistration.k8s.io/status: (65.113859ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.745871 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1beta1.apiextensions.k8s.io/status: (66.712816ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.747226 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1alpha1.certmanager.k8s.io/status: (68.973058ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.809944 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1beta1.policy/status: (34.15799ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.827106 1 wrap.go:42] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (147.810569ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.828022 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1beta1.storage.k8s.io/status: (51.646398ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.857662 1 wrap.go:42] GET /api/v1/services: (81.142662ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.858367 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (178.06739ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.891658 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1beta2.apps/status: (34.203117ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.892700 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1.authentication.k8s.io/status: (35.179306ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.893601 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1.autoscaling/status: (36.671494ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.959882 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1.storage.k8s.io/status: (33.229805ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.961489 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (34.750256ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.965735 1 wrap.go:42] PUT /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (38.769273ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:01.974791 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1.authorization.k8s.io/status: (48.977141ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.032475 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1beta1.certificates.k8s.io/status: (33.542214ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.033077 1 wrap.go:42] GET /api/v1/namespaces/default: (33.669139ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.033728 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v2beta1.autoscaling/status: (34.008557ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.064836 1 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations: (167.497µs) 404 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.102561 1 wrap.go:42] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (505.916µs) 200 [[kube-scheduler/v1.10.3 (linux/amd64) kubernetes/2bba012/scheduler] 127.0.0.1:28926]
I0712 22:55:02.113541 1 wrap.go:42] GET /apis/extensions/v1beta1/replicasets?limit=500&resourceVersion=0: (17.249797ms) 200 [[kube-scheduler/v1.10.3 (linux/amd64) kubernetes/2bba012/scheduler] 127.0.0.1:28926]
I0712 22:55:02.113911 1 wrap.go:42] GET /api/v1/pods?fieldSelector=spec.schedulerName%3Ddefault-scheduler%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (14.632346ms) 200 [[kube-scheduler/v1.10.3 (linux/amd64) kubernetes/2bba012/scheduler] 127.0.0.1:28926]
I0712 22:55:02.116063 1 wrap.go:42] GET /api/v1/services?limit=500&resourceVersion=0: (14.526416ms) 200 [[kube-scheduler/v1.10.3 (linux/amd64) kubernetes/2bba012/scheduler] 127.0.0.1:28926]
I0712 22:55:02.117789 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1./status: (53.655545ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.161148 1 wrap.go:42] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (9.486886ms) 200 [[kube-scheduler/v1.10.3 (linux/amd64) kubernetes/2bba012/scheduler] 127.0.0.1:28926]
I0712 22:55:02.163004 1 wrap.go:42] GET /apis/apps/v1beta1/statefulsets?limit=500&resourceVersion=0: (477.722µs) 200 [[kube-scheduler/v1.10.3 (linux/amd64) kubernetes/2bba012/scheduler] 127.0.0.1:28926]
I0712 22:55:02.163893 1 wrap.go:42] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (581.312µs) 200 [[kube-scheduler/v1.10.3 (linux/amd64) kubernetes/2bba012/scheduler] 127.0.0.1:28926]
I0712 22:55:02.164723 1 wrap.go:42] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (663.824µs) 200 [[kube-scheduler/v1.10.3 (linux/amd64) kubernetes/2bba012/scheduler] 127.0.0.1:28926]
I0712 22:55:02.165271 1 wrap.go:42] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (388.975µs) 403 [[kube-scheduler/v1.10.3 (linux/amd64) kubernetes/2bba012/scheduler] 127.0.0.1:28926]
I0712 22:55:02.165883 1 get.go:238] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=3258306 labels= fields= timeout=8m44s
I0712 22:55:02.179449 1 wrap.go:42] GET /api/v1/nodes?limit=500&resourceVersion=0: (17.881289ms) 200 [[kube-scheduler/v1.10.3 (linux/amd64) kubernetes/2bba012/scheduler] 127.0.0.1:28926]
I0712 22:55:02.180296 1 wrap.go:42] GET /api/v1/services?limit=500&resourceVersion=0: (2.210657ms) 200 [[kube-proxy/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28910]
I0712 22:55:02.180662 1 wrap.go:42] GET /api/v1/endpoints?limit=500&resourceVersion=0: (1.903041ms) 200 [[kube-proxy/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28910]
I0712 22:55:02.214667 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (48.448879ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.215450 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1beta1.authentication.k8s.io/status: (48.946072ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.227297 1 get.go:238] Starting watch for /api/v1/services, rv=3258306 labels= fields= timeout=5m52s
I0712 22:55:02.228018 1 get.go:238] Starting watch for /apis/extensions/v1beta1/replicasets, rv=3258306 labels= fields= timeout=7m5s
I0712 22:55:02.228637 1 get.go:238] Starting watch for /api/v1/pods, rv=3258306 labels= fields=spec.schedulerName=default-scheduler,status.phase!=Failed,status.phase!=Succeeded timeout=6m22s
I0712 22:55:02.229560 1 get.go:238] Starting watch for /api/v1/persistentvolumes, rv=3258306 labels= fields= timeout=7m43s
I0712 22:55:02.230167 1 get.go:238] Starting watch for /apis/apps/v1beta1/statefulsets, rv=3258306 labels= fields= timeout=9m11s
I0712 22:55:02.230749 1 get.go:238] Starting watch for /api/v1/persistentvolumeclaims, rv=3258306 labels= fields= timeout=7m17s
I0712 22:55:02.231552 1 get.go:238] Starting watch for /api/v1/replicationcontrollers, rv=3258306 labels= fields= timeout=9m21s
I0712 22:55:02.242205 1 get.go:238] Starting watch for /api/v1/nodes, rv=3258306 labels= fields= timeout=6m53s
I0712 22:55:02.243172 1 wrap.go:42] GET /api/v1/services?limit=500&resourceVersion=0: (27.114824ms) 200 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28990]
I0712 22:55:02.243433 1 wrap.go:42] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: (17.202699ms) 200 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28990]
I0712 22:55:02.278754 1 wrap.go:42] GET /api/v1/namespaces/default/services/kubernetes: (45.905481ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.279912 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1beta1.batch/status: (47.826119ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.280727 1 get.go:238] Starting watch for /api/v1/services, rv=3258306 labels= fields= timeout=5m17s
I0712 22:55:02.281411 1 get.go:238] Starting watch for /api/v1/endpoints, rv=3258306 labels= fields= timeout=9m36s
I0712 22:55:02.282589 1 get.go:238] Starting watch for /api/v1/services, rv=3258306 labels= fields= timeout=6m1s
I0712 22:55:02.283787 1 get.go:238] Starting watch for /api/v1/pods, rv=3258306 labels= fields=spec.nodeName=ip-172-x-y-z.us-west-2.compute.internal timeout=7m27s
I0712 22:55:02.286014 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1beta1.events.k8s.io/status: (52.963936ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.303629 1 wrap.go:42] GET /api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: (86.566582ms) 200 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28990]
I0712 22:55:02.304789 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1beta1.extensions/status: (20.033487ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.305374 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (20.220963ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.306031 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1beta1.rbac.authorization.k8s.io/status: (20.965641ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.359505 1 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations: (285.982µs) 404 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.360070 1 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations: (108.226µs) 404 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.392281 1 get.go:238] Starting watch for /api/v1/nodes, rv=3258306 labels= fields=metadata.name=ip-172-x-y-z.us-west-2.compute.internal timeout=8m4s
I0712 22:55:02.393650 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1.apps/status: (32.866181ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.457029 1 wrap.go:42] GET /api/v1/namespaces/default/endpoints/kubernetes: (96.76376ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.457758 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (33.200982ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.578854 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (31.596011ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.644459 1 wrap.go:42] GET /api/v1/namespaces/kube-system: (33.863634ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.676675 1 wrap.go:42] GET /api/v1/services: (64.821959ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.705458 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (29.319343ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.706317 1 wrap.go:42] GET /api/v1/namespaces/default: (94.825777ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.730294 1 wrap.go:42] GET /api/v1/services: (118.218715ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.768596 1 wrap.go:42] GET /api/v1/namespaces/kube-public: (40.521683ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.833888 1 wrap.go:42] GET /api/v1/namespaces/default/services/kubernetes: (32.98589ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.834410 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (33.077082ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.963585 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (32.900671ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:02.964211 1 wrap.go:42] GET /api/v1/namespaces/default/endpoints/kubernetes: (33.192328ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:03.093547 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (37.201985ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:03.218903 1 wrap.go:42] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (390.722µs) 403 [[kube-scheduler/v1.10.3 (linux/amd64) kubernetes/2bba012/scheduler] 127.0.0.1:28926]
I0712 22:55:03.219941 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (33.64192ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:03.251274 1 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations: (259.977µs) 404 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:03.345071 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (25.1629ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:03.556494 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (66.317371ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:03.587680 1 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations: (335.805µs) 404 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:03.588241 1 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations: (192.348µs) 404 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:03.717696 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (33.073009ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:03.892178 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (65.628727ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:04.037610 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (31.747249ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:04.167487 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (33.044578ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:04.260816 1 wrap.go:42] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (335.441µs) 403 [[kube-scheduler/v1.10.3 (linux/amd64) kubernetes/2bba012/scheduler] 127.0.0.1:28926]
I0712 22:55:04.286934 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (26.845057ms) 404 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:04.349480 1 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations: (269.221µs) 404 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:04.446835 1 wrap.go:42] GET /api/v1/namespaces/kube-system/endpoints/kube-controller-manager: (32.490627ms) 200 [[kube-controller-manager/v1.10.3 (linux/amd64) kubernetes/2bba012/leader-election] 127.0.0.1:28944]
I0712 22:55:04.572839 1 controller.go:537] quota admission added evaluator for: { endpoints}
I0712 22:55:04.604918 1 wrap.go:42] PUT /api/v1/namespaces/kube-system/endpoints/kube-controller-manager: (61.183018ms) 200 [[kube-controller-manager/v1.10.3 (linux/amd64) kubernetes/2bba012/leader-election] 127.0.0.1:28944]
I0712 22:55:04.694774 1 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations: (141.605µs) 404 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:04.695376 1 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations: (106.95µs) 404 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:04.726640 1 wrap.go:42] GET /healthz: (32.997546ms) 500
goroutine 2738 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc426f2ab60, 0x1f4)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:207 +0xdd
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc426f2ab60, 0x1f4)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:186 +0x35
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc426f990c0, 0x1f4)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:188 +0xac
net/http.Error(0x7f89d10a5d60, 0xc42c7ed108, 0xc423c03080, 0x2ae, 0x1f4)
/usr/local/go/src/net/http/server.go:1930 +0xda
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f89d10a5d60, 0xc42c7ed108, 0xc42a1a1400)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz/healthz.go:121 +0x508
net/http.HandlerFunc.ServeHTTP(0xc42cba1de0, 0x7f89d10a5d60, 0xc42c7ed108, 0xc42a1a1400)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc424e65c00, 0x7f89d10a5d60, 0xc42c7ed108, 0xc42a1a1400)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:241 +0x55a
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc42a1141c0, 0x7f89d10a5d60, 0xc42c7ed108, 0xc42a1a1400)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:234 +0xa1
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3d372f1, 0xf, 0xc42a10c1b0, 0xc42a1141c0, 0x7f89d10a5d60, 0xc42c7ed108, 0xc42a1a1400)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:160 +0x6ad
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*director).ServeHTTP(0xc42a0f7200, 0x7f89d10a5d60, 0xc42c7ed108, 0xc42a1a1400)
<autogenerated>:1 +0x75
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f89d10a5d60, 0xc42c7ed108, 0xc42a1a1400)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:52 +0x37d
net/http.HandlerFunc.ServeHTTP(0xc423a32fa0, 0x7f89d10a5d60, 0xc42c7ed108, 0xc42a1a1400)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f89d10a5d60, 0xc42c7ed108, 0xc42a1a1400)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:165 +0x42a
net/http.HandlerFunc.ServeHTTP(0xc429f5fb40, 0x7f89d10a5d60, 0xc42c7ed108, 0xc42a1a1400)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f89d10a5d60, 0xc42c7ed108, 0xc42a1a1400)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:49 +0x203a
net/http.HandlerFunc.ServeHTTP(0xc423a33040, 0x7f89d10a5d60, 0xc42c7ed108, 0xc42a1a1400)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f89d10a5d60, 0xc42c7ed108, 0xc42a1a1400)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:78 +0x2b1
net/http.HandlerFunc.ServeHTTP(0xc423a330e0, 0x7f89d10a5d60, 0xc42c7ed108, 0xc42a1a1400)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/request.WithRequestContext.func1(0x7f89d10a5d60, 0xc42c7ed108, 0xc42a1a1400)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/request/requestcontext.go:110 +0xcb
net/http.HandlerFunc.ServeHTTP(0xc42a0f7220, 0x7f89d10a5d60, 0xc42c7ed108, 0xc42a1a1400)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc42a0f72a0, 0x988f800, 0xc42c7ed108, 0xc42a1a1400, 0xc42bfd8720)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:93 +0x8d
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:92 +0x1ab
logging error output: "[+]ping ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-informers ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
[[kube-controller-manager/v1.10.3 (linux/amd64) kubernetes/2bba012/shared-informers] 127.0.0.1:28944]
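Note on the /healthz 500s above (and the near-identical ones that follow): every check passes except [-]poststarthook/rbac/bootstrap-roles, which is still reconciling the default cluster roles at this point in the startup. As a quick way to see that same per-check breakdown outside the log, the small sketch below queries /healthz with the verbose query parameter; the insecure local address 127.0.0.1:8080 is an assumption for illustration, not something taken from this log.

// Minimal sketch (not from this log): fetch the apiserver's /healthz with
// ?verbose to get the same per-check breakdown quoted in the 500 responses
// above. Assumes the apiserver's insecure local port 127.0.0.1:8080 is
// reachable on the master; adjust the URL/credentials for a secured endpoint.
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:8080/healthz?verbose")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Printf("HTTP %d\n%s", resp.StatusCode, body)
}

A healthy apiserver answers 200 with every check marked [+]; while the bootstrap-roles hook is still failing it answers 500 with exactly the output quoted above.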
I0712 22:55:04.943430 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1beta1.metrics.k8s.io/status: (63.444835ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:05.322146 1 wrap.go:42] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (416.99µs) 403 [[kube-scheduler/v1.10.3 (linux/amd64) kubernetes/2bba012/scheduler] 127.0.0.1:28926]
I0712 22:55:05.500371 1 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations: (256.833µs) 404 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:05.832524 1 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations: (485.382µs) 404 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:05.865898 1 wrap.go:42] GET /healthz: (32.635765ms) 500
goroutine 2795 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc42dfcfe30, 0x1f4)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:207 +0xdd
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc42dfcfe30, 0x1f4)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:186 +0x35
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc42dfc71c0, 0x1f4)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:188 +0xac
net/http.Error(0x7f89d10a5d60, 0xc4280236c0, 0xc42e026580, 0x2ae, 0x1f4)
/usr/local/go/src/net/http/server.go:1930 +0xda
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f89d10a5d60, 0xc4280236c0, 0xc430481600)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz/healthz.go:121 +0x508
net/http.HandlerFunc.ServeHTTP(0xc42cba1de0, 0x7f89d10a5d60, 0xc4280236c0, 0xc430481600)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc424e65c00, 0x7f89d10a5d60, 0xc4280236c0, 0xc430481600)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:241 +0x55a
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc42a1141c0, 0x7f89d10a5d60, 0xc4280236c0, 0xc430481600)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:234 +0xa1
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3d372f1, 0xf, 0xc42a10c1b0, 0xc42a1141c0, 0x7f89d10a5d60, 0xc4280236c0, 0xc430481600)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:160 +0x6ad
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*director).ServeHTTP(0xc42a0f7200, 0x7f89d10a5d60, 0xc4280236c0, 0xc430481600)
<autogenerated>:1 +0x75
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f89d10a5d60, 0xc4280236c0, 0xc430481600)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:52 +0x37d
net/http.HandlerFunc.ServeHTTP(0xc423a32fa0, 0x7f89d10a5d60, 0xc4280236c0, 0xc430481600)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f89d10a5d60, 0xc4280236c0, 0xc430481600)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:165 +0x42a
net/http.HandlerFunc.ServeHTTP(0xc429f5fb40, 0x7f89d10a5d60, 0xc4280236c0, 0xc430481600)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f89d10a5d60, 0xc4280236c0, 0xc430481600)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:49 +0x203a
net/http.HandlerFunc.ServeHTTP(0xc423a33040, 0x7f89d10a5d60, 0xc4280236c0, 0xc430481600)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f89d10a5d60, 0xc4280236c0, 0xc430481600)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:78 +0x2b1
net/http.HandlerFunc.ServeHTTP(0xc423a330e0, 0x7f89d10a5d60, 0xc4280236c0, 0xc430481600)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/request.WithRequestContext.func1(0x7f89d10a5d60, 0xc4280236c0, 0xc430481600)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/request/requestcontext.go:110 +0xcb
net/http.HandlerFunc.ServeHTTP(0xc42a0f7220, 0x7f89d10a5d60, 0xc4280236c0, 0xc430481600)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc42a0f72a0, 0x988f800, 0xc4280236c0, 0xc430481600, 0xc42dd4eba0)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:93 +0x8d
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:92 +0x1ab
logging error output: "[+]ping ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-informers ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
[[kube-controller-manager/v1.10.3 (linux/amd64) kubernetes/2bba012/shared-informers] 127.0.0.1:28944]
I0712 22:55:05.929596 1 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations: (343.372µs) 404 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:06.408069 1 wrap.go:42] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (466.051µs) 403 [[kube-scheduler/v1.10.3 (linux/amd64) kubernetes/2bba012/scheduler] 127.0.0.1:28926]
I0712 22:55:06.596131 1 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations: (346.942µs) 404 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:06.723905 1 wrap.go:42] GET /api/v1/namespaces/kube-system/endpoints/kube-controller-manager: (32.911435ms) 200 [[kube-controller-manager/v1.10.3 (linux/amd64) kubernetes/2bba012/leader-election] 127.0.0.1:28944]
I0712 22:55:06.821914 1 wrap.go:42] GET /healthz: (33.353542ms) 500
goroutine 2828 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc423e136c0, 0x1f4)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:207 +0xdd
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc423e136c0, 0x1f4)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:186 +0x35
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc426aa5d40, 0x1f4)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:188 +0xac
net/http.Error(0x7f89d10a5d60, 0xc427535638, 0xc42bc96dc0, 0x2ae, 0x1f4)
/usr/local/go/src/net/http/server.go:1930 +0xda
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f89d10a5d60, 0xc427535638, 0xc424227f00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz/healthz.go:121 +0x508
net/http.HandlerFunc.ServeHTTP(0xc42cba1de0, 0x7f89d10a5d60, 0xc427535638, 0xc424227f00)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc424e65c00, 0x7f89d10a5d60, 0xc427535638, 0xc424227f00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:241 +0x55a
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc42a1141c0, 0x7f89d10a5d60, 0xc427535638, 0xc424227f00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:234 +0xa1
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3d372f1, 0xf, 0xc42a10c1b0, 0xc42a1141c0, 0x7f89d10a5d60, 0xc427535638, 0xc424227f00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:160 +0x6ad
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*director).ServeHTTP(0xc42a0f7200, 0x7f89d10a5d60, 0xc427535638, 0xc424227f00)
<autogenerated>:1 +0x75
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f89d10a5d60, 0xc427535638, 0xc424227f00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:52 +0x37d
net/http.HandlerFunc.ServeHTTP(0xc423a32fa0, 0x7f89d10a5d60, 0xc427535638, 0xc424227f00)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f89d10a5d60, 0xc427535638, 0xc424227f00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:165 +0x42a
net/http.HandlerFunc.ServeHTTP(0xc429f5fb40, 0x7f89d10a5d60, 0xc427535638, 0xc424227f00)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f89d10a5d60, 0xc427535638, 0xc424227f00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:49 +0x203a
net/http.HandlerFunc.ServeHTTP(0xc423a33040, 0x7f89d10a5d60, 0xc427535638, 0xc424227f00)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f89d10a5d60, 0xc427535638, 0xc424227f00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:78 +0x2b1
net/http.HandlerFunc.ServeHTTP(0xc423a330e0, 0x7f89d10a5d60, 0xc427535638, 0xc424227f00)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/request.WithRequestContext.func1(0x7f89d10a5d60, 0xc427535638, 0xc424227f00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/request/requestcontext.go:110 +0xcb
net/http.HandlerFunc.ServeHTTP(0xc42a0f7220, 0x7f89d10a5d60, 0xc427535638, 0xc424227f00)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc42a0f72a0, 0x988f800, 0xc427535638, 0xc424227f00, 0xc429e89bc0)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:93 +0x8d
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:92 +0x1ab
logging error output: "[+]ping ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-informers ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
[[kube-controller-manager/v1.10.3 (linux/amd64) kubernetes/2bba012/shared-informers] 127.0.0.1:28944]
I0712 22:55:06.852220 1 wrap.go:42] PUT /api/v1/namespaces/kube-system/endpoints/kube-controller-manager: (63.029497ms) 200 [[kube-controller-manager/v1.10.3 (linux/amd64) kubernetes/2bba012/leader-election] 127.0.0.1:28944]
I0712 22:55:06.932471 1 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations: (220.871µs) 404 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:07.043853 1 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations: (257.912µs) 404 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:07.461897 1 wrap.go:42] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (800.559µs) 403 [[kube-scheduler/v1.10.3 (linux/amd64) kubernetes/2bba012/scheduler] 127.0.0.1:28926]
I0712 22:55:07.748638 1 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations: (296.482µs) 404 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:07.837700 1 wrap.go:42] GET /healthz: (32.671138ms) 500
goroutine 2871 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc42e0d6bd0, 0x1f4)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:207 +0xdd
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc42e0d6bd0, 0x1f4)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:186 +0x35
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc42e06ca00, 0x1f4)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:188 +0xac
net/http.Error(0x7f89d10a5d60, 0xc4296aa000, 0xc42791ab00, 0x2ae, 0x1f4)
/usr/local/go/src/net/http/server.go:1930 +0xda
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f89d10a5d60, 0xc4296aa000, 0xc432c4c300)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz/healthz.go:121 +0x508
net/http.HandlerFunc.ServeHTTP(0xc42cba1de0, 0x7f89d10a5d60, 0xc4296aa000, 0xc432c4c300)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc424e65c00, 0x7f89d10a5d60, 0xc4296aa000, 0xc432c4c300)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:241 +0x55a
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc42a1141c0, 0x7f89d10a5d60, 0xc4296aa000, 0xc432c4c300)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:234 +0xa1
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3d372f1, 0xf, 0xc42a10c1b0, 0xc42a1141c0, 0x7f89d10a5d60, 0xc4296aa000, 0xc432c4c300)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:160 +0x6ad
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*director).ServeHTTP(0xc42a0f7200, 0x7f89d10a5d60, 0xc4296aa000, 0xc432c4c300)
<autogenerated>:1 +0x75
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f89d10a5d60, 0xc4296aa000, 0xc432c4c300)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:52 +0x37d
net/http.HandlerFunc.ServeHTTP(0xc423a32fa0, 0x7f89d10a5d60, 0xc4296aa000, 0xc432c4c300)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f89d10a5d60, 0xc4296aa000, 0xc432c4c300)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:165 +0x42a
net/http.HandlerFunc.ServeHTTP(0xc429f5fb40, 0x7f89d10a5d60, 0xc4296aa000, 0xc432c4c300)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f89d10a5d60, 0xc4296aa000, 0xc432c4c300)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:49 +0x203a
net/http.HandlerFunc.ServeHTTP(0xc423a33040, 0x7f89d10a5d60, 0xc4296aa000, 0xc432c4c300)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f89d10a5d60, 0xc4296aa000, 0xc432c4c300)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:78 +0x2b1
net/http.HandlerFunc.ServeHTTP(0xc423a330e0, 0x7f89d10a5d60, 0xc4296aa000, 0xc432c4c300)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/request.WithRequestContext.func1(0x7f89d10a5d60, 0xc4296aa000, 0xc432c4c300)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/request/requestcontext.go:110 +0xcb
net/http.HandlerFunc.ServeHTTP(0xc42a0f7220, 0x7f89d10a5d60, 0xc4296aa000, 0xc432c4c300)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc42a0f72a0, 0x988f800, 0xc4296aa000, 0xc432c4c300, 0xc426e676e0)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:93 +0x8d
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:92 +0x1ab
logging error output: "[+]ping ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-informers ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
[[kube-controller-manager/v1.10.3 (linux/amd64) kubernetes/2bba012/shared-informers] 127.0.0.1:28944]
I0712 22:55:07.894842 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1beta1.metrics.k8s.io/status: (24.994372ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:08.024205 1 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations: (286.125µs) 404 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:08.135371 1 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations: (239.766µs) 404 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:08.362883 1 wrap.go:42] GET /api/v1/namespaces/kube-system/pods/kube-proxy-ip-172-x-y-z.us-west-2.compute.internal: (83.600885ms) 200 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28990]
I0712 22:55:08.514886 1 wrap.go:42] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (454.854µs) 403 [[kube-scheduler/v1.10.3 (linux/amd64) kubernetes/2bba012/scheduler] 127.0.0.1:28926]
I0712 22:55:08.517489 1 wrap.go:42] PUT /api/v1/namespaces/kube-system/pods/kube-proxy-ip-172-x-y-z.us-west-2.compute.internal/status: (68.292743ms) 200 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28990]
I0712 22:55:08.571791 1 trace.go:76] Trace[1203983624]: "Create /apis/rbac.authorization.k8s.io/v1/clusterroles" (started: 2018-07-12 22:55:04.35004083 +0000 UTC m=+21.336132740) (total time: 4.221707716s):
Trace[1203983624]: [4.198476408s] [4.198387179s] About to store object in database
I0712 22:55:08.572574 1 wrap.go:42] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.222644768s) 201 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:08.635194 1 storage_rbac.go:190] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0712 22:55:08.650708 1 wrap.go:42] GET /api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal: (54.827992ms) 200 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28990]
I0712 22:55:08.717363 1 wrap.go:42] GET /healthz: (32.713229ms) 500
goroutine 2952 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc42e1ae770, 0x1f4)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:207 +0xdd
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc42e1ae770, 0x1f4)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:186 +0x35
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc422d30500, 0x1f4)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:188 +0xac
net/http.Error(0x7f89d10a5d60, 0xc428022840, 0xc42e24adc0, 0x2ae, 0x1f4)
/usr/local/go/src/net/http/server.go:1930 +0xda
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f89d10a5d60, 0xc428022840, 0xc42ac38a00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz/healthz.go:121 +0x508
net/http.HandlerFunc.ServeHTTP(0xc42cba1de0, 0x7f89d10a5d60, 0xc428022840, 0xc42ac38a00)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc424e65c00, 0x7f89d10a5d60, 0xc428022840, 0xc42ac38a00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:241 +0x55a
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc42a1141c0, 0x7f89d10a5d60, 0xc428022840, 0xc42ac38a00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:234 +0xa1
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3d372f1, 0xf, 0xc42a10c1b0, 0xc42a1141c0, 0x7f89d10a5d60, 0xc428022840, 0xc42ac38a00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:160 +0x6ad
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*director).ServeHTTP(0xc42a0f7300, 0x7f89d10a5d60, 0xc428022840, 0xc42ac38a00)
<autogenerated>:1 +0x75
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f89d10a5d60, 0xc428022840, 0xc42ac38a00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:78 +0x2b1
net/http.HandlerFunc.ServeHTTP(0xc422311a90, 0x7f89d10a5d60, 0xc428022840, 0xc42ac38a00)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/request.WithRequestContext.func1(0x7f89d10a5d60, 0xc428022840, 0xc42ac38a00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/request/requestcontext.go:110 +0xcb
net/http.HandlerFunc.ServeHTTP(0xc42cba1c80, 0x7f89d10a5d60, 0xc428022840, 0xc42ac38a00)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc42cba1d00, 0x988f800, 0xc428022840, 0xc42ac38a00, 0xc42aa4ba40)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:93 +0x8d
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:92 +0x1ab
logging error output: "[+]ping ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-informers ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
[[kube-probe/1.10] 127.0.0.1:53972]
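Unlike the earlier /healthz calls from the controller-manager's shared informers, this 500 is served to kube-probe/1.10, i.e. the kubelet's liveness probe for the apiserver pod. If /healthz keeps failing past the probe's failure threshold, the kubelet restarts the container, which would be consistent with the controller shutdown messages near the end of this log. A rough illustration of that loop follows; the threshold, interval, and URL are assumptions for the sketch, not the kubelet's actual prober or the values in the kops manifest.

// Illustration only (assumed threshold/interval/URL): poll /healthz and
// report once consecutive failures exceed a failureThreshold, the point at
// which a liveness probe would trigger a container restart.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	const (
		url              = "http://127.0.0.1:8080/healthz" // assumed insecure local port
		failureThreshold = 3
		interval         = 10 * time.Second
	)

	failures := 0
	for {
		resp, err := http.Get(url)
		if err != nil || resp.StatusCode != http.StatusOK {
			failures++
		} else {
			failures = 0
		}
		if resp != nil {
			resp.Body.Close()
		}
		if failures >= failureThreshold {
			fmt.Println("liveness threshold exceeded: the kubelet would restart the container")
			return
		}
		time.Sleep(interval)
	}
}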
I0712 22:55:08.751338 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (34.75713ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:08.817398 1 wrap.go:42] PUT /api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal/status: (67.819735ms) 200 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28990]
I0712 22:55:08.846631 1 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations: (248.558µs) 404 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:08.851023 1 wrap.go:42] GET /healthz: (34.924143ms) 500
goroutine 2976 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc426532af0, 0x1f4)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:207 +0xdd
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc426532af0, 0x1f4)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:186 +0x35
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc424617f40, 0x1f4)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:188 +0xac
net/http.Error(0x7f89d10a5d60, 0xc42ca84be0, 0xc42dea6000, 0x2ae, 0x1f4)
/usr/local/go/src/net/http/server.go:1930 +0xda
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f89d10a5d60, 0xc42ca84be0, 0xc42a1a1e00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz/healthz.go:121 +0x508
net/http.HandlerFunc.ServeHTTP(0xc42cba1de0, 0x7f89d10a5d60, 0xc42ca84be0, 0xc42a1a1e00)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc424e65c00, 0x7f89d10a5d60, 0xc42ca84be0, 0xc42a1a1e00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:241 +0x55a
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc42a1141c0, 0x7f89d10a5d60, 0xc42ca84be0, 0xc42a1a1e00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:234 +0xa1
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3d372f1, 0xf, 0xc42a10c1b0, 0xc42a1141c0, 0x7f89d10a5d60, 0xc42ca84be0, 0xc42a1a1e00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:160 +0x6ad
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*director).ServeHTTP(0xc42a0f7200, 0x7f89d10a5d60, 0xc42ca84be0, 0xc42a1a1e00)
<autogenerated>:1 +0x75
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f89d10a5d60, 0xc42ca84be0, 0xc42a1a1e00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:52 +0x37d
net/http.HandlerFunc.ServeHTTP(0xc423a32fa0, 0x7f89d10a5d60, 0xc42ca84be0, 0xc42a1a1e00)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f89d10a5d60, 0xc42ca84be0, 0xc42a1a1e00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:165 +0x42a
net/http.HandlerFunc.ServeHTTP(0xc429f5fb40, 0x7f89d10a5d60, 0xc42ca84be0, 0xc42a1a1e00)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f89d10a5d60, 0xc42ca84be0, 0xc42a1a1e00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:49 +0x203a
net/http.HandlerFunc.ServeHTTP(0xc423a33040, 0x7f89d10a5d60, 0xc42ca84be0, 0xc42a1a1e00)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f89d10a5d60, 0xc42ca84be0, 0xc42a1a1e00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:78 +0x2b1
net/http.HandlerFunc.ServeHTTP(0xc423a330e0, 0x7f89d10a5d60, 0xc42ca84be0, 0xc42a1a1e00)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/request.WithRequestContext.func1(0x7f89d10a5d60, 0xc42ca84be0, 0xc42a1a1e00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/request/requestcontext.go:110 +0xcb
net/http.HandlerFunc.ServeHTTP(0xc42a0f7220, 0x7f89d10a5d60, 0xc42ca84be0, 0xc42a1a1e00)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc42a0f72a0, 0x988f800, 0xc42ca84be0, 0xc42a1a1e00, 0xc4305ba9c0)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:93 +0x8d
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:92 +0x1ab
logging error output: "[+]ping ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-informers ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
[[kube-controller-manager/v1.10.3 (linux/amd64) kubernetes/2bba012/shared-informers] 127.0.0.1:28944]
I0712 22:55:08.879338 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (31.842252ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:08.882431 1 trace.go:76] Trace[909006077]: "Create /api/v1/namespaces/kube-system/events" (started: 2018-07-12 22:55:04.694350171 +0000 UTC m=+21.680442060) (total time: 4.188049702s):
Trace[909006077]: [4.087401045s] [4.087336995s] About to store object in database
Trace[909006077]: [4.187978375s] [100.57733ms] Object stored in database
I0712 22:55:08.882698 1 wrap.go:42] POST /api/v1/namespaces/kube-system/events: (4.188604718s) 201 [[kube-controller-manager/v1.10.3 (linux/amd64) kubernetes/2bba012/controller-manager] 127.0.0.1:28944]
I0712 22:55:08.972251 1 wrap.go:42] GET /api/v1/namespaces/kube-system/endpoints/kube-controller-manager: (32.828627ms) 200 [[kube-controller-manager/v1.10.3 (linux/amd64) kubernetes/2bba012/leader-election] 127.0.0.1:28944]
I0712 22:55:09.031739 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (27.282831ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:09.062082 1 autoregister_controller.go:160] Shutting down autoregister controller
I0712 22:55:09.062903 1 available_controller.go:274] Shutting down AvailableConditionController
I0712 22:55:09.062989 1 apiservice_controller.go:102] Shutting down APIServiceRegistrationController
I0712 22:55:09.063041 1 crd_finalizer.go:254] Shutting down CRDFinalizer
I0712 22:55:09.063130 1 crdregistration_controller.go:139] Shutting down crd-autoregister controller
I0712 22:55:09.063179 1 naming_controller.go:287] Shutting down NamingConditionController
I0712 22:55:09.063225 1 customresource_discovery_controller.go:185] Shutting down DiscoveryController
I0712 22:55:09.064107 1 controller.go:90] Shutting down OpenAPI AggregationController
I0712 22:55:09.064809 1 wrap.go:42] GET /api/v1/namespaces/default/endpoints/kubernetes: (87.061µs) 500
goroutine 3027 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc42b0b4a10, 0x1f4)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:207 +0xdd
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc42b0b4a10, 0x1f4)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:186 +0x35
net/http.Error(0x988af00, 0xc42b0b4a10, 0x3d63f65, 0x1b, 0x1f4)
/usr/local/go/src/net/http/server.go:1930 +0xda
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithWaitGroup.func1(0x988af00, 0xc42b0b4a10, 0xc42b9abd00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/waitgroup.go:45 +0x18d
net/http.HandlerFunc.ServeHTTP(0xc429f5fb80, 0x988af00, 0xc42b0b4a10, 0xc42b9abd00)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1(0x988af00, 0xc42b0b4a10, 0xc42b9abd00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:45 +0x20f
net/http.HandlerFunc.ServeHTTP(0xc429f5fbc0, 0x988af00, 0xc42b0b4a10, 0xc42b9abd00)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/request.WithRequestContext.func1(0x988af00, 0xc42b0b4a10, 0xc42b9abd00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/request/requestcontext.go:110 +0xcb
net/http.HandlerFunc.ServeHTTP(0xc42a0f72c0, 0x988af00, 0xc42b0b4a10, 0xc42b9abd00)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithPanicRecovery.func1(0x988af00, 0xc42b0b4a10, 0xc42b9abd00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/wrap.go:41 +0x108
net/http.HandlerFunc.ServeHTTP(0xc42a0f72e0, 0x988c0c0, 0xc424c82210, 0xc42b9abd00)
/usr/local/go/src/net/http/server.go:1918 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*APIServerHandler).ServeHTTP(0xc42a10ed20, 0x988c0c0, 0xc424c82210, 0xc42b9abd00)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:197 +0x51
net/http.serverHandler.ServeHTTP(0xc4265d7040, 0x988c0c0, 0xc424c82210, 0xc42b9abd00)
/usr/local/go/src/net/http/server.go:2619 +0xb4
net/http.initNPNRequest.ServeHTTP(0xc42c077c00, 0xc4265d7040, 0x988c0c0, 0xc424c82210, 0xc42b9abd00)
/usr/local/go/src/net/http/server.go:3164 +0x9a
net/http.(*initNPNRequest).ServeHTTP(0xc42b12b690, 0x988c0c0, 0xc424c82210, 0xc42b9abd00)
<autogenerated>:1 +0x63
net/http.(Handler).ServeHTTP-fm(0x988c0c0, 0xc424c82210, 0xc42b9abd00)
/usr/local/go/src/net/http/h2_bundle.go:5462 +0x4d
net/http.(*http2serverConn).runHandler(0xc4291c7500, 0xc424c82210, 0xc42b9abd00, 0xc43096b280)
/usr/local/go/src/net/http/h2_bundle.go:5747 +0x89
created by net/http.(*http2serverConn).processHeaders
/usr/local/go/src/net/http/h2_bundle.go:5481 +0x495
logging error output: "apiserver is shutting down.\n"
[[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:09.065107 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterroles?resourceVersion=3258306&timeoutSeconds=551&watch=true: (7.711998835s) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:09.065271 1 wrap.go:42] GET /api/v1/endpoints?resourceVersion=3258306&timeoutSeconds=435&watch=true: (7.766794003s) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:09.065427 1 wrap.go:42] GET /apis/apiregistration.k8s.io/v1/apiservices?resourceVersion=3258306&timeoutSeconds=415&watch=true: (7.784894561s) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:09.065586 1 wrap.go:42] GET /api/v1/limitranges?resourceVersion=3258306&timeoutSeconds=585&watch=true: (7.712009597s) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:09.065740 1 wrap.go:42] GET /api/v1/services?resourceVersion=3258306&timeoutSeconds=478&watch=true: (7.713136128s) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:09.065920 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/rolebindings?resourceVersion=3258306&timeoutSeconds=339&watch=true: (7.715019593s) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:09.066512 1 wrap.go:42] GET /apis/storage.k8s.io/v1/storageclasses?resourceVersion=3258306&timeoutSeconds=591&watch=true: (7.747796704s) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:09.066673 1 wrap.go:42] GET /api/v1/resourcequotas?resourceVersion=3258306&timeoutSeconds=454&watch=true: (7.748893888s) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:09.066834 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/roles?resourceVersion=3258306&timeoutSeconds=534&watch=true: (7.767573054s) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:09.066995 1 wrap.go:42] GET /api/v1/secrets?resourceVersion=3258306&timeoutSeconds=580&watch=true: (7.768156808s) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:09.067146 1 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings?resourceVersion=3258306&timeoutSeconds=322&watch=true: (7.76920582s) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:09.067316 1 wrap.go:42] GET /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions?resourceVersion=3258306&timeoutSeconds=567&watch=true: (7.76758022s) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:09.067463 1 wrap.go:42] GET /api/v1/namespaces?resourceVersion=3258306&timeoutSeconds=555&watch=true: (7.787360083s) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:09.067634 1 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1beta1/mutatingwebhookconfigurations?resourceVersion=3258306&timeoutSeconds=336&watch=true: (7.7878825s) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:09.067787 1 wrap.go:42] GET /api/v1/serviceaccounts?resourceVersion=3258306&timeoutSeconds=368&watch=true: (7.788571476s) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:09.067967 1 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1beta1/validatingwebhookconfigurations?resourceVersion=3258306&timeoutSeconds=453&watch=true: (7.789250207s) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28940]
I0712 22:55:09.068621 1 serve.go:136] Stopped listening on [::]:443
I0712 22:55:09.069037 1 serve.go:136] Stopped listening on 127.0.0.1:8080
I0712 22:55:09.118238 1 wrap.go:42] PUT /api/v1/namespaces/kube-system/endpoints/kube-controller-manager: (57.218135ms) 200 [[kube-controller-manager/v1.10.3 (linux/amd64) kubernetes/2bba012/leader-election] 127.0.0.1:28944]
E0712 22:55:09.142320 1 storage_rbac.go:196] unable to reconcile clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager: Get https://127.0.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: dial tcp 127.0.0.1:443: getsockopt: connection refused
E0712 22:55:09.174454 1 storage_rbac.go:196] unable to reconcile clusterrole.rbac.authorization.k8s.io/system:kube-scheduler: Get https://127.0.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: dial tcp 127.0.0.1:443: getsockopt: connection refused
E0712 22:55:09.239350 1 storage_rbac.go:196] unable to reconcile clusterrole.rbac.authorization.k8s.io/system:kube-dns: Get https://127.0.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: dial tcp 127.0.0.1:443: getsockopt: connection refused
E0712 22:55:09.273309 1 storage_rbac.go:196] unable to reconcile clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner: Get https://127.0.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: dial tcp 127.0.0.1:443: getsockopt: connection refused
I0712 22:55:09.304643 1 trace.go:76] Trace[332298722]: "Create /api/v1/namespaces/default/events" (started: 2018-07-12 22:55:05.160097244 +0000 UTC m=+22.146189137) (total time: 4.144495289s):
Trace[332298722]: [4.112102885s] [4.112021167s] About to store object in database
I0712 22:55:09.305178 1 wrap.go:42] POST /api/v1/namespaces/default/events: (4.14537458s) 201 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:28990]
E0712 22:55:09.306024 1 storage_rbac.go:196] unable to reconcile clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider: Get https://127.0.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aws-cloud-provider: dial tcp 127.0.0.1:443: getsockopt: connection refused
I0712 23:16:00.788209 1 flags.go:27] FLAG: --address="0.0.0.0"
I0712 23:16:00.788423 1 flags.go:27] FLAG: --allocate-node-cidrs="true"
I0712 23:16:00.788469 1 flags.go:27] FLAG: --allow-untagged-cloud="false"
I0712 23:16:00.788537 1 flags.go:27] FLAG: --allow-verification-with-non-compliant-keys="false"
I0712 23:16:00.788580 1 flags.go:27] FLAG: --alsologtostderr="false"
I0712 23:16:00.788617 1 flags.go:27] FLAG: --attach-detach-reconcile-sync-period="1m0s"
I0712 23:16:00.788685 1 flags.go:27] FLAG: --bind-address="0.0.0.0"
I0712 23:16:00.788726 1 flags.go:27] FLAG: --cert-dir="/var/run/kubernetes"
I0712 23:16:00.788765 1 flags.go:27] FLAG: --cidr-allocator-type="RangeAllocator"
I0712 23:16:00.788801 1 flags.go:27] FLAG: --cloud-config=""
I0712 23:16:00.788860 1 flags.go:27] FLAG: --cloud-provider="aws"
I0712 23:16:00.788897 1 flags.go:27] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16"
I0712 23:16:00.788941 1 flags.go:27] FLAG: --cluster-cidr="100.96.0.0/11"
I0712 23:16:00.788977 1 flags.go:27] FLAG: --cluster-name="k8s.example.com"
I0712 23:16:00.789084 1 flags.go:27] FLAG: --cluster-signing-cert-file="/srv/kubernetes/ca.crt"
I0712 23:16:00.789221 1 flags.go:27] FLAG: --cluster-signing-key-file="/srv/kubernetes/ca.key"
I0712 23:16:00.789262 1 flags.go:27] FLAG: --concurrent-deployment-syncs="5"
I0712 23:16:00.789307 1 flags.go:27] FLAG: --concurrent-endpoint-syncs="5"
I0712 23:16:00.789370 1 flags.go:27] FLAG: --concurrent-gc-syncs="20"
I0712 23:16:00.789410 1 flags.go:27] FLAG: --concurrent-namespace-syncs="10"
I0712 23:16:00.789445 1 flags.go:27] FLAG: --concurrent-replicaset-syncs="5"
I0712 23:16:00.789480 1 flags.go:27] FLAG: --concurrent-resource-quota-syncs="5"
I0712 23:16:00.789540 1 flags.go:27] FLAG: --concurrent-service-syncs="1"
I0712 23:16:00.789578 1 flags.go:27] FLAG: --concurrent-serviceaccount-token-syncs="5"
I0712 23:16:00.789615 1 flags.go:27] FLAG: --concurrent_rc_syncs="5"
I0712 23:16:00.789650 1 flags.go:27] FLAG: --configure-cloud-routes="true"
I0712 23:16:00.789711 1 flags.go:27] FLAG: --contention-profiling="false"
I0712 23:16:00.789750 1 flags.go:27] FLAG: --controller-start-interval="0s"
I0712 23:16:00.789785 1 flags.go:27] FLAG: --controllers="[*]"
I0712 23:16:00.789827 1 flags.go:27] FLAG: --deleting-pods-burst="0"
I0712 23:16:00.789888 1 flags.go:27] FLAG: --deleting-pods-qps="0.1"
I0712 23:16:00.789930 1 flags.go:27] FLAG: --deployment-controller-sync-period="30s"
I0712 23:16:00.789966 1 flags.go:27] FLAG: --disable-attach-detach-reconcile-sync="false"
I0712 23:16:00.798198 1 flags.go:27] FLAG: --enable-dynamic-provisioning="true"
I0712 23:16:00.798247 1 flags.go:27] FLAG: --enable-garbage-collector="true"
I0712 23:16:00.798317 1 flags.go:27] FLAG: --enable-hostpath-provisioner="false"
I0712 23:16:00.798357 1 flags.go:27] FLAG: --enable-taint-manager="true"
I0712 23:16:00.798392 1 flags.go:27] FLAG: --experimental-cluster-signing-duration="8760h0m0s"
I0712 23:16:00.798428 1 flags.go:27] FLAG: --external-cloud-volume-plugin=""
I0712 23:16:00.798490 1 flags.go:27] FLAG: --feature-gates=""
I0712 23:16:00.798534 1 flags.go:27] FLAG: --flex-volume-plugin-dir="/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
I0712 23:16:00.798570 1 flags.go:27] FLAG: --help="false"
I0712 23:16:00.798605 1 flags.go:27] FLAG: --horizontal-pod-autoscaler-downscale-delay="5m0s"
I0712 23:16:00.798665 1 flags.go:27] FLAG: --horizontal-pod-autoscaler-sync-period="30s"
I0712 23:16:00.798716 1 flags.go:27] FLAG: --horizontal-pod-autoscaler-tolerance="0.1"
I0712 23:16:00.798761 1 flags.go:27] FLAG: --horizontal-pod-autoscaler-upscale-delay="3m0s"
I0712 23:16:00.798824 1 flags.go:27] FLAG: --horizontal-pod-autoscaler-use-rest-clients="true"
I0712 23:16:00.798863 1 flags.go:27] FLAG: --http2-max-streams-per-connection="0"
I0712 23:16:00.798902 1 flags.go:27] FLAG: --insecure-experimental-approve-all-kubelet-csrs-for-group=""
I0712 23:16:00.798937 1 flags.go:27] FLAG: --kube-api-burst="30"
I0712 23:16:00.798997 1 flags.go:27] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I0712 23:16:00.799035 1 flags.go:27] FLAG: --kube-api-qps="20"
I0712 23:16:00.799072 1 flags.go:27] FLAG: --kubeconfig="/var/lib/kube-controller-manager/kubeconfig"
I0712 23:16:00.799107 1 flags.go:27] FLAG: --large-cluster-size-threshold="50"
I0712 23:16:00.799165 1 flags.go:27] FLAG: --leader-elect="true"
I0712 23:16:00.799202 1 flags.go:27] FLAG: --leader-elect-lease-duration="15s"
I0712 23:16:00.799238 1 flags.go:27] FLAG: --leader-elect-renew-deadline="10s"
I0712 23:16:00.799272 1 flags.go:27] FLAG: --leader-elect-resource-lock="endpoints"
I0712 23:16:00.799330 1 flags.go:27] FLAG: --leader-elect-retry-period="2s"
I0712 23:16:00.799368 1 flags.go:27] FLAG: --log-backtrace-at=":0"
I0712 23:16:00.799406 1 flags.go:27] FLAG: --log-dir=""
I0712 23:16:00.799442 1 flags.go:27] FLAG: --log-flush-frequency="5s"
I0712 23:16:00.799508 1 flags.go:27] FLAG: --loglevel="1"
I0712 23:16:00.799545 1 flags.go:27] FLAG: --logtostderr="true"
I0712 23:16:00.799580 1 flags.go:27] FLAG: --master=""
I0712 23:16:00.799614 1 flags.go:27] FLAG: --min-resync-period="12h0m0s"
I0712 23:16:00.799674 1 flags.go:27] FLAG: --namespace-sync-period="5m0s"
I0712 23:16:00.799711 1 flags.go:27] FLAG: --node-cidr-mask-size="24"
I0712 23:16:00.799746 1 flags.go:27] FLAG: --node-eviction-rate="0.1"
I0712 23:16:00.799782 1 flags.go:27] FLAG: --node-monitor-grace-period="40s"
I0712 23:16:00.799840 1 flags.go:27] FLAG: --node-monitor-period="5s"
I0712 23:16:00.799878 1 flags.go:27] FLAG: --node-startup-grace-period="1m0s"
I0712 23:16:00.799913 1 flags.go:27] FLAG: --node-sync-period="0s"
I0712 23:16:00.799947 1 flags.go:27] FLAG: --pod-eviction-timeout="5m0s"
I0712 23:16:00.800004 1 flags.go:27] FLAG: --port="10252"
I0712 23:16:00.800041 1 flags.go:27] FLAG: --profiling="true"
I0712 23:16:00.800076 1 flags.go:27] FLAG: --pv-recycler-increment-timeout-nfs="30"
I0712 23:16:00.800111 1 flags.go:27] FLAG: --pv-recycler-minimum-timeout-hostpath="60"
I0712 23:16:00.800170 1 flags.go:27] FLAG: --pv-recycler-minimum-timeout-nfs="300"
I0712 23:16:00.800206 1 flags.go:27] FLAG: --pv-recycler-pod-template-filepath-hostpath=""
I0712 23:16:00.800241 1 flags.go:27] FLAG: --pv-recycler-pod-template-filepath-nfs=""
I0712 23:16:00.800275 1 flags.go:27] FLAG: --pv-recycler-timeout-increment-hostpath="30"
I0712 23:16:00.800333 1 flags.go:27] FLAG: --pvclaimbinder-sync-period="15s"
I0712 23:16:00.800371 1 flags.go:27] FLAG: --register-retry-count="10"
I0712 23:16:00.800406 1 flags.go:27] FLAG: --resource-quota-sync-period="5m0s"
I0712 23:16:00.800441 1 flags.go:27] FLAG: --root-ca-file="/srv/kubernetes/ca.crt"
I0712 23:16:00.800499 1 flags.go:27] FLAG: --route-reconciliation-period="10s"
I0712 23:16:00.800537 1 flags.go:27] FLAG: --secondary-node-eviction-rate="0.01"
I0712 23:16:00.800574 1 flags.go:27] FLAG: --secure-port="0"
I0712 23:16:00.800609 1 flags.go:27] FLAG: --service-account-private-key-file="/srv/kubernetes/server.key"
I0712 23:16:00.800667 1 flags.go:27] FLAG: --service-cluster-ip-range=""
I0712 23:16:00.800704 1 flags.go:27] FLAG: --stderrthreshold="2"
I0712 23:16:00.800739 1 flags.go:27] FLAG: --terminated-pod-gc-threshold="12500"
I0712 23:16:00.800779 1 flags.go:27] FLAG: --tls-ca-file=""
I0712 23:16:00.800838 1 flags.go:27] FLAG: --tls-cert-file=""
I0712 23:16:00.800929 1 flags.go:27] FLAG: --tls-cipher-suites="[]"
I0712 23:16:00.801001 1 flags.go:27] FLAG: --tls-min-version=""
I0712 23:16:00.801039 1 flags.go:27] FLAG: --tls-private-key-file=""
I0712 23:16:00.801088 1 flags.go:27] FLAG: --tls-sni-cert-key="[]"
I0712 23:16:00.801157 1 flags.go:27] FLAG: --unhealthy-zone-threshold="0.55"
I0712 23:16:00.801198 1 flags.go:27] FLAG: --use-service-account-credentials="true"
I0712 23:16:00.801233 1 flags.go:27] FLAG: --v="2"
I0712 23:16:00.801268 1 flags.go:27] FLAG: --version="false"
I0712 23:16:00.801334 1 flags.go:27] FLAG: --vmodule=""
I0712 23:16:00.805333 1 controllermanager.go:116] Version: v1.10.3
W0712 23:16:00.819609 1 authentication.go:55] Authentication is disabled
I0712 23:16:00.819745 1 insecure_serving.go:44] Serving insecurely on [::]:10252
I0712 23:16:00.819991 1 leaderelection.go:175] attempting to acquire leader lease kube-system/kube-controller-manager...
I0712 23:16:16.113179 1 leaderelection.go:184] successfully acquired lease kube-system/kube-controller-manager
I0712 23:16:16.113422 1 event.go:218] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"533d7319-7685-11e8-9d40-02ab4d0e1e2e", APIVersion:"v1", ResourceVersion:"3259176", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-172-x-y-z_8a6115a9-8629-11e8-95e2-02da1f9cf30e became leader
E0712 23:16:19.798075 1 controllermanager.go:355] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: an error on the server ("Internal Server Error: \"/apis/metrics.k8s.io/v1beta1\": Post https://100.64.0.1:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews: dial tcp 100.64.0.1:443: getsockopt: no route to host") has prevented the request from succeeding
I0712 23:16:19.798688 1 aws.go:1026] Building AWS cloudprovider
I0712 23:16:19.798813 1 aws.go:988] Zone not specified in configuration file; querying AWS metadata service
I0712 23:16:20.094752 1 tags.go:76] AWS cloud filtering on ClusterID: k8s.example.com
I0712 23:16:20.098494 1 controller_utils.go:1019] Waiting for caches to sync for tokens controller
I0712 23:16:20.137669 1 controllermanager.go:434] Starting "podgc"
I0712 23:16:20.168339 1 controllermanager.go:444] Started "podgc"
I0712 23:16:20.168481 1 controllermanager.go:434] Starting "cronjob"
I0712 23:16:20.168688 1 gc_controller.go:76] Starting GC controller
I0712 23:16:20.168745 1 controller_utils.go:1019] Waiting for caches to sync for GC controller
I0712 23:16:20.211742 1 controller_utils.go:1026] Caches are synced for tokens controller
I0712 23:16:20.234752 1 controllermanager.go:444] Started "cronjob"
I0712 23:16:20.234899 1 controllermanager.go:434] Starting "garbagecollector"
I0712 23:16:20.235113 1 cronjob_controller.go:103] Starting CronJob Manager
E0712 23:16:22.798093 1 memcache.go:153] couldn't get resource list for metrics.k8s.io/v1beta1: an error on the server ("Internal Server Error: \"/apis/metrics.k8s.io/v1beta1\": Post https://100.64.0.1:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews: dial tcp 100.64.0.1:443: getsockopt: no route to host") has prevented the request from succeeding
I0712 23:16:24.226306 1 wrap.go:42] GET /healthz: (56.036µs) 200 [[kube-probe/1.10] 127.0.0.1:44564]
W0712 23:16:28.794980 1 garbagecollector.go:598] failed to discover some groups: map[metrics.k8s.io/v1beta1:an error on the server ("Internal Server Error: \"/apis/metrics.k8s.io/v1beta1\": Post https://100.64.0.1:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews: dial tcp 100.64.0.1:443: getsockopt: no route to host") has prevented the request from succeeding]
I0712 23:16:28.798740 1 controllermanager.go:444] Started "garbagecollector"
I0712 23:16:28.798818 1 controllermanager.go:434] Starting "horizontalpodautoscaling"
I0712 23:16:28.807607 1 garbagecollector.go:135] Starting garbage collector controller
I0712 23:16:28.807710 1 controller_utils.go:1019] Waiting for caches to sync for garbage collector controller
I0712 23:16:28.807815 1 graph_builder.go:323] GraphBuilder running
E0712 23:16:31.799413 1 memcache.go:153] couldn't get resource list for metrics.k8s.io/v1beta1: an error on the server ("Internal Server Error: \"/apis/metrics.k8s.io/v1beta1\": Post https://100.64.0.1:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews: dial tcp 100.64.0.1:443: getsockopt: no route to host") has prevented the request from succeeding
I0712 23:16:31.799880 1 controllermanager.go:444] Started "horizontalpodautoscaling"
I0712 23:16:31.799987 1 controllermanager.go:434] Starting "ttl"
I0712 23:16:31.800202 1 horizontal.go:128] Starting HPA controller
I0712 23:16:31.800282 1 controller_utils.go:1019] Waiting for caches to sync for HPA controller
I0712 23:16:31.839894 1 controllermanager.go:444] Started "ttl"
I0712 23:16:31.839975 1 controllermanager.go:434] Starting "route"
I0712 23:16:31.840166 1 ttl_controller.go:116] Starting TTL controller
I0712 23:16:31.840285 1 controller_utils.go:1019] Waiting for caches to sync for TTL controller
I0712 23:16:31.870443 1 controllermanager.go:444] Started "route"
I0712 23:16:31.870521 1 controllermanager.go:434] Starting "endpoint"
I0712 23:16:31.870686 1 route_controller.go:99] Starting route controller
I0712 23:16:31.870722 1 controller_utils.go:1019] Waiting for caches to sync for route controller
I0712 23:16:31.897680 1 controllermanager.go:444] Started "endpoint"
I0712 23:16:31.897754 1 controllermanager.go:434] Starting "daemonset"
I0712 23:16:31.897937 1 endpoints_controller.go:153] Starting endpoint controller
I0712 23:16:31.898569 1 controller_utils.go:1019] Waiting for caches to sync for endpoint controller
I0712 23:16:31.936312 1 controllermanager.go:444] Started "daemonset"
I0712 23:16:31.936418 1 controllermanager.go:434] Starting "job"
I0712 23:16:31.936626 1 daemon_controller.go:233] Starting daemon sets controller
I0712 23:16:31.936955 1 controller_utils.go:1019] Waiting for caches to sync for daemon sets controller
I0712 23:16:31.964757 1 controllermanager.go:444] Started "job"
I0712 23:16:31.964842 1 controllermanager.go:434] Starting "disruption"
I0712 23:16:31.965035 1 job_controller.go:142] Starting job controller
I0712 23:16:31.965171 1 controller_utils.go:1019] Waiting for caches to sync for job controller
I0712 23:16:31.994747 1 controllermanager.go:444] Started "disruption"
I0712 23:16:31.994851 1 controllermanager.go:434] Starting "csrsigning"
I0712 23:16:31.995050 1 disruption.go:288] Starting disruption controller
I0712 23:16:31.995142 1 controller_utils.go:1019] Waiting for caches to sync for disruption controller
I0712 23:16:32.040384 1 controllermanager.go:444] Started "csrsigning"
I0712 23:16:32.040493 1 controllermanager.go:434] Starting "attachdetach"
I0712 23:16:32.040679 1 certificate_controller.go:113] Starting certificate controller
I0712 23:16:32.041555 1 controller_utils.go:1019] Waiting for caches to sync for certificate controller
W0712 23:16:32.064818 1 probe.go:215] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
I0712 23:16:32.065284 1 plugins.go:454] Loaded volume plugin "kubernetes.io/aws-ebs"
I0712 23:16:32.065423 1 plugins.go:454] Loaded volume plugin "kubernetes.io/gce-pd"
I0712 23:16:32.065487 1 plugins.go:454] Loaded volume plugin "kubernetes.io/cinder"
I0712 23:16:32.065567 1 plugins.go:454] Loaded volume plugin "kubernetes.io/portworx-volume"
I0712 23:16:32.065615 1 plugins.go:454] Loaded volume plugin "kubernetes.io/vsphere-volume"
I0712 23:16:32.065668 1 plugins.go:454] Loaded volume plugin "kubernetes.io/azure-disk"
I0712 23:16:32.065745 1 plugins.go:454] Loaded volume plugin "kubernetes.io/photon-pd"
I0712 23:16:32.065795 1 plugins.go:454] Loaded volume plugin "kubernetes.io/scaleio"
I0712 23:16:32.065839 1 plugins.go:454] Loaded volume plugin "kubernetes.io/storageos"
I0712 23:16:32.065913 1 plugins.go:454] Loaded volume plugin "kubernetes.io/fc"
I0712 23:16:32.065956 1 plugins.go:454] Loaded volume plugin "kubernetes.io/iscsi"
I0712 23:16:32.065998 1 plugins.go:454] Loaded volume plugin "kubernetes.io/rbd"
I0712 23:16:32.066085 1 csi_plugin.go:61] kubernetes.io/csi: plugin initializing...
I0712 23:16:32.066123 1 plugins.go:454] Loaded volume plugin "kubernetes.io/csi"
I0712 23:16:32.066294 1 controllermanager.go:444] Started "attachdetach"
I0712 23:16:32.066340 1 controllermanager.go:434] Starting "clusterrole-aggregation"
I0712 23:16:32.080942 1 attach_detach_controller.go:258] Starting attach detach controller
I0712 23:16:32.080958 1 controller_utils.go:1019] Waiting for caches to sync for attach detach controller
I0712 23:16:32.107880 1 controllermanager.go:444] Started "clusterrole-aggregation"
I0712 23:16:32.108167 1 controllermanager.go:434] Starting "resourcequota"
I0712 23:16:32.108377 1 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
I0712 23:16:32.108518 1 controller_utils.go:1019] Waiting for caches to sync for ClusterRoleAggregator controller
I0712 23:16:34.225593 1 wrap.go:42] GET /healthz: (30.242µs) 200 [[kube-probe/1.10] 127.0.0.1:44588]
W0712 23:16:34.792029 1 garbagecollector.go:598] failed to discover some groups: map[metrics.k8s.io/v1beta1:an error on the server ("Internal Server Error: \"/apis/metrics.k8s.io/v1beta1\": Post https://100.64.0.1:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews: dial tcp 100.64.0.1:443: getsockopt: no route to host") has prevented the request from succeeding]
I0712 23:16:34.792179 1 garbagecollector.go:190] syncing garbage collector with updated resources from discovery: map[{certmanager.k8s.io v1alpha1 certificates}:{} { v1 secrets}:{} { v1 replicationcontrollers}:{} {apps v1 daemonsets}:{} {certificates.k8s.io v1beta1 certificatesigningrequests}:{} {rbac.authorization.k8s.io v1 clusterrolebindings}:{} {extensions v1beta1 networkpolicies}:{} {extensions v1beta1 ingresses}:{} {apps v1 statefulsets}:{} {apps v1 controllerrevisions}:{} {admissionregistration.k8s.io v1beta1 validatingwebhookconfigurations}:{} {rbac.authorization.k8s.io v1 roles}:{} {admissionregistration.k8s.io v1beta1 mutatingwebhookconfigurations}:{} { v1 namespaces}:{} {apiregistration.k8s.io v1 apiservices}:{} {apps v1 replicasets}:{} {autoscaling v1 horizontalpodautoscalers}:{} {policy v1beta1 poddisruptionbudgets}:{} {batch v1beta1 cronjobs}:{} {certmanager.k8s.io v1alpha1 clusterissuers}:{} {certmanager.k8s.io v1alpha1 issuers}:{} { v1 limitranges}:{} {extensions v1beta1 deployments}:{} {extensions v1beta1 daemonsets}:{} {apps v1 deployments}:{} {events.k8s.io v1beta1 events}:{} {apiextensions.k8s.io v1beta1 customresourcedefinitions}:{} { v1 nodes}:{} {networking.k8s.io v1 networkpolicies}:{} {policy v1beta1 podsecuritypolicies}:{} {rbac.authorization.k8s.io v1 rolebindings}:{} {rbac.authorization.k8s.io v1 clusterroles}:{} { v1 configmaps}:{} { v1 persistentvolumes}:{} {extensions v1beta1 replicasets}:{} {storage.k8s.io v1 storageclasses}:{} {storage.k8s.io v1beta1 volumeattachments}:{} { v1 pods}:{} { v1 persistentvolumeclaims}:{} { v1 resourcequotas}:{} { v1 serviceaccounts}:{} {extensions v1beta1 podsecuritypolicies}:{} { v1 services}:{} { v1 events}:{} { v1 podtemplates}:{} { v1 endpoints}:{} {batch v1 jobs}:{}]
E0712 23:16:37.789846 1 controllermanager.go:437] Error starting "resourcequota"
F0712 23:16:37.789878 1 controllermanager.go:164] error starting controllers: failed to discover resources: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: an error on the server ("Internal Server Error: \"/apis/metrics.k8s.io/v1beta1\": Post https://100.64.0.1:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews: dial tcp 100.64.0.1:443: getsockopt: no route to host") has prevented the request from succeeding
-- Logs begin at Thu 2018-07-12 22:52:28 UTC, end at Thu 2018-07-12 23:12:44 UTC. --
Jul 12 22:53:35 ip-172-x-y-z systemd[1]: Starting Kubernetes Kubelet Server...
Jul 12 22:53:35 ip-172-x-y-z systemd[1]: Started Kubernetes Kubelet Server.
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: Flag --allow-privileged has been deprecated, will be removed in a future version
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: Flag --cgroup-root has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: Flag --enable-debugging-handlers has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: Flag --eviction-hard has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: Flag --feature-gates has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: Flag --non-masquerade-cidr has been deprecated, will be removed in a future version
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: Flag --register-schedulable has been deprecated, will be removed in a future version
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.918211 1635 flags.go:27] FLAG: --address="0.0.0.0"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.918577 1635 flags.go:27] FLAG: --allow-privileged="true"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.918922 1635 flags.go:27] FLAG: --alsologtostderr="false"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919263 1635 flags.go:27] FLAG: --anonymous-auth="true"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919646 1635 flags.go:27] FLAG: --application-metrics-count-limit="100"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919660 1635 flags.go:27] FLAG: --authentication-token-webhook="false"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919667 1635 flags.go:27] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919678 1635 flags.go:27] FLAG: --authorization-mode="AlwaysAllow"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919686 1635 flags.go:27] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919693 1635 flags.go:27] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919699 1635 flags.go:27] FLAG: --azure-container-registry-config=""
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919705 1635 flags.go:27] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919712 1635 flags.go:27] FLAG: --bootstrap-checkpoint-path=""
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919718 1635 flags.go:27] FLAG: --bootstrap-kubeconfig=""
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919724 1635 flags.go:27] FLAG: --cadvisor-port="4194"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919733 1635 flags.go:27] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919740 1635 flags.go:27] FLAG: --cgroup-driver="cgroupfs"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919747 1635 flags.go:27] FLAG: --cgroup-root="/"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919753 1635 flags.go:27] FLAG: --cgroups-per-qos="true"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919759 1635 flags.go:27] FLAG: --chaos-chance="0"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919769 1635 flags.go:27] FLAG: --client-ca-file=""
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919775 1635 flags.go:27] FLAG: --cloud-config=""
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919780 1635 flags.go:27] FLAG: --cloud-provider="aws"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919786 1635 flags.go:27] FLAG: --cluster-dns="[100.64.0.10]"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919802 1635 flags.go:27] FLAG: --cluster-domain="cluster.local"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919808 1635 flags.go:27] FLAG: --cni-bin-dir="/opt/cni/bin/"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919816 1635 flags.go:27] FLAG: --cni-conf-dir="/etc/cni/net.d/"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919822 1635 flags.go:27] FLAG: --config=""
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919828 1635 flags.go:27] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919835 1635 flags.go:27] FLAG: --container-log-max-files="5"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919841 1635 flags.go:27] FLAG: --container-log-max-size="10Mi"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919847 1635 flags.go:27] FLAG: --container-runtime="docker"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919852 1635 flags.go:27] FLAG: --container-runtime-endpoint="unix:///var/run/dockershim.sock"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919859 1635 flags.go:27] FLAG: --containerd="unix:///var/run/containerd.sock"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919865 1635 flags.go:27] FLAG: --containerized="false"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919872 1635 flags.go:27] FLAG: --contention-profiling="false"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919878 1635 flags.go:27] FLAG: --cpu-cfs-quota="true"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919883 1635 flags.go:27] FLAG: --cpu-manager-policy="none"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919889 1635 flags.go:27] FLAG: --cpu-manager-reconcile-period="10s"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919896 1635 flags.go:27] FLAG: --docker="unix:///var/run/docker.sock"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919902 1635 flags.go:27] FLAG: --docker-disable-shared-pid="true"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919908 1635 flags.go:27] FLAG: --docker-endpoint="unix:///var/run/docker.sock"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919914 1635 flags.go:27] FLAG: --docker-env-metadata-whitelist=""
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919920 1635 flags.go:27] FLAG: --docker-only="false"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919926 1635 flags.go:27] FLAG: --docker-root="/var/lib/docker"
Jul 12 22:53:35 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919933 1635 flags.go:27] FLAG: --docker-tls="false"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919939 1635 flags.go:27] FLAG: --docker-tls-ca="ca.pem"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919945 1635 flags.go:27] FLAG: --docker-tls-cert="cert.pem"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919951 1635 flags.go:27] FLAG: --docker-tls-key="key.pem"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919957 1635 flags.go:27] FLAG: --dynamic-config-dir=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919965 1635 flags.go:27] FLAG: --enable-controller-attach-detach="true"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919971 1635 flags.go:27] FLAG: --enable-custom-metrics="false"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919977 1635 flags.go:27] FLAG: --enable-debugging-handlers="true"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919984 1635 flags.go:27] FLAG: --enable-load-reader="false"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919990 1635 flags.go:27] FLAG: --enable-server="true"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.919996 1635 flags.go:27] FLAG: --enforce-node-allocatable="[pods]"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920003 1635 flags.go:27] FLAG: --event-burst="10"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920009 1635 flags.go:27] FLAG: --event-qps="5"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920015 1635 flags.go:27] FLAG: --event-storage-age-limit="default=0"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920021 1635 flags.go:27] FLAG: --event-storage-event-limit="default=0"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920028 1635 flags.go:27] FLAG: --eviction-hard="imagefs.available<10%,imagefs.inodesFree<5%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920051 1635 flags.go:27] FLAG: --eviction-max-pod-grace-period="0"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920058 1635 flags.go:27] FLAG: --eviction-minimum-reclaim=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920067 1635 flags.go:27] FLAG: --eviction-pressure-transition-period="5m0s"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920073 1635 flags.go:27] FLAG: --eviction-soft=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920079 1635 flags.go:27] FLAG: --eviction-soft-grace-period=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920086 1635 flags.go:27] FLAG: --exit-on-lock-contention="false"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920092 1635 flags.go:27] FLAG: --experimental-allocatable-ignore-eviction="false"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920098 1635 flags.go:27] FLAG: --experimental-allowed-unsafe-sysctls="[]"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920107 1635 flags.go:27] FLAG: --experimental-bootstrap-kubeconfig=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920115 1635 flags.go:27] FLAG: --experimental-check-node-capabilities-before-mount="false"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920124 1635 flags.go:27] FLAG: --experimental-dockershim="false"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920131 1635 flags.go:27] FLAG: --experimental-dockershim-root-directory="/var/lib/dockershim"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920137 1635 flags.go:27] FLAG: --experimental-fail-swap-on="true"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920143 1635 flags.go:27] FLAG: --experimental-kernel-memcg-notification="false"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920149 1635 flags.go:27] FLAG: --experimental-mounter-path=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920154 1635 flags.go:27] FLAG: --experimental-qos-reserved=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920161 1635 flags.go:27] FLAG: --fail-swap-on="true"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920168 1635 flags.go:27] FLAG: --feature-gates="ExperimentalCriticalPodAnnotation=true"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920178 1635 flags.go:27] FLAG: --file-check-frequency="20s"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920185 1635 flags.go:27] FLAG: --global-housekeeping-interval="1m0s"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920192 1635 flags.go:27] FLAG: --google-json-key=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920198 1635 flags.go:27] FLAG: --hairpin-mode="promiscuous-bridge"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920204 1635 flags.go:27] FLAG: --healthz-bind-address="127.0.0.1"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920210 1635 flags.go:27] FLAG: --healthz-port="10248"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920216 1635 flags.go:27] FLAG: --help="false"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920223 1635 flags.go:27] FLAG: --host-ipc-sources="[*]"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920231 1635 flags.go:27] FLAG: --host-network-sources="[*]"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920240 1635 flags.go:27] FLAG: --host-pid-sources="[*]"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920247 1635 flags.go:27] FLAG: --hostname-override="ip-172-x-y-z.us-west-2.compute.internal"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920254 1635 flags.go:27] FLAG: --housekeeping-interval="10s"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920260 1635 flags.go:27] FLAG: --http-check-frequency="20s"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920266 1635 flags.go:27] FLAG: --image-gc-high-threshold="85"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920272 1635 flags.go:27] FLAG: --image-gc-low-threshold="80"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920279 1635 flags.go:27] FLAG: --image-pull-progress-deadline="1m0s"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920285 1635 flags.go:27] FLAG: --image-service-endpoint=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920291 1635 flags.go:27] FLAG: --iptables-drop-bit="15"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920297 1635 flags.go:27] FLAG: --iptables-masquerade-bit="14"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920303 1635 flags.go:27] FLAG: --keep-terminated-pod-volumes="false"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920308 1635 flags.go:27] FLAG: --kube-api-burst="10"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920314 1635 flags.go:27] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920321 1635 flags.go:27] FLAG: --kube-api-qps="5"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920328 1635 flags.go:27] FLAG: --kube-reserved=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920334 1635 flags.go:27] FLAG: --kube-reserved-cgroup=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920340 1635 flags.go:27] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920346 1635 flags.go:27] FLAG: --kubelet-cgroups=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920352 1635 flags.go:27] FLAG: --lock-file=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920357 1635 flags.go:27] FLAG: --log-backtrace-at=":0"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920365 1635 flags.go:27] FLAG: --log-cadvisor-usage="false"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920371 1635 flags.go:27] FLAG: --log-dir=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920377 1635 flags.go:27] FLAG: --log-flush-frequency="5s"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920383 1635 flags.go:27] FLAG: --logtostderr="true"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920389 1635 flags.go:27] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920396 1635 flags.go:27] FLAG: --make-iptables-util-chains="true"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920402 1635 flags.go:27] FLAG: --manifest-url=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920408 1635 flags.go:27] FLAG: --manifest-url-header=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920417 1635 flags.go:27] FLAG: --master-service-namespace="default"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920424 1635 flags.go:27] FLAG: --max-open-files="1000000"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920433 1635 flags.go:27] FLAG: --max-pods="110"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920439 1635 flags.go:27] FLAG: --maximum-dead-containers="-1"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920445 1635 flags.go:27] FLAG: --maximum-dead-containers-per-container="1"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920451 1635 flags.go:27] FLAG: --minimum-container-ttl-duration="0s"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920457 1635 flags.go:27] FLAG: --minimum-image-ttl-duration="2m0s"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920463 1635 flags.go:27] FLAG: --network-plugin="kubenet"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920469 1635 flags.go:27] FLAG: --network-plugin-mtu="9001"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920475 1635 flags.go:27] FLAG: --node-ip=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920482 1635 flags.go:27] FLAG: --node-labels="kops.k8s.io/instancegroup=master-us-west-2a,kubernetes.io/role=master,node-role.kubernetes.io/master="
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920496 1635 flags.go:27] FLAG: --node-status-update-frequency="10s"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920502 1635 flags.go:27] FLAG: --non-masquerade-cidr="100.64.0.0/10"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920508 1635 flags.go:27] FLAG: --oom-score-adj="-999"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920514 1635 flags.go:27] FLAG: --pod-cidr=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920520 1635 flags.go:27] FLAG: --pod-infra-container-image="k8s.gcr.io/pause-amd64:3.0"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920526 1635 flags.go:27] FLAG: --pod-manifest-path="/etc/kubernetes/manifests"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920532 1635 flags.go:27] FLAG: --pod-max-pids="-1"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920539 1635 flags.go:27] FLAG: --pods-per-core="0"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920545 1635 flags.go:27] FLAG: --port="10250"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920551 1635 flags.go:27] FLAG: --protect-kernel-defaults="false"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920557 1635 flags.go:27] FLAG: --provider-id=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920563 1635 flags.go:27] FLAG: --read-only-port="10255"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920569 1635 flags.go:27] FLAG: --really-crash-for-testing="false"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920575 1635 flags.go:27] FLAG: --register-node="true"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920581 1635 flags.go:27] FLAG: --register-schedulable="true"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920587 1635 flags.go:27] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920600 1635 flags.go:27] FLAG: --registry-burst="10"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920606 1635 flags.go:27] FLAG: --registry-qps="5"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920612 1635 flags.go:27] FLAG: --resolv-conf="/etc/resolv.conf"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920618 1635 flags.go:27] FLAG: --rkt-api-endpoint="localhost:15441"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920624 1635 flags.go:27] FLAG: --rkt-path=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920629 1635 flags.go:27] FLAG: --rkt-stage1-image=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920635 1635 flags.go:27] FLAG: --root-dir="/var/lib/kubelet"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920642 1635 flags.go:27] FLAG: --rotate-certificates="false"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920648 1635 flags.go:27] FLAG: --runonce="false"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920654 1635 flags.go:27] FLAG: --runtime-cgroups=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920659 1635 flags.go:27] FLAG: --runtime-request-timeout="2m0s"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920665 1635 flags.go:27] FLAG: --seccomp-profile-root="/var/lib/kubelet/seccomp"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920671 1635 flags.go:27] FLAG: --serialize-image-pulls="true"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920677 1635 flags.go:27] FLAG: --stderrthreshold="2"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920683 1635 flags.go:27] FLAG: --storage-driver-buffer-duration="1m0s"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920690 1635 flags.go:27] FLAG: --storage-driver-db="cadvisor"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920696 1635 flags.go:27] FLAG: --storage-driver-host="localhost:8086"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920702 1635 flags.go:27] FLAG: --storage-driver-password="root"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920708 1635 flags.go:27] FLAG: --storage-driver-secure="false"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920719 1635 flags.go:27] FLAG: --storage-driver-table="stats"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920725 1635 flags.go:27] FLAG: --storage-driver-user="root"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920731 1635 flags.go:27] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920737 1635 flags.go:27] FLAG: --sync-frequency="1m0s"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920744 1635 flags.go:27] FLAG: --system-cgroups=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920750 1635 flags.go:27] FLAG: --system-reserved=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920756 1635 flags.go:27] FLAG: --system-reserved-cgroup=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920762 1635 flags.go:27] FLAG: --tls-cert-file=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920768 1635 flags.go:27] FLAG: --tls-cipher-suites="[]"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920778 1635 flags.go:27] FLAG: --tls-min-version=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920784 1635 flags.go:27] FLAG: --tls-private-key-file=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920789 1635 flags.go:27] FLAG: --v="2"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920796 1635 flags.go:27] FLAG: --version="false"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920806 1635 flags.go:27] FLAG: --vmodule=""
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920812 1635 flags.go:27] FLAG: --volume-plugin-dir="/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920819 1635 flags.go:27] FLAG: --volume-stats-agg-period="1m0s"
Jul 12 22:53:36 ip-172-x-y-z kubelet[1635]: I0712 22:53:35.920848 1635 feature_gate.go:226] feature gates: &{{} map[ExperimentalCriticalPodAnnotation:true]}
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: I0712 22:53:37.522007 1635 mount_linux.go:211] Detected OS with systemd
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: W0712 22:53:37.522161 1635 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d/
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: I0712 22:53:37.541630 1635 iptables.go:198] Could not connect to D-Bus system bus: dial unix /var/run/dbus/system_bus_socket: connect: no such file or directory
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: W0712 22:53:37.541682 1635 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: I0712 22:53:37.541710 1635 server.go:376] Version: v1.10.3
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: I0712 22:53:37.541770 1635 feature_gate.go:226] feature gates: &{{} map[ExperimentalCriticalPodAnnotation:true]}
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: I0712 22:53:37.541958 1635 aws.go:1026] Building AWS cloudprovider
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: I0712 22:53:37.541989 1635 aws.go:988] Zone not specified in configuration file; querying AWS metadata service
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: I0712 22:53:37.912467 1635 tags.go:76] AWS cloud filtering on ClusterID: k8s.example.com
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: I0712 22:53:37.913133 1635 server.go:494] Successfully initialized cloud provider: "aws" from the config file: ""
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: I0712 22:53:37.913535 1635 server.go:732] cloud provider determined current node name to be ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: I0712 22:53:37.917197 1635 manager.go:154] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct"
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: I0712 22:53:37.958114 1635 fs.go:142] Filesystem UUIDs: map[a9ace4b2-cb9e-4b3e-acce-1721ddcf8917:/dev/xvdu d8d35a68-455c-477a-a5da-f106e36bd695:/dev/xvda1 f51d3941-73e3-4af3-b9f9-d6ccd2003432:/dev/xvdv 68b01e07-4e65-403b-aefd-0aad1f103f98:/dev/xvdc]
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: I0712 22:53:37.958142 1635 fs.go:143] Filesystem partitions: map[/dev/xvdu:{mountpoint:/mnt/master-vol-05afc76a2db15883e major:202 minor:5120 fsType:ext4 blockSize:0} /dev/xvdv:{mountpoint:/mnt/master-vol-086321b7659228943 major:202 minor:5376 fsType:ext4 blockSize:0} tmpfs:{mountpoint:/run major:0 minor:17 fsType:tmpfs blockSize:0} /dev/xvda1:{mountpoint:/var/lib/docker/overlay major:202 minor:1 fsType:ext4 blockSize:0} /dev/xvdc:{mountpoint:/mnt major:202 minor:32 fsType:ext3 blockSize:0} shm:{mountpoint:/var/lib/docker/containers/b70cc9c844c1ea37bf61a46b76c1862c838bcbf40c298c25e691251454368919/shm major:0 minor:38 fsType:tmpfs blockSize:0}]
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: I0712 22:53:37.961928 1635 manager.go:227] Machine: {NumCores:1 CpuFrequency:2500084 MemoryCapacity:3949846528 HugePages:[{PageSize:2048 NumPages:0}] MachineID:1033d92edec04302aea1546e2ccd621f SystemUUID:EC20DE04-B64D-EAA7-288D-0FA1F20B6EFC BootID:cfbd5a44-2f0d-44e7-8169-c0988a913f30 Filesystems:[{Device:tmpfs DeviceMajor:0 DeviceMinor:17 Capacity:789970944 Type:vfs Inodes:482159 HasInodes:true} {Device:/dev/xvda1 DeviceMajor:202 DeviceMinor:1 Capacity:64278245376 Type:vfs Inodes:16777216 HasInodes:true} {Device:/dev/xvdc DeviceMajor:202 DeviceMinor:32 Capacity:4154654720 Type:vfs Inodes:262144 HasInodes:true} {Device:shm DeviceMajor:0 DeviceMinor:38 Capacity:67108864 Type:vfs Inodes:482159 HasInodes:true} {Device:/dev/xvdu DeviceMajor:202 DeviceMinor:5120 Capacity:21003628544 Type:vfs Inodes:1310720 HasInodes:true} {Device:/dev/xvdv DeviceMajor:202 DeviceMinor:5376 Capacity:21003628544 Type:vfs Inodes:1310720 HasInodes:true}] DiskMap:map[202:0:{Name:xvda Major:202 Minor:0 Size:68719476736 Scheduler:none} 202:32:{Name:xvdc Major:202 Minor:32 Size:4289200128 Scheduler:none} 202:5120:{Name:xvdu Major:202 Minor:5120 Size:21474836480 Scheduler:none} 202:5376:{Name:xvdv Major:202 Minor:5376 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:eth0 MacAddress:02:da:1f:9c:f3:0e Speed:0 Mtu:9001}] Topology:[{Id:0 Memory:3949846528 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:26214400 Type:Unified Level:3}]}] CloudProvider:AWS InstanceType:m3.medium InstanceID:i-0783b72cf1dc797c1}
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: I0712 22:53:37.971433 1635 manager.go:233] Version: {KernelVersion:4.4.121-k8s ContainerOsVersion:Debian GNU/Linux 8 (jessie) DockerVersion:17.03.2-ce DockerAPIVersion:1.27 CadvisorVersion: CadvisorRevision:}
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: I0712 22:53:37.972567 1635 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: /
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: I0712 22:53:37.972960 1635 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true}
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: I0712 22:53:37.973649 1635 container_manager_linux.go:266] Creating device plugin manager: true
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: I0712 22:53:37.974031 1635 manager.go:102] Creating Device Plugin manager at /var/lib/kubelet/device-plugins/kubelet.sock
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: I0712 22:53:37.974477 1635 state_mem.go:36] [cpumanager] initializing new in-memory state store
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: I0712 22:53:37.974941 1635 state_file.go:82] [cpumanager] state file: created new state file "/var/lib/kubelet/cpu_manager_state"
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: I0712 22:53:37.975358 1635 server.go:732] cloud provider determined current node name to be ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: I0712 22:53:37.975752 1635 server.go:888] Using root directory: /var/lib/kubelet
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: I0712 22:53:37.976153 1635 kubelet.go:387] cloud provider determined current node name to be ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: I0712 22:53:37.976175 1635 kubelet.go:273] Adding pod path: /etc/kubernetes/manifests
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: I0712 22:53:37.976190 1635 file.go:52] Watching path "/etc/kubernetes/manifests"
Jul 12 22:53:37 ip-172-x-y-z kubelet[1635]: I0712 22:53:37.976201 1635 kubelet.go:298] Watching apiserver
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: E0712 22:53:38.038164 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: E0712 22:53:38.038680 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: E0712 22:53:38.047204 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.066309 1635 iptables.go:198] Could not connect to D-Bus system bus: dial unix /var/run/dbus/system_bus_socket: connect: no such file or directory
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.066692 1635 kubelet.go:558] Hairpin mode set to "promiscuous-bridge"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.071333 1635 plugins.go:190] Loaded network plugin "kubenet"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.071364 1635 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.080145 1635 client.go:104] Start docker client with request timeout=2m0s
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: W0712 22:53:38.129368 1635 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d/
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.150607 1635 iptables.go:198] Could not connect to D-Bus system bus: dial unix /var/run/dbus/system_bus_socket: connect: no such file or directory
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: W0712 22:53:38.151016 1635 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.153310 1635 plugins.go:190] Loaded network plugin "kubenet"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.153719 1635 docker_service.go:244] Docker cri networking managed by kubenet
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.160596 1635 docker_service.go:249] Docker Info: &{ID:OZW2:XQYG:HNRH:Q36R:LGHP:26T7:AYGW:HD23:LZK5:YG7P:SNY5:TMVM Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay DriverStatus:[[Backing Filesystem extfs] [Supports d_type true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[]} MemoryLimit:true SwapLimit:false KernelMemory:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2018-07-12T22:53:38.155423503Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:1 KernelVersion:4.4.121-k8s OperatingSystem:Debian GNU/Linux 8 (jessie) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc4206ceaf0 NCPU:1 MemTotal:3949846528 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-x-y-z Labels:[] ExperimentalBuild:false ServerVersion:17.03.2-ce ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:docker-runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:0xc420b2a3c0} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:4ab9917febca54791c5f071a9d1f404867857fcc Expected:4ab9917febca54791c5f071a9d1f404867857fcc} RuncCommit:{ID:54296cf40ad8143b62dbcaa1d90e520a2136ddfe Expected:54296cf40ad8143b62dbcaa1d90e520a2136ddfe} InitCommit:{ID:949e6fa Expected:949e6fa} SecurityOptions:[]}
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.160669 1635 docker_service.go:262] Setting cgroupDriver to cgroupfs
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.160719 1635 kubelet.go:636] Starting the GRPC server for the docker CRI shim.
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.160734 1635 docker_server.go:57] Start dockershim grpc server
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.177068 1635 remote_runtime.go:43] Connecting to runtime service unix:///var/run/dockershim.sock
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.178802 1635 kuberuntime_manager.go:186] Container runtime docker initialized, version: 17.03.2-ce, apiVersion: 1.27.0
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: W0712 22:53:38.179444 1635 probe.go:215] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.179648 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/aws-ebs"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.179663 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/empty-dir"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.179673 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/gce-pd"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.179682 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/git-repo"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.188967 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/host-path"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.188982 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/nfs"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.188993 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/secret"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.189002 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/iscsi"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.189013 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/glusterfs"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.189023 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/rbd"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.189033 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/cinder"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.189042 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/quobyte"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.189052 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/cephfs"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.189064 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/downward-api"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.189073 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/fc"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.189082 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/flocker"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.189092 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/azure-file"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.189122 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/configmap"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.189133 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/vsphere-volume"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.189143 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/azure-disk"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.189152 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/photon-pd"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.189162 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/projected"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.189172 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/portworx-volume"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.189182 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/scaleio"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.189227 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/local-volume"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.189241 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/storageos"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.189256 1635 csi_plugin.go:61] kubernetes.io/csi: plugin initializing...
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.189286 1635 plugins.go:454] Loaded volume plugin "kubernetes.io/csi"
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.190178 1635 server.go:149] Starting to listen read-only on 0.0.0.0:10255
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: E0712 22:53:38.191226 1635 kubelet.go:1282] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.191647 1635 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.191665 1635 status_manager.go:140] Starting to sync pod status with apiserver
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.191697 1635 kubelet.go:1782] Starting kubelet main sync loop.
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.191716 1635 kubelet.go:1799] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.191841 1635 server.go:129] Starting to listen on 0.0.0.0:10250
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.192491 1635 server.go:299] Adding debug handlers to kubelet server.
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.195177 1635 volume_manager.go:245] The desired_state_of_world populator starts
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.195187 1635 volume_manager.go:247] Starting Kubelet Volume Manager
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.195870 1635 server.go:945] Started kubelet
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.195896 1635 desired_state_of_world_populator.go:129] Desired state populator starts to run
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: E0712 22:53:38.208296 1635 event.go:209] Unable to write event: 'Post https://127.0.0.1/api/v1/namespaces/default/events: dial tcp 127.0.0.1:443: getsockopt: connection refused' (may retry after sleeping)
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: E0712 22:53:38.225953 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.253515 1635 factory.go:356] Registering Docker factory
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.291888 1635 kubelet.go:1799] skipping pod synchronization - [container runtime is down]
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.295365 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.295768 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.296167 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.296553 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.301889 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.302311 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.302708 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.303102 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.303531 1635 kubelet_node_status.go:82] Attempting to register node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: E0712 22:53:38.304181 1635 kubelet_node_status.go:106] Unable to register node "ip-172-x-y-z.us-west-2.compute.internal" with API server: Post https://127.0.0.1/api/v1/nodes: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.492463 1635 kubelet.go:1799] skipping pod synchronization - [container runtime is down]
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.504741 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.505128 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.505490 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.505843 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.511215 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.511660 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.512053 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.512442 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.512837 1635 kubelet_node_status.go:82] Attempting to register node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: E0712 22:53:38.513457 1635 kubelet_node_status.go:106] Unable to register node "ip-172-x-y-z.us-west-2.compute.internal" with API server: Post https://127.0.0.1/api/v1/nodes: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.893004 1635 kubelet.go:1799] skipping pod synchronization - [container runtime is down]
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.913986 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.914394 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.914782 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.915164 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.928397 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.928810 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.929204 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.929595 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: I0712 22:53:38.929994 1635 kubelet_node_status.go:82] Attempting to register node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: E0712 22:53:38.930638 1635 kubelet_node_status.go:106] Unable to register node "ip-172-x-y-z.us-west-2.compute.internal" with API server: Post https://127.0.0.1/api/v1/nodes: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:39 ip-172-x-y-z kubelet[1635]: E0712 22:53:39.039041 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:39 ip-172-x-y-z kubelet[1635]: E0712 22:53:39.048189 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:39 ip-172-x-y-z kubelet[1635]: E0712 22:53:39.066642 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:39 ip-172-x-y-z kubelet[1635]: I0712 22:53:39.693674 1635 kubelet.go:1799] skipping pod synchronization - [container runtime is down]
Jul 12 22:53:39 ip-172-x-y-z kubelet[1635]: I0712 22:53:39.731208 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:53:39 ip-172-x-y-z kubelet[1635]: I0712 22:53:39.731783 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:53:39 ip-172-x-y-z kubelet[1635]: I0712 22:53:39.732177 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:53:39 ip-172-x-y-z kubelet[1635]: I0712 22:53:39.732565 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:53:39 ip-172-x-y-z kubelet[1635]: I0712 22:53:39.738069 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:39 ip-172-x-y-z kubelet[1635]: I0712 22:53:39.738491 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:39 ip-172-x-y-z kubelet[1635]: I0712 22:53:39.738891 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:39 ip-172-x-y-z kubelet[1635]: I0712 22:53:39.739283 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:39 ip-172-x-y-z kubelet[1635]: I0712 22:53:39.739720 1635 kubelet_node_status.go:82] Attempting to register node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:39 ip-172-x-y-z kubelet[1635]: E0712 22:53:39.740367 1635 kubelet_node_status.go:106] Unable to register node "ip-172-x-y-z.us-west-2.compute.internal" with API server: Post https://127.0.0.1/api/v1/nodes: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: E0712 22:53:40.039910 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: E0712 22:53:40.048986 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: E0712 22:53:40.067438 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: I0712 22:53:40.253824 1635 factory.go:54] Registering systemd factory
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: I0712 22:53:40.254481 1635 factory.go:86] Registering Raw factory
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: I0712 22:53:40.255013 1635 manager.go:1205] Started watching for new ooms in manager
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: I0712 22:53:40.256058 1635 manager.go:356] Starting recovery of all containers
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: I0712 22:53:40.283565 1635 manager.go:361] Recovery completed
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: I0712 22:53:40.286750 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: I0712 22:53:40.287138 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: I0712 22:53:40.287551 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: I0712 22:53:40.287909 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: I0712 22:53:40.302536 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: I0712 22:53:40.302931 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: I0712 22:53:40.303299 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: I0712 22:53:40.303697 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: I0712 22:53:40.304070 1635 cpu_manager.go:155] [cpumanager] starting with none policy
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: I0712 22:53:40.304415 1635 cpu_manager.go:156] [cpumanager] reconciling every 10s
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: I0712 22:53:40.304763 1635 policy_none.go:42] [cpumanager] none policy: Start
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: I0712 22:53:40.305552 1635 container_manager_linux.go:369] Updating kernel flag: vm/overcommit_memory, expected value: 1, actual value: 0
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: I0712 22:53:40.334065 1635 manager.go:205] Starting Device Plugin manager
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: I0712 22:53:40.334876 1635 manager.go:237] Serving device plugin registration server on "/var/lib/kubelet/device-plugins/kubelet.sock"
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: E0712 22:53:40.335298 1635 eviction_manager.go:247] eviction manager: failed to get get summary stats: failed to get node info: node "ip-172-x-y-z.us-west-2.compute.internal" not found
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: W0712 22:53:40.336866 1635 container_manager_linux.go:791] CPUAccounting not enabled for pid: 1386
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: W0712 22:53:40.337249 1635 container_manager_linux.go:794] MemoryAccounting not enabled for pid: 1386
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: I0712 22:53:40.337609 1635 container_manager_linux.go:427] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: W0712 22:53:40.338026 1635 container_manager_linux.go:791] CPUAccounting not enabled for pid: 1635
Jul 12 22:53:40 ip-172-x-y-z kubelet[1635]: W0712 22:53:40.338373 1635 container_manager_linux.go:794] MemoryAccounting not enabled for pid: 1635
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: E0712 22:53:41.041032 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: E0712 22:53:41.049773 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: E0712 22:53:41.068222 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.294497 1635 kubelet.go:1861] SyncLoop (ADD, "file"): "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd), kube-proxy-ip-172-x-y-z.us-west-2.compute.internal_kube-system(ebcdbad12e23036616a75b4591735ea1), kube-scheduler-ip-172-x-y-z.us-west-2.compute.internal_kube-system(9490af50fbd859e60994c08c8af0c55c), etcd-server-events-ip-172-x-y-z.us-west-2.compute.internal_kube-system(e17653dc7c659b517905a5b8a82bab6c), etcd-server-ip-172-x-y-z.us-west-2.compute.internal_kube-system(584af863a533de6bd60fc3b70f54b9db), kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(28832ee64de4d2f079c81d4acb312edc)"
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.295320 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.295720 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.296076 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.296439 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.302030 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.302461 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.302862 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.303261 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.304252 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcssl" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-etcssl") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.304689 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcpkitls" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-etcpkitls") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.305099 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcpkica-trust" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-etcpkica-trust") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.305515 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "usrssl" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-usrssl") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.305932 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "usrlibssl" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-usrlibssl") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.306342 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "logfile" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-logfile") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.306375 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "usrsharessl" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-usrsharessl") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.306407 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "usrlocalopenssl" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-usrlocalopenssl") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.306435 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "varssl" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-varssl") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.306481 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcopenssl" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-etcopenssl") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.306513 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "srvkube" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-srvkube") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.306541 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "varlibkcm" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-varlibkcm") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.329405 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.329423 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.329431 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.329439 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.329716 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.329730 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.329738 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.329745 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.339000 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.339018 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.339030 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.339042 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.358582 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.358963 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.359327 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.359711 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: W0712 22:53:41.360417 1635 status_manager.go:461] Failed to get status for pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)": Get https://127.0.0.1/api/v1/namespaces/kube-system/pods/kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.361007 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.361374 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.361738 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.362097 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.365277 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.365657 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.366022 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.366376 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.366945 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.367315 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.367722 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.368076 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.379278 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.379719 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.380118 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.380509 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.380913 1635 kubelet_node_status.go:82] Attempting to register node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: E0712 22:53:41.382104 1635 kubelet_node_status.go:106] Unable to register node "ip-172-x-y-z.us-west-2.compute.internal" with API server: Post https://127.0.0.1/api/v1/nodes: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.395049 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.395068 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.395086 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.395098 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.397165 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.397185 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.397197 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.397209 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.409833 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.410244 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.410635 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.411017 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.411804 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.412192 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.412578 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.412954 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.415758 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "usrlocalopenssl" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-usrlocalopenssl") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.416176 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "varssl" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-varssl") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.416591 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "etcopenssl" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-etcopenssl") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.417012 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "srvkube" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-srvkube") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.425525 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "logfile" (UniqueName: "kubernetes.io/host-path/ebcdbad12e23036616a75b4591735ea1-logfile") pod "kube-proxy-ip-172-x-y-z.us-west-2.compute.internal" (UID: "ebcdbad12e23036616a75b4591735ea1")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.425556 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "logfile" (UniqueName: "kubernetes.io/host-path/9490af50fbd859e60994c08c8af0c55c-logfile") pod "kube-scheduler-ip-172-x-y-z.us-west-2.compute.internal" (UID: "9490af50fbd859e60994c08c8af0c55c")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.425589 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "etcssl" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-etcssl") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.425629 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "etcpkitls" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-etcpkitls") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.425676 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "etcpkica-trust" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-etcpkica-trust") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.425712 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "iptableslock" (UniqueName: "kubernetes.io/host-path/ebcdbad12e23036616a75b4591735ea1-iptableslock") pod "kube-proxy-ip-172-x-y-z.us-west-2.compute.internal" (UID: "ebcdbad12e23036616a75b4591735ea1")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.425742 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "usrsharessl" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-usrsharessl") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.425771 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "varlibkcm" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-varlibkcm") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.425797 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/ebcdbad12e23036616a75b4591735ea1-kubeconfig") pod "kube-proxy-ip-172-x-y-z.us-west-2.compute.internal" (UID: "ebcdbad12e23036616a75b4591735ea1")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.425823 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "varlibkubescheduler" (UniqueName: "kubernetes.io/host-path/9490af50fbd859e60994c08c8af0c55c-varlibkubescheduler") pod "kube-scheduler-ip-172-x-y-z.us-west-2.compute.internal" (UID: "9490af50fbd859e60994c08c8af0c55c")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.425852 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "usrssl" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-usrssl") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.425880 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "usrlibssl" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-usrlibssl") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.425935 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "logfile" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-logfile") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.425987 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "modules" (UniqueName: "kubernetes.io/host-path/ebcdbad12e23036616a75b4591735ea1-modules") pod "kube-proxy-ip-172-x-y-z.us-west-2.compute.internal" (UID: "ebcdbad12e23036616a75b4591735ea1")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.426013 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ssl-certs-hosts" (UniqueName: "kubernetes.io/host-path/ebcdbad12e23036616a75b4591735ea1-ssl-certs-hosts") pod "kube-proxy-ip-172-x-y-z.us-west-2.compute.internal" (UID: "ebcdbad12e23036616a75b4591735ea1")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.426649 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "usrlocalopenssl" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-usrlocalopenssl") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.426699 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "varssl" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-varssl") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.426745 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "etcopenssl" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-etcopenssl") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.426793 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "srvkube" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-srvkube") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.426880 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "etcssl" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-etcssl") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.426917 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "etcpkitls" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-etcpkitls") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.426952 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "etcpkica-trust" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-etcpkica-trust") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.427001 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "usrsharessl" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-usrsharessl") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.427036 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "varlibkcm" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-varlibkcm") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.427102 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "usrssl" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-usrssl") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.427140 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "usrlibssl" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-usrlibssl") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.427184 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "logfile" (UniqueName: "kubernetes.io/host-path/358effbb6a9829e718b6ec105343a9cd-logfile") pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" (UID: "358effbb6a9829e718b6ec105343a9cd")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: W0712 22:53:41.427299 1635 status_manager.go:461] Failed to get status for pod "kube-proxy-ip-172-x-y-z.us-west-2.compute.internal_kube-system(ebcdbad12e23036616a75b4591735ea1)": Get https://127.0.0.1/api/v1/namespaces/kube-system/pods/kube-proxy-ip-172-x-y-z.us-west-2.compute.internal: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.454881 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.454902 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.454915 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.454927 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.456531 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.456554 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.456567 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.456579 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.470126 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.470140 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.470148 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.470155 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.470815 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.470829 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.470837 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.470844 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: W0712 22:53:41.471072 1635 status_manager.go:461] Failed to get status for pod "kube-scheduler-ip-172-x-y-z.us-west-2.compute.internal_kube-system(9490af50fbd859e60994c08c8af0c55c)": Get https://127.0.0.1/api/v1/namespaces/kube-system/pods/kube-scheduler-ip-172-x-y-z.us-west-2.compute.internal: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.503120 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.503538 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.503942 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.504332 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.505245 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.505640 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.506034 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.506420 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.512122 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.512509 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.512895 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.513279 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.514207 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.522704 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.523040 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.523384 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: W0712 22:53:41.523969 1635 status_manager.go:461] Failed to get status for pod "etcd-server-events-ip-172-x-y-z.us-west-2.compute.internal_kube-system(e17653dc7c659b517905a5b8a82bab6c)": Get https://127.0.0.1/api/v1/namespaces/kube-system/pods/etcd-server-events-ip-172-x-y-z.us-west-2.compute.internal: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.526911 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "logfile" (UniqueName: "kubernetes.io/host-path/ebcdbad12e23036616a75b4591735ea1-logfile") pod "kube-proxy-ip-172-x-y-z.us-west-2.compute.internal" (UID: "ebcdbad12e23036616a75b4591735ea1")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.527311 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "logfile" (UniqueName: "kubernetes.io/host-path/9490af50fbd859e60994c08c8af0c55c-logfile") pod "kube-scheduler-ip-172-x-y-z.us-west-2.compute.internal" (UID: "9490af50fbd859e60994c08c8af0c55c")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.527349 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "iptableslock" (UniqueName: "kubernetes.io/host-path/ebcdbad12e23036616a75b4591735ea1-iptableslock") pod "kube-proxy-ip-172-x-y-z.us-west-2.compute.internal" (UID: "ebcdbad12e23036616a75b4591735ea1")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.527383 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "varlogetcd" (UniqueName: "kubernetes.io/host-path/e17653dc7c659b517905a5b8a82bab6c-varlogetcd") pod "etcd-server-events-ip-172-x-y-z.us-west-2.compute.internal" (UID: "e17653dc7c659b517905a5b8a82bab6c")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.527412 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/ebcdbad12e23036616a75b4591735ea1-kubeconfig") pod "kube-proxy-ip-172-x-y-z.us-west-2.compute.internal" (UID: "ebcdbad12e23036616a75b4591735ea1")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.527466 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "varlibkubescheduler" (UniqueName: "kubernetes.io/host-path/9490af50fbd859e60994c08c8af0c55c-varlibkubescheduler") pod "kube-scheduler-ip-172-x-y-z.us-west-2.compute.internal" (UID: "9490af50fbd859e60994c08c8af0c55c")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.527495 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "varetcdata" (UniqueName: "kubernetes.io/host-path/e17653dc7c659b517905a5b8a82bab6c-varetcdata") pod "etcd-server-events-ip-172-x-y-z.us-west-2.compute.internal" (UID: "e17653dc7c659b517905a5b8a82bab6c")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.527519 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "hosts" (UniqueName: "kubernetes.io/host-path/e17653dc7c659b517905a5b8a82bab6c-hosts") pod "etcd-server-events-ip-172-x-y-z.us-west-2.compute.internal" (UID: "e17653dc7c659b517905a5b8a82bab6c")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.527544 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "varetcdata" (UniqueName: "kubernetes.io/host-path/584af863a533de6bd60fc3b70f54b9db-varetcdata") pod "etcd-server-ip-172-x-y-z.us-west-2.compute.internal" (UID: "584af863a533de6bd60fc3b70f54b9db")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.527567 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "hosts" (UniqueName: "kubernetes.io/host-path/584af863a533de6bd60fc3b70f54b9db-hosts") pod "etcd-server-ip-172-x-y-z.us-west-2.compute.internal" (UID: "584af863a533de6bd60fc3b70f54b9db")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.530100 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "modules" (UniqueName: "kubernetes.io/host-path/ebcdbad12e23036616a75b4591735ea1-modules") pod "kube-proxy-ip-172-x-y-z.us-west-2.compute.internal" (UID: "ebcdbad12e23036616a75b4591735ea1")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.530143 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "ssl-certs-hosts" (UniqueName: "kubernetes.io/host-path/ebcdbad12e23036616a75b4591735ea1-ssl-certs-hosts") pod "kube-proxy-ip-172-x-y-z.us-west-2.compute.internal" (UID: "ebcdbad12e23036616a75b4591735ea1")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.530172 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "varlogetcd" (UniqueName: "kubernetes.io/host-path/584af863a533de6bd60fc3b70f54b9db-varlogetcd") pod "etcd-server-ip-172-x-y-z.us-west-2.compute.internal" (UID: "584af863a533de6bd60fc3b70f54b9db")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.530260 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "logfile" (UniqueName: "kubernetes.io/host-path/ebcdbad12e23036616a75b4591735ea1-logfile") pod "kube-proxy-ip-172-x-y-z.us-west-2.compute.internal" (UID: "ebcdbad12e23036616a75b4591735ea1")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.530303 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "logfile" (UniqueName: "kubernetes.io/host-path/9490af50fbd859e60994c08c8af0c55c-logfile") pod "kube-scheduler-ip-172-x-y-z.us-west-2.compute.internal" (UID: "9490af50fbd859e60994c08c8af0c55c")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.530370 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "iptableslock" (UniqueName: "kubernetes.io/host-path/ebcdbad12e23036616a75b4591735ea1-iptableslock") pod "kube-proxy-ip-172-x-y-z.us-west-2.compute.internal" (UID: "ebcdbad12e23036616a75b4591735ea1")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.530457 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/ebcdbad12e23036616a75b4591735ea1-kubeconfig") pod "kube-proxy-ip-172-x-y-z.us-west-2.compute.internal" (UID: "ebcdbad12e23036616a75b4591735ea1")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.530505 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "varlibkubescheduler" (UniqueName: "kubernetes.io/host-path/9490af50fbd859e60994c08c8af0c55c-varlibkubescheduler") pod "kube-scheduler-ip-172-x-y-z.us-west-2.compute.internal" (UID: "9490af50fbd859e60994c08c8af0c55c")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.530608 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "modules" (UniqueName: "kubernetes.io/host-path/ebcdbad12e23036616a75b4591735ea1-modules") pod "kube-proxy-ip-172-x-y-z.us-west-2.compute.internal" (UID: "ebcdbad12e23036616a75b4591735ea1")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.530646 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "ssl-certs-hosts" (UniqueName: "kubernetes.io/host-path/ebcdbad12e23036616a75b4591735ea1-ssl-certs-hosts") pod "kube-proxy-ip-172-x-y-z.us-west-2.compute.internal" (UID: "ebcdbad12e23036616a75b4591735ea1")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.549296 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.549708 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.550104 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.550498 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: W0712 22:53:41.565938 1635 status_manager.go:461] Failed to get status for pod "etcd-server-ip-172-x-y-z.us-west-2.compute.internal_kube-system(584af863a533de6bd60fc3b70f54b9db)": Get https://127.0.0.1/api/v1/namespaces/kube-system/pods/etcd-server-ip-172-x-y-z.us-west-2.compute.internal: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.567239 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.567666 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.568060 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.568449 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.571838 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.572231 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.572612 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.572989 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.594143 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.594540 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.594933 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.595300 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: W0712 22:53:41.612260 1635 status_manager.go:461] Failed to get status for pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(28832ee64de4d2f079c81d4acb312edc)": Get https://127.0.0.1/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.630426 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "varlogetcd" (UniqueName: "kubernetes.io/host-path/584af863a533de6bd60fc3b70f54b9db-varlogetcd") pod "etcd-server-ip-172-x-y-z.us-west-2.compute.internal" (UID: "584af863a533de6bd60fc3b70f54b9db")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.630459 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "logfile" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-logfile") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.630490 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "usrlocalopenssl" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-usrlocalopenssl") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.630529 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "varssl" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-varssl") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.630556 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcopenssl" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-etcopenssl") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.630586 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "varetcdata" (UniqueName: "kubernetes.io/host-path/e17653dc7c659b517905a5b8a82bab6c-varetcdata") pod "etcd-server-events-ip-172-x-y-z.us-west-2.compute.internal" (UID: "e17653dc7c659b517905a5b8a82bab6c")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.630611 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcssl" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-etcssl") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.630635 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcpkitls" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-etcpkitls") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.630660 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "usrsharessl" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-usrsharessl") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.630685 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "srvkube" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-srvkube") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.630709 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "usrlibssl" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-usrlibssl") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.630738 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "varlogetcd" (UniqueName: "kubernetes.io/host-path/e17653dc7c659b517905a5b8a82bab6c-varlogetcd") pod "etcd-server-events-ip-172-x-y-z.us-west-2.compute.internal" (UID: "e17653dc7c659b517905a5b8a82bab6c")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.630764 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcpkica-trust" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-etcpkica-trust") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.630793 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "hosts" (UniqueName: "kubernetes.io/host-path/e17653dc7c659b517905a5b8a82bab6c-hosts") pod "etcd-server-events-ip-172-x-y-z.us-west-2.compute.internal" (UID: "e17653dc7c659b517905a5b8a82bab6c")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.630821 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "varetcdata" (UniqueName: "kubernetes.io/host-path/584af863a533de6bd60fc3b70f54b9db-varetcdata") pod "etcd-server-ip-172-x-y-z.us-west-2.compute.internal" (UID: "584af863a533de6bd60fc3b70f54b9db")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.630854 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "hosts" (UniqueName: "kubernetes.io/host-path/584af863a533de6bd60fc3b70f54b9db-hosts") pod "etcd-server-ip-172-x-y-z.us-west-2.compute.internal" (UID: "584af863a533de6bd60fc3b70f54b9db")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.630880 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "usrssl" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-usrssl") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.630906 1635 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "srvsshproxy" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-srvsshproxy") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.630971 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "varlogetcd" (UniqueName: "kubernetes.io/host-path/584af863a533de6bd60fc3b70f54b9db-varlogetcd") pod "etcd-server-ip-172-x-y-z.us-west-2.compute.internal" (UID: "584af863a533de6bd60fc3b70f54b9db")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.631073 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "varetcdata" (UniqueName: "kubernetes.io/host-path/e17653dc7c659b517905a5b8a82bab6c-varetcdata") pod "etcd-server-events-ip-172-x-y-z.us-west-2.compute.internal" (UID: "e17653dc7c659b517905a5b8a82bab6c")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.631171 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "varlogetcd" (UniqueName: "kubernetes.io/host-path/e17653dc7c659b517905a5b8a82bab6c-varlogetcd") pod "etcd-server-events-ip-172-x-y-z.us-west-2.compute.internal" (UID: "e17653dc7c659b517905a5b8a82bab6c")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.631222 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "hosts" (UniqueName: "kubernetes.io/host-path/e17653dc7c659b517905a5b8a82bab6c-hosts") pod "etcd-server-events-ip-172-x-y-z.us-west-2.compute.internal" (UID: "e17653dc7c659b517905a5b8a82bab6c")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.631260 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "varetcdata" (UniqueName: "kubernetes.io/host-path/584af863a533de6bd60fc3b70f54b9db-varetcdata") pod "etcd-server-ip-172-x-y-z.us-west-2.compute.internal" (UID: "584af863a533de6bd60fc3b70f54b9db")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.631295 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "hosts" (UniqueName: "kubernetes.io/host-path/584af863a533de6bd60fc3b70f54b9db-hosts") pod "etcd-server-ip-172-x-y-z.us-west-2.compute.internal" (UID: "584af863a533de6bd60fc3b70f54b9db")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.657786 1635 kuberuntime_manager.go:385] No sandbox for pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)" can be found. Need to start a new one
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.658835 1635 provider.go:119] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.715830 1635 kuberuntime_manager.go:385] No sandbox for pod "kube-proxy-ip-172-x-y-z.us-west-2.compute.internal_kube-system(ebcdbad12e23036616a75b4591735ea1)" can be found. Need to start a new one
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.731130 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "usrlocalopenssl" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-usrlocalopenssl") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.731168 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "varssl" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-varssl") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.731251 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "etcopenssl" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-etcopenssl") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.731298 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "usrlocalopenssl" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-usrlocalopenssl") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.731339 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "varssl" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-varssl") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.731360 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "etcopenssl" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-etcopenssl") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.731414 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "etcssl" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-etcssl") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.731443 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "etcpkitls" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-etcpkitls") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.731472 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "usrsharessl" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-usrsharessl") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.731505 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "srvkube" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-srvkube") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.731563 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "usrlibssl" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-usrlibssl") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.731596 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "etcpkica-trust" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-etcpkica-trust") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.731624 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "usrssl" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-usrssl") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.731652 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "srvsshproxy" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-srvsshproxy") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.731681 1635 reconciler.go:252] operationExecutor.MountVolume started for volume "logfile" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-logfile") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.731744 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "logfile" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-logfile") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.731784 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "etcssl" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-etcssl") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.731820 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "etcpkitls" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-etcpkitls") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.731878 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "usrsharessl" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-usrsharessl") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.731917 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "srvkube" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-srvkube") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.731956 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "usrlibssl" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-usrlibssl") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.731994 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "etcpkica-trust" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-etcpkica-trust") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.732053 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "usrssl" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-usrssl") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.732093 1635 operation_generator.go:557] MountVolume.SetUp succeeded for volume "srvsshproxy" (UniqueName: "kubernetes.io/host-path/28832ee64de4d2f079c81d4acb312edc-srvsshproxy") pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" (UID: "28832ee64de4d2f079c81d4acb312edc")
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.775339 1635 kuberuntime_manager.go:385] No sandbox for pod "kube-scheduler-ip-172-x-y-z.us-west-2.compute.internal_kube-system(9490af50fbd859e60994c08c8af0c55c)" can be found. Need to start a new one
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.811126 1635 kuberuntime_manager.go:385] No sandbox for pod "etcd-server-events-ip-172-x-y-z.us-west-2.compute.internal_kube-system(e17653dc7c659b517905a5b8a82bab6c)" can be found. Need to start a new one
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.866007 1635 kuberuntime_manager.go:385] No sandbox for pod "etcd-server-ip-172-x-y-z.us-west-2.compute.internal_kube-system(584af863a533de6bd60fc3b70f54b9db)" can be found. Need to start a new one
Jul 12 22:53:41 ip-172-x-y-z kubelet[1635]: I0712 22:53:41.911800 1635 kuberuntime_manager.go:385] No sandbox for pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(28832ee64de4d2f079c81d4acb312edc)" can be found. Need to start a new one
Jul 12 22:53:42 ip-172-x-y-z kubelet[1635]: E0712 22:53:42.042188 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:42 ip-172-x-y-z kubelet[1635]: E0712 22:53:42.050585 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:42 ip-172-x-y-z kubelet[1635]: E0712 22:53:42.069048 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:43 ip-172-x-y-z kubelet[1635]: E0712 22:53:43.042747 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:43 ip-172-x-y-z kubelet[1635]: E0712 22:53:43.053875 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:43 ip-172-x-y-z kubelet[1635]: E0712 22:53:43.069503 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:43 ip-172-x-y-z kubelet[1635]: I0712 22:53:43.093822 1635 kube_docker_client.go:348] Stop pulling image "k8s.gcr.io/pause-amd64:3.0": "Status: Downloaded newer image for k8s.gcr.io/pause-amd64:3.0"
Jul 12 22:53:43 ip-172-x-y-z kubelet[1635]: I0712 22:53:43.099094 1635 kube_docker_client.go:348] Stop pulling image "k8s.gcr.io/pause-amd64:3.0": "Status: Downloaded newer image for k8s.gcr.io/pause-amd64:3.0"
Jul 12 22:53:43 ip-172-x-y-z kubelet[1635]: I0712 22:53:43.119971 1635 kube_docker_client.go:348] Stop pulling image "k8s.gcr.io/pause-amd64:3.0": "Status: Image is up to date for k8s.gcr.io/pause-amd64:3.0"
Jul 12 22:53:43 ip-172-x-y-z kubelet[1635]: I0712 22:53:43.120147 1635 kube_docker_client.go:348] Stop pulling image "k8s.gcr.io/pause-amd64:3.0": "Status: Image is up to date for k8s.gcr.io/pause-amd64:3.0"
Jul 12 22:53:43 ip-172-x-y-z kubelet[1635]: I0712 22:53:43.122722 1635 kube_docker_client.go:348] Stop pulling image "k8s.gcr.io/pause-amd64:3.0": "Status: Image is up to date for k8s.gcr.io/pause-amd64:3.0"
Jul 12 22:53:43 ip-172-x-y-z kubelet[1635]: I0712 22:53:43.154645 1635 kube_docker_client.go:348] Stop pulling image "k8s.gcr.io/pause-amd64:3.0": "Status: Image is up to date for k8s.gcr.io/pause-amd64:3.0"
Jul 12 22:53:43 ip-172-x-y-z kubelet[1635]: I0712 22:53:43.832384 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-scheduler-ip-172-x-y-z.us-west-2.compute.internal_kube-system(9490af50fbd859e60994c08c8af0c55c)", event: &pleg.PodLifecycleEvent{ID:"9490af50fbd859e60994c08c8af0c55c", Type:"ContainerStarted", Data:"e7774b8334b9257f89e5991715b3fc3d351d8eee14ccde6cd355685e6570f965"}
Jul 12 22:53:43 ip-172-x-y-z kubelet[1635]: I0712 22:53:43.834886 1635 kubelet.go:1906] SyncLoop (PLEG): "etcd-server-ip-172-x-y-z.us-west-2.compute.internal_kube-system(584af863a533de6bd60fc3b70f54b9db)", event: &pleg.PodLifecycleEvent{ID:"584af863a533de6bd60fc3b70f54b9db", Type:"ContainerStarted", Data:"ecb109d0353b3349a73cd72e76063d1a1af0c55ed2e1011ab6b910d64b711019"}
Jul 12 22:53:43 ip-172-x-y-z kubelet[1635]: I0712 22:53:43.845382 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-proxy-ip-172-x-y-z.us-west-2.compute.internal_kube-system(ebcdbad12e23036616a75b4591735ea1)", event: &pleg.PodLifecycleEvent{ID:"ebcdbad12e23036616a75b4591735ea1", Type:"ContainerStarted", Data:"3acffffc0739ab45e32484a22c6b9c2f4c349861ee93470869eeb147df81fc7d"}
Jul 12 22:53:43 ip-172-x-y-z kubelet[1635]: I0712 22:53:43.847730 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(28832ee64de4d2f079c81d4acb312edc)", event: &pleg.PodLifecycleEvent{ID:"28832ee64de4d2f079c81d4acb312edc", Type:"ContainerStarted", Data:"bbbb733f6bd4951108ede7c0abd2b56363a0e18e2f928e6303304c4c0a00472c"}
Jul 12 22:53:43 ip-172-x-y-z kubelet[1635]: I0712 22:53:43.849998 1635 kubelet.go:1906] SyncLoop (PLEG): "etcd-server-events-ip-172-x-y-z.us-west-2.compute.internal_kube-system(e17653dc7c659b517905a5b8a82bab6c)", event: &pleg.PodLifecycleEvent{ID:"e17653dc7c659b517905a5b8a82bab6c", Type:"ContainerStarted", Data:"63126a7f358a065569ed0641fe7707500848dfb8db0d8109802d225f8286be34"}
Jul 12 22:53:44 ip-172-x-y-z kubelet[1635]: E0712 22:53:44.043199 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:44 ip-172-x-y-z kubelet[1635]: E0712 22:53:44.054232 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:44 ip-172-x-y-z kubelet[1635]: E0712 22:53:44.069880 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:44 ip-172-x-y-z kubelet[1635]: I0712 22:53:44.582656 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:53:44 ip-172-x-y-z kubelet[1635]: I0712 22:53:44.582687 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:53:44 ip-172-x-y-z kubelet[1635]: I0712 22:53:44.582697 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:53:44 ip-172-x-y-z kubelet[1635]: I0712 22:53:44.582704 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:53:44 ip-172-x-y-z kubelet[1635]: I0712 22:53:44.589383 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:44 ip-172-x-y-z kubelet[1635]: I0712 22:53:44.589408 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:44 ip-172-x-y-z kubelet[1635]: I0712 22:53:44.589422 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:44 ip-172-x-y-z kubelet[1635]: I0712 22:53:44.589435 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:44 ip-172-x-y-z kubelet[1635]: I0712 22:53:44.589457 1635 kubelet_node_status.go:82] Attempting to register node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:44 ip-172-x-y-z kubelet[1635]: E0712 22:53:44.589710 1635 kubelet_node_status.go:106] Unable to register node "ip-172-x-y-z.us-west-2.compute.internal" with API server: Post https://127.0.0.1/api/v1/nodes: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:44 ip-172-x-y-z kubelet[1635]: I0712 22:53:44.917397 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)", event: &pleg.PodLifecycleEvent{ID:"358effbb6a9829e718b6ec105343a9cd", Type:"ContainerStarted", Data:"daaf42a1b53f857be0ce18e03e805a34909dbd7db6172c77ae62d55186b07d23"}
Jul 12 22:53:45 ip-172-x-y-z kubelet[1635]: E0712 22:53:45.044099 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:45 ip-172-x-y-z kubelet[1635]: E0712 22:53:45.055562 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:45 ip-172-x-y-z kubelet[1635]: E0712 22:53:45.070679 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:45 ip-172-x-y-z kubelet[1635]: E0712 22:53:45.336390 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:53:46 ip-172-x-y-z kubelet[1635]: E0712 22:53:46.056010 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:46 ip-172-x-y-z kubelet[1635]: E0712 22:53:46.056104 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:46 ip-172-x-y-z kubelet[1635]: E0712 22:53:46.071917 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:46 ip-172-x-y-z kubelet[1635]: E0712 22:53:46.859129 1635 event.go:209] Unable to write event: 'Post https://127.0.0.1/api/v1/namespaces/default/events: dial tcp 127.0.0.1:443: getsockopt: connection refused' (may retry after sleeping)
Jul 12 22:53:47 ip-172-x-y-z kubelet[1635]: E0712 22:53:47.056491 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:47 ip-172-x-y-z kubelet[1635]: E0712 22:53:47.057857 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:47 ip-172-x-y-z kubelet[1635]: E0712 22:53:47.072384 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:48 ip-172-x-y-z kubelet[1635]: E0712 22:53:48.057005 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:48 ip-172-x-y-z kubelet[1635]: E0712 22:53:48.058618 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:48 ip-172-x-y-z kubelet[1635]: E0712 22:53:48.072861 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:49 ip-172-x-y-z kubelet[1635]: E0712 22:53:49.057684 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:49 ip-172-x-y-z kubelet[1635]: E0712 22:53:49.059482 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:49 ip-172-x-y-z kubelet[1635]: E0712 22:53:49.074176 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:50 ip-172-x-y-z kubelet[1635]: E0712 22:53:50.058246 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:50 ip-172-x-y-z kubelet[1635]: E0712 22:53:50.060284 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:50 ip-172-x-y-z kubelet[1635]: E0712 22:53:50.074657 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:50 ip-172-x-y-z kubelet[1635]: E0712 22:53:50.335774 1635 eviction_manager.go:247] eviction manager: failed to get get summary stats: failed to get node info: node "ip-172-x-y-z.us-west-2.compute.internal" not found
Jul 12 22:53:50 ip-172-x-y-z kubelet[1635]: E0712 22:53:50.395945 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:53:50 ip-172-x-y-z kubelet[1635]: I0712 22:53:50.993566 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:53:50 ip-172-x-y-z kubelet[1635]: I0712 22:53:50.993593 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:53:50 ip-172-x-y-z kubelet[1635]: I0712 22:53:50.993605 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:53:50 ip-172-x-y-z kubelet[1635]: I0712 22:53:50.993613 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:53:51 ip-172-x-y-z kubelet[1635]: E0712 22:53:51.062861 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:51 ip-172-x-y-z kubelet[1635]: E0712 22:53:51.062928 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:51 ip-172-x-y-z kubelet[1635]: E0712 22:53:51.079849 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:52 ip-172-x-y-z kubelet[1635]: E0712 22:53:52.068052 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:52 ip-172-x-y-z kubelet[1635]: E0712 22:53:52.068123 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:52 ip-172-x-y-z kubelet[1635]: E0712 22:53:52.083835 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:52 ip-172-x-y-z kubelet[1635]: I0712 22:53:52.379718 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:52 ip-172-x-y-z kubelet[1635]: I0712 22:53:52.379749 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:52 ip-172-x-y-z kubelet[1635]: I0712 22:53:52.379764 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:52 ip-172-x-y-z kubelet[1635]: I0712 22:53:52.379778 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:52 ip-172-x-y-z kubelet[1635]: I0712 22:53:52.379803 1635 kubelet_node_status.go:82] Attempting to register node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:52 ip-172-x-y-z kubelet[1635]: E0712 22:53:52.380171 1635 kubelet_node_status.go:106] Unable to register node "ip-172-x-y-z.us-west-2.compute.internal" with API server: Post https://127.0.0.1/api/v1/nodes: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:53 ip-172-x-y-z kubelet[1635]: E0712 22:53:53.068629 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:53 ip-172-x-y-z kubelet[1635]: E0712 22:53:53.069592 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:53 ip-172-x-y-z kubelet[1635]: E0712 22:53:53.084166 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:54 ip-172-x-y-z kubelet[1635]: E0712 22:53:54.069216 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:54 ip-172-x-y-z kubelet[1635]: E0712 22:53:54.070409 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:54 ip-172-x-y-z kubelet[1635]: E0712 22:53:54.084601 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:54 ip-172-x-y-z kubelet[1635]: I0712 22:53:54.208581 1635 kube_docker_client.go:345] Pulling image "k8s.gcr.io/kube-apiserver:v1.10.3": "3d69cb69186e: Extracting [==================================================>] 32.21MB/32.21MB"
Jul 12 22:53:55 ip-172-x-y-z kubelet[1635]: E0712 22:53:55.070401 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:55 ip-172-x-y-z kubelet[1635]: E0712 22:53:55.071366 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:55 ip-172-x-y-z kubelet[1635]: E0712 22:53:55.085460 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:53:55 ip-172-x-y-z kubelet[1635]: E0712 22:53:55.396801 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:53:55 ip-172-x-y-z kubelet[1635]: I0712 22:53:55.551873 1635 kube_docker_client.go:348] Stop pulling image "k8s.gcr.io/kube-apiserver:v1.10.3": "Status: Downloaded newer image for k8s.gcr.io/kube-apiserver:v1.10.3"
Jul 12 22:53:56 ip-172-x-y-z kubelet[1635]: I0712 22:53:56.289847 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(28832ee64de4d2f079c81d4acb312edc)", event: &pleg.PodLifecycleEvent{ID:"28832ee64de4d2f079c81d4acb312edc", Type:"ContainerStarted", Data:"62e1e4e419add16f588df894eb4acd7c3e3c9f7fdbfdbb781aa24984e0ea4413"}
Jul 12 22:53:56 ip-172-x-y-z kubelet[1635]: I0712 22:53:56.290515 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:53:56 ip-172-x-y-z kubelet[1635]: I0712 22:53:56.290918 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:53:56 ip-172-x-y-z kubelet[1635]: I0712 22:53:56.291310 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:53:56 ip-172-x-y-z kubelet[1635]: I0712 22:53:56.291725 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:53:56 ip-172-x-y-z kubelet[1635]: I0712 22:53:56.325345 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:56 ip-172-x-y-z kubelet[1635]: I0712 22:53:56.325772 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:56 ip-172-x-y-z kubelet[1635]: I0712 22:53:56.326168 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:56 ip-172-x-y-z kubelet[1635]: I0712 22:53:56.326186 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:57 ip-172-x-y-z kubelet[1635]: I0712 22:53:57.409046 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:53:57 ip-172-x-y-z kubelet[1635]: I0712 22:53:57.409618 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:53:57 ip-172-x-y-z kubelet[1635]: I0712 22:53:57.409633 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:53:57 ip-172-x-y-z kubelet[1635]: I0712 22:53:57.409641 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:53:57 ip-172-x-y-z kubelet[1635]: I0712 22:53:57.488701 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:57 ip-172-x-y-z kubelet[1635]: I0712 22:53:57.488726 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:57 ip-172-x-y-z kubelet[1635]: I0712 22:53:57.488740 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:57 ip-172-x-y-z kubelet[1635]: I0712 22:53:57.488753 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:59 ip-172-x-y-z kubelet[1635]: I0712 22:53:59.382985 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:53:59 ip-172-x-y-z kubelet[1635]: I0712 22:53:59.383013 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:53:59 ip-172-x-y-z kubelet[1635]: I0712 22:53:59.383023 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:53:59 ip-172-x-y-z kubelet[1635]: I0712 22:53:59.383030 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:53:59 ip-172-x-y-z kubelet[1635]: I0712 22:53:59.545567 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:59 ip-172-x-y-z kubelet[1635]: I0712 22:53:59.545594 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:59 ip-172-x-y-z kubelet[1635]: I0712 22:53:59.545609 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:59 ip-172-x-y-z kubelet[1635]: I0712 22:53:59.545622 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:53:59 ip-172-x-y-z kubelet[1635]: I0712 22:53:59.545646 1635 kubelet_node_status.go:82] Attempting to register node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:00 ip-172-x-y-z kubelet[1635]: E0712 22:54:00.335985 1635 eviction_manager.go:247] eviction manager: failed to get get summary stats: failed to get node info: node "ip-172-x-y-z.us-west-2.compute.internal" not found
Jul 12 22:54:00 ip-172-x-y-z kubelet[1635]: E0712 22:54:00.465718 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:54:00 ip-172-x-y-z kubelet[1635]: I0712 22:54:00.760060 1635 kube_docker_client.go:348] Stop pulling image "k8s.gcr.io/etcd:2.2.1": "Status: Downloaded newer image for k8s.gcr.io/etcd:2.2.1"
Jul 12 22:54:01 ip-172-x-y-z kubelet[1635]: I0712 22:54:01.751598 1635 kubelet.go:1906] SyncLoop (PLEG): "etcd-server-events-ip-172-x-y-z.us-west-2.compute.internal_kube-system(e17653dc7c659b517905a5b8a82bab6c)", event: &pleg.PodLifecycleEvent{ID:"e17653dc7c659b517905a5b8a82bab6c", Type:"ContainerStarted", Data:"22221a66c3e4f6668f297b20c252d66c1e9e5768ceddcb340e65bf5f51cb1f5b"}
Jul 12 22:54:01 ip-172-x-y-z kubelet[1635]: I0712 22:54:01.752272 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:54:01 ip-172-x-y-z kubelet[1635]: I0712 22:54:01.752669 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:54:01 ip-172-x-y-z kubelet[1635]: I0712 22:54:01.778316 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:54:01 ip-172-x-y-z kubelet[1635]: I0712 22:54:01.778979 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:54:01 ip-172-x-y-z kubelet[1635]: I0712 22:54:01.785577 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:01 ip-172-x-y-z kubelet[1635]: I0712 22:54:01.794011 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:01 ip-172-x-y-z kubelet[1635]: I0712 22:54:01.794029 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:01 ip-172-x-y-z kubelet[1635]: I0712 22:54:01.794042 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:03 ip-172-x-y-z kubelet[1635]: I0712 22:54:03.141413 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:54:03 ip-172-x-y-z kubelet[1635]: I0712 22:54:03.142083 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:54:03 ip-172-x-y-z kubelet[1635]: I0712 22:54:03.142095 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:54:03 ip-172-x-y-z kubelet[1635]: I0712 22:54:03.142103 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:54:04 ip-172-x-y-z kubelet[1635]: I0712 22:54:04.327092 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:04 ip-172-x-y-z kubelet[1635]: I0712 22:54:04.327124 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:04 ip-172-x-y-z kubelet[1635]: I0712 22:54:04.327140 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:04 ip-172-x-y-z kubelet[1635]: I0712 22:54:04.327154 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:05 ip-172-x-y-z kubelet[1635]: E0712 22:54:05.545804 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:54:06 ip-172-x-y-z kubelet[1635]: E0712 22:54:06.074531 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0: net/http: TLS handshake timeout
Jul 12 22:54:06 ip-172-x-y-z kubelet[1635]: E0712 22:54:06.074613 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: net/http: TLS handshake timeout
Jul 12 22:54:06 ip-172-x-y-z kubelet[1635]: E0712 22:54:06.094524 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: net/http: TLS handshake timeout
Jul 12 22:54:06 ip-172-x-y-z kubelet[1635]: W0712 22:54:06.327364 1635 status_manager.go:461] Failed to get status for pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(28832ee64de4d2f079c81d4acb312edc)": Get https://127.0.0.1/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal: net/http: TLS handshake timeout
Jul 12 22:54:06 ip-172-x-y-z kubelet[1635]: I0712 22:54:06.856588 1635 kube_docker_client.go:348] Stop pulling image "k8s.gcr.io/kube-scheduler:v1.10.3": "Status: Downloaded newer image for k8s.gcr.io/kube-scheduler:v1.10.3"
Jul 12 22:54:06 ip-172-x-y-z kubelet[1635]: E0712 22:54:06.900969 1635 event.go:209] Unable to write event: 'Post https://127.0.0.1/api/v1/namespaces/default/events: net/http: TLS handshake timeout' (may retry after sleeping)
Jul 12 22:54:07 ip-172-x-y-z kubelet[1635]: I0712 22:54:07.325725 1635 kube_docker_client.go:348] Stop pulling image "k8s.gcr.io/etcd:2.2.1": "Status: Image is up to date for k8s.gcr.io/etcd:2.2.1"
Jul 12 22:54:07 ip-172-x-y-z kubelet[1635]: I0712 22:54:07.826509 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:54:07 ip-172-x-y-z kubelet[1635]: I0712 22:54:07.827122 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:54:07 ip-172-x-y-z kubelet[1635]: I0712 22:54:07.827554 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:54:07 ip-172-x-y-z kubelet[1635]: I0712 22:54:07.827949 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:54:07 ip-172-x-y-z kubelet[1635]: I0712 22:54:07.828525 1635 kubelet.go:1906] SyncLoop (PLEG): "etcd-server-ip-172-x-y-z.us-west-2.compute.internal_kube-system(584af863a533de6bd60fc3b70f54b9db)", event: &pleg.PodLifecycleEvent{ID:"584af863a533de6bd60fc3b70f54b9db", Type:"ContainerStarted", Data:"c3b17a4ea027d761ff9de6bdbf85495b25fcbbd952f92503aecf188f15ba5333"}
Jul 12 22:54:07 ip-172-x-y-z kubelet[1635]: I0712 22:54:07.859301 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-scheduler-ip-172-x-y-z.us-west-2.compute.internal_kube-system(9490af50fbd859e60994c08c8af0c55c)", event: &pleg.PodLifecycleEvent{ID:"9490af50fbd859e60994c08c8af0c55c", Type:"ContainerStarted", Data:"9c6ad5cdd1831139cf18cc0093ffd046e8e3d3cc3dd5f9390c9069394e54cb08"}
Jul 12 22:54:07 ip-172-x-y-z kubelet[1635]: I0712 22:54:07.859894 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:54:07 ip-172-x-y-z kubelet[1635]: I0712 22:54:07.860285 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:54:07 ip-172-x-y-z kubelet[1635]: I0712 22:54:07.860636 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:54:07 ip-172-x-y-z kubelet[1635]: I0712 22:54:07.860990 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:54:07 ip-172-x-y-z kubelet[1635]: I0712 22:54:07.874256 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:07 ip-172-x-y-z kubelet[1635]: I0712 22:54:07.874731 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:07 ip-172-x-y-z kubelet[1635]: I0712 22:54:07.875133 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:07 ip-172-x-y-z kubelet[1635]: I0712 22:54:07.875563 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:07 ip-172-x-y-z kubelet[1635]: I0712 22:54:07.877977 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:07 ip-172-x-y-z kubelet[1635]: I0712 22:54:07.877997 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:07 ip-172-x-y-z kubelet[1635]: I0712 22:54:07.878010 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:07 ip-172-x-y-z kubelet[1635]: I0712 22:54:07.878023 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:09 ip-172-x-y-z kubelet[1635]: I0712 22:54:09.031965 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:54:09 ip-172-x-y-z kubelet[1635]: I0712 22:54:09.036038 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:54:09 ip-172-x-y-z kubelet[1635]: I0712 22:54:09.036734 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:54:09 ip-172-x-y-z kubelet[1635]: I0712 22:54:09.037216 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:54:09 ip-172-x-y-z kubelet[1635]: I0712 22:54:09.051942 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:54:09 ip-172-x-y-z kubelet[1635]: I0712 22:54:09.062459 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:54:09 ip-172-x-y-z kubelet[1635]: I0712 22:54:09.062995 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:54:09 ip-172-x-y-z kubelet[1635]: I0712 22:54:09.063619 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:54:09 ip-172-x-y-z kubelet[1635]: I0712 22:54:09.162569 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:09 ip-172-x-y-z kubelet[1635]: I0712 22:54:09.162600 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:09 ip-172-x-y-z kubelet[1635]: I0712 22:54:09.162615 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:09 ip-172-x-y-z kubelet[1635]: I0712 22:54:09.162628 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:09 ip-172-x-y-z kubelet[1635]: I0712 22:54:09.164893 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:09 ip-172-x-y-z kubelet[1635]: I0712 22:54:09.164935 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:09 ip-172-x-y-z kubelet[1635]: I0712 22:54:09.164952 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:09 ip-172-x-y-z kubelet[1635]: I0712 22:54:09.164964 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:09 ip-172-x-y-z kubelet[1635]: E0712 22:54:09.546301 1635 kubelet_node_status.go:106] Unable to register node "ip-172-x-y-z.us-west-2.compute.internal" with API server: Post https://127.0.0.1/api/v1/nodes: net/http: TLS handshake timeout
Jul 12 22:54:10 ip-172-x-y-z kubelet[1635]: E0712 22:54:10.342486 1635 eviction_manager.go:247] eviction manager: failed to get get summary stats: failed to get node info: node "ip-172-x-y-z.us-west-2.compute.internal" not found
Jul 12 22:54:10 ip-172-x-y-z kubelet[1635]: E0712 22:54:10.851950 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:54:15 ip-172-x-y-z kubelet[1635]: E0712 22:54:15.933447 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:54:16 ip-172-x-y-z kubelet[1635]: W0712 22:54:16.327971 1635 status_manager.go:461] Failed to get status for pod "etcd-server-events-ip-172-x-y-z.us-west-2.compute.internal_kube-system(e17653dc7c659b517905a5b8a82bab6c)": Get https://127.0.0.1/api/v1/namespaces/kube-system/pods/etcd-server-events-ip-172-x-y-z.us-west-2.compute.internal: net/http: TLS handshake timeout
Jul 12 22:54:16 ip-172-x-y-z kubelet[1635]: I0712 22:54:16.552521 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:54:16 ip-172-x-y-z kubelet[1635]: I0712 22:54:16.552554 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:54:16 ip-172-x-y-z kubelet[1635]: I0712 22:54:16.552564 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:54:16 ip-172-x-y-z kubelet[1635]: I0712 22:54:16.552571 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:54:16 ip-172-x-y-z kubelet[1635]: I0712 22:54:16.618631 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:16 ip-172-x-y-z kubelet[1635]: I0712 22:54:16.618659 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:16 ip-172-x-y-z kubelet[1635]: I0712 22:54:16.618673 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:16 ip-172-x-y-z kubelet[1635]: I0712 22:54:16.618691 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:16 ip-172-x-y-z kubelet[1635]: I0712 22:54:16.618718 1635 kubelet_node_status.go:82] Attempting to register node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:17 ip-172-x-y-z kubelet[1635]: E0712 22:54:17.083303 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: net/http: TLS handshake timeout
Jul 12 22:54:17 ip-172-x-y-z kubelet[1635]: E0712 22:54:17.083378 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0: net/http: TLS handshake timeout
Jul 12 22:54:17 ip-172-x-y-z kubelet[1635]: E0712 22:54:17.106206 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: net/http: TLS handshake timeout
Jul 12 22:54:17 ip-172-x-y-z kubelet[1635]: I0712 22:54:17.775558 1635 kube_docker_client.go:345] Pulling image "k8s.gcr.io/kube-controller-manager:v1.10.3": "995740e66e13: Extracting [=======================================> ] 23.3MB/29.26MB"
Jul 12 22:54:18 ip-172-x-y-z kubelet[1635]: I0712 22:54:18.672103 1635 prober.go:111] Liveness probe for "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(28832ee64de4d2f079c81d4acb312edc):kube-apiserver" failed (failure): Get http://127.0.0.1:8080/healthz: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jul 12 22:54:20 ip-172-x-y-z kubelet[1635]: E0712 22:54:20.344130 1635 eviction_manager.go:247] eviction manager: failed to get get summary stats: failed to get node info: node "ip-172-x-y-z.us-west-2.compute.internal" not found
Jul 12 22:54:21 ip-172-x-y-z kubelet[1635]: E0712 22:54:21.028177 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:54:21 ip-172-x-y-z kubelet[1635]: I0712 22:54:21.545362 1635 kube_docker_client.go:348] Stop pulling image "k8s.gcr.io/kube-controller-manager:v1.10.3": "Status: Downloaded newer image for k8s.gcr.io/kube-controller-manager:v1.10.3"
Jul 12 22:54:21 ip-172-x-y-z kubelet[1635]: E0712 22:54:21.563592 1635 kubelet_pods.go:395] hostname for pod:"kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" was longer than 63. Truncated hostname to :"kube-controller-manager-ip-172-x-y-z.us-west-2.compute.inte"
Jul 12 22:54:22 ip-172-x-y-z kubelet[1635]: I0712 22:54:22.084469 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)", event: &pleg.PodLifecycleEvent{ID:"358effbb6a9829e718b6ec105343a9cd", Type:"ContainerStarted", Data:"5d4981402226fd06dcf979f49095b15110cc38018abfd75b671a6625abe69e4b"}
Jul 12 22:54:22 ip-172-x-y-z kubelet[1635]: I0712 22:54:22.085113 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:54:22 ip-172-x-y-z kubelet[1635]: I0712 22:54:22.085483 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:54:22 ip-172-x-y-z kubelet[1635]: I0712 22:54:22.099523 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:54:22 ip-172-x-y-z kubelet[1635]: I0712 22:54:22.099904 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:54:22 ip-172-x-y-z kubelet[1635]: I0712 22:54:22.135184 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:22 ip-172-x-y-z kubelet[1635]: I0712 22:54:22.135651 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:22 ip-172-x-y-z kubelet[1635]: I0712 22:54:22.136049 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:22 ip-172-x-y-z kubelet[1635]: I0712 22:54:22.136067 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:23 ip-172-x-y-z kubelet[1635]: I0712 22:54:23.163109 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:54:23 ip-172-x-y-z kubelet[1635]: I0712 22:54:23.163732 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:54:23 ip-172-x-y-z kubelet[1635]: I0712 22:54:23.164134 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:54:23 ip-172-x-y-z kubelet[1635]: I0712 22:54:23.164148 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:54:23 ip-172-x-y-z kubelet[1635]: I0712 22:54:23.553333 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:23 ip-172-x-y-z kubelet[1635]: I0712 22:54:23.553385 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:23 ip-172-x-y-z kubelet[1635]: I0712 22:54:23.553400 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:23 ip-172-x-y-z kubelet[1635]: I0712 22:54:23.553413 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:26 ip-172-x-y-z kubelet[1635]: E0712 22:54:26.068573 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:54:26 ip-172-x-y-z kubelet[1635]: W0712 22:54:26.329095 1635 status_manager.go:461] Failed to get status for pod "etcd-server-ip-172-x-y-z.us-west-2.compute.internal_kube-system(584af863a533de6bd60fc3b70f54b9db)": Get https://127.0.0.1/api/v1/namespaces/kube-system/pods/etcd-server-ip-172-x-y-z.us-west-2.compute.internal: net/http: TLS handshake timeout
Jul 12 22:54:26 ip-172-x-y-z kubelet[1635]: E0712 22:54:26.623473 1635 kubelet_node_status.go:106] Unable to register node "ip-172-x-y-z.us-west-2.compute.internal" with API server: Post https://127.0.0.1/api/v1/nodes: net/http: TLS handshake timeout
Jul 12 22:54:26 ip-172-x-y-z kubelet[1635]: E0712 22:54:26.911109 1635 event.go:209] Unable to write event: 'Post https://127.0.0.1/api/v1/namespaces/default/events: net/http: TLS handshake timeout' (may retry after sleeping)
Jul 12 22:54:28 ip-172-x-y-z kubelet[1635]: E0712 22:54:28.084136 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: net/http: TLS handshake timeout
Jul 12 22:54:28 ip-172-x-y-z kubelet[1635]: E0712 22:54:28.085008 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0: net/http: TLS handshake timeout
Jul 12 22:54:28 ip-172-x-y-z kubelet[1635]: E0712 22:54:28.112549 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: net/http: TLS handshake timeout
Jul 12 22:54:28 ip-172-x-y-z kubelet[1635]: I0712 22:54:28.640499 1635 prober.go:111] Liveness probe for "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(28832ee64de4d2f079c81d4acb312edc):kube-apiserver" failed (failure): Get http://127.0.0.1:8080/healthz: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jul 12 22:54:30 ip-172-x-y-z kubelet[1635]: E0712 22:54:30.346793 1635 eviction_manager.go:247] eviction manager: failed to get get summary stats: failed to get node info: node "ip-172-x-y-z.us-west-2.compute.internal" not found
Jul 12 22:54:31 ip-172-x-y-z kubelet[1635]: W0712 22:54:31.100217 1635 status_manager.go:461] Failed to get status for pod "kube-scheduler-ip-172-x-y-z.us-west-2.compute.internal_kube-system(9490af50fbd859e60994c08c8af0c55c)": pods "kube-scheduler-ip-172-x-y-z.us-west-2.compute.internal" is forbidden: User "kubelet" cannot get pods in the namespace "kube-system"
Jul 12 22:54:31 ip-172-x-y-z kubelet[1635]: E0712 22:54:31.173923 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:54:31 ip-172-x-y-z kubelet[1635]: E0712 22:54:31.197013 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: services is forbidden: User "kubelet" cannot list services at the cluster scope
Jul 12 22:54:31 ip-172-x-y-z kubelet[1635]: E0712 22:54:31.205722 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: nodes is forbidden: User "kubelet" cannot list nodes at the cluster scope
Jul 12 22:54:31 ip-172-x-y-z kubelet[1635]: E0712 22:54:31.206229 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "kubelet" cannot list pods at the cluster scope
Jul 12 22:54:31 ip-172-x-y-z kubelet[1635]: W0712 22:54:31.404817 1635 status_manager.go:461] Failed to get status for pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)": pods "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" is forbidden: User "kubelet" cannot get pods in the namespace "kube-system"
Jul 12 22:54:31 ip-172-x-y-z kubelet[1635]: I0712 22:54:31.958117 1635 kube_docker_client.go:345] Pulling image "k8s.gcr.io/kube-proxy:v1.10.3": "14bfddfd7fdf: Extracting [================================> ] 524.3kB/795.9kB"
Jul 12 22:54:32 ip-172-x-y-z kubelet[1635]: I0712 22:54:32.349584 1635 kubelet.go:1861] SyncLoop (ADD, "api"): ""
Jul 12 22:54:32 ip-172-x-y-z kubelet[1635]: I0712 22:54:32.396263 1635 reconciler.go:154] Reconciler: start to sync state
Jul 12 22:54:33 ip-172-x-y-z kubelet[1635]: I0712 22:54:33.623881 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:54:33 ip-172-x-y-z kubelet[1635]: I0712 22:54:33.623911 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:54:33 ip-172-x-y-z kubelet[1635]: I0712 22:54:33.623922 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:54:33 ip-172-x-y-z kubelet[1635]: I0712 22:54:33.623929 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:54:33 ip-172-x-y-z kubelet[1635]: I0712 22:54:33.735923 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:33 ip-172-x-y-z kubelet[1635]: I0712 22:54:33.735951 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:33 ip-172-x-y-z kubelet[1635]: I0712 22:54:33.735967 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:33 ip-172-x-y-z kubelet[1635]: I0712 22:54:33.735980 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:33 ip-172-x-y-z kubelet[1635]: I0712 22:54:33.736058 1635 kubelet_node_status.go:82] Attempting to register node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:36 ip-172-x-y-z kubelet[1635]: E0712 22:54:36.201771 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:54:37 ip-172-x-y-z kubelet[1635]: I0712 22:54:37.378032 1635 kube_docker_client.go:348] Stop pulling image "k8s.gcr.io/kube-proxy:v1.10.3": "Status: Downloaded newer image for k8s.gcr.io/kube-proxy:v1.10.3"
Jul 12 22:54:37 ip-172-x-y-z kubelet[1635]: I0712 22:54:37.704551 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-proxy-ip-172-x-y-z.us-west-2.compute.internal_kube-system(ebcdbad12e23036616a75b4591735ea1)", event: &pleg.PodLifecycleEvent{ID:"ebcdbad12e23036616a75b4591735ea1", Type:"ContainerStarted", Data:"07f92199df5c6b2436f7d457e854b92815c29a5f1ed43e059720c4873099ba4b"}
Jul 12 22:54:37 ip-172-x-y-z kubelet[1635]: I0712 22:54:37.705151 1635 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 12 22:54:37 ip-172-x-y-z kubelet[1635]: I0712 22:54:37.705598 1635 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m3.medium
Jul 12 22:54:37 ip-172-x-y-z kubelet[1635]: I0712 22:54:37.719284 1635 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Jul 12 22:54:37 ip-172-x-y-z kubelet[1635]: I0712 22:54:37.719726 1635 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Jul 12 22:54:37 ip-172-x-y-z kubelet[1635]: I0712 22:54:37.724939 1635 kubelet_node_status.go:425] Recording NodeHasSufficientDisk event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:37 ip-172-x-y-z kubelet[1635]: I0712 22:54:37.725349 1635 kubelet_node_status.go:425] Recording NodeHasSufficientMemory event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:37 ip-172-x-y-z kubelet[1635]: I0712 22:54:37.725749 1635 kubelet_node_status.go:425] Recording NodeHasNoDiskPressure event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:37 ip-172-x-y-z kubelet[1635]: I0712 22:54:37.726144 1635 kubelet_node_status.go:425] Recording NodeHasSufficientPID event message for node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:38 ip-172-x-y-z kubelet[1635]: I0712 22:54:38.091843 1635 kubelet_node_status.go:85] Successfully registered node ip-172-x-y-z.us-west-2.compute.internal
Jul 12 22:54:38 ip-172-x-y-z kubelet[1635]: I0712 22:54:38.700316 1635 prober.go:111] Liveness probe for "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(28832ee64de4d2f079c81d4acb312edc):kube-apiserver" failed (failure): HTTP probe failed with statuscode: 500
Jul 12 22:54:38 ip-172-x-y-z kubelet[1635]: I0712 22:54:38.700371 1635 kubelet.go:1939] SyncLoop (container unhealthy): "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(28832ee64de4d2f079c81d4acb312edc)"
Jul 12 22:54:39 ip-172-x-y-z kubelet[1635]: I0712 22:54:39.227153 1635 kubelet.go:1861] SyncLoop (ADD, "api"): "kube-proxy-ip-172-x-y-z.us-west-2.compute.internal_kube-system(8e5e8ccc-8626-11e8-833e-02da1f9cf30e)"
Jul 12 22:54:39 ip-172-x-y-z kubelet[1635]: I0712 22:54:39.317706 1635 kubelet.go:1861] SyncLoop (ADD, "api"): "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(8e58d1f8-8626-11e8-833e-02da1f9cf30e)"
Jul 12 22:54:39 ip-172-x-y-z kubelet[1635]: I0712 22:54:39.623587 1635 kuberuntime_manager.go:549] Container "kube-apiserver" ({"docker" "62e1e4e419add16f588df894eb4acd7c3e3c9f7fdbfdbb781aa24984e0ea4413"}) of pod kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(28832ee64de4d2f079c81d4acb312edc): Container failed liveness probe.. Container will be killed and recreated.
Jul 12 22:54:39 ip-172-x-y-z kubelet[1635]: I0712 22:54:39.623629 1635 kuberuntime_container.go:547] Killing container "docker://62e1e4e419add16f588df894eb4acd7c3e3c9f7fdbfdbb781aa24984e0ea4413" with 30 second grace period
Jul 12 22:54:40 ip-172-x-y-z kubelet[1635]: E0712 22:54:40.411990 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:54:40 ip-172-x-y-z kubelet[1635]: E0712 22:54:40.412561 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:54:41 ip-172-x-y-z kubelet[1635]: E0712 22:54:41.203378 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:54:42 ip-172-x-y-z kubelet[1635]: E0712 22:54:42.236515 1635 event.go:209] Unable to write event: 'Post https://127.0.0.1/api/v1/namespaces/default/events: dial tcp 127.0.0.1:443: getsockopt: connection refused' (may retry after sleeping)
Jul 12 22:54:42 ip-172-x-y-z kubelet[1635]: E0712 22:54:42.811940 1635 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=35, ErrCode=NO_ERROR, debug=""
Jul 12 22:54:42 ip-172-x-y-z kubelet[1635]: E0712 22:54:42.827104 1635 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=35, ErrCode=NO_ERROR, debug=""
Jul 12 22:54:42 ip-172-x-y-z kubelet[1635]: E0712 22:54:42.827799 1635 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=35, ErrCode=NO_ERROR, debug=""
Jul 12 22:54:42 ip-172-x-y-z kubelet[1635]: E0712 22:54:42.865074 1635 reflector.go:322] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to watch *v1.Service: Get https://127.0.0.1/api/v1/services?resourceVersion=3258295&timeoutSeconds=372&watch=true: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:54:42 ip-172-x-y-z kubelet[1635]: E0712 22:54:42.865576 1635 reflector.go:322] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to watch *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&resourceVersion=3258306&timeoutSeconds=376&watch=true: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:54:42 ip-172-x-y-z kubelet[1635]: E0712 22:54:42.866093 1635 reflector.go:322] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to watch *v1.Node: Get https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&resourceVersion=3258304&timeoutSeconds=501&watch=true: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:54:43 ip-172-x-y-z kubelet[1635]: I0712 22:54:43.816286 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(28832ee64de4d2f079c81d4acb312edc)", event: &pleg.PodLifecycleEvent{ID:"28832ee64de4d2f079c81d4acb312edc", Type:"ContainerDied", Data:"62e1e4e419add16f588df894eb4acd7c3e3c9f7fdbfdbb781aa24984e0ea4413"}
Jul 12 22:54:43 ip-172-x-y-z kubelet[1635]: I0712 22:54:43.835526 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(28832ee64de4d2f079c81d4acb312edc)", event: &pleg.PodLifecycleEvent{ID:"28832ee64de4d2f079c81d4acb312edc", Type:"ContainerStarted", Data:"9b32d23af5b1815b67beb70eac7cf62e1682d3df08ba6062520e65b0d5548470"}
Jul 12 22:54:46 ip-172-x-y-z kubelet[1635]: E0712 22:54:46.245086 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:54:50 ip-172-x-y-z kubelet[1635]: E0712 22:54:50.464214 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:54:50 ip-172-x-y-z kubelet[1635]: E0712 22:54:50.464793 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:54:51 ip-172-x-y-z kubelet[1635]: E0712 22:54:51.246776 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:54:53 ip-172-x-y-z kubelet[1635]: W0712 22:54:53.836709 1635 status_manager.go:461] Failed to get status for pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(28832ee64de4d2f079c81d4acb312edc)": Get https://127.0.0.1/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal: net/http: TLS handshake timeout
Jul 12 22:54:53 ip-172-x-y-z kubelet[1635]: E0712 22:54:53.884301 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0: net/http: TLS handshake timeout
Jul 12 22:54:53 ip-172-x-y-z kubelet[1635]: E0712 22:54:53.884387 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: net/http: TLS handshake timeout
Jul 12 22:54:53 ip-172-x-y-z kubelet[1635]: E0712 22:54:53.884436 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: net/http: TLS handshake timeout
Jul 12 22:54:56 ip-172-x-y-z kubelet[1635]: E0712 22:54:56.254522 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:54:58 ip-172-x-y-z kubelet[1635]: E0712 22:54:58.384550 1635 kubelet_node_status.go:377] Error updating node status, will retry: error getting node "ip-172-x-y-z.us-west-2.compute.internal": Get https://127.0.0.1/api/v1/nodes/ip-172-x-y-z.us-west-2.compute.internal?resourceVersion=0&timeout=10s: net/http: TLS handshake timeout
Jul 12 22:54:58 ip-172-x-y-z kubelet[1635]: I0712 22:54:58.648127 1635 prober.go:111] Liveness probe for "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(28832ee64de4d2f079c81d4acb312edc):kube-apiserver" failed (failure): Get http://127.0.0.1:8080/healthz: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Jul 12 22:55:00 ip-172-x-y-z kubelet[1635]: E0712 22:55:00.502181 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:55:00 ip-172-x-y-z kubelet[1635]: E0712 22:55:00.502772 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:55:01 ip-172-x-y-z kubelet[1635]: E0712 22:55:01.032172 1635 event.go:200] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-x-y-z.us-west-2.compute.internal.1540c0f1d476749b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-x-y-z.us-west-2.compute.internal", UID:"ip-172-x-y-z.us-west-2.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-x-y-z.us-west-2.compute.internal"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbeca14dc8b55809b, ext:2635216556, loc:(*time.Location)(0x5ba4020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbeca14dc8b55809b, ext:2635216556, loc:(*time.Location)(0x5ba4020)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "kubelet" cannot create events in the namespace "default"' (will not retry!)
Jul 12 22:55:01 ip-172-x-y-z kubelet[1635]: W0712 22:55:01.114792 1635 status_manager.go:461] Failed to get status for pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(28832ee64de4d2f079c81d4acb312edc)": pods "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" is forbidden: User "kubelet" cannot get pods in the namespace "kube-system"
Jul 12 22:55:01 ip-172-x-y-z kubelet[1635]: E0712 22:55:01.116070 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: services is forbidden: User "kubelet" cannot list services at the cluster scope
Jul 12 22:55:01 ip-172-x-y-z kubelet[1635]: E0712 22:55:01.116718 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: nodes is forbidden: User "kubelet" cannot list nodes at the cluster scope
Jul 12 22:55:01 ip-172-x-y-z kubelet[1635]: E0712 22:55:01.117271 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "kubelet" cannot list pods at the cluster scope
Jul 12 22:55:01 ip-172-x-y-z kubelet[1635]: E0712 22:55:01.117838 1635 event.go:200] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-x-y-z.us-west-2.compute.internal.1540c0f1db25abfd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-x-y-z.us-west-2.compute.internal", UID:"ip-172-x-y-z.us-west-2.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientDisk", Message:"Node ip-172-x-y-z.us-west-2.compute.internal status is now: NodeHasSufficientDisk", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-x-y-z.us-west-2.compute.internal"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbeca14dc9204b7fd, ext:2747362832, loc:(*time.Location)(0x5ba4020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbeca14dc9204b7fd, ext:2747362832, loc:(*time.Location)(0x5ba4020)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "kubelet" cannot create events in the namespace "default"' (will not retry!)
Jul 12 22:55:01 ip-172-x-y-z kubelet[1635]: E0712 22:55:01.143352 1635 kubelet_node_status.go:377] Error updating node status, will retry: error getting node "ip-172-x-y-z.us-west-2.compute.internal": nodes "ip-172-x-y-z.us-west-2.compute.internal" is forbidden: User "kubelet" cannot get nodes at the cluster scope
Jul 12 22:55:01 ip-172-x-y-z kubelet[1635]: E0712 22:55:01.143954 1635 kubelet_node_status.go:377] Error updating node status, will retry: error getting node "ip-172-x-y-z.us-west-2.compute.internal": Get https://127.0.0.1/api/v1/nodes/ip-172-x-y-z.us-west-2.compute.internal?timeout=10s: write tcp 127.0.0.1:28900->127.0.0.1:443: use of closed network connection
Jul 12 22:55:01 ip-172-x-y-z kubelet[1635]: E0712 22:55:01.144403 1635 kubelet_node_status.go:377] Error updating node status, will retry: error getting node "ip-172-x-y-z.us-west-2.compute.internal": Get https://127.0.0.1/api/v1/nodes/ip-172-x-y-z.us-west-2.compute.internal?timeout=10s: write tcp 127.0.0.1:28900->127.0.0.1:443: use of closed network connection
Jul 12 22:55:01 ip-172-x-y-z kubelet[1635]: E0712 22:55:01.144849 1635 kubelet_node_status.go:377] Error updating node status, will retry: error getting node "ip-172-x-y-z.us-west-2.compute.internal": Get https://127.0.0.1/api/v1/nodes/ip-172-x-y-z.us-west-2.compute.internal?timeout=10s: write tcp 127.0.0.1:28900->127.0.0.1:443: use of closed network connection
Jul 12 22:55:01 ip-172-x-y-z kubelet[1635]: E0712 22:55:01.145219 1635 kubelet_node_status.go:366] Unable to update node status: update node status exceeds retry count
Jul 12 22:55:01 ip-172-x-y-z kubelet[1635]: E0712 22:55:01.145662 1635 event.go:209] Unable to write event: 'Post https://127.0.0.1/api/v1/namespaces/default/events: read tcp 127.0.0.1:28900->127.0.0.1:443: use of closed network connection; some request body already written' (may retry after sleeping)
Jul 12 22:55:01 ip-172-x-y-z kubelet[1635]: W0712 22:55:01.146062 1635 status_manager.go:461] Failed to get status for pod "kube-proxy-ip-172-x-y-z.us-west-2.compute.internal_kube-system(ebcdbad12e23036616a75b4591735ea1)": Get https://127.0.0.1/api/v1/namespaces/kube-system/pods/kube-proxy-ip-172-x-y-z.us-west-2.compute.internal: read tcp 127.0.0.1:28900->127.0.0.1:443: use of closed network connection
Jul 12 22:55:01 ip-172-x-y-z kubelet[1635]: W0712 22:55:01.246847 1635 status_manager.go:461] Failed to get status for pod "kube-proxy-ip-172-x-y-z.us-west-2.compute.internal_kube-system(ebcdbad12e23036616a75b4591735ea1)": pods "kube-proxy-ip-172-x-y-z.us-west-2.compute.internal" is forbidden: User "kubelet" cannot get pods in the namespace "kube-system"
Jul 12 22:55:01 ip-172-x-y-z kubelet[1635]: W0712 22:55:01.271743 1635 status_manager.go:461] Failed to get status for pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(28832ee64de4d2f079c81d4acb312edc)": pods "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" is forbidden: User "kubelet" cannot get pods in the namespace "kube-system"
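The repeated 'is forbidden: User "kubelet" cannot ...' messages above are RBAC denials: the kubelet is authenticating as the user "kubelet", but that user is not bound to any role that lets it read nodes/pods/services or post events. A quick way to confirm from a machine with working admin credentials (a sketch, assuming RBAC is the active authorizer; the node name stays redacted):

  # Does the "kubelet" user actually hold the permissions being denied above?
  kubectl auth can-i list nodes --as=kubelet
  kubectl auth can-i get pods --as=kubelet -n kube-system
  # Which bindings, if any, mention the kubelet user or the system:nodes group?
  kubectl get clusterrolebindings -o wide | grep -Ei 'kubelet|system:nodes'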
Jul 12 22:55:01 ip-172-x-y-z kubelet[1635]: E0712 22:55:01.285634 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:55:06 ip-172-x-y-z kubelet[1635]: E0712 22:55:06.290658 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
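"Kubenet does not have netConfig ... lack of PodCIDR" means the node has no spec.podCIDR yet. With kubenet the per-node pod CIDR is carved out of --cluster-cidr (100.96.0.0/11 in the controller-manager command dumped below) and handed out by kube-controller-manager when --allocate-node-cidrs=true, so this message will keep repeating for as long as the controller manager is down or the kubelet is not allowed to update its Node object. A sketch of how to check, assuming admin credentials and substituting the real node name for the redacted one:

  # Empty output here means no pod CIDR has been allocated to this node yet
  kubectl get node ip-172-x-y-z.us-west-2.compute.internal -o jsonpath='{.spec.podCIDR}'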
Jul 12 22:55:08 ip-172-x-y-z kubelet[1635]: I0712 22:55:08.718277 1635 prober.go:111] Liveness probe for "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(28832ee64de4d2f079c81d4acb312edc):kube-apiserver" failed (failure): HTTP probe failed with statuscode: 500
Jul 12 22:55:08 ip-172-x-y-z kubelet[1635]: I0712 22:55:08.718332 1635 kubelet.go:1939] SyncLoop (container unhealthy): "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(28832ee64de4d2f079c81d4acb312edc)"
Jul 12 22:55:09 ip-172-x-y-z kubelet[1635]: I0712 22:55:09.028800 1635 kuberuntime_manager.go:549] Container "kube-apiserver" ({"docker" "9b32d23af5b1815b67beb70eac7cf62e1682d3df08ba6062520e65b0d5548470"}) of pod kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(28832ee64de4d2f079c81d4acb312edc): Container failed liveness probe.. Container will be killed and recreated.
Jul 12 22:55:09 ip-172-x-y-z kubelet[1635]: I0712 22:55:09.028841 1635 kuberuntime_container.go:547] Killing container "docker://9b32d23af5b1815b67beb70eac7cf62e1682d3df08ba6062520e65b0d5548470" with 30 second grace period
Jul 12 22:55:09 ip-172-x-y-z kubelet[1635]: E0712 22:55:09.384564 1635 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
Jul 12 22:55:09 ip-172-x-y-z kubelet[1635]: E0712 22:55:09.384887 1635 request.go:785] Unexpected error when reading response body: http2.GoAwayError{LastStreamID:0x19, ErrCode:0x0, DebugData:""}
Jul 12 22:55:09 ip-172-x-y-z kubelet[1635]: E0712 22:55:09.384959 1635 event.go:209] Unable to write event: 'Unexpected error http2.GoAwayError{LastStreamID:0x19, ErrCode:0x0, DebugData:""} when reading response body. Please retry.' (may retry after sleeping)
Jul 12 22:55:09 ip-172-x-y-z kubelet[1635]: E0712 22:55:09.384984 1635 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
Jul 12 22:55:09 ip-172-x-y-z kubelet[1635]: E0712 22:55:09.385168 1635 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
Jul 12 22:55:09 ip-172-x-y-z kubelet[1635]: E0712 22:55:09.385398 1635 reflector.go:322] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to watch *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&resourceVersion=3258312&timeoutSeconds=577&watch=true: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:55:09 ip-172-x-y-z kubelet[1635]: E0712 22:55:09.387511 1635 reflector.go:322] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to watch *v1.Service: Get https://127.0.0.1/api/v1/services?resourceVersion=3258306&timeoutSeconds=495&watch=true: dial tcp 127.0.0.1:443: getsockopt: connection refused
Jul 12 22:55:09 ip-172-x-y-z kubelet[1635]: E0712 22:55:09.387568 1635 reflector.go:322] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to watch *v1.Node: Get https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&resourceVersion=3258306&timeoutSeconds=548&watch=true: dial tcp 127.0.0.1:443: getsockopt: connection refused
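This burst of GOAWAY / "use of closed network connection" / "connection refused" errors, followed by the "TLS handshake timeout" lines below, lines up with the kubelet killing the kube-apiserver container after its liveness probe returned 500 (22:55:08-09 above): every local client of https://127.0.0.1/ loses its connection while the apiserver restarts. One way to watch the restart from the master itself (a sketch; the k8s_* names are Docker's convention for kubelet-managed containers):

  # Is kube-apiserver running again, and how long ago did it (re)start?
  sudo docker ps --filter name=k8s_kube-apiserver --format '{{.Names}}\t{{.Status}}'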
Jul 12 22:55:09 ip-172-x-y-z kubelet[1635]: I0712 22:55:09.452119 1635 kuberuntime_manager.go:757] checking backoff for container "kube-apiserver" in pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(28832ee64de4d2f079c81d4acb312edc)"
Jul 12 22:55:10 ip-172-x-y-z kubelet[1635]: I0712 22:55:10.222102 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(28832ee64de4d2f079c81d4acb312edc)", event: &pleg.PodLifecycleEvent{ID:"28832ee64de4d2f079c81d4acb312edc", Type:"ContainerDied", Data:"9b32d23af5b1815b67beb70eac7cf62e1682d3df08ba6062520e65b0d5548470"}
Jul 12 22:55:10 ip-172-x-y-z kubelet[1635]: I0712 22:55:10.222763 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(28832ee64de4d2f079c81d4acb312edc)", event: &pleg.PodLifecycleEvent{ID:"28832ee64de4d2f079c81d4acb312edc", Type:"ContainerStarted", Data:"f7bf082455874036cd6d9b25ca3e5966be96c4e901e5de6b38905c075952801e"}
Jul 12 22:55:10 ip-172-x-y-z kubelet[1635]: E0712 22:55:10.528298 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:55:10 ip-172-x-y-z kubelet[1635]: E0712 22:55:10.528862 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
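The paired 'Failed to get system container stats for "/system.slice/kubelet.service"' / '"/system.slice/docker.service"' errors repeat every ~10s for the rest of this log. They come from cAdvisor not tracking those systemd slices and are largely cosmetic here; the commonly cited workaround (a sketch of extra kubelet flags, not something this kops config is shown to set) is to point the kubelet at the slice explicitly:

  # Extra kubelet flags sometimes used to silence the cgroup-stats errors
  --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice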
Jul 12 22:55:11 ip-172-x-y-z kubelet[1635]: E0712 22:55:11.297491 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:55:16 ip-172-x-y-z kubelet[1635]: E0712 22:55:16.304406 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:55:20 ip-172-x-y-z kubelet[1635]: W0712 22:55:20.241987 1635 status_manager.go:461] Failed to get status for pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(28832ee64de4d2f079c81d4acb312edc)": Get https://127.0.0.1/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal: net/http: TLS handshake timeout
Jul 12 22:55:20 ip-172-x-y-z kubelet[1635]: E0712 22:55:20.403991 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: net/http: TLS handshake timeout
Jul 12 22:55:20 ip-172-x-y-z kubelet[1635]: E0712 22:55:20.404055 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-x-y-z.us-west-2.compute.internal&limit=500&resourceVersion=0: net/http: TLS handshake timeout
Jul 12 22:55:20 ip-172-x-y-z kubelet[1635]: E0712 22:55:20.404101 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0: net/http: TLS handshake timeout
Jul 12 22:55:20 ip-172-x-y-z kubelet[1635]: E0712 22:55:20.551559 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:55:20 ip-172-x-y-z kubelet[1635]: E0712 22:55:20.552055 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:55:21 ip-172-x-y-z kubelet[1635]: E0712 22:55:21.153419 1635 kubelet_node_status.go:377] Error updating node status, will retry: error getting node "ip-172-x-y-z.us-west-2.compute.internal": Get https://127.0.0.1/api/v1/nodes/ip-172-x-y-z.us-west-2.compute.internal?resourceVersion=0&timeout=10s: net/http: TLS handshake timeout
Jul 12 22:55:21 ip-172-x-y-z kubelet[1635]: E0712 22:55:21.333947 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:55:22 ip-172-x-y-z kubelet[1635]: E0712 22:55:22.214800 1635 kubelet.go:1622] Failed creating a mirror pod for "etcd-server-events-ip-172-x-y-z.us-west-2.compute.internal_kube-system(e17653dc7c659b517905a5b8a82bab6c)": Post https://127.0.0.1/api/v1/namespaces/kube-system/pods: net/http: TLS handshake timeout
Jul 12 22:55:26 ip-172-x-y-z kubelet[1635]: E0712 22:55:26.343336 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:55:27 ip-172-x-y-z kubelet[1635]: E0712 22:55:27.201203 1635 kubelet.go:1622] Failed creating a mirror pod for "kube-scheduler-ip-172-x-y-z.us-west-2.compute.internal_kube-system(9490af50fbd859e60994c08c8af0c55c)": Post https://127.0.0.1/api/v1/namespaces/kube-system/pods: net/http: TLS handshake timeout
Jul 12 22:55:27 ip-172-x-y-z kubelet[1635]: W0712 22:55:27.521415 1635 status_manager.go:461] Failed to get status for pod "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(28832ee64de4d2f079c81d4acb312edc)": pods "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal" is forbidden: User "kubelet" cannot get pods in the namespace "kube-system"
Jul 12 22:55:27 ip-172-x-y-z kubelet[1635]: E0712 22:55:27.522142 1635 event.go:200] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-x-y-z.us-west-2.compute.internal.1540c0f1db2bbeeb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-x-y-z.us-west-2.compute.internal", UID:"ip-172-x-y-z.us-west-2.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ip-172-x-y-z.us-west-2.compute.internal status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-x-y-z.us-west-2.compute.internal"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbeca14dc920acaeb, ext:2747760895, loc:(*time.Location)(0x5ba4020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbeca14dc920acaeb, ext:2747760895, loc:(*time.Location)(0x5ba4020)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "kubelet" cannot create events in the namespace "default"' (will not retry!)
Jul 12 22:55:27 ip-172-x-y-z kubelet[1635]: E0712 22:55:27.561251 1635 kubelet_node_status.go:377] Error updating node status, will retry: error getting node "ip-172-x-y-z.us-west-2.compute.internal": nodes "ip-172-x-y-z.us-west-2.compute.internal" is forbidden: User "kubelet" cannot get nodes at the cluster scope
Jul 12 22:55:27 ip-172-x-y-z kubelet[1635]: E0712 22:55:27.561391 1635 kubelet_node_status.go:377] Error updating node status, will retry: error getting node "ip-172-x-y-z.us-west-2.compute.internal": Get https://127.0.0.1/api/v1/nodes/ip-172-x-y-z.us-west-2.compute.internal?timeout=10s: write tcp 127.0.0.1:29270->127.0.0.1:443: use of closed network connection
Jul 12 22:55:27 ip-172-x-y-z kubelet[1635]: E0712 22:55:27.561499 1635 kubelet_node_status.go:377] Error updating node status, will retry: error getting node "ip-172-x-y-z.us-west-2.compute.internal": Get https://127.0.0.1/api/v1/nodes/ip-172-x-y-z.us-west-2.compute.internal?timeout=10s: write tcp 127.0.0.1:29270->127.0.0.1:443: use of closed network connection
Jul 12 22:55:27 ip-172-x-y-z kubelet[1635]: E0712 22:55:27.561588 1635 kubelet_node_status.go:377] Error updating node status, will retry: error getting node "ip-172-x-y-z.us-west-2.compute.internal": Get https://127.0.0.1/api/v1/nodes/ip-172-x-y-z.us-west-2.compute.internal?timeout=10s: write tcp 127.0.0.1:29270->127.0.0.1:443: use of closed network connection
Jul 12 22:55:27 ip-172-x-y-z kubelet[1635]: E0712 22:55:27.561599 1635 kubelet_node_status.go:366] Unable to update node status: update node status exceeds retry count
Jul 12 22:55:27 ip-172-x-y-z kubelet[1635]: E0712 22:55:27.561682 1635 request.go:785] Unexpected error when reading response body: &net.OpError{Op:"read", Net:"tcp", Source:(*net.TCPAddr)(0xc421236570), Addr:(*net.TCPAddr)(0xc4212365a0), Err:(*errors.errorString)(0xc420038110)}
Jul 12 22:55:27 ip-172-x-y-z kubelet[1635]: E0712 22:55:27.561734 1635 event.go:209] Unable to write event: 'Unexpected error &net.OpError{Op:"read", Net:"tcp", Source:(*net.TCPAddr)(0xc421236570), Addr:(*net.TCPAddr)(0xc4212365a0), Err:(*errors.errorString)(0xc420038110)} when reading response body. Please retry.' (may retry after sleeping)
Jul 12 22:55:27 ip-172-x-y-z kubelet[1635]: E0712 22:55:27.561746 1635 request.go:785] Unexpected error when reading response body: &net.OpError{Op:"read", Net:"tcp", Source:(*net.TCPAddr)(0xc421236570), Addr:(*net.TCPAddr)(0xc4212365a0), Err:(*errors.errorString)(0xc420038110)}
Jul 12 22:55:27 ip-172-x-y-z kubelet[1635]: E0712 22:55:27.561785 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unexpected error &net.OpError{Op:"read", Net:"tcp", Source:(*net.TCPAddr)(0xc421236570), Addr:(*net.TCPAddr)(0xc4212365a0), Err:(*errors.errorString)(0xc420038110)} when reading response body. Please retry.
Jul 12 22:55:27 ip-172-x-y-z kubelet[1635]: E0712 22:55:27.561800 1635 request.go:785] Unexpected error when reading response body: &net.OpError{Op:"read", Net:"tcp", Source:(*net.TCPAddr)(0xc421236570), Addr:(*net.TCPAddr)(0xc4212365a0), Err:(*errors.errorString)(0xc420038110)}
Jul 12 22:55:27 ip-172-x-y-z kubelet[1635]: E0712 22:55:27.561859 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Unexpected error &net.OpError{Op:"read", Net:"tcp", Source:(*net.TCPAddr)(0xc421236570), Addr:(*net.TCPAddr)(0xc4212365a0), Err:(*errors.errorString)(0xc420038110)} when reading response body. Please retry.
Jul 12 22:55:27 ip-172-x-y-z kubelet[1635]: E0712 22:55:27.561879 1635 request.go:785] Unexpected error when reading response body: &net.OpError{Op:"read", Net:"tcp", Source:(*net.TCPAddr)(0xc421236570), Addr:(*net.TCPAddr)(0xc4212365a0), Err:(*errors.errorString)(0xc420038110)}
Jul 12 22:55:27 ip-172-x-y-z kubelet[1635]: E0712 22:55:27.561919 1635 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Unexpected error &net.OpError{Op:"read", Net:"tcp", Source:(*net.TCPAddr)(0xc421236570), Addr:(*net.TCPAddr)(0xc4212365a0), Err:(*errors.errorString)(0xc420038110)} when reading response body. Please retry.
Jul 12 22:55:28 ip-172-x-y-z kubelet[1635]: I0712 22:55:28.501925 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)", event: &pleg.PodLifecycleEvent{ID:"358effbb6a9829e718b6ec105343a9cd", Type:"ContainerDied", Data:"5d4981402226fd06dcf979f49095b15110cc38018abfd75b671a6625abe69e4b"}
Jul 12 22:55:28 ip-172-x-y-z kubelet[1635]: I0712 22:55:28.722449 1635 prober.go:111] Liveness probe for "kube-apiserver-ip-172-x-y-z.us-west-2.compute.internal_kube-system(28832ee64de4d2f079c81d4acb312edc):kube-apiserver" failed (failure): HTTP probe failed with statuscode: 500
Jul 12 22:55:30 ip-172-x-y-z kubelet[1635]: E0712 22:55:30.663536 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:55:30 ip-172-x-y-z kubelet[1635]: E0712 22:55:30.664118 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:55:31 ip-172-x-y-z kubelet[1635]: E0712 22:55:31.350486 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:55:33 ip-172-x-y-z kubelet[1635]: I0712 22:55:33.136899 1635 kubelet.go:1861] SyncLoop (ADD, "api"): "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(ae7c3b60-8626-11e8-8e3d-02da1f9cf30e)"
Jul 12 22:55:33 ip-172-x-y-z kubelet[1635]: I0712 22:55:33.438226 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 22:55:33 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 22:55:33 ip-172-x-y-z kubelet[1635]: I0712 22:55:33.438335 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 22:55:33 ip-172-x-y-z kubelet[1635]: E0712 22:55:33.448926 1635 kubelet_pods.go:395] hostname for pod:"kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" was longer than 63. Truncated hostname to :"kube-controller-manager-ip-172-x-y-z.us-west-2.compute.inte"
Jul 12 22:55:33 ip-172-x-y-z kubelet[1635]: I0712 22:55:33.762792 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)", event: &pleg.PodLifecycleEvent{ID:"358effbb6a9829e718b6ec105343a9cd", Type:"ContainerStarted", Data:"5d1ebb20e7cce5668e75af0c44aa5703e56c6ae1f8b14c7b582574bde942ee46"}
Jul 12 22:55:35 ip-172-x-y-z kubelet[1635]: I0712 22:55:35.401972 1635 kubelet.go:1861] SyncLoop (ADD, "api"): "etcd-server-ip-172-x-y-z.us-west-2.compute.internal_kube-system(afef182c-8626-11e8-8e3d-02da1f9cf30e)"
Jul 12 22:55:36 ip-172-x-y-z kubelet[1635]: E0712 22:55:36.357385 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:55:40 ip-172-x-y-z kubelet[1635]: E0712 22:55:40.728803 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:55:40 ip-172-x-y-z kubelet[1635]: E0712 22:55:40.730199 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:55:41 ip-172-x-y-z kubelet[1635]: E0712 22:55:41.358651 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:55:46 ip-172-x-y-z kubelet[1635]: E0712 22:55:46.360080 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:55:50 ip-172-x-y-z kubelet[1635]: E0712 22:55:50.738982 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:55:50 ip-172-x-y-z kubelet[1635]: E0712 22:55:50.739589 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:55:51 ip-172-x-y-z kubelet[1635]: E0712 22:55:51.361660 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:55:56 ip-172-x-y-z kubelet[1635]: E0712 22:55:56.362629 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:56:00 ip-172-x-y-z kubelet[1635]: E0712 22:56:00.756144 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:56:00 ip-172-x-y-z kubelet[1635]: E0712 22:56:00.756716 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:56:01 ip-172-x-y-z kubelet[1635]: E0712 22:56:01.363886 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:56:05 ip-172-x-y-z kubelet[1635]: I0712 22:56:05.003985 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)", event: &pleg.PodLifecycleEvent{ID:"358effbb6a9829e718b6ec105343a9cd", Type:"ContainerDied", Data:"5d1ebb20e7cce5668e75af0c44aa5703e56c6ae1f8b14c7b582574bde942ee46"}
Jul 12 22:56:05 ip-172-x-y-z kubelet[1635]: I0712 22:56:05.305053 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 22:56:05 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 22:56:05 ip-172-x-y-z kubelet[1635]: I0712 22:56:05.305159 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 22:56:05 ip-172-x-y-z kubelet[1635]: I0712 22:56:05.305294 1635 kuberuntime_manager.go:767] Back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 22:56:05 ip-172-x-y-z kubelet[1635]: E0712 22:56:05.305359 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
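kube-controller-manager is now in CrashLoopBackOff: the container starts at 22:55:33, dies about 30 seconds later, and the kubelet backs off (10s here, doubling on later restarts) before trying again. The container command above tees its output to /var/log/kube-controller-manager.log, and the logfile mount suggests that path is also on the host, so the exit reason is easiest to read straight from the master instead of through the (equally unhealthy) apiserver. For example:

  # Last lines written by kube-controller-manager before it exited
  sudo tail -n 100 /var/log/kube-controller-manager.log
  # Or ask Docker for the dead container's output directly
  sudo docker ps -a --filter name=k8s_kube-controller-manager
  sudo docker logs --tail 100 <container-id-from-the-line-above>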
Jul 12 22:56:06 ip-172-x-y-z kubelet[1635]: E0712 22:56:06.364743 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:56:10 ip-172-x-y-z kubelet[1635]: E0712 22:56:10.765366 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:56:10 ip-172-x-y-z kubelet[1635]: E0712 22:56:10.774020 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:56:11 ip-172-x-y-z kubelet[1635]: E0712 22:56:11.366470 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:56:14 ip-172-x-y-z kubelet[1635]: I0712 22:56:14.221543 1635 kubelet.go:1939] SyncLoop (container unhealthy): "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 22:56:14 ip-172-x-y-z kubelet[1635]: I0712 22:56:14.522695 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 22:56:14 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 22:56:14 ip-172-x-y-z kubelet[1635]: I0712 22:56:14.522802 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 22:56:14 ip-172-x-y-z kubelet[1635]: I0712 22:56:14.522961 1635 kuberuntime_manager.go:767] Back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 22:56:14 ip-172-x-y-z kubelet[1635]: E0712 22:56:14.522995 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 22:56:16 ip-172-x-y-z kubelet[1635]: E0712 22:56:16.367699 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:56:20 ip-172-x-y-z kubelet[1635]: E0712 22:56:20.790042 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:56:20 ip-172-x-y-z kubelet[1635]: E0712 22:56:20.790596 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:56:21 ip-172-x-y-z kubelet[1635]: E0712 22:56:21.369244 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:56:26 ip-172-x-y-z kubelet[1635]: E0712 22:56:26.370267 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:56:30 ip-172-x-y-z kubelet[1635]: I0712 22:56:30.495729 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 22:56:30 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 22:56:30 ip-172-x-y-z kubelet[1635]: I0712 22:56:30.497188 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 22:56:30 ip-172-x-y-z kubelet[1635]: E0712 22:56:30.499098 1635 kubelet_pods.go:395] hostname for pod:"kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" was longer than 63. Truncated hostname to :"kube-controller-manager-ip-172-x-y-z.us-west-2.compute.inte"
Jul 12 22:56:30 ip-172-x-y-z kubelet[1635]: E0712 22:56:30.806667 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:56:30 ip-172-x-y-z kubelet[1635]: E0712 22:56:30.816164 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:56:31 ip-172-x-y-z kubelet[1635]: I0712 22:56:31.209785 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)", event: &pleg.PodLifecycleEvent{ID:"358effbb6a9829e718b6ec105343a9cd", Type:"ContainerStarted", Data:"601873b6be1911596c4de91caef3e96f85195a8a0e09576901499dc7c3605736"}
Jul 12 22:56:31 ip-172-x-y-z kubelet[1635]: E0712 22:56:31.371275 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:56:36 ip-172-x-y-z kubelet[1635]: E0712 22:56:36.375175 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:56:40 ip-172-x-y-z kubelet[1635]: E0712 22:56:40.824153 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:56:40 ip-172-x-y-z kubelet[1635]: E0712 22:56:40.824751 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:56:41 ip-172-x-y-z kubelet[1635]: E0712 22:56:41.383334 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:56:46 ip-172-x-y-z kubelet[1635]: E0712 22:56:46.384572 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:56:49 ip-172-x-y-z kubelet[1635]: I0712 22:56:49.199406 1635 kubelet.go:1861] SyncLoop (ADD, "api"): "etcd-server-events-ip-172-x-y-z.us-west-2.compute.internal_kube-system(dbf53637-8626-11e8-8e3d-02da1f9cf30e)"
Jul 12 22:56:50 ip-172-x-y-z kubelet[1635]: I0712 22:56:50.247781 1635 kubelet.go:1861] SyncLoop (ADD, "api"): "kube-scheduler-ip-172-x-y-z.us-west-2.compute.internal_kube-system(dc8ec78a-8626-11e8-8e3d-02da1f9cf30e)"
Jul 12 22:56:50 ip-172-x-y-z kubelet[1635]: E0712 22:56:50.833668 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:56:50 ip-172-x-y-z kubelet[1635]: E0712 22:56:50.834271 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:56:51 ip-172-x-y-z kubelet[1635]: E0712 22:56:51.398294 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:56:56 ip-172-x-y-z kubelet[1635]: E0712 22:56:56.407785 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:57:00 ip-172-x-y-z kubelet[1635]: E0712 22:57:00.851986 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:57:00 ip-172-x-y-z kubelet[1635]: E0712 22:57:00.852617 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:57:01 ip-172-x-y-z kubelet[1635]: E0712 22:57:01.409179 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:57:06 ip-172-x-y-z kubelet[1635]: E0712 22:57:06.410840 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:57:10 ip-172-x-y-z kubelet[1635]: E0712 22:57:10.860997 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:57:10 ip-172-x-y-z kubelet[1635]: E0712 22:57:10.861640 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:57:11 ip-172-x-y-z kubelet[1635]: E0712 22:57:11.411851 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:57:14 ip-172-x-y-z kubelet[1635]: I0712 22:57:14.224598 1635 prober.go:111] Liveness probe for "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd):kube-controller-manager" failed (failure): Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: getsockopt: connection refused
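The liveness probe failing here is the HTTPGet on 127.0.0.1:10252/healthz from the container spec above; "connection refused" means nothing is listening on 10252, i.e. the controller-manager process exited on its own before the probe reached it, rather than the probe timing out against a slow but running process. The same endpoint can be probed by hand on the master while debugging (sketch; it is plain HTTP, no credentials needed):

  # Returns "ok" while kube-controller-manager is up; connection refused otherwise
  curl -sS http://127.0.0.1:10252/healthz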
Jul 12 22:57:14 ip-172-x-y-z kubelet[1635]: I0712 22:57:14.527210 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)", event: &pleg.PodLifecycleEvent{ID:"358effbb6a9829e718b6ec105343a9cd", Type:"ContainerDied", Data:"601873b6be1911596c4de91caef3e96f85195a8a0e09576901499dc7c3605736"}
Jul 12 22:57:14 ip-172-x-y-z kubelet[1635]: I0712 22:57:14.829004 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 22:57:14 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 22:57:14 ip-172-x-y-z kubelet[1635]: I0712 22:57:14.829114 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 22:57:14 ip-172-x-y-z kubelet[1635]: I0712 22:57:14.829250 1635 kuberuntime_manager.go:767] Back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 22:57:14 ip-172-x-y-z kubelet[1635]: E0712 22:57:14.829286 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 22:57:16 ip-172-x-y-z kubelet[1635]: E0712 22:57:16.413041 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:57:20 ip-172-x-y-z kubelet[1635]: E0712 22:57:20.878704 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:57:20 ip-172-x-y-z kubelet[1635]: E0712 22:57:20.879298 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:57:21 ip-172-x-y-z kubelet[1635]: E0712 22:57:21.414660 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:57:24 ip-172-x-y-z kubelet[1635]: I0712 22:57:24.226116 1635 kubelet.go:1939] SyncLoop (container unhealthy): "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 22:57:24 ip-172-x-y-z kubelet[1635]: I0712 22:57:24.527330 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 22:57:24 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 22:57:24 ip-172-x-y-z kubelet[1635]: I0712 22:57:24.527436 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 22:57:24 ip-172-x-y-z kubelet[1635]: I0712 22:57:24.527598 1635 kuberuntime_manager.go:767] Back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 22:57:24 ip-172-x-y-z kubelet[1635]: E0712 22:57:24.527644 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
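
kubelet only tells us the container is in CrashLoopBackOff; the reason it exits has to come from the controller-manager's own output. Per the command in the spec above, that output is teed to /var/log/kube-controller-manager.log on this master, so (a sketch; container IDs will differ):

    sudo tail -n 100 /var/log/kube-controller-manager.log
    # or grab the last attempt's output straight from Docker:
    sudo docker ps -a --filter name=k8s_kube-controller-manager --format '{{.ID}} {{.Status}}'
    sudo docker logs --tail 100 <container-id>
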
Jul 12 22:57:26 ip-172-x-y-z kubelet[1635]: E0712 22:57:26.415832 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:57:30 ip-172-x-y-z kubelet[1635]: E0712 22:57:30.887957 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:57:30 ip-172-x-y-z kubelet[1635]: E0712 22:57:30.888419 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:57:31 ip-172-x-y-z kubelet[1635]: E0712 22:57:31.417532 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:57:36 ip-172-x-y-z kubelet[1635]: E0712 22:57:36.419064 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:57:37 ip-172-x-y-z kubelet[1635]: I0712 22:57:37.492446 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 22:57:37 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 22:57:37 ip-172-x-y-z kubelet[1635]: I0712 22:57:37.492563 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 22:57:37 ip-172-x-y-z kubelet[1635]: E0712 22:57:37.495772 1635 kubelet_pods.go:395] hostname for pod:"kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" was longer than 63. Truncated hostname to :"kube-controller-manager-ip-172-x-y-z.us-west-2.compute.inte"
Jul 12 22:57:37 ip-172-x-y-z kubelet[1635]: I0712 22:57:37.723069 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)", event: &pleg.PodLifecycleEvent{ID:"358effbb6a9829e718b6ec105343a9cd", Type:"ContainerStarted", Data:"e3d78e2c880be81b35559a2063e67a3fa471648d65aedc7fc4334c38d92c0875"}
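
The restart at 22:57:37 does start a container (ContainerStarted above). The "hostname ... was longer than 63" line is only a warning: pod hostnames are limited to a 63-byte DNS label, and "kube-controller-manager-" plus the full EC2 node name goes past that, so kubelet truncates it. With the real (unanonymized) node name the length can be checked directly:

    # substitute the actual node name; anything over 63 bytes gets truncated exactly as logged above
    echo -n "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" | wc -c
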
Jul 12 22:57:40 ip-172-x-y-z kubelet[1635]: E0712 22:57:40.905216 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:57:40 ip-172-x-y-z kubelet[1635]: E0712 22:57:40.905807 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:57:41 ip-172-x-y-z kubelet[1635]: E0712 22:57:41.431826 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:57:46 ip-172-x-y-z kubelet[1635]: E0712 22:57:46.432788 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:57:50 ip-172-x-y-z kubelet[1635]: E0712 22:57:50.923727 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:57:50 ip-172-x-y-z kubelet[1635]: E0712 22:57:50.924311 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:57:51 ip-172-x-y-z kubelet[1635]: E0712 22:57:51.434417 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:57:56 ip-172-x-y-z kubelet[1635]: E0712 22:57:56.440586 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:58:00 ip-172-x-y-z kubelet[1635]: E0712 22:58:00.932917 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:58:00 ip-172-x-y-z kubelet[1635]: E0712 22:58:00.933512 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:58:01 ip-172-x-y-z kubelet[1635]: E0712 22:58:01.442275 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:58:06 ip-172-x-y-z kubelet[1635]: E0712 22:58:06.443960 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:58:07 ip-172-x-y-z kubelet[1635]: I0712 22:58:07.938994 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)", event: &pleg.PodLifecycleEvent{ID:"358effbb6a9829e718b6ec105343a9cd", Type:"ContainerDied", Data:"e3d78e2c880be81b35559a2063e67a3fa471648d65aedc7fc4334c38d92c0875"}
Jul 12 22:58:08 ip-172-x-y-z kubelet[1635]: I0712 22:58:08.240415 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 22:58:08 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 22:58:08 ip-172-x-y-z kubelet[1635]: I0712 22:58:08.241855 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 22:58:08 ip-172-x-y-z kubelet[1635]: I0712 22:58:08.242384 1635 kuberuntime_manager.go:767] Back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 22:58:08 ip-172-x-y-z kubelet[1635]: E0712 22:58:08.242800 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
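
Roughly 30 seconds after starting, the container is dead again and the restart backoff doubles to 40s. The earlier "SyncLoop (container unhealthy)" entries appear to come from the liveness probe defined in the spec above (HTTP GET /healthz on 127.0.0.1:10252, 3 failures allowed 10s apart), so the probe can be reproduced by hand on this master while the container is briefly up:

    # what does the controller-manager's healthz actually return?
    curl -sS -m 5 http://127.0.0.1:10252/healthz ; echo
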
Jul 12 22:58:10 ip-172-x-y-z kubelet[1635]: E0712 22:58:10.942150 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:58:10 ip-172-x-y-z kubelet[1635]: E0712 22:58:10.950543 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:58:11 ip-172-x-y-z kubelet[1635]: E0712 22:58:11.459476 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:58:14 ip-172-x-y-z kubelet[1635]: I0712 22:58:14.221780 1635 kubelet.go:1939] SyncLoop (container unhealthy): "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 22:58:14 ip-172-x-y-z kubelet[1635]: I0712 22:58:14.524515 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 22:58:14 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 22:58:14 ip-172-x-y-z kubelet[1635]: I0712 22:58:14.524637 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 22:58:14 ip-172-x-y-z kubelet[1635]: I0712 22:58:14.524797 1635 kuberuntime_manager.go:767] Back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 22:58:14 ip-172-x-y-z kubelet[1635]: E0712 22:58:14.524853 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 22:58:16 ip-172-x-y-z kubelet[1635]: E0712 22:58:16.461230 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:58:20 ip-172-x-y-z kubelet[1635]: E0712 22:58:20.958193 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:58:20 ip-172-x-y-z kubelet[1635]: E0712 22:58:20.958818 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:58:21 ip-172-x-y-z kubelet[1635]: E0712 22:58:21.462909 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:58:26 ip-172-x-y-z kubelet[1635]: E0712 22:58:26.475791 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:58:27 ip-172-x-y-z kubelet[1635]: I0712 22:58:27.492528 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 22:58:27 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 22:58:27 ip-172-x-y-z kubelet[1635]: I0712 22:58:27.492671 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 22:58:27 ip-172-x-y-z kubelet[1635]: I0712 22:58:27.492835 1635 kuberuntime_manager.go:767] Back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 22:58:27 ip-172-x-y-z kubelet[1635]: E0712 22:58:27.492872 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 22:58:30 ip-172-x-y-z kubelet[1635]: E0712 22:58:30.967984 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:58:30 ip-172-x-y-z kubelet[1635]: E0712 22:58:30.968573 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:58:31 ip-172-x-y-z kubelet[1635]: E0712 22:58:31.477440 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:58:36 ip-172-x-y-z kubelet[1635]: E0712 22:58:36.479167 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:58:38 ip-172-x-y-z kubelet[1635]: I0712 22:58:38.196754 1635 kubelet.go:1292] Image garbage collection succeeded
Jul 12 22:58:40 ip-172-x-y-z kubelet[1635]: W0712 22:58:40.338929 1635 container_manager_linux.go:791] CPUAccounting not enabled for pid: 1386
Jul 12 22:58:40 ip-172-x-y-z kubelet[1635]: W0712 22:58:40.339538 1635 container_manager_linux.go:794] MemoryAccounting not enabled for pid: 1386
Jul 12 22:58:40 ip-172-x-y-z kubelet[1635]: I0712 22:58:40.339905 1635 container_manager_linux.go:427] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 12 22:58:40 ip-172-x-y-z kubelet[1635]: W0712 22:58:40.340337 1635 container_manager_linux.go:791] CPUAccounting not enabled for pid: 1635
Jul 12 22:58:40 ip-172-x-y-z kubelet[1635]: W0712 22:58:40.340703 1635 container_manager_linux.go:794] MemoryAccounting not enabled for pid: 1635
Jul 12 22:58:40 ip-172-x-y-z kubelet[1635]: E0712 22:58:40.976357 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:58:40 ip-172-x-y-z kubelet[1635]: E0712 22:58:40.976954 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
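
The CPUAccounting/MemoryAccounting warnings and the repeating "Failed to get system container stats for /system.slice/{kubelet,docker}.service" errors are about node-level stats collection, not about the controller-manager crash: presumably the kubelet is looking for runtime/kubelet cgroups it was never pointed at, and accounting is disabled on those systemd units. To see the current state (the flag names below are what to look for, not a recommendation):

    systemctl show docker kubelet -p CPUAccounting -p MemoryAccounting
    # which cgroup-related flags, if any, kubelet was started with
    ps -o args= -C kubelet | tr ' ' '\n' | grep -E -- '--(runtime|kubelet)-cgroups'
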
Jul 12 22:58:41 ip-172-x-y-z kubelet[1635]: I0712 22:58:41.492464 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 22:58:41 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 22:58:41 ip-172-x-y-z kubelet[1635]: I0712 22:58:41.505122 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 22:58:41 ip-172-x-y-z kubelet[1635]: I0712 22:58:41.505312 1635 kuberuntime_manager.go:767] Back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 22:58:41 ip-172-x-y-z kubelet[1635]: E0712 22:58:41.505361 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 22:58:41 ip-172-x-y-z kubelet[1635]: E0712 22:58:41.506425 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:58:46 ip-172-x-y-z kubelet[1635]: E0712 22:58:46.507720 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:58:50 ip-172-x-y-z kubelet[1635]: E0712 22:58:50.985224 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:58:50 ip-172-x-y-z kubelet[1635]: E0712 22:58:50.985814 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:58:51 ip-172-x-y-z kubelet[1635]: E0712 22:58:51.509606 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:58:56 ip-172-x-y-z kubelet[1635]: I0712 22:58:56.494648 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 22:58:56 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 22:58:56 ip-172-x-y-z kubelet[1635]: I0712 22:58:56.496135 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 22:58:56 ip-172-x-y-z kubelet[1635]: E0712 22:58:56.497957 1635 kubelet_pods.go:395] hostname for pod:"kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" was longer than 63. Truncated hostname to :"kube-controller-manager-ip-172-x-y-z.us-west-2.compute.inte"
Jul 12 22:58:56 ip-172-x-y-z kubelet[1635]: E0712 22:58:56.534309 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:58:57 ip-172-x-y-z kubelet[1635]: I0712 22:58:57.172030 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)", event: &pleg.PodLifecycleEvent{ID:"358effbb6a9829e718b6ec105343a9cd", Type:"ContainerStarted", Data:"cc8946ea34292c60a26ce499b9c9c45670f16acacf83af66ba702ab7aaab3a3d"}
Jul 12 22:59:00 ip-172-x-y-z kubelet[1635]: E0712 22:59:00.994587 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:59:00 ip-172-x-y-z kubelet[1635]: E0712 22:59:00.995222 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:59:01 ip-172-x-y-z kubelet[1635]: E0712 22:59:01.535453 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:59:06 ip-172-x-y-z kubelet[1635]: E0712 22:59:06.536571 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:59:11 ip-172-x-y-z kubelet[1635]: E0712 22:59:11.003172 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:59:11 ip-172-x-y-z kubelet[1635]: E0712 22:59:11.003802 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:59:11 ip-172-x-y-z kubelet[1635]: E0712 22:59:11.552064 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:59:16 ip-172-x-y-z kubelet[1635]: E0712 22:59:16.553613 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:59:21 ip-172-x-y-z kubelet[1635]: E0712 22:59:21.012778 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:59:21 ip-172-x-y-z kubelet[1635]: E0712 22:59:21.013467 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:59:21 ip-172-x-y-z kubelet[1635]: E0712 22:59:21.555276 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:59:26 ip-172-x-y-z kubelet[1635]: E0712 22:59:26.572059 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:59:31 ip-172-x-y-z kubelet[1635]: E0712 22:59:31.037726 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:59:31 ip-172-x-y-z kubelet[1635]: E0712 22:59:31.038294 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:59:31 ip-172-x-y-z kubelet[1635]: E0712 22:59:31.573493 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:59:36 ip-172-x-y-z kubelet[1635]: E0712 22:59:36.574882 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:59:41 ip-172-x-y-z kubelet[1635]: E0712 22:59:41.046417 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:59:41 ip-172-x-y-z kubelet[1635]: E0712 22:59:41.046997 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:59:41 ip-172-x-y-z kubelet[1635]: E0712 22:59:41.575776 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:59:46 ip-172-x-y-z kubelet[1635]: E0712 22:59:46.576917 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:59:50 ip-172-x-y-z kubelet[1635]: I0712 22:59:50.532807 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)", event: &pleg.PodLifecycleEvent{ID:"358effbb6a9829e718b6ec105343a9cd", Type:"ContainerDied", Data:"cc8946ea34292c60a26ce499b9c9c45670f16acacf83af66ba702ab7aaab3a3d"}
Jul 12 22:59:50 ip-172-x-y-z kubelet[1635]: I0712 22:59:50.834104 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 22:59:50 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 22:59:50 ip-172-x-y-z kubelet[1635]: I0712 22:59:50.834202 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 22:59:50 ip-172-x-y-z kubelet[1635]: I0712 22:59:50.834333 1635 kuberuntime_manager.go:767] Back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 22:59:50 ip-172-x-y-z kubelet[1635]: E0712 22:59:50.834368 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
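
The restart backoff keeps doubling with each failed attempt: 20s, then 40s, now 1m20s. It tops out after a few minutes (the exact cap isn't visible in this log):

    # sketch of the progression, assuming a cap of roughly five minutes
    b=20; while [ "$b" -le 300 ]; do echo "${b}s"; b=$((b*2)); done
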
Jul 12 22:59:51 ip-172-x-y-z kubelet[1635]: E0712 22:59:51.055466 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 22:59:51 ip-172-x-y-z kubelet[1635]: E0712 22:59:51.056037 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 22:59:51 ip-172-x-y-z kubelet[1635]: E0712 22:59:51.577728 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 22:59:54 ip-172-x-y-z kubelet[1635]: I0712 22:59:54.221467 1635 kubelet.go:1939] SyncLoop (container unhealthy): "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 22:59:54 ip-172-x-y-z kubelet[1635]: I0712 22:59:54.522706 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 22:59:54 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 22:59:54 ip-172-x-y-z kubelet[1635]: I0712 22:59:54.522802 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 22:59:54 ip-172-x-y-z kubelet[1635]: I0712 22:59:54.522966 1635 kuberuntime_manager.go:767] Back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 22:59:54 ip-172-x-y-z kubelet[1635]: E0712 22:59:54.523023 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 22:59:56 ip-172-x-y-z kubelet[1635]: E0712 22:59:56.579194 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:00:01 ip-172-x-y-z kubelet[1635]: E0712 23:00:01.079468 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:00:01 ip-172-x-y-z kubelet[1635]: E0712 23:00:01.080076 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:00:01 ip-172-x-y-z kubelet[1635]: E0712 23:00:01.582283 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:00:06 ip-172-x-y-z kubelet[1635]: E0712 23:00:06.584035 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:00:09 ip-172-x-y-z kubelet[1635]: I0712 23:00:09.492582 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:00:09 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:00:09 ip-172-x-y-z kubelet[1635]: I0712 23:00:09.492687 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:00:09 ip-172-x-y-z kubelet[1635]: I0712 23:00:09.492854 1635 kuberuntime_manager.go:767] Back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:00:09 ip-172-x-y-z kubelet[1635]: E0712 23:00:09.492912 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:00:11 ip-172-x-y-z kubelet[1635]: E0712 23:00:11.099131 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:00:11 ip-172-x-y-z kubelet[1635]: E0712 23:00:11.099742 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:00:11 ip-172-x-y-z kubelet[1635]: E0712 23:00:11.585253 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:00:16 ip-172-x-y-z kubelet[1635]: E0712 23:00:16.586440 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:00:21 ip-172-x-y-z kubelet[1635]: E0712 23:00:21.111427 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:00:21 ip-172-x-y-z kubelet[1635]: E0712 23:00:21.112033 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:00:21 ip-172-x-y-z kubelet[1635]: I0712 23:00:21.492548 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:00:21 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:00:21 ip-172-x-y-z kubelet[1635]: I0712 23:00:21.492662 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:00:21 ip-172-x-y-z kubelet[1635]: I0712 23:00:21.492832 1635 kuberuntime_manager.go:767] Back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:00:21 ip-172-x-y-z kubelet[1635]: E0712 23:00:21.492888 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:00:21 ip-172-x-y-z kubelet[1635]: E0712 23:00:21.587565 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:00:26 ip-172-x-y-z kubelet[1635]: E0712 23:00:26.588767 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:00:31 ip-172-x-y-z kubelet[1635]: E0712 23:00:31.134888 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:00:31 ip-172-x-y-z kubelet[1635]: E0712 23:00:31.135475 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:00:31 ip-172-x-y-z kubelet[1635]: E0712 23:00:31.589924 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:00:33 ip-172-x-y-z kubelet[1635]: I0712 23:00:33.499251 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:00:33 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:00:33 ip-172-x-y-z kubelet[1635]: I0712 23:00:33.499393 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:00:33 ip-172-x-y-z kubelet[1635]: I0712 23:00:33.499558 1635 kuberuntime_manager.go:767] Back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:00:33 ip-172-x-y-z kubelet[1635]: E0712 23:00:33.499594 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:00:36 ip-172-x-y-z kubelet[1635]: E0712 23:00:36.591130 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:00:41 ip-172-x-y-z kubelet[1635]: E0712 23:00:41.144039 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:00:41 ip-172-x-y-z kubelet[1635]: E0712 23:00:41.144650 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:00:41 ip-172-x-y-z kubelet[1635]: E0712 23:00:41.592257 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:00:46 ip-172-x-y-z kubelet[1635]: I0712 23:00:46.494910 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:00:46 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:00:46 ip-172-x-y-z kubelet[1635]: I0712 23:00:46.495002 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:00:46 ip-172-x-y-z kubelet[1635]: I0712 23:00:46.495121 1635 kuberuntime_manager.go:767] Back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:00:46 ip-172-x-y-z kubelet[1635]: E0712 23:00:46.495175 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:00:46 ip-172-x-y-z kubelet[1635]: E0712 23:00:46.593366 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:00:51 ip-172-x-y-z kubelet[1635]: E0712 23:00:51.153662 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:00:51 ip-172-x-y-z kubelet[1635]: E0712 23:00:51.154261 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:00:51 ip-172-x-y-z kubelet[1635]: E0712 23:00:51.594483 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:00:56 ip-172-x-y-z kubelet[1635]: E0712 23:00:56.595609 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:01:00 ip-172-x-y-z kubelet[1635]: I0712 23:01:00.494626 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:01:00 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:01:00 ip-172-x-y-z kubelet[1635]: I0712 23:01:00.494736 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:01:00 ip-172-x-y-z kubelet[1635]: I0712 23:01:00.494903 1635 kuberuntime_manager.go:767] Back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:01:00 ip-172-x-y-z kubelet[1635]: E0712 23:01:00.494961 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:01:01 ip-172-x-y-z kubelet[1635]: E0712 23:01:01.162488 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:01:01 ip-172-x-y-z kubelet[1635]: E0712 23:01:01.163109 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:01:01 ip-172-x-y-z kubelet[1635]: E0712 23:01:01.596761 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:01:06 ip-172-x-y-z kubelet[1635]: E0712 23:01:06.608725 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:01:11 ip-172-x-y-z kubelet[1635]: E0712 23:01:11.172113 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:01:11 ip-172-x-y-z kubelet[1635]: E0712 23:01:11.172704 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:01:11 ip-172-x-y-z kubelet[1635]: E0712 23:01:11.622090 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:01:13 ip-172-x-y-z kubelet[1635]: I0712 23:01:13.492494 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:01:13 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:01:13 ip-172-x-y-z kubelet[1635]: I0712 23:01:13.492613 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:01:13 ip-172-x-y-z kubelet[1635]: E0712 23:01:13.495917 1635 kubelet_pods.go:395] hostname for pod:"kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" was longer than 63. Truncated hostname to :"kube-controller-manager-ip-172-x-y-z.us-west-2.compute.inte"
Jul 12 23:01:14 ip-172-x-y-z kubelet[1635]: I0712 23:01:14.000774 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)", event: &pleg.PodLifecycleEvent{ID:"358effbb6a9829e718b6ec105343a9cd", Type:"ContainerStarted", Data:"f36a748d5461f331ee197e94860c980b0db8fb41c7dd315fc924725e052b0534"}
Jul 12 23:01:16 ip-172-x-y-z kubelet[1635]: E0712 23:01:16.623253 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:01:21 ip-172-x-y-z kubelet[1635]: E0712 23:01:21.181383 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:01:21 ip-172-x-y-z kubelet[1635]: E0712 23:01:21.181983 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:01:21 ip-172-x-y-z kubelet[1635]: E0712 23:01:21.624410 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:01:26 ip-172-x-y-z kubelet[1635]: E0712 23:01:26.625643 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:01:31 ip-172-x-y-z kubelet[1635]: E0712 23:01:31.190353 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:01:31 ip-172-x-y-z kubelet[1635]: E0712 23:01:31.190963 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:01:31 ip-172-x-y-z kubelet[1635]: E0712 23:01:31.626773 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:01:36 ip-172-x-y-z kubelet[1635]: E0712 23:01:36.627902 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:01:41 ip-172-x-y-z kubelet[1635]: E0712 23:01:41.199438 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:01:41 ip-172-x-y-z kubelet[1635]: E0712 23:01:41.200568 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:01:41 ip-172-x-y-z kubelet[1635]: E0712 23:01:41.629088 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:01:46 ip-172-x-y-z kubelet[1635]: E0712 23:01:46.630218 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:01:51 ip-172-x-y-z kubelet[1635]: E0712 23:01:51.208753 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:01:51 ip-172-x-y-z kubelet[1635]: E0712 23:01:51.209346 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:01:51 ip-172-x-y-z kubelet[1635]: E0712 23:01:51.631292 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:01:53 ip-172-x-y-z kubelet[1635]: I0712 23:01:53.202613 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)", event: &pleg.PodLifecycleEvent{ID:"358effbb6a9829e718b6ec105343a9cd", Type:"ContainerDied", Data:"f36a748d5461f331ee197e94860c980b0db8fb41c7dd315fc924725e052b0534"}
Jul 12 23:01:53 ip-172-x-y-z kubelet[1635]: I0712 23:01:53.503910 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:01:53 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:01:53 ip-172-x-y-z kubelet[1635]: I0712 23:01:53.504004 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:01:53 ip-172-x-y-z kubelet[1635]: I0712 23:01:53.504135 1635 kuberuntime_manager.go:767] Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:01:53 ip-172-x-y-z kubelet[1635]: E0712 23:01:53.504171 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:01:54 ip-172-x-y-z kubelet[1635]: I0712 23:01:54.229962 1635 kubelet.go:1939] SyncLoop (container unhealthy): "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:01:54 ip-172-x-y-z kubelet[1635]: I0712 23:01:54.549167 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:01:54 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:01:54 ip-172-x-y-z kubelet[1635]: I0712 23:01:54.549278 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:01:54 ip-172-x-y-z kubelet[1635]: I0712 23:01:54.549414 1635 kuberuntime_manager.go:767] Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:01:54 ip-172-x-y-z kubelet[1635]: E0712 23:01:54.549450 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:01:56 ip-172-x-y-z kubelet[1635]: E0712 23:01:56.632456 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:02:01 ip-172-x-y-z kubelet[1635]: E0712 23:02:01.218099 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:02:01 ip-172-x-y-z kubelet[1635]: E0712 23:02:01.218724 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:02:01 ip-172-x-y-z kubelet[1635]: E0712 23:02:01.633672 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:02:06 ip-172-x-y-z kubelet[1635]: I0712 23:02:06.495209 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:02:06 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:02:06 ip-172-x-y-z kubelet[1635]: I0712 23:02:06.496674 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:02:06 ip-172-x-y-z kubelet[1635]: I0712 23:02:06.496833 1635 kuberuntime_manager.go:767] Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:02:06 ip-172-x-y-z kubelet[1635]: E0712 23:02:06.496870 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:02:06 ip-172-x-y-z kubelet[1635]: E0712 23:02:06.634777 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:02:11 ip-172-x-y-z kubelet[1635]: E0712 23:02:11.227269 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:02:11 ip-172-x-y-z kubelet[1635]: E0712 23:02:11.227912 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:02:11 ip-172-x-y-z kubelet[1635]: E0712 23:02:11.635904 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:02:16 ip-172-x-y-z kubelet[1635]: E0712 23:02:16.637092 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:02:20 ip-172-x-y-z kubelet[1635]: I0712 23:02:20.494916 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:02:20 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:02:20 ip-172-x-y-z kubelet[1635]: I0712 23:02:20.496371 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:02:20 ip-172-x-y-z kubelet[1635]: I0712 23:02:20.496893 1635 kuberuntime_manager.go:767] Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:02:20 ip-172-x-y-z kubelet[1635]: E0712 23:02:20.497306 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:02:21 ip-172-x-y-z kubelet[1635]: E0712 23:02:21.237220 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:02:21 ip-172-x-y-z kubelet[1635]: E0712 23:02:21.237849 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:02:21 ip-172-x-y-z kubelet[1635]: E0712 23:02:21.638081 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:02:26 ip-172-x-y-z kubelet[1635]: E0712 23:02:26.639201 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:02:31 ip-172-x-y-z kubelet[1635]: E0712 23:02:31.246102 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:02:31 ip-172-x-y-z kubelet[1635]: E0712 23:02:31.254725 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:02:31 ip-172-x-y-z kubelet[1635]: E0712 23:02:31.640399 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:02:34 ip-172-x-y-z kubelet[1635]: I0712 23:02:34.494958 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:02:34 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:02:34 ip-172-x-y-z kubelet[1635]: I0712 23:02:34.496455 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:02:34 ip-172-x-y-z kubelet[1635]: I0712 23:02:34.496606 1635 kuberuntime_manager.go:767] Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:02:34 ip-172-x-y-z kubelet[1635]: E0712 23:02:34.496641 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:02:36 ip-172-x-y-z kubelet[1635]: E0712 23:02:36.641593 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:02:41 ip-172-x-y-z kubelet[1635]: E0712 23:02:41.270970 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:02:41 ip-172-x-y-z kubelet[1635]: E0712 23:02:41.271581 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:02:41 ip-172-x-y-z kubelet[1635]: E0712 23:02:41.642595 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:02:46 ip-172-x-y-z kubelet[1635]: E0712 23:02:46.643621 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:02:48 ip-172-x-y-z kubelet[1635]: I0712 23:02:48.495787 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:02:48 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:02:48 ip-172-x-y-z kubelet[1635]: I0712 23:02:48.495894 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:02:48 ip-172-x-y-z kubelet[1635]: I0712 23:02:48.496030 1635 kuberuntime_manager.go:767] Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:02:48 ip-172-x-y-z kubelet[1635]: E0712 23:02:48.496065 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:02:51 ip-172-x-y-z kubelet[1635]: E0712 23:02:51.280298 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:02:51 ip-172-x-y-z kubelet[1635]: E0712 23:02:51.280902 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:02:51 ip-172-x-y-z kubelet[1635]: E0712 23:02:51.645067 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:02:56 ip-172-x-y-z kubelet[1635]: E0712 23:02:56.651659 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:03:01 ip-172-x-y-z kubelet[1635]: E0712 23:03:01.297136 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:03:01 ip-172-x-y-z kubelet[1635]: E0712 23:03:01.297748 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:03:01 ip-172-x-y-z kubelet[1635]: E0712 23:03:01.654011 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:03:03 ip-172-x-y-z kubelet[1635]: I0712 23:03:03.492561 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:03:03 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:03:03 ip-172-x-y-z kubelet[1635]: I0712 23:03:03.492682 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:03:03 ip-172-x-y-z kubelet[1635]: I0712 23:03:03.492875 1635 kuberuntime_manager.go:767] Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:03:03 ip-172-x-y-z kubelet[1635]: E0712 23:03:03.492913 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:03:06 ip-172-x-y-z kubelet[1635]: E0712 23:03:06.656173 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:03:11 ip-172-x-y-z kubelet[1635]: E0712 23:03:11.306434 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:03:11 ip-172-x-y-z kubelet[1635]: E0712 23:03:11.307142 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:03:11 ip-172-x-y-z kubelet[1635]: E0712 23:03:11.664398 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:03:16 ip-172-x-y-z kubelet[1635]: I0712 23:03:16.495703 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:03:16 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:03:16 ip-172-x-y-z kubelet[1635]: I0712 23:03:16.495807 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:03:16 ip-172-x-y-z kubelet[1635]: I0712 23:03:16.495974 1635 kuberuntime_manager.go:767] Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:03:16 ip-172-x-y-z kubelet[1635]: E0712 23:03:16.498183 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:03:16 ip-172-x-y-z kubelet[1635]: E0712 23:03:16.666186 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:03:21 ip-172-x-y-z kubelet[1635]: E0712 23:03:21.331238 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:03:21 ip-172-x-y-z kubelet[1635]: E0712 23:03:21.332400 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:03:21 ip-172-x-y-z kubelet[1635]: E0712 23:03:21.667381 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:03:26 ip-172-x-y-z kubelet[1635]: E0712 23:03:26.668630 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:03:31 ip-172-x-y-z kubelet[1635]: E0712 23:03:31.342802 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:03:31 ip-172-x-y-z kubelet[1635]: E0712 23:03:31.343411 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:03:31 ip-172-x-y-z kubelet[1635]: I0712 23:03:31.492644 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:03:31 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:03:31 ip-172-x-y-z kubelet[1635]: I0712 23:03:31.494128 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:03:31 ip-172-x-y-z kubelet[1635]: I0712 23:03:31.494666 1635 kuberuntime_manager.go:767] Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:03:31 ip-172-x-y-z kubelet[1635]: E0712 23:03:31.494718 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:03:31 ip-172-x-y-z kubelet[1635]: E0712 23:03:31.669812 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:03:36 ip-172-x-y-z kubelet[1635]: E0712 23:03:36.671009 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:03:40 ip-172-x-y-z kubelet[1635]: W0712 23:03:40.341857 1635 container_manager_linux.go:791] CPUAccounting not enabled for pid: 1386
Jul 12 23:03:40 ip-172-x-y-z kubelet[1635]: W0712 23:03:40.342423 1635 container_manager_linux.go:794] MemoryAccounting not enabled for pid: 1386
Jul 12 23:03:40 ip-172-x-y-z kubelet[1635]: I0712 23:03:40.342782 1635 container_manager_linux.go:427] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 12 23:03:40 ip-172-x-y-z kubelet[1635]: W0712 23:03:40.343206 1635 container_manager_linux.go:791] CPUAccounting not enabled for pid: 1635
Jul 12 23:03:40 ip-172-x-y-z kubelet[1635]: W0712 23:03:40.343584 1635 container_manager_linux.go:794] MemoryAccounting not enabled for pid: 1635
Jul 12 23:03:41 ip-172-x-y-z kubelet[1635]: E0712 23:03:41.363335 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:03:41 ip-172-x-y-z kubelet[1635]: E0712 23:03:41.372024 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:03:41 ip-172-x-y-z kubelet[1635]: E0712 23:03:41.672223 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:03:42 ip-172-x-y-z kubelet[1635]: I0712 23:03:42.494749 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:03:42 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:03:42 ip-172-x-y-z kubelet[1635]: I0712 23:03:42.494848 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:03:42 ip-172-x-y-z kubelet[1635]: I0712 23:03:42.495011 1635 kuberuntime_manager.go:767] Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:03:42 ip-172-x-y-z kubelet[1635]: E0712 23:03:42.495068 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:03:46 ip-172-x-y-z kubelet[1635]: E0712 23:03:46.673559 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:03:51 ip-172-x-y-z kubelet[1635]: E0712 23:03:51.385113 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:03:51 ip-172-x-y-z kubelet[1635]: E0712 23:03:51.385717 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:03:51 ip-172-x-y-z kubelet[1635]: E0712 23:03:51.674884 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:03:54 ip-172-x-y-z kubelet[1635]: I0712 23:03:54.494753 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:03:54 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:03:54 ip-172-x-y-z kubelet[1635]: I0712 23:03:54.494863 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:03:54 ip-172-x-y-z kubelet[1635]: I0712 23:03:54.495045 1635 kuberuntime_manager.go:767] Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:03:54 ip-172-x-y-z kubelet[1635]: E0712 23:03:54.495083 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:03:56 ip-172-x-y-z kubelet[1635]: E0712 23:03:56.676019 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:04:01 ip-172-x-y-z kubelet[1635]: E0712 23:04:01.413808 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:04:01 ip-172-x-y-z kubelet[1635]: E0712 23:04:01.414402 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:04:01 ip-172-x-y-z kubelet[1635]: E0712 23:04:01.677271 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:04:06 ip-172-x-y-z kubelet[1635]: E0712 23:04:06.678478 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:04:07 ip-172-x-y-z kubelet[1635]: I0712 23:04:07.492577 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:04:07 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:04:07 ip-172-x-y-z kubelet[1635]: I0712 23:04:07.492691 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:04:07 ip-172-x-y-z kubelet[1635]: I0712 23:04:07.492830 1635 kuberuntime_manager.go:767] Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:04:07 ip-172-x-y-z kubelet[1635]: E0712 23:04:07.492867 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:04:11 ip-172-x-y-z kubelet[1635]: E0712 23:04:11.431067 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:04:11 ip-172-x-y-z kubelet[1635]: E0712 23:04:11.431701 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:04:11 ip-172-x-y-z kubelet[1635]: E0712 23:04:11.679745 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:04:16 ip-172-x-y-z kubelet[1635]: E0712 23:04:16.680931 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:04:21 ip-172-x-y-z kubelet[1635]: E0712 23:04:21.440893 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:04:21 ip-172-x-y-z kubelet[1635]: E0712 23:04:21.441497 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:04:21 ip-172-x-y-z kubelet[1635]: I0712 23:04:21.492505 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:04:21 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:04:21 ip-172-x-y-z kubelet[1635]: I0712 23:04:21.492617 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:04:21 ip-172-x-y-z kubelet[1635]: I0712 23:04:21.492762 1635 kuberuntime_manager.go:767] Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:04:21 ip-172-x-y-z kubelet[1635]: E0712 23:04:21.492798 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:04:21 ip-172-x-y-z kubelet[1635]: E0712 23:04:21.682721 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:04:26 ip-172-x-y-z kubelet[1635]: E0712 23:04:26.683805 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:04:31 ip-172-x-y-z kubelet[1635]: E0712 23:04:31.466213 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:04:31 ip-172-x-y-z kubelet[1635]: E0712 23:04:31.466846 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:04:31 ip-172-x-y-z kubelet[1635]: E0712 23:04:31.684956 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:04:35 ip-172-x-y-z kubelet[1635]: I0712 23:04:35.493192 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:04:35 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:04:35 ip-172-x-y-z kubelet[1635]: I0712 23:04:35.493285 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:04:35 ip-172-x-y-z kubelet[1635]: E0712 23:04:35.496703 1635 kubelet_pods.go:395] hostname for pod:"kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" was longer than 63. Truncated hostname to :"kube-controller-manager-ip-172-x-y-z.us-west-2.compute.inte"
Jul 12 23:04:36 ip-172-x-y-z kubelet[1635]: I0712 23:04:36.139902 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)", event: &pleg.PodLifecycleEvent{ID:"358effbb6a9829e718b6ec105343a9cd", Type:"ContainerStarted", Data:"3bc1ffe0b1984a161a4a0e57949ea40192262b4cc933921ea5c48fe8f1116b71"}
Jul 12 23:04:36 ip-172-x-y-z kubelet[1635]: E0712 23:04:36.686206 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:04:41 ip-172-x-y-z kubelet[1635]: E0712 23:04:41.475515 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:04:41 ip-172-x-y-z kubelet[1635]: E0712 23:04:41.476106 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:04:41 ip-172-x-y-z kubelet[1635]: E0712 23:04:41.687307 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:04:46 ip-172-x-y-z kubelet[1635]: E0712 23:04:46.688592 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:04:51 ip-172-x-y-z kubelet[1635]: E0712 23:04:51.484852 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:04:51 ip-172-x-y-z kubelet[1635]: E0712 23:04:51.485427 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:04:51 ip-172-x-y-z kubelet[1635]: E0712 23:04:51.689643 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:04:56 ip-172-x-y-z kubelet[1635]: E0712 23:04:56.691388 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:05:01 ip-172-x-y-z kubelet[1635]: E0712 23:05:01.502343 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:05:01 ip-172-x-y-z kubelet[1635]: E0712 23:05:01.502925 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:05:01 ip-172-x-y-z kubelet[1635]: E0712 23:05:01.692678 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:05:06 ip-172-x-y-z kubelet[1635]: E0712 23:05:06.694426 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:05:11 ip-172-x-y-z kubelet[1635]: E0712 23:05:11.523123 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:05:11 ip-172-x-y-z kubelet[1635]: E0712 23:05:11.524291 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:05:11 ip-172-x-y-z kubelet[1635]: E0712 23:05:11.695477 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:05:16 ip-172-x-y-z kubelet[1635]: E0712 23:05:16.697236 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:05:20 ip-172-x-y-z kubelet[1635]: I0712 23:05:20.361443 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)", event: &pleg.PodLifecycleEvent{ID:"358effbb6a9829e718b6ec105343a9cd", Type:"ContainerDied", Data:"3bc1ffe0b1984a161a4a0e57949ea40192262b4cc933921ea5c48fe8f1116b71"}
Jul 12 23:05:20 ip-172-x-y-z kubelet[1635]: I0712 23:05:20.670776 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:05:20 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:05:20 ip-172-x-y-z kubelet[1635]: I0712 23:05:20.670882 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:05:20 ip-172-x-y-z kubelet[1635]: I0712 23:05:20.671016 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:05:20 ip-172-x-y-z kubelet[1635]: E0712 23:05:20.671052 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
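This block repeats for the rest of the capture: the kube-controller-manager static pod keeps dying, the kubelet applies a 5m0s CrashLoopBackOff, and every sync loop ends with the same "Error syncing pod ... skipping" message. The kubelet log only shows that the container exited, not why; the manifest tees the process output to /var/log/kube-controller-manager.log, so the actual crash reason should be in that file or in the logs of the dead container. A minimal way to pull it, assuming shell access to the master (the container ID is the one from the ContainerDied event above):

    tail -n 100 /var/log/kube-controller-manager.log
    docker ps -a | grep kube-controller-manager
    docker logs 3bc1ffe0b198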
Jul 12 23:05:21 ip-172-x-y-z kubelet[1635]: E0712 23:05:21.532122 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:05:21 ip-172-x-y-z kubelet[1635]: E0712 23:05:21.533191 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:05:21 ip-172-x-y-z kubelet[1635]: E0712 23:05:21.698338 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:05:24 ip-172-x-y-z kubelet[1635]: I0712 23:05:24.222392 1635 kubelet.go:1939] SyncLoop (container unhealthy): "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:05:24 ip-172-x-y-z kubelet[1635]: I0712 23:05:24.523544 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:05:24 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:05:24 ip-172-x-y-z kubelet[1635]: I0712 23:05:24.523666 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:05:24 ip-172-x-y-z kubelet[1635]: I0712 23:05:24.523816 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:05:24 ip-172-x-y-z kubelet[1635]: E0712 23:05:24.523851 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:05:26 ip-172-x-y-z kubelet[1635]: E0712 23:05:26.700196 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:05:31 ip-172-x-y-z kubelet[1635]: E0712 23:05:31.548259 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:05:31 ip-172-x-y-z kubelet[1635]: E0712 23:05:31.548853 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:05:31 ip-172-x-y-z kubelet[1635]: E0712 23:05:31.708634 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:05:36 ip-172-x-y-z kubelet[1635]: E0712 23:05:36.709694 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:05:39 ip-172-x-y-z kubelet[1635]: I0712 23:05:39.492563 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:05:39 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:05:39 ip-172-x-y-z kubelet[1635]: I0712 23:05:39.492695 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:05:39 ip-172-x-y-z kubelet[1635]: I0712 23:05:39.492863 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:05:39 ip-172-x-y-z kubelet[1635]: E0712 23:05:39.492900 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:05:41 ip-172-x-y-z kubelet[1635]: E0712 23:05:41.594139 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:05:41 ip-172-x-y-z kubelet[1635]: E0712 23:05:41.594731 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:05:41 ip-172-x-y-z kubelet[1635]: E0712 23:05:41.711799 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:05:46 ip-172-x-y-z kubelet[1635]: E0712 23:05:46.713912 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:05:51 ip-172-x-y-z kubelet[1635]: E0712 23:05:51.602901 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:05:51 ip-172-x-y-z kubelet[1635]: E0712 23:05:51.603519 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:05:51 ip-172-x-y-z kubelet[1635]: E0712 23:05:51.715313 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:05:53 ip-172-x-y-z kubelet[1635]: I0712 23:05:53.492591 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:05:53 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:05:53 ip-172-x-y-z kubelet[1635]: I0712 23:05:53.492716 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:05:53 ip-172-x-y-z kubelet[1635]: I0712 23:05:53.492877 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:05:53 ip-172-x-y-z kubelet[1635]: E0712 23:05:53.492933 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:05:56 ip-172-x-y-z kubelet[1635]: E0712 23:05:56.716764 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:06:01 ip-172-x-y-z kubelet[1635]: E0712 23:06:01.613006 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:06:01 ip-172-x-y-z kubelet[1635]: E0712 23:06:01.613616 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:06:01 ip-172-x-y-z kubelet[1635]: E0712 23:06:01.718678 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:06:06 ip-172-x-y-z kubelet[1635]: E0712 23:06:06.720256 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:06:07 ip-172-x-y-z kubelet[1635]: I0712 23:06:07.492615 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:06:07 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:06:07 ip-172-x-y-z kubelet[1635]: I0712 23:06:07.492740 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:06:07 ip-172-x-y-z kubelet[1635]: I0712 23:06:07.492885 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:06:07 ip-172-x-y-z kubelet[1635]: E0712 23:06:07.492923 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:06:11 ip-172-x-y-z kubelet[1635]: E0712 23:06:11.621852 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:06:11 ip-172-x-y-z kubelet[1635]: E0712 23:06:11.622435 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:06:11 ip-172-x-y-z kubelet[1635]: E0712 23:06:11.721052 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:06:16 ip-172-x-y-z kubelet[1635]: E0712 23:06:16.722686 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:06:21 ip-172-x-y-z kubelet[1635]: I0712 23:06:21.492524 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:06:21 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:06:21 ip-172-x-y-z kubelet[1635]: I0712 23:06:21.492640 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:06:21 ip-172-x-y-z kubelet[1635]: I0712 23:06:21.492806 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:06:21 ip-172-x-y-z kubelet[1635]: E0712 23:06:21.492863 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:06:21 ip-172-x-y-z kubelet[1635]: E0712 23:06:21.631044 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:06:21 ip-172-x-y-z kubelet[1635]: E0712 23:06:21.631563 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:06:21 ip-172-x-y-z kubelet[1635]: E0712 23:06:21.723559 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:06:26 ip-172-x-y-z kubelet[1635]: E0712 23:06:26.731514 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:06:31 ip-172-x-y-z kubelet[1635]: E0712 23:06:31.640714 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:06:31 ip-172-x-y-z kubelet[1635]: E0712 23:06:31.641367 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:06:31 ip-172-x-y-z kubelet[1635]: E0712 23:06:31.732410 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:06:32 ip-172-x-y-z kubelet[1635]: I0712 23:06:32.494615 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:06:32 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:06:32 ip-172-x-y-z kubelet[1635]: I0712 23:06:32.496167 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:06:32 ip-172-x-y-z kubelet[1635]: I0712 23:06:32.496338 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:06:32 ip-172-x-y-z kubelet[1635]: E0712 23:06:32.496375 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:06:36 ip-172-x-y-z kubelet[1635]: E0712 23:06:36.733512 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:06:41 ip-172-x-y-z kubelet[1635]: E0712 23:06:41.657585 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:06:41 ip-172-x-y-z kubelet[1635]: E0712 23:06:41.658427 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:06:41 ip-172-x-y-z kubelet[1635]: E0712 23:06:41.734375 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:06:44 ip-172-x-y-z kubelet[1635]: I0712 23:06:44.494673 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:06:44 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:06:44 ip-172-x-y-z kubelet[1635]: I0712 23:06:44.496142 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:06:44 ip-172-x-y-z kubelet[1635]: I0712 23:06:44.496302 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:06:44 ip-172-x-y-z kubelet[1635]: E0712 23:06:44.496338 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:06:46 ip-172-x-y-z kubelet[1635]: E0712 23:06:46.735410 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:06:51 ip-172-x-y-z kubelet[1635]: E0712 23:06:51.670041 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:06:51 ip-172-x-y-z kubelet[1635]: E0712 23:06:51.670621 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:06:51 ip-172-x-y-z kubelet[1635]: E0712 23:06:51.736326 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:06:55 ip-172-x-y-z kubelet[1635]: I0712 23:06:55.492684 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:06:55 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:06:55 ip-172-x-y-z kubelet[1635]: I0712 23:06:55.492759 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:06:55 ip-172-x-y-z kubelet[1635]: I0712 23:06:55.492883 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:06:55 ip-172-x-y-z kubelet[1635]: E0712 23:06:55.492918 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:06:56 ip-172-x-y-z kubelet[1635]: E0712 23:06:56.738170 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:07:01 ip-172-x-y-z kubelet[1635]: E0712 23:07:01.699130 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:07:01 ip-172-x-y-z kubelet[1635]: E0712 23:07:01.699764 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:07:01 ip-172-x-y-z kubelet[1635]: E0712 23:07:01.741571 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:07:06 ip-172-x-y-z kubelet[1635]: E0712 23:07:06.743285 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:07:07 ip-172-x-y-z kubelet[1635]: I0712 23:07:07.492587 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:07:07 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:07:07 ip-172-x-y-z kubelet[1635]: I0712 23:07:07.494063 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:07:07 ip-172-x-y-z kubelet[1635]: I0712 23:07:07.494228 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:07:07 ip-172-x-y-z kubelet[1635]: E0712 23:07:07.494264 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:07:11 ip-172-x-y-z kubelet[1635]: E0712 23:07:11.709057 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:07:11 ip-172-x-y-z kubelet[1635]: E0712 23:07:11.709698 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:07:11 ip-172-x-y-z kubelet[1635]: E0712 23:07:11.744503 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:07:16 ip-172-x-y-z kubelet[1635]: E0712 23:07:16.746121 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:07:20 ip-172-x-y-z kubelet[1635]: I0712 23:07:20.494617 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:07:20 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:07:20 ip-172-x-y-z kubelet[1635]: I0712 23:07:20.496166 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:07:20 ip-172-x-y-z kubelet[1635]: I0712 23:07:20.496330 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:07:20 ip-172-x-y-z kubelet[1635]: E0712 23:07:20.496366 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:07:21 ip-172-x-y-z kubelet[1635]: E0712 23:07:21.728103 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:07:21 ip-172-x-y-z kubelet[1635]: E0712 23:07:21.728671 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:07:21 ip-172-x-y-z kubelet[1635]: E0712 23:07:21.746989 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:07:26 ip-172-x-y-z kubelet[1635]: E0712 23:07:26.756669 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:07:31 ip-172-x-y-z kubelet[1635]: I0712 23:07:31.492469 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:07:31 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:07:31 ip-172-x-y-z kubelet[1635]: I0712 23:07:31.492583 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:07:31 ip-172-x-y-z kubelet[1635]: I0712 23:07:31.492722 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:07:31 ip-172-x-y-z kubelet[1635]: E0712 23:07:31.492756 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:07:31 ip-172-x-y-z kubelet[1635]: E0712 23:07:31.755660 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:07:31 ip-172-x-y-z kubelet[1635]: E0712 23:07:31.756226 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:07:31 ip-172-x-y-z kubelet[1635]: E0712 23:07:31.758983 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:07:36 ip-172-x-y-z kubelet[1635]: E0712 23:07:36.760455 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:07:41 ip-172-x-y-z kubelet[1635]: E0712 23:07:41.764523 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:07:41 ip-172-x-y-z kubelet[1635]: E0712 23:07:41.765150 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:07:41 ip-172-x-y-z kubelet[1635]: E0712 23:07:41.766848 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:07:46 ip-172-x-y-z kubelet[1635]: E0712 23:07:46.768359 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:07:47 ip-172-x-y-z kubelet[1635]: I0712 23:07:47.492530 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:07:47 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:07:47 ip-172-x-y-z kubelet[1635]: I0712 23:07:47.492648 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:07:47 ip-172-x-y-z kubelet[1635]: I0712 23:07:47.492790 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:07:47 ip-172-x-y-z kubelet[1635]: E0712 23:07:47.492825 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:07:51 ip-172-x-y-z kubelet[1635]: E0712 23:07:51.793646 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:07:51 ip-172-x-y-z kubelet[1635]: E0712 23:07:51.799984 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:07:51 ip-172-x-y-z kubelet[1635]: E0712 23:07:51.800421 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:07:56 ip-172-x-y-z kubelet[1635]: E0712 23:07:56.794776 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:08:01 ip-172-x-y-z kubelet[1635]: E0712 23:08:01.796002 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:08:01 ip-172-x-y-z kubelet[1635]: E0712 23:08:01.824295 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:08:01 ip-172-x-y-z kubelet[1635]: E0712 23:08:01.829234 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:08:03 ip-172-x-y-z kubelet[1635]: I0712 23:08:03.492771 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:08:03 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:08:03 ip-172-x-y-z kubelet[1635]: I0712 23:08:03.492911 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:08:03 ip-172-x-y-z kubelet[1635]: I0712 23:08:03.493082 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:08:03 ip-172-x-y-z kubelet[1635]: E0712 23:08:03.493121 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:08:06 ip-172-x-y-z kubelet[1635]: E0712 23:08:06.797056 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:08:11 ip-172-x-y-z kubelet[1635]: E0712 23:08:11.798235 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:08:11 ip-172-x-y-z kubelet[1635]: E0712 23:08:11.837263 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:08:11 ip-172-x-y-z kubelet[1635]: E0712 23:08:11.837780 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:08:15 ip-172-x-y-z kubelet[1635]: I0712 23:08:15.492593 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:08:15 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:08:15 ip-172-x-y-z kubelet[1635]: I0712 23:08:15.494086 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:08:15 ip-172-x-y-z kubelet[1635]: I0712 23:08:15.494249 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:08:15 ip-172-x-y-z kubelet[1635]: E0712 23:08:15.494286 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:08:16 ip-172-x-y-z kubelet[1635]: E0712 23:08:16.799408 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:08:21 ip-172-x-y-z kubelet[1635]: E0712 23:08:21.800527 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:08:21 ip-172-x-y-z kubelet[1635]: E0712 23:08:21.858550 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:08:21 ip-172-x-y-z kubelet[1635]: E0712 23:08:21.859034 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:08:26 ip-172-x-y-z kubelet[1635]: E0712 23:08:26.801624 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:08:28 ip-172-x-y-z kubelet[1635]: I0712 23:08:28.495204 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:08:28 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:08:28 ip-172-x-y-z kubelet[1635]: I0712 23:08:28.496675 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:08:28 ip-172-x-y-z kubelet[1635]: I0712 23:08:28.497194 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:08:28 ip-172-x-y-z kubelet[1635]: E0712 23:08:28.497610 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:08:31 ip-172-x-y-z kubelet[1635]: E0712 23:08:31.802677 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:08:31 ip-172-x-y-z kubelet[1635]: E0712 23:08:31.867793 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:08:31 ip-172-x-y-z kubelet[1635]: E0712 23:08:31.868255 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:08:36 ip-172-x-y-z kubelet[1635]: E0712 23:08:36.803834 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:08:40 ip-172-x-y-z kubelet[1635]: W0712 23:08:40.347599 1635 container_manager_linux.go:791] CPUAccounting not enabled for pid: 1386
Jul 12 23:08:40 ip-172-x-y-z kubelet[1635]: W0712 23:08:40.348180 1635 container_manager_linux.go:794] MemoryAccounting not enabled for pid: 1386
Jul 12 23:08:40 ip-172-x-y-z kubelet[1635]: I0712 23:08:40.348578 1635 container_manager_linux.go:427] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 12 23:08:40 ip-172-x-y-z kubelet[1635]: W0712 23:08:40.349030 1635 container_manager_linux.go:791] CPUAccounting not enabled for pid: 1635
Jul 12 23:08:40 ip-172-x-y-z kubelet[1635]: W0712 23:08:40.349420 1635 container_manager_linux.go:794] MemoryAccounting not enabled for pid: 1635
Jul 12 23:08:41 ip-172-x-y-z kubelet[1635]: I0712 23:08:41.492544 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:08:41 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:08:41 ip-172-x-y-z kubelet[1635]: I0712 23:08:41.492665 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:08:41 ip-172-x-y-z kubelet[1635]: I0712 23:08:41.492827 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:08:41 ip-172-x-y-z kubelet[1635]: E0712 23:08:41.492862 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:08:41 ip-172-x-y-z kubelet[1635]: E0712 23:08:41.804958 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:08:41 ip-172-x-y-z kubelet[1635]: E0712 23:08:41.876106 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:08:41 ip-172-x-y-z kubelet[1635]: E0712 23:08:41.876639 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:08:46 ip-172-x-y-z kubelet[1635]: E0712 23:08:46.805942 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:08:51 ip-172-x-y-z kubelet[1635]: E0712 23:08:51.807105 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:08:51 ip-172-x-y-z kubelet[1635]: E0712 23:08:51.885391 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:08:51 ip-172-x-y-z kubelet[1635]: E0712 23:08:51.885877 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:08:56 ip-172-x-y-z kubelet[1635]: I0712 23:08:56.507127 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:08:56 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:08:56 ip-172-x-y-z kubelet[1635]: I0712 23:08:56.508983 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:08:56 ip-172-x-y-z kubelet[1635]: I0712 23:08:56.509602 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:08:56 ip-172-x-y-z kubelet[1635]: E0712 23:08:56.510054 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:08:56 ip-172-x-y-z kubelet[1635]: E0712 23:08:56.808158 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:09:01 ip-172-x-y-z kubelet[1635]: E0712 23:09:01.809920 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:09:01 ip-172-x-y-z kubelet[1635]: E0712 23:09:01.903297 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:09:01 ip-172-x-y-z kubelet[1635]: E0712 23:09:01.912363 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:09:06 ip-172-x-y-z kubelet[1635]: E0712 23:09:06.811084 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:09:09 ip-172-x-y-z kubelet[1635]: I0712 23:09:09.492501 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:09:09 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:09:09 ip-172-x-y-z kubelet[1635]: I0712 23:09:09.492604 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:09:09 ip-172-x-y-z kubelet[1635]: I0712 23:09:09.492765 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:09:09 ip-172-x-y-z kubelet[1635]: E0712 23:09:09.492821 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:09:11 ip-172-x-y-z kubelet[1635]: E0712 23:09:11.812283 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:09:11 ip-172-x-y-z kubelet[1635]: E0712 23:09:11.929515 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:09:11 ip-172-x-y-z kubelet[1635]: E0712 23:09:11.930120 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:09:16 ip-172-x-y-z kubelet[1635]: E0712 23:09:16.813233 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:09:21 ip-172-x-y-z kubelet[1635]: E0712 23:09:21.814186 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:09:21 ip-172-x-y-z kubelet[1635]: E0712 23:09:21.938569 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:09:21 ip-172-x-y-z kubelet[1635]: E0712 23:09:21.939057 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:09:23 ip-172-x-y-z kubelet[1635]: I0712 23:09:23.492567 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:09:23 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:09:23 ip-172-x-y-z kubelet[1635]: I0712 23:09:23.494040 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:09:23 ip-172-x-y-z kubelet[1635]: I0712 23:09:23.494212 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:09:23 ip-172-x-y-z kubelet[1635]: E0712 23:09:23.494249 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:09:26 ip-172-x-y-z kubelet[1635]: E0712 23:09:26.820344 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:09:31 ip-172-x-y-z kubelet[1635]: E0712 23:09:31.821736 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:09:31 ip-172-x-y-z kubelet[1635]: E0712 23:09:31.947208 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:09:31 ip-172-x-y-z kubelet[1635]: E0712 23:09:31.947724 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:09:36 ip-172-x-y-z kubelet[1635]: I0712 23:09:36.494781 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:09:36 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:09:36 ip-172-x-y-z kubelet[1635]: I0712 23:09:36.494879 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:09:36 ip-172-x-y-z kubelet[1635]: I0712 23:09:36.495020 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:09:36 ip-172-x-y-z kubelet[1635]: E0712 23:09:36.495055 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:09:36 ip-172-x-y-z kubelet[1635]: E0712 23:09:36.823190 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:09:41 ip-172-x-y-z kubelet[1635]: E0712 23:09:41.831143 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:09:41 ip-172-x-y-z kubelet[1635]: E0712 23:09:41.956602 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:09:41 ip-172-x-y-z kubelet[1635]: E0712 23:09:41.957068 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:09:46 ip-172-x-y-z kubelet[1635]: E0712 23:09:46.832932 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:09:49 ip-172-x-y-z kubelet[1635]: I0712 23:09:49.492431 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:09:49 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:09:49 ip-172-x-y-z kubelet[1635]: I0712 23:09:49.492559 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:09:49 ip-172-x-y-z kubelet[1635]: I0712 23:09:49.492724 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:09:49 ip-172-x-y-z kubelet[1635]: E0712 23:09:49.492759 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:09:51 ip-172-x-y-z kubelet[1635]: E0712 23:09:51.834064 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:09:51 ip-172-x-y-z kubelet[1635]: E0712 23:09:51.964742 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:09:51 ip-172-x-y-z kubelet[1635]: E0712 23:09:51.965248 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:09:56 ip-172-x-y-z kubelet[1635]: E0712 23:09:56.835168 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:10:01 ip-172-x-y-z kubelet[1635]: E0712 23:10:01.836357 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:10:01 ip-172-x-y-z kubelet[1635]: E0712 23:10:01.974575 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:10:01 ip-172-x-y-z kubelet[1635]: E0712 23:10:01.975178 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:10:02 ip-172-x-y-z kubelet[1635]: I0712 23:10:02.494802 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:10:02 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:10:02 ip-172-x-y-z kubelet[1635]: I0712 23:10:02.494961 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:10:02 ip-172-x-y-z kubelet[1635]: I0712 23:10:02.495132 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:10:02 ip-172-x-y-z kubelet[1635]: E0712 23:10:02.495168 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:10:06 ip-172-x-y-z kubelet[1635]: E0712 23:10:06.837587 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:10:11 ip-172-x-y-z kubelet[1635]: E0712 23:10:11.838714 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:10:11 ip-172-x-y-z kubelet[1635]: E0712 23:10:11.983574 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:10:11 ip-172-x-y-z kubelet[1635]: E0712 23:10:11.984639 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:10:15 ip-172-x-y-z kubelet[1635]: I0712 23:10:15.492513 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:10:15 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:10:15 ip-172-x-y-z kubelet[1635]: I0712 23:10:15.492649 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:10:15 ip-172-x-y-z kubelet[1635]: I0712 23:10:15.492811 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:10:15 ip-172-x-y-z kubelet[1635]: E0712 23:10:15.492850 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:10:16 ip-172-x-y-z kubelet[1635]: E0712 23:10:16.839948 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:10:21 ip-172-x-y-z kubelet[1635]: E0712 23:10:21.841064 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:10:21 ip-172-x-y-z kubelet[1635]: E0712 23:10:21.993259 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:10:21 ip-172-x-y-z kubelet[1635]: E0712 23:10:21.993829 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:10:26 ip-172-x-y-z kubelet[1635]: E0712 23:10:26.842188 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:10:27 ip-172-x-y-z kubelet[1635]: I0712 23:10:27.492468 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:10:27 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:10:27 ip-172-x-y-z kubelet[1635]: I0712 23:10:27.492577 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:10:27 ip-172-x-y-z kubelet[1635]: E0712 23:10:27.495849 1635 kubelet_pods.go:395] hostname for pod:"kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal" was longer than 63. Truncated hostname to :"kube-controller-manager-ip-172-x-y-z.us-west-2.compute.inte"
Jul 12 23:10:28 ip-172-x-y-z kubelet[1635]: I0712 23:10:28.099060 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)", event: &pleg.PodLifecycleEvent{ID:"358effbb6a9829e718b6ec105343a9cd", Type:"ContainerStarted", Data:"3debbbd367155c38170363ce774b3707cb00190fc3b5f0035291ca2600881b02"}
Jul 12 23:10:31 ip-172-x-y-z kubelet[1635]: E0712 23:10:31.843375 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:10:32 ip-172-x-y-z kubelet[1635]: E0712 23:10:32.023775 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:10:32 ip-172-x-y-z kubelet[1635]: E0712 23:10:32.027539 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:10:36 ip-172-x-y-z kubelet[1635]: E0712 23:10:36.844653 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:10:41 ip-172-x-y-z kubelet[1635]: E0712 23:10:41.845837 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:10:42 ip-172-x-y-z kubelet[1635]: E0712 23:10:42.036117 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:10:42 ip-172-x-y-z kubelet[1635]: E0712 23:10:42.036701 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:10:46 ip-172-x-y-z kubelet[1635]: E0712 23:10:46.847382 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:10:51 ip-172-x-y-z kubelet[1635]: E0712 23:10:51.848507 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:10:52 ip-172-x-y-z kubelet[1635]: E0712 23:10:52.045355 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:10:52 ip-172-x-y-z kubelet[1635]: E0712 23:10:52.045883 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:10:53 ip-172-x-y-z kubelet[1635]: I0712 23:10:53.545594 1635 kubelet.go:1906] SyncLoop (PLEG): "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)", event: &pleg.PodLifecycleEvent{ID:"358effbb6a9829e718b6ec105343a9cd", Type:"ContainerDied", Data:"3debbbd367155c38170363ce774b3707cb00190fc3b5f0035291ca2600881b02"}
Jul 12 23:10:53 ip-172-x-y-z kubelet[1635]: I0712 23:10:53.846950 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:10:53 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:10:53 ip-172-x-y-z kubelet[1635]: I0712 23:10:53.847050 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:10:53 ip-172-x-y-z kubelet[1635]: I0712 23:10:53.847190 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:10:53 ip-172-x-y-z kubelet[1635]: E0712 23:10:53.847225 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:10:54 ip-172-x-y-z kubelet[1635]: I0712 23:10:54.223814 1635 kubelet.go:1939] SyncLoop (container unhealthy): "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:10:54 ip-172-x-y-z kubelet[1635]: I0712 23:10:54.855176 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:10:54 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:10:54 ip-172-x-y-z kubelet[1635]: I0712 23:10:54.855309 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:10:54 ip-172-x-y-z kubelet[1635]: I0712 23:10:54.855470 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:10:54 ip-172-x-y-z kubelet[1635]: E0712 23:10:54.855505 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:10:56 ip-172-x-y-z kubelet[1635]: E0712 23:10:56.849643 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:11:01 ip-172-x-y-z kubelet[1635]: E0712 23:11:01.850807 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:11:02 ip-172-x-y-z kubelet[1635]: E0712 23:11:02.062777 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:11:02 ip-172-x-y-z kubelet[1635]: E0712 23:11:02.071404 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:11:06 ip-172-x-y-z kubelet[1635]: I0712 23:11:06.494792 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:11:06 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:11:06 ip-172-x-y-z kubelet[1635]: I0712 23:11:06.494904 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:11:06 ip-172-x-y-z kubelet[1635]: I0712 23:11:06.495063 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:11:06 ip-172-x-y-z kubelet[1635]: E0712 23:11:06.495119 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:11:06 ip-172-x-y-z kubelet[1635]: E0712 23:11:06.851897 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:11:11 ip-172-x-y-z kubelet[1635]: E0712 23:11:11.853095 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:11:12 ip-172-x-y-z kubelet[1635]: E0712 23:11:12.088833 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:11:12 ip-172-x-y-z kubelet[1635]: E0712 23:11:12.089414 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:11:16 ip-172-x-y-z kubelet[1635]: E0712 23:11:16.854189 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:11:21 ip-172-x-y-z kubelet[1635]: I0712 23:11:21.492550 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:11:21 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:11:21 ip-172-x-y-z kubelet[1635]: I0712 23:11:21.492664 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:11:21 ip-172-x-y-z kubelet[1635]: I0712 23:11:21.492832 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:11:21 ip-172-x-y-z kubelet[1635]: E0712 23:11:21.492888 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:11:21 ip-172-x-y-z kubelet[1635]: E0712 23:11:21.855374 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:11:22 ip-172-x-y-z kubelet[1635]: E0712 23:11:22.101403 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:11:22 ip-172-x-y-z kubelet[1635]: E0712 23:11:22.101996 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:11:26 ip-172-x-y-z kubelet[1635]: E0712 23:11:26.856268 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:11:31 ip-172-x-y-z kubelet[1635]: E0712 23:11:31.857111 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:11:32 ip-172-x-y-z kubelet[1635]: E0712 23:11:32.114647 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:11:32 ip-172-x-y-z kubelet[1635]: E0712 23:11:32.115756 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:11:33 ip-172-x-y-z kubelet[1635]: I0712 23:11:33.492612 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:11:33 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:11:33 ip-172-x-y-z kubelet[1635]: I0712 23:11:33.492728 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:11:33 ip-172-x-y-z kubelet[1635]: I0712 23:11:33.492892 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:11:33 ip-172-x-y-z kubelet[1635]: E0712 23:11:33.492949 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:11:36 ip-172-x-y-z kubelet[1635]: E0712 23:11:36.858611 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:11:41 ip-172-x-y-z kubelet[1635]: E0712 23:11:41.864448 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:11:42 ip-172-x-y-z kubelet[1635]: E0712 23:11:42.134783 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:11:42 ip-172-x-y-z kubelet[1635]: E0712 23:11:42.135390 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:11:45 ip-172-x-y-z kubelet[1635]: I0712 23:11:45.492656 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:11:45 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:11:45 ip-172-x-y-z kubelet[1635]: I0712 23:11:45.492766 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:11:45 ip-172-x-y-z kubelet[1635]: I0712 23:11:45.492928 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:11:45 ip-172-x-y-z kubelet[1635]: E0712 23:11:45.492985 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:11:46 ip-172-x-y-z kubelet[1635]: E0712 23:11:46.865860 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:11:51 ip-172-x-y-z kubelet[1635]: E0712 23:11:51.867364 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:11:52 ip-172-x-y-z kubelet[1635]: E0712 23:11:52.144460 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:11:52 ip-172-x-y-z kubelet[1635]: E0712 23:11:52.145047 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:11:56 ip-172-x-y-z kubelet[1635]: E0712 23:11:56.876329 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:11:58 ip-172-x-y-z kubelet[1635]: I0712 23:11:58.495026 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:11:58 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:11:58 ip-172-x-y-z kubelet[1635]: I0712 23:11:58.495141 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:11:58 ip-172-x-y-z kubelet[1635]: I0712 23:11:58.495289 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:11:58 ip-172-x-y-z kubelet[1635]: E0712 23:11:58.495323 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:12:01 ip-172-x-y-z kubelet[1635]: E0712 23:12:01.877121 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:12:02 ip-172-x-y-z kubelet[1635]: E0712 23:12:02.161357 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:12:02 ip-172-x-y-z kubelet[1635]: E0712 23:12:02.161972 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:12:06 ip-172-x-y-z kubelet[1635]: E0712 23:12:06.878877 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:12:11 ip-172-x-y-z kubelet[1635]: E0712 23:12:11.880002 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:12:12 ip-172-x-y-z kubelet[1635]: E0712 23:12:12.170398 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:12:12 ip-172-x-y-z kubelet[1635]: E0712 23:12:12.171460 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:12:13 ip-172-x-y-z kubelet[1635]: I0712 23:12:13.492571 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:12:13 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:12:13 ip-172-x-y-z kubelet[1635]: I0712 23:12:13.492677 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:12:13 ip-172-x-y-z kubelet[1635]: I0712 23:12:13.492906 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:12:13 ip-172-x-y-z kubelet[1635]: E0712 23:12:13.492948 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:12:16 ip-172-x-y-z kubelet[1635]: E0712 23:12:16.881195 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:12:21 ip-172-x-y-z kubelet[1635]: E0712 23:12:21.882376 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:12:22 ip-172-x-y-z kubelet[1635]: E0712 23:12:22.188081 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:12:22 ip-172-x-y-z kubelet[1635]: E0712 23:12:22.188736 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:12:26 ip-172-x-y-z kubelet[1635]: I0712 23:12:26.494781 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:12:26 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:12:26 ip-172-x-y-z kubelet[1635]: I0712 23:12:26.494921 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:12:26 ip-172-x-y-z kubelet[1635]: I0712 23:12:26.495087 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:12:26 ip-172-x-y-z kubelet[1635]: E0712 23:12:26.495123 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:12:26 ip-172-x-y-z kubelet[1635]: E0712 23:12:26.887554 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:12:31 ip-172-x-y-z kubelet[1635]: E0712 23:12:31.888670 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:12:32 ip-172-x-y-z kubelet[1635]: E0712 23:12:32.209237 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:12:32 ip-172-x-y-z kubelet[1635]: E0712 23:12:32.209844 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 12 23:12:36 ip-172-x-y-z kubelet[1635]: E0712 23:12:36.889866 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:12:40 ip-172-x-y-z kubelet[1635]: I0712 23:12:40.496345 1635 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager:v1.10.3 Command:[/bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-controller-manager.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-controller-manager --allocate-node-cidrs=true --attach-detach-reconcile-sync-period=1m0s --cloud-provider=aws --cluster-cidr=100.96.0.0/11 --cluster-name=k8s.example.com --cluster-signing-cert-file=/srv/kubernetes/ca.crt --cluster-signing-key-file=/srv/kubernetes/ca.key --configure-cloud-routes=true --kubeconfig=/var/lib/kube-controller-manager/kubeconfig --leader-elect=true --root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key --use-service-account-credentials=true --v=2 > /tmp/pipe 2>&1] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:etcssl ReadOnly:true MountPath:/etc/ssl SubPath: MountPropagation:<nil>} {Name:etcpkitls ReadOnly:true MountPath:/etc/pki/tls SubPath: MountPropagation:<nil>} {Name:etcpkica-trust ReadOnly:true MountPath:/etc/pki/ca-trust SubPath: MountPropagation:<nil>} {Name:usrsharessl ReadOnly:true MountPath:/usr/share/ssl SubPath: MountPropagation:<nil>} {Name:usrssl ReadOnly:true MountPath:/usr/ssl SubPath: MountPropagation:<nil>} {Name:usrlibssl ReadOnly:true MountPath:/usr/lib/ssl SubPath: MountPropagation:<nil>} {Name:usrlocalopenssl ReadOnly:true MountPath:/usr/local/openssl SubPath: MountPropagation:<nil>} {Name:varssl ReadOnly:true MountPath:/var/ssl SubPath: MountPropagation:<nil>} {Name:etcopenssl ReadOnly:true MountPath:/etc/openssl SubPath: MountPropagation:<nil>} {Name:srvkube ReadOnly:true MountPath:/srv/kubernetes SubPath: MountPropagation:<nil>} {Name:logfile ReadOnly:false MountPath:/var/log/kube-controller-manager.log SubPath: MountPropagation:<nil>} {Name:varlibkcm ReadOnly:true MountPath:/var/lib/kube-controller-manager SubPath: MountPro
Jul 12 23:12:40 ip-172-x-y-z kubelet[1635]: pagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 12 23:12:40 ip-172-x-y-z kubelet[1635]: I0712 23:12:40.496454 1635 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:12:40 ip-172-x-y-z kubelet[1635]: I0712 23:12:40.496596 1635 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)
Jul 12 23:12:40 ip-172-x-y-z kubelet[1635]: E0712 23:12:40.496631 1635 pod_workers.go:186] Error syncing pod 358effbb6a9829e718b6ec105343a9cd ("kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-x-y-z.us-west-2.compute.internal_kube-system(358effbb6a9829e718b6ec105343a9cd)"
Jul 12 23:12:41 ip-172-x-y-z kubelet[1635]: E0712 23:12:41.890984 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Jul 12 23:12:42 ip-172-x-y-z kubelet[1635]: E0712 23:12:42.235144 1635 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Jul 12 23:12:42 ip-172-x-y-z kubelet[1635]: E0712 23:12:42.235770 1635 summary.go:102] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
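The two errors that repeat above feed each other: kube-controller-manager is in CrashLoopBackOff, and since it is the component that allocates node CIDRs (the manifest runs it with --allocate-node-cidrs=true), the node never receives a PodCIDR, so kubenet keeps reporting "network plugin is not ready". A minimal way to confirm this from the master, assuming kubectl can still reach the API server and that the controller manager tees its output to /var/log/kube-controller-manager.log as its command line above shows:

# Has any node been assigned a PodCIDR yet?
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'

# Why does kube-controller-manager keep exiting? Its stdout/stderr is piped to this file.
sudo tail -n 100 /var/log/kube-controller-manager.log

# Or read the exited container's last output straight from Docker.
sudo docker ps -a --filter name=kube-controller-manager
sudo docker logs --tail 50 <container-id>    # <container-id> is a placeholder for an ID from the previous command

The recurring summary.go "Failed to get system container stats for /system.slice/kubelet.service" and ".../docker.service" messages are a separate, usually harmless symptom of the kubelet not finding the cgroups for its own service; they report missing stats only and do not explain the crash loop.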
@ezhiliamplus

Hi,

I am also facing this error in the API server. Do you have any solution to this?
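If the error being hit is the same CrashLoopBackOff but for kube-apiserver, the kubelet log in this gist only records that the container died; the actual reason is in the container's own output. A sketch of where to look on the master, assuming kops-style static pod manifests like the one above, which tee each component's output to a file under /var/log (the file name here is the conventional one and may differ on your cluster):

# The API server's own log, if its manifest uses the same tee-to-logfile pattern as the controller manager
sudo tail -n 100 /var/log/kube-apiserver.log

# Or inspect the exited container directly
sudo docker ps -a --filter name=kube-apiserver
sudo docker logs --tail 50 <container-id>    # placeholder; substitute an ID from the line above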
