@brianpursley
Created May 12, 2020 16:11
make test-integration WHAT=./test/integration/scheduler GOFLAGS="-v" KUBE_TEST_ARGS='-run ^TestNominatedNodeCleanUp$$$$'
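For reference, the Makefile target above passes KUBE_TEST_ARGS through to go test. A roughly equivalent direct invocation, run from the kubernetes repo root with an etcd binary on PATH (the integration framework starts its own etcd when none is running), is sketched below; the flags are an assumption about what the target expands to, not the exact command it executes.

# hypothetical direct invocation -- prefer the make target for real runs,
# since it also handles etcd setup and the rest of the test environment
go test -v -run '^TestNominatedNodeCleanUp$' ./test/integration/scheduler

The captured output of the make invocation follows.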
+++ [0512 12:10:46] Checking etcd is on PATH
/usr/bin/etcd
+++ [0512 12:10:46] Starting etcd instance
/home/bpursley/go/src/k8s.io/kubernetes/third_party/etcd:/home/bpursley/gems/bin:/home/bpursley/.nvm/versions/node/v12.14.0/bin:/usr/lib/jvm/jdk-13.0.1/bin:/home/bpursley/.local/bin:/home/bpursley/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/bpursley/.dotnet/tools:/usr/share/rvm/bin:/home/bpursley/bin:/usr/local/go/bin:/home/bpursley/go/bin:/home/bpursley/.krew/bin:/home/bpursley/kubectl-plugins
etcd --advertise-client-urls http://127.0.0.1:2379 --data-dir /tmp/tmp.Ga0I2sAFta --listen-client-urls http://127.0.0.1:2379 --debug > "/dev/null" 2>/dev/null
Waiting for etcd to come up.
+++ [0512 12:10:47] On try 1, etcd: : {"health":"true"}
{"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"2","raft_term":"2"}}+++ [0512 12:10:47] Running integration test cases
+++ [0512 12:10:52] Running tests without code coverage
I0512 12:11:07.061310 921399 etcd.go:81] etcd already running at http://127.0.0.1:2379
=== RUN TestNominatedNodeCleanUp
W0512 12:11:07.062545 921399 services.go:37] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0512 12:11:07.062570 921399 services.go:51] Setting service IP to "10.0.0.1" (read-write).
I0512 12:11:07.062588 921399 master.go:315] Node port range unspecified. Defaulting to 30000-32767.
I0512 12:11:07.062599 921399 master.go:271] Using reconciler:
I0512 12:11:07.062849 921399 config.go:628] Not requested to run hook priority-and-fairness-config-consumer
I0512 12:11:07.064287 921399 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.064615 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.064784 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.065731 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.065760 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.066341 921399 etcd3.go:271] Start monitoring storage db size metric for endpoint http://127.0.0.1:2379 with polling interval 30s
I0512 12:11:07.066590 921399 client.go:360] parsed scheme: "passthrough"
I0512 12:11:07.066637 921399 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0512 12:11:07.066653 921399 clientconn.go:933] ClientConn switching balancer to "pick_first"
I0512 12:11:07.066857 921399 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00075d080, {CONNECTING <nil>}
I0512 12:11:07.067231 921399 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00075d080, {READY <nil>}
I0512 12:11:07.070171 921399 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"
I0512 12:11:07.070838 921399 store.go:1366] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0512 12:11:07.070999 921399 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.071256 921399 reflector.go:243] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0512 12:11:07.071695 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.071748 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.072960 921399 store.go:1366] Monitoring events count at <storage-prefix>//events
I0512 12:11:07.073026 921399 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.073086 921399 reflector.go:243] Listing and watching *core.Event from storage/cacher.go:/events
I0512 12:11:07.073184 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.073196 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.073221 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.073989 921399 store.go:1366] Monitoring limitranges count at <storage-prefix>//limitranges
I0512 12:11:07.074091 921399 reflector.go:243] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0512 12:11:07.074186 921399 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.074397 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.074425 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.074454 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.075671 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.075808 921399 store.go:1366] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0512 12:11:07.075885 921399 reflector.go:243] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0512 12:11:07.075986 921399 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.076148 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.076180 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.077150 921399 store.go:1366] Monitoring secrets count at <storage-prefix>//secrets
I0512 12:11:07.077150 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.077217 921399 reflector.go:243] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0512 12:11:07.077352 921399 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.077479 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.077502 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.078806 921399 store.go:1366] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0512 12:11:07.078859 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.078938 921399 reflector.go:243] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0512 12:11:07.079068 921399 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.079224 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.079256 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.080182 921399 store.go:1366] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0512 12:11:07.080273 921399 reflector.go:243] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0512 12:11:07.080409 921399 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.080613 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.080650 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.080658 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.081190 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.081339 921399 store.go:1366] Monitoring configmaps count at <storage-prefix>//configmaps
I0512 12:11:07.081430 921399 reflector.go:243] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0512 12:11:07.081538 921399 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.081682 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.081706 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.082295 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.082533 921399 store.go:1366] Monitoring namespaces count at <storage-prefix>//namespaces
I0512 12:11:07.082602 921399 reflector.go:243] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0512 12:11:07.082741 921399 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.082887 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.082922 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.083563 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.083625 921399 store.go:1366] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0512 12:11:07.083701 921399 reflector.go:243] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0512 12:11:07.083947 921399 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.084108 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.084140 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.085547 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.088484 921399 store.go:1366] Monitoring nodes count at <storage-prefix>//minions
I0512 12:11:07.088588 921399 reflector.go:243] Listing and watching *core.Node from storage/cacher.go:/minions
I0512 12:11:07.088737 921399 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.088891 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.088923 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.089537 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.090060 921399 store.go:1366] Monitoring pods count at <storage-prefix>//pods
I0512 12:11:07.090160 921399 reflector.go:243] Listing and watching *core.Pod from storage/cacher.go:/pods
I0512 12:11:07.090279 921399 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.090443 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.090472 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.091717 921399 store.go:1366] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0512 12:11:07.091982 921399 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.092186 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.092222 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.092328 921399 reflector.go:243] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0512 12:11:07.092427 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.093255 921399 store.go:1366] Monitoring services count at <storage-prefix>//services/specs
I0512 12:11:07.093319 921399 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.093494 921399 reflector.go:243] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0512 12:11:07.093527 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.093578 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.093632 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.094353 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.094391 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.094440 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.095089 921399 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.095273 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.095306 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.096116 921399 store.go:1366] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0512 12:11:07.096137 921399 rest.go:113] the default service ipfamily for this cluster is: IPv4
I0512 12:11:07.096203 921399 reflector.go:243] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0512 12:11:07.096851 921399 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.097201 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.097198 921399 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.098172 921399 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.098978 921399 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.099761 921399 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.100548 921399 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.101080 921399 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.101235 921399 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.101482 921399 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.102056 921399 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.102679 921399 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.102930 921399 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.103783 921399 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.104115 921399 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.104698 921399 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.104991 921399 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.105723 921399 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.105973 921399 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.106145 921399 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.106297 921399 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.106519 921399 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.106687 921399 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.106911 921399 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.107759 921399 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.108031 921399 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.108915 921399 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.109800 921399 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.110192 921399 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.110490 921399 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.111220 921399 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.111699 921399 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.112481 921399 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.113327 921399 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.114161 921399 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.114905 921399 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.115167 921399 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.115260 921399 master.go:529] Skipping disabled API group "auditregistration.k8s.io".
I0512 12:11:07.115285 921399 master.go:540] Enabling API group "authentication.k8s.io".
I0512 12:11:07.115302 921399 master.go:540] Enabling API group "authorization.k8s.io".
I0512 12:11:07.115476 921399 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.115619 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.115644 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.116604 921399 store.go:1366] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0512 12:11:07.116695 921399 reflector.go:243] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0512 12:11:07.116945 921399 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.117099 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.117158 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.117741 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.117742 921399 store.go:1366] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0512 12:11:07.117772 921399 reflector.go:243] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0512 12:11:07.118035 921399 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.118190 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.118224 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.118800 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.118845 921399 store.go:1366] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0512 12:11:07.118872 921399 master.go:540] Enabling API group "autoscaling".
I0512 12:11:07.118905 921399 reflector.go:243] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0512 12:11:07.119118 921399 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.120153 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.120278 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.120678 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.124469 921399 store.go:1366] Monitoring jobs.batch count at <storage-prefix>//jobs
I0512 12:11:07.124584 921399 reflector.go:243] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0512 12:11:07.125136 921399 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.125367 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.125397 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.126005 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.126137 921399 store.go:1366] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0512 12:11:07.126161 921399 master.go:540] Enabling API group "batch".
I0512 12:11:07.126242 921399 reflector.go:243] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0512 12:11:07.126373 921399 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.126557 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.126588 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.127162 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.127438 921399 store.go:1366] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0512 12:11:07.127475 921399 master.go:540] Enabling API group "certificates.k8s.io".
I0512 12:11:07.127495 921399 reflector.go:243] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0512 12:11:07.127706 921399 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.127859 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.127892 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.128945 921399 store.go:1366] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0512 12:11:07.129263 921399 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.129449 921399 reflector.go:243] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0512 12:11:07.129540 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.129570 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.130435 921399 store.go:1366] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0512 12:11:07.130437 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.130467 921399 master.go:540] Enabling API group "coordination.k8s.io".
I0512 12:11:07.130493 921399 reflector.go:243] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0512 12:11:07.130683 921399 storage_factory.go:285] storing endpointslices.discovery.k8s.io in discovery.k8s.io/v1beta1, reading as discovery.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.130828 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.130841 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.130875 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.131499 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.131637 921399 store.go:1366] Monitoring endpointslices.discovery.k8s.io count at <storage-prefix>//endpointslices
I0512 12:11:07.131668 921399 master.go:540] Enabling API group "discovery.k8s.io".
I0512 12:11:07.131770 921399 reflector.go:243] Listing and watching *discovery.EndpointSlice from storage/cacher.go:/endpointslices
I0512 12:11:07.131896 921399 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.132031 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.132055 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.132599 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.132861 921399 store.go:1366] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0512 12:11:07.132892 921399 master.go:540] Enabling API group "extensions".
I0512 12:11:07.132905 921399 reflector.go:243] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0512 12:11:07.133141 921399 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.133316 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.133352 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.133805 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.134179 921399 store.go:1366] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0512 12:11:07.134246 921399 reflector.go:243] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0512 12:11:07.134387 921399 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.134569 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.134597 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.135141 921399 store.go:1366] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0512 12:11:07.135146 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.135199 921399 reflector.go:243] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0512 12:11:07.135396 921399 storage_factory.go:285] storing ingressclasses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.135586 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.135616 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.136195 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.136279 921399 store.go:1366] Monitoring ingressclasses.networking.k8s.io count at <storage-prefix>//ingressclasses
I0512 12:11:07.136299 921399 master.go:540] Enabling API group "networking.k8s.io".
I0512 12:11:07.136350 921399 reflector.go:243] Listing and watching *networking.IngressClass from storage/cacher.go:/ingressclasses
I0512 12:11:07.136504 921399 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.136706 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.136735 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.137300 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.137440 921399 store.go:1366] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0512 12:11:07.137459 921399 master.go:540] Enabling API group "node.k8s.io".
I0512 12:11:07.137504 921399 reflector.go:243] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0512 12:11:07.137646 921399 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.137806 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.137834 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.138530 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.138626 921399 store.go:1366] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0512 12:11:07.138692 921399 reflector.go:243] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0512 12:11:07.138884 921399 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.139073 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.139103 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.139820 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.140093 921399 store.go:1366] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0512 12:11:07.140113 921399 master.go:540] Enabling API group "policy".
I0512 12:11:07.140172 921399 reflector.go:243] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0512 12:11:07.140198 921399 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.140366 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.140397 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.141066 921399 store.go:1366] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0512 12:11:07.141097 921399 reflector.go:243] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0512 12:11:07.141131 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.141318 921399 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.141477 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.141505 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.143315 921399 store.go:1366] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0512 12:11:07.143387 921399 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.143560 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.143598 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.143756 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.143868 921399 reflector.go:243] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0512 12:11:07.144909 921399 store.go:1366] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0512 12:11:07.144983 921399 reflector.go:243] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0512 12:11:07.145112 921399 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.145224 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.145315 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.145343 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.146180 921399 store.go:1366] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0512 12:11:07.146244 921399 reflector.go:243] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0512 12:11:07.146293 921399 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.146461 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.146495 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.146539 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.147064 921399 store.go:1366] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0512 12:11:07.147131 921399 reflector.go:243] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0512 12:11:07.147279 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.147277 921399 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.147468 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.147502 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.148047 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.148197 921399 store.go:1366] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0512 12:11:07.148278 921399 reflector.go:243] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0512 12:11:07.148285 921399 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.148431 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.148467 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.149057 921399 store.go:1366] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0512 12:11:07.149127 921399 reflector.go:243] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0512 12:11:07.149147 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.149256 921399 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.149372 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.149398 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.149993 921399 store.go:1366] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0512 12:11:07.150027 921399 master.go:540] Enabling API group "rbac.authorization.k8s.io".
I0512 12:11:07.150057 921399 reflector.go:243] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0512 12:11:07.150104 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.150918 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.152457 921399 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.152632 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.152662 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.153273 921399 store.go:1366] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0512 12:11:07.153355 921399 reflector.go:243] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0512 12:11:07.153483 921399 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.153599 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.153644 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.154340 921399 store.go:1366] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0512 12:11:07.154347 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.154361 921399 master.go:540] Enabling API group "scheduling.k8s.io".
I0512 12:11:07.154396 921399 reflector.go:243] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0512 12:11:07.154540 921399 master.go:529] Skipping disabled API group "settings.k8s.io".
I0512 12:11:07.154775 921399 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.154901 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.154935 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.155349 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.155673 921399 store.go:1366] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0512 12:11:07.155745 921399 reflector.go:243] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0512 12:11:07.155896 921399 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.156062 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.156092 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.156677 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.156799 921399 store.go:1366] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0512 12:11:07.156894 921399 reflector.go:243] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0512 12:11:07.157014 921399 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.157182 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.157215 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.157958 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.158345 921399 store.go:1366] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0512 12:11:07.158405 921399 reflector.go:243] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0512 12:11:07.158610 921399 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.158791 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.158831 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.159321 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.159554 921399 store.go:1366] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0512 12:11:07.159821 921399 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.159975 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.160003 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.160431 921399 reflector.go:243] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0512 12:11:07.161679 921399 store.go:1366] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0512 12:11:07.161979 921399 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.162167 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.162195 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.162452 921399 reflector.go:243] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0512 12:11:07.163060 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.183836 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.190828 921399 store.go:1366] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0512 12:11:07.191081 921399 reflector.go:243] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0512 12:11:07.191273 921399 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.191649 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.191687 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.194794 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.195573 921399 store.go:1366] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0512 12:11:07.195751 921399 reflector.go:243] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0512 12:11:07.195994 921399 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.196286 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.196338 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.197362 921399 store.go:1366] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0512 12:11:07.197393 921399 master.go:540] Enabling API group "storage.k8s.io".
I0512 12:11:07.197421 921399 master.go:529] Skipping disabled API group "flowcontrol.apiserver.k8s.io".
I0512 12:11:07.197814 921399 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.198049 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.198104 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.198517 921399 reflector.go:243] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0512 12:11:07.199379 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.200084 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.201017 921399 store.go:1366] Monitoring deployments.apps count at <storage-prefix>//deployments
I0512 12:11:07.201111 921399 reflector.go:243] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0512 12:11:07.201245 921399 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.201456 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.201492 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.202084 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.202372 921399 store.go:1366] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0512 12:11:07.202484 921399 reflector.go:243] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0512 12:11:07.202640 921399 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.202794 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.202825 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.203467 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.203797 921399 store.go:1366] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0512 12:11:07.203885 921399 reflector.go:243] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0512 12:11:07.204041 921399 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.204258 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.204296 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.204813 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.205051 921399 store.go:1366] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0512 12:11:07.205115 921399 reflector.go:243] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0512 12:11:07.205284 921399 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.205436 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.205474 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.206036 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.206091 921399 store.go:1366] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0512 12:11:07.206114 921399 master.go:540] Enabling API group "apps".
I0512 12:11:07.206144 921399 reflector.go:243] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0512 12:11:07.206290 921399 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.206406 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.206434 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.207044 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.207245 921399 store.go:1366] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0512 12:11:07.207308 921399 reflector.go:243] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0512 12:11:07.207455 921399 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.207584 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.207617 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.208193 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.239043 921399 store.go:1366] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0512 12:11:07.239247 921399 reflector.go:243] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0512 12:11:07.239337 921399 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.239506 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.239532 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.240316 921399 store.go:1366] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0512 12:11:07.240420 921399 reflector.go:243] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0512 12:11:07.240511 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.240576 921399 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.240767 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.240802 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.241539 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.241534 921399 store.go:1366] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0512 12:11:07.241583 921399 reflector.go:243] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0512 12:11:07.241590 921399 master.go:540] Enabling API group "admissionregistration.k8s.io".
I0512 12:11:07.241670 921399 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.241996 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:07.242039 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:07.242657 921399 store.go:1366] Monitoring events count at <storage-prefix>//events
I0512 12:11:07.242662 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.242680 921399 master.go:540] Enabling API group "events.k8s.io".
I0512 12:11:07.242694 921399 reflector.go:243] Listing and watching *core.Event from storage/cacher.go:/events
I0512 12:11:07.242976 921399 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.243323 921399 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.243626 921399 watch_cache.go:523] Replace watchCache (rev: 2)
I0512 12:11:07.243806 921399 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.243984 921399 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.244194 921399 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.244361 921399 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.244650 921399 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.244803 921399 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.244953 921399 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.245124 921399 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.246241 921399 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.246598 921399 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.247633 921399 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.248024 921399 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.249003 921399 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.249363 921399 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.250379 921399 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.250694 921399 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.251571 921399 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.251895 921399 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
W0512 12:11:07.251952 921399 genericapiserver.go:409] Skipping API batch/v2alpha1 because it has no resources.
I0512 12:11:07.252749 921399 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.252938 921399 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.254188 921399 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.255224 921399 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.256126 921399 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.257119 921399 storage_factory.go:285] storing endpointslices.discovery.k8s.io in discovery.k8s.io/v1beta1, reading as discovery.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
W0512 12:11:07.257191 921399 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
I0512 12:11:07.258125 921399 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.258523 921399 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.259823 921399 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.260842 921399 storage_factory.go:285] storing ingressclasses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.261630 921399 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.262093 921399 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.263071 921399 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
W0512 12:11:07.263209 921399 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0512 12:11:07.264995 921399 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.265475 921399 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.266206 921399 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.266986 921399 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.267624 921399 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.268460 921399 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.269350 921399 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.270062 921399 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.270617 921399 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.271435 921399 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.272207 921399 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
W0512 12:11:07.272331 921399 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0512 12:11:07.273119 921399 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.273857 921399 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
W0512 12:11:07.273940 921399 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0512 12:11:07.274734 921399 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.275353 921399 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.276059 921399 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.276746 921399 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.277080 921399 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.277800 921399 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.278363 921399 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.278934 921399 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.279549 921399 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
W0512 12:11:07.279631 921399 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0512 12:11:07.280569 921399 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.281366 921399 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.281861 921399 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.282834 921399 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.283246 921399 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.283580 921399 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.284423 921399 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.284819 921399 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.285151 921399 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.286164 921399 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.286537 921399 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.286898 921399 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
W0512 12:11:07.286976 921399 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W0512 12:11:07.287005 921399 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0512 12:11:07.287778 921399 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.288564 921399 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.289363 921399 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.290137 921399 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.291075 921399 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d6261458-a290-4c4c-a97f-d2ebbae3a400", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0512 12:11:07.295283 921399 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:11:07.295391 921399 healthz.go:186] healthz check poststarthook/bootstrap-controller failed: not finished
I0512 12:11:07.295409 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:07.295432 921399 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:11:07.295443 921399 healthz.go:186] healthz check poststarthook/start-cluster-authentication-info-controller failed: not finished
I0512 12:11:07.295455 921399 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/start-cluster-authentication-info-controller failed: reason withheld
healthz check failed
W0512 12:11:07.295505 921399 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0512 12:11:07.295606 921399 httplog.go:90] verb="GET" URI="/healthz" latency=460.057µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48718":
I0512 12:11:07.295825 921399 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0512 12:11:07.295843 921399 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I0512 12:11:07.296397 921399 reflector.go:207] Starting reflector *v1.ConfigMap (12h0m0s) from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0512 12:11:07.296420 921399 reflector.go:243] Listing and watching *v1.ConfigMap from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0512 12:11:07.297551 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency=2.107188ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:07.297589 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system/configmaps?limit=500&resourceVersion=0" latency=655.676µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48718":
I0512 12:11:07.299801 921399 get.go:251] Starting watch for /api/v1/namespaces/kube-system/configmaps, rv=2 labels= fields= timeout=8m34s
I0512 12:11:07.300934 921399 httplog.go:90] verb="GET" URI="/api/v1/services" latency=1.074806ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:07.308829 921399 httplog.go:90] verb="GET" URI="/api/v1/services" latency=1.032825ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:07.310960 921399 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:11:07.310999 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:07.311019 921399 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:11:07.311048 921399 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:07.311123 921399 httplog.go:90] verb="GET" URI="/healthz" latency=245.475µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:07.312293 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.063361ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48722":
I0512 12:11:07.312825 921399 httplog.go:90] verb="GET" URI="/api/v1/services" latency=970.778µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:07.313175 921399 httplog.go:90] verb="GET" URI="/api/v1/services" latency=974.839µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:07.315468 921399 httplog.go:90] verb="POST" URI="/api/v1/namespaces" latency=2.683087ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48722":
I0512 12:11:07.316788 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-public" latency=862.348µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:07.318686 921399 httplog.go:90] verb="POST" URI="/api/v1/namespaces" latency=1.481474ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:07.320000 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-node-lease" latency=904.373µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:07.321634 921399 httplog.go:90] verb="POST" URI="/api/v1/namespaces" latency=1.207647ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:07.396051 921399 shared_informer.go:270] caches populated
I0512 12:11:07.396075 921399 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I0512 12:11:07.396620 921399 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:11:07.396684 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:07.396706 921399 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:11:07.396726 921399 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:07.396805 921399 httplog.go:90] verb="GET" URI="/healthz" latency=319.63µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48724":
I0512 12:11:07.411989 921399 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:11:07.412059 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:07.412070 921399 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:11:07.412104 921399 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:07.412166 921399 httplog.go:90] verb="GET" URI="/healthz" latency=364.044µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:07.496565 921399 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:11:07.496612 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:07.496622 921399 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:11:07.496636 921399 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:07.496748 921399 httplog.go:90] verb="GET" URI="/healthz" latency=265µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48724":
I0512 12:11:07.511842 921399 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:11:07.511862 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:07.511871 921399 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:11:07.511877 921399 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:07.511968 921399 httplog.go:90] verb="GET" URI="/healthz" latency=258.361µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:07.596718 921399 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:11:07.596746 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:07.596784 921399 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:11:07.596796 921399 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:07.596880 921399 httplog.go:90] verb="GET" URI="/healthz" latency=291.766µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48724":
I0512 12:11:07.611913 921399 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:11:07.611958 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:07.611996 921399 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:11:07.612009 921399 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:07.612121 921399 httplog.go:90] verb="GET" URI="/healthz" latency=272.447µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:07.696970 921399 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:11:07.697034 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:07.697055 921399 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:11:07.697075 921399 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:07.697208 921399 httplog.go:90] verb="GET" URI="/healthz" latency=460.154µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48724":
I0512 12:11:07.711936 921399 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:11:07.711994 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:07.712010 921399 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:11:07.712023 921399 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:07.712112 921399 httplog.go:90] verb="GET" URI="/healthz" latency=222.583µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:07.797006 921399 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:11:07.797071 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:07.797094 921399 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:11:07.797110 921399 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:07.797263 921399 httplog.go:90] verb="GET" URI="/healthz" latency=444.353µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48724":
I0512 12:11:07.812920 921399 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:11:07.812982 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:07.813005 921399 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:11:07.813028 921399 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:07.813155 921399 httplog.go:90] verb="GET" URI="/healthz" latency=455.965µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:07.897293 921399 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:11:07.897372 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:07.897402 921399 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:11:07.897432 921399 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:07.897552 921399 httplog.go:90] verb="GET" URI="/healthz" latency=489.069µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48724":
I0512 12:11:07.912111 921399 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:11:07.912135 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:07.912164 921399 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:11:07.912170 921399 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:07.912249 921399 httplog.go:90] verb="GET" URI="/healthz" latency=219.724µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:07.997410 921399 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:11:07.997470 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:07.997492 921399 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:11:07.997506 921399 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:07.997620 921399 httplog.go:90] verb="GET" URI="/healthz" latency=456.863µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48724":
I0512 12:11:08.012392 921399 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0512 12:11:08.012452 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:08.012473 921399 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:11:08.012487 921399 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:08.012617 921399 httplog.go:90] verb="GET" URI="/healthz" latency=400.911µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.062693 921399 client.go:360] parsed scheme: "endpoint"
I0512 12:11:08.062844 921399 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 <nil> 0 <nil>}]
I0512 12:11:08.097664 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:08.097711 921399 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:11:08.097721 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:08.097792 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.31385ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48724":
I0512 12:11:08.112657 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:08.112676 921399 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:11:08.112685 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:08.112742 921399 httplog.go:90] verb="GET" URI="/healthz" latency=923.969µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.197390 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:08.197520 921399 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:11:08.197544 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:08.197644 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.246064ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48724":
I0512 12:11:08.212618 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:08.212659 921399 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:11:08.212667 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:08.212768 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.023642ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.296783 921399 httplog.go:90] verb="GET" URI="/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical" latency=1.711985ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.296925 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.856559ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:08.297965 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:08.298019 921399 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0512 12:11:08.298044 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:08.298151 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.158827ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48728":
I0512 12:11:08.299976 921399 httplog.go:90] verb="POST" URI="/apis/scheduling.k8s.io/v1/priorityclasses" latency=2.643398ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:08.300243 921399 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0512 12:11:08.303181 921399 httplog.go:90] verb="GET" URI="/apis/scheduling.k8s.io/v1/priorityclasses/system-cluster-critical" latency=2.647569ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:08.303356 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=4.265252ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48728":
I0512 12:11:08.305064 921399 httplog.go:90] verb="POST" URI="/apis/scheduling.k8s.io/v1/priorityclasses" latency=1.377456ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:08.305249 921399 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0512 12:11:08.305270 921399 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I0512 12:11:08.307865 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin" latency=3.203219ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.309036 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/admin" latency=757.839µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.310183 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit" latency=763.227µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.311281 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/edit" latency=734.75µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.314516 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:08.314547 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:08.314600 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view" latency=2.957029ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:08.314610 921399 httplog.go:90] verb="GET" URI="/healthz" latency=3.01244ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.315837 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/view" latency=748.033µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.316990 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery" latency=776.457µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.318100 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin" latency=727.575µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.320785 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.268699ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.320999 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0512 12:11:08.322219 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery" latency=1.010767ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.323866 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.252959ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.324111 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0512 12:11:08.325012 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user" latency=678.206µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.326589 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.175899ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.326783 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0512 12:11:08.327799 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer" latency=786.971µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.329514 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.247073ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.329712 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0512 12:11:08.330764 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/admin" latency=812.385µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.332553 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.339026ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.332774 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/admin
I0512 12:11:08.333742 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/edit" latency=727.446µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.335571 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.30392ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.335809 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/edit
I0512 12:11:08.338072 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/view" latency=2.046959ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.339604 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.15577ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.339797 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/view
I0512 12:11:08.340807 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin" latency=767.406µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.345734 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=4.502328ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.345984 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0512 12:11:08.346925 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit" latency=721.254µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.349186 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.675339ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.349478 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0512 12:11:08.350475 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view" latency=765.712µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.352525 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.568405ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.352857 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0512 12:11:08.355076 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster" latency=1.984668ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.356637 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.147615ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.356859 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0512 12:11:08.357862 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node" latency=727.442µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.359803 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.517913ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.360089 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node
I0512 12:11:08.362107 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector" latency=1.827913ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.363824 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.30212ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.364010 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0512 12:11:08.365052 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin" latency=812.138µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.366762 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.291809ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.366998 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0512 12:11:08.368046 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper" latency=803.642µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.369653 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.186693ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.369879 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0512 12:11:08.370914 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator" latency=796.369µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.372526 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.200921ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.372719 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0512 12:11:08.375141 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator" latency=2.209332ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.376700 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.15925ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.376885 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0512 12:11:08.377887 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager" latency=766.846µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.379744 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.408725ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.379988 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0512 12:11:08.380942 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns" latency=724.036µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.382581 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.216033ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.382798 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0512 12:11:08.383723 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner" latency=704.819µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.385486 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.275321ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.385767 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0512 12:11:08.388068 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient" latency=2.044036ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.389651 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.171712ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.389862 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0512 12:11:08.390771 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" latency=668.292µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.392410 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.159499ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.392596 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0512 12:11:08.393493 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler" latency=677.186µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.395225 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.27876ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.395466 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0512 12:11:08.396479 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:legacy-unknown-approver" latency=778.113µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.397001 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:08.397035 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:08.397098 921399 httplog.go:90] verb="GET" URI="/healthz" latency=798.433µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48720":
I0512 12:11:08.398130 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.234678ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.398315 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:legacy-unknown-approver
I0512 12:11:08.399213 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kubelet-serving-approver" latency=685.256µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.400858 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.20658ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.401074 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kubelet-serving-approver
I0512 12:11:08.403528 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kube-apiserver-client-approver" latency=2.157983ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.405120 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.203954ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.405343 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kube-apiserver-client-approver
I0512 12:11:08.406256 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kube-apiserver-client-kubelet-approver" latency=692.611µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.407914 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.246427ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.408144 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kube-apiserver-client-kubelet-approver
I0512 12:11:08.409069 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier" latency=715.463µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.410729 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.264721ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.410975 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0512 12:11:08.411981 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler" latency=794.08µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.412351 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:08.412373 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:08.412460 921399 httplog.go:90] verb="GET" URI="/healthz" latency=825.51µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:08.413907 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.460752ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.414229 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0512 12:11:08.416388 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller" latency=1.932201ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.418085 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.292711ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.418348 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0512 12:11:08.419352 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller" latency=774.93µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.420982 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.244864ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.421212 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0512 12:11:08.422213 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller" latency=742.552µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.423985 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.341252ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.424217 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0512 12:11:08.426573 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller" latency=2.138869ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.428265 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.252233ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.428493 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0512 12:11:08.429427 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller" latency=710.172µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.431166 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.317705ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.431457 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0512 12:11:08.432381 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller" latency=712.177µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.434083 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.279346ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.434307 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0512 12:11:08.436512 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller" latency=1.975962ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.438217 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.296436ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.438451 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0512 12:11:08.439464 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpointslice-controller" latency=782.592µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.441182 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.294949ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.441436 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:endpointslice-controller
I0512 12:11:08.442425 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller" latency=756.881µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.444249 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.395495ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.444501 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0512 12:11:08.449383 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector" latency=4.626494ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.451192 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.405348ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.451437 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0512 12:11:08.453509 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler" latency=1.855498ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.455159 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.244862ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.455443 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0512 12:11:08.456469 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller" latency=793.945µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.459255 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.375945ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.459509 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0512 12:11:08.460954 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller" latency=1.177425ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.462843 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.462053ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.463089 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0512 12:11:08.464080 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller" latency=777.521µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.465810 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.327386ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.466082 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0512 12:11:08.468291 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder" latency=1.984309ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.470134 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.41644ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.470399 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0512 12:11:08.471368 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector" latency=739.379µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.473074 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.297982ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.473270 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0512 12:11:08.474348 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller" latency=863.092µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.476161 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.383203ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.476401 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0512 12:11:08.477437 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller" latency=795.003µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.479350 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.481116ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.479584 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0512 12:11:08.481801 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller" latency=1.991946ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.485528 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=3.285252ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.485824 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0512 12:11:08.486932 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller" latency=873.435µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.488709 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.347049ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.488923 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0512 12:11:08.489890 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller" latency=719.88µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.491657 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.342806ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.491898 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0512 12:11:08.495884 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller" latency=727.693µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.497023 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:08.497047 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:08.497117 921399 httplog.go:90] verb="GET" URI="/healthz" latency=797.244µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48724":
I0512 12:11:08.512913 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:08.512948 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:08.513044 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.229566ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.517000 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.697322ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.517242 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0512 12:11:08.536750 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller" latency=1.335695ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.557661 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.198237ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.558053 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0512 12:11:08.577397 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller" latency=1.721187ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.598829 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=3.051363ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.599296 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0512 12:11:08.600211 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:08.600259 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:08.600380 921399 httplog.go:90] verb="GET" URI="/healthz" latency=3.647358ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48720":
I0512 12:11:08.613334 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:08.613361 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:08.613501 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.397762ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:08.616439 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller" latency=1.109095ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:08.639213 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=3.544417ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:08.639668 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0512 12:11:08.660548 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller" latency=4.940183ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:08.677511 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.985997ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:08.677847 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0512 12:11:08.697847 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller" latency=2.144212ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:08.698272 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:08.698330 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:08.698503 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.835953ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48724":
I0512 12:11:08.713294 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:08.713328 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:08.713420 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.381564ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.717099 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.79956ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.717376 921399 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0512 12:11:08.736483 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin" latency=1.073894ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.757999 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.459655ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.758389 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0512 12:11:08.776683 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery" latency=1.11255ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.797730 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:08.797791 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:08.797871 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.352135ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48720":
I0512 12:11:08.798454 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.377515ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.798721 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0512 12:11:08.814043 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:08.814100 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:08.814254 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.965581ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.816783 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user" latency=1.363892ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.838530 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.98764ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.839061 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0512 12:11:08.857974 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer" latency=2.102215ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.878587 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.910978ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.878970 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0512 12:11:08.896898 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier" latency=1.330164ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:08.897674 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:08.897717 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:08.897813 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.230846ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48720":
I0512 12:11:08.913805 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:08.913867 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:08.914024 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.959902ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:08.917803 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.259698ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:08.918078 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0512 12:11:08.936603 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager" latency=1.056469ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:08.958755 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.975599ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:08.959274 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0512 12:11:08.976903 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns" latency=1.275936ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:08.997356 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.00815ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:08.997694 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0512 12:11:08.998741 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:08.998767 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:08.998858 921399 httplog.go:90] verb="GET" URI="/healthz" latency=2.406764ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48724":
I0512 12:11:09.013382 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:09.013406 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:09.013558 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.400541ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.016692 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler" latency=1.188871ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.038497 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.771312ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.038792 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0512 12:11:09.057289 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler" latency=1.778793ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.077293 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.988925ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.077532 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0512 12:11:09.096983 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node" latency=1.462252ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.097605 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:09.097652 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:09.097721 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.209368ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48720":
I0512 12:11:09.113313 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:09.113396 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:09.113544 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.461036ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.117206 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.773716ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.117488 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0512 12:11:09.136829 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller" latency=1.195637ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.157672 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.131616ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.158011 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0512 12:11:09.181181 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller" latency=5.472599ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.196975 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.745535ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.197284 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0512 12:11:09.198439 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:09.198539 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:09.198596 921399 httplog.go:90] verb="GET" URI="/healthz" latency=2.223752ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48724":
I0512 12:11:09.212826 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:09.212871 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:09.213030 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.227038ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.216504 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller" latency=1.099078ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.237443 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.884712ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.237763 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0512 12:11:09.256887 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller" latency=1.264489ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.277494 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.854562ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.277816 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0512 12:11:09.296705 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller" latency=1.333555ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.297520 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:09.297539 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:09.297615 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.20505ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48720":
I0512 12:11:09.312950 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:09.313011 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:09.313148 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.227794ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.317460 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.024804ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.317731 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0512 12:11:09.336611 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller" latency=1.178784ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.358475 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.861974ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.358849 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0512 12:11:09.376797 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller" latency=1.44624ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.397469 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.940651ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.397780 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0512 12:11:09.398854 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:09.398894 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:09.398987 921399 httplog.go:90] verb="GET" URI="/healthz" latency=2.372437ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48724":
I0512 12:11:09.412858 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:09.412880 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:09.413056 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.121711ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.416298 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpointslice-controller" latency=977.951µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.438831 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.024185ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.439220 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpointslice-controller
I0512 12:11:09.456977 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller" latency=1.42738ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.478833 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.207034ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.479336 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0512 12:11:09.496583 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector" latency=1.220904ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.497640 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:09.497689 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:09.497771 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.273198ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48720":
I0512 12:11:09.514063 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:09.514114 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:09.514245 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.978587ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.517299 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.89889ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.517611 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0512 12:11:09.537209 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler" latency=1.561844ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.557792 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.308804ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.558240 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0512 12:11:09.576391 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller" latency=1.067311ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.597240 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.915378ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.597558 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0512 12:11:09.597596 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:09.597615 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:09.597696 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.314979ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48724":
I0512 12:11:09.613124 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:09.613148 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:09.613288 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.272901ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.616635 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller" latency=1.216932ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.637885 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.415217ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.638330 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0512 12:11:09.657212 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller" latency=1.452044ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.677467 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.155367ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.677833 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0512 12:11:09.701105 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:09.701132 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:09.701243 921399 httplog.go:90] verb="GET" URI="/healthz" latency=4.725697ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48720":
I0512 12:11:09.701256 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder" latency=5.861736ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.713224 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:09.713299 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:09.713545 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.4909ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.717090 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.755276ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.717460 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0512 12:11:09.736500 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector" latency=1.024887ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.758862 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.978219ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.759265 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0512 12:11:09.776844 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller" latency=1.344019ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.797497 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.048363ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.797763 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0512 12:11:09.799050 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:09.799095 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:09.799181 921399 httplog.go:90] verb="GET" URI="/healthz" latency=2.659797ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48724":
I0512 12:11:09.812700 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:09.812718 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:09.812902 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.138033ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.816454 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller" latency=1.15241ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.837890 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.410489ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.838274 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0512 12:11:09.856523 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller" latency=1.009161ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.877544 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.993329ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.877875 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0512 12:11:09.896881 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller" latency=1.390862ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:09.897437 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:09.897491 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:09.897596 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.01867ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48720":
I0512 12:11:09.913238 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:09.913284 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:09.913469 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.392863ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.917317 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.977527ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.917512 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0512 12:11:09.936665 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller" latency=1.194061ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.958862 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.071575ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.959156 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0512 12:11:09.977073 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller" latency=1.528167ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.997320 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.892581ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:09.997620 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0512 12:11:09.998842 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:09.998864 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:09.999019 921399 httplog.go:90] verb="GET" URI="/healthz" latency=2.533348ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48724":
I0512 12:11:10.013860 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:10.013890 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:10.014017 921399 httplog.go:90] verb="GET" URI="/healthz" latency=2.207745ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.018740 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller" latency=3.482659ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.038812 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.222108ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.039265 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0512 12:11:10.056847 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller" latency=1.292194ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.078914 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.005331ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.079403 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0512 12:11:10.097092 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller" latency=1.4618ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.097661 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:10.097709 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:10.097766 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.219588ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48720":
I0512 12:11:10.113708 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:10.113776 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:10.113945 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.762127ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:10.117513 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.165743ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:10.117744 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0512 12:11:10.136868 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller" latency=1.327322ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:10.157403 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.945072ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:10.157698 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0512 12:11:10.176616 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller" latency=1.179597ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:10.198616 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.977035ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:10.199125 921399 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0512 12:11:10.201786 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:10.201807 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:10.201879 921399 httplog.go:90] verb="GET" URI="/healthz" latency=5.328214ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48724":
I0512 12:11:10.212760 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:10.212779 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:10.212924 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.127106ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.216358 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader" latency=1.119905ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.221868 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=4.909004ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.238057 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=2.463349ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.238413 921399 storage_rbac.go:279] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0512 12:11:10.257032 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer" latency=1.357355ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.258796 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.13778ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.278879 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=3.011117ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.279326 921399 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0512 12:11:10.297701 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider" latency=2.040427ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.298241 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:10.298305 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:10.298428 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.772265ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48720":
I0512 12:11:10.299982 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.205436ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.313258 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:10.313288 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:10.313440 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.406853ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.317292 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=1.883376ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.317579 921399 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0512 12:11:10.336734 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner" latency=1.155027ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.338582 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.1319ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.357407 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=1.892232ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.357735 921399 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0512 12:11:10.377706 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager" latency=1.956589ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.380923 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=2.143622ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.397561 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=2.121235ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.397859 921399 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0512 12:11:10.398880 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:10.398911 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:10.398977 921399 httplog.go:90] verb="GET" URI="/healthz" latency=2.595755ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48720":
I0512 12:11:10.413332 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:10.413354 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:10.413502 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.381261ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:10.416350 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler" latency=1.110506ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:10.417972 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.110668ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:10.437184 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=1.760473ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:10.437525 921399 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0512 12:11:10.456750 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer" latency=1.24096ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:10.458861 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-public" latency=1.353369ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:10.478910 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles" latency=3.064912ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:10.479248 921399 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0512 12:11:10.496821 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader" latency=1.196942ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:10.497447 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:10.497467 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:10.497584 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.03109ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48724":
I0512 12:11:10.498797 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.316965ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:10.513151 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:10.513207 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:10.513348 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.396121ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:10.517767 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=2.371825ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:10.518084 921399 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0512 12:11:10.536628 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager" latency=1.314182ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:10.538324 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.229127ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:10.557234 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=1.774602ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:10.557571 921399 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0512 12:11:10.578097 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler" latency=1.972505ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:10.580458 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.504023ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:10.598721 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=3.004548ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:10.599000 921399 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0512 12:11:10.601800 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:10.601825 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:10.601895 921399 httplog.go:90] verb="GET" URI="/healthz" latency=5.210726ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48724":
I0512 12:11:10.613973 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:10.614047 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:10.614206 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.966318ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.616524 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer" latency=1.100839ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.618394 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.355632ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.637326 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=1.922382ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.637588 921399 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0512 12:11:10.656772 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider" latency=1.235202ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.658327 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.096828ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.678946 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=3.169564ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.679474 921399 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0512 12:11:10.697814 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner" latency=1.957943ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.698250 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:10.698320 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:10.698460 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.751843ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48720":
I0512 12:11:10.699695 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.093769ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.713214 921399 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0512 12:11:10.713262 921399 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0512 12:11:10.713394 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.279389ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.716861 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=1.485643ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.717080 921399 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0512 12:11:10.742445 921399 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer" latency=6.6224ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.744397 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-public" latency=1.294957ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.757463 921399 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings" latency=1.950727ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.757810 921399 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0512 12:11:10.797606 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.009941ms resp=200 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:48724":
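
The repeated "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" blocks above are the apiserver's readiness gate: /healthz keeps failing until the bootstrap RBAC reconciler has finished its GET-404 / POST-201 pass over the default cluster roles and bindings, and it flips to 200 on the line just above. The sketch below shows roughly how such a /healthz poll looks in client-go; it is illustrative only and not taken from the test code, and the package name, function name, and 250ms interval are assumptions.

package example

import (
    "context"
    "time"

    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it reports "ok",
// which is what finally happens above once the rbac/bootstrap-roles post-start
// hook completes. Illustrative sketch; config is assumed to point at the test apiserver.
func waitForHealthz(ctx context.Context, config *rest.Config) error {
    cs, err := kubernetes.NewForConfig(config)
    if err != nil {
        return err
    }
    return wait.PollImmediateUntil(250*time.Millisecond, func() (bool, error) {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            // A failing check (the "healthz check failed" responses above) surfaces as an error here.
            return false, nil
        }
        return string(body) == "ok", nil
    }, ctx.Done())
}
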
W0512 12:11:10.798426 921399 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0512 12:11:10.798498 921399 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0512 12:11:10.798514 921399 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0512 12:11:10.798543 921399 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0512 12:11:10.799062 921399 factory.go:221] Creating scheduler from algorithm provider 'DefaultProvider'
I0512 12:11:10.799095 921399 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0512 12:11:10.799118 921399 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0512 12:11:10.799265 921399 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0512 12:11:10.804020 921399 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0512 12:11:10.807812 921399 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0512 12:11:10.808656 921399 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0512 12:11:10.808724 921399 shared_informer.go:240] Waiting for caches to sync for scheduler
I0512 12:11:10.809024 921399 reflector.go:207] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/util/util.go:404
I0512 12:11:10.809048 921399 reflector.go:243] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/util/util.go:404
I0512 12:11:10.810280 921399 httplog.go:90] verb="GET" URI="/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0" latency=816.416µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.812942 921399 httplog.go:90] verb="GET" URI="/healthz" latency=1.176797ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.814474 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/default" latency=1.085576ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.816776 921399 httplog.go:90] verb="POST" URI="/api/v1/namespaces" latency=1.719419ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.820414 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/default/services/kubernetes" latency=3.13669ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.822049 921399 get.go:251] Starting watch for /api/v1/pods, rv=2 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=5m29s
I0512 12:11:10.825618 921399 httplog.go:90] verb="POST" URI="/api/v1/namespaces/default/services" latency=4.689078ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.827252 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency=1.030264ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.829943 921399 httplog.go:90] verb="POST" URI="/api/v1/namespaces/default/endpoints" latency=2.23003ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.831299 921399 httplog.go:90] verb="GET" URI="/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes" latency=848.547µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.833857 921399 httplog.go:90] verb="POST" URI="/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices" latency=2.145734ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
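
The pod LIST and WATCH a few lines up (the GET on "/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded..." and the "Starting watch for /api/v1/pods ... fields=status.phase!=Failed,status.phase!=Succeeded" entry) are the scheduler's pod informer excluding terminal pods, so only pods that can still run end up in its cache. A roughly equivalent client-go call, purely as a sketch with made-up names, would be:

package example

import (
    "context"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/fields"
    "k8s.io/client-go/kubernetes"
)

// listSchedulablePods uses the same field selector as the GET above so that
// Failed and Succeeded pods are filtered out on the server side.
func listSchedulablePods(ctx context.Context, cs kubernetes.Interface) (*v1.PodList, error) {
    selector := fields.ParseSelectorOrDie(
        "status.phase!=" + string(v1.PodFailed) + ",status.phase!=" + string(v1.PodSucceeded))
    return cs.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{FieldSelector: selector.String()})
}
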
I0512 12:11:10.909106 921399 shared_informer.go:270] caches populated
I0512 12:11:10.909150 921399 shared_informer.go:247] Caches are synced for scheduler
I0512 12:11:10.910016 921399 reflector.go:207] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:135
I0512 12:11:10.910080 921399 reflector.go:243] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:135
I0512 12:11:10.910178 921399 reflector.go:207] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:135
I0512 12:11:10.910231 921399 reflector.go:243] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:135
I0512 12:11:10.910303 921399 reflector.go:207] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:135
I0512 12:11:10.910356 921399 reflector.go:243] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:135
I0512 12:11:10.910527 921399 reflector.go:207] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:135
I0512 12:11:10.910534 921399 reflector.go:207] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:135
I0512 12:11:10.910582 921399 reflector.go:243] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:135
I0512 12:11:10.910589 921399 reflector.go:207] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:135
I0512 12:11:10.910602 921399 reflector.go:243] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:135
I0512 12:11:10.910628 921399 reflector.go:243] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:135
I0512 12:11:10.910634 921399 reflector.go:207] Starting reflector *v1.CSINode (1s) from k8s.io/client-go/informers/factory.go:135
I0512 12:11:10.910667 921399 reflector.go:243] Listing and watching *v1.CSINode from k8s.io/client-go/informers/factory.go:135
I0512 12:11:10.911960 921399 httplog.go:90] verb="GET" URI="/api/v1/nodes?limit=500&resourceVersion=0" latency=1.084198ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:10.912511 921399 httplog.go:90] verb="GET" URI="/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0" latency=855.665µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48732":
I0512 12:11:10.912670 921399 httplog.go:90] verb="GET" URI="/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0" latency=978.863µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48738":
I0512 12:11:10.912697 921399 httplog.go:90] verb="GET" URI="/api/v1/services?limit=500&resourceVersion=0" latency=1.105975ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48734":
I0512 12:11:10.912725 921399 httplog.go:90] verb="GET" URI="/api/v1/persistentvolumes?limit=500&resourceVersion=0" latency=1.129681ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48736":
I0512 12:11:10.912792 921399 httplog.go:90] verb="GET" URI="/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0" latency=1.319226ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48730":
I0512 12:11:10.912923 921399 httplog.go:90] verb="GET" URI="/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0" latency=1.104991ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48740":
I0512 12:11:10.914337 921399 get.go:251] Starting watch for /api/v1/services, rv=119 labels= fields= timeout=5m1s
I0512 12:11:10.916160 921399 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=2 labels= fields= timeout=6m22s
I0512 12:11:10.917846 921399 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=2 labels= fields= timeout=9m16s
I0512 12:11:10.918028 921399 get.go:251] Starting watch for /apis/storage.k8s.io/v1/csinodes, rv=2 labels= fields= timeout=5m32s
I0512 12:11:10.918588 921399 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=2 labels= fields= timeout=6m17s
I0512 12:11:10.918710 921399 get.go:251] Starting watch for /api/v1/nodes, rv=2 labels= fields= timeout=9m56s
I0512 12:11:10.922935 921399 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=2 labels= fields= timeout=9m37s
I0512 12:11:11.010064 921399 shared_informer.go:270] caches populated
I0512 12:11:11.010132 921399 shared_informer.go:270] caches populated
I0512 12:11:11.010147 921399 shared_informer.go:270] caches populated
I0512 12:11:11.010165 921399 shared_informer.go:270] caches populated
I0512 12:11:11.010179 921399 shared_informer.go:270] caches populated
I0512 12:11:11.010190 921399 shared_informer.go:270] caches populated
I0512 12:11:11.010203 921399 shared_informer.go:270] caches populated
I0512 12:11:11.010462 921399 shared_informer.go:270] caches populated
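
The "Waiting for caches to sync for scheduler", "caches populated", and "Caches are synced for scheduler" lines come from the shared informer machinery: each reflector above performs an initial LIST (the "...limit=500&resourceVersion=0" GETs) and opens a WATCH, and scheduling does not start until every local cache reports synced. A bare client-go sketch of that gate, with made-up function and variable names, looks like this:

package example

import (
    "fmt"
    "time"

    "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
)

// startAndSyncInformers starts a shared informer factory and blocks until the pod
// and node caches are populated, mirroring the sync gate logged above.
func startAndSyncInformers(cs kubernetes.Interface, stopCh <-chan struct{}) error {
    // 12h matches the *v1.Pod reflector resync period above; the choice of informers is illustrative.
    factory := informers.NewSharedInformerFactory(cs, 12*time.Hour)
    podsSynced := factory.Core().V1().Pods().Informer().HasSynced
    nodesSynced := factory.Core().V1().Nodes().Informer().HasSynced
    factory.Start(stopCh)
    if !cache.WaitForCacheSync(stopCh, podsSynced, nodesSynced) {
        return fmt.Errorf("timed out waiting for informer caches to sync")
    }
    return nil
}
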
I0512 12:11:11.016952 921399 httplog.go:90] verb="POST" URI="/api/v1/nodes" latency=5.529834ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
I0512 12:11:11.018033 921399 node_tree.go:86] Added node "node1" in group "" to NodeTree
I0512 12:11:11.018073 921399 eventhandlers.go:104] add event for node "node1"
I0512 12:11:11.025570 921399 httplog.go:90] verb="POST" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods" latency=7.743746ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
I0512 12:11:11.026229 921399 eventhandlers.go:173] add event for unscheduled pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-0
I0512 12:11:11.026333 921399 scheduling_queue.go:808] About to try and schedule pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-0
I0512 12:11:11.026346 921399 scheduler.go:578] Attempting to schedule pod: preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-0
I0512 12:11:11.026604 921399 scheduler_binder.go:323] AssumePodVolumes for pod "preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-0", node "node1"
I0512 12:11:11.026622 921399 scheduler_binder.go:333] AssumePodVolumes for pod "preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-0", node "node1": all PVCs bound and nothing to do
I0512 12:11:11.026767 921399 default_binder.go:51] Attempting to bind preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-0 to node1
I0512 12:11:11.029590 921399 httplog.go:90] verb="POST" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods" latency=3.402986ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
I0512 12:11:11.029675 921399 httplog.go:90] verb="POST" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/lpod-0/binding" latency=2.143032ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48756":
I0512 12:11:11.029908 921399 scheduler.go:740] pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-0 is bound successfully on node "node1", 1 nodes evaluated, 1 nodes were found feasible.
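
Each scheduling cycle above ends the same way: AssumePodVolumes finds all PVCs already bound, default_binder attempts the bind, and a POST to ".../pods/<name>/binding" creates the Binding subresource that yields the "bound successfully" line. The client-go equivalent of that final POST, shown only as a sketch with assumed names, is:

package example

import (
    "context"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// bindPodToNode issues the same kind of request as the ".../pods/lpod-0/binding"
// POST above: a Binding object whose target names the chosen node.
func bindPodToNode(ctx context.Context, cs kubernetes.Interface, namespace, pod, node string) error {
    binding := &v1.Binding{
        ObjectMeta: metav1.ObjectMeta{Namespace: namespace, Name: pod},
        Target:     v1.ObjectReference{Kind: "Node", Name: node},
    }
    return cs.CoreV1().Pods(namespace).Bind(ctx, binding, metav1.CreateOptions{})
}
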
I0512 12:11:11.030170 921399 eventhandlers.go:173] add event for unscheduled pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-1
I0512 12:11:11.030208 921399 scheduling_queue.go:808] About to try and schedule pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-1
I0512 12:11:11.030222 921399 scheduler.go:578] Attempting to schedule pod: preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-1
I0512 12:11:11.030291 921399 eventhandlers.go:205] delete event for unscheduled pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-0
I0512 12:11:11.030311 921399 scheduler_binder.go:323] AssumePodVolumes for pod "preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-1", node "node1"
I0512 12:11:11.030328 921399 scheduler_binder.go:333] AssumePodVolumes for pod "preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-1", node "node1": all PVCs bound and nothing to do
I0512 12:11:11.030349 921399 eventhandlers.go:229] add event for scheduled pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-0
I0512 12:11:11.030387 921399 default_binder.go:51] Attempting to bind preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-1 to node1
I0512 12:11:11.031565 921399 httplog.go:90] verb="POST" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods" latency=1.443717ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48756":
I0512 12:11:11.031952 921399 eventhandlers.go:173] add event for unscheduled pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-2
I0512 12:11:11.031987 921399 scheduling_queue.go:808] About to try and schedule pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-2
I0512 12:11:11.031999 921399 scheduler.go:578] Attempting to schedule pod: preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-2
I0512 12:11:11.032108 921399 scheduler_binder.go:323] AssumePodVolumes for pod "preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-2", node "node1"
I0512 12:11:11.032135 921399 scheduler_binder.go:333] AssumePodVolumes for pod "preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-2", node "node1": all PVCs bound and nothing to do
I0512 12:11:11.032200 921399 default_binder.go:51] Attempting to bind preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-2 to node1
I0512 12:11:11.032457 921399 httplog.go:90] verb="POST" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/lpod-1/binding" latency=1.624524ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48758":
I0512 12:11:11.032655 921399 scheduler.go:740] pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-1 is bound successfully on node "node1", 1 nodes evaluated, 1 nodes were found feasible.
I0512 12:11:11.032790 921399 eventhandlers.go:205] delete event for unscheduled pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-1
I0512 12:11:11.032800 921399 eventhandlers.go:229] add event for scheduled pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-1
I0512 12:11:11.035527 921399 httplog.go:90] verb="POST" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods" latency=3.370888ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48756":
I0512 12:11:11.035705 921399 httplog.go:90] verb="POST" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/lpod-2/binding" latency=3.045281ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48760":
I0512 12:11:11.035726 921399 eventhandlers.go:173] add event for unscheduled pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-3
I0512 12:11:11.035758 921399 scheduling_queue.go:808] About to try and schedule pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-3
I0512 12:11:11.035785 921399 scheduler.go:578] Attempting to schedule pod: preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-3
I0512 12:11:11.035885 921399 scheduler_binder.go:323] AssumePodVolumes for pod "preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-3", node "node1"
I0512 12:11:11.035895 921399 scheduler.go:740] pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-2 is bound successfully on node "node1", 1 nodes evaluated, 1 nodes were found feasible.
I0512 12:11:11.035915 921399 scheduler_binder.go:333] AssumePodVolumes for pod "preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-3", node "node1": all PVCs bound and nothing to do
I0512 12:11:11.035992 921399 default_binder.go:51] Attempting to bind preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-3 to node1
I0512 12:11:11.036040 921399 eventhandlers.go:205] delete event for unscheduled pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-2
I0512 12:11:11.036076 921399 eventhandlers.go:229] add event for scheduled pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-2
I0512 12:11:11.036302 921399 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/events" latency=5.885059ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
I0512 12:11:11.036870 921399 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/events" latency=3.861101ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48758":
I0512 12:11:11.038304 921399 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/events" latency=2.064175ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48760":
I0512 12:11:11.038436 921399 httplog.go:90] verb="POST" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/lpod-3/binding" latency=2.14642ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48756":
I0512 12:11:11.038657 921399 scheduler.go:740] pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-3 is bound successfully on node "node1", 1 nodes evaluated, 1 nodes were found feasible.
I0512 12:11:11.038827 921399 eventhandlers.go:205] delete event for unscheduled pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-3
I0512 12:11:11.038836 921399 eventhandlers.go:229] add event for scheduled pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-3
I0512 12:11:11.040231 921399 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/events" latency=1.23887ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48758":
I0512 12:11:11.138790 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/lpod-0" latency=2.073178ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48758":
I0512 12:11:11.242547 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/lpod-1" latency=2.561762ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48758":
I0512 12:11:11.346532 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/lpod-2" latency=2.463768ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48758":
I0512 12:11:11.450437 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/lpod-3" latency=2.582536ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48758":
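The burst above is the setup phase of the test: node1 is registered, four low-priority pods (lpod-0 through lpod-3) are created and bound to it until the node has no spare CPU or memory, and the evenly spaced GETs confirm each one landed on node1 before the preemption candidates are created. For orientation only, a rough client-go sketch of such a "filler" pod is shown here; the namespace, PriorityClass name, image, and request sizes are assumed values for illustration, not the test's actual fixtures.

// Sketch only: a low-priority "filler" pod with explicit CPU/memory requests,
// the kind of pod used to exhaust a node's allocatable resources before the
// preemption candidates arrive. Namespace, priority class, image, and request
// sizes are assumptions, not the integration test's real fixtures.
package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "lpod-0", Namespace: "preemption-demo"},
		Spec: v1.PodSpec{
			PriorityClassName: "low-priority", // assumed PriorityClass
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{
						v1.ResourceCPU:    resource.MustParse("100m"),
						v1.ResourceMemory: resource.MustParse("100Mi"),
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(pod.Namespace).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
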
I0512 12:11:11.452765 921399 httplog.go:90] verb="POST" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods" latency=1.605263ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48758":
I0512 12:11:11.453144 921399 eventhandlers.go:173] add event for unscheduled pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/medium-priority
I0512 12:11:11.453196 921399 scheduling_queue.go:808] About to try and schedule pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/medium-priority
I0512 12:11:11.453218 921399 scheduler.go:578] Attempting to schedule pod: preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/medium-priority
I0512 12:11:11.457068 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/medium-priority" latency=3.513927ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48758":
I0512 12:11:11.457423 921399 generic_scheduler.go:1043] Node node1 is a potential node for preemption.
I0512 12:11:11.457637 921399 scheduler.go:818] Setting nominated node name for preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/medium-priority to "node1"
I0512 12:11:11.462904 921399 httplog.go:90] verb="PATCH" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/medium-priority/status" latency=4.45651ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48758":
I0512 12:11:11.466133 921399 httplog.go:90] verb="DELETE" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/lpod-2" latency=2.564952ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48758":
I0512 12:11:11.468062 921399 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/events" latency=1.413044ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
I0512 12:11:11.471225 921399 httplog.go:90] verb="DELETE" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/lpod-1" latency=4.724478ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48758":
I0512 12:11:11.473244 921399 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/events" latency=1.502368ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
I0512 12:11:11.474057 921399 httplog.go:90] verb="DELETE" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/lpod-0" latency=2.470786ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48758":
I0512 12:11:11.474305 921399 factory.go:457] Unable to schedule preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/medium-priority: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0512 12:11:11.474390 921399 scheduler.go:785] Updating pod condition for preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/medium-priority to (PodScheduled==False, Reason=Unschedulable)
I0512 12:11:11.476765 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/medium-priority" latency=2.131512ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
E0512 12:11:11.477110 921399 factory.go:494] pod: preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/medium-priority is already present in the active queue
I0512 12:11:11.477466 921399 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/events" latency=2.825598ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48758":
I0512 12:11:11.478774 921399 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/events" latency=3.741005ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48762":
I0512 12:11:11.479454 921399 httplog.go:90] verb="PATCH" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/medium-priority/status" latency=4.277386ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48764":
I0512 12:11:11.479907 921399 scheduling_queue.go:808] About to try and schedule pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/medium-priority
I0512 12:11:11.479929 921399 scheduler.go:578] Attempting to schedule pod: preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/medium-priority
I0512 12:11:11.481605 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/medium-priority" latency=1.369869ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48758":
I0512 12:11:11.481877 921399 factory.go:457] Unable to schedule preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/medium-priority: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0512 12:11:11.481935 921399 scheduler.go:785] Updating pod condition for preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/medium-priority to (PodScheduled==False, Reason=Unschedulable)
I0512 12:11:11.483098 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/medium-priority" latency=931.616µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48758":
I0512 12:11:11.483520 921399 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/events" latency=1.252727ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
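The block above is the first preemption: medium-priority does not fit (0/1 nodes available, insufficient CPU and memory), generic_scheduler selects node1 as a preemption candidate, the scheduler PATCHes status.nominatedNodeName to "node1", DELETEs the victims lpod-0 through lpod-2, and the pod is reported Unschedulable while it waits for those victims to terminate. The nominated node is visible on the pod status; a minimal client-go sketch for reading it follows, assuming a default kubeconfig and placeholder namespace/pod names.

// Sketch only: read a pod's status.nominatedNodeName with client-go.
// Kubeconfig location, namespace, and pod name are assumed for illustration.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("preemption-demo").Get(context.TODO(), "medium-priority", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// While the victims are still terminating, the preemptor keeps its
	// nominated node; if a higher-priority preemptor takes it over, the
	// scheduler clears this field again.
	fmt.Printf("nominatedNodeName=%q\n", pod.Status.NominatedNodeName)
}
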
I0512 12:11:11.556629 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/medium-priority" latency=2.340893ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
I0512 12:11:11.565044 921399 httplog.go:90] verb="POST" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods" latency=7.249265ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
I0512 12:11:11.565610 921399 eventhandlers.go:173] add event for unscheduled pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/high-priority
I0512 12:11:11.565673 921399 scheduling_queue.go:808] About to try and schedule pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/high-priority
I0512 12:11:11.565707 921399 scheduler.go:578] Attempting to schedule pod: preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/high-priority
I0512 12:11:11.567236 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/high-priority" latency=1.133734ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
I0512 12:11:11.567535 921399 generic_scheduler.go:1043] Node node1 is a potential node for preemption.
I0512 12:11:11.567734 921399 scheduler.go:818] Setting nominated node name for preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/high-priority to "node1"
I0512 12:11:11.570067 921399 httplog.go:90] verb="PATCH" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/high-priority/status" latency=1.825634ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
I0512 12:11:11.573686 921399 httplog.go:90] verb="DELETE" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/lpod-2" latency=3.01205ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
I0512 12:11:11.575109 921399 httplog.go:90] verb="DELETE" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/lpod-1" latency=1.011647ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
I0512 12:11:11.575289 921399 scheduler.go:818] Setting nominated node name for preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/medium-priority to ""
I0512 12:11:11.575507 921399 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/events" latency=1.226068ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48758":
I0512 12:11:11.576803 921399 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/events" latency=1.156995ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
I0512 12:11:11.577790 921399 httplog.go:90] verb="PATCH" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/medium-priority/status" latency=1.922318ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48758":
I0512 12:11:11.578070 921399 factory.go:457] Unable to schedule preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/high-priority: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0512 12:11:11.578252 921399 scheduler.go:785] Updating pod condition for preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/high-priority to (PodScheduled==False, Reason=Unschedulable)
I0512 12:11:11.580343 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/high-priority" latency=1.824233ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48758":
I0512 12:11:11.580416 921399 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/events" latency=1.786783ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
E0512 12:11:11.580607 921399 factory.go:494] pod: preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/high-priority is already present in the active queue
I0512 12:11:11.581307 921399 httplog.go:90] verb="PATCH" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/high-priority/status" latency=2.363898ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48766":
I0512 12:11:11.581631 921399 scheduling_queue.go:808] About to try and schedule pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/high-priority
I0512 12:11:11.581653 921399 scheduler.go:578] Attempting to schedule pod: preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/high-priority
I0512 12:11:11.582849 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/high-priority" latency=928.962µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
I0512 12:11:11.583092 921399 factory.go:457] Unable to schedule preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/high-priority: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0512 12:11:11.583139 921399 scheduler.go:785] Updating pod condition for preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/high-priority to (PodScheduled==False, Reason=Unschedulable)
I0512 12:11:11.584284 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/high-priority" latency=894.219µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
E0512 12:11:11.584587 921399 factory.go:494] pod: preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/high-priority is already present in the active queue
I0512 12:11:11.584705 921399 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/events" latency=1.18175ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48758":
I0512 12:11:11.585497 921399 httplog.go:90] verb="PATCH" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/high-priority/status" latency=1.700902ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48768":
I0512 12:11:11.585829 921399 scheduling_queue.go:808] About to try and schedule pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/high-priority
I0512 12:11:11.585850 921399 scheduler.go:578] Attempting to schedule pod: preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/high-priority
I0512 12:11:11.586946 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/high-priority" latency=829.53µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48758":
I0512 12:11:11.587241 921399 factory.go:457] Unable to schedule preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/high-priority: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0512 12:11:11.587297 921399 scheduler.go:785] Updating pod condition for preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/high-priority to (PodScheduled==False, Reason=Unschedulable)
I0512 12:11:11.588358 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/high-priority" latency=806.27µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48758":
I0512 12:11:11.588758 921399 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/events" latency=1.140277ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
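This is the behavior the test is named for: when high-priority arrives and preempts the same victims on node1, the scheduler clears medium-priority's nominated node (the earlier "Setting nominated node name for .../medium-priority to ''" line), so the lower-priority pod no longer reserves space it can never use. The GET on medium-priority that follows is consistent with the test checking that the field has been cleared; a sketch of such a wait loop, under the same assumptions as above, is:

// Sketch only: poll until status.nominatedNodeName is cleared, roughly the
// kind of check this test performs. Namespace, pod name, poll interval, and
// timeout are assumed values, not the test's actual parameters.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.Poll(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("preemption-demo").Get(context.TODO(), "medium-priority", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pod.Status.NominatedNodeName == "", nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("nominatedNodeName was cleaned up")
}
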
I0512 12:11:11.675818 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/high-priority" latency=9.557469ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
I0512 12:11:11.780073 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods/medium-priority" latency=2.554875ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
I0512 12:11:11.786384 921399 scheduling_queue.go:808] About to try and schedule pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/high-priority
I0512 12:11:11.786456 921399 scheduler.go:766] Skip schedule deleting pod: preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/high-priority
I0512 12:11:11.789061 921399 eventhandlers.go:205] delete event for unscheduled pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/high-priority
I0512 12:11:11.789455 921399 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/events" latency=2.578634ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48758":
I0512 12:11:11.797936 921399 eventhandlers.go:278] delete event for scheduled pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-0
I0512 12:11:11.800856 921399 eventhandlers.go:278] delete event for scheduled pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-1
I0512 12:11:11.809001 921399 eventhandlers.go:278] delete event for scheduled pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-2
I0512 12:11:11.812051 921399 eventhandlers.go:278] delete event for scheduled pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/lpod-3
I0512 12:11:11.813701 921399 scheduling_queue.go:808] About to try and schedule pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/medium-priority
I0512 12:11:11.813738 921399 scheduler.go:766] Skip schedule deleting pod: preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/medium-priority
I0512 12:11:11.819422 921399 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/events" latency=5.353615ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48758":
I0512 12:11:11.820457 921399 httplog.go:90] verb="DELETE" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods" latency=39.341179ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
I0512 12:11:11.820556 921399 eventhandlers.go:205] delete event for unscheduled pod preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/medium-priority
I0512 12:11:11.914023 921399 reflector.go:368] k8s.io/client-go/informers/factory.go:135: forcing resync
I0512 12:11:11.917066 921399 reflector.go:368] k8s.io/client-go/informers/factory.go:135: forcing resync
I0512 12:11:11.917066 921399 reflector.go:368] k8s.io/client-go/informers/factory.go:135: forcing resync
I0512 12:11:11.917541 921399 reflector.go:368] k8s.io/client-go/informers/factory.go:135: forcing resync
I0512 12:11:11.917860 921399 reflector.go:368] k8s.io/client-go/informers/factory.go:135: forcing resync
I0512 12:11:11.922142 921399 reflector.go:368] k8s.io/client-go/informers/factory.go:135: forcing resync
I0512 12:11:12.823695 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/preemptiona2fd381c-9b90-4d7b-995b-258b50881b54/pods" latency=2.104078ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
I0512 12:11:12.824499 921399 reflector.go:213] Stopping reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:135
I0512 12:11:12.824533 921399 reflector.go:213] Stopping reflector *v1.CSINode (1s) from k8s.io/client-go/informers/factory.go:135
I0512 12:11:12.824560 921399 reflector.go:213] Stopping reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:135
I0512 12:11:12.824602 921399 reflector.go:213] Stopping reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:135
I0512 12:11:12.824636 921399 reflector.go:213] Stopping reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/util/util.go:404
I0512 12:11:12.824648 921399 httplog.go:90] verb="GET" URI="/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=2&timeout=6m17s&timeoutSeconds=377&watch=true" latency=1.9064081s resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48748":
I0512 12:11:12.824654 921399 reflector.go:213] Stopping reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:135
I0512 12:11:12.824694 921399 httplog.go:90] verb="GET" URI="/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=2&timeout=5m32s&timeoutSeconds=332&watch=true" latency=1.907134896s resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48746":
I0512 12:11:12.824650 921399 reflector.go:213] Stopping reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:135
I0512 12:11:12.824647 921399 reflector.go:213] Stopping reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:135
I0512 12:11:12.824829 921399 httplog.go:90] verb="GET" URI="/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=2&timeoutSeconds=329&watch=true" latency=2.003015584s resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48720":
I0512 12:11:12.824744 921399 httplog.go:90] verb="GET" URI="/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=2&timeout=9m56s&timeoutSeconds=596&watch=true" latency=1.906521045s resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48750":
I0512 12:11:12.824727 921399 httplog.go:90] verb="GET" URI="/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=2&timeout=6m22s&timeoutSeconds=382&watch=true" latency=1.908819165s resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48724":
I0512 12:11:12.824917 921399 httplog.go:90] verb="GET" URI="/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=2&timeout=9m37s&timeoutSeconds=577&watch=true" latency=1.902344651s resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48752":
I0512 12:11:12.824985 921399 httplog.go:90] verb="GET" URI="/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=2&timeout=9m16s&timeoutSeconds=556&watch=true" latency=1.907491661s resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48744":
I0512 12:11:12.825073 921399 httplog.go:90] verb="GET" URI="/api/v1/services?allowWatchBookmarks=true&resourceVersion=119&timeout=5m1s&timeoutSeconds=301&watch=true" latency=1.911181263s resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48732":
I0512 12:11:12.828955 921399 httplog.go:90] verb="DELETE" URI="/api/v1/nodes" latency=4.421044ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
I0512 12:11:12.829137 921399 controller.go:181] Shutting down kubernetes service endpoint reconciler
I0512 12:11:12.836507 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency=7.046262ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
I0512 12:11:12.838843 921399 httplog.go:90] verb="PUT" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency=1.613813ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
I0512 12:11:12.840494 921399 httplog.go:90] verb="GET" URI="/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes" latency=1.090563ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
I0512 12:11:12.842665 921399 httplog.go:90] verb="PUT" URI="/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes" latency=1.547169ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48754":
I0512 12:11:12.843107 921399 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I0512 12:11:12.843124 921399 reflector.go:213] Stopping reflector *v1.ConfigMap (12h0m0s) from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0512 12:11:12.843280 921399 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&resourceVersion=2&timeout=8m34s&timeoutSeconds=514&watch=true" latency=5.543613178s resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:48718":
--- PASS: TestNominatedNodeCleanUp (5.78s)
PASS
ok k8s.io/kubernetes/test/integration/scheduler 5.875s
+++ [0512 12:11:12] Cleaning up etcd
+++ [0512 12:11:12] Integration test cleanup complete