@xavierzwirtz
Created January 27, 2020 01:36
hung1
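
What follows is the full nix-build log of a NixOS VM test named "nginx-deployment" that apparently hung (hence the filename). For orientation, a test producing output like this is typically defined roughly as below; this is a minimal sketch against NixOS 19.09, where the node name "kube" and the hostname "kube.my.xzy" are taken from the log and everything else is an assumption:

  import <nixpkgs/nixos/tests/make-test.nix> ({ pkgs, ... }: {
    name = "nginx-deployment";
    nodes.kube = { ... }: {
      # Assumed single-machine Kubernetes setup; the real configuration
      # also builds a layered nginx Docker image (see the buildLayeredImage
      # sketch further down) and generates manifests via kubenix.
      services.kubernetes.roles = [ "master" "node" ];
      services.kubernetes.masterAddress = "kube.my.xzy";
    };
    testScript = "..."; # see the waitUntilSucceeds sketch further down
  })
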
these derivations will be built:
/nix/store/24lw3dnbif9p6r11mq5nk6z3rgr209cb-bulk-layers.drv
/nix/store/3lgbww299v2mka7p9by84yxdd341wwzx-nginx-config.json.drv
/nix/store/8xsaixzaafay1vrbiif69as8l69jyh9i-nginx-customisation-layer.drv
/nix/store/ycwgfi9bgpq0dnxx8fc7732h1gjz9r8x-closure.drv
/nix/store/vgcij363wdayqvxhh5d7g0db6p1qvvrc-closure-paths.drv
/nix/store/zlbpl3x8s1siq093g34li4f0cxrq8r8n-store-path-to-layer.sh.drv
/nix/store/rasm8f1pr0miss2w0v9p2gb29w5jcwra-nginx-granular-docker-layers.drv
/nix/store/98a62c975gx89jmpy3knx0z276yh036y-docker-image-nginx.tar.gz.drv
/nix/store/ppdf7hillsy84h2l2qb30q1in698lwss-kubenix-generated.json.drv
/nix/store/qv5icsq2i5d8x58bh1d7b8iyiq0f2w21-run-nixos-vm.drv
/nix/store/s9a75xw41s9rv4wbdh7y8gprxg13szg4-nixos-vm.drv
/nix/store/mh8nqz1waq0gj2zapp9lsqszxng04q9r-nixos-test-driver-nginx-deployment.drv
/nix/store/ac0l0kff56ya4bj07gf5a47p97mlgj5z-vm-test-run-nginx-deployment.drv
these paths will be fetched (112.93 MiB download, 449.40 MiB unpacked):
/nix/store/01j21isxi1wn8vsjpbhlplyw1ddyypjm-geoip-1.6.12
/nix/store/06nq4z17fh43wrbn6hl1yq7bzs99lpr1-hook
/nix/store/0dshs4vdqivr9l3cnf244rizk3w6rk20-virglrenderer-0.7.0
/nix/store/2xwxj5qrrc71asdk1wyq19nz9k845pzs-patchelf-0.9
/nix/store/2yj27w7if3m362np4znnyip6v4y44fsz-go-1.12.9
/nix/store/3g2pkmc1s9ycjaxaqc5hrzmq05r5ywbi-stdenv-linux
/nix/store/4rmwdzcypzbs05kbkcxrp6k0ijmqhldv-perl5.30.0-XML-Writer-0.625
/nix/store/4w2zbpv9ihl36kbpp6w5d1x33gp5ivfh-source
/nix/store/62nx464pw43wx3fvg2dnfsaijl7nvvzq-jshon-20160111.2
/nix/store/86kxh5v2mggj4ghy8l7khqdffhwixhhn-jquery-ui-1.11.4
/nix/store/8cgm2dl5grnhddyknc3d06f7f2r30jf0-libxml2-2.9.9-bin
/nix/store/8g88npivsfhzfwzpw2j35wzzf2lbjf71-gd-2.2.5
/nix/store/976mm1v0m126d932c53iqwd7clx3ycka-libxslt-1.1.33-dev
/nix/store/aa7d477nrc0w14lqmib8619bc83csm2m-gnutls-3.6.11.1-dev
/nix/store/apfgni3w7sd7qnnzws0ky8j40sbigy4m-hook
/nix/store/axlxp2c9pqpy196jcncy7i0alpp8q4yn-libxslt-1.1.33-bin
/nix/store/blwx4aab2ygxhall7kwrdyb3nwk04bcm-tarsum
/nix/store/cnrpqd2i7sz8xxxjv3dspn75bhqwv01i-perl5.30.0-Term-ReadLine-Gnu-1.36
/nix/store/cwym8n7lkp02df7qf41j0gldgagzvjn4-netpbm-10.82.01
/nix/store/ggbrpajhaxmzc840ky35zsjva9nilypv-spice-0.14.2
/nix/store/h0bxpn54jvvm4qi0y57im3086flzqj7z-pcre2-10.33-dev
/nix/store/j8fq1ksp37w88rx80blzazldi17f3x7s-gnumake-4.2.1
/nix/store/jg0mniv6b69lfbb4fix0qdlf8fj22pdh-usbredir-0.8.0
/nix/store/jsqrk045m09i136mgcfjfai8i05nq14c-source
/nix/store/k3n5hvqb2lkx1z7cyyb5wsc6q6zhndlp-jquery-1.11.3
/nix/store/k9cgcprirg5zyjsdmd503lqj2mhvxqnc-nginx-1.16.1
/nix/store/kdzap6v930z3bj8h47jfk9hgasrqmhky-pcre2-10.33-bin
/nix/store/l8yj41cr5c6mx3cp4xazgxf49f14adhg-qemu-host-cpu-only-for-vm-tests-4.0.1
/nix/store/m97z0dr68wn36n8860dfvaa7w1qfrk30-vte-0.56.3
/nix/store/n14bjnksgk2phl8n69m4yabmds7f0jj2-source
/nix/store/q17zhi1pbfxr2k5mwc2pif258ib1bwag-autogen-5.18.12
/nix/store/qghrkvk86f9llfkcr1bxsypqbw1a4qmw-stdenv-linux
/nix/store/ryavpa9pbwf4w2j0q8jq7x6scy5igvxw-autogen-5.18.12-lib
/nix/store/s834pvkk1dc10a6f0x5fljvah8rkd6d0-nixos-test-driver
/nix/store/w3zk97m66b45grjabblijbfdhl4s82pc-nettle-3.4.1-dev
/nix/store/wl2iq6bx1k3j8wa5qqygra102k3nlijw-libxml2-2.9.9-dev
/nix/store/wvd3r9r8a2w3v1vcjbw1avfcbzv9aspq-libcacard-2.7.0
/nix/store/x664lr92z3lccfh28p7axk4jv6250fpi-gnutls-3.6.11.1-bin
/nix/store/x7vqi78gkhb3n1n1c4w4bgkakbyv5sq0-lndir-1.0.3
/nix/store/xbf40646brxmk2j59yc5ybq3zfhsdzkk-jq-1.6-dev
/nix/store/xhmbbqfl63slc37fl94h33n6ny6ky69a-pigz-2.4
/nix/store/zbwhp0jrf8y33l187yjs5j002lwl30d7-vde2-2.3.2
copying path '/nix/store/k3n5hvqb2lkx1z7cyyb5wsc6q6zhndlp-jquery-1.11.3' from 'https://cache.nixos.org'...
copying path '/nix/store/86kxh5v2mggj4ghy8l7khqdffhwixhhn-jquery-ui-1.11.4' from 'https://cache.nixos.org'...
copying path '/nix/store/j8fq1ksp37w88rx80blzazldi17f3x7s-gnumake-4.2.1' from 'https://cache.nixos.org'...
copying path '/nix/store/axlxp2c9pqpy196jcncy7i0alpp8q4yn-libxslt-1.1.33-bin' from 'https://cache.nixos.org'...
copying path '/nix/store/2xwxj5qrrc71asdk1wyq19nz9k845pzs-patchelf-0.9' from 'https://cache.nixos.org'...
copying path '/nix/store/apfgni3w7sd7qnnzws0ky8j40sbigy4m-hook' from 'https://cache.nixos.org'...
copying path '/nix/store/8cgm2dl5grnhddyknc3d06f7f2r30jf0-libxml2-2.9.9-bin' from 'https://cache.nixos.org'...
copying path '/nix/store/4rmwdzcypzbs05kbkcxrp6k0ijmqhldv-perl5.30.0-XML-Writer-0.625' from 'https://cache.nixos.org'...
copying path '/nix/store/cwym8n7lkp02df7qf41j0gldgagzvjn4-netpbm-10.82.01' from 'https://cache.nixos.org'...
copying path '/nix/store/cnrpqd2i7sz8xxxjv3dspn75bhqwv01i-perl5.30.0-Term-ReadLine-Gnu-1.36' from 'https://cache.nixos.org'...
copying path '/nix/store/zbwhp0jrf8y33l187yjs5j002lwl30d7-vde2-2.3.2' from 'https://cache.nixos.org'...
copying path '/nix/store/wvd3r9r8a2w3v1vcjbw1avfcbzv9aspq-libcacard-2.7.0' from 'https://cache.nixos.org'...
copying path '/nix/store/jg0mniv6b69lfbb4fix0qdlf8fj22pdh-usbredir-0.8.0' from 'https://cache.nixos.org'...
copying path '/nix/store/ggbrpajhaxmzc840ky35zsjva9nilypv-spice-0.14.2' from 'https://cache.nixos.org'...
copying path '/nix/store/0dshs4vdqivr9l3cnf244rizk3w6rk20-virglrenderer-0.7.0' from 'https://cache.nixos.org'...
copying path '/nix/store/xbf40646brxmk2j59yc5ybq3zfhsdzkk-jq-1.6-dev' from 'https://cache.nixos.org'...
copying path '/nix/store/62nx464pw43wx3fvg2dnfsaijl7nvvzq-jshon-20160111.2' from 'https://cache.nixos.org'...
copying path '/nix/store/xhmbbqfl63slc37fl94h33n6ny6ky69a-pigz-2.4' from 'https://cache.nixos.org'...
copying path '/nix/store/w3zk97m66b45grjabblijbfdhl4s82pc-nettle-3.4.1-dev' from 'https://cache.nixos.org'...
copying path '/nix/store/kdzap6v930z3bj8h47jfk9hgasrqmhky-pcre2-10.33-bin' from 'https://cache.nixos.org'...
copying path '/nix/store/q17zhi1pbfxr2k5mwc2pif258ib1bwag-autogen-5.18.12' from 'https://cache.nixos.org'...
copying path '/nix/store/4w2zbpv9ihl36kbpp6w5d1x33gp5ivfh-source' from 'https://cache.nixos.org'...
copying path '/nix/store/jsqrk045m09i136mgcfjfai8i05nq14c-source' from 'https://cache.nixos.org'...
copying path '/nix/store/n14bjnksgk2phl8n69m4yabmds7f0jj2-source' from 'https://cache.nixos.org'...
copying path '/nix/store/01j21isxi1wn8vsjpbhlplyw1ddyypjm-geoip-1.6.12' from 'https://cache.nixos.org'...
copying path '/nix/store/2yj27w7if3m362np4znnyip6v4y44fsz-go-1.12.9' from 'https://cache.nixos.org'...
copying path '/nix/store/x7vqi78gkhb3n1n1c4w4bgkakbyv5sq0-lndir-1.0.3' from 'https://cache.nixos.org'...
copying path '/nix/store/8g88npivsfhzfwzpw2j35wzzf2lbjf71-gd-2.2.5' from 'https://cache.nixos.org'...
copying path '/nix/store/06nq4z17fh43wrbn6hl1yq7bzs99lpr1-hook' from 'https://cache.nixos.org'...
copying path '/nix/store/wl2iq6bx1k3j8wa5qqygra102k3nlijw-libxml2-2.9.9-dev' from 'https://cache.nixos.org'...
copying path '/nix/store/h0bxpn54jvvm4qi0y57im3086flzqj7z-pcre2-10.33-dev' from 'https://cache.nixos.org'...
copying path '/nix/store/976mm1v0m126d932c53iqwd7clx3ycka-libxslt-1.1.33-dev' from 'https://cache.nixos.org'...
copying path '/nix/store/3g2pkmc1s9ycjaxaqc5hrzmq05r5ywbi-stdenv-linux' from 'https://cache.nixos.org'...
copying path '/nix/store/qghrkvk86f9llfkcr1bxsypqbw1a4qmw-stdenv-linux' from 'https://cache.nixos.org'...
copying path '/nix/store/ryavpa9pbwf4w2j0q8jq7x6scy5igvxw-autogen-5.18.12-lib' from 'https://cache.nixos.org'...
copying path '/nix/store/k9cgcprirg5zyjsdmd503lqj2mhvxqnc-nginx-1.16.1' from 'https://cache.nixos.org'...
building '/nix/store/ppdf7hillsy84h2l2qb30q1in698lwss-kubenix-generated.json.drv'...
building '/nix/store/3lgbww299v2mka7p9by84yxdd341wwzx-nginx-config.json.drv'...
building '/nix/store/zlbpl3x8s1siq093g34li4f0cxrq8r8n-store-path-to-layer.sh.drv'...
copying path '/nix/store/x664lr92z3lccfh28p7axk4jv6250fpi-gnutls-3.6.11.1-bin' from 'https://cache.nixos.org'...
building '/nix/store/24lw3dnbif9p6r11mq5nk6z3rgr209cb-bulk-layers.drv'...
building '/nix/store/ycwgfi9bgpq0dnxx8fc7732h1gjz9r8x-closure.drv'...
building '/nix/store/vgcij363wdayqvxhh5d7g0db6p1qvvrc-closure-paths.drv'...
copying path '/nix/store/aa7d477nrc0w14lqmib8619bc83csm2m-gnutls-3.6.11.1-dev' from 'https://cache.nixos.org'...
copying path '/nix/store/m97z0dr68wn36n8860dfvaa7w1qfrk30-vte-0.56.3' from 'https://cache.nixos.org'...
copying path '/nix/store/l8yj41cr5c6mx3cp4xazgxf49f14adhg-qemu-host-cpu-only-for-vm-tests-4.0.1' from 'https://cache.nixos.org'...
copying path '/nix/store/s834pvkk1dc10a6f0x5fljvah8rkd6d0-nixos-test-driver' from 'https://cache.nixos.org'...
building '/nix/store/qv5icsq2i5d8x58bh1d7b8iyiq0f2w21-run-nixos-vm.drv'...
building '/nix/store/s9a75xw41s9rv4wbdh7y8gprxg13szg4-nixos-vm.drv'...
copying path '/nix/store/blwx4aab2ygxhall7kwrdyb3nwk04bcm-tarsum' from 'https://cache.nixos.org'...
building '/nix/store/8xsaixzaafay1vrbiif69as8l69jyh9i-nginx-customisation-layer.drv'...
building '/nix/store/rasm8f1pr0miss2w0v9p2gb29w5jcwra-nginx-granular-docker-layers.drv'...
Packing layer...
Computing layer checksum...
Creating layer #1 for /nix/store/wx1vk75bpdr65g6xwxbj4rw0pk04v5j3-glibc-2.27
Creating layer #2 for /nix/store/xvxsbvbi7ckccz4pz2j6np7czadgjy2x-zlib-1.2.11
Creating layer #3 for /nix/store/n55nxs8xxdwkwv4kqh99pdnyqxp0d1zg-libpng-apng-1.6.37
Creating layer #4 for /nix/store/0ykbl0k34cfh80gvawqy5f8v1yq7pph8-bzip2-1.0.6.0.1
Creating layer #5 for /nix/store/s7j9n1wccws4kgigknl4rfqpyjxy544y-libjpeg-turbo-2.0.3
Creating layer #6 for /nix/store/w4snc9q1ns3rqg8zykkh9ric1d92akwd-dejavu-fonts-minimal-2.37
Creating layer #7 for /nix/store/nzb33937sf9031ik3v7c8d039lnviglk-freetype-2.10.1
Creating layer #8 for /nix/store/784rh7jrfhagbkydjfrv68h9x3g4gqmk-gcc-8.3.0-lib
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #9 for /nix/store/blykn8wlxh1n91dzxizyxvkygmd911cx-xz-5.2.4
tar: Removing leading `/' from member names
Creating layer #10 for /nix/store/lp6xmsg44yflzd3rv2qc4dc0m9y0qr2n-expat-2.2.7
tar: Removing leading `/' from member names
Creating layer #11 for /nix/store/9r9px061ymn6r8wdzgdhbm7sdb5b0dri-fontconfig-2.12.6
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #12 for /nix/store/yydyda5cz2x74pqp643q2r3p6ipy6d9b-giflib-5.1.4
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #13 for /nix/store/nl4l9vkbvpp5jblr7kycx2qqchbnn98a-libtiff-4.0.10
Creating layer #14 for /nix/store/5zvqxjp62ahwvgqm4y4x9p9ym112hljj-libxml2-2.9.9
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #15 for /nix/store/6mhw8asq3ciinkky6mqq6qn6sfxrkgks-fontconfig-2.12.6-lib
tar: Removing leading `/' from member names
Creating layer #16 for /nix/store/vwydn02iqfg7xp1a6rhpyhs8vl9v2b6d-libwebp-1.0.3
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #17 for /nix/store/8g88npivsfhzfwzpw2j35wzzf2lbjf71-gd-2.2.5
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #18 for /nix/store/01j21isxi1wn8vsjpbhlplyw1ddyypjm-geoip-1.6.12
Creating layer #19 for /nix/store/g42rl3xfqml0yrh5yjdfy4rfdpk1cc7y-libxslt-1.1.33
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #20 for /nix/store/z9vsvmll45kjdf7j9h0vlxjjya6yxgc0-openssl-1.1.1d
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #21 for /nix/store/6p4kq0v91y90jv5zqb4gri38c47wxglj-pcre-8.43
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #22 for /nix/store/4w2zbpv9ihl36kbpp6w5d1x33gp5ivfh-source
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #23 for /nix/store/jsqrk045m09i136mgcfjfai8i05nq14c-source /nix/store/n14bjnksgk2phl8n69m4yabmds7f0jj2-source /nix/store/k9cgcprirg5zyjsdmd503lqj2mhvxqnc-nginx-1.16.1 /nix/store/27hpjxyy26v0bpp7x8g72nddcv6nv3hw-bulk-layers /nix/store/gskazlyrm0f1bbcngy04f8m07lm2wsqf-nginx-config.json /nix/store/n8w8r7z1z962scfcc1h7rsdqnaf5xncc-closure
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Finished building layer 'nginx-granular-docker-layers'
building '/nix/store/98a62c975gx89jmpy3knx0z276yh036y-docker-image-nginx.tar.gz.drv'...
Cooking the image...
Finished.
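
The "Creating layer #N" lines above come from nixpkgs' layered Docker image builder: roughly, each store path in the image's closure gets its own tar layer until the layer budget is spent, and the remaining paths are bundled into the last granular layer, which is why layer #23 packs several store paths at once. Below is a minimal sketch of the kind of expression that drives this phase; only the image name "nginx" is confirmed by the log, the rest is illustrative:

  pkgs.dockerTools.buildLayeredImage {
    name = "nginx";
    # Assumed contents; the log only shows nginx-1.16.1 and a few fetched
    # sources ending up in the closure.
    contents = [ pkgs.nginx ];
    config.Cmd = [ "${pkgs.nginx}/bin/nginx" "-g" "daemon off;" ];
    # Cap on one-path-per-layer granularity; overflow paths share a layer.
    maxLayers = 24;
  }
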
building '/nix/store/mh8nqz1waq0gj2zapp9lsqszxng04q9r-nixos-test-driver-nginx-deployment.drv'...
building '/nix/store/ac0l0kff56ya4bj07gf5a47p97mlgj5z-vm-test-run-nginx-deployment.drv'...
starting VDE switch for network 1
running the VM test script
starting all VMs
kube: starting vm
kube# Formatting '/build/vm-state-kube/kube.qcow2', fmt=qcow2 size=4294967296 cluster_size=65536 lazy_refcounts=off refcount_bits=16
kube: QEMU running (pid 9)
(0.06 seconds)
kube: waiting for success: kubectl get node kube.my.xzy | grep -w Ready
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube: waiting for the VM to finish booting
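
The "waiting for success" line above is the 19.09 Perl test driver polling a shell command on the VM until it exits 0; a node that never reports Ready leaves the run stuck right here. In the test expression this corresponds to roughly the following, where the node name and command are taken from the log:

  testScript = ''
    startAll;
    # Polls the command repeatedly until it exits 0.
    $kube->waitUntilSucceeds("kubectl get node kube.my.xzy | grep -w Ready");
  '';
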
kube# SeaBIOS (version rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.org)
kube#
kube#
kube# iPXE (http://ipxe.org) 00:03.0 C980 PCI2.10 PnP PMM+7FF90FD0+7FEF0FD0 C980
kube#
kube#
kube#
kube#
kube# iPXE (http://ipxe.org) 00:08.0 CA80 PCI2.10 PnP PMM 7FF90FD0 7FEF0FD0 CA80
kube#
kube#
kube# Booting from ROM...
kube# Probing EDD (edd=off to disable)... ok
kube# [ 0.000000] Linux version 4.19.95 (nixbld@localhost) (gcc version 8.3.0 (GCC)) #1-NixOS SMP Sun Jan 12 11:17:30 UTC 2020
kube# [ 0.000000] Command line: console=ttyS0 panic=1 boot.panic_on_fail loglevel=7 net.ifnames=0 init=/nix/store/6s71ag4g9kx14hql5snisc48a3l5yj3w-nixos-system-kube-19.09.1861.eb65d1dae62/init regInfo=/nix/store/zafnvn8vcyp713dmyk4qfs4961rp2ysz-closure-info/registration console=ttyS0
kube# [ 0.000000] x86/fpu: x87 FPU will use FXSAVE
kube# [ 0.000000] BIOS-provided physical RAM map:
kube# [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
kube# [ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
kube# [ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
kube# [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
kube# [ 0.000000] BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
kube# [ 0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
kube# [ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
kube# [ 0.000000] NX (Execute Disable) protection: active
kube# [ 0.000000] SMBIOS 2.8 present.
kube# [ 0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.org 04/01/2014
kube# [ 0.000000] Hypervisor detected: KVM
kube# [ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
kube# [ 0.000000] kvm-clock: cpu 0, msr 2655f001, primary cpu clock
kube# [ 0.000000] kvm-clock: using sched offset of 523908244 cycles
kube# [ 0.000001] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
kube# [ 0.000002] tsc: Detected 3499.998 MHz processor
kube# [ 0.000953] last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
kube# [ 0.000990] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
kube# [ 0.002699] found SMP MP-table at [mem 0x000f5980-0x000f598f]
kube# [ 0.002796] Scanning 1 areas for low memory corruption
kube# [ 0.002890] RAMDISK: [mem 0x7f63e000-0x7ffcffff]
kube# [ 0.002901] ACPI: Early table checksum verification disabled
kube# [ 0.002928] ACPI: RSDP 0x00000000000F5940 000014 (v00 BOCHS )
kube# [ 0.002930] ACPI: RSDT 0x000000007FFE152E 000030 (v01 BOCHS BXPCRSDT 00000001 BXPC 00000001)
kube# [ 0.002932] ACPI: FACP 0x000000007FFE1392 000074 (v01 BOCHS BXPCFACP 00000001 BXPC 00000001)
kube# [ 0.002935] ACPI: DSDT 0x000000007FFDFA80 001912 (v01 BOCHS BXPCDSDT 00000001 BXPC 00000001)
kube# [ 0.002937] ACPI: FACS 0x000000007FFDFA40 000040
kube# [ 0.002938] ACPI: APIC 0x000000007FFE1406 0000F0 (v01 BOCHS BXPCAPIC 00000001 BXPC 00000001)
kube# [ 0.002940] ACPI: HPET 0x000000007FFE14F6 000038 (v01 BOCHS BXPCHPET 00000001 BXPC 00000001)
kube# [ 0.003142] No NUMA configuration found
kube# [ 0.003143] Faking a node at [mem 0x0000000000000000-0x000000007ffdbfff]
kube# [ 0.003145] NODE_DATA(0) allocated [mem 0x7ffd8000-0x7ffdbfff]
kube# [ 0.003156] Zone ranges:
kube# [ 0.003157] DMA [mem 0x0000000000001000-0x0000000000ffffff]
kube# [ 0.003158] DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
kube# [ 0.003158] Normal empty
kube# [ 0.003159] Movable zone start for each node
kube# [ 0.003160] Early memory node ranges
kube# [ 0.003160] node 0: [mem 0x0000000000001000-0x000000000009efff]
kube# [ 0.003161] node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
kube# [ 0.003363] Reserved but unavailable: 98 pages
kube# [ 0.003364] Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
kube# [ 0.013255] ACPI: PM-Timer IO Port: 0x608
kube# [ 0.013265] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
kube# [ 0.013285] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
kube# [ 0.013287] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
kube# [ 0.013288] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
kube# [ 0.013288] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
kube# [ 0.013289] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
kube# [ 0.013290] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
kube# [ 0.013293] Using ACPI (MADT) for SMP configuration information
kube# [ 0.013294] ACPI: HPET id: 0x8086a201 base: 0xfed00000
kube# [ 0.013300] smpboot: Allowing 16 CPUs, 0 hotplug CPUs
kube# [ 0.013315] PM: Registered nosave memory: [mem 0x00000000-0x00000fff]
kube# [ 0.013316] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
kube# [ 0.013317] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
kube# [ 0.013317] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
kube# [ 0.013319] [mem 0x80000000-0xfeffbfff] available for PCI devices
kube# [ 0.013319] Booting paravirtualized kernel on KVM
kube# [ 0.013322] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
kube# [ 0.066330] random: get_random_bytes called from start_kernel+0x93/0x4ca with crng_init=0
kube# [ 0.066337] setup_percpu: NR_CPUS:384 nr_cpumask_bits:384 nr_cpu_ids:16 nr_node_ids:1
kube# [ 0.066846] percpu: Embedded 44 pages/cpu s142424 r8192 d29608 u262144
kube# [ 0.066868] KVM setup async PF for cpu 0
kube# [ 0.066872] kvm-stealtime: cpu 0, msr 7d016180
kube# [ 0.066877] Built 1 zonelists, mobility grouping on. Total pages: 515941
kube# [ 0.066878] Policy zone: DMA32
kube# [ 0.066879] Kernel command line: console=ttyS0 panic=1 boot.panic_on_fail loglevel=7 net.ifnames=0 init=/nix/store/6s71ag4g9kx14hql5snisc48a3l5yj3w-nixos-system-kube-19.09.1861.eb65d1dae62/init regInfo=/nix/store/zafnvn8vcyp713dmyk4qfs4961rp2ysz-closure-info/registration console=ttyS0
kube# [ 0.070402] Memory: 2028748K/2096616K available (10252K kernel code, 1140K rwdata, 1904K rodata, 1448K init, 764K bss, 67868K reserved, 0K cma-reserved)
kube# [ 0.070653] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
kube# [ 0.070658] ftrace: allocating 28577 entries in 112 pages
kube# [ 0.077616] rcu: Hierarchical RCU implementation.
kube# [ 0.077617] rcu: RCU event tracing is enabled.
kube# [ 0.077617] rcu: RCU restricting CPUs from NR_CPUS=384 to nr_cpu_ids=16.
kube# [ 0.077618] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
kube# [ 0.079034] NR_IRQS: 24832, nr_irqs: 552, preallocated irqs: 16
kube# [ 0.082963] Console: colour VGA+ 80x25
kube# [ 0.132309] console [ttyS0] enabled
kube# [ 0.132644] ACPI: Core revision 20180810
kube# [ 0.133152] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
kube# [ 0.134048] APIC: Switch to symmetric I/O mode setup
kube# [ 0.134644] x2apic enabled
kube# [ 0.135016] Switched APIC routing to physical x2apic.
kube# [ 0.136140] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
kube# [ 0.136717] tsc: Marking TSC unstable due to TSCs unsynchronized
kube# [ 0.137296] Calibrating delay loop (skipped) preset value.. 6999.99 BogoMIPS (lpj=3499998)
kube# [ 0.138290] pid_max: default: 32768 minimum: 301
kube# [ 0.138744] Security Framework initialized
kube# [ 0.139125] Yama: becoming mindful.
kube# [ 0.139306] AppArmor: AppArmor initialized
kube# [ 0.139987] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes)
kube# [ 0.140491] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes)
kube# [ 0.141295] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes)
kube# [ 0.142291] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes)
kube# [ 0.143154] Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
kube# [ 0.143289] Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
kube# [ 0.144290] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
kube# [ 0.145074] Spectre V2 : Mitigation: Full AMD retpoline
kube# [ 0.145289] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
kube# [ 0.146401] Freeing SMP alternatives memory: 28K
kube# [ 0.250249] smpboot: CPU0: AMD Common KVM processor (family: 0xf, model: 0x6, stepping: 0x1)
kube# [ 0.250288] Performance Events: AMD PMU driver.
kube# [ 0.250291] ... version: 0
kube# [ 0.250673] ... bit width: 48
kube# [ 0.251289] ... generic registers: 4
kube# [ 0.251662] ... value mask: 0000ffffffffffff
kube# [ 0.252155] ... max period: 00007fffffffffff
kube# [ 0.252289] ... fixed-purpose events: 0
kube# [ 0.252665] ... event mask: 000000000000000f
kube# [ 0.253325] rcu: Hierarchical SRCU implementation.
kube# [ 0.254394] smp: Bringing up secondary CPUs ...
kube# [ 0.254873] x86: Booting SMP configuration:
kube# [ 0.255265] .... node #0, CPUs: #1
kube# [ 0.056125] kvm-clock: cpu 1, msr 2655f041, secondary cpu clock
kube# [ 0.255978] KVM setup async PF for cpu 1
kube# [ 0.256205] kvm-stealtime: cpu 1, msr 7d056180
kube# [ 0.257334] #2
kube# [ 0.056125] kvm-clock: cpu 2, msr 2655f081, secondary cpu clock
kube# [ 0.257874] KVM setup async PF for cpu 2
kube# [ 0.258225] kvm-stealtime: cpu 2, msr 7d096180
kube# [ 0.259356] #3
kube# [ 0.056125] kvm-clock: cpu 3, msr 2655f0c1, secondary cpu clock
kube# [ 0.259852] KVM setup async PF for cpu 3
kube# [ 0.260203] kvm-stealtime: cpu 3, msr 7d0d6180
kube# [ 0.261512] #4
kube# [ 0.056125] kvm-clock: cpu 4, msr 2655f101, secondary cpu clock
kube# [ 0.262013] KVM setup async PF for cpu 4
kube# [ 0.262205] kvm-stealtime: cpu 4, msr 7d116180
kube# [ 0.263348] #5
kube# [ 0.056125] kvm-clock: cpu 5, msr 2655f141, secondary cpu clock
kube# [ 0.263842] KVM setup async PF for cpu 5
kube# [ 0.264195] kvm-stealtime: cpu 5, msr 7d156180
kube# [ 0.265340] #6
kube# [ 0.056125] kvm-clock: cpu 6, msr 2655f181, secondary cpu clock
kube# [ 0.265834] KVM setup async PF for cpu 6
kube# [ 0.266207] kvm-stealtime: cpu 6, msr 7d196180
kube# [ 0.266337] #7
kube# [ 0.056125] kvm-clock: cpu 7, msr 2655f1c1, secondary cpu clock
kube# [ 0.267872] KVM setup async PF for cpu 7
kube# [ 0.268253] kvm-stealtime: cpu 7, msr 7d1d6180
kube# [ 0.269333] #8
kube# [ 0.056125] kvm-clock: cpu 8, msr 2655f201, secondary cpu clock
kube# [ 0.269842] KVM setup async PF for cpu 8
kube# [ 0.270227] kvm-stealtime: cpu 8, msr 7d216180
kube# [ 0.271289] #9
kube# [ 0.056125] kvm-clock: cpu 9, msr 2655f241, secondary cpu clock
kube# [ 0.271769] KVM setup async PF for cpu 9
kube# [ 0.272222] kvm-stealtime: cpu 9, msr 7d256180
kube# [ 0.272329] #10
kube# [ 0.056125] kvm-clock: cpu 10, msr 2655f281, secondary cpu clock
kube# [ 0.273691] KVM setup async PF for cpu 10
kube# [ 0.274217] kvm-stealtime: cpu 10, msr 7d296180
kube# [ 0.274329] #11
kube# [ 0.056125] kvm-clock: cpu 11, msr 2655f2c1, secondary cpu clock
kube# [ 0.275603] KVM setup async PF for cpu 11
kube# [ 0.276288] kvm-stealtime: cpu 11, msr 7d2d6180
kube# [ 0.276332] #12
kube# [ 0.056125] kvm-clock: cpu 12, msr 2655f301, secondary cpu clock
kube# [ 0.277725] KVM setup async PF for cpu 12
kube# [ 0.278257] kvm-stealtime: cpu 12, msr 7d316180
kube# [ 0.278340] #13
kube# [ 0.056125] kvm-clock: cpu 13, msr 2655f341, secondary cpu clock
kube# [ 0.279730] KVM setup async PF for cpu 13
kube# [ 0.280260] kvm-stealtime: cpu 13, msr 7d356180
kube# [ 0.280338] #14
kube# [ 0.056125] kvm-clock: cpu 14, msr 2655f381, secondary cpu clock
kube# [ 0.281744] KVM setup async PF for cpu 14
kube# [ 0.282256] kvm-stealtime: cpu 14, msr 7d396180
kube# [ 0.282332] #15
kube# [ 0.056125] kvm-clock: cpu 15, msr 2655f3c1, secondary cpu clock
kube# [ 0.283748] KVM setup async PF for cpu 15
kube# [ 0.284288] kvm-stealtime: cpu 15, msr 7d3d6180
kube# [ 0.285293] smp: Brought up 1 node, 16 CPUs
kube# [ 0.285696] smpboot: Max logical packages: 16
kube# [ 0.286131] smpboot: Total of 16 processors activated (111999.93 BogoMIPS)
kube# [ 0.287520] devtmpfs: initialized
kube# [ 0.288328] x86/mm: Memory block size: 128MB
kube# [ 0.289306] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
kube# [ 0.290230] futex hash table entries: 4096 (order: 6, 262144 bytes)
kube# [ 0.290408] pinctrl core: initialized pinctrl subsystem
kube# [ 0.291484] NET: Registered protocol family 16
kube# [ 0.291966] audit: initializing netlink subsys (disabled)
kube# [ 0.292309] audit: type=2000 audit(1580088307.867:1): state=initialized audit_enabled=0 res=1
kube# [ 0.293293] cpuidle: using governor menu
kube# [ 0.294413] ACPI: bus type PCI registered
kube# [ 0.294818] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
kube# [ 0.295356] PCI: Using configuration type 1 for base access
kube# [ 0.296679] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
kube# [ 0.297557] ACPI: Added _OSI(Module Device)
kube# [ 0.298290] ACPI: Added _OSI(Processor Device)
kube# [ 0.298747] ACPI: Added _OSI(3.0 _SCP Extensions)
kube# [ 0.299291] ACPI: Added _OSI(Processor Aggregator Device)
kube# [ 0.299828] ACPI: Added _OSI(Linux-Dell-Video)
kube# [ 0.300290] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
kube# [ 0.301326] ACPI: 1 ACPI AML tables successfully acquired and loaded
kube# [ 0.302411] ACPI: Interpreter enabled
kube# [ 0.302803] ACPI: (supports S0 S3 S4 S5)
kube# [ 0.303291] ACPI: Using IOAPIC for interrupt routing
kube# [ 0.303805] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
kube# [ 0.304361] ACPI: Enabled 2 GPEs in block 00 to 0F
kube# [ 0.307005] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
kube# [ 0.307293] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
kube# [ 0.308294] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
kube# [ 0.308961] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
kube# [ 0.309362] acpiphp: Slot [3] registered
kube# [ 0.310311] acpiphp: Slot [4] registered
kube# [ 0.310739] acpiphp: Slot [5] registered
kube# [ 0.311312] acpiphp: Slot [6] registered
kube# [ 0.311721] acpiphp: Slot [7] registered
kube# [ 0.312140] acpiphp: Slot [8] registered
kube# [ 0.312312] acpiphp: Slot [9] registered
kube# [ 0.312741] acpiphp: Slot [10] registered
kube# [ 0.313312] acpiphp: Slot [11] registered
kube# [ 0.313730] acpiphp: Slot [12] registered
kube# [ 0.314318] acpiphp: Slot [13] registered
kube# [ 0.314727] acpiphp: Slot [14] registered
kube# [ 0.315162] acpiphp: Slot [15] registered
kube# [ 0.315312] acpiphp: Slot [16] registered
kube# [ 0.315747] acpiphp: Slot [17] registered
kube# [ 0.316311] acpiphp: Slot [18] registered
kube# [ 0.316744] acpiphp: Slot [19] registered
kube# [ 0.317317] acpiphp: Slot [20] registered
kube# [ 0.317729] acpiphp: Slot [21] registered
kube# [ 0.318163] acpiphp: Slot [22] registered
kube# [ 0.318311] acpiphp: Slot [23] registered
kube# [ 0.318744] acpiphp: Slot [24] registered
kube# [ 0.319312] acpiphp: Slot [25] registered
kube# [ 0.319729] acpiphp: Slot [26] registered
kube# [ 0.320294] acpiphp: Slot [27] registered
kube# [ 0.320706] acpiphp: Slot [28] registered
kube# [ 0.321143] acpiphp: Slot [29] registered
kube# [ 0.321313] acpiphp: Slot [30] registered
kube# [ 0.321835] acpiphp: Slot [31] registered
kube# [ 0.322297] PCI host bridge to bus 0000:00
kube# [ 0.322699] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
kube# [ 0.323289] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
kube# [ 0.323944] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
kube# [ 0.324289] pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
kube# [ 0.325291] pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
kube# [ 0.326291] pci_bus 0000:00: root bus resource [bus 00-ff]
kube# [ 0.330722] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
kube# [ 0.331290] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
kube# [ 0.331923] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
kube# [ 0.332289] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
kube# [ 0.337052] pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
kube# [ 0.337295] pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
kube# [ 0.409847] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
kube# [ 0.410340] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
kube# [ 0.411335] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
kube# [ 0.412336] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
kube# [ 0.413315] ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
kube# [ 0.414546] pci 0000:00:02.0: vgaarb: setting as boot VGA device
kube# [ 0.414888] pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
kube# [ 0.415293] pci 0000:00:02.0: vgaarb: bridge control possible
kube# [ 0.416289] vgaarb: loaded
kube# [ 0.416629] PCI: Using ACPI for IRQ routing
kube# [ 0.417478] NetLabel: Initializing
kube# [ 0.417818] NetLabel: domain hash size = 128
kube# [ 0.418289] NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO
kube# [ 0.418844] NetLabel: unlabeled traffic allowed by default
kube# [ 0.419324] HPET: 3 timers in total, 0 timers will be used for per-cpu timer
kube# [ 0.420020] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
kube# [ 0.420290] hpet0: 3 comparators, 64-bit 100.000000 MHz counter
kube# [ 0.424832] clocksource: Switched to clocksource kvm-clock
kube# [ 0.429800] VFS: Disk quotas dquot_6.6.0
kube# [ 0.430203] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
kube# [ 0.430946] AppArmor: AppArmor Filesystem Enabled
kube# [ 0.431422] pnp: PnP ACPI init
kube# [ 0.431923] pnp: PnP ACPI: found 6 devices
kube# [ 0.438433] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
kube# [ 0.439306] NET: Registered protocol family 2
kube# [ 0.439805] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes)
kube# [ 0.440576] TCP established hash table entries: 16384 (order: 5, 131072 bytes)
kube# [ 0.441285] TCP bind hash table entries: 16384 (order: 6, 262144 bytes)
kube# [ 0.441936] TCP: Hash tables configured (established 16384 bind 16384)
kube# [ 0.442584] UDP hash table entries: 1024 (order: 3, 32768 bytes)
kube# [ 0.443179] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes)
kube# [ 0.443842] NET: Registered protocol family 1
kube# [ 0.444278] pci 0000:00:01.0: PIIX3: Enabling Passive Release
kube# [ 0.444847] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
kube# [ 0.445431] pci 0000:00:01.0: Activating ISA DMA hang workarounds
kube# [ 0.454471] PCI Interrupt Link [LNKD] enabled at IRQ 11
kube# [ 0.463772] pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x6c3 took 17324 usecs
kube# [ 0.464508] pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
kube# [ 0.465396] Trying to unpack rootfs image as initramfs...
kube# [ 0.545498] Freeing initrd memory: 9800K
kube# [ 0.545981] Scanning for low memory corruption every 60 seconds
kube# [ 0.546952] Initialise system trusted keyrings
kube# [ 0.547583] workingset: timestamp_bits=40 max_order=19 bucket_order=0
kube# [ 0.548982] zbud: loaded
kube# [ 0.550041] Key type asymmetric registered
kube# [ 0.550472] Asymmetric key parser 'x509' registered
kube# [ 0.550956] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
kube# [ 0.551764] io scheduler noop registered
kube# [ 0.552154] io scheduler deadline registered
kube# [ 0.552602] io scheduler cfq registered (default)
kube# [ 0.553059] io scheduler mq-deadline registered
kube# [ 0.553516] io scheduler kyber registered
kube# [ 0.554403] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
kube# [ 0.578019] 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
kube# [ 0.580746] brd: module loaded
kube# [ 0.581698] mce: Using 10 MCE banks
kube# [ 0.582064] sched_clock: Marking stable (526559578, 55125337)->(584923197, -3238282)
kube# [ 0.583302] registered taskstats version 1
kube# [ 0.583720] Loading compiled-in X.509 certificates
kube# [ 0.584200] zswap: loaded using pool lzo/zbud
kube# [ 0.584997] AppArmor: AppArmor sha1 policy hashing enabled
kube# [ 0.587227] Freeing unused kernel image memory: 1448K
kube# [ 0.596298] Write protecting the kernel read-only data: 14336k
kube# [ 0.597447] Freeing unused kernel image memory: 2012K
kube# [ 0.598026] Freeing unused kernel image memory: 144K
kube# [ 0.598539] Run /init as init process
kube#
kube# <<< NixOS Stage 1 >>>
kube#
kube# loading module virtio_balloon...
kube# loading module virtio_console...
kube# loading module virtio_rng...
kube# loading module dm_mod...
kube# [ 0.622184] device-mapper: ioctl: 4.39.0-ioctl (2018-04-03) initialised: dm-devel@redhat.com
kube# running udev...
kube# [ 0.625852] systemd-udevd[181]: Starting version 243
kube# [ 0.626604] systemd-udevd[182]: Network interface NamePolicy= disabled on kernel command line, ignoring.
kube# [ 0.627834] systemd-udevd[182]: /nix/store/936zacvhbd3zy281ghpdbrngwxc9h89s-udev-rules/11-dm-lvm.rules:40 Invalid value for OPTIONS key, ignoring: 'event_timeout=180'
kube# [ 0.629251] systemd-udevd[182]: /nix/store/936zacvhbd3zy281ghpdbrngwxc9h89s-udev-rules/11-dm-lvm.rules:40 The line takes no effect, ignoring.
kube# [ 0.641136] rtc_cmos 00:00: RTC can wake from S4
kube# [ 0.642396] rtc_cmos 00:00: registered as rtc0
kube# [ 0.643002] rtc_cmos 00:00: alarms up to one day, y3k, 114 bytes nvram, hpet irqs
kube# [ 0.644588] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
kube# [ 0.646324] serio: i8042 KBD port at 0x60,0x64 irq 1
kube# [ 0.646924] serio: i8042 AUX port at 0x60,0x64 irq 12
kube# [ 0.650036] SCSI subsystem initialized
kube# [ 0.651941] ACPI: bus type USB registered
kube# [ 0.652386] usbcore: registered new interface driver usbfs
kube# [ 0.652937] usbcore: registered new interface driver hub
kube# [ 0.653545] usbcore: registered new device driver usb
kube# [ 0.655929] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
kube# [ 0.656669] PCI Interrupt Link [LNKC] enabled at IRQ 10
kube# [ 0.657569] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
kube# [ 0.660925] uhci_hcd: USB Universal Host Controller Interface driver
kube# [ 0.662513] scsi host0: ata_piix
kube# [ 0.662955] scsi host1: ata_piix
kube# [ 0.663307] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1c0 irq 14
kube# [ 0.663950] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1c8 irq 15
kube# [ 0.676450] uhci_hcd 0000:00:01.2: UHCI Host Controller
kube# [ 0.677272] uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
kube# [ 0.678074] uhci_hcd 0000:00:01.2: detected 2 ports
kube# [ 0.678602] uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c0c0
kube# [ 0.679238] usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 4.19
kube# [ 0.679376] random: fast init done
kube# [ 0.680108] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
kube# [ 0.680575] random: crng init done
kube# [ 0.681331] usb usb1: Product: UHCI Host Controller
kube# [ 0.681334] usb usb1: Manufacturer: Linux 4.19.95 uhci_hcd
kube# [ 0.681336] usb usb1: SerialNumber: 0000:00:01.2
kube# [ 0.683701] hub 1-0:1.0: USB hub found
kube# [ 0.684064] hub 1-0:1.0: 2 ports detected
kube# [ 0.687217] PCI Interrupt Link [LNKA] enabled at IRQ 10
kube# [ 0.697123] PCI Interrupt Link [LNKB] enabled at IRQ 11
kube# [ 0.751997] virtio_blk virtio8: [vda] 8388608 512-byte logical blocks (4.29 GB/4.00 GiB)
kube# [ 0.754681] 9pnet: Installing 9P2000 support
kube# [ 0.822068] ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
kube# [ 0.823496] scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
kube# [ 0.847757] sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
kube# [ 0.848426] cdrom: Uniform CD-ROM driver Revision: 3.20
kube# [ 1.007317] usb 1-1: new full-speed USB device number 2 using uhci_hcd
kube# kbd_mode: KDSKBMODE: Inappropriate ioctl for device
kube# starting device mapper and LVM...
kube# [ 1.116146] clocksource: Switched to clocksource acpi_pm
kube# mke2fs 1.45.3 (14-Jul-2019)
kube# Creating filesystem with 1048576 4k blocks and 262144 inodes
kube# Filesystem UUID: 920e3c81-6c0f-46f5-a720-e7242061a165
kube# Superblock backups stored on blocks:
kube# 32768, 98304, 163840, 229376, 294912, 819200, 884736
kube#
kube# Allocating group tables: 0/32 done
kube# Writing inode tables: 0/32 done
kube# Creating journal (16384 blocks): done
kube# Writing superblocks and filesystem accounting information: 0/32
kube# [ 1.176924] usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
kube# [ 1.178080] usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
kube# [ 1.178917] usb 1-1: Product: QEMU USB Tablet
kube# [ 1.179383] usb 1-1: Manufacturer: QEMU
kube# [ 1.179767] usb 1-1: SerialNumber: 28754-0000:00:01.2-1
kube# [ 1.187691] hidraw: raw HID events driver (C) Jiri Kosina
kube# [ 1.193967] usbcore: registered new interface driver usbhid
kube# [ 1.194564] usbhid: USB HID core driver
kube# [ 1.195868] input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
kube# [ 1.197050] hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
kube# done
kube#
kube# checking /dev/vda...
kube# fsck (busybox 1.30.1)
kube# [fsck.ext4 (1) -- /mnt-root/] fsck.ext4 -a /dev/vda
kube# /dev/vda: clean, 11/262144 files, 36942/1048576 blocks
kube# mounting /dev/vda on /...
kube# [ 1.300406] EXT4-fs (vda): mounted filesystem with ordered data mode. Opts: (null)
kube# mounting store on /nix/.ro-store...
kube# [ 1.313224] FS-Cache: Loaded
kube# [ 1.316145] 9p: Installing v9fs 9p2000 file system support
kube# [ 1.316747] FS-Cache: Netfs '9p' registered for caching
kube# mounting tmpfs on /nix/.rw-store...
kube# mounting shared on /tmp/shared...
kube# mounting xchg on /tmp/xchg...
kube# mounting overlay filesystem on /nix/store...
kube#
kube# <<< NixOS Stage 2 >>>
kube#
kube# [ 1.463886] EXT4-fs (vda): re-mounted. Opts: (null)
kube# [ 1.464950] booting system configuration /nix/store/6s71ag4g9kx14hql5snisc48a3l5yj3w-nixos-system-kube-19.09.1861.eb65d1dae62
kube# running activation script...
kube# setting up /etc...
kube# starting systemd...
kube# [ 2.611612] systemd[1]: Inserted module 'autofs4'
kube# [ 2.635727] NET: Registered protocol family 10
kube# [ 2.636495] Segment Routing with IPv6
kube# [ 2.647907] systemd[1]: systemd 243 running in system mode. (+PAM +AUDIT -SELINUX +IMA +APPARMOR +SMACK -SYSVINIT +UTMP -LIBCRYPTSETUP +GCRYPT -GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID -ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
kube# [ 2.650064] systemd[1]: Detected virtualization kvm.
kube# [ 2.650580] systemd[1]: Detected architecture x86-64.
kube# [ 2.658318] systemd[1]: Set hostname to <kube>.
kube# [ 2.660330] systemd[1]: Initializing machine ID from random generator.
kube# [ 2.707885] systemd-fstab-generator[616]: Checking was requested for "store", but it is not a device.
kube# [ 2.711197] systemd-fstab-generator[616]: Checking was requested for "shared", but it is not a device.
kube# [ 2.712717] systemd-fstab-generator[616]: Checking was requested for "xchg", but it is not a device.
kube# [ 2.928782] systemd[1]: /nix/store/0vscs3kafrn5z3g1bwdgabsdnii8kszz-unit-cfssl.service/cfssl.service:16: StateDirectory= path is absolute, ignoring: /var/lib/cfssl
kube# [ 2.942222] systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
kube# [ 2.943976] systemd[1]: Created slice kubernetes.slice.
kube# [ 2.945330] systemd[1]: Created slice system-getty.slice.
kube# [ 2.946339] systemd[1]: Created slice User and Session Slice.
kube# [ 2.986506] EXT4-fs (vda): re-mounted. Opts: (null)
kube# [ 2.991363] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
kube# [ 3.004730] tun: Universal TUN/TAP device driver, 1.6
kube# [ 3.011491] loop: module loaded
kube# [ 3.016878] Bridge firewalling registered
kube# [ 3.201684] audit: type=1325 audit(1580088310.084:2): table=filter family=2 entries=12
kube# [ 3.215676] audit: type=1325 audit(1580088310.091:3): table=filter family=10 entries=12
kube# [ 3.216566] audit: type=1300 audit(1580088310.091:3): arch=c000003e syscall=54 success=yes exit=0 a0=4 a1=29 a2=40 a3=14fbfa0 items=0 ppid=638 pid=676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/nix/store/vvc9a2w2y1fg4xzf1rpxa8jwv5d4amh6-iptables-1.8.3/bin/xtables-legacy-multi" subj==unconfined key=(null)
kube# [ 3.220014] audit: type=1327 audit(1580088310.091:3): proctitle=6970367461626C6573002D77002D41006E69786F732D66772D6C6F672D726566757365002D7000746370002D2D73796E002D6A004C4F47002D2D6C6F672D6C6576656C00696E666F002D2D6C6F672D707265666978007265667573656420636F6E6E656374696F6E3A20
kube# [ 3.233959] audit: type=1325 audit(1580088310.116:4): table=filter family=2 entries=13
kube# [ 3.234836] audit: type=1300 audit(1580088310.116:4): arch=c000003e syscall=54 success=yes exit=0 a0=4 a1=0 a2=40 a3=707850 items=0 ppid=638 pid=678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/nix/store/vvc9a2w2y1fg4xzf1rpxa8jwv5d4amh6-iptables-1.8.3/bin/xtables-legacy-multi" subj==unconfined key=(null)
kube# [ 3.238225] audit: type=1327 audit(1580088310.116:4): proctitle=69707461626C6573002D77002D41006E69786F732D66772D6C6F672D726566757365002D6D00706B74747970650000002D2D706B742D7479706500756E6963617374002D6A006E69786F732D66772D726566757365
kube# [ 3.244847] audit: type=1325 audit(1580088310.127:5): table=filter family=10 entries=13
kube# [ 3.245774] audit: type=1300 audit(1580088310.127:5): arch=c000003e syscall=54 success=yes exit=0 a0=4 a1=29 a2=40 a3=240fc60 items=0 ppid=638 pid=680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/nix/store/vvc9a2w2y1fg4xzf1rpxa8jwv5d4amh6-iptables-1.8.3/bin/xtables-legacy-multi" subj==unconfined key=(null)
kube# [ 3.195114] systemd-modules-load[628]: Failed to find module 'gcov-proc'
kube# [ 3.197017] systemd-modules-load[628]: Inserted module 'bridge'
kube# [ 3.198205] systemd-modules-load[628]: Inserted module 'macvlan'
kube# [ 3.199358] systemd-modules-load[628]: Inserted module 'tap'
kube# [ 3.200667] systemd-modules-load[628]: Inserted module 'tun'
kube# [ 3.202489] systemd-modules-load[628]: Inserted module 'loop'
kube# [ 3.203806] systemd-modules-load[628]: Inserted module 'br_netfilter'
kube# [ 3.205273] systemd-udevd[635]: Network interface NamePolicy= disabled on kernel command line, ignoring.
kube# [ 3.206860] systemd-udevd[635]: /nix/store/8w316wmy13r2yblac0lj188704pyimxp-udev-rules/11-dm-lvm.rules:40 Invalid value for OPTIONS key, ignoring: 'event_timeout=180'
kube# [ 3.208799] systemd-udevd[635]: /nix/store/8w316wmy13r2yblac0lj188704pyimxp-udev-rules/11-dm-lvm.rules:40 The line takes no effect, ignoring.
kube# [ 3.211041] systemd[1]: Starting Flush Journal to Persistent Storage...
kube# [ 3.273867] systemd-journald[627]: Received client request to flush runtime journal.
kube# [ 3.264405] systemd[1]: Started udev Kernel Device Manager.
kube# [ 3.320719] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
kube# [ 3.322606] ACPI: Power Button [PWRF]
kube# [ 3.268319] systemd[1]: Started Flush Journal to Persistent Storage.
kube# [ 3.270511] systemd[1]: Starting Create Volatile Files and Directories...
kube# [ 3.287511] systemd[1]: Started Create Volatile Files and Directories.
kube# [ 3.288716] systemd[1]: Starting Rebuild Journal Catalog...
kube# [ 3.289830] systemd[1]: Starting Update UTMP about System Boot/Shutdown...
kube# [ 3.305636] systemd[1]: Started Update UTMP about System Boot/Shutdown.
kube# [ 3.315032] systemd[1]: Started Rebuild Journal Catalog.
kube# [ 3.316063] systemd[1]: Starting Update is Completed...
kube# [ 3.327293] systemd[1]: Started Update is Completed.
kube# [ 3.445767] parport_pc 00:04: reported by Plug and Play ACPI
kube# [ 3.446739] parport0: PC-style at 0x378, irq 7 [PCSPP(,...)]
kube# [ 3.483541] Floppy drive(s): fd0 is 2.88M AMI BIOS
kube# [ 3.495706] FDC 0 is a S82078B
kube# [ 3.538978] Linux agpgart interface v0.103
kube# [ 3.543168] piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
kube# [ 3.530658] systemd-udevd[709]: Using default interface naming scheme 'v243'.
kube# [ 3.534649] systemd-udevd[709]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
kube# [ 3.544162] systemd-udevd[698]: Using default interface naming scheme 'v243'.
kube# [ 3.545292] systemd-udevd[698]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
kube# [ 3.605921] mousedev: PS/2 mouse device common for all mice
kube# [ 3.563668] systemd[1]: Found device Virtio network device.
kube# [ 3.602335] systemd[1]: Found device /dev/ttyS0.
kube# [ 3.672679] powernow_k8: Power state transitions not supported
kube# [ 3.673867] powernow_k8: Power state transitions not supported
kube# [ 3.674696] powernow_k8: Power state transitions not supported
kube# [ 3.675482] powernow_k8: Power state transitions not supported
kube# [ 3.676674] powernow_k8: Power state transitions not supported
kube# [ 3.677813] powernow_k8: Power state transitions not supported
kube# [ 3.678458] powernow_k8: Power state transitions not supported
kube# [ 3.679319] powernow_k8: Power state transitions not supported
kube# [ 3.680273] powernow_k8: Power state transitions not supported
kube# [ 3.681522] powernow_k8: Power state transitions not supported
kube# [ 3.682707] powernow_k8: Power state transitions not supported
kube# [ 3.683586] powernow_k8: Power state transitions not supported
kube# [ 3.684426] powernow_k8: Power state transitions not supported
kube# [ 3.685527] powernow_k8: Power state transitions not supported
kube# [ 3.686458] powernow_k8: Power state transitions not supported
kube# [ 3.687400] powernow_k8: Power state transitions not supported
kube# [ 3.700856] [drm] Found bochs VGA, ID 0xb0c0.
kube# [ 3.701712] [drm] Framebuffer size 16384 kB @ 0xfd000000, mmio @ 0xfebd0000.
kube# [ 3.702676] [TTM] Zone kernel: Available graphics memory: 1021090 kiB
kube# [ 3.703691] [TTM] Initializing pool allocator
kube# [ 3.704430] [TTM] Initializing DMA pool allocator
kube# [ 3.678209] systemd[1]: Started Firewall.
kube# [ 3.750436] fbcon: bochsdrmfb (fb0) is primary device
kube# [ 3.839063] Console: switching to colour frame buffer device 128x48
kube# [ 3.925674] bochs-drm 0000:00:02.0: fb0: bochsdrmfb frame buffer device
kube# [ 3.932301] [drm] Initialized bochs-drm 1.0.0 20130925 for 0000:00:02.0 on minor 0
kube# [ 3.936366] powernow_k8: Power state transitions not supported
kube# [ 3.937002] powernow_k8: Power state transitions not supported
kube# [ 3.937637] powernow_k8: Power state transitions not supported
kube# [ 3.938296] powernow_k8: Power state transitions not supported
kube# [ 3.938944] powernow_k8: Power state transitions not supported
kube# [ 3.939666] powernow_k8: Power state transitions not supported
kube# [ 3.940565] powernow_k8: Power state transitions not supported
kube# [ 3.941269] powernow_k8: Power state transitions not supported
kube# [ 3.941387] EDAC MC: Ver: 3.0.0
kube# [ 3.941912] powernow_k8: Power state transitions not supported
kube# [ 3.941921] powernow_k8: Power state transitions not supported
kube# [ 3.943695] powernow_k8: Power state transitions not supported
kube# [ 3.944317] powernow_k8: Power state transitions not supported
kube# [ 3.945080] powernow_k8: Power state transitions not supported
kube# [ 3.945759] powernow_k8: Power state transitions not supported
kube# [ 3.946397] powernow_k8: Power state transitions not supported
kube# [ 3.946977] powernow_k8: Power state transitions not supported
kube# [ 3.965186] MCE: In-kernel MCE decoding enabled.
kube# [ 3.965239] powernow_k8: Power state transitions not supported
kube# [ 3.966652] powernow_k8: Power state transitions not supported
kube# [ 3.967261] powernow_k8: Power state transitions not supported
kube# [ 3.967889] powernow_k8: Power state transitions not supported
kube# [ 3.968933] powernow_k8: Power state transitions not supported
kube# [ 3.969681] powernow_k8: Power state transitions not supported
kube# [ 3.970440] powernow_k8: Power state transitions not supported
kube# [ 3.971123] powernow_k8: Power state transitions not supported
kube# [ 3.971735] powernow_k8: Power state transitions not supported
kube# [ 3.972359] powernow_k8: Power state transitions not supported
kube# [ 3.972966] powernow_k8: Power state transitions not supported
kube# [ 3.973607] powernow_k8: Power state transitions not supported
kube# [ 3.974376] powernow_k8: Power state transitions not supported
kube# [ 3.974980] powernow_k8: Power state transitions not supported
kube# [ 3.975616] powernow_k8: Power state transitions not supported
kube# [ 3.976261] powernow_k8: Power state transitions not supported
kube# [ 4.018581] powernow_k8: Power state transitions not supported
kube# [ 4.019217] powernow_k8: Power state transitions not supported
kube# [ 4.019924] powernow_k8: Power state transitions not supported
kube# [ 4.020534] powernow_k8: Power state transitions not supported
kube# [ 4.021170] powernow_k8: Power state transitions not supported
kube# [ 4.021945] powernow_k8: Power state transitions not supported
kube# [ 4.022559] powernow_k8: Power state transitions not supported
kube# [ 4.023190] powernow_k8: Power state transitions not supported
kube# [ 4.023806] powernow_k8: Power state transitions not supported
kube# [ 4.024428] powernow_k8: Power state transitions not supported
kube# [ 4.025063] powernow_k8: Power state transitions not supported
kube# [ 4.025856] powernow_k8: Power state transitions not supported
kube# [ 4.026489] powernow_k8: Power state transitions not supported
kube# [ 4.027325] powernow_k8: Power state transitions not supported
kube# [ 4.028026] powernow_k8: Power state transitions not supported
kube# [ 4.028620] powernow_k8: Power state transitions not supported
kube# [ 4.072097] powernow_k8: Power state transitions not supported
kube# [ 4.072849] powernow_k8: Power state transitions not supported
kube# [ 4.073464] powernow_k8: Power state transitions not supported
kube# [ 4.074069] powernow_k8: Power state transitions not supported
kube# [ 4.074696] powernow_k8: Power state transitions not supported
kube# [ 4.075307] powernow_k8: Power state transitions not supported
kube# [ 4.075937] powernow_k8: Power state transitions not supported
kube# [ 4.076546] powernow_k8: Power state transitions not supported
kube# [ 4.077143] powernow_k8: Power state transitions not supported
kube# [ 4.077790] powernow_k8: Power state transitions not supported
kube# [ 4.078396] powernow_k8: Power state transitions not supported
kube# (message repeated 69 more times, timestamps 4.079171 through 4.224339)
kube# [ 4.183421] systemd-udevd[704]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
kube# [ 4.248643] powernow_k8: Power state transitions not supported
kube# (message repeated 15 more times, timestamps 4.249270 through 4.258771)
kube# [ 4.205375] systemd[1]: Found device /dev/hvc0.
kube# [ 4.270322] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
kube# [ 4.286777] powernow_k8: Power state transitions not supported
kube# (message repeated 15 more times, timestamps 4.287785 through 4.300919)
kube# [ 4.302195] ppdev: user-space parallel port driver
kube# [ 4.252402] udevadm[637]: systemd-udev-settle.service is deprecated.
kube# [ 4.333183] powernow_k8: Power state transitions not supported
kube# (message repeated 79 more times, timestamps 4.334440 through 4.649616)
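The powernow_k8 probe failure above is emitted once per virtual CPU each time the cpufreq driver is loaded; QEMU's emulated AMD CPU does not expose the power-state interface, so the messages are harmless noise. A minimal sketch of silencing them, assuming a writable /etc/modprobe.d (on NixOS this would more idiomatically go through boot.blacklistedKernelModules):

  # stop udev from probing the module on every CPU
  echo 'blacklist powernow_k8' > /etc/modprobe.d/powernow_k8-blacklist.conf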
kube# [ 4.664308] systemd[1]: Started udev Wait for Complete Device Initialization.
kube# [ 4.665494] systemd[1]: Reached target System Initialization.
kube# [ 4.666244] systemd[1]: Started Daily Cleanup of Temporary Directories.
kube# [ 4.666961] systemd[1]: Reached target Timers.
kube# [ 4.667579] systemd[1]: Listening on D-Bus System Message Bus Socket.
kube# [ 4.668410] systemd[1]: Starting Docker Socket for the API.
kube# [ 4.669049] systemd[1]: Listening on Nix Daemon Socket.
kube# [ 4.669947] systemd[1]: Listening on Docker Socket for the API.
kube# [ 4.670649] systemd[1]: Reached target Sockets.
kube# [ 4.671215] systemd[1]: Reached target Basic System.
kube# [ 4.671907] systemd[1]: Starting Kernel Auditing...
kube# [ 4.672881] systemd[1]: Started backdoor.service.
kube# [ 4.674060] systemd[1]: Starting DHCP Client...
kube# [ 4.675592] systemd[1]: Started Kubernetes certmgr bootstrapper.
kube# [ 4.677130] systemd[1]: Starting Name Service Cache Daemon...
kube# [ 4.678589] systemd[1]: Starting resolvconf update...
kube# connecting to host...
kube# [ 4.686766] nscd[789]: 789 monitoring file `/etc/passwd` (1)
kube# [ 4.687093] nscd[789]: 789 monitoring directory `/etc` (2)
kube# [ 4.687774] s2mx5lspihy60gwff025p4dbs08yzgyi-unit-script-kube-certmgr-bootstrap-start[784]: touch: cannot touch '/var/lib/kubernetes/secrets/apitoken.secret': No such file or directory
kube# [ 4.688128] nscd[789]: 789 monitoring file `/etc/group` (3)
kube# [ 4.688453] nscd[789]: 789 monitoring directory `/etc` (2)
kube# [ 4.688651] nscd[789]: 789 monitoring file `/etc/hosts` (4)
kube# [ 4.688952] nscd[789]: 789 monitoring directory `/etc` (2)
kube# [ 4.690939] nscd[789]: 789 disabled inotify-based monitoring for file `/etc/resolv.conf': No such file or directory
kube# [ 4.691325] nscd[789]: 789 stat failed for file `/etc/resolv.conf'; will try again later: No such file or directory
kube# [ 4.693029] nscd[789]: 789 monitoring file `/etc/services` (5)
kube# [ 4.693364] nscd[789]: 789 monitoring directory `/etc` (2)
kube# [ 4.693760] nscd[789]: 789 disabled inotify-based monitoring for file `/etc/netgroup': No such file or directory
kube# [ 4.694100] nscd[789]: 789 stat failed for file `/etc/netgroup'; will try again later: No such file or directory
kube# [ 4.696437] s2mx5lspihy60gwff025p4dbs08yzgyi-unit-script-kube-certmgr-bootstrap-start[784]: /nix/store/s2mx5lspihy60gwff025p4dbs08yzgyi-unit-script-kube-certmgr-bootstrap-start: line 16: /var/lib/kubernetes/secrets/ca.pem: No such file or directory
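Both bootstrap failures above stem from /var/lib/kubernetes/secrets not existing yet when the script runs. A minimal guard, with the directory and file names taken from the messages above rather than from the actual unit script:

  secrets=/var/lib/kubernetes/secrets
  # create the state directory before writing anything into it
  mkdir -p "$secrets"
  touch "$secrets/apitoken.secret"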
kube# [ 4.699825] 3j5xawpr21sl93gg17ng2xhw943msvhn-audit-disable[781]: No rules
kube# [ 4.702469] dhcpcd[783]: dev: loaded udev
kube# [ 4.705053] systemd[1]: Started Kernel Auditing.
kube# [ 4.767341] 8021q: 802.1Q VLAN Support v1.8
kube# [ 4.723533] systemd[1]: Started D-Bus System Message Bus.
kube# [ 4.729515] systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
kube# [ 4.741149] s2mx5lspihy60gwff025p4dbs08yzgyi-unit-script-kube-certmgr-bootstrap-start[784]: curl: (7) Couldn't connect to server
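Exit code 7 means the bootstrap curl raced the CFSSL server, which only starts listening about a second later ("Now listening on https://0.0.0.0:8888" below). A hedged sketch of a bounded retry the script could use; the URL is an assumption, only port 8888 and the info endpoint come from this log:

  # retry until the CA endpoint answers, giving up after ~30s
  for i in $(seq 1 30); do
    curl -skSf -d '{"label": ""}' https://localhost:8888/api/v1/cfssl/info && break
    sleep 1
  done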
kube: connected to guest root shell
kube# sh: cannot set terminal process group (-1): Inappropriate ioctl for device
kube# sh: no job control in this shell
kube# [ 4.817544] cfg80211: Loading compiled-in X.509 certificates for regulatory database
kube# [ 4.772821] dbus-daemon[810]: dbus[810]: Unknown username "systemd-timesync" in message bus configuration file
kube: (connecting took 5.38 seconds)
(5.38 seconds)
kube# [ 4.833223] cfg80211: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
kube# [ 4.834966] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
kube# [ 4.835905] cfg80211: failed to load regulatory.db
kube# [ 4.813098] systemd[1]: kube-certmgr-bootstrap.service: Main process exited, code=exited, status=1/FAILURE
kube# [ 4.813696] systemd[1]: kube-certmgr-bootstrap.service: Failed with result 'exit-code'.
kube# [ 4.817359] systemd[1]: nscd.service: Succeeded.
kube# [ 4.817614] systemd[1]: Stopped Name Service Cache Daemon.
kube# [ 4.819297] systemd[1]: Starting Name Service Cache Daemon...
kube# [ 4.827031] nscd[849]: 849 monitoring file `/etc/passwd` (1)
kube# [ 4.827334] nscd[849]: 849 monitoring directory `/etc` (2)
kube# [ 4.827660] nscd[849]: 849 monitoring file `/etc/group` (3)
kube# [ 4.827997] nscd[849]: 849 monitoring directory `/etc` (2)
kube# [ 4.828414] nscd[849]: 849 monitoring file `/etc/hosts` (4)
kube# [ 4.828709] nscd[849]: 849 monitoring directory `/etc` (2)
kube# [ 4.829045] nscd[849]: 849 monitoring file `/etc/resolv.conf` (5)
kube# [ 4.829632] nscd[849]: 849 monitoring directory `/etc` (2)
kube# [ 4.829964] nscd[849]: 849 monitoring file `/etc/services` (6)
kube# [ 4.830504] nscd[849]: 849 monitoring directory `/etc` (2)
kube# [ 4.831672] nscd[849]: 849 disabled inotify-based monitoring for file `/etc/netgroup': No such file or directory
kube# [ 4.831865] nscd[849]: 849 stat failed for file `/etc/netgroup'; will try again later: No such file or directory
kube# [ 4.836511] systemd[1]: Started resolvconf update.
kube# [ 4.836828] systemd[1]: Reached target Network (Pre).
kube# [ 4.837961] systemd[1]: Starting Address configuration of eth1...
kube# [ 4.839254] systemd[1]: Starting Link configuration of eth1...
kube# [ 4.840766] systemd[1]: Started Name Service Cache Daemon.
kube# [ 4.842746] systemd[1]: Reached target Host and Network Name Lookups.
kube# [ 4.844538] systemd[1]: Reached target User and Group Name Lookups.
kube# [ 4.846554] hyzgkj4862kyjdfrp1qq8vmmrm85zlm6-unit-script-network-link-eth1-start[852]: Configuring link...
kube# [ 4.848423] systemd[1]: Starting Login Service...
kube# [ 4.915685] 8021q: adding VLAN 0 to HW filter on device eth1
kube# [ 4.860238] mn1g2a6qvkb8wddqmf7bgnb00q634fh2-unit-script-network-addresses-eth1-start[851]: adding address 192.168.1.1/24... done
kube# [ 4.862893] hyzgkj4862kyjdfrp1qq8vmmrm85zlm6-unit-script-network-link-eth1-start[852]: bringing up interface... done
kube# [ 4.864837] systemd[1]: Started Link configuration of eth1.
kube# [ 4.866072] systemd[1]: Reached target All Network Interfaces (deprecated).
kube# [ 4.871640] systemd[1]: Started Address configuration of eth1.
kube# [ 4.872895] systemd[1]: Starting Networking Setup...
kube# [ 4.918894] nscd[849]: 849 monitored file `/etc/resolv.conf` was written to
kube# [ 4.928953] systemd[1]: Stopping Name Service Cache Daemon...
kube# [ 4.939086] systemd[1]: Started Networking Setup.
kube# [ 4.940293] systemd[1]: Starting Extra networking commands....
kube# [ 4.941864] systemd[1]: nscd.service: Succeeded.
kube# [ 4.943277] systemd[1]: Stopped Name Service Cache Daemon.
kube# [ 4.944645] systemd[1]: Starting Name Service Cache Daemon...
kube# [ 4.947160] systemd[1]: Started Extra networking commands..
kube# [ 4.948556] systemd[1]: Reached target Network.
kube# [ 4.949960] systemd[1]: Starting CFSSL CA API server...
kube# [ 4.952228] systemd[1]: Starting etcd key-value store...
kube# [ 4.955160] nscd[922]: 922 monitoring file `/etc/passwd` (1)
kube# [ 4.958600] systemd[1]: Started Kubernetes APIServer Service.
kube# [ 4.961646] nscd[922]: 922 monitoring directory `/etc` (2)
kube# [ 4.965047] systemd[1]: Starting Kubernetes addon manager...
kube# [ 4.968506] nscd[922]: 922 monitoring file `/etc/group` (3)
kube# [ 4.972877] systemd[1]: Started Kubernetes Controller Manager Service.
kube# [ 4.975970] nscd[922]: 922 monitoring directory `/etc` (2)
kube# [ 4.978963] systemd[1]: Started Kubernetes Proxy Service.
kube# [ 4.981747] nscd[922]: 922 monitoring file `/etc/hosts` (4)
kube# [ 4.984840] systemd[1]: Started Kubernetes Scheduler Service.
kube# [ 4.987401] nscd[922]: 922 monitoring directory `/etc` (2)
kube# [ 4.990623] systemd[1]: Starting Permit User Sessions...
kube# [ 4.993430] nscd[922]: 922 monitoring file `/etc/resolv.conf` (5)
kube# [ 4.996867] systemd[1]: Started Name Service Cache Daemon.
kube# [ 4.999446] nscd[922]: 922 monitoring directory `/etc` (2)
kube# [ 5.002326] systemd[1]: Started Permit User Sessions.
kube# [ 5.004651] nscd[922]: 922 monitoring file `/etc/services` (6)
kube# [ 5.008578] systemd[1]: Started Getty on tty1.
kube# [ 5.011085] nscd[922]: 922 monitoring directory `/etc` (2)
kube# [ 5.013539] systemd[1]: Reached target Login Prompts.
kube# [ 5.015889] nscd[922]: 922 disabled inotify-based monitoring for file `/etc/netgroup': No such file or directory
kube# [ 5.018373] nscd[922]: 922 stat failed for file `/etc/netgroup'; will try again later: No such file or directory
kube# [ 5.044615] systemd[863]: systemd-logind.service: Executable /sbin/modprobe missing, skipping: No such file or directory
kube# [ 5.239921] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[920]: 2020/01/27 01:25:12 [INFO] generate received request
kube# [ 5.242890] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[920]: 2020/01/27 01:25:12 [INFO] received CSR
kube# [ 5.245788] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[920]: 2020/01/27 01:25:12 [INFO] generating key: rsa-2048
kube# [ 5.285288] systemd-logind[958]: New seat seat0.
kube# [ 5.291604] systemd-logind[958]: Watching system buttons on /dev/input/event2 (Power Button)
kube# [ 5.294317] systemd-logind[958]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
kube# [ 5.296788] Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube# systemd[1]: Started Login Service.
kube# [ 5.304257] systemd[1]: kube-addon-manager.service: Control process exited, code=exited, status=1/FAILURE
kube: exit status 1
kube# [ 5.306610] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[924]: Error in configuration:
(5.92 seconds)
kube# [ 5.308588] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[924]: * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# [ 5.311569] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[924]: * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# [ 5.314367] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[924]: * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube# [ 5.316972] systemd[1]: kube-addon-manager.service: Failed with result 'exit-code'.
kube# [ 5.318903] systemd[1]: Failed to start Kubernetes addon manager.
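The addon manager's pre-start hook fails because its cluster-admin kubeconfig points at certificates that have not been issued yet. Once certmgr has produced them, a kubeconfig of roughly this shape would satisfy all three checks; the paths and the names "local" and "cluster-admin" are copied from the error messages, and the server address is the one the apiserver reports below:

  kubectl config set-cluster local \
    --certificate-authority=/var/lib/kubernetes/secrets/ca.pem \
    --server=https://192.168.1.1:6443
  kubectl config set-credentials cluster-admin \
    --client-certificate=/var/lib/kubernetes/secrets/cluster-admin.pem \
    --client-key=/var/lib/kubernetes/secrets/cluster-admin-key.pem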
kube# [ 5.368440] etcd[921]: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd.local:2379
kube# [ 5.370780] etcd[921]: recognized and used environment variable ETCD_CERT_FILE=/var/lib/kubernetes/secrets/etcd.pem
kube# [ 5.372919] etcd[921]: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=1
kube# [ 5.374682] etcd[921]: recognized and used environment variable ETCD_DATA_DIR=/var/lib/etcd
kube# [ 5.376501] etcd[921]: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd.local:2380
kube# [ 5.378533] etcd[921]: recognized and used environment variable ETCD_INITIAL_CLUSTER=kube.my.xzy=https://etcd.local:2380
kube# [ 5.380525] etcd[921]: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=new
kube# [ 5.382254] etcd[921]: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
kube# [ 5.384246] etcd[921]: recognized and used environment variable ETCD_KEY_FILE=/var/lib/kubernetes/secrets/etcd-key.pem
kube# [ 5.386227] etcd[921]: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://127.0.0.1:2379
kube# [ 5.388119] etcd[921]: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://127.0.0.1:2380
kube# [ 5.390035] etcd[921]: recognized and used environment variable ETCD_NAME=kube.my.xzy
kube# [ 5.391757] etcd[921]: recognized and used environment variable ETCD_PEER_CERT_FILE=/var/lib/kubernetes/secrets/etcd.pem
kube# [ 5.393629] etcd[921]: recognized and used environment variable ETCD_PEER_KEY_FILE=/var/lib/kubernetes/secrets/etcd-key.pem
kube# [ 5.395456] etcd[921]: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/var/lib/kubernetes/secrets/ca.pem
kube# [ 5.397294] etcd[921]: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/var/lib/kubernetes/secrets/ca.pem
kube# [ 5.399284] etcd[921]: unrecognized environment variable ETCD_DISCOVERY=
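etcd takes its entire configuration from ETCD_* environment variables here; each variable maps one-to-one onto the flag of the same name. The equivalent shell session, with values copied from the lines above (a sketch, not the unit's actual environment file):

  export ETCD_NAME=kube.my.xzy
  export ETCD_DATA_DIR=/var/lib/etcd
  export ETCD_LISTEN_CLIENT_URLS=https://127.0.0.1:2379
  etcd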
kube# [ 5.400910] etcd[921]: etcd Version: 3.3.13
kube# [ 5.402369] etcd[921]: Git SHA: Not provided (use ./build instead of go build)
kube# [ 5.404065] etcd[921]: Go Version: go1.12.9
kube# [ 5.405436] etcd[921]: Go OS/Arch: linux/amd64
kube# [ 5.406673] etcd[921]: setting maximum number of CPUs to 16, total number of available CPUs is 16
kube# [ 5.408323] etcd[921]: failed to detect default host (could not find default route)
kube# [ 5.409878] etcd[921]: peerTLS: cert = /var/lib/kubernetes/secrets/etcd.pem, key = /var/lib/kubernetes/secrets/etcd-key.pem, ca = , trusted-ca = /var/lib/kubernetes/secrets/ca.pem, client-cert-auth = false, crl-file =
kube# [ 5.412299] etcd[921]: open /var/lib/kubernetes/secrets/etcd.pem: no such file or directory
kube# [ 5.414102] systemd[1]: etcd.service: Main process exited, code=exited, status=1/FAILURE
kube# [ 5.437237] systemd[1]: etcd.service: Failed with result 'exit-code'.
kube# [ 5.438638] systemd[1]: Failed to start etcd key-value store.
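etcd dies for the same underlying reason as the other components: its TLS files under /var/lib/kubernetes/secrets have not been written. A sketch of a pre-start wait that would serialize the startup, assuming the file names from the log (in NixOS the cleaner fix is a systemd ordering dependency on certmgr):

  # block unit start-up until certmgr has delivered the etcd certificates
  until [ -f /var/lib/kubernetes/secrets/etcd.pem ] &&
        [ -f /var/lib/kubernetes/secrets/etcd-key.pem ]; do
    sleep 1
  done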
kube# [ 5.536024] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[920]: 2020/01/27 01:25:12 [INFO] encoded CSR
kube# [ 5.541035] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[920]: 2020/01/27 01:25:12 [INFO] signed certificate with serial number 494375870388767261237973923547941375095114901807
kube# [ 5.550151] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[920]: 2020/01/27 01:25:12 [INFO] generate received request
kube# [ 5.553102] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[920]: 2020/01/27 01:25:12 [INFO] received CSR
kube# [ 5.554879] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[920]: 2020/01/27 01:25:12 [INFO] generating key: rsa-2048
kube# [ 5.575673] kube-proxy[926]: W0127 01:25:12.514401 926 server.go:216] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
kube# [ 5.593527] kube-proxy[926]: W0127 01:25:12.532579 926 proxier.go:500] Failed to read file /lib/modules/4.19.95/modules.builtin with error open /lib/modules/4.19.95/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 5.597806] kube-proxy[926]: W0127 01:25:12.536888 926 proxier.go:513] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 5.600295] kube-proxy[926]: W0127 01:25:12.537177 926 proxier.go:513] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 5.602831] kube-proxy[926]: W0127 01:25:12.537554 926 proxier.go:513] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 5.605667] kube-proxy[926]: W0127 01:25:12.539139 926 proxier.go:513] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 5.608983] kube-proxy[926]: W0127 01:25:12.539394 926 proxier.go:513] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 5.617326] kube-proxy[926]: F0127 01:25:12.556406 926 server.go:449] invalid configuration: [unable to read client-cert /var/lib/kubernetes/secrets/kube-proxy-client.pem for kube-proxy due to open /var/lib/kubernetes/secrets/kube-proxy-client.pem: no such file or directory, unable to read client-key /var/lib/kubernetes/secrets/kube-proxy-client-key.pem for kube-proxy due to open /var/lib/kubernetes/secrets/kube-proxy-client-key.pem: no such file or directory, unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory]
kube# [ 5.622869] systemd[1]: kube-proxy.service: Main process exited, code=exited, status=255/EXCEPTION
kube# [ 5.624462] systemd[1]: kube-proxy.service: Failed with result 'exit-code'.
kube# [ 5.660874] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[920]: 2020/01/27 01:25:12 [INFO] encoded CSR
kube# [ 5.662702] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[920]: 2020/01/27 01:25:12 [INFO] signed certificate with serial number 632362128417837698787394804291242333943669690952
kube# [ 5.664951] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[920]: 2020/01/27 01:25:12 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
kube# [ 5.667473] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[920]: websites. For more information see the Baseline Requirements for the Issuance and Management
kube# [ 5.670083] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[920]: of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
kube# [ 5.672663] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[920]: specifically, section 10.2.3 ("Information Requirements").
kube# [ 5.684327] systemd[1]: Started CFSSL CA API server.
kube# [ 5.692932] cfssl[1040]: 2020/01/27 01:25:12 [INFO] Initializing signer
kube# [ 5.694390] cfssl[1040]: 2020/01/27 01:25:12 [WARNING] couldn't initialize ocsp signer: open : no such file or directory
kube# [ 5.695909] cfssl[1040]: 2020/01/27 01:25:12 [WARNING] endpoint '/' is disabled: could not locate box "static"
kube# [ 5.697495] cfssl[1040]: 2020/01/27 01:25:12 [INFO] endpoint '/api/v1/cfssl/gencrl' is enabled
kube# [ 5.699008] cfssl[1040]: 2020/01/27 01:25:12 [INFO] setting up key / CSR generator
kube# [ 5.700423] cfssl[1040]: 2020/01/27 01:25:12 [INFO] endpoint '/api/v1/cfssl/newkey' is enabled
kube# [ 5.701963] cfssl[1040]: 2020/01/27 01:25:12 [INFO] endpoint '/api/v1/cfssl/scaninfo' is enabled
kube# [ 5.703479] cfssl[1040]: 2020/01/27 01:25:12 [WARNING] endpoint 'revoke' is disabled: cert db not configured (missing -db-config)
kube# [ 5.705119] cfssl[1040]: 2020/01/27 01:25:12 [WARNING] endpoint 'crl' is disabled: cert db not configured (missing -db-config)
kube# [ 5.706764] cfssl[1040]: 2020/01/27 01:25:12 [INFO] bundler API ready
kube# [ 5.708022] cfssl[1040]: 2020/01/27 01:25:12 [INFO] endpoint '/api/v1/cfssl/bundle' is enabled
kube# [ 5.709440] cfssl[1040]: 2020/01/27 01:25:12 [INFO] endpoint '/api/v1/cfssl/info' is enabled
kube# [ 5.710752] cfssl[1040]: 2020/01/27 01:25:12 [INFO] endpoint '/api/v1/cfssl/newcert' is enabled
kube# [ 5.712044] cfssl[1040]: 2020/01/27 01:25:12 [WARNING] endpoint 'ocspsign' is disabled: signer not initialized
kube# [ 5.713619] cfssl[1040]: 2020/01/27 01:25:12 [INFO] endpoint '/api/v1/cfssl/sign' is enabled
kube# [ 5.715001] cfssl[1040]: 2020/01/27 01:25:12 [INFO] endpoint '/api/v1/cfssl/authsign' is enabled
kube# [ 5.716459] cfssl[1040]: 2020/01/27 01:25:12 [INFO] endpoint '/api/v1/cfssl/certinfo' is enabled
kube# [ 5.717832] cfssl[1040]: 2020/01/27 01:25:12 [INFO] endpoint '/api/v1/cfssl/init_ca' is enabled
kube# [ 5.719275] cfssl[1040]: 2020/01/27 01:25:12 [INFO] endpoint '/api/v1/cfssl/scan' is enabled
kube# [ 5.720616] cfssl[1040]: 2020/01/27 01:25:12 [INFO] Handler set up complete.
kube# [ 5.721821] cfssl[1040]: 2020/01/27 01:25:12 [INFO] Now listening on https://0.0.0.0:8888
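With CFSSL finally serving on :8888, the endpoints enabled above can be exercised directly; in particular, the info endpoint returns the CA certificate that every failing component is waiting for. A hedged example call (endpoint and port from this log; the jq dependency is an assumption):

  curl -sk -d '{"label": ""}' https://localhost:8888/api/v1/cfssl/info \
    | jq -r .result.certificate > /var/lib/kubernetes/secrets/ca.pem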
kube# [ 5.892727] 8021q: adding VLAN 0 to HW filter on device eth0
kube# [ 5.839416] dhcpcd[783]: eth0: waiting for carrier
kube# [ 5.841234] dhcpcd[783]: eth0: carrier acquired
kube# [ 5.846330] dhcpcd[783]: DUID 00:01:00:01:25:c0:f8:78:52:54:00:12:34:56
kube# [ 5.847586] dhcpcd[783]: eth0: IAID 00:12:34:56
kube# [ 5.848468] dhcpcd[783]: eth0: adding address fe80::5054:ff:fe12:3456
kube# [ 5.931922] dhcpcd[783]: eth0: soliciting a DHCP lease
kube# [ 5.995792] NET: Registered protocol family 17
kube# [ 5.949589] dhcpcd[783]: eth0: offered 10.0.2.15 from 10.0.2.2
kube# [ 5.950900] dhcpcd[783]: eth0: leased 10.0.2.15 for 86400 seconds
kube# [ 5.952067] dhcpcd[783]: eth0: adding route to 10.0.2.0/24
kube# [ 5.953275] dhcpcd[783]: eth0: adding default route via 10.0.2.2
kube# [ 6.002391] nscd[922]: 922 monitored file `/etc/resolv.conf` was written to
kube# [ 6.012821] systemd[1]: Stopping Name Service Cache Daemon...
kube# [ 6.028864] systemd[1]: nscd.service: Succeeded.
kube# [ 6.030576] systemd[1]: Stopped Name Service Cache Daemon.
kube# [ 6.032338] systemd[1]: Starting Name Service Cache Daemon...
kube# [ 6.039989] nscd[1122]: 1122 monitoring file `/etc/passwd` (1)
kube# [ 6.041840] nscd[1122]: 1122 monitoring directory `/etc` (2)
kube# [ 6.043413] dhcpcd[783]: Failed to reload-or-try-restart ntpd.service: Unit ntpd.service not found.
kube# [ 6.045249] dhcpcd[783]: Failed to reload-or-try-restart openntpd.service: Unit openntpd.service not found.
kube# [ 6.046819] dhcpcd[783]: Failed to reload-or-try-restart chronyd.service: Unit chronyd.service not found.
kube# [ 6.048414] nscd[1122]: 1122 monitoring file `/etc/group` (3)
kube# [ 6.049734] systemd[1]: Started Name Service Cache Daemon.
kube# [ 6.051020] nscd[1122]: 1122 monitoring directory `/etc` (2)
kube# [ 6.052235] nscd[1122]: 1122 monitoring file `/etc/hosts` (4)
kube# [ 6.053436] nscd[1122]: 1122 monitoring directory `/etc` (2)
kube# [ 6.054591] nscd[1122]: 1122 monitoring file `/etc/resolv.conf` (5)
kube# [ 6.055820] nscd[1122]: 1122 monitoring directory `/etc` (2)
kube# [ 6.057015] nscd[1122]: 1122 monitoring file `/etc/services` (6)
kube# [ 6.058304] nscd[1122]: 1122 monitoring directory `/etc` (2)
kube# [ 6.059578] nscd[1122]: 1122 disabled inotify-based monitoring for file `/etc/netgroup': No such file or directory
kube# [ 6.061334] systemd[1]: Started DHCP Client.
kube# [ 6.062829] nscd[1122]: 1122 stat failed for file `/etc/netgroup'; will try again later: No such file or directory
kube# [ 6.064973] systemd[1]: Reached target Network is Online.
kube# [ 6.066516] dhcpcd[783]: forked to background, child pid 1123
kube# [ 6.067965] systemd[1]: Starting certmgr...
kube# [ 6.069680] kube-controller-manager[925]: Flag --port has been deprecated, see --secure-port instead.
kube# [ 6.071930] dhcpcd[1123]: eth0: soliciting an IPv6 router
kube# [ 6.073642] systemd[1]: Starting Docker Application Container Engine...
kube# [ 6.093046] kube-scheduler[927]: I0127 01:25:13.031672 927 serving.go:319] Generated self-signed cert in-memory
kube# [ 6.115269] kube-apiserver[923]: Flag --insecure-bind-address has been deprecated, This flag will be removed in a future version.
kube# [ 6.117310] kube-apiserver[923]: Flag --insecure-port has been deprecated, This flag will be removed in a future version.
kube# [ 6.119130] kube-apiserver[923]: I0127 01:25:13.053930 923 server.go:560] external host was not specified, using 192.168.1.1
kube# [ 6.121162] kube-apiserver[923]: I0127 01:25:13.057206 923 server.go:147] Version: v1.15.6
kube# [ 6.122827] kube-apiserver[923]: Error: unable to load server certificate: open /var/lib/kubernetes/secrets/kube-apiserver.pem: no such file or directory
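kube-apiserver aborts on the same missing secret before dumping its full usage text below. A quick way to see which of the expected secrets actually exist at this point (file names collected from the error messages throughout this log):

  for f in ca.pem kube-apiserver.pem etcd.pem cluster-admin.pem; do
    [ -f /var/lib/kubernetes/secrets/$f ] || echo "missing: $f"
  done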
kube# [ 6.130943] kube-apiserver[923]: Usage:
kube# [ 6.132213] kube-apiserver[923]: kube-apiserver [flags]
kube# [ 6.133475] kube-apiserver[923]: Generic flags:
kube# [ 6.134679] kube-apiserver[923]: --advertise-address ip The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.
kube# [ 6.138323] kube-apiserver[923]: --cloud-provider-gce-lb-src-cidrs cidrs CIDRs opened in GCE firewall for LB traffic proxy & health checks (default 130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16)
kube# [ 6.140903] kube-apiserver[923]: --cors-allowed-origins strings List of allowed origins for CORS, comma separated. An allowed origin can be a regular expression to support subdomain matching. If this list is empty CORS will not be enabled.
kube# [ 6.143793] kube-apiserver[923]: --default-not-ready-toleration-seconds int Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration. (default 300)
kube# [ 6.146630] kube-apiserver[923]: --default-unreachable-toleration-seconds int Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration. (default 300)
kube# [ 6.149540] kube-apiserver[923]: --enable-inflight-quota-handler If true, replace the max-in-flight handler with an enhanced one that queues and dispatches with priority and fairness
kube# [ 6.151962] kube-apiserver[923]: --external-hostname string The hostname to use when generating externalized URLs for this master (e.g. Swagger API Docs).
kube# [ 6.153853] kube-apiserver[923]: --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
kube# [ 6.155768] kube-apiserver[923]: APIListChunking=true|false (BETA - default=true)
kube# [ 6.157310] kube-apiserver[923]: APIResponseCompression=true|false (ALPHA - default=false)
kube# [ 6.158866] kube-apiserver[923]: AllAlpha=true|false (ALPHA - default=false)
kube# [ 6.160482] kube-apiserver[923]: AppArmor=true|false (BETA - default=true)
kube# [ 6.161934] kube-apiserver[923]: AttachVolumeLimit=true|false (BETA - default=true)
kube# [ 6.163527] kube-apiserver[923]: BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
kube# [ 6.165220] kube-apiserver[923]: BlockVolume=true|false (BETA - default=true)
kube# [ 6.166733] kube-apiserver[923]: BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
kube# [ 6.168362] kube-apiserver[923]: CPUManager=true|false (BETA - default=true)
kube# [ 6.169853] kube-apiserver[923]: CRIContainerLogRotation=true|false (BETA - default=true)
kube# [ 6.171467] kube-apiserver[923]: CSIBlockVolume=true|false (BETA - default=true)
kube# [ 6.172969] kube-apiserver[923]: CSIDriverRegistry=true|false (BETA - default=true)
kube# [ 6.174701] kube-apiserver[923]: CSIInlineVolume=true|false (ALPHA - default=false)
kube# [ 6.176477] kube-apiserver[923]: CSIMigration=true|false (ALPHA - default=false)
kube# [ 6.178138] kube-apiserver[923]: CSIMigrationAWS=true|false (ALPHA - default=false)
kube# [ 6.179811] kube-apiserver[923]: CSIMigrationAzureDisk=true|false (ALPHA - default=false)
kube# [ 6.181402] kube-apiserver[923]: CSIMigrationAzureFile=true|false (ALPHA - default=false)
kube# [ 6.182931] kube-apiserver[923]: CSIMigrationGCE=true|false (ALPHA - default=false)
kube# [ 6.184538] kube-apiserver[923]: CSIMigrationOpenStack=true|false (ALPHA - default=false)
kube# [ 6.186096] kube-apiserver[923]: CSINodeInfo=true|false (BETA - default=true)
kube# [ 6.187694] kube-apiserver[923]: CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
kube# [ 6.189315] kube-apiserver[923]: CustomResourceDefaulting=true|false (ALPHA - default=false)
kube# [ 6.191040] systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=1/FAILURE
kube# [ 6.192721] kube-apiserver[923]: CustomResourcePublishOpenAPI=true|false (BETA - default=true)
kube# [ 6.194697] kube-apiserver[923]: CustomResourceSubresources=true|false (BETA - default=true)
kube# [ 6.196828] kube-apiserver[923]: CustomResourceValidation=true|false (BETA - default=true)
kube# [ 6.198754] kube-apiserver[923]: CustomResourceWebhookConversion=true|false (BETA - default=true)
kube# [ 6.200445] kube-apiserver[923]: DebugContainers=true|false (ALPHA - default=false)
kube# [ 6.202030] kube-apiserver[923]: DevicePlugins=true|false (BETA - default=true)
kube# [ 6.203524] kube-apiserver[923]: DryRun=true|false (BETA - default=true)
kube# [ 6.205014] kube-apiserver[923]: DynamicAuditing=true|false (ALPHA - default=false)
kube# [ 6.206663] kube-apiserver[923]: DynamicKubeletConfig=true|false (BETA - default=true)
kube# [ 6.208255] kube-apiserver[923]: ExpandCSIVolumes=true|false (ALPHA - default=false)
kube# [ 6.209790] kube-apiserver[923]: ExpandInUsePersistentVolumes=true|false (BETA - default=true)
kube# [ 6.211468] kube-apiserver[923]: ExpandPersistentVolumes=true|false (BETA - default=true)
kube# [ 6.213053] kube-apiserver[923]: ExperimentalCriticalPodAnnotation=true|false (ALPHA - default=false)
kube# [ 6.214761] kube-apiserver[923]: ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
kube# [ 6.216514] kube-apiserver[923]: HyperVContainer=true|false (ALPHA - default=false)
kube# [ 6.218035] kube-apiserver[923]: KubeletPodResources=true|false (BETA - default=true)
kube# [ 6.219647] kube-apiserver[923]: LocalStorageCapacityIsolation=true|false (BETA - default=true)
kube# [ 6.221326] kube-apiserver[923]: LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
kube# [ 6.223057] systemd[1]: kube-apiserver.service: Failed with result 'exit-code'.
kube# [ 6.224404] kube-apiserver[923]: MountContainers=true|false (ALPHA - default=false)
kube# [ 6.226003] kube-apiserver[923]: NodeLease=true|false (BETA - default=true)
kube# [ 6.227494] kube-apiserver[923]: NonPreemptingPriority=true|false (ALPHA - default=false)
kube# [ 6.229024] kube-apiserver[923]: PodShareProcessNamespace=true|false (BETA - default=true)
kube# [ 6.230906] kube-apiserver[923]: ProcMountType=true|false (ALPHA - default=false)
kube# [ 6.232583] kube-apiserver[923]: QOSReserved=true|false (ALPHA - default=false)
kube# [ 6.234044] kube-apiserver[923]: RemainingItemCount=true|false (ALPHA - default=false)
kube# [ 6.235605] kube-apiserver[923]: RequestManagement=true|false (ALPHA - default=false)
kube# [ 6.237117] kube-apiserver[923]: ResourceLimitsPriorityFunction=true|false (ALPHA - default=false)
kube# [ 6.238692] kube-apiserver[923]: ResourceQuotaScopeSelectors=true|false (BETA - default=true)
kube# [ 6.240317] kube-apiserver[923]: RotateKubeletClientCertificate=true|false (BETA - default=true)
kube# [ 6.241857] kube-apiserver[923]: RotateKubeletServerCertificate=true|false (BETA - default=true)
kube# [ 6.243502] kube-apiserver[923]: RunAsGroup=true|false (BETA - default=true)
kube# [ 6.244924] kube-apiserver[923]: RuntimeClass=true|false (BETA - default=true)
kube# [ 6.246432] kube-apiserver[923]: SCTPSupport=true|false (ALPHA - default=false)
kube# [ 6.247928] kube-apiserver[923]: ScheduleDaemonSetPods=true|false (BETA - default=true)
kube# [ 6.249454] kube-apiserver[923]: ServerSideApply=true|false (ALPHA - default=false)
kube# [ 6.250973] kube-apiserver[923]: ServiceLoadBalancerFinalizer=true|false (ALPHA - default=false)
kube# [ 6.252580] kube-apiserver[923]: ServiceNodeExclusion=true|false (ALPHA - default=false)
kube# [ 6.254156] kube-apiserver[923]: StorageVersionHash=true|false (BETA - default=true)
kube# [ 6.255681] kube-apiserver[923]: StreamingProxyRedirects=true|false (BETA - default=true)
kube# [ 6.257245] kube-apiserver[923]: SupportNodePidsLimit=true|false (BETA - default=true)
kube# [ 6.258725] kube-apiserver[923]: SupportPodPidsLimit=true|false (BETA - default=true)
kube# [ 6.260259] kube-apiserver[923]: Sysctls=true|false (BETA - default=true)
kube# [ 6.261718] kube-apiserver[923]: TTLAfterFinished=true|false (ALPHA - default=false)
kube# [ 6.263284] kube-apiserver[923]: TaintBasedEvictions=true|false (BETA - default=true)
kube# [ 6.264804] kube-apiserver[923]: TaintNodesByCondition=true|false (BETA - default=true)
kube# [ 6.266315] kube-apiserver[923]: TokenRequest=true|false (BETA - default=true)
kube# [ 6.267797] kube-apiserver[923]: TokenRequestProjection=true|false (BETA - default=true)
kube# [ 6.269311] kube-apiserver[923]: ValidateProxyRedirects=true|false (BETA - default=true)
kube# [ 6.270796] kube-apiserver[923]: VolumePVCDataSource=true|false (ALPHA - default=false)
kube# [ 6.272339] kube-apiserver[923]: VolumeSnapshotDataSource=true|false (ALPHA - default=false)
kube# [ 6.273873] kube-apiserver[923]: VolumeSubpathEnvExpansion=true|false (BETA - default=true)
kube# [ 6.275413] kube-apiserver[923]: WatchBookmark=true|false (ALPHA - default=false)
kube# [ 6.276882] kube-apiserver[923]: WinDSR=true|false (ALPHA - default=false)
kube# [ 6.278301] kube-apiserver[923]: WinOverlay=true|false (ALPHA - default=false)
kube# [ 6.279773] kube-apiserver[923]: WindowsGMSA=true|false (ALPHA - default=false)
kube# [ 6.281336] kube-apiserver[923]: --master-service-namespace string DEPRECATED: the namespace from which the kubernetes master services should be injected into pods. (default "default")
kube# [ 6.283223] kube-apiserver[923]: --max-mutating-requests-inflight int The maximum number of mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. (default 200)
kube# [ 6.285322] kube-apiserver[923]: --max-requests-inflight int The maximum number of non-mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. (default 400)
kube# [ 6.287471] kube-apiserver[923]: --min-request-timeout int An optional field indicating the minimum number of seconds a handler must keep a request open before timing it out. Currently only honored by the watch request handler, which picks a randomized value above this number as the connection timeout, to spread out load. (default 1800)
kube# [ 6.290348] kube-apiserver[923]: --request-timeout duration An optional field indicating the duration a handler must keep a request open before timing it out. This is the default request timeout for requests but may be overridden by flags such as --min-request-timeout for specific types of requests. (default 1m0s)
kube# [ 6.293017] kube-apiserver[923]: --target-ram-mb int Memory limit for apiserver in MB (used to configure sizes of caches, etc.)
kube# [ 6.294804] kube-apiserver[923]: Etcd flags:
kube# [ 6.295900] kube-apiserver[923]: --default-watch-cache-size int Default watch cache size. If zero, watch cache will be disabled for resources that do not have a default watch size set. (default 100)
kube# [ 6.298000] kube-apiserver[923]: --delete-collection-workers int Number of workers spawned for DeleteCollection call. These are used to speed up namespace cleanup. (default 1)
kube# [ 6.299796] kube-apiserver[923]: --enable-garbage-collector Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-controller-manager. (default true)
kube# [ 6.301687] kube-apiserver[923]: --encryption-provider-config string The file containing configuration for encryption providers to be used for storing secrets in etcd
kube# [ 6.303513] kube-apiserver[923]: --etcd-cafile string SSL Certificate Authority file used to secure etcd communication.
kube# [ 6.305039] kube-apiserver[923]: --etcd-certfile string SSL certification file used to secure etcd communication.
kube# [ 6.306564] kube-apiserver[923]: --etcd-compaction-interval duration The interval of compaction requests. If 0, the compaction request from apiserver is disabled. (default 5m0s)
kube# [ 6.308398] kube-apiserver[923]: --etcd-count-metric-poll-period duration Frequency of polling etcd for number of resources per type. 0 disables the metric collection. (default 1m0s)
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# [ 6.310198] kube-apiserver[923]: --etcd-keyfile string SSL key file used to secure etcd communication.
kube# [ 6.311619] kube-apiserver[923]: --etcd-prefix string The prefix to prepend to all resource paths in etcd. (default "/registry")
kube# [ 6.313341] kube-apiserver[923]: --etcd-servers strings List of etcd servers to connect with (scheme://ip:port), comma separated.
kube# [ 6.314920] kube-apiserver[923]: --etcd-servers-overrides strings Per-resource etcd servers overrides, comma separated. The individual override format: group/resource#servers, where servers are URLs, semicolon separated.
kube# [ 6.317386] kube-apiserver[923]: --storage-backend string The storage backend for persistence. Options: 'etcd3' (default).
kube# [ 6.319292] kube-apiserver[923]: --storage-media-type string The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting. (default "application/vnd.kubernetes.protobuf")
kube# [ 6.322041] kube-apiserver[923]: --watch-cache Enable watch caching in the apiserver (default true)
kube# [ 6.324317] kube-apiserver[923]: --watch-cache-sizes strings Watch cache size settings for some resources (pods, nodes, etc.), comma separated. The individual setting format: resource[.group]#size, where resource is lowercase plural (no version), group is omitted for resources of apiVersion v1 (the legacy core API) and included for others, and size is a number. It takes effect when watch-cache is enabled. Some resources (replicationcontrollers, endpoints, nodes, pods, services, apiservices.apiregistration.k8s.io) have system defaults set by heuristics, others default to default-watch-cache-size
kube# [ 6.329117] kube-apiserver[923]: Secure serving flags:
kube# [ 6.330308] kube-apiserver[923]: --bind-address ip The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank, all interfaces will be used (0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 0.0.0.0)
kube# [ 6.333702] kube-apiserver[923]: --cert-dir string The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored. (default "/var/run/kubernetes")
kube# [ 6.336643] kube-apiserver[923]: --http2-max-streams-per-connection int The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
kube# [ 6.339113] kube-apiserver[923]: --secure-port int The port on which to serve HTTPS with authentication and authorization.It cannot be switched off with 0. (default 6443)
kube# [ 6.341413] kube-apiserver[923]: --tls-cert-file string File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
kube# [ 6.345229] kube-apiserver[923]: --tls-cipher-suites strings Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be use. Possible values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA
kube# [ 6.352529] kube-apiserver[923]: --tls-min-version string Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
kube# [ 6.354606] kube-apiserver[923]: --tls-private-key-file string File containing the default x509 private key matching --tls-cert-file.
kube# [ 6.356394] kube-apiserver[923]: --tls-sni-cert-key namedCertKey A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])
kube# [ 6.361321] kube-apiserver[923]: Insecure serving flags:
kube# [ 6.362536] kube-apiserver[923]: --address ip The IP address on which to serve the insecure --port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 127.0.0.1) (DEPRECATED: see --bind-address instead.)
kube# [ 6.365009] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1134]: 2020/01/27 01:25:13 [INFO] certmgr: loading from config file /nix/store/bmm143bjzpgvrw7k50r36c5smy1n4pqm-certmgr.yaml
kube# [ 6.367129] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1134]: 2020/01/27 01:25:13 [INFO] manager: loading certificates from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d
kube# [ 6.369094] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1134]: 2020/01/27 01:25:13 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/addonManager.json
kube# [ 6.371026] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1134]: 2020/01/27 01:25:13 [ERROR] cert: failed to fetch remote CA: open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube# [ 6.372988] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1134]: Failed: open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube# [ 6.374762] systemd[1]: certmgr.service: Control process exited, code=exited, status=1/FAILURE
kube# [ 6.376021] kube-apiserver[923]: --insecure-bind-address ip The IP address on which to serve the --insecure-port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 127.0.0.1) (DEPRECATED: This flag will be removed in a future version.)
kube# [ 6.378237] kube-apiserver[923]: --insecure-port int The port on which to serve unsecured, unauthenticated access. (default 8080) (DEPRECATED: This flag will be removed in a future version.)
kube# [ 6.380023] kube-apiserver[923]: --port int The port on which to serve unsecured, unauthenticated access. Set to 0 to disable. (default 8080) (DEPRECATED: see --secure-port instead.)
kube# [ 6.382094] kube-apiserver[923]: Auditing flags:
kube# [ 6.383134] kube-apiserver[923]: --audit-dynamic-configuration Enables dynamic audit configuration. This feature also requires the DynamicAuditing feature flag
kube# [ 6.384984] kube-apiserver[923]: --audit-log-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube#
kube: exit status 1
(0.08 seconds)
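This is the readiness probe the test driver retries in a loop while the control plane bootstraps; it exits non-zero until the node object exists and reports Ready. Run by hand, the equivalent poll would look like this sketch (node name from the log, timeout illustrative):

  # poll node readiness, giving up after ~5 minutes
  for i in $(seq 1 60); do
    kubectl get node kube.my.xzy | grep -w Ready && exit 0
    sleep 5
  done
  exit 1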
kube# [ 6.390538] kube-apiserver[923]: --audit-log-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 1)
kube# [ 6.392128] kube-apiserver[923]: --audit-log-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.
kube# [ 6.394029] kube-apiserver[923]: --audit-log-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.
kube# [ 6.395879] kube-apiserver[923]: --audit-log-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode.
kube# [ 6.397462] kube-apiserver[923]: --audit-log-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode.
kube# [ 6.399002] kube-apiserver[923]: --audit-log-format string Format of saved audits. "legacy" indicates 1-line text format for each event. "json" indicates structured json format. Known formats are legacy,json. (default "json")
kube# [ 6.401214] kube-apiserver[923]: --audit-log-maxage int The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.
kube# [ 6.403065] systemd[1]: certmgr.service: Failed with result 'exit-code'.
kube# [ 6.404265] kube-apiserver[923]: --audit-log-maxbackup int The maximum number of old audit log files to retain.
kube# [ 6.405744] kube-apiserver[923]: --audit-log-maxsize int The maximum size in megabytes of the audit log file before it gets rotated.
kube# [ 6.407312] kube-apiserver[923]: --audit-log-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "blocking")
kube# [ 6.409918] kube-apiserver[923]: --audit-log-path string If set, all requests coming to the apiserver will be logged to this file. '-' means standard out.
kube# [ 6.411617] kube-apiserver[923]: --audit-log-truncate-enabled Whether event and batch truncating is enabled.
kube# [ 6.413036] kube-apiserver[923]: --audit-log-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
kube# [ 6.415582] kube-apiserver[923]: --audit-log-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
kube# [ 6.418158] kube-apiserver[923]: --audit-log-version string API group and version used for serializing audit events written to log. (default "audit.k8s.io/v1")
kube# [ 6.419904] kube-apiserver[923]: --audit-policy-file string Path to the file that defines the audit policy configuration.
kube# [ 6.421406] kube-apiserver[923]: --audit-webhook-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
kube# [ 6.423251] kube-apiserver[923]: --audit-webhook-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 400)
kube# [ 6.424871] kube-scheduler[927]: W0127 01:25:13.319257 927 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
kube# [ 6.427283] kube-scheduler[927]: W0127 01:25:13.319302 927 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
kube# [ 6.429775] kube-scheduler[927]: W0127 01:25:13.319335 927 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
kube# [ 6.431467] kube-scheduler[927]: invalid configuration: [unable to read client-cert /var/lib/kubernetes/secrets/kube-scheduler-client.pem for kube-scheduler due to open /var/lib/kubernetes/secrets/kube-scheduler-client.pem: no such file or directory, unable to read client-key /var/lib/kubernetes/secrets/kube-scheduler-client-key.pem for kube-scheduler due to open /var/lib/kubernetes/secrets/kube-scheduler-client-key.pem: no such file or directory, unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory]
kube# [ 6.435991] systemd[1]: Failed to start certmgr.
kube# [ 6.436947] kube-apiserver[923]: --audit-webhook-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode. (default 30s)
kube# [ 6.438876] kube-apiserver[923]: --audit-webhook-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode. (default 15)
kube# [ 6.440793] kube-apiserver[923]: --audit-webhook-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode. (default true)
kube# [ 6.442397] kube-apiserver[923]: --audit-webhook-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode. (default 10)
kube# [ 6.443957] kube-apiserver[923]: --audit-webhook-config-file string Path to a kubeconfig formatted file that defines the audit webhook configuration.
kube# [ 6.445630] kube-apiserver[923]: --audit-webhook-initial-backoff duration The amount of time to wait before retrying the first failed request. (default 10s)
kube# [ 6.447238] kube-apiserver[923]: --audit-webhook-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "batch")
kube# [ 6.449714] kube-apiserver[923]: --audit-webhook-truncate-enabled Whether event and batch truncating is enabled.
kube# [ 6.451083] kube-apiserver[923]: --audit-webhook-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
kube# [ 6.453573] kube-apiserver[923]: --audit-webhook-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
kube# [ 6.456208] kube-apiserver[923]: --audit-webhook-version string API group and version used for serializing audit events written to webhook. (default "audit.k8s.io/v1")
kube# [ 6.457894] kube-apiserver[923]: Features flags:
kube# [ 6.458873] systemd[1]: kube-scheduler.service: Main process exited, code=exited, status=1/FAILURE
kube# [ 6.460255] kube-apiserver[923]: --contention-profiling Enable lock contention profiling, if profiling is enabled
kube# [ 6.461567] kube-apiserver[923]: --profiling Enable profiling via web interface host:port/debug/pprof/ (default true)
kube# [ 6.462995] kube-apiserver[923]: Authentication flags:
kube# [ 6.463936] kube-apiserver[923]: --anonymous-auth Enables anonymous requests to the secure port of the API server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated. (default true)
kube# [ 6.466840] kube-apiserver[923]: --api-audiences strings Identifiers of the API. The service account token authenticator will validate that tokens used against the API are bound to at least one of these audiences. If the --service-account-issuer flag is configured and this flag is not, this field defaults to a single element list containing the issuer URL .
kube# [ 6.469928] kube-apiserver[923]: --authentication-token-webhook-cache-ttl duration The duration to cache responses from the webhook token authenticator. (default 2m0s)
kube# [ 6.471595] kube-apiserver[923]: --authentication-token-webhook-config-file string File with webhook configuration for token authentication in kubeconfig format. The API server will query the remote service to determine authentication for bearer tokens.
kube# [ 6.473802] kube-apiserver[923]: --basic-auth-file string If set, the file that will be used to admit requests to the secure port of the API server via http basic authentication.
kube# [ 6.475753] kube-apiserver[923]: --client-ca-file string If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
kube# [ 6.478335] kube-apiserver[923]: --enable-bootstrap-token-auth Enable to allow secrets of type 'bootstrap.kubernetes.io/token' in the 'kube-system' namespace to be used for TLS bootstrapping authentication.
kube# [ 6.480529] systemd[1]: kube-scheduler.service: Failed with result 'exit-code'.
kube# [ 6.481914] kube-apiserver[923]: --oidc-ca-file string If set, the OpenID server's certificate will be verified by one of the authorities in the oidc-ca-file, otherwise the host's root CA set will be used.
kube# [ 6.484244] kube-apiserver[923]: --oidc-client-id string The client ID for the OpenID Connect client, must be set if oidc-issuer-url is set.
kube# [ 6.486024] kube-apiserver[923]: --oidc-groups-claim string If provided, the name of a custom OpenID Connect claim for specifying user groups. The claim value is expected to be a string or array of strings. This flag is experimental, please see the authentication documentation for further details.
kube# [ 6.488724] kube-apiserver[923]: --oidc-groups-prefix string If provided, all groups will be prefixed with this value to prevent conflicts with other authentication strategies.
kube# [ 6.490697] kube-apiserver[923]: --oidc-issuer-url string The URL of the OpenID issuer, only HTTPS scheme will be accepted. If set, it will be used to verify the OIDC JSON Web Token (JWT).
kube# [ 6.492723] kube-apiserver[923]: --oidc-required-claim mapStringString A key=value pair that describes a required claim in the ID Token. If set, the claim is verified to be present in the ID Token with a matching value. Repeat this flag to specify multiple claims.
kube# [ 6.495092] kube-apiserver[923]: --oidc-signing-algs strings Comma-separated list of allowed JOSE asymmetric signing algorithms. JWTs with a 'alg' header value not in this list will be rejected. Values are defined by RFC 7518 https://tools.ietf.org/html/rfc7518#section-3.1. (default [RS256])
kube# [ 6.497642] kube-apiserver[923]: --oidc-username-claim string The OpenID claim to use as the user name. Note that claims other than the default ('sub') is not guaranteed to be unique and immutable. This flag is experimental, please see the authentication documentation for further details. (default "sub")
kube# [ 6.500306] kube-controller-manager[925]: I0127 01:25:13.419642 925 serving.go:319] Generated self-signed cert in-memory
kube# [ 6.501774] kube-controller-manager[925]: invalid configuration: [unable to read client-cert /var/lib/kubernetes/secrets/kube-controller-manager-client.pem for kube-controller-manager due to open /var/lib/kubernetes/secrets/kube-controller-manager-client.pem: no such file or directory, unable to read client-key /var/lib/kubernetes/secrets/kube-controller-manager-client-key.pem for kube-controller-manager due to open /var/lib/kubernetes/secrets/kube-controller-manager-client-key.pem: no such file or directory, unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory]
kube# [ 6.506803] systemd[1]: kube-controller-manager.service: Main process exited, code=exited, status=1/FAILURE
kube# [ 6.508089] kube-apiserver[923]: --oidc-username-prefix string If provided, all usernames will be prefixed with this value. If not provided, username claims other than 'email' are prefixed by the issuer URL to avoid clashes. To skip any prefixing, provide the value '-'.
kube# [ 6.510524] kube-apiserver[923]: --requestheader-allowed-names strings List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
kube# [ 6.513054] kube-apiserver[923]: --requestheader-client-ca-file string Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
kube# [ 6.515770] kube-apiserver[923]: --requestheader-extra-headers-prefix strings List of request header prefixes to inspect. X-Remote-Extra- is suggested.
kube# [ 6.517373] kube-apiserver[923]: --requestheader-group-headers strings List of request headers to inspect for groups. X-Remote-Group is suggested.
kube# [ 6.518933] kube-apiserver[923]: --requestheader-username-headers strings List of request headers to inspect for usernames. X-Remote-User is common.
kube# [ 6.520527] kube-apiserver[923]: --service-account-issuer string Identifier of the service account token issuer. The issuer will assert this identifier in "iss" claim of issued tokens. This value is a string or URI.
kube# [ 6.522559] kube-apiserver[923]: --service-account-key-file stringArray File containing PEM-encoded x509 RSA or ECDSA private or public keys, used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. If unspecified, --tls-private-key-file is used. Must be specified when --service-account-signing-key is provided
kube# [ 6.525733] kube-apiserver[923]: --service-account-lookup If true, validate ServiceAccount tokens exist in etcd as part of authentication. (default true)
kube# [ 6.527588] systemd[1]: kube-controller-manager.service: Failed with result 'exit-code'.
kube# [ 6.528784] kube-apiserver[923]: --service-account-max-token-expiration duration The maximum validity duration of a token created by the service account token issuer. If an otherwise valid TokenRequest with a validity duration larger than this value is requested, a token will be issued with a validity duration of this value.
kube# [ 6.531442] kube-apiserver[923]: --token-auth-file string If set, the file that will be used to secure the secure port of the API server via token authentication.
kube# [ 6.533260] kube-apiserver[923]: Authorization flags:
kube# [ 6.534156] kube-apiserver[923]: --authorization-mode strings Ordered list of plug-ins to do authorization on secure port. Comma-delimited list of: AlwaysAllow,AlwaysDeny,ABAC,Webhook,RBAC,Node. (default [AlwaysAllow])
kube# [ 6.536392] kube-apiserver[923]: --authorization-policy-file string File with authorization policy in json line by line format, used with --authorization-mode=ABAC, on the secure port.
kube# [ 6.538310] kube-apiserver[923]: --authorization-webhook-cache-authorized-ttl duration The duration to cache 'authorized' responses from the webhook authorizer. (default 5m0s)
kube# [ 6.539990] kube-apiserver[923]: --authorization-webhook-cache-unauthorized-ttl duration The duration to cache 'unauthorized' responses from the webhook authorizer. (default 30s)
kube# [ 6.541719] kube-apiserver[923]: --authorization-webhook-config-file string File with webhook configuration in kubeconfig format, used with --authorization-mode=Webhook. The API server will query the remote service to determine access on the API server's secure port.
kube# [ 6.544062] kube-apiserver[923]: Cloud provider flags:
kube# [ 6.545041] kube-apiserver[923]: --cloud-config string The path to the cloud provider configuration file. Empty string for no configuration file.
kube# [ 6.546726] kube-apiserver[923]: --cloud-provider string The provider for cloud services. Empty string for no provider.
kube# [ 6.548074] kube-apiserver[923]: Api enablement flags:
kube# [ 6.549087] kube-apiserver[923]: --runtime-config mapStringString A set of key=value pairs that describe runtime configuration that may be passed to apiserver. <group>/<version> (or <version> for the core group) key can be used to turn on/off specific api versions. api/all is special key to control all api versions, be careful setting it false, unless you know what you do. api/legacy is deprecated, we will remove it in the future, so stop using it. (default )
kube# [ 6.552568] kube-apiserver[923]: Admission flags:
kube# [ 6.553468] kube-apiserver[923]: --admission-control strings Admission is divided into two phases. In the first phase, only mutating admission plugins run. In the second phase, only validating admission plugins run. The names in the below list may represent a validating plugin, a mutating plugin, or both. The order of plugins in which they are passed to this flag does not matter. Comma-delimited list of: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. (DEPRECATED: Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.)
kube# [ 6.561407] kube-apiserver[923]: --admission-control-config-file string File with admission control configuration.
kube# [ 6.562776] kube-apiserver[923]: --disable-admission-plugins strings admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
kube# [ 6.570549] kube-apiserver[923]: --enable-admission-plugins strings admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
kube# [ 6.578255] kube-apiserver[923]: Misc flags:
kube# [ 6.579059] kube-apiserver[923]: --allow-privileged If true, allow privileged containers. [default=false]
kube# [ 6.580548] kube-apiserver[923]: --apiserver-count int The number of apiservers running in the cluster, must be a positive number. (In use when --endpoint-reconciler-type=master-count is enabled.) (default 1)
kube# [ 6.582565] kube-apiserver[923]: --enable-aggregator-routing Turns on aggregator routing requests to endpoints IP rather than cluster IP.
kube# [ 6.584120] kube-apiserver[923]: --endpoint-reconciler-type string Use an endpoint reconciler (master-count, lease, none) (default "lease")
kube# [ 6.585668] kube-apiserver[923]: --event-ttl duration Amount of time to retain events. (default 1h0m0s)
kube# [ 6.587016] kube-apiserver[923]: --kubelet-certificate-authority string Path to a cert file for the certificate authority.
kube# [ 6.588426] kube-apiserver[923]: --kubelet-client-certificate string Path to a client cert file for TLS.
kube# [ 6.589836] kube-apiserver[923]: --kubelet-client-key string Path to a client key file for TLS.
kube# [ 6.591143] kube-apiserver[923]: --kubelet-https Use https for kubelet connections. (default true)
kube# [ 6.592573] kube-apiserver[923]: --kubelet-preferred-address-types strings List of the preferred NodeAddressTypes to use for kubelet connections. (default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP])
kube# [ 6.594480] kube-apiserver[923]: --kubelet-read-only-port uint DEPRECATED: kubelet port. (default 10255)
kube# [ 6.595786] kube-apiserver[923]: --kubelet-timeout duration Timeout for kubelet operations. (default 5s)
kube# [ 6.597116] kube-apiserver[923]: --kubernetes-service-node-port int If non-zero, the Kubernetes master service (which apiserver creates/maintains) will be of type NodePort, using this as the value of the port. If zero, the Kubernetes master service will be of type ClusterIP.
kube# [ 6.599414] kube-apiserver[923]: --max-connection-bytes-per-sec int If non-zero, throttle each user connection to this number of bytes/sec. Currently only applies to long-running requests.
kube# [ 6.601225] kube-apiserver[923]: --proxy-client-cert-file string Client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins. It is expected that this cert includes a signature from the CA in the --requestheader-client-ca-file flag. That CA is published in the 'extension-apiserver-authentication' configmap in the kube-system namespace. Components receiving calls from kube-aggregator should use that CA to perform their half of the mutual TLS verification.
kube# [ 6.605603] kube-apiserver[923]: --proxy-client-key-file string Private key for the client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins.
kube# [ 6.608235] kube-apiserver[923]: --service-account-signing-key-file string Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)
kube# [ 6.610733] kube-apiserver[923]: --service-cluster-ip-range ipNet A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods. (default 10.0.0.0/24)
kube# [ 6.612918] kube-apiserver[923]: --service-node-port-range portRange A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)
kube# [ 6.615086] kube-apiserver[923]: Global flags:
kube# [ 6.616045] kube-apiserver[923]: --alsologtostderr log to standard error as well as files
kube# [ 6.617399] kube-apiserver[923]: -h, --help help for kube-apiserver
kube# [ 6.618663] kube-apiserver[923]: --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
kube# [ 6.620246] kube-apiserver[923]: --log-dir string If non-empty, write log files in this directory
kube# [ 6.621587] kube-apiserver[923]: --log-file string If non-empty, use this log file
kube# [ 6.622734] kube-apiserver[923]: --log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
kube# [ 6.624563] kube-apiserver[923]: --log-flush-frequency duration Maximum number of seconds between log flushes (default 5s)
kube# [ 6.625876] kube-apiserver[923]: --logtostderr log to standard error instead of files (default true)
kube# [ 6.627219] kube-apiserver[923]: --skip-headers If true, avoid header prefixes in the log messages
kube# [ 6.628516] kube-apiserver[923]: --skip-log-headers If true, avoid headers when opening log files
kube# [ 6.629834] kube-apiserver[923]: --stderrthreshold severity logs at or above this threshold go to stderr (default 2)
kube# [ 6.631159] kube-apiserver[923]: -v, --v Level number for the log level verbosity
kube# [ 6.632362] kube-apiserver[923]: --version version[=true] Print version information and quit
kube# [ 6.633539] kube-apiserver[923]: --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
kube# [ 6.635029] kube-apiserver[923]: error: unable to load server certificate: open /var/lib/kubernetes/secrets/kube-apiserver.pem: no such file or directory
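
Everything kube-apiserver printed above is just its usage text, dumped after the fatal error on this last line: the server certificate /var/lib/kubernetes/secrets/kube-apiserver.pem does not exist, because certmgr failed to start earlier in the log. A quick sketch for checking which of the secrets named in this log are actually present on the machine (the file list is compiled from the errors above; the real set certmgr provisions may be larger):

from pathlib import Path

# PEM files that components in this log fail to open; assumed to be the
# set certmgr should have written under /var/lib/kubernetes/secrets.
SECRETS_DIR = Path("/var/lib/kubernetes/secrets")
EXPECTED = [
    "ca.pem",
    "kube-apiserver.pem",
    "cluster-admin.pem",
    "cluster-admin-key.pem",
    "kube-scheduler-client.pem",
    "kube-scheduler-client-key.pem",
    "kube-controller-manager-client.pem",
    "kube-controller-manager-client-key.pem",
    "kube-proxy-client.pem",
    "kube-proxy-client-key.pem",
]

missing = [name for name in EXPECTED if not (SECRETS_DIR / name).is_file()]
print("missing secrets:", ", ".join(missing) if missing else "none")
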
kube# [ 7.343313] dhcpcd[1123]: eth0: Router Advertisement from fe80::2
kube# [ 7.344424] dhcpcd[1123]: eth0: adding address fec0::5054:ff:fe12:3456/64
kube# [ 7.345515] dhcpcd[1123]: eth0: adding route to fec0::/64
kube# [ 7.346498] dhcpcd[1123]: eth0: adding default route via fe80::2
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube: exit status 1
(0.05 seconds)
kube# [ 7.475308] dockerd[1135]: time="2020-01-27T01:25:14.414066314Z" level=info msg="Starting up"
kube# [ 7.485803] dockerd[1135]: time="2020-01-27T01:25:14.424836398Z" level=info msg="libcontainerd: started new containerd process" pid=1198
kube# [ 7.487507] dockerd[1135]: time="2020-01-27T01:25:14.425368589Z" level=info msg="parsed scheme: \"unix\"" module=grpc
kube# [ 7.488900] dockerd[1135]: time="2020-01-27T01:25:14.425393452Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
kube# [ 7.490560] dockerd[1135]: time="2020-01-27T01:25:14.425424462Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
kube# [ 7.492452] dockerd[1135]: time="2020-01-27T01:25:14.425444576Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
kube# [ 8.031487] dockerd[1135]: time="2020-01-27T01:25:14.970559667Z" level=info msg="starting containerd" revision=.m version=
kube# [ 8.033234] dockerd[1135]: time="2020-01-27T01:25:14.970759134Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
kube# [ 8.034901] dockerd[1135]: time="2020-01-27T01:25:14.970815007Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
kube# [ 8.036604] dockerd[1135]: time="2020-01-27T01:25:14.970926753Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
kube# [ 8.038974] dockerd[1135]: time="2020-01-27T01:25:14.970944633Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
kube# [ 8.043239] dockerd[1135]: time="2020-01-27T01:25:14.982361167Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /run/current-system/kernel-modules/lib/modules/4.19.95\n": exit status 1"
kube# [ 8.045499] dockerd[1135]: time="2020-01-27T01:25:14.982380444Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
kube# [ 8.047136] dockerd[1135]: time="2020-01-27T01:25:14.982469002Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
kube# [ 8.048816] dockerd[1135]: time="2020-01-27T01:25:14.982586615Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
kube# [ 8.050369] dockerd[1135]: time="2020-01-27T01:25:14.982691098Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
kube# [ 8.052668] dockerd[1135]: time="2020-01-27T01:25:14.982707859Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
kube# [ 8.054198] dockerd[1135]: time="2020-01-27T01:25:14.982743898Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /run/current-system/kernel-modules/lib/modules/4.19.95\n": exit status 1"
kube# [ 8.056622] dockerd[1135]: time="2020-01-27T01:25:14.982755631Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
kube# [ 8.058904] dockerd[1135]: time="2020-01-27T01:25:14.982765688Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
kube# [ 8.077374] dockerd[1135]: time="2020-01-27T01:25:15.016481426Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
kube# [ 8.079246] dockerd[1135]: time="2020-01-27T01:25:15.016503216Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
kube# [ 8.080977] dockerd[1135]: time="2020-01-27T01:25:15.016532550Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
kube# [ 8.082936] dockerd[1135]: time="2020-01-27T01:25:15.016546518Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
kube# [ 8.084630] dockerd[1135]: time="2020-01-27T01:25:15.016560207Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
kube# [ 8.086221] dockerd[1135]: time="2020-01-27T01:25:15.016574175Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
kube# [ 8.087932] dockerd[1135]: time="2020-01-27T01:25:15.016587305Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
kube# [ 8.089605] dockerd[1135]: time="2020-01-27T01:25:15.016606302Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
kube# [ 8.091324] dockerd[1135]: time="2020-01-27T01:25:15.016622784Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
kube# [ 8.093293] dockerd[1135]: time="2020-01-27T01:25:15.016655191Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
kube# [ 8.094729] dockerd[1135]: time="2020-01-27T01:25:15.016737324Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
kube# [ 8.096252] dockerd[1135]: time="2020-01-27T01:25:15.016783140Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
kube# [ 8.097780] dockerd[1135]: time="2020-01-27T01:25:15.018803788Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
kube# [ 8.099471] dockerd[1135]: time="2020-01-27T01:25:15.018830607Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
kube# [ 8.101096] dockerd[1135]: time="2020-01-27T01:25:15.018874188Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
kube# [ 8.102655] dockerd[1135]: time="2020-01-27T01:25:15.018905756Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
kube# [ 8.104302] dockerd[1135]: time="2020-01-27T01:25:15.018926429Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
kube# [ 8.105841] dockerd[1135]: time="2020-01-27T01:25:15.018946543Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
kube# [ 8.107327] dockerd[1135]: time="2020-01-27T01:25:15.018963864Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
kube# [ 8.108877] dockerd[1135]: time="2020-01-27T01:25:15.018976994Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
kube# [ 8.110531] dockerd[1135]: time="2020-01-27T01:25:15.018989007Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
kube# [ 8.112199] dockerd[1135]: time="2020-01-27T01:25:15.019006328Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
kube# [ 8.113857] dockerd[1135]: time="2020-01-27T01:25:15.019029236Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
kube# [ 8.115645] dockerd[1135]: time="2020-01-27T01:25:15.019106061Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
kube# [ 8.117576] dockerd[1135]: time="2020-01-27T01:25:15.019123940Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
kube# [ 8.119363] dockerd[1135]: time="2020-01-27T01:25:15.019144055Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
kube# [ 8.121266] dockerd[1135]: time="2020-01-27T01:25:15.019164169Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
kube# [ 8.122912] dockerd[1135]: time="2020-01-27T01:25:15.023904716Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
kube# [ 8.124482] dockerd[1135]: time="2020-01-27T01:25:15.023942709Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
kube# [ 8.126036] dockerd[1135]: time="2020-01-27T01:25:15.023962823Z" level=info msg="containerd successfully booted in 0.053979s"
kube# [ 8.127561] dockerd[1135]: time="2020-01-27T01:25:15.057859310Z" level=info msg="parsed scheme: \"unix\"" module=grpc
kube# [ 8.128776] dockerd[1135]: time="2020-01-27T01:25:15.057913787Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
kube# [ 8.130408] dockerd[1135]: time="2020-01-27T01:25:15.058345406Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
kube# [ 8.132312] dockerd[1135]: time="2020-01-27T01:25:15.058384237Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
kube# [ 8.133747] dockerd[1135]: time="2020-01-27T01:25:15.059330168Z" level=info msg="parsed scheme: \"unix\"" module=grpc
kube# [ 8.135296] dockerd[1135]: time="2020-01-27T01:25:15.059438841Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
kube# [ 8.136859] dockerd[1135]: time="2020-01-27T01:25:15.059477952Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
kube# [ 8.138767] dockerd[1135]: time="2020-01-27T01:25:15.059505050Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
kube# [ 8.166714] dockerd[1135]: time="2020-01-27T01:25:15.105784116Z" level=warning msg="Your kernel does not support cgroup rt period"
kube# [ 8.168017] dockerd[1135]: time="2020-01-27T01:25:15.105817081Z" level=warning msg="Your kernel does not support cgroup rt runtime"
kube# [ 8.169265] dockerd[1135]: time="2020-01-27T01:25:15.105918212Z" level=info msg="Loading containers: start."
kube# [ 8.313858] Initializing XFRM netlink socket
kube# [ 8.296133] dockerd[1135]: time="2020-01-27T01:25:15.235163130Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
kube# [ 8.351787] IPv6: ADDRCONF(NETDEV_UP): docker0: link is not ready
kube# [ 8.299269] systemd-udevd[694]: Using default interface naming scheme 'v243'.
kube# [ 8.300429] systemd-udevd[694]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
kube# [ 8.333088] dockerd[1135]: time="2020-01-27T01:25:15.272165039Z" level=info msg="Loading containers: done."
kube# [ 8.346527] dhcpcd[1123]: docker0: waiting for carrier
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube: exit status 1
(0.05 seconds)
kube# [ 8.511478] dockerd[1135]: time="2020-01-27T01:25:15.450225100Z" level=info msg="Docker daemon" commit=633a0ea838f10e000b7c6d6eed1623e6e988b5bc graphdriver(s)=overlay2 version=19.03.5
kube# [ 8.513100] dockerd[1135]: time="2020-01-27T01:25:15.450314217Z" level=info msg="Daemon has completed initialization"
kube# [ 8.570456] systemd[1]: Started Docker Application Container Engine.
kube# [ 8.571887] systemd[1]: Starting Kubernetes Kubelet Service...
kube# [ 8.573223] dockerd[1135]: time="2020-01-27T01:25:15.508969717Z" level=info msg="API listen on /run/docker.sock"
kube# [ 8.576970] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1294]: Seeding docker image: /nix/store/zfkrn9hmwiay3fwj7ilwv52rd6myn4i1-docker-image-pause.tar.gz
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube: exit status 1
(0.05 seconds)
kube# [ 9.616046] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1294]: Loaded image: pause:latest
kube# [ 9.617628] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1294]: Seeding docker image: /nix/store/ggrzs3gzv69xzk02ckzijc2caqv738kk-docker-image-coredns-coredns-1.5.0.tar
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube: exit status 1
(0.05 seconds)
kube# [ 10.645461] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1294]: Loaded image: coredns/coredns:1.5.0
kube# [ 10.654597] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1294]: rm: cannot remove '/opt/cni/bin/*': No such file or directory
kube# [ 10.656128] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1294]: Linking cni package: /nix/store/9pqia3j6lxz57qa36w2niphr1f5vsirr-cni-plugins-0.8.2
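
The "Seeding docker image" / "Loaded image" pairs above come from the kubelet pre-start script loading pre-built image tarballs into the Docker daemon before the kubelet starts. Presumably it does the equivalent of `docker load` per archive; a sketch under that assumption, with the store paths copied from the log:

import subprocess

# Image archives the pre-start script reports seeding (paths from the log).
images = [
    "/nix/store/zfkrn9hmwiay3fwj7ilwv52rd6myn4i1-docker-image-pause.tar.gz",
    "/nix/store/ggrzs3gzv69xzk02ckzijc2caqv738kk-docker-image-coredns-coredns-1.5.0.tar",
]

for image in images:
    print("Seeding docker image:", image)
    # `docker load` accepts plain or gzipped image tarballs on stdin.
    with open(image, "rb") as fh:
        subprocess.run(["docker", "load"], stdin=fh, check=True)
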
kube# [ 10.663598] systemd[1]: kube-proxy.service: Service RestartSec=5s expired, scheduling restart.
kube# [ 10.664714] systemd[1]: kube-proxy.service: Scheduled restart job, restart counter is at 1.
kube# [ 10.666340] systemd[1]: Started Kubernetes Kubelet Service.
kube# [ 10.667605] systemd[1]: Stopped Kubernetes Proxy Service.
kube# [ 10.758290] kube-proxy[1457]: W0127 01:25:17.647401 1457 server.go:216] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
kube# [ 10.760262] kube-proxy[1457]: W0127 01:25:17.661312 1457 proxier.go:500] Failed to read file /lib/modules/4.19.95/modules.builtin with error open /lib/modules/4.19.95/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 10.762944] kube-proxy[1457]: W0127 01:25:17.661591 1457 proxier.go:513] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 10.765261] kube-proxy[1457]: W0127 01:25:17.661761 1457 proxier.go:513] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 10.767441] kube-proxy[1457]: W0127 01:25:17.661931 1457 proxier.go:513] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 10.769575] kube-proxy[1457]: W0127 01:25:17.662143 1457 proxier.go:513] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 10.772558] kube-proxy[1457]: W0127 01:25:17.662380 1457 proxier.go:513] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 10.774633] systemd[1]: Started Kubernetes Proxy Service.
kube# [ 10.775724] kube-proxy[1457]: F0127 01:25:17.669861 1457 server.go:449] invalid configuration: [unable to read client-cert /var/lib/kubernetes/secrets/kube-proxy-client.pem for kube-proxy due to open /var/lib/kubernetes/secrets/kube-proxy-client.pem: no such file or directory, unable to read client-key /var/lib/kubernetes/secrets/kube-proxy-client-key.pem for kube-proxy due to open /var/lib/kubernetes/secrets/kube-proxy-client-key.pem: no such file or directory, unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory]
kube# [ 10.780148] systemd[1]: Reached target Kubernetes.
kube# [ 10.781377] systemd[1]: Reached target Multi-User System.
kube# [ 10.782431] systemd[1]: Startup finished in 2.525s (kernel) + 8.140s (userspace) = 10.665s.
kube# [ 10.783677] systemd[1]: kube-proxy.service: Main process exited, code=exited, status=255/EXCEPTION
kube# [ 10.784869] systemd[1]: kube-proxy.service: Failed with result 'exit-code'.
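
At this point every control-plane component (apiserver, scheduler, controller-manager, proxy, plus the kubectl poll) has failed on the same missing files, and systemd begins its 5-second restart cycles, which is why the identical usage dumps and errors repeat below. To confirm the failures share one root cause, a small sketch that groups the missing-PEM errors in a saved copy of this console output by the unit that logged them (the log filename is an assumption):

import re
from collections import defaultdict

# Match "<unit>[pid]: ... /var/lib/kubernetes/secrets/<name>.pem" lines;
# the first path per line is enough for grouping.
pattern = re.compile(
    r"(kube-[a-z-]+)\[\d+\]:.*?(/var/lib/kubernetes/secrets/\S+?\.pem)"
)

failures = defaultdict(set)
with open("hung1.log", encoding="utf-8") as fh:
    for line in fh:
        match = pattern.search(line)
        if match:
            failures[match.group(1)].add(match.group(2))

for unit, pems in sorted(failures.items()):
    print(f"{unit}: {', '.join(sorted(pems))}")
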
kube# [ 11.388938] systemd[1]: kube-apiserver.service: Service RestartSec=5s expired, scheduling restart.
kube# [ 11.390359] systemd[1]: kube-apiserver.service: Scheduled restart job, restart counter is at 1.
kube# [ 11.391618] systemd[1]: kube-scheduler.service: Service RestartSec=5s expired, scheduling restart.
kube# [ 11.392913] systemd[1]: kube-scheduler.service: Scheduled restart job, restart counter is at 1.
kube# [ 11.394212] systemd[1]: Stopped Kubernetes Scheduler Service.
kube# [ 11.395396] systemd[1]: Stopped Kubernetes APIServer Service.
kube# [ 11.396477] systemd[1]: Started Kubernetes APIServer Service.
kube# [ 11.397984] systemd[1]: Started Kubernetes Scheduler Service.
kube# [ 11.452784] kube-apiserver[1493]: Flag --insecure-bind-address has been deprecated, This flag will be removed in a future version.
kube# [ 11.454228] kube-apiserver[1493]: Flag --insecure-port has been deprecated, This flag will be removed in a future version.
kube# [ 11.455646] kube-apiserver[1493]: I0127 01:25:18.391634 1493 server.go:560] external host was not specified, using 192.168.1.1
kube# [ 11.457069] kube-apiserver[1493]: I0127 01:25:18.391810 1493 server.go:147] Version: v1.15.6
kube# [ 11.458225] kube-apiserver[1493]: Error: unable to load server certificate: open /var/lib/kubernetes/secrets/kube-apiserver.pem: no such file or directory
kube# [ 11.459736] kube-apiserver[1493]: Usage:
kube# [ 11.460563] kube-apiserver[1493]: kube-apiserver [flags]
kube# [ 11.461498] kube-apiserver[1493]: Generic flags:
kube# [ 11.462325] kube-apiserver[1493]: --advertise-address ip The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.
kube# [ 11.465007] kube-apiserver[1493]: --cloud-provider-gce-lb-src-cidrs cidrs CIDRs opened in GCE firewall for LB traffic proxy & health checks (default 130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16)
kube# [ 11.466988] kube-apiserver[1493]: --cors-allowed-origins strings List of allowed origins for CORS, comma separated. An allowed origin can be a regular expression to support subdomain matching. If this list is empty CORS will not be enabled.
kube# [ 11.469221] kube-apiserver[1493]: --default-not-ready-toleration-seconds int Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration. (default 300)
kube# [ 11.471347] kube-apiserver[1493]: --default-unreachable-toleration-seconds int Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration. (default 300)
kube# [ 11.473518] kube-apiserver[1493]: --enable-inflight-quota-handler If true, replace the max-in-flight handler with an enhanced one that queues and dispatches with priority and fairness
kube# [ 11.475395] kube-apiserver[1493]: --external-hostname string The hostname to use when generating externalized URLs for this master (e.g. Swagger API Docs).
kube# [ 11.477067] kube-apiserver[1493]: --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
kube# [ 11.478907] systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=1/FAILURE
kube# [ 11.480138] kube-apiserver[1493]: APIListChunking=true|false (BETA - default=true)
kube# [ 11.481654] kube-apiserver[1493]: APIResponseCompression=true|false (ALPHA - default=false)
kube# [ 11.483096] kube-apiserver[1493]: AllAlpha=true|false (ALPHA - default=false)
kube# [ 11.484516] kube-apiserver[1493]: AppArmor=true|false (BETA - default=true)
kube# [ 11.485822] kube-apiserver[1493]: AttachVolumeLimit=true|false (BETA - default=true)
kube# [ 11.487266] kube-apiserver[1493]: BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
kube# [ 11.488723] kube-apiserver[1493]: BlockVolume=true|false (BETA - default=true)
kube# [ 11.490053] kube-apiserver[1493]: BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
kube# [ 11.491577] kube-apiserver[1493]: CPUManager=true|false (BETA - default=true)
kube# [ 11.492905] kube-apiserver[1493]: CRIContainerLogRotation=true|false (BETA - default=true)
kube# [ 11.494359] kube-apiserver[1493]: CSIBlockVolume=true|false (BETA - default=true)
kube# [ 11.495763] kube-apiserver[1493]: CSIDriverRegistry=true|false (BETA - default=true)
kube# [ 11.497206] kube-apiserver[1493]: CSIInlineVolume=true|false (ALPHA - default=false)
kube# [ 11.498585] kube-apiserver[1493]: CSIMigration=true|false (ALPHA - default=false)
kube# [ 11.499977] kube-apiserver[1493]: CSIMigrationAWS=true|false (ALPHA - default=false)
kube# [ 11.501470] kube-apiserver[1493]: CSIMigrationAzureDisk=true|false (ALPHA - default=false)
kube# [ 11.502943] kube-apiserver[1493]: CSIMigrationAzureFile=true|false (ALPHA - default=false)
kube# [ 11.504367] kube-apiserver[1493]: CSIMigrationGCE=true|false (ALPHA - default=false)
kube# [ 11.505771] kube-apiserver[1493]: CSIMigrationOpenStack=true|false (ALPHA - default=false)
kube# [ 11.507281] systemd[1]: kube-apiserver.service: Failed with result 'exit-code'.
kube# [ 11.508408] kube-apiserver[1493]: CSINodeInfo=true|false (BETA - default=true)
kube# [ 11.509791] kube-apiserver[1493]: CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
kube# [ 11.511224] kube-apiserver[1493]: CustomResourceDefaulting=true|false (ALPHA - default=false)
kube# [ 11.512677] kube-apiserver[1493]: CustomResourcePublishOpenAPI=true|false (BETA - default=true)
kube# [ 11.514240] kube-apiserver[1493]: CustomResourceSubresources=true|false (BETA - default=true)
kube# [ 11.515729] kube-apiserver[1493]: CustomResourceValidation=true|false (BETA - default=true)
kube# [ 11.517139] kube-apiserver[1493]: CustomResourceWebhookConversion=true|false (BETA - default=true)
kube# [ 11.518576] kube-apiserver[1493]: DebugContainers=true|false (ALPHA - default=false)
kube# [ 11.519907] kube-apiserver[1493]: DevicePlugins=true|false (BETA - default=true)
kube# [ 11.521290] kube-apiserver[1493]: DryRun=true|false (BETA - default=true)
kube# [ 11.522596] kube-apiserver[1493]: DynamicAuditing=true|false (ALPHA - default=false)
kube# [ 11.523931] kube-apiserver[1493]: DynamicKubeletConfig=true|false (BETA - default=true)
kube# [ 11.525371] kube-apiserver[1493]: ExpandCSIVolumes=true|false (ALPHA - default=false)
kube# [ 11.526745] kube-apiserver[1493]: ExpandInUsePersistentVolumes=true|false (BETA - default=true)
kube# [ 11.528220] kube-apiserver[1493]: ExpandPersistentVolumes=true|false (BETA - default=true)
kube# [ 11.529572] kube-apiserver[1493]: ExperimentalCriticalPodAnnotation=true|false (ALPHA - default=false)
kube# [ 11.530997] kube-apiserver[1493]: ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
kube# [ 11.532532] kube-apiserver[1493]: HyperVContainer=true|false (ALPHA - default=false)
kube# [ 11.533939] kube-apiserver[1493]: KubeletPodResources=true|false (BETA - default=true)
kube# [ 11.535397] kube-apiserver[1493]: LocalStorageCapacityIsolation=true|false (BETA - default=true)
kube# [ 11.536880] kube-apiserver[1493]: LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
kube# [ 11.538436] kube-apiserver[1493]: MountContainers=true|false (ALPHA - default=false)
kube# [ 11.539737] kube-apiserver[1493]: NodeLease=true|false (BETA - default=true)
kube# [ 11.541034] kube-apiserver[1493]: NonPreemptingPriority=true|false (ALPHA - default=false)
kube# [ 11.542465] kube-apiserver[1493]: PodShareProcessNamespace=true|false (BETA - default=true)
kube# [ 11.543879] kube-apiserver[1493]: ProcMountType=true|false (ALPHA - default=false)
kube# [ 11.545296] kube-apiserver[1493]: QOSReserved=true|false (ALPHA - default=false)
kube# [ 11.546624] kube-apiserver[1493]: RemainingItemCount=true|false (ALPHA - default=false)
kube# [ 11.547969] kube-apiserver[1493]: RequestManagement=true|false (ALPHA - default=false)
kube# [ 11.549372] kube-apiserver[1493]: ResourceLimitsPriorityFunction=true|false (ALPHA - default=false)
kube# [ 11.550773] kube-apiserver[1493]: ResourceQuotaScopeSelectors=true|false (BETA - default=true)
kube# [ 11.552162] kube-apiserver[1493]: RotateKubeletClientCertificate=true|false (BETA - default=true)
kube# [ 11.553626] kube-apiserver[1493]: RotateKubeletServerCertificate=true|false (BETA - default=true)
kube# [ 11.555069] kube-apiserver[1493]: RunAsGroup=true|false (BETA - default=true)
kube# [ 11.556405] kube-apiserver[1493]: RuntimeClass=true|false (BETA - default=true)
kube# [ 11.557761] kube-apiserver[1493]: SCTPSupport=true|false (ALPHA - default=false)
kube# [ 11.559223] kube-apiserver[1493]: ScheduleDaemonSetPods=true|false (BETA - default=true)
kube# [ 11.560575] kube-apiserver[1493]: ServerSideApply=true|false (ALPHA - default=false)
kube# [ 11.561884] kube-apiserver[1493]: ServiceLoadBalancerFinalizer=true|false (ALPHA - default=false)
kube# [ 11.563327] kube-apiserver[1493]: ServiceNodeExclusion=true|false (ALPHA - default=false)
kube# [ 11.564681] kube-apiserver[1493]: StorageVersionHash=true|false (BETA - default=true)
kube# [ 11.566039] kube-apiserver[1493]: StreamingProxyRedirects=true|false (BETA - default=true)
kube# [ 11.567438] kube-apiserver[1493]: SupportNodePidsLimit=true|false (BETA - default=true)
kube# [ 11.568777] kube-apiserver[1493]: SupportPodPidsLimit=true|false (BETA - default=true)
kube# [ 11.570126] kube-apiserver[1493]: Sysctls=true|false (BETA - default=true)
kube# [ 11.571514] kube-apiserver[1493]: TTLAfterFinished=true|false (ALPHA - default=false)
kube# [ 11.572918] kube-apiserver[1493]: TaintBasedEvictions=true|false (BETA - default=true)
kube# [ 11.574278] kube-apiserver[1493]: TaintNodesByCondition=true|false (BETA - default=true)
kube# [ 11.575657] kube-apiserver[1493]: TokenRequest=true|false (BETA - default=true)
kube# [ 11.576965] kube-apiserver[1493]: TokenRequestProjection=true|false (BETA - default=true)
kube# [ 11.578371] kube-apiserver[1493]: ValidateProxyRedirects=true|false (BETA - default=true)
kube# [ 11.579945] kube-apiserver[1493]: VolumePVCDataSource=true|false (ALPHA - default=false)
kube# [ 11.581443] kube-apiserver[1493]: VolumeSnapshotDataSource=true|false (ALPHA - default=false)
kube# [ 11.582930] kube-apiserver[1493]: VolumeSubpathEnvExpansion=true|false (BETA - default=true)
kube# [ 11.584409] kube-apiserver[1493]: WatchBookmark=true|false (ALPHA - default=false)
kube# [ 11.585880] kube-apiserver[1493]: WinDSR=true|false (ALPHA - default=false)
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# [ 11.587315] kube-apiserver[1493]: WinOverlay=true|false (ALPHA - default=false)
kube# [ 11.588794] kube-apiserver[1493]: WindowsGMSA=true|false (ALPHA - default=false)
kube# [ 11.590259] kube-apiserver[1493]: --master-service-namespace string DEPRECATED: the namespace from which the kubernetes master services should be injected into pods. (default "default")
kube# [ 11.592332] kube-apiserver[1493]: --max-mutating-requests-inflight int The maximum number of mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. (default 200)
kube# [ 11.594703] kube-apiserver[1493]: --max-requests-inflight int The maximum number of non-mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. (default 400)
kube# [ 11.597250] kube-apiserver[1493]: --min-request-timeout int An optional field indicating the minimum number of seconds a handler must keep a request open before timing it out. Currently only honored by the watch request handler, which picks a randomized value above this number as the connection timeout, to spread out load. (default 1800)
kube# [ 11.600420] kube-apiserver[1493]: --request-timeout duration An optional field indicating the duration a handler must keep a request open before timing it out. This is the default request timeout for requests but may be overridden by flags such as --min-request-timeout for specific types of requests. (default 1m0s)
kube# [ 11.603530] kube-apiserver[1493]: --target-ram-mb int Memory limit for apiserver in MB (used to configure sizes of caches, etc.)
kube# [ 11.605346] kube-apiserver[1493]: Etcd flags:
kube# [ 11.606295] kube-apiserver[1493]: --default-watch-cache-size int Default watch cache size. If zero, watch cache will be disabled for resources that do not have a default watch size set. (default 100)
kube# [ 11.608227] kube-apiserver[1493]: --delete-collection-workers int Number of workers spawned for DeleteCollection call. These are used to speed up namespace cleanup. (default 1)
kube# [ 11.609970] kube-apiserver[1493]: --enable-garbage-collector Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-controller-manager. (default true)
kube# [ 11.611890] kube-apiserver[1493]: --encryption-provider-config string The file containing configuration for encryption providers to be used for storing secrets in etcd
kube# [ 11.613724] kube-apiserver[1493]: --etcd-cafile string SSL Certificate Authority file used to secure etcd communication.
kube# [ 11.615203] kube-apiserver[1493]: --etcd-certfile string SSL certification file used to secure etcd communication.
kube# [ 11.616705] kube-apiserver[1493]: --etcd-compaction-interval duration The interval of compaction requests. If 0, the compaction request from apiserver is disabled. (default 5m0s)
kube# [ 11.618591] kube-apiserver[1493]: --etcd-count-metric-poll-period duration Frequency of polling etcd for number of resources per type. 0 disables the metric collection. (default 1m0s)
kube# [ 11.620386] kube-apiserver[1493]: --etcd-keyfile string SSL key file used to secure etcd communication.
kube# [ 11.621783] kube-apiserver[1493]: --etcd-prefix string The prefix to prepend to all resource paths in etcd. (default "/registry")
kube# [ 11.623276] kube-apiserver[1493]: --etcd-servers strings List of etcd servers to connect with (scheme://ip:port), comma separated.
kube# [ 11.624752] kube-apiserver[1493]: --etcd-servers-overrides strings Per-resource etcd servers overrides, comma separated. The individual override format: group/resource#servers, where servers are URLs, semicolon separated.
kube# [ 11.626736] kube-apiserver[1493]: --storage-backend string The storage backend for persistence. Options: 'etcd3' (default).
kube# [ 11.628354] kube-apiserver[1493]: --storage-media-type string The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting. (default "application/vnd.kubernetes.protobuf")
kube# [ 11.630655] kube-apiserver[1493]: --watch-cache Enable watch caching in the apiserver (default true)
kube# [ 11.632037] kube-apiserver[1493]: --watch-cache-sizes strings Watch cache size settings for some resources (pods, nodes, etc.), comma separated. The individual setting format: resource[.group]#size, where resource is lowercase plural (no version), group is omitted for resources of apiVersion v1 (the legacy core API) and included for others, and size is a number. It takes effect when watch-cache is enabled. Some resources (replicationcontrollers, endpoints, nodes, pods, services, apiservices.apiregistration.k8s.io) have system defaults set by heuristics, others default to default-watch-cache-size
kube# [ 11.636441] kube-apiserver[1493]: Secure serving flags:
kube# [ 11.637322] kube-apiserver[1493]: --bind-address ip The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank, all interfaces will be used (0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 0.0.0.0)
kube# [ 11.640273] kube-apiserver[1493]: --cert-dir string The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored. (default "/var/run/kubernetes")
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube#
kube: exit status 1
(0.06 seconds)
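Every failure above has the same root cause: the cluster-admin kubeconfig references PEM files under /var/lib/kubernetes/secrets that certmgr has not generated yet. A minimal sketch for confirming this from a shell on the VM, assuming only the paths and unit names that appear in this log:

    # Sketch: verify the missing secrets and the cert-manager units (paths/units taken from the log).
    ls -l /var/lib/kubernetes/secrets/                 # ca.pem, cluster-admin.pem, ... should appear here
    systemctl --no-pager status certmgr kube-certmgr-bootstrap
    journalctl -b -u certmgr --no-pager | tail -n 20   # why the certs have not been written yet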
kube# [ 11.645770] kube-apiserver[1493]: --http2-max-streams-per-connection int The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
kube# [ 11.647695] kube-apiserver[1493]: --secure-port int The port on which to serve HTTPS with authentication and authorization. It cannot be switched off with 0. (default 6443)
kube# [ 11.649437] kube-apiserver[1493]: --tls-cert-file string File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
kube# [ 11.652578] kube-apiserver[1493]: --tls-cipher-suites strings Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Possible values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA
kube# [ 11.659071] kube-apiserver[1493]: --tls-min-version string Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
kube# [ 11.660738] kube-apiserver[1493]: --tls-private-key-file string File containing the default x509 private key matching --tls-cert-file.
kube# [ 11.662228] kube-apiserver[1493]: --tls-sni-cert-key namedCertKey A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])
kube# [ 11.666529] kube-apiserver[1493]: Insecure serving flags:
kube# [ 11.667441] kube-apiserver[1493]: --address ip The IP address on which to serve the insecure --port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 127.0.0.1) (DEPRECATED: see --bind-address instead.)
kube# [ 11.669838] kubelet[1456]: Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 11.672090] kubelet[1456]: Flag --authentication-token-webhook has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 11.674399] kubelet[1456]: Flag --authentication-token-webhook-cache-ttl has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 11.676655] kubelet[1456]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 11.678784] kubelet[1456]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 11.680925] kubelet[1456]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 11.683119] kubelet[1456]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 11.685338] kubelet[1456]: Flag --hairpin-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 11.686556] kube-apiserver[1493]: --insecure-bind-address ip The IP address on which to serve the --insecure-port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 127.0.0.1) (DEPRECATED: This flag will be removed in a future version.)
kube# [ 11.686938] kube-apiserver[1493]: --insecure-port int The port on which to serve unsecured, unauthenticated access. (default 8080) (DEPRECATED: This flag will be removed in a future version.)
kube# [ 11.687296] kube-apiserver[1493]: --port int The port on which to serve unsecured, unauthenticated access. Set to 0 to disable. (default 8080) (DEPRECATED: see --secure-port instead.)
kube# [ 11.687578] kube-apiserver[1493]: Auditing flags:
kube# [ 11.687846] kube-apiserver[1493]: --audit-dynamic-configuration Enables dynamic audit configuration. This feature also requires the DynamicAuditing feature flag
kube# [ 11.688051] kube-apiserver[1493]: --audit-log-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
kube# [ 11.688398] kube-apiserver[1493]: --audit-log-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 1)
kube# [ 11.688579] kube-apiserver[1493]: --audit-log-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.
kube# [ 11.688776] kube-apiserver[1493]: --audit-log-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.
kube# [ 11.688961] kube-apiserver[1493]: --audit-log-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode.
kube# [ 11.689258] kube-apiserver[1493]: --audit-log-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode.
kube# [ 11.689504] kube-apiserver[1493]: --audit-log-format string Format of saved audits. "legacy" indicates 1-line text format for each event. "json" indicates structured json format. Known formats are legacy,json. (default "json")
kube# [ 11.689723] kube-apiserver[1493]: --audit-log-maxage int The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.
kube# [ 11.689947] kubelet[1456]: Flag --healthz-bind-address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 11.690282] kubelet[1456]: Flag --healthz-port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 11.690552] kubelet[1456]: Flag --port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 11.690725] kubelet[1456]: Flag --tls-cert-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 11.690921] kubelet[1456]: Flag --tls-private-key-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 11.691143] kubelet[1456]: F0127 01:25:18.613794 1456 server.go:253] unable to load client CA file /var/lib/kubernetes/secrets/ca.pem: open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube# [ 11.691643] kube-apiserver[1493]: --audit-log-maxbackup int The maximum number of old audit log files to retain.
kube# [ 11.691870] kube-apiserver[1493]: --audit-log-maxsize int The maximum size in megabytes of the audit log file before it gets rotated.
kube# [ 11.692059] kube-apiserver[1493]: --audit-log-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "blocking")
kube# [ 11.692251] kube-apiserver[1493]: --audit-log-path string If set, all requests coming to the apiserver will be logged to this file. '-' means standard out.
kube# [ 11.717040] kube-apiserver[1493]: --audit-log-truncate-enabled Whether event and batch truncating is enabled.
kube# [ 11.717421] kube-apiserver[1493]: --audit-log-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
kube# [ 11.717689] kube-apiserver[1493]: --audit-log-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
kube# [ 11.717873] kube-apiserver[1493]: --audit-log-version string API group and version used for serializing audit events written to log. (default "audit.k8s.io/v1")
kube# [ 11.718057] kube-apiserver[1493]: --audit-policy-file string Path to the file that defines the audit policy configuration.
kube# [ 11.718383] kube-apiserver[1493]: --audit-webhook-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
kube# [ 11.718589] kube-apiserver[1493]: --audit-webhook-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 400)
kube# [ 11.718834] kube-apiserver[1493]: --audit-webhook-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode. (default 30s)
kube# [ 11.719064] kube-apiserver[1493]: --audit-webhook-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode. (default 15)
kube# [ 11.719395] kube-apiserver[1493]: --audit-webhook-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode. (default true)
kube# [ 11.719674] kube-apiserver[1493]: --audit-webhook-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode. (default 10)
kube# [ 11.719865] kube-apiserver[1493]: --audit-webhook-config-file string Path to a kubeconfig formatted file that defines the audit webhook configuration.
kube# [ 11.720048] kube-apiserver[1493]: --audit-webhook-initial-backoff duration The amount of time to wait before retrying the first failed request. (default 10s)
kube# [ 11.720380] kube-apiserver[1493]: --audit-webhook-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "batch")
kube# [ 11.720559] kube-apiserver[1493]: --audit-webhook-truncate-enabled Whether event and batch truncating is enabled.
kube# [ 11.720756] kube-apiserver[1493]: --audit-webhook-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
kube# [ 11.792855] serial8250: too much work for irq4
kube# [ 11.720942] kube-apiserver[1493]: --audit-webhook-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
kube# [ 11.721150] kube-apiserver[1493]: --audit-webhook-version string API group and version used for serializing audit events written to webhook. (default "audit.k8s.io/v1")
kube# [ 11.721384] kube-apiserver[1493]: Features flags:
kube# [ 11.721623] kube-apiserver[1493]: --contention-profiling Enable lock contention profiling, if profiling is enabled
kube# [ 11.721858] kube-apiserver[1493]: --profiling Enable profiling via web interface host:port/debug/pprof/ (default true)
kube# [ 11.722048] kube-apiserver[1493]: Authentication flags:
kube# [ 11.722387] kube-apiserver[1493]: --anonymous-auth Enables anonymous requests to the secure port of the API server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated. (default true)
kube# [ 11.722561] kube-apiserver[1493]: --api-audiences strings Identifiers of the API. The service account token authenticator will validate that tokens used against the API are bound to at least one of these audiences. If the --service-account-issuer flag is configured and this flag is not, this field defaults to a single-element list containing the issuer URL.
kube# [ 11.748438] kube-apiserver[1493]: --authentication-token-webhook-cache-ttl duration The duration to cache responses from the webhook token authenticator. (default 2m0s)
kube# [ 11.748788] kube-apiserver[1493]: --authentication-token-webhook-config-file string File with webhook configuration for token authentication in kubeconfig format. The API server will query the remote service to determine authentication for bearer tokens.
kube# [ 11.748968] kube-apiserver[1493]: --basic-auth-file string If set, the file that will be used to admit requests to the secure port of the API server via http basic authentication.
kube# [ 11.749424] kube-apiserver[1493]: --client-ca-file string If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
kube# [ 11.749679] kube-apiserver[1493]: --enable-bootstrap-token-auth Enable to allow secrets of type 'bootstrap.kubernetes.io/token' in the 'kube-system' namespace to be used for TLS bootstrapping authentication.
kube# [ 11.750044] kube-apiserver[1493]: --oidc-ca-file string If set, the OpenID server's certificate will be verified by one of the authorities in the oidc-ca-file, otherwise the host's root CA set will be used.
kube# [ 11.750411] kube-apiserver[1493]: --oidc-client-id string The client ID for the OpenID Connect client, must be set if oidc-issuer-url is set.
kube# [ 11.750595] kube-apiserver[1493]: --oidc-groups-claim string If provided, the name of a custom OpenID Connect claim for specifying user groups. The claim value is expected to be a string or array of strings. This flag is experimental, please see the authentication documentation for further details.
kube# [ 11.750808] kube-apiserver[1493]: --oidc-groups-prefix string If provided, all groups will be prefixed with this value to prevent conflicts with other authentication strategies.
kube# [ 11.751085] kube-apiserver[1493]: --oidc-issuer-url string The URL of the OpenID issuer, only HTTPS scheme will be accepted. If set, it will be used to verify the OIDC JSON Web Token (JWT).
kube# [ 11.751409] kube-apiserver[1493]: --oidc-required-claim mapStringString A key=value pair that describes a required claim in the ID Token. If set, the claim is verified to be present in the ID Token with a matching value. Repeat this flag to specify multiple claims.
kube# [ 11.751713] kube-apiserver[1493]: --oidc-signing-algs strings Comma-separated list of allowed JOSE asymmetric signing algorithms. JWTs with a 'alg' header value not in this list will be rejected. Values are defined by RFC 7518 https://tools.ietf.org/html/rfc7518#section-3.1. (default [RS256])
kube# [ 11.751895] kube-apiserver[1493]: --oidc-username-claim string The OpenID claim to use as the user name. Note that claims other than the default ('sub') are not guaranteed to be unique and immutable. This flag is experimental, please see the authentication documentation for further details. (default "sub")
kube# [ 11.752137] kube-apiserver[1493]: --oidc-username-prefix string If provided, all usernames will be prefixed with this value. If not provided, username claims other than 'email' are prefixed by the issuer URL to avoid clashes. To skip any prefixing, provide the value '-'.
kube# [ 11.752433] kube-apiserver[1493]: --requestheader-allowed-names strings List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
kube# [ 11.752614] kube-apiserver[1493]: --requestheader-client-ca-file string Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
kube# [ 11.776970] kube-apiserver[1493]: --requestheader-extra-headers-prefix strings List of request header prefixes to inspect. X-Remote-Extra- is suggested.
kube# [ 11.777388] kube-apiserver[1493]: --requestheader-group-headers strings List of request headers to inspect for groups. X-Remote-Group is suggested.
kube# [ 11.777663] kube-apiserver[1493]: --requestheader-username-headers strings List of request headers to inspect for usernames. X-Remote-User is common.
kube# [ 11.777854] kube-apiserver[1493]: --service-account-issuer string Identifier of the service account token issuer. The issuer will assert this identifier in "iss" claim of issued tokens. This value is a string or URI.
kube# [ 11.778045] kube-apiserver[1493]: --service-account-key-file stringArray File containing PEM-encoded x509 RSA or ECDSA private or public keys, used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. If unspecified, --tls-private-key-file is used. Must be specified when --service-account-signing-key is provided
kube# [ 11.778372] kube-apiserver[1493]: --service-account-lookup If true, validate ServiceAccount tokens exist in etcd as part of authentication. (default true)
kube# [ 11.778657] kube-apiserver[1493]: --service-account-max-token-expiration duration The maximum validity duration of a token created by the service account token issuer. If an otherwise valid TokenRequest with a validity duration larger than this value is requested, a token will be issued with a validity duration of this value.
kube# [ 11.778865] kube-apiserver[1493]: --token-auth-file string If set, the file that will be used to secure the secure port of the API server via token authentication.
kube# [ 11.779057] kube-apiserver[1493]: Authorization flags:
kube# [ 11.845643] serial8250: too much work for irq4
kube# [ 11.779388] kube-apiserver[1493]: --authorization-mode strings Ordered list of plug-ins to do authorization on secure port. Comma-delimited list of: AlwaysAllow,AlwaysDeny,ABAC,Webhook,RBAC,Node. (default [AlwaysAllow])
kube# [ 11.779571] kube-apiserver[1493]: --authorization-policy-file string File with authorization policy in json line by line format, used with --authorization-mode=ABAC, on the secure port.
kube# [ 11.779787] kube-apiserver[1493]: --authorization-webhook-cache-authorized-ttl duration The duration to cache 'authorized' responses from the webhook authorizer. (default 5m0s)
kube# [ 11.780057] kube-apiserver[1493]: --authorization-webhook-cache-unauthorized-ttl duration The duration to cache 'unauthorized' responses from the webhook authorizer. (default 30s)
kube# [ 11.780363] kube-apiserver[1493]: --authorization-webhook-config-file string File with webhook configuration in kubeconfig format, used with --authorization-mode=Webhook. The API server will query the remote service to determine access on the API server's secure port.
kube# [ 11.780564] kube-apiserver[1493]: Cloud provider flags:
kube# [ 11.780787] kube-apiserver[1493]: --cloud-config string The path to the cloud provider configuration file. Empty string for no configuration file.
kube# [ 11.781051] kube-apiserver[1493]: --cloud-provider string The provider for cloud services. Empty string for no provider.
kube# [ 11.781378] kube-apiserver[1493]: Api enablement flags:
kube# [ 11.781709] kube-apiserver[1493]: --runtime-config mapStringString A set of key=value pairs that describe runtime configuration that may be passed to apiserver. <group>/<version> (or <version> for the core group) key can be used to turn on/off specific api versions. api/all is a special key to control all api versions; be careful setting it to false unless you know what you are doing. api/legacy is deprecated, we will remove it in the future, so stop using it. (default )
kube# [ 11.781953] kube-apiserver[1493]: Admission flags:
kube# [ 11.782149] kube-apiserver[1493]: --admission-control strings Admission is divided into two phases. In the first phase, only mutating admission plugins run. In the second phase, only validating admission plugins run. The names in the below list may represent a validating plugin, a mutating plugin, or both. The order of plugins in which they are passed to this flag does not matter. Comma-delimited list of: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. (DEPRECATED: Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.)
kube# [ 11.807794] kube-apiserver[1493]: --admission-control-config-file string File with admission control configuration.
kube# [ 11.808316] kube-apiserver[1493]: --disable-admission-plugins strings admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
kube# [ 11.808719] kube-apiserver[1493]: --enable-admission-plugins strings admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
kube# [ 11.809012] kube-apiserver[1493]: Misc flags:
kube# [ 11.809871] kube-apiserver[1493]: --allow-privileged If true, allow privileged containers. [default=false]
kube# [ 11.810114] kube-apiserver[1493]: --apiserver-count int The number of apiservers running in the cluster, must be a positive number. (In use when --endpoint-reconciler-type=master-count is enabled.) (default 1)
kube# [ 11.810471] kube-apiserver[1493]: --enable-aggregator-routing Turns on aggregator routing requests to endpoints IP rather than cluster IP.
kube# [ 11.810671] kube-apiserver[1493]: --endpoint-reconciler-type string Use an endpoint reconciler (master-count, lease, none) (default "lease")
kube# [ 11.810892] kube-apiserver[1493]: --event-ttl duration Amount of time to retain events. (default 1h0m0s)
kube# [ 11.811024] kube-apiserver[1493]: --kubelet-certificate-authority string Path to a cert file for the certificate authority.
kube# [ 11.811382] kube-apiserver[1493]: --kubelet-client-certificate string Path to a client cert file for TLS.
kube# [ 11.811639] kube-apiserver[1493]: --kubelet-client-key string Path to a client key file for TLS.
kube# [ 11.811920] kube-apiserver[1493]: --kubelet-https Use https for kubelet connections. (default true)
kube# [ 11.839605] kube-apiserver[1493]: --kubelet-preferred-address-types strings List of the preferred NodeAddressTypes to use for kubelet connections. (default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP])
kube# [ 11.839816] kube-apiserver[1493]: --kubelet-read-only-port uint DEPRECATED: kubelet port. (default 10255)
kube# [ 11.840030] kube-apiserver[1493]: --kubelet-timeout duration Timeout for kubelet operations. (default 5s)
kube# [ 11.840402] kube-apiserver[1493]: --kubernetes-service-node-port int If non-zero, the Kubernetes master service (which apiserver creates/maintains) will be of type NodePort, using this as the value of the port. If zero, the Kubernetes master service will be of type ClusterIP.
kube# [ 11.840623] kube-apiserver[1493]: --max-connection-bytes-per-sec int If non-zero, throttle each user connection to this number of bytes/sec. Currently only applies to long-running requests.
kube# [ 11.840843] kube-apiserver[1493]: --proxy-client-cert-file string Client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins. It is expected that this cert includes a signature from the CA in the --requestheader-client-ca-file flag. That CA is published in the 'extension-apiserver-authentication' configmap in the kube-system namespace. Components receiving calls from kube-aggregator should use that CA to perform their half of the mutual TLS verification.
kube# [ 11.903512] serial8250: too much work for irq4
kube# [ 11.841056] kube-apiserver[1493]: --proxy-client-key-file string Private key for the client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins.
kube# [ 11.841520] kube-apiserver[1493]: --service-account-signing-key-file string Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)
kube# [ 11.841785] kube-apiserver[1493]: --service-cluster-ip-range ipNet A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods. (default 10.0.0.0/24)
kube# [ 11.841997] kube-apiserver[1493]: --service-node-port-range portRange A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)
kube# [ 11.842311] kube-apiserver[1493]: Global flags:
kube# [ 11.842617] kube-apiserver[1493]: --alsologtostderr log to standard error as well as files
kube# [ 11.842820] kube-apiserver[1493]: -h, --help help for kube-apiserver
kube# [ 11.843051] kube-apiserver[1493]: --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
kube# [ 11.843431] kube-apiserver[1493]: --log-dir string If non-empty, write log files in this directory
kube# [ 11.843659] kube-apiserver[1493]: --log-file string If non-empty, use this log file
kube# [ 11.843879] kube-apiserver[1493]: --log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
kube# [ 11.844108] kube-apiserver[1493]: --log-flush-frequency duration Maximum number of seconds between log flushes (default 5s)
kube# [ 11.844479] kube-apiserver[1493]: --logtostderr log to standard error instead of files (default true)
kube# [ 11.844698] kube-apiserver[1493]: --skip-headers If true, avoid header prefixes in the log messages
kube# [ 11.844920] kube-apiserver[1493]: --skip-log-headers If true, avoid headers when opening log files
kube# [ 11.845148] kube-apiserver[1493]: --stderrthreshold severity logs at or above this threshold go to stderr (default 2)
kube# [ 11.845396] kube-apiserver[1493]: -v, --v Level number for the log level verbosity
kube# [ 11.845625] kube-apiserver[1493]: --version version[=true] Print version information and quit
kube# [ 11.845845] kube-apiserver[1493]: --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
kube# [ 11.846120] kube-apiserver[1493]: error: unable to load server certificate: open /var/lib/kubernetes/secrets/kube-apiserver.pem: no such file or directory
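The apiserver exits for the same reason as the other components: its --tls-cert-file points at a serving certificate that certmgr has not written yet. Once the file exists, it can be checked with standard openssl tooling (the path is taken from the log line above):

    # Sketch: inspect the apiserver serving cert after certmgr generates it.
    openssl x509 -in /var/lib/kubernetes/secrets/kube-apiserver.pem -noout -subject -issuer -dates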
kube# [ 11.874014] systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
kube# [ 11.874243] systemd[1]: kubelet.service: Failed with result 'exit-code'.
kube# [ 11.901254] kube-scheduler[1494]: I0127 01:25:18.840028 1494 serving.go:319] Generated self-signed cert in-memory
kube# [ 12.312760] kube-scheduler[1494]: W0127 01:25:19.251836 1494 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
kube# [ 12.313070] kube-scheduler[1494]: W0127 01:25:19.251873 1494 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
kube# [ 12.313357] kube-scheduler[1494]: W0127 01:25:19.251890 1494 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
kube# [ 12.318575] kube-scheduler[1494]: invalid configuration: [unable to read client-cert /var/lib/kubernetes/secrets/kube-scheduler-client.pem for kube-scheduler due to open /var/lib/kubernetes/secrets/kube-scheduler-client.pem: no such file or directory, unable to read client-key /var/lib/kubernetes/secrets/kube-scheduler-client-key.pem for kube-scheduler due to open /var/lib/kubernetes/secrets/kube-scheduler-client-key.pem: no such file or directory, unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory]
kube# [ 12.322699] systemd[1]: kube-scheduler.service: Main process exited, code=exited, status=1/FAILURE
kube# [ 12.322946] systemd[1]: kube-scheduler.service: Failed with result 'exit-code'.
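At this point kubelet, kube-apiserver, and kube-scheduler have all failed on the same missing files, so the units are effectively racing certmgr and relying on systemd's Restart= to win eventually. A hypothetical drop-in that would make the ordering explicit instead; the unit names come from this log, but the dependency itself is an assumption, not what this NixOS module actually does:

    # Sketch: order the apiserver after certmgr via a systemd drop-in (assumed approach, for illustration).
    mkdir -p /etc/systemd/system/kube-apiserver.service.d
    printf '[Unit]\nWants=certmgr.service\nAfter=certmgr.service\n' \
      > /etc/systemd/system/kube-apiserver.service.d/10-wait-for-certs.conf
    systemctl daemon-reload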
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube: exit status 1
(0.04 seconds)
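The test driver keeps re-running the same readiness probe between service restarts. A minimal sketch of that polling loop; the kubectl pipeline is taken verbatim from the log, while the one-second retry interval is an assumption:

    # Sketch: poll until the node reports Ready, as the test driver appears to do.
    until kubectl get node kube.my.xzy | grep -w Ready; do
      sleep 1
    done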
kube# [ 13.124530] systemd[1]: kubelet.service: Service RestartSec=1s expired, scheduling restart.
kube# [ 13.124995] systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
kube# [ 13.125580] systemd[1]: Stopped Kubernetes Kubelet Service.
kube# [ 13.127347] systemd[1]: Starting Kubernetes Kubelet Service...
kube# [ 13.131503] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1564]: Seeding docker image: /nix/store/zfkrn9hmwiay3fwj7ilwv52rd6myn4i1-docker-image-pause.tar.gz
kube# [ 13.413330] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1564]: Loaded image: pause:latest
kube# [ 13.416391] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1564]: Seeding docker image: /nix/store/ggrzs3gzv69xzk02ckzijc2caqv738kk-docker-image-coredns-coredns-1.5.0.tar
kube# [ 13.532912] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1564]: Loaded image: coredns/coredns:1.5.0
kube# [ 13.541454] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1564]: Linking cni package: /nix/store/9pqia3j6lxz57qa36w2niphr1f5vsirr-cni-plugins-0.8.2
kube# [ 13.548361] systemd[1]: Started Kubernetes Kubelet Service.
kube# [ 13.607750] kubelet[1624]: Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.607928] kubelet[1624]: Flag --authentication-token-webhook has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.608347] kubelet[1624]: Flag --authentication-token-webhook-cache-ttl has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.608559] kubelet[1624]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.608764] kubelet[1624]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.608980] kubelet[1624]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.609277] kubelet[1624]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.609539] kubelet[1624]: Flag --hairpin-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.609759] kubelet[1624]: Flag --healthz-bind-address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.610036] kubelet[1624]: Flag --healthz-port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.610479] kubelet[1624]: Flag --port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.610653] kubelet[1624]: Flag --tls-cert-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.610854] kubelet[1624]: Flag --tls-private-key-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.611033] kubelet[1624]: F0127 01:25:20.546708 1624 server.go:253] unable to load client CA file /var/lib/kubernetes/secrets/ca.pem: open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube# [ 13.632529] systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
kube# [ 13.632806] systemd[1]: kubelet.service: Failed with result 'exit-code'.
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube: exit status 1
(0.05 seconds)
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube: exit status 1
(0.04 seconds)
kube# [ 14.883532] systemd[1]: kubelet.service: Service RestartSec=1s expired, scheduling restart.
kube# [ 14.883830] systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
kube# [ 14.884300] systemd[1]: kube-certmgr-bootstrap.service: Service RestartSec=10s expired, scheduling restart.
kube# [ 14.884562] systemd[1]: kube-certmgr-bootstrap.service: Scheduled restart job, restart counter is at 1.
kube# [ 14.884905] systemd[1]: Stopped Kubernetes certmgr bootstrapper.
kube# [ 14.887501] systemd[1]: Started Kubernetes certmgr bootstrapper.
kube# [ 14.887733] systemd[1]: Stopped Kubernetes Kubelet Service.
kube# [ 14.889363] systemd[1]: Starting Kubernetes Kubelet Service...
kube# [ 14.894092] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1682]: Seeding docker image: /nix/store/zfkrn9hmwiay3fwj7ilwv52rd6myn4i1-docker-image-pause.tar.gz
kube# [ 14.905776] s2mx5lspihy60gwff025p4dbs08yzgyi-unit-script-kube-certmgr-bootstrap-start[1681]: % Total % Received % Xferd Average Speed Time Time Time Current
kube# [ 14.906047] s2mx5lspihy60gwff025p4dbs08yzgyi-unit-script-kube-certmgr-bootstrap-start[1681]: Dload Upload Total Spent Left Speed
kube# [ 14.925124] cfssl[1040]: 2020/01/27 01:25:21 [INFO] 192.168.1.1:47488 - "POST /api/v1/cfssl/info" 200
kube# s2mx5lspihy60gwff025p4dbs08yzgyi-unit-script-kube-certmgr-bootstrap-start[1681]: 100  1434  100  1432  100     2  75368    105 --:--:-- --:--:-- --:--:-- 75473
kube# [ 14.931289] systemd[1]: kube-certmgr-bootstrap.service: Succeeded.
kube# [ 14.931630] systemd[1]: kube-certmgr-bootstrap.service: Consumed 19ms CPU time, received 3.5K IP traffic, sent 1.6K IP traffic.
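The bootstrap unit succeeds here because all it has to do is fetch the CA certificate from the cfssl server via the /api/v1/cfssl/info endpoint seen in the cfssl log line above. Roughly, it does something like the following sketch; the host, port, label, and jq post-processing are assumptions, and only the endpoint and the output path come from this log:

    # Sketch: fetch the CA from cfssl's info endpoint and install it where the kubeconfigs expect it.
    curl -s -d '{"label": ""}' http://192.168.1.1:8888/api/v1/cfssl/info \
      | jq -r '.result.certificate' > /var/lib/kubernetes/secrets/ca.pem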
kube# [ 15.168574] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1682]: Loaded image: pause:latest
kube# [ 15.171596] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1682]: Seeding docker image: /nix/store/ggrzs3gzv69xzk02ckzijc2caqv738kk-docker-image-coredns-coredns-1.5.0.tar
kube# [ 15.272330] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1682]: Loaded image: coredns/coredns:1.5.0
kube# [ 15.280306] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1682]: Linking cni package: /nix/store/9pqia3j6lxz57qa36w2niphr1f5vsirr-cni-plugins-0.8.2
kube# [ 15.286862] systemd[1]: Started Kubernetes Kubelet Service.
kube# [ 15.335435] kubelet[1747]: Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.335604] kubelet[1747]: Flag --authentication-token-webhook has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.335888] kubelet[1747]: Flag --authentication-token-webhook-cache-ttl has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.336068] kubelet[1747]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.336390] kubelet[1747]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.336575] kubelet[1747]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.336781] kubelet[1747]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.336958] kubelet[1747]: Flag --hairpin-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.337215] kubelet[1747]: Flag --healthz-bind-address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.337474] kubelet[1747]: Flag --healthz-port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.337648] kubelet[1747]: Flag --port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.337839] kubelet[1747]: Flag --tls-cert-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.338073] kubelet[1747]: Flag --tls-private-key-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.369220] systemd[1]: kube-addon-manager.service: Service RestartSec=10s expired, scheduling restart.
kube# [ 15.369429] systemd[1]: kube-addon-manager.service: Scheduled restart job, restart counter is at 1.
kube# [ 15.369963] systemd[1]: Stopped Kubernetes addon manager.
kube# [ 15.371905] systemd[1]: Starting Kubernetes addon manager...
kube# [ 15.373490] systemd[1]: Started Kubernetes systemd probe.
kube# [ 15.377813] kubelet[1747]: I0127 01:25:22.316909 1747 server.go:425] Version: v1.15.6
kube# [ 15.378135] kubelet[1747]: I0127 01:25:22.317045 1747 plugins.go:103] No cloud provider specified.
kube# [ 15.384382] systemd[1]: run-ra598c198ddbd4fe498f34488154fe9df.scope: Succeeded.
kube# [ 15.385605] kubelet[1747]: F0127 01:25:22.324705 1747 server.go:273] failed to run Kubelet: invalid kubeconfig: invalid configuration: [unable to read client-cert /var/lib/kubernetes/secrets/kubelet-client.pem for kubelet due to open /var/lib/kubernetes/secrets/kubelet-client.pem: no such file or directory, unable to read client-key /var/lib/kubernetes/secrets/kubelet-client-key.pem for kubelet due to open /var/lib/kubernetes/secrets/kubelet-client-key.pem: no such file or directory]
kube# [ 15.391143] systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
kube# [ 15.391485] systemd[1]: kubelet.service: Failed with result 'exit-code'.
kube# [ 15.414350] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[1766]: Error in configuration:
kube# [ 15.414531] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[1766]: * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# [ 15.414863] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[1766]: * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# [ 15.419791] systemd[1]: kube-addon-manager.service: Control process exited, code=exited, status=1/FAILURE
kube# [ 15.420038] systemd[1]: kube-addon-manager.service: Failed with result 'exit-code'.
kube# [ 15.420859] systemd[1]: Failed to start Kubernetes addon manager.
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube: exit status 1
(0.04 seconds)
kube# [ 15.983418] systemd[1]: kube-proxy.service: Service RestartSec=5s expired, scheduling restart.
kube# [ 15.983670] systemd[1]: kube-proxy.service: Scheduled restart job, restart counter is at 2.
kube# [ 15.983978] systemd[1]: Stopped Kubernetes Proxy Service.
kube# [ 15.986063] systemd[1]: Started Kubernetes Proxy Service.
kube# [ 16.016715] kube-proxy[1805]: W0127 01:25:22.955470 1805 server.go:216] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
kube# [ 16.028405] kube-proxy[1805]: W0127 01:25:22.967513 1805 proxier.go:500] Failed to read file /lib/modules/4.19.95/modules.builtin with error open /lib/modules/4.19.95/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 16.030918] kube-proxy[1805]: W0127 01:25:22.970025 1805 proxier.go:513] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 16.032763] kube-proxy[1805]: W0127 01:25:22.971884 1805 proxier.go:513] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 16.034596] kube-proxy[1805]: W0127 01:25:22.973719 1805 proxier.go:513] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 16.036602] kube-proxy[1805]: W0127 01:25:22.975708 1805 proxier.go:513] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 16.038523] kube-proxy[1805]: W0127 01:25:22.977624 1805 proxier.go:513] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 16.048472] kube-proxy[1805]: F0127 01:25:22.987584 1805 server.go:449] invalid configuration: [unable to read client-cert /var/lib/kubernetes/secrets/kube-proxy-client.pem for kube-proxy due to open /var/lib/kubernetes/secrets/kube-proxy-client.pem: no such file or directory, unable to read client-key /var/lib/kubernetes/secrets/kube-proxy-client-key.pem for kube-proxy due to open /var/lib/kubernetes/secrets/kube-proxy-client-key.pem: no such file or directory]
kube# [ 16.052583] systemd[1]: kube-proxy.service: Main process exited, code=exited, status=255/EXCEPTION
kube# [ 16.052795] systemd[1]: kube-proxy.service: Failed with result 'exit-code'.
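The modprobe warnings above are expected in this minimal VM: without the ip_vs* modules kube-proxy cannot use IPVS and typically falls back to iptables mode, as its own log message suggests; the fatal error is, once again, only the missing client certificate. A sketch for checking which of the probed modules are actually available (module names taken from the log):

    # Sketch: probe the IPVS-related modules kube-proxy tried to load.
    for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do
      modprobe "$m" 2>/dev/null && echo "loaded: $m" || echo "missing: $m"
    done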
kube# [ 16.607355] systemd[1]: certmgr.service: Service RestartSec=10s expired, scheduling restart.
kube# [ 16.607617] systemd[1]: certmgr.service: Scheduled restart job, restart counter is at 1.
kube# [ 16.608418] systemd[1]: kubelet.service: Service RestartSec=1s expired, scheduling restart.
kube# [ 16.608881] systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
kube# [ 16.609228] systemd[1]: kube-apiserver.service: Service RestartSec=5s expired, scheduling restart.
kube# [ 16.609489] systemd[1]: kube-apiserver.service: Scheduled restart job, restart counter is at 2.
kube# [ 16.609768] systemd[1]: Stopped certmgr.
kube# [ 16.612314] systemd[1]: Starting certmgr...
kube# [ 16.613905] systemd[1]: Started Kubernetes certmgr bootstrapper.
kube# [ 16.614158] systemd[1]: Stopped Kubernetes Kubelet Service.
kube# [ 16.614522] systemd[1]: Stopped Kubernetes APIServer Service.
kube# [ 16.615973] systemd[1]: Started Kubernetes APIServer Service.
kube# [ 16.617159] systemd[1]: Starting Kubernetes Kubelet Service...
kube# [ 16.622453] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1829]: Seeding docker image: /nix/store/zfkrn9hmwiay3fwj7ilwv52rd6myn4i1-docker-image-pause.tar.gz
kube# [ 16.633736] systemd[1]: kube-certmgr-bootstrap.service: Succeeded.
kube# [ 16.634937] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1826]: 2020/01/27 01:25:23 [INFO] certmgr: loading from config file /nix/store/bmm143bjzpgvrw7k50r36c5smy1n4pqm-certmgr.yaml
kube# [ 16.635274] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1826]: 2020/01/27 01:25:23 [INFO] manager: loading certificates from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d
kube# [ 16.638306] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1826]: 2020/01/27 01:25:23 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/addonManager.json
kube# [ 16.657936] cfssl[1040]: 2020/01/27 01:25:23 [INFO] 192.168.1.1:47490 - "POST /api/v1/cfssl/info" 200
kube# [ 16.676888] cfssl[1040]: 2020/01/27 01:25:23 [INFO] 192.168.1.1:47492 - "POST /api/v1/cfssl/info" 200
kube# [ 16.681970] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1826]: 2020/01/27 01:25:23 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/apiServer.json
kube# [ 16.687242] cfssl[1040]: 2020/01/27 01:25:23 [INFO] 192.168.1.1:47494 - "POST /api/v1/cfssl/info" 200
kube# [ 16.692367] kube-apiserver[1828]: Flag --insecure-bind-address has been deprecated, This flag will be removed in a future version.
kube# [ 16.692534] kube-apiserver[1828]: Flag --insecure-port has been deprecated, This flag will be removed in a future version.
kube# [ 16.692914] kube-apiserver[1828]: I0127 01:25:23.631230 1828 server.go:560] external host was not specified, using 192.168.1.1
kube# [ 16.693151] kube-apiserver[1828]: I0127 01:25:23.631424 1828 server.go:147] Version: v1.15.6
kube# [ 16.693506] kube-apiserver[1828]: Error: unable to load server certificate: open /var/lib/kubernetes/secrets/kube-apiserver.pem: no such file or directory
kube# [ 16.697131] kube-apiserver[1828]: Usage:
kube# [ 16.697515] kube-apiserver[1828]: kube-apiserver [flags]
kube# [ 16.697601] kube-apiserver[1828]: Generic flags:
kube# [ 16.697813] kube-apiserver[1828]: --advertise-address ip The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.
kube# [ 16.697997] kube-apiserver[1828]: --cloud-provider-gce-lb-src-cidrs cidrs CIDRs opened in GCE firewall for LB traffic proxy & health checks (default 130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16)
kube# [ 16.698272] kube-apiserver[1828]: --cors-allowed-origins strings List of allowed origins for CORS, comma separated. An allowed origin can be a regular expression to support subdomain matching. If this list is empty CORS will not be enabled.
kube# [ 16.698544] kube-apiserver[1828]: --default-not-ready-toleration-seconds int Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration. (default 300)
kube# [ 16.698724] kube-apiserver[1828]: --default-unreachable-toleration-seconds int Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration. (default 300)
kube# [ 16.698913] kube-apiserver[1828]: --enable-inflight-quota-handler If true, replace the max-in-flight handler with an enhanced one that queues and dispatches with priority and fairness
kube# [ 16.699160] kube-apiserver[1828]: --external-hostname string The hostname to use when generating externalized URLs for this master (e.g. Swagger API Docs).
kube# [ 16.699452] kube-apiserver[1828]: --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
kube# [ 16.699642] kube-apiserver[1828]: APIListChunking=true|false (BETA - default=true)
kube# [ 16.699844] kube-apiserver[1828]: APIResponseCompression=true|false (ALPHA - default=false)
kube# [ 16.700152] kube-apiserver[1828]: AllAlpha=true|false (ALPHA - default=false)
kube# [ 16.700451] kube-apiserver[1828]: AppArmor=true|false (BETA - default=true)
kube# [ 16.700685] kube-apiserver[1828]: AttachVolumeLimit=true|false (BETA - default=true)
kube# [ 16.700872] kube-apiserver[1828]: BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
kube# [ 16.701062] kube-apiserver[1828]: BlockVolume=true|false (BETA - default=true)
kube# [ 16.701410] kube-apiserver[1828]: BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
kube# [ 16.701592] kube-apiserver[1828]: CPUManager=true|false (BETA - default=true)
kube# [ 16.701798] kube-apiserver[1828]: CRIContainerLogRotation=true|false (BETA - default=true)
kube# [ 16.701980] kube-apiserver[1828]: CSIBlockVolume=true|false (BETA - default=true)
kube# [ 16.702249] kube-apiserver[1828]: CSIDriverRegistry=true|false (BETA - default=true)
kube# [ 16.702516] kube-apiserver[1828]: CSIInlineVolume=true|false (ALPHA - default=false)
kube# [ 16.702711] kube-apiserver[1828]: CSIMigration=true|false (ALPHA - default=false)
kube# [ 16.702902] kube-apiserver[1828]: CSIMigrationAWS=true|false (ALPHA - default=false)
kube# [ 16.703099] kube-apiserver[1828]: CSIMigrationAzureDisk=true|false (ALPHA - default=false)
kube# [ 16.703439] kube-apiserver[1828]: CSIMigrationAzureFile=true|false (ALPHA - default=false)
kube# [ 16.703629] kube-apiserver[1828]: CSIMigrationGCE=true|false (ALPHA - default=false)
kube# [ 16.703832] kube-apiserver[1828]: CSIMigrationOpenStack=true|false (ALPHA - default=false)
kube# [ 16.704154] kube-apiserver[1828]: CSINodeInfo=true|false (BETA - default=true)
kube# [ 16.704356] kube-apiserver[1828]: CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
kube# [ 16.704541] kube-apiserver[1828]: CustomResourceDefaulting=true|false (ALPHA - default=false)
kube# [ 16.728157] cfssl[1040]: 2020/01/27 01:25:23 [INFO] 192.168.1.1:47496 - "POST /api/v1/cfssl/info" 200
kube# [ 16.728502] kube-apiserver[1828]: CustomResourcePublishOpenAPI=true|false (BETA - default=true)
kube# [ 16.728751] kube-apiserver[1828]: CustomResourceSubresources=true|false (BETA - default=true)
kube# [ 16.728960] kube-apiserver[1828]: CustomResourceValidation=true|false (BETA - default=true)
kube# [ 16.729152] kube-apiserver[1828]: CustomResourceWebhookConversion=true|false (BETA - default=true)
kube# [ 16.729385] kube-apiserver[1828]: DebugContainers=true|false (ALPHA - default=false)
kube# [ 16.729568] kube-apiserver[1828]: DevicePlugins=true|false (BETA - default=true)
kube# [ 16.729746] kube-apiserver[1828]: DryRun=true|false (BETA - default=true)
kube# [ 16.729935] kube-apiserver[1828]: DynamicAuditing=true|false (ALPHA - default=false)
kube# [ 16.730145] kube-apiserver[1828]: DynamicKubeletConfig=true|false (BETA - default=true)
kube# [ 16.730376] kube-apiserver[1828]: ExpandCSIVolumes=true|false (ALPHA - default=false)
kube# [ 16.730540] kube-apiserver[1828]: ExpandInUsePersistentVolumes=true|false (BETA - default=true)
kube# [ 16.730725] kube-apiserver[1828]: ExpandPersistentVolumes=true|false (BETA - default=true)
kube# [ 16.730921] kube-apiserver[1828]: ExperimentalCriticalPodAnnotation=true|false (ALPHA - default=false)
kube# [ 16.731130] kube-apiserver[1828]: ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
kube# [ 16.731451] kube-apiserver[1828]: HyperVContainer=true|false (ALPHA - default=false)
kube# [ 16.731630] kube-apiserver[1828]: KubeletPodResources=true|false (BETA - default=true)
kube# [ 16.731820] kube-apiserver[1828]: LocalStorageCapacityIsolation=true|false (BETA - default=true)
kube# [ 16.732013] kube-apiserver[1828]: LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
kube# [ 16.732305] kube-apiserver[1828]: MountContainers=true|false (ALPHA - default=false)
kube# [ 16.732514] kube-apiserver[1828]: NodeLease=true|false (BETA - default=true)
kube# [ 16.802921] serial8250: too much work for irq4
kube# [ 16.732697] kube-apiserver[1828]: NonPreemptingPriority=true|false (ALPHA - default=false)
kube# [ 16.732884] kube-apiserver[1828]: PodShareProcessNamespace=true|false (BETA - default=true)
kube# [ 16.733088] kube-apiserver[1828]: ProcMountType=true|false (ALPHA - default=false)
kube# [ 16.733413] kube-apiserver[1828]: QOSReserved=true|false (ALPHA - default=false)
kube# [ 16.733592] kube-apiserver[1828]: RemainingItemCount=true|false (ALPHA - default=false)
kube# [ 16.733776] kube-apiserver[1828]: RequestManagement=true|false (ALPHA - default=false)
kube# [ 16.733965] kube-apiserver[1828]: ResourceLimitsPriorityFunction=true|false (ALPHA - default=false)
kube# [ 16.734211] kube-apiserver[1828]: ResourceQuotaScopeSelectors=true|false (BETA - default=true)
kube# [ 16.734395] kube-apiserver[1828]: RotateKubeletClientCertificate=true|false (BETA - default=true)
kube# [ 16.734574] kube-apiserver[1828]: RotateKubeletServerCertificate=true|false (BETA - default=true)
kube# [ 16.734763] kube-apiserver[1828]: RunAsGroup=true|false (BETA - default=true)
kube# [ 16.734954] kube-apiserver[1828]: RuntimeClass=true|false (BETA - default=true)
kube# [ 16.735160] kube-apiserver[1828]: SCTPSupport=true|false (ALPHA - default=false)
kube# [ 16.735378] kube-apiserver[1828]: ScheduleDaemonSetPods=true|false (BETA - default=true)
kube# [ 16.759797] kube-apiserver[1828]: ServerSideApply=true|false (ALPHA - default=false)
kube# [ 16.760057] kube-apiserver[1828]: ServiceLoadBalancerFinalizer=true|false (ALPHA - default=false)
kube# [ 16.760386] kube-apiserver[1828]: ServiceNodeExclusion=true|false (ALPHA - default=false)
kube# [ 16.760707] systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=1/FAILURE
kube# [ 16.761081] kube-apiserver[1828]: StorageVersionHash=true|false (BETA - default=true)
kube# [ 16.761404] kube-apiserver[1828]: StreamingProxyRedirects=true|false (BETA - default=true)
kube# [ 16.761582] kube-apiserver[1828]: SupportNodePidsLimit=true|false (BETA - default=true)
kube# [ 16.761767] kube-apiserver[1828]: SupportPodPidsLimit=true|false (BETA - default=true)
kube# [ 16.761956] kube-apiserver[1828]: Sysctls=true|false (BETA - default=true)
kube# [ 16.762233] kube-apiserver[1828]: TTLAfterFinished=true|false (ALPHA - default=false)
kube# [ 16.762484] kube-apiserver[1828]: TaintBasedEvictions=true|false (BETA - default=true)
kube# [ 16.762690] kube-apiserver[1828]: TaintNodesByCondition=true|false (BETA - default=true)
kube# [ 16.762884] kube-apiserver[1828]: TokenRequest=true|false (BETA - default=true)
kube# [ 16.763281] kube-apiserver[1828]: TokenRequestProjection=true|false (BETA - default=true)
kube# [ 16.763511] kube-apiserver[1828]: ValidateProxyRedirects=true|false (BETA - default=true)
kube# [ 16.763795] kube-apiserver[1828]: VolumePVCDataSource=true|false (ALPHA - default=false)
kube# [ 16.764084] kube-apiserver[1828]: VolumeSnapshotDataSource=true|false (ALPHA - default=false)
kube# [ 16.764389] kube-apiserver[1828]: VolumeSubpathEnvExpansion=true|false (BETA - default=true)
kube# [ 16.764495] kube-apiserver[1828]: WatchBookmark=true|false (ALPHA - default=false)
kube# [ 16.764691] kube-apiserver[1828]: WinDSR=true|false (ALPHA - default=false)
kube# [ 16.764876] kube-apiserver[1828]: WinOverlay=true|false (ALPHA - default=false)
kube# [ 16.765069] kube-apiserver[1828]: WindowsGMSA=true|false (ALPHA - default=false)
kube# [ 16.765494] systemd[1]: kube-apiserver.service: Failed with result 'exit-code'.
kube# [ 16.765805] kube-apiserver[1828]: --master-service-namespace string DEPRECATED: the namespace from which the kubernetes master services should be injected into pods. (default "default")
kube# [ 16.766017] kube-apiserver[1828]: --max-mutating-requests-inflight int The maximum number of mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. (default 200)
kube# [ 16.766250] kube-apiserver[1828]: --max-requests-inflight int The maximum number of non-mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. (default 400)
kube# [ 16.766525] kube-apiserver[1828]: --min-request-timeout int An optional field indicating the minimum number of seconds a handler must keep a request open before timing it out. Currently only honored by the watch request handler, which picks a randomized value above this number as the connection timeout, to spread out load. (default 1800)
kube# [ 16.766699] kube-apiserver[1828]: --request-timeout duration An optional field indicating the duration a handler must keep a request open before timing it out. This is the default request timeout for requests but may be overridden by flags such as --min-request-timeout for specific types of requests. (default 1m0s)
kube# [ 16.766877] kube-apiserver[1828]: --target-ram-mb int Memory limit for apiserver in MB (used to configure sizes of caches, etc.)
kube# [ 16.767085] kube-apiserver[1828]: Etcd flags:
kube# [ 16.767410] kube-apiserver[1828]: --default-watch-cache-size int Default watch cache size. If zero, watch cache will be disabled for resources that do not have a default watch size set. (default 100)
kube# [ 16.791442] kube-apiserver[1828]: --delete-collection-workers int Number of workers spawned for DeleteCollection call. These are used to speed up namespace cleanup. (default 1)
kube# [ 16.791698] kube-apiserver[1828]: --enable-garbage-collector Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-controller-manager. (default true)
kube# [ 16.791860] kube-apiserver[1828]: --encryption-provider-config string The file containing configuration for encryption providers to be used for storing secrets in etcd
kube# [ 16.792114] kube-apiserver[1828]: --etcd-cafile string SSL Certificate Authority file used to secure etcd communication.
kube# [ 16.792357] kube-apiserver[1828]: --etcd-certfile string SSL certification file used to secure etcd communication.
kube# [ 16.792545] kube-apiserver[1828]: --etcd-compaction-interval duration The interval of compaction requests. If 0, the compaction request from apiserver is disabled. (default 5m0s)
kube# [ 16.854262] serial8250: too much work for irq4
kube# [ 16.792740] kube-apiserver[1828]: --etcd-count-metric-poll-period duration Frequency of polling etcd for number of resources per type. 0 disables the metric collection. (default 1m0s)
kube# [ 16.792951] kube-apiserver[1828]: --etcd-keyfile string SSL key file used to secure etcd communication.
kube# [ 16.793150] kube-apiserver[1828]: --etcd-prefix string The prefix to prepend to all resource paths in etcd. (default "/registry")
kube# [ 16.793369] kube-apiserver[1828]: --etcd-servers strings List of etcd servers to connect with (scheme://ip:port), comma separated.
kube# [ 16.793551] kube-apiserver[1828]: --etcd-servers-overrides strings Per-resource etcd servers overrides, comma separated. The individual override format: group/resource#servers, where servers are URLs, semicolon separated.
kube# [ 16.793744] kube-apiserver[1828]: --storage-backend string The storage backend for persistence. Options: 'etcd3' (default).
kube# [ 16.793942] kube-apiserver[1828]: --storage-media-type string The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting. (default "application/vnd.kubernetes.protobuf")
kube# [ 16.794139] kube-apiserver[1828]: --watch-cache Enable watch caching in the apiserver (default true)
kube# [ 16.794402] kube-apiserver[1828]: --watch-cache-sizes strings Watch cache size settings for some resources (pods, nodes, etc.), comma separated. The individual setting format: resource[.group]#size, where resource is lowercase plural (no version), group is omitted for resources of apiVersion v1 (the legacy core API) and included for others, and size is a number. It takes effect when watch-cache is enabled. Some resources (replicationcontrollers, endpoints, nodes, pods, services, apiservices.apiregistration.k8s.io) have system defaults set by heuristics, others default to default-watch-cache-size
kube# [ 16.794652] kube-apiserver[1828]: Secure serving flags:
kube# [ 16.794832] kube-apiserver[1828]: --bind-address ip The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank, all interfaces will be used (0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 0.0.0.0)
kube# [ 16.795038] kube-apiserver[1828]: --cert-dir string The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored. (default "/var/run/kubernetes")
kube# [ 16.795255] kube-apiserver[1828]: --http2-max-streams-per-connection int The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
kube# [ 16.795429] kube-apiserver[1828]: --secure-port int The port on which to serve HTTPS with authentication and authorization.It cannot be switched off with 0. (default 6443)
kube# [ 16.795618] kube-apiserver[1828]: --tls-cert-file string File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# [ 16.820156] kube-apiserver[1828]: --tls-cipher-suites strings Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be use. Possible values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA
kube# [ 16.820519] kube-apiserver[1828]: --tls-min-version string Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
kube# [ 16.820699] kube-apiserver[1828]: --tls-private-key-file string File containing the default x509 private key matching --tls-cert-file.
kube# [ 16.820929] kube-apiserver[1828]: --tls-sni-cert-key namedCertKey A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])
kube# [ 16.821238] kube-apiserver[1828]: Insecure serving flags:
kube# [ 16.821497] kube-apiserver[1828]: --address ip The IP address on which to serve the insecure --port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 127.0.0.1) (DEPRECATED: see --bind-address instead.)
kube# [ 16.821715] kube-apiserver[1828]: --insecure-bind-address ip The IP address on which to serve the --insecure-port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 127.0.0.1) (DEPRECATED: This flag will be removed in a future version.)
kube# [ 16.821963] kube-apiserver[1828]: --insecure-port int The port on which to serve unsecured, unauthenticated access. (default 8080) (DEPRECATED: This flag will be removed in a future version.)
kube# [ 16.822161] kube-apiserver[1828]: --port int The port on which to serve unsecured, unauthenticated access. Set to 0 to disable. (default 8080) (DEPRECATED: see --secure-port instead.)
kube# [ 16.822375] kube-apiserver[1828]: Auditing flags:
kube# [ 16.822566] kube-apiserver[1828]: --audit-dynamic-configuration Enables dynamic audit configuration. This feature also requires the DynamicAuditing feature flag
kube# [ 16.822747] kube-apiserver[1828]: --audit-log-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
kube# [ 16.822941] kube-apiserver[1828]: --audit-log-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 1)
kube# [ 16.823135] kube-apiserver[1828]: --audit-log-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.
kube# [ 16.823361] kube-apiserver[1828]: --audit-log-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.
kube# [ 16.823549] kube-apiserver[1828]: --audit-log-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode.
kube# [ 16.823736] kube-apiserver[1828]: --audit-log-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode.
kube# [ 16.823919] kube-apiserver[1828]: --audit-log-format string Format of saved audits. "legacy" indicates 1-line text format for each event. "json" indicates structured json format. Known formats are legacy,json. (default "json")
kube# [ 16.847750] kube-apiserver[1828]: --audit-log-maxage int The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.
kube# [ 16.905187] serial8250: too much work for irq4
kube# [ 16.847977] kube-apiserver[1828]: --audit-log-maxbackup int The maximum number of old audit log files to retain.
kube# [ 16.848338] kube-apiserver[1828]: --audit-log-maxsize int The maximum size in megabytes of the audit log file before it gets rotated.
kube# [ 16.848522] kube-apiserver[1828]: --audit-log-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "blocking")
kube# [ 16.848721] kube-apiserver[1828]: --audit-log-path string If set, all requests coming to the apiserver will be logged to this file. '-' means standard out.
kube# [ 16.848906] kube-apiserver[1828]: --audit-log-truncate-enabled Whether event and batch truncating is enabled.
kube# [ 16.849094] kube-apiserver[1828]: --audit-log-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
kube# [ 16.849408] kube-apiserver[1828]: --audit-log-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
kube# [ 16.849592] kube-apiserver[1828]: --audit-log-version string API group and version used for serializing audit events written to log. (default "audit.k8s.io/v1")
kube# [ 16.849781] kube-apiserver[1828]: --audit-policy-file string Path to the file that defines the audit policy configuration.
kube# [ 16.849969] kube-apiserver[1828]: --audit-webhook-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
kube# [ 16.850212] kube-apiserver[1828]: --audit-webhook-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 400)
kube# [ 16.851017] kube-apiserver[1828]: --audit-webhook-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode. (default 30s)
kube# [ 16.851398] kube-apiserver[1828]: --audit-webhook-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode. (default 15)
kube# [ 16.851587] kube-apiserver[1828]: --audit-webhook-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode. (default true)
kube# [ 16.851783] kube-apiserver[1828]: --audit-webhook-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode. (default 10)
kube# [ 16.851990] kube-apiserver[1828]: --audit-webhook-config-file string Path to a kubeconfig formatted file that defines the audit webhook configuration.
kube# [ 16.852161] kube-apiserver[1828]: --audit-webhook-initial-backoff duration The amount of time to wait before retrying the first failed request. (default 10s)
kube# [ 16.852379] kube-apiserver[1828]: --audit-webhook-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "batch")
kube# [ 16.852567] kube-apiserver[1828]: --audit-webhook-truncate-enabled Whether event and batch truncating is enabled.
kube# [ 16.852751] kube-apiserver[1828]: --audit-webhook-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
kube# [ 16.852952] kube-apiserver[1828]: --audit-webhook-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
kube# [ 16.877001] kube-apiserver[1828]: --audit-webhook-version string API group and version used for serializing audit events written to webhook. (default "audit.k8s.io/v1")
kube# [ 16.877226] kube-apiserver[1828]: Features flags:
kube# [ 16.877533] kube-apiserver[1828]: --contention-profiling Enable lock contention profiling, if profiling is enabled
kube# [ 16.877701] kube-apiserver[1828]: --profiling Enable profiling via web interface host:port/debug/pprof/ (default true)
kube# [ 16.877895] kube-apiserver[1828]: Authentication flags:
kube# [ 16.878098] kube-apiserver[1828]: --anonymous-auth Enables anonymous requests to the secure port of the API server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated. (default true)
kube# [ 16.878410] kube-apiserver[1828]: --api-audiences strings Identifiers of the API. The service account token authenticator will validate that tokens used against the API are bound to at least one of these audiences. If the --service-account-issuer flag is configured and this flag is not, this field defaults to a single element list containing the issuer URL .
kube# [ 16.878590] kube-apiserver[1828]: --authentication-token-webhook-cache-ttl duration The duration to cache responses from the webhook token authenticator. (default 2m0s)
kube# [ 16.878780] kube-apiserver[1828]: --authentication-token-webhook-config-file string File with webhook configuration for token authentication in kubeconfig format. The API server will query the remote service to determine authentication for bearer tokens.
kube# [ 16.878974] kube-apiserver[1828]: --basic-auth-file string If set, the file that will be used to admit requests to the secure port of the API server via http basic authentication.
kube# [ 16.879243] kube-apiserver[1828]: --client-ca-file string If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
kube# [ 16.879480] kube-apiserver[1828]: --enable-bootstrap-token-auth Enable to allow secrets of type 'bootstrap.kubernetes.io/token' in the 'kube-system' namespace to be used for TLS bootstrapping authentication.
kube# [ 16.879727] kube-apiserver[1828]: --oidc-ca-file string If set, the OpenID server's certificate will be verified by one of the authorities in the oidc-ca-file, otherwise the host's root CA set will be used.
kube# [ 16.879969] kube-apiserver[1828]: --oidc-client-id string The client ID for the OpenID Connect client, must be set if oidc-issuer-url is set.
kube# [ 16.880152] kube-apiserver[1828]: --oidc-groups-claim string If provided, the name of a custom OpenID Connect claim for specifying user groups. The claim value is expected to be a string or array of strings. This flag is experimental, please see the authentication documentation for further details.
kube# [ 16.880363] kube-apiserver[1828]: --oidc-groups-prefix string If provided, all groups will be prefixed with this value to prevent conflicts with other authentication strategies.
kube# [ 16.956256] serial8250: too much work for irq4
kube# [ 16.880550] kube-apiserver[1828]: --oidc-issuer-url string The URL of the OpenID issuer, only HTTPS scheme will be accepted. If set, it will be used to verify the OIDC JSON Web Token (JWT).
kube# [ 16.880738] kube-apiserver[1828]: --oidc-required-claim mapStringString A key=value pair that describes a required claim in the ID Token. If set, the claim is verified to be present in the ID Token with a matching value. Repeat this flag to specify multiple claims.
kube# [ 16.880925] kube-apiserver[1828]: --oidc-signing-algs strings Comma-separated list of allowed JOSE asymmetric signing algorithms. JWTs with a 'alg' header value not in this list will be rejected. Values are defined by RFC 7518 https://tools.ietf.org/html/rfc7518#section-3.1. (default [RS256])
kube# [ 16.905114] kube-apiserver[1828]: --oidc-username-claim string The OpenID claim to use as the user name. Note that claims other than the default ('sub') is not guaranteed to be unique and immutable. This flag is experimental, please see the authentication documentation for further details. (default "sub")
kube# [ 16.905565] kube-apiserver[1828]: --oidc-username-prefix string If provided, all usernames will be prefixed with this value. If not provided, username claims other than 'email' are prefixed by the issuer URL to avoid clashes. To skip any prefixing, provide the value '-'.
kube# [ 16.905827] kube-apiserver[1828]: --requestheader-allowed-names strings List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
kube# [ 16.906006] kube-apiserver[1828]: --requestheader-client-ca-file string Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
kube# [ 16.906252] kube-apiserver[1828]: --requestheader-extra-headers-prefix strings List of request header prefixes to inspect. X-Remote-Extra- is suggested.
kube# [ 16.906516] kube-apiserver[1828]: --requestheader-group-headers strings List of request headers to inspect for groups. X-Remote-Group is suggested.
kube# [ 16.906695] kube-apiserver[1828]: --requestheader-username-headers strings List of request headers to inspect for usernames. X-Remote-User is common.
kube# [ 16.906890] kube-apiserver[1828]: --service-account-issuer string Identifier of the service account token issuer. The issuer will assert this identifier in "iss" claim of issued tokens. This value is a string or URI.
kube# [ 16.907079] kube-apiserver[1828]: --service-account-key-file stringArray File containing PEM-encoded x509 RSA or ECDSA private or public keys, used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. If unspecified, --tls-private-key-file is used. Must be specified when --service-account-signing-key is provided
kube# [ 16.907402] kube-apiserver[1828]: --service-account-lookup If true, validate ServiceAccount tokens exist in etcd as part of authentication. (default true)
kube# [ 16.907623] kube-apiserver[1828]: --service-account-max-token-expiration duration The maximum validity duration of a token created by the service account token issuer. If an otherwise valid TokenRequest with a validity duration larger than this value is requested, a token will be issued with a validity duration of this value.
kube# [ 16.907872] kube-apiserver[1828]: --token-auth-file string If set, the file that will be used to secure the secure port of the API server via token authentication.
kube# [ 16.908069] kube-apiserver[1828]: Authorization flags:
kube# [ 16.908400] kube-apiserver[1828]: --authorization-mode strings Ordered list of plug-ins to do authorization on secure port. Comma-delimited list of: AlwaysAllow,AlwaysDeny,ABAC,Webhook,RBAC,Node. (default [AlwaysAllow])
kube# [ 16.908584] kube-apiserver[1828]: --authorization-policy-file string File with authorization policy in json line by line format, used with --authorization-mode=ABAC, on the secure port.
kube# [ 16.908784] kube-apiserver[1828]: --authorization-webhook-cache-authorized-ttl duration The duration to cache 'authorized' responses from the webhook authorizer. (default 5m0s)
kube# [ 16.908968] kube-apiserver[1828]: --authorization-webhook-cache-unauthorized-ttl duration The duration to cache 'unauthorized' responses from the webhook authorizer. (default 30s)
kube# [ 16.909193] kube-apiserver[1828]: --authorization-webhook-config-file string File with webhook configuration in kubeconfig format, used with --authorization-mode=Webhook. The API server will query the remote service to determine access on the API server's secure port.
kube# [ 16.932706] kube-apiserver[1828]: Cloud provider flags:
kube# [ 16.932875] kube-apiserver[1828]: --cloud-config string The path to the cloud provider configuration file. Empty string for no configuration file.
kube# [ 16.933064] kube-apiserver[1828]: --cloud-provider string The provider for cloud services. Empty string for no provider.
kube# [ 16.933368] kube-apiserver[1828]: Api enablement flags:
kube# [ 16.933602] kube-apiserver[1828]: --runtime-config mapStringString A set of key=value pairs that describe runtime configuration that may be passed to apiserver. <group>/<version> (or <version> for the core group) key can be used to turn on/off specific api versions. api/all is special key to control all api versions, be careful setting it false, unless you know what you do. api/legacy is deprecated, we will remove it in the future, so stop using it. (default )
kube# [ 16.933847] kube-apiserver[1828]: Admission flags:
kube# [ 16.934046] kube-apiserver[1828]: --admission-control strings Admission is divided into two phases. In the first phase, only mutating admission plugins run. In the second phase, only validating admission plugins run. The names in the below list may represent a validating plugin, a mutating plugin, or both. The order of plugins in which they are passed to this flag does not matter. Comma-delimited list of: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. (DEPRECATED: Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.)
kube# [ 16.934458] kube-apiserver[1828]: --admission-control-config-file string File with admission control configuration.
kube# [ 16.934656] kube-apiserver[1828]: --disable-admission-plugins strings admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
kube# [ 17.006806] serial8250: too much work for irq4
kube# [ 16.935041] kube-apiserver[1828]: --enable-admission-plugins strings admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
kube# [ 16.959467] kube-apiserver[1828]: Misc flags:
kube# [ 16.959747] kube-apiserver[1828]: --allow-privileged If true, allow privileged containers. [default=false]
kube# [ 16.959909] kube-apiserver[1828]: --apiserver-count int The number of apiservers running in the cluster, must be a positive number. (In use when --endpoint-reconciler-type=master-count is enabled.) (default 1)
kube# [ 16.960109] kube-apiserver[1828]: --enable-aggregator-routing Turns on aggregator routing requests to endpoints IP rather than cluster IP.
kube# [ 16.960434] kube-apiserver[1828]: --endpoint-reconciler-type string Use an endpoint reconciler (master-count, lease, none) (default "lease")
kube# [ 16.960614] kube-apiserver[1828]: --event-ttl duration Amount of time to retain events. (default 1h0m0s)
kube# [ 16.960807] kube-apiserver[1828]: --kubelet-certificate-authority string Path to a cert file for the certificate authority.
kube# [ 16.961003] kube-apiserver[1828]: --kubelet-client-certificate string Path to a client cert file for TLS.
kube# [ 16.961374] kube-apiserver[1828]: --kubelet-client-key string Path to a client key file for TLS.
kube# [ 16.961603] kube-apiserver[1828]: --kubelet-https Use https for kubelet connections. (default true)
kube# [ 16.961785] kube-apiserver[1828]: --kubelet-preferred-address-types strings List of the preferred NodeAddressTypes to use for kubelet connections. (default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP])
kube# [ 16.961971] kube-apiserver[1828]: --kubelet-read-only-port uint DEPRECATED: kubelet port. (default 10255)
kube# [ 16.962164] kube-apiserver[1828]: --kubelet-timeout duration Timeout for kubelet operations. (default 5s)
kube# [ 16.962495] kube-apiserver[1828]: --kubernetes-service-node-port int If non-zero, the Kubernetes master service (which apiserver creates/maintains) will be of type NodePort, using this as the value of the port. If zero, the Kubernetes master service will be of type ClusterIP.
kube# [ 16.962662] kube-apiserver[1828]: --max-connection-bytes-per-sec int If non-zero, throttle each user connection to this number of bytes/sec. Currently only applies to long-running requests.
kube# [ 16.962851] kube-apiserver[1828]: --proxy-client-cert-file string Client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins. It is expected that this cert includes a signature from the CA in the --requestheader-client-ca-file flag. That CA is published in the 'extension-apiserver-authentication' configmap in the kube-system namespace. Components receiving calls from kube-aggregator should use that CA to perform their half of the mutual TLS verification.
kube# [ 16.963047] kube-apiserver[1828]: --proxy-client-key-file string Private key for the client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins.
kube# [ 16.963401] kube-apiserver[1828]: --service-account-signing-key-file string Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)
kube# [ 16.963640] kube-apiserver[1828]: --service-cluster-ip-range ipNet A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods. (default 10.0.0.0/24)
kube# [ 16.963822] kube-apiserver[1828]: --service-node-port-range portRange A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)
kube# [ 16.964016] kube-apiserver[1828]: Global flags:
kube# [ 16.964245] kube-apiserver[1828]: --alsologtostderr log to standard error as well as files
kube# [ 16.987449] kube-apiserver[1828]: -h, --help help for kube-apiserver
kube# [ 16.987670] kube-apiserver[1828]: --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
kube# [ 16.987834] kube-apiserver[1828]: --log-dir string If non-empty, write log files in this directory
kube# [ 16.988014] kube-apiserver[1828]: --log-file string If non-empty, use this log file
kube# [ 16.988279] kube-apiserver[1828]: --log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
kube# [ 16.988533] kube-apiserver[1828]: --log-flush-frequency duration Maximum number of seconds between log flushes (default 5s)
kube# [ 16.988712] kube-apiserver[1828]: --logtostderr log to standard error instead of files (default true)
kube# [ 16.988910] kube-apiserver[1828]: --skip-headers If true, avoid header prefixes in the log messages
kube# [ 16.989098] kube-apiserver[1828]: --skip-log-headers If true, avoid headers when opening log files
kube# [ 16.989405] kube-apiserver[1828]: --stderrthreshold severity logs at or above this threshold go to stderr (default 2)
kube# [ 16.989583] kube-apiserver[1828]: -v, --v Level number for the log level verbosity
kube# [ 16.989778] kube-apiserver[1828]: --version version[=true] Print version information and quit
kube# [ 16.989972] kube-apiserver[1828]: --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
kube# [ 16.990295] kube-apiserver[1828]: error: unable to load server certificate: open /var/lib/kubernetes/secrets/kube-apiserver.pem: no such file or directory
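
Like kube-proxy, the apiserver is racing certmgr: its serving certificate does not exist yet, so it prints its full usage text and exits, and systemd keeps restarting it until the PEM appears. Once certmgr has fetched it, the file can be sanity-checked by hand with standard openssl (path taken from the error message):

openssl x509 -in /var/lib/kubernetes/secrets/kube-apiserver.pem -noout -subject -dates
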
kube# [ 17.006627] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1826]: 2020/01/27 01:25:23 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/apiserverEtcdClient.json
kube# [ 17.011895] cfssl[1040]: 2020/01/27 01:25:23 [INFO] 192.168.1.1:47498 - "POST /api/v1/cfssl/info" 200
kube# [ 17.023006] cfssl[1040]: 2020/01/27 01:25:23 [INFO] 192.168.1.1:47500 - "POST /api/v1/cfssl/info" 200
kube# [ 17.027676] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1826]: 2020/01/27 01:25:23 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/apiserverKubeletClient.json
kube# [ 17.032568] cfssl[1040]: 2020/01/27 01:25:23 [INFO] 192.168.1.1:47502 - "POST /api/v1/cfssl/info" 200
kube# [ 17.043604] cfssl[1040]: 2020/01/27 01:25:23 [INFO] 192.168.1.1:47504 - "POST /api/v1/cfssl/info" 200
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube: exit status 1
kube# [ 17.050585] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1826]: 2020/01/27 01:25:23 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/apiserverProxyClient.json
(0.23 seconds)
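
The "running command: kubectl get node kube.my.xzy | grep -w Ready" line is the test driver polling for node readiness; the "exit status 1" above just means this round of the probe failed (the cluster-admin client certificate in the kubeconfig is still missing) and it will be retried. Effectively the driver is doing the equivalent of the loop below (a sketch, not the driver's actual code):

while ! kubectl get node kube.my.xzy | grep -w Ready; do
  sleep 1
done
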
kube# [ 17.055441] cfssl[1040]: 2020/01/27 01:25:23 [INFO] 192.168.1.1:47506 - "POST /api/v1/cfssl/info" 200
kube# [ 17.069941] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47508 - "POST /api/v1/cfssl/info" 200
kube# [ 17.073992] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1826]: 2020/01/27 01:25:24 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/clusterAdmin.json
kube# [ 17.078713] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47510 - "POST /api/v1/cfssl/info" 200
kube# [ 17.089508] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47512 - "POST /api/v1/cfssl/info" 200
kube# [ 17.090510] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1826]: 2020/01/27 01:25:24 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/controllerManager.json
kube# [ 17.094987] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47514 - "POST /api/v1/cfssl/info" 200
kube# [ 17.105877] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47516 - "POST /api/v1/cfssl/info" 200
kube# [ 17.110470] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1826]: 2020/01/27 01:25:24 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/controllerManagerClient.json
kube# [ 17.115388] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47518 - "POST /api/v1/cfssl/info" 200
kube# [ 17.127977] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47520 - "POST /api/v1/cfssl/info" 200
kube# [ 17.132406] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1826]: 2020/01/27 01:25:24 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/etcd.json
kube# [ 17.137195] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47522 - "POST /api/v1/cfssl/info" 200
kube# [ 17.149229] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47524 - "POST /api/v1/cfssl/info" 200
kube# [ 17.153527] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1826]: 2020/01/27 01:25:24 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/kubeProxyClient.json
kube# [ 17.158414] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47526 - "POST /api/v1/cfssl/info" 200
kube# [ 17.168315] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47528 - "POST /api/v1/cfssl/info" 200
kube# [ 17.172524] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1826]: 2020/01/27 01:25:24 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/kubelet.json
kube# [ 17.177141] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47530 - "POST /api/v1/cfssl/info" 200
kube# [ 17.189674] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47532 - "POST /api/v1/cfssl/info" 200
kube# [ 17.194564] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1826]: 2020/01/27 01:25:24 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/kubeletClient.json
kube# [ 17.199269] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47534 - "POST /api/v1/cfssl/info" 200
kube# [ 17.210013] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47536 - "POST /api/v1/cfssl/info" 200
kube# [ 17.214017] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1826]: 2020/01/27 01:25:24 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/schedulerClient.json
kube# [ 17.218923] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47538 - "POST /api/v1/cfssl/info" 200
kube# [ 17.226140] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1829]: Loaded image: pause:latest
kube# [ 17.229269] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47540 - "POST /api/v1/cfssl/info" 200
kube# [ 17.229423] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1829]: Seeding docker image: /nix/store/ggrzs3gzv69xzk02ckzijc2caqv738kk-docker-image-coredns-coredns-1.5.0.tar
kube# [ 17.235083] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1826]: 2020/01/27 01:25:24 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/serviceAccount.json
kube# [ 17.241761] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47542 - "POST /api/v1/cfssl/info" 200
kube# [ 17.254282] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47544 - "POST /api/v1/cfssl/info" 200
kube# [ 17.258615] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1826]: 2020/01/27 01:25:24 [INFO] manager: watching 14 certificates
kube# [ 17.258783] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1826]: OK
kube# [ 17.263770] systemd[1]: Started certmgr.
kube# [ 17.268993] certmgr[1951]: 2020/01/27 01:25:24 [INFO] certmgr: loading from config file /nix/store/bmm143bjzpgvrw7k50r36c5smy1n4pqm-certmgr.yaml
kube# [ 17.269311] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: loading certificates from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d
kube# [ 17.271746] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/addonManager.json
kube# [ 17.278220] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47546 - "POST /api/v1/cfssl/info" 200
kube# [ 17.289351] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47548 - "POST /api/v1/cfssl/info" 200
kube# [ 17.294437] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/apiServer.json
kube# [ 17.298598] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47550 - "POST /api/v1/cfssl/info" 200
kube# [ 17.309553] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47552 - "POST /api/v1/cfssl/info" 200
kube# [ 17.313804] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/apiserverEtcdClient.json
kube# [ 17.317805] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47554 - "POST /api/v1/cfssl/info" 200
kube# [ 17.329927] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47556 - "POST /api/v1/cfssl/info" 200
kube# [ 17.334363] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/apiserverKubeletClient.json
kube# [ 17.338473] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47558 - "POST /api/v1/cfssl/info" 200
kube# [ 17.356585] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47560 - "POST /api/v1/cfssl/info" 200
kube# [ 17.361261] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/apiserverProxyClient.json
kube# [ 17.365275] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47562 - "POST /api/v1/cfssl/info" 200
kube# [ 17.376157] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47564 - "POST /api/v1/cfssl/info" 200
kube# [ 17.381047] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/clusterAdmin.json
kube# [ 17.384867] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47566 - "POST /api/v1/cfssl/info" 200
kube# [ 17.395018] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47568 - "POST /api/v1/cfssl/info" 200
kube# [ 17.395440] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/controllerManager.json
kube# [ 17.399697] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47570 - "POST /api/v1/cfssl/info" 200
kube# [ 17.409220] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1829]: Loaded image: coredns/coredns:1.5.0
kube# [ 17.412025] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47572 - "POST /api/v1/cfssl/info" 200
kube# [ 17.416724] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/controllerManagerClient.json
kube# [ 17.418844] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1829]: Linking cni package: /nix/store/9pqia3j6lxz57qa36w2niphr1f5vsirr-cni-plugins-0.8.2
kube# [ 17.421249] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47574 - "POST /api/v1/cfssl/info" 200
kube# [ 17.425942] systemd[1]: kube-scheduler.service: Service RestartSec=5s expired, scheduling restart.
kube# [ 17.426131] systemd[1]: kube-scheduler.service: Scheduled restart job, restart counter is at 2.
kube# [ 17.426654] systemd[1]: Stopped Kubernetes Scheduler Service.
kube# [ 17.428377] systemd[1]: Started Kubernetes Scheduler Service.
kube# [ 17.428875] systemd[1]: Started Kubernetes Kubelet Service.
kube# [ 17.433951] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47576 - "POST /api/v1/cfssl/info" 200
kube# [ 17.438303] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/etcd.json
kube# [ 17.442440] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47578 - "POST /api/v1/cfssl/info" 200
kube# [ 17.452512] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47580 - "POST /api/v1/cfssl/info" 200
kube# [ 17.457272] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/kubeProxyClient.json
kube# [ 17.461580] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47582 - "POST /api/v1/cfssl/info" 200
kube# [ 17.473837] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47584 - "POST /api/v1/cfssl/info" 200
kube# [ 17.478500] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/kubelet.json
kube# [ 17.482834] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47586 - "POST /api/v1/cfssl/info" 200
kube# [ 17.495386] kubelet[1992]: Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.495522] kubelet[1992]: Flag --authentication-token-webhook has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.495733] kubelet[1992]: Flag --authentication-token-webhook-cache-ttl has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.495937] kubelet[1992]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.496256] kubelet[1992]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.496463] kubelet[1992]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.496695] kubelet[1992]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.496960] kubelet[1992]: Flag --hairpin-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.497227] kubelet[1992]: Flag --healthz-bind-address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.497488] kubelet[1992]: Flag --healthz-port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.497744] kubelet[1992]: Flag --port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.497928] kubelet[1992]: Flag --tls-cert-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.498141] kubelet[1992]: Flag --tls-private-key-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.519595] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47588 - "POST /api/v1/cfssl/info" 200
kube# [ 17.524615] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/kubeletClient.json
kube# [ 17.530047] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47590 - "POST /api/v1/cfssl/info" 200
kube# [ 17.533132] systemd[1]: Started Kubernetes systemd probe.
kube# [ 17.537921] kubelet[1992]: I0127 01:25:24.477010 1992 server.go:425] Version: v1.15.6
kube# [ 17.538213] kubelet[1992]: I0127 01:25:24.477211 1992 plugins.go:103] No cloud provider specified.
kube# [ 17.540703] kubelet[1992]: F0127 01:25:24.479782 1992 server.go:273] failed to run Kubelet: invalid kubeconfig: invalid configuration: [unable to read client-cert /var/lib/kubernetes/secrets/kubelet-client.pem for kubelet due to open /var/lib/kubernetes/secrets/kubelet-client.pem: no such file or directory, unable to read client-key /var/lib/kubernetes/secrets/kubelet-client-key.pem for kubelet due to open /var/lib/kubernetes/secrets/kubelet-client-key.pem: no such file or directory]
kube# [ 17.543422] systemd[1]: run-rd00439b361fb4aebae952475b028f803.scope: Succeeded.
kube# [ 17.545004] systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
kube# [ 17.545196] systemd[1]: kubelet.service: Failed with result 'exit-code'.
kube# [ 17.546607] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47592 - "POST /api/v1/cfssl/info" 200
kube# [ 17.550698] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/schedulerClient.json
kube# [ 17.554971] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47594 - "POST /api/v1/cfssl/info" 200
kube# [ 17.566939] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47596 - "POST /api/v1/cfssl/info" 200
kube# [ 17.570949] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/serviceAccount.json
kube# [ 17.575265] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47598 - "POST /api/v1/cfssl/info" 200
kube# [ 17.588241] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47600 - "POST /api/v1/cfssl/info" 200
kube# [ 17.592762] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: watching 14 certificates
kube# [ 17.592883] certmgr[1951]: 2020/01/27 01:25:24 [WARNING] metrics: no prometheus address or port configured
kube# [ 17.593298] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: checking certificates
kube# [ 17.593616] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: queue processor is ready
kube# [ 17.596518] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47602 - "POST /api/v1/cfssl/info" 200
kube# [ 17.596681] certmgr[1951]: 2020/01/27 01:25:24 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.596942] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: queueing /system:kube-addon-manager because it isn't ready
kube# [ 17.597292] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: processing certificate spec /system:kube-addon-manager (attempt 1)
kube# [ 17.599314] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47604 - "POST /api/v1/cfssl/info" 200
kube# [ 17.600024] certmgr[1951]: 2020/01/27 01:25:24 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.600380] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: queueing /kubernetes because it isn't ready
kube# [ 17.600582] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: processing certificate spec /kubernetes (attempt 1)
kube# [ 17.602579] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47606 - "POST /api/v1/cfssl/info" 200
kube# [ 17.608223] certmgr[1951]: 2020/01/27 01:25:24 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.608304] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: queueing /etcd-client because it isn't ready
kube# [ 17.608547] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: processing certificate spec /etcd-client (attempt 1)
kube# [ 17.611604] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47608 - "POST /api/v1/cfssl/info" 200
kube# [ 17.611859] certmgr[1951]: 2020/01/27 01:25:24 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.612131] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: queueing /system:kube-apiserver because it isn't ready
kube# [ 17.612422] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: processing certificate spec /system:kube-apiserver (attempt 1)
kube# [ 17.614493] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47610 - "POST /api/v1/cfssl/info" 200
kube# [ 17.614782] certmgr[1951]: 2020/01/27 01:25:24 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.615021] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: queueing /front-proxy-client because it isn't ready
kube# [ 17.615413] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: processing certificate spec /front-proxy-client (attempt 1)
kube# [ 17.618263] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47612 - "POST /api/v1/cfssl/info" 200
kube# [ 17.618658] certmgr[1951]: 2020/01/27 01:25:24 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.618970] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: queueing /cluster-admin/O=system:masters because it isn't ready
kube# [ 17.619456] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: processing certificate spec /cluster-admin/O=system:masters (attempt 1)
kube# [ 17.622714] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47614 - "POST /api/v1/cfssl/info" 200
kube# [ 17.623069] certmgr[1951]: 2020/01/27 01:25:24 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.623427] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: queueing /kube-controller-manager because it isn't ready
kube# [ 17.624053] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: processing certificate spec /kube-controller-manager (attempt 1)
kube# [ 17.629798] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47616 - "POST /api/v1/cfssl/info" 200
kube# [ 17.630953] certmgr[1951]: 2020/01/27 01:25:24 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.632195] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: queueing /system:kube-controller-manager because it isn't ready
kube# [ 17.633367] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: processing certificate spec /system:kube-controller-manager (attempt 1)
kube# [ 17.640502] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47618 - "POST /api/v1/cfssl/info" 200
kube# [ 17.641642] certmgr[1951]: 2020/01/27 01:25:24 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.642876] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: queueing /kube.my.xzy because it isn't ready
kube# [ 17.643949] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: processing certificate spec /kube.my.xzy (attempt 1)
kube# [ 17.645141] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47620 - "POST /api/v1/cfssl/info" 200
kube# [ 17.646252] certmgr[1951]: 2020/01/27 01:25:24 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.647481] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: queueing /system:kube-proxy because it isn't ready
kube# [ 17.648577] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: processing certificate spec /system:kube-proxy (attempt 1)
kube# [ 17.649815] certmgr[1951]: 2020/01/27 01:25:24 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.651036] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: queueing /kube.my.xzy because it isn't ready
kube# [ 17.652088] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: processing certificate spec /kube.my.xzy (attempt 1)
kube# [ 17.653274] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47622 - "POST /api/v1/cfssl/info" 200
kube# [ 17.670913] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47624 - "POST /api/v1/cfssl/info" 200
kube# [ 17.672027] certmgr[1951]: 2020/01/27 01:25:24 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.673272] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: queueing /system:node:kube.my.xzy/O=system:nodes because it isn't ready
kube# [ 17.674805] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: processing certificate spec /system:node:kube.my.xzy/O=system:nodes (attempt 1)
kube# [ 17.677420] certmgr[1951]: 2020/01/27 01:25:24 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.678642] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: queueing /system:kube-scheduler because it isn't ready
kube# [ 17.679797] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: processing certificate spec /system:kube-scheduler (attempt 1)
kube# [ 17.681039] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47626 - "POST /api/v1/cfssl/info" 200
kube# [ 17.682039] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47628 - "POST /api/v1/cfssl/info" 200
kube# [ 17.683072] certmgr[1951]: 2020/01/27 01:25:24 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.684281] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: queueing /system:service-account-signer because it isn't ready
kube# [ 17.685426] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: processing certificate spec /system:service-account-signer (attempt 1)
kube# [ 17.728928] certmgr[1951]: 2020/01/27 01:25:24 [INFO] encoded CSR
kube# [ 17.733339] cfssl[1040]: 2020/01/27 01:25:24 [INFO] signature request received
kube# [ 17.736511] cfssl[1040]: 2020/01/27 01:25:24 [INFO] signed certificate with serial number 132556882212464270043612985622297930665403481835
kube# [ 17.737842] cfssl[1040]: 2020/01/27 01:25:24 [INFO] wrote response
kube# [ 17.738635] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47630 - "POST /api/v1/cfssl/authsign" 200
kube# [ 17.746923] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-apiserver.pem
kube# [ 17.754341] certmgr[1951]: 2020/01/27 01:25:24 [INFO] encoded CSR
kube# [ 17.760747] cfssl[1040]: 2020/01/27 01:25:24 [INFO] signature request received
kube# [ 17.761935] cfssl[1040]: 2020/01/27 01:25:24 [INFO] signed certificate with serial number 226750073154865966173250753571195151762160191624
kube# [ 17.763593] cfssl[1040]: 2020/01/27 01:25:24 [INFO] wrote response
kube# [ 17.764646] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47632 - "POST /api/v1/cfssl/authsign" 200
kube# [ 17.765931] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-scheduler-client.pem
kube# [ 17.767589] certmgr[1951]: 2020/01/27 01:25:24 [INFO] encoded CSR
kube# [ 17.769110] systemd[1]: Stopped Kubernetes APIServer Service.
kube# [ 17.770462] cfssl[1040]: 2020/01/27 01:25:24 [INFO] signature request received
kube# [ 17.771995] systemd[1]: Started Kubernetes APIServer Service.
kube# [ 17.773816] cfssl[1040]: 2020/01/27 01:25:24 [INFO] signed certificate with serial number 588037698398085273552075399961121328513838227959
kube# [ 17.775410] cfssl[1040]: 2020/01/27 01:25:24 [INFO] wrote response
kube# [ 17.775645] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47634 - "POST /api/v1/cfssl/authsign" 200
kube# [ 17.775889] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-controller-manager-client.pem
kube# [ 17.776324] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: certificate successfully processed
kube# [ 17.790572] systemd[1]: Stopping Kubernetes Scheduler Service...
kube# [ 17.794806] systemd[1]: kube-scheduler.service: Succeeded.
kube# [ 17.795339] systemd[1]: Stopped Kubernetes Scheduler Service.
kube# [ 17.796450] certmgr[1951]: 2020/01/27 01:25:24 [INFO] encoded CSR
kube# [ 17.797095] systemd[1]: Started Kubernetes Scheduler Service.
kube# [ 17.807099] systemd[1]: Stopped Kubernetes Controller Manager Service.
kube# [ 17.807434] certmgr[1951]: 2020/01/27 01:25:24 [INFO] encoded CSR
kube# [ 17.807769] cfssl[1040]: 2020/01/27 01:25:24 [INFO] signature request received
kube# [ 17.809335] cfssl[1040]: 2020/01/27 01:25:24 [INFO] signed certificate with serial number 719262205801605112903001485721166698400968917876
kube# [ 17.809555] cfssl[1040]: 2020/01/27 01:25:24 [INFO] wrote response
kube# [ 17.809809] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47636 - "POST /api/v1/cfssl/authsign" 200
kube# [ 17.810155] systemd[1]: Started Kubernetes Controller Manager Service.
kube# [ 17.810649] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: certificate successfully processed
kube# [ 17.810859] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-addon-manager.pem
kube# [ 17.815700] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: certificate successfully processed
kube# [ 17.817670] certmgr[1951]: 2020/01/27 01:25:24 [INFO] encoded CSR
kube# [ 17.819358] cfssl[1040]: 2020/01/27 01:25:24 [INFO] signature request received
kube# [ 17.820710] cfssl[1040]: 2020/01/27 01:25:24 [INFO] signature request received
kube# [ 17.822506] cfssl[1040]: 2020/01/27 01:25:24 [INFO] signed certificate with serial number 443394073296961190494759848242573693757698980815
kube# [ 17.822705] cfssl[1040]: 2020/01/27 01:25:24 [INFO] wrote response
kube# [ 17.822967] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47640 - "POST /api/v1/cfssl/authsign" 200
kube# [ 17.823301] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-controller-manager.pem
kube# [ 17.827478] cfssl[1040]: 2020/01/27 01:25:24 [INFO] signed certificate with serial number 176684779153579135422747515242893354195447611443
kube# [ 17.829581] cfssl[1040]: 2020/01/27 01:25:24 [INFO] wrote response
kube# [ 17.830818] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47638 - "POST /api/v1/cfssl/authsign" 200
kube# [ 17.832383] certmgr[1951]: 2020/01/27 01:25:24 [INFO] encoded CSR
kube# [ 17.833619] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/service-account.pem
kube# [ 17.835916] cfssl[1040]: 2020/01/27 01:25:24 [INFO] signature request received
kube# [ 17.837867] cfssl[1040]: 2020/01/27 01:25:24 [INFO] signed certificate with serial number 401116946215387441345073461144005990268535850723
kube# [ 17.841061] cfssl[1040]: 2020/01/27 01:25:24 [INFO] wrote response
kube# [ 17.842673] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47642 - "POST /api/v1/cfssl/authsign" 200
kube# [ 17.844529] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-proxy-client.pem
kube# [ 17.847474] systemd[1]: Stopped Kubernetes addon manager.
kube# [ 17.849327] systemd[1]: Starting Kubernetes addon manager...
kube# [ 17.858059] systemd[1]: Stopping Kubernetes Controller Manager Service...
kube# [ 17.859874] systemd[1]: kube-controller-manager.service: Succeeded.
kube# [ 17.861286] systemd[1]: Stopped Kubernetes Controller Manager Service.
kube# [ 17.863359] systemd[1]: Started Kubernetes Controller Manager Service.
kube# [ 17.865668] kube-apiserver[2036]: Flag --insecure-bind-address has been deprecated, This flag will be removed in a future version.
kube# [ 17.867828] kube-apiserver[2036]: Flag --insecure-port has been deprecated, This flag will be removed in a future version.
kube# [ 17.869474] kube-apiserver[2036]: I0127 01:25:24.801722 2036 server.go:560] external host was not specified, using 192.168.1.1
kube# [ 17.871388] kube-apiserver[2036]: I0127 01:25:24.801951 2036 server.go:147] Version: v1.15.6
kube# [ 17.872767] cfssl[1040]: 2020/01/27 01:25:24 [INFO] signature request received
kube# [ 17.874223] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: certificate successfully processed
kube# [ 17.875726] certmgr[1951]: 2020/01/27 01:25:24 [INFO] encoded CSR
kube# [ 17.877015] cfssl[1040]: 2020/01/27 01:25:24 [INFO] signed certificate with serial number 597566469019691762217803966472965217934834558771
kube# [ 17.878795] cfssl[1040]: 2020/01/27 01:25:24 [INFO] wrote response
kube# [ 17.879974] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47644 - "POST /api/v1/cfssl/authsign" 200
kube# [ 17.882417] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/cluster-admin.pem
kube# [ 17.884821] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: certificate successfully processed
kube# [ 17.886976] certmgr[1951]: 2020/01/27 01:25:24 [INFO] encoded CSR
kube# [ 17.888936] certmgr[1951]: 2020/01/27 01:25:24 [INFO] encoded CSR
kube# [ 17.890888] cfssl[1040]: 2020/01/27 01:25:24 [INFO] signature request received
kube# [ 17.892298] cfssl[1040]: 2020/01/27 01:25:24 [INFO] signed certificate with serial number 524000858639046283602255501130118245047576343613
kube# [ 17.894630] cfssl[1040]: 2020/01/27 01:25:24 [INFO] wrote response
kube# [ 17.896589] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47646 - "POST /api/v1/cfssl/authsign" 200
kube# [ 17.898220] cfssl[1040]: 2020/01/27 01:25:24 [INFO] signature request received
kube# [ 17.899802] cfssl[1040]: 2020/01/27 01:25:24 [INFO] signed certificate with serial number 40225877184790482645081108624216216710326381685
kube# [ 17.901683] cfssl[1040]: 2020/01/27 01:25:24 [INFO] wrote response
kube# [ 17.903403] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47648 - "POST /api/v1/cfssl/authsign" 200
kube# [ 17.905482] systemd[1]: Stopped Kubernetes Proxy Service.
kube# [ 17.906906] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kubelet.pem
kube# [ 17.909007] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: certificate successfully processed
kube# [ 17.910527] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kubelet-client.pem
kube# [ 17.912846] certmgr[1951]: 2020/01/27 01:25:24 [ERROR] manager: exit status 3
kube# [ 17.914557] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: certificate successfully processed
kube# [ 17.916678] systemd[1]: Started Kubernetes Proxy Service.
kube# [ 17.919481] systemd[1]: Stopped Kubernetes Kubelet Service.
kube# [ 17.920605] systemd[1]: Starting Kubernetes Kubelet Service...
kube# [ 17.923864] systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
kube# [ 17.925520] systemd[1]: kubelet.service: Failed with result 'signal'.
kube# [ 17.927328] systemd[1]: Stopped Kubernetes Kubelet Service.
kube# [ 17.928971] systemd[1]: kubelet.service: Start request repeated too quickly.
kube# [ 17.930334] systemd[1]: kubelet.service: Failed with result 'signal'.
kube# [ 17.931715] certmgr[1951]: 2020/01/27 01:25:24 [ERROR] manager: exit status 1
kube# [ 17.932953] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: certificate successfully processed
kube# [ 17.934148] certmgr[1951]: 2020/01/27 01:25:24 [ERROR] manager: exit status 1
kube# [ 17.935299] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: certificate successfully processed
kube# [ 17.936460] systemd[1]: Failed to start Kubernetes Kubelet Service.
kube# [ 17.939729] certmgr[1951]: 2020/01/27 01:25:24 [INFO] encoded CSR
kube# [ 17.941516] kube-proxy[2089]: W0127 01:25:24.880292 2089 server.go:216] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
kube# [ 17.948638] cfssl[1040]: 2020/01/27 01:25:24 [INFO] signature request received
kube# [ 17.950539] cfssl[1040]: 2020/01/27 01:25:24 [INFO] signed certificate with serial number 342208868435813219865310576992382933614101067411
kube# [ 17.950724] cfssl[1040]: 2020/01/27 01:25:24 [INFO] wrote response
kube# [ 17.950914] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47652 - "POST /api/v1/cfssl/authsign" 200
kube# [ 17.951305] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-apiserver-etcd-client.pem
kube# [ 17.963041] kube-proxy[2089]: W0127 01:25:24.902084 2089 proxier.go:500] Failed to read file /lib/modules/4.19.95/modules.builtin with error open /lib/modules/4.19.95/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 17.966259] kube-proxy[2089]: W0127 01:25:24.905368 2089 proxier.go:513] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 17.968526] kube-proxy[2089]: W0127 01:25:24.907595 2089 proxier.go:513] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 17.970812] kube-proxy[2089]: W0127 01:25:24.909879 2089 proxier.go:513] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 17.973085] kube-proxy[2089]: W0127 01:25:24.912172 2089 proxier.go:513] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 17.975472] kube-proxy[2089]: W0127 01:25:24.914545 2089 proxier.go:513] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 17.983098] certmgr[1951]: 2020/01/27 01:25:24 [INFO] encoded CSR
kube# [ 17.986565] systemd[1]: Stopping Kubernetes APIServer Service...
kube# [ 17.988715] cfssl[1040]: 2020/01/27 01:25:24 [INFO] signature request received
kube# [ 17.992755] cfssl[1040]: 2020/01/27 01:25:24 [INFO] signed certificate with serial number 110674414263934057108288564388235272016934478597
kube# [ 17.994226] cfssl[1040]: 2020/01/27 01:25:24 [INFO] wrote response
kube# [ 17.994340] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47654 - "POST /api/v1/cfssl/authsign" 200
kube# [ 17.994609] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-apiserver-kubelet-client.pem
kube# [ 18.002610] kube-proxy[2089]: W0127 01:25:24.941617 2089 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy
kube# [ 18.010991] kube-controller-manager[2075]: Flag --port has been deprecated, see --secure-port instead.
kube# [ 18.033154] certmgr[1951]: 2020/01/27 01:25:24 [INFO] encoded CSR
kube# [ 18.036157] cfssl[1040]: 2020/01/27 01:25:24 [INFO] signature request received
kube# [ 18.038880] cfssl[1040]: 2020/01/27 01:25:24 [INFO] signed certificate with serial number 717043861021392675330223542049692799462913392323
kube# [ 18.039014] cfssl[1040]: 2020/01/27 01:25:24 [INFO] wrote response
kube# [ 18.039354] cfssl[1040]: 2020/01/27 01:25:24 [INFO] 192.168.1.1:47658 - "POST /api/v1/cfssl/authsign" 200
kube# [ 18.039827] certmgr[1951]: 2020/01/27 01:25:24 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/etcd.pem
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# [ 18.057630] systemd[1]: Starting etcd key-value store...
kube# [ 18.070471] etcd[2124]: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd.local:2379
kube# [ 18.070716] etcd[2124]: recognized and used environment variable ETCD_CERT_FILE=/var/lib/kubernetes/secrets/etcd.pem
kube# [ 18.070991] etcd[2124]: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=1
kube# [ 18.071347] etcd[2124]: recognized and used environment variable ETCD_DATA_DIR=/var/lib/etcd
kube# [ 18.071610] etcd[2124]: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd.local:2380
kube# [ 18.071917] etcd[2124]: recognized and used environment variable ETCD_INITIAL_CLUSTER=kube.my.xzy=https://etcd.local:2380
kube# [ 18.072272] etcd[2124]: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=new
kube# [ 18.072532] etcd[2124]: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
kube# [ 18.072822] etcd[2124]: recognized and used environment variable ETCD_KEY_FILE=/var/lib/kubernetes/secrets/etcd-key.pem
kube# [ 18.073218] etcd[2124]: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://127.0.0.1:2379
kube# [ 18.073506] etcd[2124]: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://127.0.0.1:2380
kube# [ 18.073806] etcd[2124]: recognized and used environment variable ETCD_NAME=kube.my.xzy
kube# [ 18.074112] etcd[2124]: recognized and used environment variable ETCD_PEER_CERT_FILE=/var/lib/kubernetes/secrets/etcd.pem
kube# [ 18.074462] etcd[2124]: recognized and used environment variable ETCD_PEER_KEY_FILE=/var/lib/kubernetes/secrets/etcd-key.pem
kube# [ 18.074799] etcd[2124]: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/var/lib/kubernetes/secrets/ca.pem
kube# [ 18.075216] etcd[2124]: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/var/lib/kubernetes/secrets/ca.pem
kube# [ 18.075513] etcd[2124]: unrecognized environment variable ETCD_DISCOVERY=
kube# [ 18.075788] etcd[2124]: etcd Version: 3.3.13
kube# [ 18.076087] etcd[2124]: Git SHA: Not provided (use ./build instead of go build)
kube# [ 18.080604] etcd[2124]: Go Version: go1.12.9
kube# [ 18.080892] etcd[2124]: Go OS/Arch: linux/amd64
kube# [ 18.081239] etcd[2124]: setting maximum number of CPUs to 16, total number of available CPUs is 16
kube# [ 18.081543] etcd[2124]: peerTLS: cert = /var/lib/kubernetes/secrets/etcd.pem, key = /var/lib/kubernetes/secrets/etcd-key.pem, ca = , trusted-ca = /var/lib/kubernetes/secrets/ca.pem, client-cert-auth = false, crl-file =
kube# [ 18.088272] etcd[2124]: listening for peers on https://127.0.0.1:2380
kube# [ 18.088470] etcd[2124]: listening for client requests on 127.0.0.1:2379
kube# [ 18.106258] etcd[2124]: resolving etcd.local:2380 to 127.0.0.1:2380
kube# [ 18.106509] etcd[2124]: resolving etcd.local:2380 to 127.0.0.1:2380
kube# [ 18.106768] etcd[2124]: name = kube.my.xzy
kube# [ 18.107153] etcd[2124]: data dir = /var/lib/etcd
kube# [ 18.107566] etcd[2124]: member dir = /var/lib/etcd/member
kube# [ 18.107926] etcd[2124]: heartbeat = 100ms
kube# [ 18.108325] etcd[2124]: election = 1000ms
kube# [ 18.108714] etcd[2124]: snapshot count = 100000
kube# [ 18.109031] etcd[2124]: advertise client URLs = https://etcd.local:2379
kube# [ 18.109550] etcd[2124]: initial advertise peer URLs = https://etcd.local:2380
kube# [ 18.109865] etcd[2124]: initial cluster = kube.my.xzy=https://etcd.local:2380
kube# [ 18.115262] etcd[2124]: starting member d579d2a9b6a65847 in cluster cd74e8f1b6ca227e
kube# [ 18.115451] etcd[2124]: d579d2a9b6a65847 became follower at term 0
kube# [ 18.115800] etcd[2124]: newRaft d579d2a9b6a65847 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
kube# [ 18.116147] etcd[2124]: d579d2a9b6a65847 became follower at term 1
kube# [ 18.127646] etcd[2124]: simple token is not cryptographically signed
kube# [ 18.133199] etcd[2124]: starting server... [version: 3.3.13, cluster version: to_be_decided]
kube# [ 18.136802] etcd[2124]: d579d2a9b6a65847 as single-node; fast-forwarding 9 ticks (election ticks 10)
kube# [ 18.137021] etcd[2124]: added member d579d2a9b6a65847 [https://etcd.local:2380] to cluster cd74e8f1b6ca227e
kube# [ 18.141401] etcd[2124]: ClientTLS: cert = /var/lib/kubernetes/secrets/etcd.pem, key = /var/lib/kubernetes/secrets/etcd-key.pem, ca = , trusted-ca = /var/lib/kubernetes/secrets/ca.pem, client-cert-auth = true, crl-file =
kube# [ 18.144131] certmgr[1951]: 2020/01/27 01:25:25 [INFO] encoded CSR
kube# [ 18.146955] cfssl[1040]: 2020/01/27 01:25:25 [INFO] signature request received
kube# [ 18.148814] cfssl[1040]: 2020/01/27 01:25:25 [INFO] signed certificate with serial number 657025126065697015416544308856975032666991263080
kube# [ 18.149008] cfssl[1040]: 2020/01/27 01:25:25 [INFO] wrote response
kube# [ 18.149480] cfssl[1040]: 2020/01/27 01:25:25 [INFO] 192.168.1.1:47662 - "POST /api/v1/cfssl/authsign" 200
kube# [ 18.149947] certmgr[1951]: 2020/01/27 01:25:25 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-apiserver-proxy-client.pem
kube# [ 18.288723] kube-controller-manager[2075]: I0127 01:25:25.227746 2075 serving.go:319] Generated self-signed cert in-memory
kube# [ 18.296601] kube-scheduler[2046]: I0127 01:25:25.235470 2046 serving.go:319] Generated self-signed cert in-memory
kube# [ 18.365429] kube-apiserver[2036]: I0127 01:25:25.304498 2036 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
kube# [ 18.365594] kube-apiserver[2036]: I0127 01:25:25.304534 2036 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
kube# [ 18.370230] kube-apiserver[2036]: E0127 01:25:25.309329 2036 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.370327] kube-apiserver[2036]: E0127 01:25:25.309379 2036 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.370612] kube-apiserver[2036]: E0127 01:25:25.309397 2036 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.370935] kube-apiserver[2036]: E0127 01:25:25.309421 2036 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.371280] kube-apiserver[2036]: E0127 01:25:25.309445 2036 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.371541] kube-apiserver[2036]: E0127 01:25:25.309461 2036 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.371741] kube-apiserver[2036]: E0127 01:25:25.309488 2036 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.371935] kube-apiserver[2036]: E0127 01:25:25.309506 2036 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.372161] kube-apiserver[2036]: E0127 01:25:25.309548 2036 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.372438] kube-apiserver[2036]: E0127 01:25:25.309594 2036 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.372628] kube-apiserver[2036]: E0127 01:25:25.309618 2036 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.372827] kube-apiserver[2036]: E0127 01:25:25.309635 2036 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.373023] kube-apiserver[2036]: I0127 01:25:25.309654 2036 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
kube# [ 18.373333] kube-apiserver[2036]: I0127 01:25:25.309667 2036 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
kube# [ 18.393364] kube-apiserver[2036]: I0127 01:25:25.332395 2036 client.go:354] parsed scheme: ""
kube# [ 18.393490] kube-apiserver[2036]: I0127 01:25:25.332414 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.396617] kube-apiserver[2036]: I0127 01:25:25.335727 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.397969] kube-apiserver[2036]: I0127 01:25:25.337073 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.498660] kube-controller-manager[2075]: W0127 01:25:25.437727 2075 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
kube# [ 18.498929] kube-controller-manager[2075]: W0127 01:25:25.437768 2075 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
kube# [ 18.499274] kube-controller-manager[2075]: W0127 01:25:25.437787 2075 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
kube# [ 18.499548] kube-controller-manager[2075]: I0127 01:25:25.437804 2075 controllermanager.go:164] Version: v1.15.6
kube# [ 18.501504] kube-scheduler[2046]: W0127 01:25:25.440585 2046 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
kube# [ 18.501907] kube-scheduler[2046]: W0127 01:25:25.440634 2046 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
kube# [ 18.502267] kube-scheduler[2046]: W0127 01:25:25.440659 2046 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
kube# [ 18.510832] kube-controller-manager[2075]: I0127 01:25:25.449923 2075 secure_serving.go:116] Serving securely on 127.0.0.1:10252
kube# [ 18.510977] kube-controller-manager[2075]: I0127 01:25:25.449968 2075 leaderelection.go:235] attempting to acquire leader lease kube-system/kube-controller-manager...
kube# [ 18.520023] kube-scheduler[2046]: I0127 01:25:25.459093 2046 server.go:142] Version: v1.15.6
kube# [ 18.520926] etcd[2124]: d579d2a9b6a65847 is starting a new election at term 1
kube# [ 18.521126] etcd[2124]: d579d2a9b6a65847 became candidate at term 2
kube# [ 18.521431] etcd[2124]: d579d2a9b6a65847 received MsgVoteResp from d579d2a9b6a65847 at term 2
kube# [ 18.521750] etcd[2124]: d579d2a9b6a65847 became leader at term 2
kube# [ 18.522028] etcd[2124]: raft.node: d579d2a9b6a65847 elected leader d579d2a9b6a65847 at term 2
kube# [ 18.524144] etcd[2124]: setting up the initial cluster version to 3.3
kube# [ 18.524942] kube-scheduler[2046]: I0127 01:25:25.464053 2046 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
kube# [ 18.525924] kube-scheduler[2046]: W0127 01:25:25.465030 2046 authorization.go:47] Authorization is disabled
kube# [ 18.526244] kube-scheduler[2046]: W0127 01:25:25.465051 2046 authentication.go:55] Authentication is disabled
kube# [ 18.526546] kube-scheduler[2046]: I0127 01:25:25.465073 2046 deprecated_insecure_serving.go:51] Serving healthz insecurely on 127.0.0.1:10251
kube# [ 18.526878] kube-scheduler[2046]: I0127 01:25:25.465460 2046 secure_serving.go:116] Serving securely on [::]:10259
kube# [ 18.530886] etcd[2124]: set the initial cluster version to 3.3
kube# [ 18.531243] etcd[2124]: published {Name:kube.my.xzy ClientURLs:[https://etcd.local:2379]} to cluster cd74e8f1b6ca227e
kube# [ 18.531552] etcd[2124]: ready to serve client requests
kube# [ 18.531942] etcd[2124]: enabled capabilities for version 3.3
kube# [ 18.532669] systemd[1]: Started etcd key-value store.
kube# [ 18.535936] certmgr[1951]: 2020/01/27 01:25:25 [INFO] manager: certificate successfully processed
kube# [ 18.537142] etcd[2124]: serving client requests on 127.0.0.1:2379
kube# [ 18.549627] kube-apiserver[2036]: I0127 01:25:25.488653 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.549849] kube-apiserver[2036]: I0127 01:25:25.488816 2036 client.go:354] parsed scheme: ""
kube# [ 18.550150] kube-apiserver[2036]: I0127 01:25:25.488832 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.550394] kube-apiserver[2036]: I0127 01:25:25.488914 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.550640] kube-apiserver[2036]: I0127 01:25:25.488945 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.554633] kube-apiserver[2036]: I0127 01:25:25.493731 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.618658] kube-apiserver[2036]: I0127 01:25:25.557704 2036 master.go:233] Using reconciler: lease
kube# [ 18.618907] kube-apiserver[2036]: I0127 01:25:25.558001 2036 client.go:354] parsed scheme: ""
kube# [ 18.619318] kube-apiserver[2036]: I0127 01:25:25.558024 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.619581] kube-apiserver[2036]: I0127 01:25:25.558063 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.619868] kube-apiserver[2036]: I0127 01:25:25.558096 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.625114] kube-apiserver[2036]: I0127 01:25:25.564209 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.628273] kube-apiserver[2036]: I0127 01:25:25.567391 2036 client.go:354] parsed scheme: ""
kube# [ 18.628408] kube-apiserver[2036]: I0127 01:25:25.567410 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.631595] kube-apiserver[2036]: I0127 01:25:25.570711 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.631712] kube-apiserver[2036]: I0127 01:25:25.570744 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.640388] kube-apiserver[2036]: I0127 01:25:25.579480 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.646253] kube-apiserver[2036]: I0127 01:25:25.585268 2036 client.go:354] parsed scheme: ""
kube# [ 18.646430] kube-apiserver[2036]: I0127 01:25:25.585291 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.646827] kube-apiserver[2036]: I0127 01:25:25.585335 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.647050] kube-apiserver[2036]: I0127 01:25:25.585381 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.651451] kube-apiserver[2036]: I0127 01:25:25.590568 2036 client.go:354] parsed scheme: ""
kube# [ 18.651557] kube-apiserver[2036]: I0127 01:25:25.590585 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.651714] kube-apiserver[2036]: I0127 01:25:25.590608 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.651971] kube-apiserver[2036]: I0127 01:25:25.590655 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.652368] kube-apiserver[2036]: I0127 01:25:25.590721 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.656290] kube-apiserver[2036]: I0127 01:25:25.595406 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.658936] kube-apiserver[2036]: I0127 01:25:25.598053 2036 client.go:354] parsed scheme: ""
kube# [ 18.659035] kube-apiserver[2036]: I0127 01:25:25.598075 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.659499] kube-apiserver[2036]: I0127 01:25:25.598113 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.659740] kube-apiserver[2036]: I0127 01:25:25.598169 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.665503] kube-apiserver[2036]: I0127 01:25:25.604622 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.667336] kube-apiserver[2036]: I0127 01:25:25.606458 2036 client.go:354] parsed scheme: ""
kube# [ 18.667433] kube-apiserver[2036]: I0127 01:25:25.606473 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.667716] kube-apiserver[2036]: I0127 01:25:25.606517 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.667989] kube-apiserver[2036]: I0127 01:25:25.606545 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.676456] kube-apiserver[2036]: I0127 01:25:25.615493 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.678364] kube-apiserver[2036]: I0127 01:25:25.617484 2036 client.go:354] parsed scheme: ""
kube# [ 18.678470] kube-apiserver[2036]: I0127 01:25:25.617499 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.678754] kube-apiserver[2036]: I0127 01:25:25.617529 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.679025] kube-apiserver[2036]: I0127 01:25:25.617569 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.683442] kube-apiserver[2036]: I0127 01:25:25.622550 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.686879] kube-apiserver[2036]: I0127 01:25:25.625779 2036 client.go:354] parsed scheme: ""
kube# [ 18.687005] kube-apiserver[2036]: I0127 01:25:25.625797 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.687423] kube-apiserver[2036]: I0127 01:25:25.625834 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.687744] kube-apiserver[2036]: I0127 01:25:25.625877 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.691940] kube-apiserver[2036]: I0127 01:25:25.631046 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.692489] kube-apiserver[2036]: I0127 01:25:25.631600 2036 client.go:354] parsed scheme: ""
kube# [ 18.692763] kube-apiserver[2036]: I0127 01:25:25.631626 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.693040] kube-apiserver[2036]: I0127 01:25:25.631672 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.693436] kube-apiserver[2036]: I0127 01:25:25.631731 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.697700] kube-apiserver[2036]: I0127 01:25:25.636819 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.698051] kube-apiserver[2036]: I0127 01:25:25.637158 2036 client.go:354] parsed scheme: ""
kube# [ 18.698388] kube-apiserver[2036]: I0127 01:25:25.637176 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.698651] kube-apiserver[2036]: I0127 01:25:25.637211 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.698961] kube-apiserver[2036]: I0127 01:25:25.637280 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.703188] kube-apiserver[2036]: I0127 01:25:25.642271 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.703506] kube-apiserver[2036]: I0127 01:25:25.642621 2036 client.go:354] parsed scheme: ""
kube# [ 18.703749] kube-apiserver[2036]: I0127 01:25:25.642645 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.704014] kube-apiserver[2036]: I0127 01:25:25.642683 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.704403] kube-apiserver[2036]: I0127 01:25:25.642724 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.709566] kube-apiserver[2036]: I0127 01:25:25.648665 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.709839] kube-apiserver[2036]: I0127 01:25:25.648917 2036 client.go:354] parsed scheme: ""
kube# [ 18.710042] kube-apiserver[2036]: I0127 01:25:25.648963 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.710471] kube-apiserver[2036]: I0127 01:25:25.649011 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.710738] kube-apiserver[2036]: I0127 01:25:25.649067 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.715774] kube-apiserver[2036]: I0127 01:25:25.654872 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.716349] kube-apiserver[2036]: I0127 01:25:25.655434 2036 client.go:354] parsed scheme: ""
kube# [ 18.716658] kube-apiserver[2036]: I0127 01:25:25.655452 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.716941] kube-apiserver[2036]: I0127 01:25:25.655494 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.717296] kube-apiserver[2036]: I0127 01:25:25.655542 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.722136] kube-apiserver[2036]: I0127 01:25:25.661246 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.722679] kube-apiserver[2036]: I0127 01:25:25.661730 2036 client.go:354] parsed scheme: ""
kube# [ 18.722920] kube-apiserver[2036]: I0127 01:25:25.661780 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.723316] kube-apiserver[2036]: I0127 01:25:25.661815 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.723536] kube-apiserver[2036]: I0127 01:25:25.661863 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.727421] kube-apiserver[2036]: I0127 01:25:25.666488 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.727965] kube-apiserver[2036]: I0127 01:25:25.667053 2036 client.go:354] parsed scheme: ""
kube# [ 18.728376] kube-apiserver[2036]: I0127 01:25:25.667083 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.728626] kube-apiserver[2036]: I0127 01:25:25.667122 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.728947] kube-apiserver[2036]: I0127 01:25:25.667203 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.732981] kube-apiserver[2036]: I0127 01:25:25.672063 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.733412] kube-apiserver[2036]: I0127 01:25:25.672518 2036 client.go:354] parsed scheme: ""
kube# [ 18.733700] kube-apiserver[2036]: I0127 01:25:25.672540 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.733960] kube-apiserver[2036]: I0127 01:25:25.672577 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.734442] kube-apiserver[2036]: I0127 01:25:25.672843 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.740243] kube-apiserver[2036]: I0127 01:25:25.679360 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.740482] kube-apiserver[2036]: I0127 01:25:25.679605 2036 client.go:354] parsed scheme: ""
kube# [ 18.740722] kube-apiserver[2036]: I0127 01:25:25.679639 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.740985] kube-apiserver[2036]: I0127 01:25:25.679673 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.741264] kube-apiserver[2036]: I0127 01:25:25.679706 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.745017] kube-apiserver[2036]: I0127 01:25:25.684129 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.745351] kube-apiserver[2036]: I0127 01:25:25.684470 2036 client.go:354] parsed scheme: ""
kube# [ 18.745568] kube-apiserver[2036]: I0127 01:25:25.684490 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.745852] kube-apiserver[2036]: I0127 01:25:25.684521 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.746085] kube-apiserver[2036]: I0127 01:25:25.684573 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.750112] kube-apiserver[2036]: I0127 01:25:25.689209 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.805997] kube-apiserver[2036]: I0127 01:25:25.745063 2036 client.go:354] parsed scheme: ""
kube# [ 18.806244] kube-apiserver[2036]: I0127 01:25:25.745085 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.806561] kube-apiserver[2036]: I0127 01:25:25.745117 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.806850] kube-apiserver[2036]: I0127 01:25:25.745156 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.810953] kube-apiserver[2036]: I0127 01:25:25.750049 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.811240] kube-apiserver[2036]: I0127 01:25:25.750314 2036 client.go:354] parsed scheme: ""
kube# [ 18.811541] kube-apiserver[2036]: I0127 01:25:25.750385 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.811883] kube-apiserver[2036]: I0127 01:25:25.750525 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.812146] kube-apiserver[2036]: I0127 01:25:25.750598 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.819669] kube-apiserver[2036]: I0127 01:25:25.758788 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.820028] kube-apiserver[2036]: I0127 01:25:25.759134 2036 client.go:354] parsed scheme: ""
kube# [ 18.820581] kube-apiserver[2036]: I0127 01:25:25.759160 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.820876] kube-apiserver[2036]: I0127 01:25:25.759189 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.821086] kube-apiserver[2036]: I0127 01:25:25.759227 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.825365] kube-apiserver[2036]: I0127 01:25:25.764480 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.825697] kube-apiserver[2036]: I0127 01:25:25.764809 2036 client.go:354] parsed scheme: ""
kube# [ 18.825971] kube-apiserver[2036]: I0127 01:25:25.764831 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.826277] kube-apiserver[2036]: I0127 01:25:25.764862 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.826505] kube-apiserver[2036]: I0127 01:25:25.764909 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.830317] kube-apiserver[2036]: I0127 01:25:25.769416 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.830856] kube-apiserver[2036]: I0127 01:25:25.769961 2036 client.go:354] parsed scheme: ""
kube# [ 18.831466] kube-apiserver[2036]: I0127 01:25:25.769978 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.831780] kube-apiserver[2036]: I0127 01:25:25.770007 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.832071] kube-apiserver[2036]: I0127 01:25:25.770034 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.835903] kube-apiserver[2036]: I0127 01:25:25.774984 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.836389] kube-apiserver[2036]: I0127 01:25:25.775495 2036 client.go:354] parsed scheme: ""
kube# [ 18.836692] kube-apiserver[2036]: I0127 01:25:25.775525 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.836951] kube-apiserver[2036]: I0127 01:25:25.775584 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.837296] kube-apiserver[2036]: I0127 01:25:25.775637 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.845223] kube-apiserver[2036]: I0127 01:25:25.784308 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.846436] kube-apiserver[2036]: I0127 01:25:25.785540 2036 client.go:354] parsed scheme: ""
kube# [ 18.846607] kube-apiserver[2036]: I0127 01:25:25.785562 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.846955] kube-apiserver[2036]: I0127 01:25:25.785644 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.847277] kube-apiserver[2036]: I0127 01:25:25.785673 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.851081] kube-apiserver[2036]: I0127 01:25:25.790199 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.851567] kube-apiserver[2036]: I0127 01:25:25.790647 2036 client.go:354] parsed scheme: ""
kube# [ 18.851776] kube-apiserver[2036]: I0127 01:25:25.790680 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.852050] kube-apiserver[2036]: I0127 01:25:25.790739 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.852361] kube-apiserver[2036]: I0127 01:25:25.790775 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.856074] kube-apiserver[2036]: I0127 01:25:25.795178 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.856515] kube-apiserver[2036]: I0127 01:25:25.795605 2036 client.go:354] parsed scheme: ""
kube# [ 18.856843] kube-apiserver[2036]: I0127 01:25:25.795646 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.857253] kube-apiserver[2036]: I0127 01:25:25.795682 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.857459] kube-apiserver[2036]: I0127 01:25:25.795712 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.861196] kube-apiserver[2036]: I0127 01:25:25.800265 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.861835] kube-apiserver[2036]: I0127 01:25:25.800931 2036 client.go:354] parsed scheme: ""
kube# [ 18.862237] kube-apiserver[2036]: I0127 01:25:25.800952 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.862571] kube-apiserver[2036]: I0127 01:25:25.800995 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.862842] kube-apiserver[2036]: I0127 01:25:25.801050 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.868494] kube-apiserver[2036]: I0127 01:25:25.807122 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.874509] kube-apiserver[2036]: I0127 01:25:25.813616 2036 client.go:354] parsed scheme: ""
kube# [ 18.874706] kube-apiserver[2036]: I0127 01:25:25.813633 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.874967] kube-apiserver[2036]: I0127 01:25:25.813703 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.875390] kube-apiserver[2036]: I0127 01:25:25.813793 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.879247] kube-apiserver[2036]: I0127 01:25:25.818325 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.879711] kube-apiserver[2036]: I0127 01:25:25.818790 2036 client.go:354] parsed scheme: ""
kube# [ 18.880014] kube-apiserver[2036]: I0127 01:25:25.818811 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.880516] kube-apiserver[2036]: I0127 01:25:25.818862 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.880857] kube-apiserver[2036]: I0127 01:25:25.818932 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.885273] kube-apiserver[2036]: I0127 01:25:25.824381 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.885790] kube-apiserver[2036]: I0127 01:25:25.824816 2036 client.go:354] parsed scheme: ""
kube# [ 18.886119] kube-apiserver[2036]: I0127 01:25:25.824841 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.886462] kube-apiserver[2036]: I0127 01:25:25.824881 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.886730] kube-apiserver[2036]: I0127 01:25:25.824932 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.896865] kube-apiserver[2036]: I0127 01:25:25.835833 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.899243] kube-apiserver[2036]: I0127 01:25:25.838324 2036 client.go:354] parsed scheme: ""
kube# [ 18.899395] kube-apiserver[2036]: I0127 01:25:25.838343 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.899762] kube-apiserver[2036]: I0127 01:25:25.838862 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.900090] kube-apiserver[2036]: I0127 01:25:25.838896 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.905037] kube-apiserver[2036]: I0127 01:25:25.844154 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.905418] kube-apiserver[2036]: I0127 01:25:25.844538 2036 client.go:354] parsed scheme: ""
kube# [ 18.905664] kube-apiserver[2036]: I0127 01:25:25.844553 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.905931] kube-apiserver[2036]: I0127 01:25:25.844593 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.906120] kube-apiserver[2036]: I0127 01:25:25.844622 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.915404] kube-apiserver[2036]: I0127 01:25:25.854487 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.915840] kube-apiserver[2036]: I0127 01:25:25.854918 2036 client.go:354] parsed scheme: ""
kube# [ 18.916082] kube-apiserver[2036]: I0127 01:25:25.854953 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.916705] kube-apiserver[2036]: I0127 01:25:25.854990 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.917003] kube-apiserver[2036]: I0127 01:25:25.855052 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.921379] kube-apiserver[2036]: I0127 01:25:25.860492 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.921656] kube-apiserver[2036]: I0127 01:25:25.860775 2036 client.go:354] parsed scheme: ""
kube# [ 18.921989] kube-apiserver[2036]: I0127 01:25:25.860795 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.922319] kube-apiserver[2036]: I0127 01:25:25.860835 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.922562] kube-apiserver[2036]: I0127 01:25:25.860871 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.927140] kube-apiserver[2036]: I0127 01:25:25.866254 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.928882] kube-apiserver[2036]: I0127 01:25:25.867994 2036 client.go:354] parsed scheme: ""
kube# [ 18.929001] kube-apiserver[2036]: I0127 01:25:25.868016 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.929323] kube-apiserver[2036]: I0127 01:25:25.868048 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.931928] kube-apiserver[2036]: I0127 01:25:25.871030 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.937440] kube-apiserver[2036]: I0127 01:25:25.876549 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.937761] kube-apiserver[2036]: I0127 01:25:25.876866 2036 client.go:354] parsed scheme: ""
kube# [ 18.938159] kube-apiserver[2036]: I0127 01:25:25.876884 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.938547] kube-apiserver[2036]: I0127 01:25:25.876925 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.938836] kube-apiserver[2036]: I0127 01:25:25.876957 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.942454] kube-apiserver[2036]: I0127 01:25:25.881535 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.942803] kube-apiserver[2036]: I0127 01:25:25.881804 2036 client.go:354] parsed scheme: ""
kube# [ 18.943087] kube-apiserver[2036]: I0127 01:25:25.881822 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.943436] kube-apiserver[2036]: I0127 01:25:25.881846 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.943662] kube-apiserver[2036]: I0127 01:25:25.881873 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.947658] kube-apiserver[2036]: I0127 01:25:25.886741 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.947986] kube-apiserver[2036]: I0127 01:25:25.887028 2036 client.go:354] parsed scheme: ""
kube# [ 18.948379] kube-apiserver[2036]: I0127 01:25:25.887045 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.948637] kube-apiserver[2036]: I0127 01:25:25.887085 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.948850] kube-apiserver[2036]: I0127 01:25:25.887131 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.956703] kube-apiserver[2036]: I0127 01:25:25.895790 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.959866] kube-apiserver[2036]: I0127 01:25:25.898969 2036 client.go:354] parsed scheme: ""
kube# [ 18.960012] kube-apiserver[2036]: I0127 01:25:25.898986 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.960450] kube-apiserver[2036]: I0127 01:25:25.899017 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.960680] kube-apiserver[2036]: I0127 01:25:25.899063 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.967729] kube-apiserver[2036]: I0127 01:25:25.906830 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.967945] kube-apiserver[2036]: I0127 01:25:25.907056 2036 client.go:354] parsed scheme: ""
kube# [ 18.968304] kube-apiserver[2036]: I0127 01:25:25.907069 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.968592] kube-apiserver[2036]: I0127 01:25:25.907100 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.968834] kube-apiserver[2036]: I0127 01:25:25.907155 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.972571] kube-apiserver[2036]: I0127 01:25:25.911680 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.972894] kube-apiserver[2036]: I0127 01:25:25.911980 2036 client.go:354] parsed scheme: ""
kube# [ 18.973245] kube-apiserver[2036]: I0127 01:25:25.911999 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.973545] kube-apiserver[2036]: I0127 01:25:25.912036 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.973883] kube-apiserver[2036]: I0127 01:25:25.912134 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.977908] kube-apiserver[2036]: I0127 01:25:25.917016 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.978153] kube-apiserver[2036]: I0127 01:25:25.917241 2036 client.go:354] parsed scheme: ""
kube# [ 18.978386] kube-apiserver[2036]: I0127 01:25:25.917263 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.978625] kube-apiserver[2036]: I0127 01:25:25.917308 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.978867] kube-apiserver[2036]: I0127 01:25:25.917342 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.982688] kube-apiserver[2036]: I0127 01:25:25.921802 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.982994] kube-apiserver[2036]: I0127 01:25:25.922116 2036 client.go:354] parsed scheme: ""
kube# [ 18.983293] kube-apiserver[2036]: I0127 01:25:25.922133 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.983595] kube-apiserver[2036]: I0127 01:25:25.922164 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.983863] kube-apiserver[2036]: I0127 01:25:25.922212 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.987685] kube-apiserver[2036]: I0127 01:25:25.926801 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.987985] kube-apiserver[2036]: I0127 01:25:25.927055 2036 client.go:354] parsed scheme: ""
kube# [ 18.988304] kube-apiserver[2036]: I0127 01:25:25.927089 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.988602] kube-apiserver[2036]: I0127 01:25:25.927116 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.988924] kube-apiserver[2036]: I0127 01:25:25.927141 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.994823] kube-apiserver[2036]: I0127 01:25:25.933911 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.998943] kube-apiserver[2036]: I0127 01:25:25.938054 2036 client.go:354] parsed scheme: ""
kube# [ 18.999123] kube-apiserver[2036]: I0127 01:25:25.938076 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.999836] kube-apiserver[2036]: I0127 01:25:25.938111 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.000522] kube-apiserver[2036]: I0127 01:25:25.938159 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.010268] kube-apiserver[2036]: I0127 01:25:25.949309 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.015692] kube-apiserver[2036]: I0127 01:25:25.954789 2036 client.go:354] parsed scheme: ""
kube# [ 19.015909] kube-apiserver[2036]: I0127 01:25:25.954812 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.016431] kube-apiserver[2036]: I0127 01:25:25.954858 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.016738] kube-apiserver[2036]: I0127 01:25:25.954916 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.021153] kube-apiserver[2036]: I0127 01:25:25.960252 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.021596] kube-apiserver[2036]: I0127 01:25:25.960673 2036 client.go:354] parsed scheme: ""
kube# [ 19.021844] kube-apiserver[2036]: I0127 01:25:25.960703 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.022162] kube-apiserver[2036]: I0127 01:25:25.960742 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.024466] kube-apiserver[2036]: I0127 01:25:25.960801 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.026965] kube-apiserver[2036]: I0127 01:25:25.965906 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.027703] kube-apiserver[2036]: I0127 01:25:25.966262 2036 client.go:354] parsed scheme: ""
kube# [ 19.028007] kube-apiserver[2036]: I0127 01:25:25.966297 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.035617] kube-apiserver[2036]: I0127 01:25:25.974705 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.035875] kube-apiserver[2036]: I0127 01:25:25.974752 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.042277] kube-apiserver[2036]: I0127 01:25:25.981385 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.043316] kube-apiserver[2036]: I0127 01:25:25.982333 2036 client.go:354] parsed scheme: ""
kube# [ 19.043578] kube-apiserver[2036]: I0127 01:25:25.982401 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.043840] kube-apiserver[2036]: I0127 01:25:25.982437 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.044157] kube-apiserver[2036]: I0127 01:25:25.982500 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.048854] kube-apiserver[2036]: I0127 01:25:25.987962 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.050532] kube-apiserver[2036]: I0127 01:25:25.989645 2036 client.go:354] parsed scheme: ""
kube# [ 19.050660] kube-apiserver[2036]: I0127 01:25:25.989662 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.050920] kube-apiserver[2036]: I0127 01:25:25.989693 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.053302] kube-apiserver[2036]: I0127 01:25:25.992419 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.058825] kube-apiserver[2036]: I0127 01:25:25.997952 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.059270] kube-apiserver[2036]: I0127 01:25:25.998271 2036 client.go:354] parsed scheme: ""
kube# [ 19.059537] kube-apiserver[2036]: I0127 01:25:25.998311 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.059828] kube-apiserver[2036]: I0127 01:25:25.998397 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.060132] kube-apiserver[2036]: I0127 01:25:25.998439 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.066524] kube-apiserver[2036]: I0127 01:25:26.005631 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.068225] kube-apiserver[2036]: I0127 01:25:26.007330 2036 client.go:354] parsed scheme: ""
kube# [ 19.068348] kube-apiserver[2036]: I0127 01:25:26.007376 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.069880] kube-apiserver[2036]: I0127 01:25:26.009002 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.070018] kube-apiserver[2036]: I0127 01:25:26.009044 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.076453] kube-apiserver[2036]: I0127 01:25:26.015540 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.076745] kube-apiserver[2036]: I0127 01:25:26.015855 2036 client.go:354] parsed scheme: ""
kube# [ 19.076975] kube-apiserver[2036]: I0127 01:25:26.015876 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.077289] kube-apiserver[2036]: I0127 01:25:26.015917 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.077558] kube-apiserver[2036]: I0127 01:25:26.015959 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.081531] kube-apiserver[2036]: I0127 01:25:26.020644 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.081804] kube-apiserver[2036]: I0127 01:25:26.020892 2036 client.go:354] parsed scheme: ""
kube# [ 19.082098] kube-apiserver[2036]: I0127 01:25:26.020915 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.082506] kube-apiserver[2036]: I0127 01:25:26.020961 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.082798] kube-apiserver[2036]: I0127 01:25:26.021011 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.086604] kube-apiserver[2036]: I0127 01:25:26.025711 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.086778] kube-apiserver[2036]: I0127 01:25:26.025771 2036 client.go:354] parsed scheme: ""
kube# [ 19.087080] kube-apiserver[2036]: I0127 01:25:26.025785 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.087499] kube-apiserver[2036]: I0127 01:25:26.025811 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.087728] kube-apiserver[2036]: I0127 01:25:26.025843 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.092770] kube-apiserver[2036]: I0127 01:25:26.031876 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.093383] kube-apiserver[2036]: I0127 01:25:26.032468 2036 client.go:354] parsed scheme: ""
kube# [ 19.093657] kube-apiserver[2036]: I0127 01:25:26.032497 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.093967] kube-apiserver[2036]: I0127 01:25:26.032553 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.094430] kube-apiserver[2036]: I0127 01:25:26.032603 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.098479] kube-apiserver[2036]: I0127 01:25:26.037600 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.098828] kube-apiserver[2036]: I0127 01:25:26.037943 2036 client.go:354] parsed scheme: ""
kube# [ 19.099055] kube-apiserver[2036]: I0127 01:25:26.037962 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.099460] kube-apiserver[2036]: I0127 01:25:26.038012 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.099820] kube-apiserver[2036]: I0127 01:25:26.038060 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.103394] kube-apiserver[2036]: I0127 01:25:26.042512 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.103714] kube-apiserver[2036]: I0127 01:25:26.042802 2036 client.go:354] parsed scheme: ""
kube# [ 19.103994] kube-apiserver[2036]: I0127 01:25:26.042825 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.104280] kube-apiserver[2036]: I0127 01:25:26.042862 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.104529] kube-apiserver[2036]: I0127 01:25:26.042898 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.108253] kube-apiserver[2036]: I0127 01:25:26.047366 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.108522] kube-apiserver[2036]: I0127 01:25:26.047602 2036 client.go:354] parsed scheme: ""
kube# [ 19.108726] kube-apiserver[2036]: I0127 01:25:26.047623 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.108946] kube-apiserver[2036]: I0127 01:25:26.047658 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.109298] kube-apiserver[2036]: I0127 01:25:26.047697 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.113208] kube-apiserver[2036]: I0127 01:25:26.052304 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.113530] kube-apiserver[2036]: I0127 01:25:26.052630 2036 client.go:354] parsed scheme: ""
kube# [ 19.113791] kube-apiserver[2036]: I0127 01:25:26.052646 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.114060] kube-apiserver[2036]: I0127 01:25:26.052690 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.114363] kube-apiserver[2036]: I0127 01:25:26.052730 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.118109] kube-apiserver[2036]: I0127 01:25:26.057222 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.118339] kube-apiserver[2036]: I0127 01:25:26.057448 2036 client.go:354] parsed scheme: ""
kube# [ 19.118570] kube-apiserver[2036]: I0127 01:25:26.057465 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.118837] kube-apiserver[2036]: I0127 01:25:26.057491 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.119135] kube-apiserver[2036]: I0127 01:25:26.057559 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.123657] kube-apiserver[2036]: I0127 01:25:26.062775 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.126447] kube-apiserver[2036]: I0127 01:25:26.065567 2036 client.go:354] parsed scheme: ""
kube# [ 19.126631] kube-apiserver[2036]: I0127 01:25:26.065584 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.127043] kube-apiserver[2036]: I0127 01:25:26.065615 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.127370] kube-apiserver[2036]: I0127 01:25:26.065655 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.132425] kube-apiserver[2036]: I0127 01:25:26.071528 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.134082] kube-apiserver[2036]: I0127 01:25:26.073201 2036 client.go:354] parsed scheme: ""
kube# [ 19.134273] kube-apiserver[2036]: I0127 01:25:26.073219 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.134564] kube-apiserver[2036]: I0127 01:25:26.073251 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.136763] kube-apiserver[2036]: I0127 01:25:26.075881 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.142241] kube-apiserver[2036]: I0127 01:25:26.081306 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.142647] kube-apiserver[2036]: I0127 01:25:26.081749 2036 client.go:354] parsed scheme: ""
kube# [ 19.143031] kube-apiserver[2036]: I0127 01:25:26.081782 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.143451] kube-apiserver[2036]: I0127 01:25:26.081823 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.143726] kube-apiserver[2036]: I0127 01:25:26.081889 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.155825] kube-apiserver[2036]: I0127 01:25:26.094777 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.157609] kube-apiserver[2036]: I0127 01:25:26.095247 2036 client.go:354] parsed scheme: ""
kube# [ 19.158644] kube-apiserver[2036]: I0127 01:25:26.095264 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.159851] kube-apiserver[2036]: I0127 01:25:26.095353 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.161411] kube-apiserver[2036]: I0127 01:25:26.095452 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.162951] kube-apiserver[2036]: I0127 01:25:26.100091 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.165857] kube-apiserver[2036]: I0127 01:25:26.104970 2036 client.go:354] parsed scheme: ""
kube# [ 19.166941] kube-apiserver[2036]: I0127 01:25:26.104992 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.168240] kube-apiserver[2036]: I0127 01:25:26.105027 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.169629] kube-apiserver[2036]: I0127 01:25:26.105062 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.171089] kube-apiserver[2036]: I0127 01:25:26.109888 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.172626] kube-apiserver[2036]: I0127 01:25:26.110398 2036 client.go:354] parsed scheme: ""
kube# [ 19.173658] kube-apiserver[2036]: I0127 01:25:26.110423 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.174845] kube-apiserver[2036]: I0127 01:25:26.110453 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.176362] kube-apiserver[2036]: I0127 01:25:26.110477 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.177958] kube-apiserver[2036]: I0127 01:25:26.115081 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.179361] kube-apiserver[2036]: I0127 01:25:26.115714 2036 client.go:354] parsed scheme: ""
kube# [ 19.180304] kube-apiserver[2036]: I0127 01:25:26.115727 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.181547] kube-apiserver[2036]: I0127 01:25:26.115779 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.183132] kube-apiserver[2036]: I0127 01:25:26.115826 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.184600] kube-apiserver[2036]: I0127 01:25:26.120673 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.186018] kube-apiserver[2036]: I0127 01:25:26.121003 2036 client.go:354] parsed scheme: ""
kube# [ 19.187020] kube-apiserver[2036]: I0127 01:25:26.121017 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.188319] kube-apiserver[2036]: I0127 01:25:26.121044 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.189774] kube-apiserver[2036]: I0127 01:25:26.121076 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.191206] kube-apiserver[2036]: I0127 01:25:26.126154 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.266153] kube-apiserver[2036]: W0127 01:25:26.205199 2036 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
kube# [ 19.270900] kube-apiserver[2036]: W0127 01:25:26.210016 2036 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
kube# [ 19.272975] kube-apiserver[2036]: W0127 01:25:26.212093 2036 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
kube# [ 19.274468] kube-apiserver[2036]: W0127 01:25:26.212617 2036 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
kube# [ 19.275903] kube-apiserver[2036]: W0127 01:25:26.213909 2036 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
kube# [ 19.363731] kube-apiserver[2036]: I0127 01:25:26.302767 2036 client.go:354] parsed scheme: ""
kube# [ 19.364988] kube-apiserver[2036]: I0127 01:25:26.302791 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.366382] kube-apiserver[2036]: I0127 01:25:26.302823 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.368115] kube-apiserver[2036]: I0127 01:25:26.302852 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.370366] kube-apiserver[2036]: I0127 01:25:26.309227 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.727341] kube-apiserver[2036]: E0127 01:25:26.666429 2036 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 19.729311] kube-apiserver[2036]: E0127 01:25:26.667188 2036 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 19.731068] kube-apiserver[2036]: E0127 01:25:26.667229 2036 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 19.732786] kube-apiserver[2036]: E0127 01:25:26.667246 2036 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 19.735156] kube-apiserver[2036]: E0127 01:25:26.667270 2036 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 19.737435] kube-apiserver[2036]: E0127 01:25:26.667298 2036 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 19.739129] kube-apiserver[2036]: E0127 01:25:26.667324 2036 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 19.740803] kube-apiserver[2036]: E0127 01:25:26.667371 2036 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 19.742979] kube-apiserver[2036]: E0127 01:25:26.667451 2036 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 19.744732] kube-apiserver[2036]: E0127 01:25:26.667507 2036 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 19.746665] kube-apiserver[2036]: E0127 01:25:26.667533 2036 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 19.748529] kube-apiserver[2036]: E0127 01:25:26.667558 2036 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 19.750407] kube-apiserver[2036]: I0127 01:25:26.667590 2036 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
kube# [ 19.752987] kube-apiserver[2036]: I0127 01:25:26.667606 2036 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
kube# [ 19.755200] kube-apiserver[2036]: I0127 01:25:26.668668 2036 client.go:354] parsed scheme: ""
kube# [ 19.756296] kube-apiserver[2036]: I0127 01:25:26.668695 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.757637] kube-apiserver[2036]: I0127 01:25:26.668751 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.759188] kube-apiserver[2036]: I0127 01:25:26.668792 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.760651] kube-apiserver[2036]: I0127 01:25:26.674194 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.762145] kube-apiserver[2036]: I0127 01:25:26.674644 2036 client.go:354] parsed scheme: ""
kube# [ 19.763285] kube-apiserver[2036]: I0127 01:25:26.674660 2036 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.764626] kube-apiserver[2036]: I0127 01:25:26.674717 2036 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.766150] kube-apiserver[2036]: I0127 01:25:26.674747 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.767687] kube-apiserver[2036]: I0127 01:25:26.680446 2036 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 20.784811] kube-apiserver[2036]: I0127 01:25:27.723452 2036 secure_serving.go:116] Serving securely on [::]:443
kube# [ 20.786467] kube-apiserver[2036]: I0127 01:25:27.723512 2036 controller.go:176] Shutting down kubernetes service endpoint reconciler
kube# [ 20.788128] kube-apiserver[2036]: I0127 01:25:27.723775 2036 autoregister_controller.go:140] Starting autoregister controller
kube# [ 20.789517] kube-apiserver[2036]: I0127 01:25:27.723799 2036 cache.go:32] Waiting for caches to sync for autoregister controller
kube# [ 20.790780] kube-apiserver[2036]: I0127 01:25:27.723862 2036 apiservice_controller.go:94] Starting APIServiceRegistrationController
kube# [ 20.792021] kube-apiserver[2036]: I0127 01:25:27.723885 2036 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
kube# [ 20.793413] kube-apiserver[2036]: I0127 01:25:27.723901 2036 crdregistration_controller.go:112] Starting crd-autoregister controller
kube# [ 20.795518] kube-apiserver[2036]: I0127 01:25:27.723917 2036 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
kube# [ 20.797369] kube-apiserver[2036]: E0127 01:25:27.723945 2036 cache.go:35] Unable to sync caches for autoregister controller
kube# [ 20.799026] kube-apiserver[2036]: I0127 01:25:27.723974 2036 autoregister_controller.go:145] Shutting down autoregister controller
kube# [ 20.800858] kube-apiserver[2036]: E0127 01:25:27.724051 2036 controller_utils.go:1032] unable to sync caches for crd-autoregister controller
kube# [ 20.802823] kube-apiserver[2036]: I0127 01:25:27.725332 2036 crdregistration_controller.go:117] Shutting down crd-autoregister controller
kube# [ 20.804339] kube-apiserver[2036]: E0127 01:25:27.726442 2036 cache.go:35] Unable to sync caches for APIServiceRegistrationController controller
kube# [ 20.805658] kube-apiserver[2036]: I0127 01:25:27.726498 2036 apiservice_controller.go:98] Shutting down APIServiceRegistrationController
kube# [ 20.807036] kube-apiserver[2036]: E0127 01:25:27.727603 2036 controller.go:179] StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.1.1, ResourceVersion: 0, AdditionalErrorMsg:
kube# [ 20.809028] certmgr[1951]: 2020/01/27 01:25:27 [INFO] manager: certificate successfully processed
kube# [ 20.810020] certmgr[1951]: 2020/01/27 01:25:27 [INFO] manager: certificate successfully processed
kube# [ 20.811004] certmgr[1951]: 2020/01/27 01:25:27 [INFO] manager: certificate successfully processed
kube# [ 20.812307] kube-scheduler[2046]: E0127 01:25:27.732900 2046 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://192.168.1.1/api/v1/nodes?limit=500&resourceVersion=0: EOF
kube# [ 20.814231] kube-scheduler[2046]: E0127 01:25:27.733135 2046 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://192.168.1.1/api/v1/persistentvolumes?limit=500&resourceVersion=0: EOF
kube# [ 20.816143] kube-scheduler[2046]: E0127 01:25:27.733136 2046 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://192.168.1.1/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: EOF
kube# [ 20.818264] kube-scheduler[2046]: E0127 01:25:27.733207 2046 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://192.168.1.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: EOF
kube# [ 20.820162] kube-scheduler[2046]: E0127 01:25:27.733266 2046 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https://192.168.1.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: EOF
kube# [ 20.822328] kube-scheduler[2046]: E0127 01:25:27.733335 2046 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: Get https://192.168.1.1/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: EOF
kube# [ 20.824918] systemd[1]: kube-apiserver.service: Succeeded.
kube# [ 20.825982] kube-controller-manager[2075]: E0127 01:25:27.733208 2075 leaderelection.go:324] error retrieving resource lock kube-system/kube-controller-manager: Get https://192.168.1.1/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: EOF
kube# [ 20.828650] systemd[1]: Stopped Kubernetes APIServer Service.
kube# [ 20.829479] systemd[1]: kube-apiserver.service: Consumed 3.418s CPU time, received 244.4K IP traffic, sent 226.0K IP traffic.
kube# [ 20.830837] systemd[1]: Started Kubernetes APIServer Service.
kube# [ 20.852542] kube-apiserver[2212]: Flag --insecure-bind-address has been deprecated, This flag will be removed in a future version.
kube# [ 20.853872] kube-apiserver[2212]: Flag --insecure-port has been deprecated, This flag will be removed in a future version.
kube# [ 20.855060] kube-apiserver[2212]: I0127 01:25:27.791398 2212 server.go:560] external host was not specified, using 192.168.1.1
kube# [ 20.856405] kube-apiserver[2212]: I0127 01:25:27.791617 2212 server.go:147] Version: v1.15.6
kube# [ 21.319344] kube-apiserver[2212]: I0127 01:25:28.258418 2212 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
kube# [ 21.322706] kube-apiserver[2212]: I0127 01:25:28.258462 2212 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
kube# [ 21.328398] kube-apiserver[2212]: E0127 01:25:28.267508 2212 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 21.330404] kube-apiserver[2212]: E0127 01:25:28.267541 2212 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 21.332382] kube-apiserver[2212]: E0127 01:25:28.267574 2212 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 21.334193] kube-apiserver[2212]: E0127 01:25:28.267592 2212 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 21.335876] kube-apiserver[2212]: E0127 01:25:28.267613 2212 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 21.337945] kube-apiserver[2212]: E0127 01:25:28.267639 2212 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 21.340053] kube-apiserver[2212]: E0127 01:25:28.267667 2212 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 21.341797] kube-apiserver[2212]: E0127 01:25:28.267711 2212 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 21.343450] kube-apiserver[2212]: E0127 01:25:28.267803 2212 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 21.345405] kube-apiserver[2212]: E0127 01:25:28.267935 2212 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 21.347682] kube-apiserver[2212]: E0127 01:25:28.267960 2212 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 21.349998] kube-apiserver[2212]: E0127 01:25:28.267977 2212 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 21.352778] kube-apiserver[2212]: I0127 01:25:28.268002 2212 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
kube# [ 21.356620] kube-apiserver[2212]: I0127 01:25:28.268020 2212 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
kube# [ 21.359578] kube-apiserver[2212]: I0127 01:25:28.269614 2212 client.go:354] parsed scheme: ""
kube# [ 21.361201] kube-apiserver[2212]: I0127 01:25:28.269634 2212 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 21.362655] kube-apiserver[2212]: I0127 01:25:28.269690 2212 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 21.364308] kube-apiserver[2212]: I0127 01:25:28.269736 2212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 21.365909] kube-apiserver[2212]: I0127 01:25:28.278692 2212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 21.367247] kube-apiserver[2212]: I0127 01:25:28.278942 2212 client.go:354] parsed scheme: ""
kube# [ 21.368238] kube-apiserver[2212]: I0127 01:25:28.278968 2212 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 21.369418] kube-apiserver[2212]: I0127 01:25:28.279005 2212 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 21.370739] kube-apiserver[2212]: I0127 01:25:28.279040 2212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 21.372037] kube-apiserver[2212]: I0127 01:25:28.283633 2212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 21.376370] kube-apiserver[2212]: I0127 01:25:28.315459 2212 master.go:233] Using reconciler: lease
kube# [ 21.377508] kube-apiserver[2212]: I0127 01:25:28.315768 2212 client.go:354] parsed scheme: ""
kube# [ 21.378558] kube-apiserver[2212]: I0127 01:25:28.315780 2212 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 21.379768] kube-apiserver[2212]: I0127 01:25:28.315827 2212 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 21.381130] kube-apiserver[2212]: I0127 01:25:28.315865 2212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 21.382599] kube-apiserver[2212]: I0127 01:25:28.320815 2212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 21.384020] kube-apiserver[2212]: I0127 01:25:28.322165 2212 client.go:354] parsed scheme: ""
kube# [ 21.385030] kube-apiserver[2212]: I0127 01:25:28.322190 2212 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 21.386259] kube-apiserver[2212]: I0127 01:25:28.322227 2212 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 21.387624] kube-apiserver[2212]: I0127 01:25:28.322260 2212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 21.389186] kube-apiserver[2212]: I0127 01:25:28.326808 2212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 21.390669] kube-apiserver[2212]: I0127 01:25:28.327175 2212 client.go:354] parsed scheme: ""
kube# [ 21.391890] kube-apiserver[2212]: I0127 01:25:28.327189 2212 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 21.393239] kube-apiserver[2212]: I0127 01:25:28.327217 2212 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 21.394724] kube-apiserver[2212]: I0127 01:25:28.327244 2212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 21.396308] kube-apiserver[2212]: I0127 01:25:28.333922 2212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 21.397771] kube-apiserver[2212]: I0127 01:25:28.334207 2212 client.go:354] parsed scheme: ""
kube# [ 21.398756] kube-apiserver[2212]: I0127 01:25:28.334232 2212 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 21.400011] kube-apiserver[2212]: I0127 01:25:28.334373 2212 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 21.401688] kube-apiserver[2212]: I0127 01:25:28.334417 2212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 21.403093] kube-apiserver[2212]: I0127 01:25:28.339082 2212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 21.404465] kube-apiserver[2212]: I0127 01:25:28.339327 2212 client.go:354] parsed scheme: ""
kube# [ 21.405560] kube-apiserver[2212]: I0127 01:25:28.339342 2212 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 21.406984] kube-apiserver[2212]: I0127 01:25:28.339406 2212 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 21.408386] kube-apiserver[2212]: I0127 01:25:28.339444 2212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 21.409679] kube-apiserver[2212]: I0127 01:25:28.344078 2212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 21.411386] kube-apiserver[2212]: I0127 01:25:28.344459 2212 client.go:354] parsed scheme: ""
kube# [ 21.412666] kube-apiserver[2212]: I0127 01:25:28.344472 2212 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 21.414089] kube-apiserver[2212]: I0127 01:25:28.344501 2212 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 21.415733] kube-apiserver[2212]: I0127 01:25:28.344535 2212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 21.417244] kube-apiserver[2212]: I0127 01:25:28.349010 2212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 21.418630] kube-apiserver[2212]: I0127 01:25:28.349247 2212 client.go:354] parsed scheme: ""
kube# [ 21.419648] kube-apiserver[2212]: I0127 01:25:28.349267 2212 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 21.420993] kube-apiserver[2212]: I0127 01:25:28.349311 2212 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 21.422486] kube-apiserver[2212]: I0127 01:25:28.349427 2212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 21.423896] kube-apiserver[2212]: I0127 01:25:28.358712 2212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 21.425395] kube-apiserver[2212]: I0127 01:25:28.359123 2212 client.go:354] parsed scheme: ""
kube# [ 21.426414] kube-apiserver[2212]: I0127 01:25:28.359149 2212 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 21.427656] kube-apiserver[2212]: I0127 01:25:28.359195 2212 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 21.429075] kube-apiserver[2212]: I0127 01:25:28.359245 2212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 21.430640] kube-apiserver[2212]: I0127 01:25:28.363789 2212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
[the same five-line client.go / asm_amd64.s cycle repeats roughly sixty more times, 01:25:28.364 through 01:25:28.829, as the apiserver opens one etcd client connection per storage resource]
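The repeated 'parsed scheme: ""' / 'fallback to default scheme' pairs are grpc-go, not Kubernetes: the apiserver's etcd client dials the bare target etcd.local:2379, which carries no URI scheme, so grpc-go finds no registered resolver for "" and falls back to its default (passthrough) resolver before handing the address to the balancer wrapper. A minimal sketch of the same behavior against the same endpoint; this assumes a grpc-go of roughly the vintage vendored in Kubernetes 1.15, and grpc.WithInsecure is used only because the test VM talks to etcd without TLS:

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
    )

    func main() {
        // "etcd.local:2379" has no scheme, so grpc-go logs:
        //   parsed scheme: ""
        //   scheme "" not registered, fallback to default scheme
        // before pushing the address to the balancer, as seen above.
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()
        conn, err := grpc.DialContext(ctx, "etcd.local:2379",
            grpc.WithInsecure(), // assumption: plain-HTTP etcd, as in this test VM
            grpc.WithBlock())
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
    }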
kube# [ 21.956024] kube-apiserver[2212]: W0127 01:25:28.895070 2212 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
kube# [ 21.960499] kube-apiserver[2212]: W0127 01:25:28.899615 2212 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
kube# [ 21.962718] kube-apiserver[2212]: W0127 01:25:28.901839 2212 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
kube# [ 21.964160] kube-apiserver[2212]: W0127 01:25:28.902257 2212 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
kube# [ 21.965550] kube-apiserver[2212]: W0127 01:25:28.903519 2212 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
kube# [ 22.316943] kube-apiserver[2212]: I0127 01:25:29.255987 2212 client.go:354] parsed scheme: ""
kube# [ 22.318087] kube-apiserver[2212]: I0127 01:25:29.256014 2212 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.319428] kube-apiserver[2212]: I0127 01:25:29.256050 2212 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.320798] kube-apiserver[2212]: I0127 01:25:29.256083 2212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.322366] kube-apiserver[2212]: I0127 01:25:29.260994 2212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.397345] kube-apiserver[2212]: E0127 01:25:29.336370 2212 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 22.399185] kube-apiserver[2212]: E0127 01:25:29.336429 2212 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 22.400929] kube-apiserver[2212]: E0127 01:25:29.336459 2212 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 22.402555] kube-apiserver[2212]: E0127 01:25:29.336483 2212 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 22.404466] kube-apiserver[2212]: E0127 01:25:29.336517 2212 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 22.406316] kube-apiserver[2212]: E0127 01:25:29.336544 2212 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 22.408118] kube-apiserver[2212]: E0127 01:25:29.336566 2212 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 22.410305] kube-apiserver[2212]: E0127 01:25:29.336622 2212 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 22.412008] kube-apiserver[2212]: E0127 01:25:29.336699 2212 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 22.413698] kube-apiserver[2212]: E0127 01:25:29.336762 2212 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 22.415377] kube-apiserver[2212]: E0127 01:25:29.336789 2212 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 22.417369] kube-apiserver[2212]: E0127 01:25:29.336820 2212 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
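The "duplicate metrics collector registration attempted" errors come from Prometheus client_golang: its registry refuses a second collector whose descriptors match one already registered, which is what happens when the admission_quota_controller's workqueue metrics get registered twice. A minimal reproduction of that failure mode (the metric name below is illustrative, not the apiserver's actual collector):

    package main

    import (
        "fmt"

        "github.com/prometheus/client_golang/prometheus"
    )

    func main() {
        opts := prometheus.CounterOpts{
            Name: "admission_quota_controller_adds", // illustrative name
            Help: "example counter",
        }
        prometheus.MustRegister(prometheus.NewCounter(opts))

        // Registering a second collector with the same fully-qualified
        // name fails, mirroring the apiserver log lines above.
        if err := prometheus.Register(prometheus.NewCounter(opts)); err != nil {
            fmt.Println(err) // duplicate metrics collector registration attempted
        }
    }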
kube# [ 22.419394] kube-apiserver[2212]: I0127 01:25:29.336848 2212 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
kube# [ 22.422245] kube-apiserver[2212]: I0127 01:25:29.336863 2212 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
kube# [ 22.424756] kube-apiserver[2212]: I0127 01:25:29.337878 2212 client.go:354] parsed scheme: ""
kube# [ 22.426069] kube-apiserver[2212]: I0127 01:25:29.337897 2212 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.427507] kube-apiserver[2212]: I0127 01:25:29.337949 2212 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.429056] kube-apiserver[2212]: I0127 01:25:29.338006 2212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.430710] kube-apiserver[2212]: I0127 01:25:29.342841 2212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.432186] kube-apiserver[2212]: I0127 01:25:29.343322 2212 client.go:354] parsed scheme: ""
kube# [ 22.433226] kube-apiserver[2212]: I0127 01:25:29.343339 2212 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.434552] kube-apiserver[2212]: I0127 01:25:29.343415 2212 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.436053] kube-apiserver[2212]: I0127 01:25:29.343447 2212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.437522] kube-apiserver[2212]: I0127 01:25:29.348254 2212 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 23.431399] kube-apiserver[2212]: I0127 01:25:30.370089 2212 secure_serving.go:116] Serving securely on [::]:443
kube# [ 23.432564] kube-apiserver[2212]: I0127 01:25:30.370138 2212 autoregister_controller.go:140] Starting autoregister controller
kube# [ 23.433960] kube-apiserver[2212]: I0127 01:25:30.370148 2212 cache.go:32] Waiting for caches to sync for autoregister controller
kube# [ 23.440206] kube-apiserver[2212]: I0127 01:25:30.372271 2212 apiservice_controller.go:94] Starting APIServiceRegistrationController
kube# [ 23.442180] kube-apiserver[2212]: I0127 01:25:30.372312 2212 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
kube# [ 23.444958] kube-apiserver[2212]: I0127 01:25:30.372330 2212 available_controller.go:376] Starting AvailableConditionController
kube# [ 23.447811] kube-apiserver[2212]: I0127 01:25:30.372378 2212 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
kube# [ 23.450867] kube-apiserver[2212]: I0127 01:25:30.374070 2212 crd_finalizer.go:255] Starting CRDFinalizer
kube# [ 23.452931] kube-apiserver[2212]: I0127 01:25:30.374111 2212 controller.go:83] Starting OpenAPI controller
kube# [ 23.454919] kube-apiserver[2212]: I0127 01:25:30.374133 2212 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
kube# [ 23.457262] kube-apiserver[2212]: I0127 01:25:30.374146 2212 establishing_controller.go:73] Starting EstablishingController
kube# [ 23.460341] kube-apiserver[2212]: I0127 01:25:30.374252 2212 customresource_discovery_controller.go:208] Starting DiscoveryController
kube# [ 23.463394] kube-apiserver[2212]: I0127 01:25:30.374458 2212 naming_controller.go:288] Starting NamingConditionController
kube# [ 23.465643] kube-apiserver[2212]: I0127 01:25:30.374525 2212 controller.go:81] Starting OpenAPI AggregationController
kube# [ 23.467421] kube-apiserver[2212]: E0127 01:25:30.374574 2212 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.1.1, ResourceVersion: 0, AdditionalErrorMsg:
kube# [ 23.469919] kube-apiserver[2212]: I0127 01:25:30.374557 2212 crdregistration_controller.go:112] Starting crd-autoregister controller
kube# [ 23.471466] kube-apiserver[2212]: I0127 01:25:30.374620 2212 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
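The StorageError just above is benign on first boot: the apiserver tries to prune stale master leases under /registry/masterleases/ before it has ever written its own, so the key simply does not exist yet. A sketch of inspecting that key directly with the etcd v3 Go client, assuming a plain-HTTP etcd.local:2379 endpoint as in this test (note that Kubernetes stores values as protobuf, so only the key count is readable here):

    package main

    import (
        "context"
        "fmt"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"http://etcd.local:2379"}, // assumption: no TLS in this test VM
            DialTimeout: 2 * time.Second,
        })
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()
        resp, err := cli.Get(ctx, "/registry/masterleases/", clientv3.WithPrefix())
        if err != nil {
            panic(err)
        }
        fmt.Println("master leases stored:", resp.Count) // 0 at this point in the boot
    }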
kube# [ 23.473485] kube-proxy[2089]: W0127 01:25:30.400460 2089 node.go:113] Failed to retrieve node info: nodes "kube" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
kube# [ 23.475410] kube-proxy[2089]: I0127 01:25:30.400491 2089 server_others.go:143] Using iptables Proxier.
kube# [ 23.476564] kube-proxy[2089]: W0127 01:25:30.405458 2089 proxier.go:316] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
kube# [ 23.478278] kube-scheduler[2046]: E0127 01:25:30.404482 2046 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
kube# [ 23.480752] kube-scheduler[2046]: E0127 01:25:30.404491 2046 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
kube# [ 23.484097] kube-scheduler[2046]: E0127 01:25:30.404498 2046 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
kube# [ 23.487936] kube-scheduler[2046]: E0127 01:25:30.404526 2046 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
kube# [ 23.491623] kube-scheduler[2046]: E0127 01:25:30.404484 2046 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
kube# [ 23.494948] kube-scheduler[2046]: E0127 01:25:30.404560 2046 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
kube# [ 23.498795] kube-scheduler[2046]: E0127 01:25:30.404514 2046 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
kube# [ 23.503148] etcd[2124]: proto: no coders for int
kube# [ 23.504963] etcd[2124]: proto: no encoder for ValueSize int [GetProperties]
kube# [ 23.507487] kube-controller-manager[2075]: E0127 01:25:30.402560 2075 leaderelection.go:324] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "endpoints" in API group "" in the namespace "kube-system"
kube# [ 23.511657] kube-scheduler[2046]: E0127 01:25:30.404588 2046 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
kube# [ 23.514378] kube-scheduler[2046]: E0127 01:25:30.404482 2046 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
kube# [ 23.517030] kube-scheduler[2046]: E0127 01:25:30.404641 2046 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
kube# [ 23.520033] kube-proxy[2089]: I0127 01:25:30.422412 2089 server.go:534] Version: v1.15.6
kube# [ 23.521442] kube-proxy[2089]: I0127 01:25:30.436475 2089 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 524288
kube# [ 23.523120] kube-proxy[2089]: I0127 01:25:30.436517 2089 conntrack.go:52] Setting nf_conntrack_max to 524288
Error from server (NotFound): nodes "kube.my.xzy" not found
kube#
kube: exit status 1
kube# [ 23.529034] systemd[1]: Started Kubernetes systemd probe.
kube# [ 23.531439] kube-apiserver[2212]: I0127 01:25:30.470527 2212 cache.go:39] Caches are synced for autoregister controller
kube# [ 23.533534] kube-apiserver[2212]: I0127 01:25:30.472535 2212 cache.go:39] Caches are synced for AvailableConditionController controller
kube# [ 23.536930] kube-apiserver[2212]: I0127 01:25:30.472548 2212 cache.go:39] Caches are synced for APIServiceRegistrationController controller
kube# [ 23.539371] kube-apiserver[2212]: I0127 01:25:30.476506 2212 controller_utils.go:1036] Caches are synced for crd-autoregister controller
kube# [ 23.540757] systemd[1]: run-rda2fd89e32af4dc7975f8408488ad551.scope: Succeeded.
kube# [ 23.541702] kube-proxy[2089]: I0127 01:25:30.478789 2089 conntrack.go:83] Setting conntrack hashsize to 131072
(5.50 seconds)
kube# [ 23.556439] kube-proxy[2089]: I0127 01:25:30.495528 2089 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
kube# [ 23.558622] kube-proxy[2089]: I0127 01:25:30.495614 2089 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
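The conntrack lines show kube-proxy raising netfilter sysctls (nf_conntrack_max, the TCP established and close_wait timeouts, the hashsize) before it starts programming iptables; under the hood each of these is just a write into procfs. A rough equivalent of the first one, as a sketch (requires root; os.WriteFile needs Go 1.16+):

    package main

    import "os"

    func main() {
        // What kube-proxy's conntrack.go effectively does for
        // "Set sysctl 'net/netfilter/nf_conntrack_max' to 524288".
        err := os.WriteFile("/proc/sys/net/netfilter/nf_conntrack_max",
            []byte("524288"), 0o644)
        if err != nil {
            panic(err)
        }
    }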
kube# [ 23.560874] kube-proxy[2089]: I0127 01:25:30.495906 2089 config.go:187] Starting service config controller
kube# [ 23.562867] kube-proxy[2089]: I0127 01:25:30.496005 2089 controller_utils.go:1029] Waiting for caches to sync for service config controller
kube# [ 23.565308] kube-proxy[2089]: I0127 01:25:30.495863 2089 config.go:96] Starting endpoints config controller
kube# [ 23.567216] kube-proxy[2089]: I0127 01:25:30.496157 2089 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
kube# [ 23.569282] kube-proxy[2089]: E0127 01:25:30.505647 2089 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-proxy" cannot list resource "services" in API group "" at the cluster scope
kube# [ 23.571755] kube-proxy[2089]: E0127 01:25:30.505715 2089 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:kube-proxy" cannot list resource "endpoints" in API group "" at the cluster scope
kube# [ 23.576464] kube-proxy[2089]: E0127 01:25:30.515534 2089 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube.15ed99f704f19c27", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kube", UID:"kube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kube-proxy.", Source:v1.EventSource{Component:"kube-proxy", Host:"kube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf83ace29d8db827, ext:5672114473, loc:(*time.Location)(0x2740d40)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf83ace29d8db827, ext:5672114473, loc:(*time.Location)(0x2740d40)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:kube-proxy" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
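
The repeated "forbidden" errors above are RBAC denials: kube-scheduler and kube-proxy begin their list/watch calls before the apiserver's RBAC bootstrap and the addon bindings (applied just below) have landed, so the requests are rejected. These typically clear on their own once the default roles and bindings exist. A quick way to check whether a component's permissions are in place is kubectl's built-in access review; this is a standard kubectl subcommand, shown here as a hedged sketch using the system:kube-proxy user taken from the log:

  # does the kube-proxy identity have permission to list services cluster-wide?
  kubectl auth can-i list services --as=system:kube-proxy
  # same check for endpoints, the other resource it was denied above
  kubectl auth can-i list endpoints --as=system:kube-proxy
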
kube# [ 23.767681] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2072]: clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver:kubelet-api-admin created
kube# [ 23.772828] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2072]: clusterrole.rbac.authorization.k8s.io/system:coredns created
kube# [ 23.782733] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2072]: clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
kube# [ 23.787910] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2072]: clusterrole.rbac.authorization.k8s.io/system:kube-addon-manager:cluster-lister created
kube# [ 23.791949] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2072]: clusterrolebinding.rbac.authorization.k8s.io/system:kube-addon-manager:cluster-lister created
kube# [ 23.797357] kube-apiserver[2212]: I0127 01:25:30.736437 2212 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
kube# [ 23.800153] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2072]: role.rbac.authorization.k8s.io/system:kube-addon-manager created
kube# [ 23.805252] kube-apiserver[2212]: I0127 01:25:30.744338 2212 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
kube# [ 23.807231] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2072]: rolebinding.rbac.authorization.k8s.io/system:kube-addon-manager created
kube# [ 23.810009] systemd[1]: Started Kubernetes addon manager.
kube# [ 23.812691] certmgr[1951]: 2020/01/27 01:25:30 [INFO] manager: certificate successfully processed
kube# [ 23.820577] kube-addons[2273]: INFO: == Generated kubectl prune whitelist flags: --prune-whitelist core/v1/ConfigMap --prune-whitelist core/v1/Endpoints --prune-whitelist core/v1/Namespace --prune-whitelist core/v1/PersistentVolumeClaim --prune-whitelist core/v1/PersistentVolume --prune-whitelist core/v1/Pod --prune-whitelist core/v1/ReplicationController --prune-whitelist core/v1/Secret --prune-whitelist core/v1/Service --prune-whitelist batch/v1/Job --prune-whitelist batch/v1beta1/CronJob --prune-whitelist apps/v1/DaemonSet --prune-whitelist apps/v1/Deployment --prune-whitelist apps/v1/ReplicaSet --prune-whitelist apps/v1/StatefulSet --prune-whitelist extensions/v1beta1/Ingress ==
kube# [ 23.829914] kube-addons[2273]: INFO: == Kubernetes addon manager started at 2020-01-27T01:25:30+00:00 with ADDON_CHECK_INTERVAL_SEC=60 ==
kube# [ 24.378682] kube-addons[2273]: error: the server doesn't have a resource type "serviceaccount"
kube# [ 24.380143] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 24.429839] kube-apiserver[2212]: I0127 01:25:31.368909 2212 controller.go:107] OpenAPI AggregationController: Processing item
kube# [ 24.431607] kube-apiserver[2212]: I0127 01:25:31.368934 2212 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
kube# [ 24.433039] kube-apiserver[2212]: I0127 01:25:31.369140 2212 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
kube# [ 24.438201] kube-apiserver[2212]: I0127 01:25:31.377252 2212 storage_scheduling.go:119] created PriorityClass system-node-critical with value 2000001000
kube# [ 24.446353] kube-apiserver[2212]: I0127 01:25:31.385462 2212 storage_scheduling.go:119] created PriorityClass system-cluster-critical with value 2000000000
kube# [ 24.447835] kube-apiserver[2212]: I0127 01:25:31.385488 2212 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
kube# [ 24.466370] kube-scheduler[2046]: E0127 01:25:31.405415 2046 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
kube# [ 24.468734] kube-scheduler[2046]: E0127 01:25:31.406727 2046 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
kube# [ 24.471409] kube-scheduler[2046]: E0127 01:25:31.407842 2046 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
kube# [ 24.473818] kube-scheduler[2046]: E0127 01:25:31.409643 2046 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
kube# [ 24.476623] kube-scheduler[2046]: E0127 01:25:31.410622 2046 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
kube# [ 24.479365] kube-scheduler[2046]: E0127 01:25:31.411793 2046 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
kube# [ 24.482387] kube-scheduler[2046]: E0127 01:25:31.413268 2046 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
kube# [ 24.484931] kube-scheduler[2046]: E0127 01:25:31.414404 2046 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
kube# [ 24.488146] kube-scheduler[2046]: E0127 01:25:31.415666 2046 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
kube# [ 24.490619] kube-scheduler[2046]: E0127 01:25:31.416918 2046 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# [ 24.568012] kube-proxy[2089]: E0127 01:25:31.506808 2089 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-proxy" cannot list resource "services" in API group "" at the cluster scope
kube# [ 24.570431] kube-proxy[2089]: E0127 01:25:31.507707 2089 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:kube-proxy" cannot list resource "endpoints" in API group "" at the cluster scope
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.06 seconds)
kube# [ 24.829472] kube-apiserver[2212]: W0127 01:25:31.768556 2212 lease.go:223] Resetting endpoints for master service "kubernetes" to [192.168.1.1]
kube# [ 24.831710] kube-apiserver[2212]: I0127 01:25:31.770829 2212 controller.go:606] quota admission added evaluator for: endpoints
kube# [ 24.948716] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 24.949960] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 25.046302] nscd[1122]: 1122 checking for monitored file `/etc/netgroup': No such file or directory
kube# [ 25.521299] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 25.523297] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 25.657658] kube-proxy[2089]: I0127 01:25:32.596267 2089 controller_utils.go:1036] Caches are synced for service config controller
kube# [ 25.659144] kube-proxy[2089]: I0127 01:25:32.596334 2089 controller_utils.go:1036] Caches are synced for endpoints config controller
kube# [ 26.021526] kube-controller-manager[2075]: I0127 01:25:32.960163 2075 leaderelection.go:245] successfully acquired lease kube-system/kube-controller-manager
kube# [ 26.023997] kube-controller-manager[2075]: I0127 01:25:32.960303 2075 event.go:258] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"4b0ff1a0-e681-45b0-8ec3-de0defec1507", APIVersion:"v1", ResourceVersion:"155", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube_3917d2b1-2e17-4b1a-8e84-86eb71de2a73 became leader
kube# [ 26.110477] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 26.112857] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 26.243587] kube-controller-manager[2075]: I0127 01:25:33.182652 2075 plugins.go:103] No cloud provider specified.
kube# [ 26.244889] kube-controller-manager[2075]: I0127 01:25:33.183960 2075 controller_utils.go:1029] Waiting for caches to sync for tokens controller
kube# [ 26.250298] kube-apiserver[2212]: I0127 01:25:33.188999 2212 controller.go:606] quota admission added evaluator for: serviceaccounts
kube# [ 26.328747] kube-scheduler[2046]: I0127 01:25:33.267299 2046 leaderelection.go:235] attempting to acquire leader lease kube-system/kube-scheduler...
kube# [ 26.337529] kube-scheduler[2046]: I0127 01:25:33.276611 2046 leaderelection.go:245] successfully acquired lease kube-system/kube-scheduler
kube# [ 26.345438] kube-controller-manager[2075]: I0127 01:25:33.284406 2075 controller_utils.go:1036] Caches are synced for tokens controller
kube# [ 26.359691] kube-controller-manager[2075]: I0127 01:25:33.298749 2075 controllermanager.go:532] Started "replicaset"
kube# [ 26.360150] kube-controller-manager[2075]: I0127 01:25:33.298887 2075 replica_set.go:182] Starting replicaset controller
kube# [ 26.360452] kube-controller-manager[2075]: I0127 01:25:33.299288 2075 controller_utils.go:1029] Waiting for caches to sync for ReplicaSet controller
kube# [ 26.375795] kube-controller-manager[2075]: I0127 01:25:33.314865 2075 controllermanager.go:532] Started "csrapproving"
kube# [ 26.376011] kube-controller-manager[2075]: W0127 01:25:33.314912 2075 controllermanager.go:511] "tokencleaner" is disabled
kube# [ 26.376501] kube-controller-manager[2075]: I0127 01:25:33.315054 2075 certificate_controller.go:113] Starting certificate controller
kube# [ 26.376724] kube-controller-manager[2075]: I0127 01:25:33.315076 2075 controller_utils.go:1029] Waiting for caches to sync for certificate controller
kube# [ 26.388450] kube-controller-manager[2075]: I0127 01:25:33.327550 2075 controllermanager.go:532] Started "endpoint"
kube# [ 26.388675] kube-controller-manager[2075]: I0127 01:25:33.327764 2075 endpoints_controller.go:166] Starting endpoint controller
kube# [ 26.388971] kube-controller-manager[2075]: I0127 01:25:33.327788 2075 controller_utils.go:1029] Waiting for caches to sync for endpoint controller
kube# [ 26.405090] kube-controller-manager[2075]: I0127 01:25:33.344145 2075 controllermanager.go:532] Started "job"
kube# [ 26.405431] kube-controller-manager[2075]: I0127 01:25:33.344300 2075 job_controller.go:143] Starting job controller
kube# [ 26.405732] kube-controller-manager[2075]: I0127 01:25:33.344333 2075 controller_utils.go:1029] Waiting for caches to sync for job controller
kube# [ 26.418993] kube-controller-manager[2075]: I0127 01:25:33.358044 2075 controllermanager.go:532] Started "cronjob"
kube# [ 26.419257] kube-controller-manager[2075]: W0127 01:25:33.358077 2075 controllermanager.go:524] Skipping "ttl-after-finished"
kube# [ 26.419633] kube-controller-manager[2075]: I0127 01:25:33.358170 2075 cronjob_controller.go:96] Starting CronJob Manager
kube# [ 26.432040] kube-controller-manager[2075]: I0127 01:25:33.371135 2075 controllermanager.go:532] Started "statefulset"
kube# [ 26.432243] kube-controller-manager[2075]: W0127 01:25:33.371174 2075 controllermanager.go:524] Skipping "csrsigning"
kube# [ 26.434437] kube-controller-manager[2075]: I0127 01:25:33.373145 2075 stateful_set.go:145] Starting stateful set controller
kube# [ 26.434579] kube-controller-manager[2075]: I0127 01:25:33.373521 2075 controller_utils.go:1029] Waiting for caches to sync for stateful set controller
kube# [ 26.500323] kube-controller-manager[2075]: I0127 01:25:33.439375 2075 controllermanager.go:532] Started "persistentvolume-expander"
kube# [ 26.500495] kube-controller-manager[2075]: I0127 01:25:33.439444 2075 expand_controller.go:300] Starting expand controller
kube# [ 26.504063] kube-controller-manager[2075]: I0127 01:25:33.443179 2075 controller_utils.go:1029] Waiting for caches to sync for expand controller
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# [ 26.687905] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 26.689259] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 26.746937] kube-controller-manager[2075]: I0127 01:25:33.686001 2075 controllermanager.go:532] Started "replicationcontroller"
kube# [ 26.747101] kube-controller-manager[2075]: I0127 01:25:33.686059 2075 replica_set.go:182] Starting replicationcontroller controller
kube# [ 26.747454] kube-controller-manager[2075]: I0127 01:25:33.686078 2075 controller_utils.go:1029] Waiting for caches to sync for ReplicationController controller
kube# [ 27.199958] kube-controller-manager[2075]: I0127 01:25:34.138763 2075 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
kube# [ 27.200151] kube-controller-manager[2075]: I0127 01:25:34.138821 2075 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
kube# [ 27.200551] kube-controller-manager[2075]: I0127 01:25:34.138877 2075 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
kube# [ 27.200855] kube-controller-manager[2075]: I0127 01:25:34.138945 2075 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.extensions
kube# [ 27.201080] kube-controller-manager[2075]: I0127 01:25:34.138982 2075 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
kube# [ 27.201464] kube-controller-manager[2075]: I0127 01:25:34.139007 2075 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
kube# [ 27.201761] kube-controller-manager[2075]: I0127 01:25:34.139043 2075 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
kube# [ 27.202005] kube-controller-manager[2075]: I0127 01:25:34.139068 2075 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.extensions
kube# [ 27.202263] kube-controller-manager[2075]: I0127 01:25:34.139103 2075 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
kube# [ 27.202463] kube-controller-manager[2075]: I0127 01:25:34.139137 2075 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
kube# [ 27.202697] kube-controller-manager[2075]: I0127 01:25:34.139171 2075 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
kube# [ 27.203041] kube-controller-manager[2075]: I0127 01:25:34.139210 2075 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
kube# [ 27.203501] kube-controller-manager[2075]: I0127 01:25:34.139266 2075 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
kube# [ 27.203710] kube-controller-manager[2075]: I0127 01:25:34.139336 2075 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
kube# [ 27.203965] kube-controller-manager[2075]: I0127 01:25:34.139404 2075 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
kube# [ 27.204311] kube-controller-manager[2075]: I0127 01:25:34.139447 2075 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
kube# [ 27.204621] kube-controller-manager[2075]: I0127 01:25:34.139509 2075 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.extensions
kube# [ 27.204936] kube-controller-manager[2075]: I0127 01:25:34.139542 2075 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
kube# [ 27.205459] kube-controller-manager[2075]: I0127 01:25:34.139576 2075 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
kube# [ 27.205754] kube-controller-manager[2075]: I0127 01:25:34.139621 2075 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
kube# [ 27.206113] kube-controller-manager[2075]: I0127 01:25:34.139655 2075 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
kube# [ 27.206484] kube-controller-manager[2075]: I0127 01:25:34.139690 2075 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
kube# [ 27.206759] kube-controller-manager[2075]: I0127 01:25:34.139713 2075 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
kube# [ 27.207212] kube-controller-manager[2075]: I0127 01:25:34.139749 2075 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.extensions
kube# [ 27.207503] kube-controller-manager[2075]: I0127 01:25:34.139842 2075 controllermanager.go:532] Started "resourcequota"
kube# [ 27.207744] kube-controller-manager[2075]: I0127 01:25:34.139876 2075 resource_quota_controller.go:271] Starting resource quota controller
kube# [ 27.207989] kube-controller-manager[2075]: I0127 01:25:34.139911 2075 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
kube# [ 27.208269] kube-controller-manager[2075]: I0127 01:25:34.139946 2075 resource_quota_monitor.go:303] QuotaMonitor running
kube# [ 27.246719] kube-controller-manager[2075]: I0127 01:25:34.185732 2075 node_lifecycle_controller.go:77] Sending events to api server
kube# [ 27.246868] kube-controller-manager[2075]: E0127 01:25:34.185779 2075 core.go:160] failed to start cloud node lifecycle controller: no cloud provider provided
kube# [ 27.247348] kube-controller-manager[2075]: W0127 01:25:34.185792 2075 controllermanager.go:524] Skipping "cloud-node-lifecycle"
kube# [ 27.292398] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 27.294223] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 27.498488] kube-controller-manager[2075]: I0127 01:25:34.437582 2075 controllermanager.go:532] Started "podgc"
kube# [ 27.499490] kube-controller-manager[2075]: I0127 01:25:34.437682 2075 gc_controller.go:76] Starting GC controller
kube# [ 27.499669] kube-controller-manager[2075]: I0127 01:25:34.438790 2075 controller_utils.go:1029] Waiting for caches to sync for GC controller
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# [ 27.749861] kube-controller-manager[2075]: I0127 01:25:34.688787 2075 controllermanager.go:532] Started "serviceaccount"
kube# [ 27.750049] kube-controller-manager[2075]: I0127 01:25:34.688848 2075 serviceaccounts_controller.go:117] Starting service account controller
kube# [ 27.750382] kube-controller-manager[2075]: I0127 01:25:34.688865 2075 controller_utils.go:1029] Waiting for caches to sync for service account controller
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 27.858587] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 27.859831] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 28.428835] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 28.430226] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 28.446711] kube-controller-manager[2075]: I0127 01:25:35.385563 2075 controllermanager.go:532] Started "horizontalpodautoscaling"
kube# [ 28.446859] kube-controller-manager[2075]: I0127 01:25:35.385624 2075 horizontal.go:156] Starting HPA controller
kube# [ 28.447096] kube-controller-manager[2075]: I0127 01:25:35.385643 2075 controller_utils.go:1029] Waiting for caches to sync for HPA controller
kube# [ 28.596494] kube-controller-manager[2075]: I0127 01:25:35.535591 2075 node_ipam_controller.go:94] Sending events to api server.
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 28.997747] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 28.999068] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 29.571068] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 29.572383] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 30.149805] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 30.151719] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 30.717639] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 30.719118] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 31.287724] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 31.289099] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 31.857108] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 31.858577] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 32.428884] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 32.430346] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 33.002329] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 33.003558] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 33.569040] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 33.570456] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 34.137506] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 34.138853] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 34.707701] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 34.709207] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 35.275428] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 35.276802] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 35.852362] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 35.853667] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 36.423686] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 36.425246] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 36.993678] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 36.994996] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 37.566079] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 37.567558] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 38.132548] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 38.134539] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 38.598668] kube-controller-manager[2075]: I0127 01:25:45.537418 2075 range_allocator.go:78] Sending events to api server.
kube# [ 38.598932] kube-controller-manager[2075]: I0127 01:25:45.537494 2075 range_allocator.go:99] No Service CIDR provided. Skipping filtering out service addresses.
kube# [ 38.599206] kube-controller-manager[2075]: I0127 01:25:45.537523 2075 controllermanager.go:532] Started "nodeipam"
kube# [ 38.599491] kube-controller-manager[2075]: I0127 01:25:45.537624 2075 node_ipam_controller.go:162] Starting ipam controller
kube# [ 38.599716] kube-controller-manager[2075]: I0127 01:25:45.537659 2075 controller_utils.go:1029] Waiting for caches to sync for node controller
kube# [ 38.616193] kube-controller-manager[2075]: E0127 01:25:45.555261 2075 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
kube# [ 38.616335] kube-controller-manager[2075]: W0127 01:25:45.555289 2075 controllermanager.go:524] Skipping "service"
kube# [ 38.628248] kube-controller-manager[2075]: I0127 01:25:45.567335 2075 controllermanager.go:532] Started "persistentvolume-binder"
kube# [ 38.628402] kube-controller-manager[2075]: I0127 01:25:45.567393 2075 pv_controller_base.go:282] Starting persistent volume controller
kube# [ 38.628659] kube-controller-manager[2075]: I0127 01:25:45.567425 2075 controller_utils.go:1029] Waiting for caches to sync for persistent volume controller
kube# [ 38.642856] kube-controller-manager[2075]: I0127 01:25:45.581959 2075 controllermanager.go:532] Started "clusterrole-aggregation"
kube# [ 38.643115] kube-controller-manager[2075]: I0127 01:25:45.582061 2075 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
kube# [ 38.643388] kube-controller-manager[2075]: I0127 01:25:45.582089 2075 controller_utils.go:1029] Waiting for caches to sync for ClusterRoleAggregator controller
kube# [ 38.659461] kube-controller-manager[2075]: I0127 01:25:45.598448 2075 controllermanager.go:532] Started "daemonset"
kube# [ 38.659654] kube-controller-manager[2075]: I0127 01:25:45.598472 2075 daemon_controller.go:267] Starting daemon sets controller
kube# [ 38.659858] kube-controller-manager[2075]: I0127 01:25:45.598494 2075 controller_utils.go:1029] Waiting for caches to sync for daemon sets controller
kube# [ 38.678793] kube-controller-manager[2075]: I0127 01:25:45.617881 2075 controllermanager.go:532] Started "disruption"
kube# [ 38.679012] kube-controller-manager[2075]: I0127 01:25:45.617981 2075 disruption.go:333] Starting disruption controller
kube# [ 38.679529] kube-controller-manager[2075]: I0127 01:25:45.617999 2075 controller_utils.go:1029] Waiting for caches to sync for disruption controller
kube# [ 38.712748] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 38.714149] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 38.750400] kube-controller-manager[2075]: I0127 01:25:45.689478 2075 controllermanager.go:532] Started "ttl"
kube# [ 38.750586] kube-controller-manager[2075]: W0127 01:25:45.689502 2075 controllermanager.go:511] "bootstrapsigner" is disabled
kube# [ 38.750834] kube-controller-manager[2075]: I0127 01:25:45.689548 2075 ttl_controller.go:116] Starting TTL controller
kube# [ 38.751068] kube-controller-manager[2075]: I0127 01:25:45.689568 2075 controller_utils.go:1029] Waiting for caches to sync for TTL controller
kube# [ 39.001243] kube-controller-manager[2075]: W0127 01:25:45.940329 2075 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
kube# [ 39.001463] kube-controller-manager[2075]: E0127 01:25:45.940383 2075 plugins.go:590] Error initializing dynamic plugin prober: Error (re-)creating driver directory: mkdir /usr/libexec: permission denied
kube# [ 39.001803] kube-controller-manager[2075]: I0127 01:25:45.940586 2075 controllermanager.go:532] Started "attachdetach"
kube# [ 39.002032] kube-controller-manager[2075]: I0127 01:25:45.940640 2075 attach_detach_controller.go:335] Starting attach detach controller
kube# [ 39.002259] kube-controller-manager[2075]: I0127 01:25:45.940675 2075 controller_utils.go:1029] Waiting for caches to sync for attach detach controller
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# [ 39.250956] kube-controller-manager[2075]: I0127 01:25:46.190036 2075 controllermanager.go:532] Started "pv-protection"
kube# [ 39.251400] kube-controller-manager[2075]: I0127 01:25:46.190119 2075 pv_protection_controller.go:82] Starting PV protection controller
kube# [ 39.251687] kube-controller-manager[2075]: I0127 01:25:46.190144 2075 controller_utils.go:1029] Waiting for caches to sync for PV protection controller
kube# [ 39.288010] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 39.289934] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.06 seconds)
kube# [ 39.500559] kube-controller-manager[2075]: I0127 01:25:46.439629 2075 controllermanager.go:532] Started "deployment"
kube# [ 39.500757] kube-controller-manager[2075]: I0127 01:25:46.439680 2075 deployment_controller.go:152] Starting deployment controller
kube# [ 39.501085] kube-controller-manager[2075]: I0127 01:25:46.439697 2075 controller_utils.go:1029] Waiting for caches to sync for deployment controller
kube# [ 39.650651] kube-controller-manager[2075]: I0127 01:25:46.589315 2075 node_lifecycle_controller.go:291] Sending events to api server.
kube# [ 39.650946] kube-controller-manager[2075]: I0127 01:25:46.589542 2075 node_lifecycle_controller.go:324] Controller is using taint based evictions.
kube# [ 39.651251] kube-controller-manager[2075]: I0127 01:25:46.589591 2075 taint_manager.go:158] Sending events to api server.
kube# [ 39.651488] kube-controller-manager[2075]: I0127 01:25:46.589922 2075 node_lifecycle_controller.go:418] Controller will reconcile labels.
kube# [ 39.651677] kube-controller-manager[2075]: I0127 01:25:46.590011 2075 node_lifecycle_controller.go:431] Controller will taint node by condition.
kube# [ 39.651881] kube-controller-manager[2075]: I0127 01:25:46.590041 2075 controllermanager.go:532] Started "nodelifecycle"
kube# [ 39.652097] kube-controller-manager[2075]: W0127 01:25:46.590056 2075 controllermanager.go:524] Skipping "root-ca-cert-publisher"
kube# [ 39.652433] kube-controller-manager[2075]: I0127 01:25:46.590124 2075 node_lifecycle_controller.go:455] Starting node controller
kube# [ 39.652699] kube-controller-manager[2075]: I0127 01:25:46.590147 2075 controller_utils.go:1029] Waiting for caches to sync for taint controller
kube# [ 39.861028] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 39.862832] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 39.905127] kube-controller-manager[2075]: I0127 01:25:46.844150 2075 controllermanager.go:532] Started "namespace"
kube# [ 39.905410] kube-controller-manager[2075]: I0127 01:25:46.844205 2075 namespace_controller.go:186] Starting namespace controller
kube# [ 39.905532] kube-controller-manager[2075]: I0127 01:25:46.844221 2075 controller_utils.go:1029] Waiting for caches to sync for namespace controller
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 40.435498] kube-addons[2273]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 40.436748] kube-addons[2273]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 40.704588] kube-controller-manager[2075]: I0127 01:25:47.643220 2075 garbagecollector.go:128] Starting garbage collector controller
kube# [ 40.704866] kube-controller-manager[2075]: I0127 01:25:47.643923 2075 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
kube# [ 40.705119] kube-controller-manager[2075]: I0127 01:25:47.643233 2075 controllermanager.go:532] Started "garbagecollector"
kube# [ 40.705431] kube-controller-manager[2075]: I0127 01:25:47.643973 2075 graph_builder.go:280] GraphBuilder running
kube# [ 40.712544] kube-controller-manager[2075]: I0127 01:25:47.651643 2075 controllermanager.go:532] Started "csrcleaner"
kube# [ 40.712704] kube-controller-manager[2075]: W0127 01:25:47.651672 2075 core.go:174] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
kube# [ 40.712967] kube-controller-manager[2075]: W0127 01:25:47.651685 2075 controllermanager.go:524] Skipping "route"
kube# [ 40.713490] kube-controller-manager[2075]: I0127 01:25:47.651730 2075 cleaner.go:81] Starting CSR cleaner controller
kube# [ 40.727982] kube-controller-manager[2075]: I0127 01:25:47.667061 2075 controllermanager.go:532] Started "pvc-protection"
kube# [ 40.728521] kube-controller-manager[2075]: I0127 01:25:47.667598 2075 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
kube# [ 40.731636] kube-controller-manager[2075]: I0127 01:25:47.670744 2075 pvc_protection_controller.go:100] Starting PVC protection controller
kube# [ 40.731767] kube-controller-manager[2075]: I0127 01:25:47.670776 2075 controller_utils.go:1029] Waiting for caches to sync for PVC protection controller
kube# [ 40.749947] kube-controller-manager[2075]: I0127 01:25:47.689036 2075 controller_utils.go:1036] Caches are synced for service account controller
kube# [ 40.750635] kube-controller-manager[2075]: I0127 01:25:47.689744 2075 controller_utils.go:1036] Caches are synced for TTL controller
kube# [ 40.751229] kube-controller-manager[2075]: I0127 01:25:47.690305 2075 controller_utils.go:1036] Caches are synced for PV protection controller
kube# [ 40.798882] kube-controller-manager[2075]: I0127 01:25:47.737874 2075 controller_utils.go:1036] Caches are synced for node controller
kube# [ 40.799083] kube-controller-manager[2075]: I0127 01:25:47.737916 2075 range_allocator.go:157] Starting range CIDR allocator
kube# [ 40.799384] kube-controller-manager[2075]: I0127 01:25:47.737939 2075 controller_utils.go:1029] Waiting for caches to sync for cidrallocator controller
kube# [ 40.805294] kube-controller-manager[2075]: I0127 01:25:47.744408 2075 controller_utils.go:1036] Caches are synced for namespace controller
kube# [ 40.831864] kube-controller-manager[2075]: I0127 01:25:47.770955 2075 controller_utils.go:1036] Caches are synced for PVC protection controller
kube# [ 40.846737] kube-controller-manager[2075]: I0127 01:25:47.785831 2075 controller_utils.go:1036] Caches are synced for HPA controller
kube# [ 40.851220] kube-controller-manager[2075]: I0127 01:25:47.790316 2075 controller_utils.go:1036] Caches are synced for taint controller
kube# [ 40.851333] kube-controller-manager[2075]: I0127 01:25:47.790364 2075 taint_manager.go:182] Starting NoExecuteTaintManager
kube# [ 40.860695] kube-controller-manager[2075]: I0127 01:25:47.799780 2075 controller_utils.go:1036] Caches are synced for ReplicaSet controller
kube# [ 40.899046] kube-controller-manager[2075]: I0127 01:25:47.838129 2075 controller_utils.go:1036] Caches are synced for cidrallocator controller
kube# [ 40.900248] kube-controller-manager[2075]: I0127 01:25:47.839340 2075 controller_utils.go:1036] Caches are synced for GC controller
kube# [ 40.900859] kube-controller-manager[2075]: I0127 01:25:47.839963 2075 controller_utils.go:1036] Caches are synced for deployment controller
kube# [ 40.976188] kube-controller-manager[2075]: I0127 01:25:47.915249 2075 controller_utils.go:1036] Caches are synced for certificate controller
kube# [ 41.006450] kube-addons[2273]: INFO: == Default service account in the kube-system namespace has token default-token-zpjt7 ==
kube# [ 41.012038] kube-addons[2273]: find: ‘/etc/kubernetes/admission-controls’: No such file or directory
kube# [ 41.017155] kube-addons[2273]: INFO: == Entering periodical apply loop at 2020-01-27T01:25:47+00:00 ==
kube# [ 41.034618] kube-controller-manager[2075]: I0127 01:25:47.973722 2075 controller_utils.go:1036] Caches are synced for stateful set controller
kube# [ 41.059596] kube-controller-manager[2075]: I0127 01:25:47.998679 2075 controller_utils.go:1036] Caches are synced for daemon sets controller
kube# [ 41.095341] kube-addons[2273]: INFO: Leader is kube
kube# [ 41.201771] kube-controller-manager[2075]: I0127 01:25:48.140846 2075 controller_utils.go:1036] Caches are synced for attach detach controller
kube# [ 41.222253] kube-addons[2273]: error: no objects passed to create
kube# [ 41.226881] kube-addons[2273]: INFO: == Kubernetes addon ensure completed at 2020-01-27T01:25:48+00:00 ==
kube# [ 41.226991] kube-addons[2273]: INFO: == Reconciling with deprecated label ==
kube# [ 41.305512] kube-controller-manager[2075]: I0127 01:25:48.244590 2075 controller_utils.go:1036] Caches are synced for job controller
kube# [ 41.346321] kube-controller-manager[2075]: I0127 01:25:48.285384 2075 controller_utils.go:1036] Caches are synced for ClusterRoleAggregator controller
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# [ 41.384474] kube-apiserver[2212]: I0127 01:25:48.323042 2212 controller.go:606] quota admission added evaluator for: deployments.extensions
kube# [ 41.395092] kube-apiserver[2212]: I0127 01:25:48.334179 2212 controller.go:606] quota admission added evaluator for: replicasets.apps
kube# [ 41.397430] kube-controller-manager[2075]: I0127 01:25:48.336528 2075 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"3a21b479-f355-4e14-956a-1b4d5e773484", APIVersion:"apps/v1", ResourceVersion:"268", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-7cb9b6dd8f to 2
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube# [ 41.411380] kube-controller-manager[2075]: I0127 01:25:48.350427 2075 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-7cb9b6dd8f", UID:"1697212a-cead-4ada-bf8c-f6369119b0d4", APIVersion:"apps/v1", ResourceVersion:"269", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-7cb9b6dd8f-spqtb
kube: exit status 1
(0.06 seconds)
kube# [ 41.415759] kube-controller-manager[2075]: I0127 01:25:48.354759 2075 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-7cb9b6dd8f", UID:"1697212a-cead-4ada-bf8c-f6369119b0d4", APIVersion:"apps/v1", ResourceVersion:"269", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-7cb9b6dd8f-54dvp
kube# [ 41.432476] kube-scheduler[2046]: E0127 01:25:48.371101 2046 scheduler.go:485] error selecting node for pod: no nodes available to schedule pods
kube# [ 41.434699] kube-scheduler[2046]: E0127 01:25:48.373771 2046 factory.go:702] pod is already present in the activeQ
kube# [ 41.435158] kube-scheduler[2046]: E0127 01:25:48.374160 2046 scheduler.go:485] error selecting node for pod: no nodes available to schedule pods
kube# [ 41.435508] kube-scheduler[2046]: E0127 01:25:48.374581 2046 scheduler.go:485] error selecting node for pod: no nodes available to schedule pods
kube# [ 41.435872] kube-scheduler[2046]: E0127 01:25:48.374712 2046 scheduler.go:485] error selecting node for pod: no nodes available to schedule pods
kube# [ 41.488963] kube-controller-manager[2075]: I0127 01:25:48.427962 2075 controller_utils.go:1036] Caches are synced for endpoint controller
kube# [ 41.504402] kube-controller-manager[2075]: I0127 01:25:48.443408 2075 controller_utils.go:1036] Caches are synced for expand controller
kube# [ 41.528514] kube-controller-manager[2075]: I0127 01:25:48.467620 2075 controller_utils.go:1036] Caches are synced for persistent volume controller
kube# [ 41.547217] kube-controller-manager[2075]: I0127 01:25:48.486265 2075 controller_utils.go:1036] Caches are synced for ReplicationController controller
kube# [ 41.579203] kube-controller-manager[2075]: I0127 01:25:48.518254 2075 controller_utils.go:1036] Caches are synced for disruption controller
kube# [ 41.579331] kube-controller-manager[2075]: I0127 01:25:48.518295 2075 disruption.go:341] Sending events to api server.
kube# [ 41.601019] kube-controller-manager[2075]: I0127 01:25:48.540130 2075 controller_utils.go:1036] Caches are synced for resource quota controller
kube# [ 41.605112] kube-controller-manager[2075]: I0127 01:25:48.544193 2075 controller_utils.go:1036] Caches are synced for garbage collector controller
kube# [ 41.605286] kube-controller-manager[2075]: I0127 01:25:48.544218 2075 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
kube# [ 41.628823] kube-controller-manager[2075]: I0127 01:25:48.567838 2075 controller_utils.go:1036] Caches are synced for resource quota controller
kube# [ 42.201148] kube-controller-manager[2075]: I0127 01:25:49.139921 2075 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
kube# [ 42.301094] kube-controller-manager[2075]: I0127 01:25:49.240176 2075 controller_utils.go:1036] Caches are synced for garbage collector controller
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 42.629905] kube-addons[2273]: configmap/coredns created
kube# [ 42.630057] kube-addons[2273]: deployment.extensions/coredns created
kube# [ 42.630646] kube-addons[2273]: serviceaccount/coredns created
kube# [ 42.630893] kube-addons[2273]: service/kube-dns created
kube# [ 42.631219] kube-addons[2273]: INFO: == Reconciling with addon-manager label ==
kube# [ 42.762402] kube-addons[2273]: error: no objects passed to apply
kube# [ 42.769223] kube-addons[2273]: INFO: == Kubernetes addon reconcile completed at 2020-01-27T01:25:49+00:00 ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
(the four-line retry block above repeats 54 more times with identical output: "running command: kubectl get node kube.my.xzy | grep -w Ready", NotFound, exit status 1, ~0.05 seconds each)
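
The retry blocks show what the test driver has been doing since the apiserver came up: it polls for node readiness, and every attempt fails with NotFound, meaning no Node object named kube.my.xzy was ever registered. The loop is roughly equivalent to the following shell sketch (the 1-second sleep is an assumption; the real retry interval is internal to the NixOS test driver):

  # keep retrying until the node reports Ready; in this run it never does
  while ! kubectl get node kube.my.xzy | grep -w Ready; do
    sleep 1
  done
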
kube# [ 100.848162] kube-addons[2273]: INFO: Leader is kube
kube# [ 100.973039] kube-addons[2273]: error: no objects passed to create
kube# [ 100.979102] kube-addons[2273]: INFO: == Kubernetes addon ensure completed at 2020-01-27T01:26:47+00:00 ==
kube# [ 100.979360] kube-addons[2273]: INFO: == Reconciling with deprecated label ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 102.332330] kube-addons[2273]: configmap/coredns unchanged
kube# [ 102.332525] kube-addons[2273]: deployment.extensions/coredns unchanged
kube# [ 102.332902] kube-addons[2273]: serviceaccount/coredns unchanged
kube# [ 102.333085] kube-addons[2273]: service/kube-dns unchanged
kube# [ 102.333477] kube-addons[2273]: INFO: == Reconciling with addon-manager label ==
kube# [ 102.462049] kube-addons[2273]: error: no objects passed to apply
kube# [ 102.468415] kube-addons[2273]: INFO: == Kubernetes addon reconcile completed at 2020-01-27T01:26:49+00:00 ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
(the same retry block repeats 5 more times with identical output)
kube# [ 108.526369] kube-scheduler[2046]: E0127 01:26:55.465070 2046 scheduler.go:485] error selecting node for pod: no nodes available to schedule pods
kube# [ 108.526600] kube-scheduler[2046]: E0127 01:26:55.465270 2046 scheduler.go:485] error selecting node for pod: no nodes available to schedule pods
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
error: interrupted by the user
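
The run hung on the readiness poll and was interrupted by hand. Two details in the log hint at why: the scheduler keeps logging "no nodes available to schedule pods", and the only node name that appears in any API object is "kube" (see the kube-proxy event's InvolvedObject Name:"kube" above), while the test polls for "kube.my.xzy". That suggests a mismatch between the node name the kubelet registers and the FQDN the test expects. Listing the registered nodes would confirm it; this is plain kubectl, not specific to this setup:

  # show whichever node names actually registered with the apiserver
  kubectl get nodes -o wide
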