@xavierzwirtz
Created January 27, 2020 01:37
hung2
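For context: a transcript of this shape is what Nix prints while realising and running a NixOS VM test derivation. A minimal sketch of the kind of invocation that produces it is below; the expression file and attribute path are guesses, not recovered from this log (only the derivation name `vm-test-run-nginx-deployment` appears in it):

```shell
# Hypothetical invocation — the file name and attribute path are assumptions;
# adjust them to the actual test expression in your repository.
nix-build release.nix -A vm-test-run-nginx-deployment
```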
these derivations will be built:
/nix/store/24lw3dnbif9p6r11mq5nk6z3rgr209cb-bulk-layers.drv
/nix/store/3lgbww299v2mka7p9by84yxdd341wwzx-nginx-config.json.drv
/nix/store/8xsaixzaafay1vrbiif69as8l69jyh9i-nginx-customisation-layer.drv
/nix/store/ycwgfi9bgpq0dnxx8fc7732h1gjz9r8x-closure.drv
/nix/store/vgcij363wdayqvxhh5d7g0db6p1qvvrc-closure-paths.drv
/nix/store/zlbpl3x8s1siq093g34li4f0cxrq8r8n-store-path-to-layer.sh.drv
/nix/store/rasm8f1pr0miss2w0v9p2gb29w5jcwra-nginx-granular-docker-layers.drv
/nix/store/98a62c975gx89jmpy3knx0z276yh036y-docker-image-nginx.tar.gz.drv
/nix/store/ppdf7hillsy84h2l2qb30q1in698lwss-kubenix-generated.json.drv
/nix/store/qv5icsq2i5d8x58bh1d7b8iyiq0f2w21-run-nixos-vm.drv
/nix/store/s9a75xw41s9rv4wbdh7y8gprxg13szg4-nixos-vm.drv
/nix/store/mh8nqz1waq0gj2zapp9lsqszxng04q9r-nixos-test-driver-nginx-deployment.drv
/nix/store/ac0l0kff56ya4bj07gf5a47p97mlgj5z-vm-test-run-nginx-deployment.drv
these paths will be fetched (112.93 MiB download, 449.40 MiB unpacked):
/nix/store/01j21isxi1wn8vsjpbhlplyw1ddyypjm-geoip-1.6.12
/nix/store/06nq4z17fh43wrbn6hl1yq7bzs99lpr1-hook
/nix/store/0dshs4vdqivr9l3cnf244rizk3w6rk20-virglrenderer-0.7.0
/nix/store/2xwxj5qrrc71asdk1wyq19nz9k845pzs-patchelf-0.9
/nix/store/2yj27w7if3m362np4znnyip6v4y44fsz-go-1.12.9
/nix/store/3g2pkmc1s9ycjaxaqc5hrzmq05r5ywbi-stdenv-linux
/nix/store/4rmwdzcypzbs05kbkcxrp6k0ijmqhldv-perl5.30.0-XML-Writer-0.625
/nix/store/4w2zbpv9ihl36kbpp6w5d1x33gp5ivfh-source
/nix/store/62nx464pw43wx3fvg2dnfsaijl7nvvzq-jshon-20160111.2
/nix/store/86kxh5v2mggj4ghy8l7khqdffhwixhhn-jquery-ui-1.11.4
/nix/store/8cgm2dl5grnhddyknc3d06f7f2r30jf0-libxml2-2.9.9-bin
/nix/store/8g88npivsfhzfwzpw2j35wzzf2lbjf71-gd-2.2.5
/nix/store/976mm1v0m126d932c53iqwd7clx3ycka-libxslt-1.1.33-dev
/nix/store/aa7d477nrc0w14lqmib8619bc83csm2m-gnutls-3.6.11.1-dev
/nix/store/apfgni3w7sd7qnnzws0ky8j40sbigy4m-hook
/nix/store/axlxp2c9pqpy196jcncy7i0alpp8q4yn-libxslt-1.1.33-bin
/nix/store/blwx4aab2ygxhall7kwrdyb3nwk04bcm-tarsum
/nix/store/cnrpqd2i7sz8xxxjv3dspn75bhqwv01i-perl5.30.0-Term-ReadLine-Gnu-1.36
/nix/store/cwym8n7lkp02df7qf41j0gldgagzvjn4-netpbm-10.82.01
/nix/store/ggbrpajhaxmzc840ky35zsjva9nilypv-spice-0.14.2
/nix/store/h0bxpn54jvvm4qi0y57im3086flzqj7z-pcre2-10.33-dev
/nix/store/j8fq1ksp37w88rx80blzazldi17f3x7s-gnumake-4.2.1
/nix/store/jg0mniv6b69lfbb4fix0qdlf8fj22pdh-usbredir-0.8.0
/nix/store/jsqrk045m09i136mgcfjfai8i05nq14c-source
/nix/store/k3n5hvqb2lkx1z7cyyb5wsc6q6zhndlp-jquery-1.11.3
/nix/store/k9cgcprirg5zyjsdmd503lqj2mhvxqnc-nginx-1.16.1
/nix/store/kdzap6v930z3bj8h47jfk9hgasrqmhky-pcre2-10.33-bin
/nix/store/l8yj41cr5c6mx3cp4xazgxf49f14adhg-qemu-host-cpu-only-for-vm-tests-4.0.1
/nix/store/m97z0dr68wn36n8860dfvaa7w1qfrk30-vte-0.56.3
/nix/store/n14bjnksgk2phl8n69m4yabmds7f0jj2-source
/nix/store/q17zhi1pbfxr2k5mwc2pif258ib1bwag-autogen-5.18.12
/nix/store/qghrkvk86f9llfkcr1bxsypqbw1a4qmw-stdenv-linux
/nix/store/ryavpa9pbwf4w2j0q8jq7x6scy5igvxw-autogen-5.18.12-lib
/nix/store/s834pvkk1dc10a6f0x5fljvah8rkd6d0-nixos-test-driver
/nix/store/w3zk97m66b45grjabblijbfdhl4s82pc-nettle-3.4.1-dev
/nix/store/wl2iq6bx1k3j8wa5qqygra102k3nlijw-libxml2-2.9.9-dev
/nix/store/wvd3r9r8a2w3v1vcjbw1avfcbzv9aspq-libcacard-2.7.0
/nix/store/x664lr92z3lccfh28p7axk4jv6250fpi-gnutls-3.6.11.1-bin
/nix/store/x7vqi78gkhb3n1n1c4w4bgkakbyv5sq0-lndir-1.0.3
/nix/store/xbf40646brxmk2j59yc5ybq3zfhsdzkk-jq-1.6-dev
/nix/store/xhmbbqfl63slc37fl94h33n6ny6ky69a-pigz-2.4
/nix/store/zbwhp0jrf8y33l187yjs5j002lwl30d7-vde2-2.3.2
copying path '/nix/store/k3n5hvqb2lkx1z7cyyb5wsc6q6zhndlp-jquery-1.11.3' from 'https://cache.nixos.org'...
copying path '/nix/store/86kxh5v2mggj4ghy8l7khqdffhwixhhn-jquery-ui-1.11.4' from 'https://cache.nixos.org'...
copying path '/nix/store/j8fq1ksp37w88rx80blzazldi17f3x7s-gnumake-4.2.1' from 'https://cache.nixos.org'...
copying path '/nix/store/axlxp2c9pqpy196jcncy7i0alpp8q4yn-libxslt-1.1.33-bin' from 'https://cache.nixos.org'...
copying path '/nix/store/2xwxj5qrrc71asdk1wyq19nz9k845pzs-patchelf-0.9' from 'https://cache.nixos.org'...
copying path '/nix/store/apfgni3w7sd7qnnzws0ky8j40sbigy4m-hook' from 'https://cache.nixos.org'...
copying path '/nix/store/8cgm2dl5grnhddyknc3d06f7f2r30jf0-libxml2-2.9.9-bin' from 'https://cache.nixos.org'...
copying path '/nix/store/4rmwdzcypzbs05kbkcxrp6k0ijmqhldv-perl5.30.0-XML-Writer-0.625' from 'https://cache.nixos.org'...
copying path '/nix/store/cwym8n7lkp02df7qf41j0gldgagzvjn4-netpbm-10.82.01' from 'https://cache.nixos.org'...
copying path '/nix/store/cnrpqd2i7sz8xxxjv3dspn75bhqwv01i-perl5.30.0-Term-ReadLine-Gnu-1.36' from 'https://cache.nixos.org'...
copying path '/nix/store/zbwhp0jrf8y33l187yjs5j002lwl30d7-vde2-2.3.2' from 'https://cache.nixos.org'...
copying path '/nix/store/wvd3r9r8a2w3v1vcjbw1avfcbzv9aspq-libcacard-2.7.0' from 'https://cache.nixos.org'...
copying path '/nix/store/ggbrpajhaxmzc840ky35zsjva9nilypv-spice-0.14.2' from 'https://cache.nixos.org'...
copying path '/nix/store/jg0mniv6b69lfbb4fix0qdlf8fj22pdh-usbredir-0.8.0' from 'https://cache.nixos.org'...
copying path '/nix/store/0dshs4vdqivr9l3cnf244rizk3w6rk20-virglrenderer-0.7.0' from 'https://cache.nixos.org'...
copying path '/nix/store/xbf40646brxmk2j59yc5ybq3zfhsdzkk-jq-1.6-dev' from 'https://cache.nixos.org'...
copying path '/nix/store/62nx464pw43wx3fvg2dnfsaijl7nvvzq-jshon-20160111.2' from 'https://cache.nixos.org'...
copying path '/nix/store/xhmbbqfl63slc37fl94h33n6ny6ky69a-pigz-2.4' from 'https://cache.nixos.org'...
copying path '/nix/store/w3zk97m66b45grjabblijbfdhl4s82pc-nettle-3.4.1-dev' from 'https://cache.nixos.org'...
copying path '/nix/store/kdzap6v930z3bj8h47jfk9hgasrqmhky-pcre2-10.33-bin' from 'https://cache.nixos.org'...
copying path '/nix/store/q17zhi1pbfxr2k5mwc2pif258ib1bwag-autogen-5.18.12' from 'https://cache.nixos.org'...
copying path '/nix/store/4w2zbpv9ihl36kbpp6w5d1x33gp5ivfh-source' from 'https://cache.nixos.org'...
copying path '/nix/store/jsqrk045m09i136mgcfjfai8i05nq14c-source' from 'https://cache.nixos.org'...
copying path '/nix/store/n14bjnksgk2phl8n69m4yabmds7f0jj2-source' from 'https://cache.nixos.org'...
copying path '/nix/store/01j21isxi1wn8vsjpbhlplyw1ddyypjm-geoip-1.6.12' from 'https://cache.nixos.org'...
copying path '/nix/store/2yj27w7if3m362np4znnyip6v4y44fsz-go-1.12.9' from 'https://cache.nixos.org'...
copying path '/nix/store/x7vqi78gkhb3n1n1c4w4bgkakbyv5sq0-lndir-1.0.3' from 'https://cache.nixos.org'...
copying path '/nix/store/8g88npivsfhzfwzpw2j35wzzf2lbjf71-gd-2.2.5' from 'https://cache.nixos.org'...
copying path '/nix/store/06nq4z17fh43wrbn6hl1yq7bzs99lpr1-hook' from 'https://cache.nixos.org'...
copying path '/nix/store/wl2iq6bx1k3j8wa5qqygra102k3nlijw-libxml2-2.9.9-dev' from 'https://cache.nixos.org'...
copying path '/nix/store/h0bxpn54jvvm4qi0y57im3086flzqj7z-pcre2-10.33-dev' from 'https://cache.nixos.org'...
copying path '/nix/store/976mm1v0m126d932c53iqwd7clx3ycka-libxslt-1.1.33-dev' from 'https://cache.nixos.org'...
copying path '/nix/store/3g2pkmc1s9ycjaxaqc5hrzmq05r5ywbi-stdenv-linux' from 'https://cache.nixos.org'...
copying path '/nix/store/qghrkvk86f9llfkcr1bxsypqbw1a4qmw-stdenv-linux' from 'https://cache.nixos.org'...
copying path '/nix/store/ryavpa9pbwf4w2j0q8jq7x6scy5igvxw-autogen-5.18.12-lib' from 'https://cache.nixos.org'...
copying path '/nix/store/k9cgcprirg5zyjsdmd503lqj2mhvxqnc-nginx-1.16.1' from 'https://cache.nixos.org'...
building '/nix/store/ppdf7hillsy84h2l2qb30q1in698lwss-kubenix-generated.json.drv'...
building '/nix/store/3lgbww299v2mka7p9by84yxdd341wwzx-nginx-config.json.drv'...
building '/nix/store/zlbpl3x8s1siq093g34li4f0cxrq8r8n-store-path-to-layer.sh.drv'...
copying path '/nix/store/x664lr92z3lccfh28p7axk4jv6250fpi-gnutls-3.6.11.1-bin' from 'https://cache.nixos.org'...
building '/nix/store/24lw3dnbif9p6r11mq5nk6z3rgr209cb-bulk-layers.drv'...
building '/nix/store/ycwgfi9bgpq0dnxx8fc7732h1gjz9r8x-closure.drv'...
copying path '/nix/store/aa7d477nrc0w14lqmib8619bc83csm2m-gnutls-3.6.11.1-dev' from 'https://cache.nixos.org'...
building '/nix/store/vgcij363wdayqvxhh5d7g0db6p1qvvrc-closure-paths.drv'...
copying path '/nix/store/m97z0dr68wn36n8860dfvaa7w1qfrk30-vte-0.56.3' from 'https://cache.nixos.org'...
copying path '/nix/store/l8yj41cr5c6mx3cp4xazgxf49f14adhg-qemu-host-cpu-only-for-vm-tests-4.0.1' from 'https://cache.nixos.org'...
copying path '/nix/store/s834pvkk1dc10a6f0x5fljvah8rkd6d0-nixos-test-driver' from 'https://cache.nixos.org'...
building '/nix/store/qv5icsq2i5d8x58bh1d7b8iyiq0f2w21-run-nixos-vm.drv'...
building '/nix/store/s9a75xw41s9rv4wbdh7y8gprxg13szg4-nixos-vm.drv'...
copying path '/nix/store/blwx4aab2ygxhall7kwrdyb3nwk04bcm-tarsum' from 'https://cache.nixos.org'...
building '/nix/store/8xsaixzaafay1vrbiif69as8l69jyh9i-nginx-customisation-layer.drv'...
building '/nix/store/rasm8f1pr0miss2w0v9p2gb29w5jcwra-nginx-granular-docker-layers.drv'...
Packing layer...
Computing layer checksum...
Creating layer #1 for /nix/store/wx1vk75bpdr65g6xwxbj4rw0pk04v5j3-glibc-2.27
Creating layer #2 for /nix/store/xvxsbvbi7ckccz4pz2j6np7czadgjy2x-zlib-1.2.11
Creating layer #3 for /nix/store/n55nxs8xxdwkwv4kqh99pdnyqxp0d1zg-libpng-apng-1.6.37
Creating layer #4 for /nix/store/0ykbl0k34cfh80gvawqy5f8v1yq7pph8-bzip2-1.0.6.0.1
Creating layer #5 for /nix/store/s7j9n1wccws4kgigknl4rfqpyjxy544y-libjpeg-turbo-2.0.3
Creating layer #6 for /nix/store/w4snc9q1ns3rqg8zykkh9ric1d92akwd-dejavu-fonts-minimal-2.37
Creating layer #7 for /nix/store/nzb33937sf9031ik3v7c8d039lnviglk-freetype-2.10.1
Creating layer #8 for /nix/store/784rh7jrfhagbkydjfrv68h9x3g4gqmk-gcc-8.3.0-lib
Creating layer #9 for /nix/store/blykn8wlxh1n91dzxizyxvkygmd911cx-xz-5.2.4
tar: Removing leading `/' from member names
Creating layer #10 for /nix/store/lp6xmsg44yflzd3rv2qc4dc0m9y0qr2n-expat-2.2.7
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #11 for /nix/store/9r9px061ymn6r8wdzgdhbm7sdb5b0dri-fontconfig-2.12.6
Creating layer #12 for /nix/store/yydyda5cz2x74pqp643q2r3p6ipy6d9b-giflib-5.1.4
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #13 for /nix/store/nl4l9vkbvpp5jblr7kycx2qqchbnn98a-libtiff-4.0.10
tar: Removing leading `/' from member names
Creating layer #14 for /nix/store/5zvqxjp62ahwvgqm4y4x9p9ym112hljj-libxml2-2.9.9
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #15 for /nix/store/6mhw8asq3ciinkky6mqq6qn6sfxrkgks-fontconfig-2.12.6-lib
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #16 for /nix/store/vwydn02iqfg7xp1a6rhpyhs8vl9v2b6d-libwebp-1.0.3
Creating layer #17 for /nix/store/8g88npivsfhzfwzpw2j35wzzf2lbjf71-gd-2.2.5
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #18 for /nix/store/01j21isxi1wn8vsjpbhlplyw1ddyypjm-geoip-1.6.12
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #19 for /nix/store/g42rl3xfqml0yrh5yjdfy4rfdpk1cc7y-libxslt-1.1.33
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #21 for /nix/store/6p4kq0v91y90jv5zqb4gri38c47wxglj-pcre-8.43
Creating layer #20 for /nix/store/z9vsvmll45kjdf7j9h0vlxjjya6yxgc0-openssl-1.1.1d
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #22 for /nix/store/4w2zbpv9ihl36kbpp6w5d1x33gp5ivfh-source
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #23 for /nix/store/jsqrk045m09i136mgcfjfai8i05nq14c-source /nix/store/n14bjnksgk2phl8n69m4yabmds7f0jj2-source /nix/store/k9cgcprirg5zyjsdmd503lqj2mhvxqnc-nginx-1.16.1 /nix/store/27hpjxyy26v0bpp7x8g72nddcv6nv3hw-bulk-layers /nix/store/gskazlyrm0f1bbcngy04f8m07lm2wsqf-nginx-config.json /nix/store/n8w8r7z1z962scfcc1h7rsdqnaf5xncc-closure
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Finished building layer 'nginx-granular-docker-layers'
building '/nix/store/98a62c975gx89jmpy3knx0z276yh036y-docker-image-nginx.tar.gz.drv'...
Cooking the image...
Finished.
building '/nix/store/mh8nqz1waq0gj2zapp9lsqszxng04q9r-nixos-test-driver-nginx-deployment.drv'...
building '/nix/store/ac0l0kff56ya4bj07gf5a47p97mlgj5z-vm-test-run-nginx-deployment.drv'...
starting VDE switch for network 1
running the VM test script
starting all VMs
kube: starting vm
kube# Formatting '/build/vm-state-kube/kube.qcow2', fmt=qcow2 size=4294967296 cluster_size=65536 lazy_refcounts=off refcount_bits=16
kube: QEMU running (pid 9)
(0.06 seconds)
kube: waiting for success: kubectl get node kube.my.xzy | grep -w Ready
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube: waiting for the VM to finish booting
kube# SeaBIOS (version rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.org)
kube#
kube#
kube# iPXE (http://ipxe.org) 00:03.0 C980 PCI2.10 PnP PMM+7FF90FD0+7FEF0FD0 C980
kube#
kube#
kube#
kube#
kube# iPXE (http://ipxe.org) 00:08.0 CA80 PCI2.10 PnP PMM 7FF90FD0 7FEF0FD0 CA80
kube#
kube#
kube# Booting from ROM...
kube# Probing EDD (edd=off to disable)... ok
kube# [ 0.000000] Linux version 4.19.95 (nixbld@localhost) (gcc version 8.3.0 (GCC)) #1-NixOS SMP Sun Jan 12 11:17:30 UTC 2020
kube# [ 0.000000] Command line: console=ttyS0 panic=1 boot.panic_on_fail loglevel=7 net.ifnames=0 init=/nix/store/6s71ag4g9kx14hql5snisc48a3l5yj3w-nixos-system-kube-19.09.1861.eb65d1dae62/init regInfo=/nix/store/zafnvn8vcyp713dmyk4qfs4961rp2ysz-closure-info/registration console=ttyS0
kube# [ 0.000000] x86/fpu: x87 FPU will use FXSAVE
kube# [ 0.000000] BIOS-provided physical RAM map:
kube# [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
kube# [ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
kube# [ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
kube# [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
kube# [ 0.000000] BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
kube# [ 0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
kube# [ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
kube# [ 0.000000] NX (Execute Disable) protection: active
kube# [ 0.000000] SMBIOS 2.8 present.
kube# [ 0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.org 04/01/2014
kube# [ 0.000000] Hypervisor detected: KVM
kube# [ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
kube# [ 0.000000] kvm-clock: cpu 0, msr 3555f001, primary cpu clock
kube# [ 0.000000] kvm-clock: using sched offset of 528232590 cycles
kube# [ 0.000001] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
kube# [ 0.000002] tsc: Detected 3499.998 MHz processor
kube# [ 0.000953] last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
kube# [ 0.000990] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
kube# [ 0.002695] found SMP MP-table at [mem 0x000f5980-0x000f598f]
kube# [ 0.002790] Scanning 1 areas for low memory corruption
kube# [ 0.002892] RAMDISK: [mem 0x7f63e000-0x7ffcffff]
kube# [ 0.002899] ACPI: Early table checksum verification disabled
kube# [ 0.002926] ACPI: RSDP 0x00000000000F5940 000014 (v00 BOCHS )
kube# [ 0.002928] ACPI: RSDT 0x000000007FFE152E 000030 (v01 BOCHS BXPCRSDT 00000001 BXPC 00000001)
kube# [ 0.002931] ACPI: FACP 0x000000007FFE1392 000074 (v01 BOCHS BXPCFACP 00000001 BXPC 00000001)
kube# [ 0.002933] ACPI: DSDT 0x000000007FFDFA80 001912 (v01 BOCHS BXPCDSDT 00000001 BXPC 00000001)
kube# [ 0.002935] ACPI: FACS 0x000000007FFDFA40 000040
kube# [ 0.002936] ACPI: APIC 0x000000007FFE1406 0000F0 (v01 BOCHS BXPCAPIC 00000001 BXPC 00000001)
kube# [ 0.002938] ACPI: HPET 0x000000007FFE14F6 000038 (v01 BOCHS BXPCHPET 00000001 BXPC 00000001)
kube# [ 0.003141] No NUMA configuration found
kube# [ 0.003142] Faking a node at [mem 0x0000000000000000-0x000000007ffdbfff]
kube# [ 0.003144] NODE_DATA(0) allocated [mem 0x7ffd8000-0x7ffdbfff]
kube# [ 0.003155] Zone ranges:
kube# [ 0.003156] DMA [mem 0x0000000000001000-0x0000000000ffffff]
kube# [ 0.003157] DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
kube# [ 0.003157] Normal empty
kube# [ 0.003158] Movable zone start for each node
kube# [ 0.003158] Early memory node ranges
kube# [ 0.003159] node 0: [mem 0x0000000000001000-0x000000000009efff]
kube# [ 0.003160] node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
kube# [ 0.003357] Reserved but unavailable: 98 pages
kube# [ 0.003358] Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
kube# [ 0.013385] ACPI: PM-Timer IO Port: 0x608
kube# [ 0.013397] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
kube# [ 0.013420] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
kube# [ 0.013422] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
kube# [ 0.013423] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
kube# [ 0.013424] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
kube# [ 0.013425] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
kube# [ 0.013425] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
kube# [ 0.013428] Using ACPI (MADT) for SMP configuration information
kube# [ 0.013430] ACPI: HPET id: 0x8086a201 base: 0xfed00000
kube# [ 0.013436] smpboot: Allowing 16 CPUs, 0 hotplug CPUs
kube# [ 0.013451] PM: Registered nosave memory: [mem 0x00000000-0x00000fff]
kube# [ 0.013452] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
kube# [ 0.013453] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
kube# [ 0.013453] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
kube# [ 0.013455] [mem 0x80000000-0xfeffbfff] available for PCI devices
kube# [ 0.013455] Booting paravirtualized kernel on KVM
kube# [ 0.013458] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
kube# [ 0.069852] random: get_random_bytes called from start_kernel+0x93/0x4ca with crng_init=0
kube# [ 0.069859] setup_percpu: NR_CPUS:384 nr_cpumask_bits:384 nr_cpu_ids:16 nr_node_ids:1
kube# [ 0.070368] percpu: Embedded 44 pages/cpu s142424 r8192 d29608 u262144
kube# [ 0.070391] KVM setup async PF for cpu 0
kube# [ 0.070395] kvm-stealtime: cpu 0, msr 7d016180
kube# [ 0.070400] Built 1 zonelists, mobility grouping on. Total pages: 515941
kube# [ 0.070400] Policy zone: DMA32
kube# [ 0.070402] Kernel command line: console=ttyS0 panic=1 boot.panic_on_fail loglevel=7 net.ifnames=0 init=/nix/store/6s71ag4g9kx14hql5snisc48a3l5yj3w-nixos-system-kube-19.09.1861.eb65d1dae62/init regInfo=/nix/store/zafnvn8vcyp713dmyk4qfs4961rp2ysz-closure-info/registration console=ttyS0
kube# [ 0.073909] Memory: 2028748K/2096616K available (10252K kernel code, 1140K rwdata, 1904K rodata, 1448K init, 764K bss, 67868K reserved, 0K cma-reserved)
kube# [ 0.074171] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
kube# [ 0.074175] ftrace: allocating 28577 entries in 112 pages
kube# [ 0.081229] rcu: Hierarchical RCU implementation.
kube# [ 0.081230] rcu: RCU event tracing is enabled.
kube# [ 0.081230] rcu: RCU restricting CPUs from NR_CPUS=384 to nr_cpu_ids=16.
kube# [ 0.081231] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
kube# [ 0.082728] NR_IRQS: 24832, nr_irqs: 552, preallocated irqs: 16
kube# [ 0.086589] Console: colour VGA+ 80x25
kube# [ 0.135619] console [ttyS0] enabled
kube# [ 0.135952] ACPI: Core revision 20180810
kube# [ 0.136468] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
kube# [ 0.137370] APIC: Switch to symmetric I/O mode setup
kube# [ 0.137938] x2apic enabled
kube# [ 0.138313] Switched APIC routing to physical x2apic.
kube# [ 0.139448] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
kube# [ 0.140018] tsc: Marking TSC unstable due to TSCs unsynchronized
kube# [ 0.140609] Calibrating delay loop (skipped) preset value.. 6999.99 BogoMIPS (lpj=3499998)
kube# [ 0.141603] pid_max: default: 32768 minimum: 301
kube# [ 0.142051] Security Framework initialized
kube# [ 0.142435] Yama: becoming mindful.
kube# [ 0.142616] AppArmor: AppArmor initialized
kube# [ 0.143310] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes)
kube# [ 0.143823] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes)
kube# [ 0.144608] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes)
kube# [ 0.145605] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes)
kube# [ 0.146469] Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
kube# [ 0.146603] Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
kube# [ 0.147603] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
kube# [ 0.148383] Spectre V2 : Mitigation: Full AMD retpoline
kube# [ 0.148602] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
kube# [ 0.149719] Freeing SMP alternatives memory: 28K
kube# [ 0.253586] smpboot: CPU0: AMD Common KVM processor (family: 0xf, model: 0x6, stepping: 0x1)
kube# [ 0.253601] Performance Events: AMD PMU driver.
kube# [ 0.253605] ... version: 0
kube# [ 0.253994] ... bit width: 48
kube# [ 0.254602] ... generic registers: 4
kube# [ 0.254978] ... value mask: 0000ffffffffffff
kube# [ 0.255474] ... max period: 00007fffffffffff
kube# [ 0.255602] ... fixed-purpose events: 0
kube# [ 0.255984] ... event mask: 000000000000000f
kube# [ 0.256640] rcu: Hierarchical SRCU implementation.
kube# [ 0.257690] smp: Bringing up secondary CPUs ...
kube# [ 0.258175] x86: Booting SMP configuration:
kube# [ 0.258574] .... node #0, CPUs: #1
kube# [ 0.055744] kvm-clock: cpu 1, msr 3555f041, secondary cpu clock
kube# [ 0.259362] KVM setup async PF for cpu 1
kube# [ 0.259515] kvm-stealtime: cpu 1, msr 7d056180
kube# [ 0.260650] #2
kube# [ 0.055744] kvm-clock: cpu 2, msr 3555f081, secondary cpu clock
kube# [ 0.261148] KVM setup async PF for cpu 2
kube# [ 0.261522] kvm-stealtime: cpu 2, msr 7d096180
kube# [ 0.262654] #3
kube# [ 0.055744] kvm-clock: cpu 3, msr 3555f0c1, secondary cpu clock
kube# [ 0.263132] KVM setup async PF for cpu 3
kube# [ 0.263510] kvm-stealtime: cpu 3, msr 7d0d6180
kube# [ 0.264631] #4
kube# [ 0.055744] kvm-clock: cpu 4, msr 3555f101, secondary cpu clock
kube# [ 0.265116] KVM setup async PF for cpu 4
kube# [ 0.265499] kvm-stealtime: cpu 4, msr 7d116180
kube# [ 0.266644] #5
kube# [ 0.055744] kvm-clock: cpu 5, msr 3555f141, secondary cpu clock
kube# [ 0.267122] KVM setup async PF for cpu 5
kube# [ 0.267509] kvm-stealtime: cpu 5, msr 7d156180
kube# [ 0.267644] #6
kube# [ 0.055744] kvm-clock: cpu 6, msr 3555f181, secondary cpu clock
kube# [ 0.269072] KVM setup async PF for cpu 6
kube# [ 0.269518] kvm-stealtime: cpu 6, msr 7d196180
kube# [ 0.269642] #7
kube# [ 0.055744] kvm-clock: cpu 7, msr 3555f1c1, secondary cpu clock
kube# [ 0.270975] KVM setup async PF for cpu 7
kube# [ 0.271513] kvm-stealtime: cpu 7, msr 7d1d6180
kube# [ 0.271643] #8
kube# [ 0.055744] kvm-clock: cpu 8, msr 3555f201, secondary cpu clock
kube# [ 0.272863] KVM setup async PF for cpu 8
kube# [ 0.273525] kvm-stealtime: cpu 8, msr 7d216180
kube# [ 0.273648] #9
kube# [ 0.055744] kvm-clock: cpu 9, msr 3555f241, secondary cpu clock
kube# [ 0.274774] KVM setup async PF for cpu 9
kube# [ 0.275516] kvm-stealtime: cpu 9, msr 7d256180
kube# [ 0.275643] #10
kube# [ 0.055744] kvm-clock: cpu 10, msr 3555f281, secondary cpu clock
kube# [ 0.276667] KVM setup async PF for cpu 10
kube# [ 0.277522] kvm-stealtime: cpu 10, msr 7d296180
kube# [ 0.277638] #11
kube# [ 0.055744] kvm-clock: cpu 11, msr 3555f2c1, secondary cpu clock
kube# [ 0.278122] KVM setup async PF for cpu 11
kube# [ 0.278539] kvm-stealtime: cpu 11, msr 7d2d6180
kube# [ 0.279643] #12
kube# [ 0.055744] kvm-clock: cpu 12, msr 3555f301, secondary cpu clock
kube# [ 0.280120] KVM setup async PF for cpu 12
kube# [ 0.280535] kvm-stealtime: cpu 12, msr 7d316180
kube# [ 0.281642] #13
kube# [ 0.055744] kvm-clock: cpu 13, msr 3555f341, secondary cpu clock
kube# [ 0.282119] KVM setup async PF for cpu 13
kube# [ 0.282536] kvm-stealtime: cpu 13, msr 7d356180
kube# [ 0.283642] #14
kube# [ 0.055744] kvm-clock: cpu 14, msr 3555f381, secondary cpu clock
kube# [ 0.284132] KVM setup async PF for cpu 14
kube# [ 0.284558] kvm-stealtime: cpu 14, msr 7d396180
kube# [ 0.285639] #15
kube# [ 0.055744] kvm-clock: cpu 15, msr 3555f3c1, secondary cpu clock
kube# [ 0.286114] KVM setup async PF for cpu 15
kube# [ 0.286533] kvm-stealtime: cpu 15, msr 7d3d6180
kube# [ 0.287606] smp: Brought up 1 node, 16 CPUs
kube# [ 0.288007] smpboot: Max logical packages: 16
kube# [ 0.288422] smpboot: Total of 16 processors activated (111999.93 BogoMIPS)
kube# [ 0.289830] devtmpfs: initialized
kube# [ 0.289984] x86/mm: Memory block size: 128MB
kube# [ 0.290734] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
kube# [ 0.291611] futex hash table entries: 4096 (order: 6, 262144 bytes)
kube# [ 0.292723] pinctrl core: initialized pinctrl subsystem
kube# [ 0.293394] NET: Registered protocol family 16
kube# [ 0.293643] audit: initializing netlink subsys (disabled)
kube# [ 0.294171] audit: type=2000 audit(1580088508.968:1): state=initialized audit_enabled=0 res=1
kube# [ 0.294636] cpuidle: using governor menu
kube# [ 0.295833] ACPI: bus type PCI registered
kube# [ 0.296262] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
kube# [ 0.296676] PCI: Using configuration type 1 for base access
kube# [ 0.298021] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
kube# [ 0.299851] ACPI: Added _OSI(Module Device)
kube# [ 0.300306] ACPI: Added _OSI(Processor Device)
kube# [ 0.300604] ACPI: Added _OSI(3.0 _SCP Extensions)
kube# [ 0.301097] ACPI: Added _OSI(Processor Aggregator Device)
kube# [ 0.301606] ACPI: Added _OSI(Linux-Dell-Video)
kube# [ 0.302079] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
kube# [ 0.302967] ACPI: 1 ACPI AML tables successfully acquired and loaded
kube# [ 0.304707] ACPI: Interpreter enabled
kube# [ 0.305107] ACPI: (supports S0 S3 S4 S5)
kube# [ 0.305528] ACPI: Using IOAPIC for interrupt routing
kube# [ 0.305612] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
kube# [ 0.306668] ACPI: Enabled 2 GPEs in block 00 to 0F
kube# [ 0.309038] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
kube# [ 0.309606] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
kube# [ 0.310319] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
kube# [ 0.310606] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
kube# [ 0.311662] acpiphp: Slot [3] registered
kube# [ 0.312057] acpiphp: Slot [4] registered
kube# [ 0.312621] acpiphp: Slot [5] registered
kube# [ 0.313013] acpiphp: Slot [6] registered
kube# [ 0.313626] acpiphp: Slot [7] registered
kube# [ 0.314051] acpiphp: Slot [8] registered
kube# [ 0.314632] acpiphp: Slot [9] registered
kube# [ 0.315092] acpiphp: Slot [10] registered
kube# [ 0.315533] acpiphp: Slot [11] registered
kube# [ 0.315623] acpiphp: Slot [12] registered
kube# [ 0.316026] acpiphp: Slot [13] registered
kube# [ 0.316620] acpiphp: Slot [14] registered
kube# [ 0.317012] acpiphp: Slot [15] registered
kube# [ 0.317415] acpiphp: Slot [16] registered
kube# [ 0.317620] acpiphp: Slot [17] registered
kube# [ 0.318022] acpiphp: Slot [18] registered
kube# [ 0.318622] acpiphp: Slot [19] registered
kube# [ 0.319014] acpiphp: Slot [20] registered
kube# [ 0.319411] acpiphp: Slot [21] registered
kube# [ 0.319620] acpiphp: Slot [22] registered
kube# [ 0.320015] acpiphp: Slot [23] registered
kube# [ 0.320619] acpiphp: Slot [24] registered
kube# [ 0.321010] acpiphp: Slot [25] registered
kube# [ 0.321406] acpiphp: Slot [26] registered
kube# [ 0.321619] acpiphp: Slot [27] registered
kube# [ 0.322010] acpiphp: Slot [28] registered
kube# [ 0.322621] acpiphp: Slot [29] registered
kube# [ 0.323012] acpiphp: Slot [30] registered
kube# [ 0.323414] acpiphp: Slot [31] registered
kube# [ 0.323610] PCI host bridge to bus 0000:00
kube# [ 0.324005] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
kube# [ 0.324603] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
kube# [ 0.325242] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
kube# [ 0.325603] pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
kube# [ 0.326603] pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
kube# [ 0.327603] pci_bus 0000:00: root bus resource [bus 00-ff]
kube# [ 0.331615] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
kube# [ 0.332290] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
kube# [ 0.332603] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
kube# [ 0.333602] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
kube# [ 0.337363] pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
kube# [ 0.337608] pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
kube# [ 0.405160] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
kube# [ 0.405660] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
kube# [ 0.406242] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
kube# [ 0.406657] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
kube# [ 0.407608] ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
kube# [ 0.408532] pci 0000:00:02.0: vgaarb: setting as boot VGA device
kube# [ 0.408532] pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
kube# [ 0.408604] pci 0000:00:02.0: vgaarb: bridge control possible
kube# [ 0.409143] vgaarb: loaded
kube# [ 0.409678] PCI: Using ACPI for IRQ routing
kube# [ 0.410204] NetLabel: Initializing
kube# [ 0.410603] NetLabel: domain hash size = 128
kube# [ 0.411009] NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO
kube# [ 0.411612] NetLabel: unlabeled traffic allowed by default
kube# [ 0.412269] HPET: 3 timers in total, 0 timers will be used for per-cpu timer
kube# [ 0.412620] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
kube# [ 0.413603] hpet0: 3 comparators, 64-bit 100.000000 MHz counter
kube# [ 0.419094] clocksource: Switched to clocksource kvm-clock
kube# [ 0.424056] VFS: Disk quotas dquot_6.6.0
kube# [ 0.424447] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
kube# [ 0.425157] AppArmor: AppArmor Filesystem Enabled
kube# [ 0.425611] pnp: PnP ACPI init
kube# [ 0.426094] pnp: PnP ACPI: found 6 devices
kube# [ 0.433054] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
kube# [ 0.433922] NET: Registered protocol family 2
kube# [ 0.434413] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes)
kube# [ 0.435164] TCP established hash table entries: 16384 (order: 5, 131072 bytes)
kube# [ 0.435856] TCP bind hash table entries: 16384 (order: 6, 262144 bytes)
kube# [ 0.436480] TCP: Hash tables configured (established 16384 bind 16384)
kube# [ 0.437116] UDP hash table entries: 1024 (order: 3, 32768 bytes)
kube# [ 0.437685] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes)
kube# [ 0.438324] NET: Registered protocol family 1
kube# [ 0.438758] pci 0000:00:01.0: PIIX3: Enabling Passive Release
kube# [ 0.439300] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
kube# [ 0.439862] pci 0000:00:01.0: Activating ISA DMA hang workarounds
kube# [ 0.449011] PCI Interrupt Link [LNKD] enabled at IRQ 11
kube# [ 0.458300] pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x6c3 took 17445 usecs
kube# [ 0.459029] pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
kube# [ 0.459874] Trying to unpack rootfs image as initramfs...
kube# [ 0.540367] Freeing initrd memory: 9800K
kube# [ 0.540860] Scanning for low memory corruption every 60 seconds
kube# [ 0.541822] Initialise system trusted keyrings
kube# [ 0.542303] workingset: timestamp_bits=40 max_order=19 bucket_order=0
kube# [ 0.543494] zbud: loaded
kube# [ 0.544553] Key type asymmetric registered
kube# [ 0.544960] Asymmetric key parser 'x509' registered
kube# [ 0.545418] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
kube# [ 0.546242] io scheduler noop registered
kube# [ 0.546641] io scheduler deadline registered
kube# [ 0.547065] io scheduler cfq registered (default)
kube# [ 0.547534] io scheduler mq-deadline registered
kube# [ 0.547977] io scheduler kyber registered
kube# [ 0.548879] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
kube# [ 0.572407] 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
kube# [ 0.575124] brd: module loaded
kube# [ 0.576033] mce: Using 10 MCE banks
kube# [ 0.576376] sched_clock: Marking stable (521276979, 54744486)->(578237396, -2215931)
kube# [ 0.577557] registered taskstats version 1
kube# [ 0.577957] Loading compiled-in X.509 certificates
kube# [ 0.578419] zswap: loaded using pool lzo/zbud
kube# [ 0.579223] AppArmor: AppArmor sha1 policy hashing enabled
kube# [ 0.581365] Freeing unused kernel image memory: 1448K
kube# [ 0.589610] Write protecting the kernel read-only data: 14336k
kube# [ 0.590619] Freeing unused kernel image memory: 2012K
kube# [ 0.591167] Freeing unused kernel image memory: 144K
kube# [ 0.591639] Run /init as init process
kube#
kube# <<< NixOS Stage 1 >>>
kube#
kube# loading module virtio_balloon...
kube# loading module virtio_console...
kube# loading module virtio_rng...
kube# loading module dm_mod...
kube# [ 0.617604] device-mapper: ioctl: 4.39.0-ioctl (2018-04-03) initialised: dm-devel@redhat.com
kube# running udev...
kube# [ 0.621092] systemd-udevd[181]: Starting version 243
kube# [ 0.621879] systemd-udevd[182]: Network interface NamePolicy= disabled on kernel command line, ignoring.
kube# [ 0.623070] systemd-udevd[182]: /nix/store/936zacvhbd3zy281ghpdbrngwxc9h89s-udev-rules/11-dm-lvm.rules:40 Invalid value for OPTIONS key, ignoring: 'event_timeout=180'
kube# [ 0.624476] systemd-udevd[182]: /nix/store/936zacvhbd3zy281ghpdbrngwxc9h89s-udev-rules/11-dm-lvm.rules:40 The line takes no effect, ignoring.
kube# [ 0.637167] rtc_cmos 00:00: RTC can wake from S4
kube# [ 0.638075] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
kube# [ 0.638205] rtc_cmos 00:00: registered as rtc0
kube# [ 0.639481] rtc_cmos 00:00: alarms up to one day, y3k, 114 bytes nvram, hpet irqs
kube# [ 0.639788] serio: i8042 KBD port at 0x60,0x64 irq 1
kube# [ 0.640797] serio: i8042 AUX port at 0x60,0x64 irq 12
kube# [ 0.643064] SCSI subsystem initialized
kube# [ 0.646318] PCI Interrupt Link [LNKC] enabled at IRQ 10
kube# [ 0.647201] ACPI: bus type USB registered
kube# [ 0.648080] usbcore: registered new interface driver usbfs
kube# [ 0.648678] usbcore: registered new interface driver hub
kube# [ 0.649317] usbcore: registered new device driver usb
kube# [ 0.650793] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
kube# [ 0.654534] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
kube# [ 0.656103] scsi host0: ata_piix
kube# [ 0.656598] scsi host1: ata_piix
kube# [ 0.656993] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1c0 irq 14
kube# [ 0.657670] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1c8 irq 15
kube# [ 0.658882] uhci_hcd: USB Universal Host Controller Interface driver
kube# [ 0.660683] random: fast init done
kube# [ 0.661123] random: crng init done
kube# [ 0.677629] uhci_hcd 0000:00:01.2: UHCI Host Controller
kube# [ 0.677673] PCI Interrupt Link [LNKA] enabled at IRQ 10
kube# [ 0.678163] uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
kube# [ 0.679891] uhci_hcd 0000:00:01.2: detected 2 ports
kube# [ 0.680416] uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c0c0
kube# [ 0.681031] usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 4.19
kube# [ 0.681851] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
kube# [ 0.682565] usb usb1: Product: UHCI Host Controller
kube# [ 0.683057] usb usb1: Manufacturer: Linux 4.19.95 uhci_hcd
kube# [ 0.683598] usb usb1: SerialNumber: 0000:00:01.2
kube# [ 0.684154] hub 1-0:1.0: USB hub found
kube# [ 0.684530] hub 1-0:1.0: 2 ports detected
kube# [ 0.688584] PCI Interrupt Link [LNKB] enabled at IRQ 11
kube# [ 0.744440] virtio_blk virtio8: [vda] 8388608 512-byte logical blocks (4.29 GB/4.00 GiB)
kube# [ 0.747194] 9pnet: Installing 9P2000 support
kube# [ 0.817413] ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
kube# [ 0.818833] scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
kube# [ 0.842097] sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
kube# [ 0.842836] cdrom: Uniform CD-ROM driver Revision: 3.20
kube# [ 1.010626] usb 1-1: new full-speed USB device number 2 using uhci_hcd
kube# kbd_mode: KDSKBMODE: Inappropriate ioctl for device
kube# starting device mapper and LVM...
kube# [ 1.105157] clocksource: Switched to clocksource acpi_pm
kube# mke2fs 1.45.3 (14-Jul-2019)
kube# Creating filesystem with 1048576 4k blocks and 262144 inodes
kube# Filesystem UUID: d43897e6-5bf2-4c23-afde-370827117dba
kube# Superblock backups stored on blocks:
kube# 32768, 98304, 163840, 229376, 294912, 819200, 884736
kube#
kube# Allocating group tables: 0/32 done
kube# Writing inode tables: 0/32 done
kube# Creating journal (16384 blocks): done
kube# Writing superblocks and filesystem accounting information: 0/32 done
kube#
kube# [ 1.179763] usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
kube# [ 1.180538] usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
kube# [ 1.181259] usb 1-1: Product: QEMU USB Tablet
kube# [ 1.181687] usb 1-1: Manufacturer: QEMU
kube# [ 1.182071] usb 1-1: SerialNumber: 28754-0000:00:01.2-1
kube# [ 1.189731] hidraw: raw HID events driver (C) Jiri Kosina
kube# [ 1.196004] usbcore: registered new interface driver usbhid
kube# [ 1.196571] usbhid: USB HID core driver
kube# [ 1.197996] input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
kube# [ 1.199276] hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
kube# checking /dev/vda...
kube# fsck (busybox 1.30.1)
kube# [fsck.ext4 (1) -- /mnt-root/] fsck.ext4 -a /dev/vda
kube# /dev/vda: clean, 11/262144 files, 36942/1048576 blocks
kube# mounting /dev/vda on /...
kube# [ 1.285377] EXT4-fs (vda): mounted filesystem with ordered data mode. Opts: (null)
kube# mounting store on /nix/.ro-store...
kube# [ 1.294160] FS-Cache: Loaded
kube# [ 1.297034] 9p: Installing v9fs 9p2000 file system support
kube# [ 1.297638] FS-Cache: Netfs '9p' registered for caching
kube# mounting tmpfs on /nix/.rw-store...
kube# mounting shared on /tmp/shared...
kube# mounting xchg on /tmp/xchg...
kube# mounting overlay filesystem on /nix/store...
kube#
kube# <<< NixOS Stage 2 >>>
kube#
kube# [ 1.437199] EXT4-fs (vda): re-mounted. Opts: (null)
kube# [ 1.438294] booting system configuration /nix/store/6s71ag4g9kx14hql5snisc48a3l5yj3w-nixos-system-kube-19.09.1861.eb65d1dae62
kube# running activation script...
kube# setting up /etc...
kube# starting systemd...
kube# [ 2.547581] systemd[1]: Inserted module 'autofs4'
kube# [ 2.571077] NET: Registered protocol family 10
kube# [ 2.571779] Segment Routing with IPv6
kube# [ 2.582717] systemd[1]: systemd 243 running in system mode. (+PAM +AUDIT -SELINUX +IMA +APPARMOR +SMACK -SYSVINIT +UTMP -LIBCRYPTSETUP +GCRYPT -GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID -ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
kube# [ 2.584868] systemd[1]: Detected virtualization kvm.
kube# [ 2.585374] systemd[1]: Detected architecture x86-64.
kube# [ 2.592470] systemd[1]: Set hostname to <kube>.
kube# [ 2.594496] systemd[1]: Initializing machine ID from random generator.
kube# [ 2.640117] systemd-fstab-generator[618]: Checking was requested for "store", but it is not a device.
kube# [ 2.643187] systemd-fstab-generator[618]: Checking was requested for "shared", but it is not a device.
kube# [ 2.644249] systemd-fstab-generator[618]: Checking was requested for "xchg", but it is not a device.
kube# [ 2.869138] systemd[1]: /nix/store/0vscs3kafrn5z3g1bwdgabsdnii8kszz-unit-cfssl.service/cfssl.service:16: StateDirectory= path is absolute, ignoring: /var/lib/cfssl
kube# [ 2.882928] systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
kube# [ 2.884665] systemd[1]: Created slice kubernetes.slice.
kube# [ 2.885985] systemd[1]: Created slice system-getty.slice.
kube# [ 2.886959] systemd[1]: Created slice User and Session Slice.
kube# [ 2.928587] EXT4-fs (vda): re-mounted. Opts: (null)
kube# [ 2.932562] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
kube# [ 2.947147] tun: Universal TUN/TAP device driver, 1.6
kube# [ 2.953621] loop: module loaded
kube# [ 2.958159] Bridge firewalling registered
kube# [ 3.142633] audit: type=1325 audit(1580088511.037:2): table=filter family=2 entries=12
kube# [ 3.156695] audit: type=1325 audit(1580088511.045:3): table=filter family=10 entries=12
kube# [ 3.157580] audit: type=1300 audit(1580088511.045:3): arch=c000003e syscall=54 success=yes exit=0 a0=4 a1=29 a2=40 a3=e1bfa0 items=0 ppid=641 pid=679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/nix/store/vvc9a2w2y1fg4xzf1rpxa8jwv5d4amh6-iptables-1.8.3/bin/xtables-legacy-multi" subj==unconfined key=(null)
kube# [ 3.161047] audit: type=1327 audit(1580088511.045:3): proctitle=6970367461626C6573002D77002D41006E69786F732D66772D6C6F672D726566757365002D7000746370002D2D73796E002D6A004C4F47002D2D6C6F672D6C6576656C00696E666F002D2D6C6F672D707265666978007265667573656420636F6E6E656374696F6E3A20
kube# [ 3.176106] audit: type=1325 audit(1580088511.071:4): table=filter family=2 entries=13
kube# [ 3.177017] audit: type=1300 audit(1580088511.071:4): arch=c000003e syscall=54 success=yes exit=0 a0=4 a1=0 a2=40 a3=990850 items=0 ppid=641 pid=681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/nix/store/vvc9a2w2y1fg4xzf1rpxa8jwv5d4amh6-iptables-1.8.3/bin/xtables-legacy-multi" subj==unconfined key=(null)
kube# [ 3.180472] audit: type=1327 audit(1580088511.071:4): proctitle=69707461626C6573002D77002D41006E69786F732D66772D6C6F672D726566757365002D6D00706B74747970650000002D2D706B742D7479706500756E6963617374002D6A006E69786F732D66772D726566757365
kube# [ 3.184337] audit: type=1130 audit(1580088511.079:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj==unconfined msg='unit=systemd-journald comm="systemd" exe="/nix/store/lqhv9pl3cp8vcgfq0w2ms5l3pg7a6ga3-systemd-243.3/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
kube# [ 3.133226] systemd-modules-load[630]: Failed to find module 'gcov-proc'
kube# [ 3.134610] systemd-modules-load[630]: Inserted module 'bridge'
kube# [ 3.135753] systemd-modules-load[630]: Inserted module 'macvlan'
kube# [ 3.136724] systemd-modules-load[630]: Inserted module 'tap'
kube# [ 3.137762] systemd-modules-load[630]: Inserted module 'tun'
kube# [ 3.193242] audit: type=1325 audit(1580088511.088:6): table=filter family=10 entries=13
kube# [ 3.138976] systemd-modules-load[630]: Inserted module 'loop'
kube# [ 3.141021] systemd-modules-load[630]: Inserted module 'br_netfilter'
kube# [ 3.142369] systemd-udevd[638]: Network interface NamePolicy= disabled on kernel command line, ignoring.
kube# [ 3.143898] systemd-udevd[638]: /nix/store/8w316wmy13r2yblac0lj188704pyimxp-udev-rules/11-dm-lvm.rules:40 Invalid value for OPTIONS key, ignoring: 'event_timeout=180'
kube# [ 3.145685] systemd-udevd[638]: /nix/store/8w316wmy13r2yblac0lj188704pyimxp-udev-rules/11-dm-lvm.rules:40 The line takes no effect, ignoring.
kube# [ 3.147427] systemd[1]: Starting Flush Journal to Persistent Storage...
kube# [ 3.213400] systemd-journald[629]: Received client request to flush runtime journal.
kube# [ 3.202937] systemd[1]: Started udev Kernel Device Manager.
kube# [ 3.204847] systemd[1]: Started Flush Journal to Persistent Storage.
kube# [ 3.206624] systemd[1]: Starting Create Volatile Files and Directories...
kube# [ 3.263770] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
kube# [ 3.266740] ACPI: Power Button [PWRF]
kube# [ 3.275093] Floppy drive(s): fd0 is 2.88M AMI BIOS
kube# [ 3.277260] parport_pc 00:04: reported by Plug and Play ACPI
kube# [ 3.279099] parport0: PC-style at 0x378, irq 7 [PCSPP(,...)]
kube# [ 3.229308] systemd[1]: Started Create Volatile Files and Directories.
kube# [ 3.230852] systemd[1]: Starting Rebuild Journal Catalog...
kube# [ 3.232509] systemd[1]: Starting Update UTMP about System Boot/Shutdown...
kube# [ 3.289649] FDC 0 is a S82078B
kube# [ 3.295835] Linux agpgart interface v0.103
kube# [ 3.251656] systemd[1]: Started Update UTMP about System Boot/Shutdown.
kube# [ 3.264816] systemd[1]: Started Rebuild Journal Catalog.
kube# [ 3.266681] systemd[1]: Starting Update is Completed...
kube# [ 3.279117] systemd[1]: Started Update is Completed.
kube# [ 3.441962] mousedev: PS/2 mouse device common for all mice
kube# [ 3.448642] piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
kube# [ 3.402736] systemd-udevd[712]: Using default interface naming scheme 'v243'.
kube# [ 3.405647] systemd-udevd[712]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
kube# [ 3.436615] systemd-udevd[711]: Using default interface naming scheme 'v243'.
kube# [ 3.438158] systemd-udevd[711]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
kube# [ 3.460590] systemd[1]: Found device Virtio network device.
kube# [ 3.518259] [drm] Found bochs VGA, ID 0xb0c0.
kube# [ 3.519419] [drm] Framebuffer size 16384 kB @ 0xfd000000, mmio @ 0xfebd0000.
kube# [ 3.521301] [TTM] Zone kernel: Available graphics memory: 1021090 kiB
kube# [ 3.522601] [TTM] Initializing pool allocator
kube# [ 3.523343] [TTM] Initializing DMA pool allocator
kube# [ 3.497502] systemd[1]: Found device /dev/ttyS0.
kube# [ 3.569437] fbcon: bochsdrmfb (fb0) is primary device
kube# [ 3.655856] Console: switching to colour frame buffer device 128x48
kube# [ 3.740854] bochs-drm 0000:00:02.0: fb0: bochsdrmfb frame buffer device
kube# [ 3.749629] [drm] Initialized bochs-drm 1.0.0 20130925 for 0000:00:02.0 on minor 0
kube# [ 3.750755] powernow_k8: Power state transitions not supported
kube# [ 3.751693] powernow_k8: Power state transitions not supported
kube# [ 3.752339] powernow_k8: Power state transitions not supported
kube# [ 3.752958] powernow_k8: Power state transitions not supported
kube# [ 3.753579] powernow_k8: Power state transitions not supported
kube# [ 3.754497] powernow_k8: Power state transitions not supported
kube# [ 3.755268] powernow_k8: Power state transitions not supported
kube# [ 3.755892] powernow_k8: Power state transitions not supported
kube# [ 3.756484] powernow_k8: Power state transitions not supported
kube# [ 3.757119] powernow_k8: Power state transitions not supported
kube# [ 3.757784] powernow_k8: Power state transitions not supported
kube# [ 3.758390] powernow_k8: Power state transitions not supported
kube# [ 3.759122] powernow_k8: Power state transitions not supported
kube# [ 3.759750] powernow_k8: Power state transitions not supported
kube# [ 3.760355] powernow_k8: Power state transitions not supported
kube# [ 3.761012] powernow_k8: Power state transitions not supported
kube# [ 3.805748] powernow_k8: Power state transitions not supported
kube# [ 3.806424] powernow_k8: Power state transitions not supported
kube# [ 3.807046] powernow_k8: Power state transitions not supported
kube# [ 3.807686] powernow_k8: Power state transitions not supported
kube# [ 3.808331] powernow_k8: Power state transitions not supported
kube# [ 3.809034] powernow_k8: Power state transitions not supported
kube# [ 3.809814] powernow_k8: Power state transitions not supported
kube# [ 3.810442] powernow_k8: Power state transitions not supported
kube# [ 3.811062] powernow_k8: Power state transitions not supported
kube# [ 3.811721] powernow_k8: Power state transitions not supported
kube# [ 3.812444] powernow_k8: Power state transitions not supported
kube# [ 3.813081] powernow_k8: Power state transitions not supported
kube# [ 3.813703] powernow_k8: Power state transitions not supported
kube# [ 3.814697] powernow_k8: Power state transitions not supported
kube# [ 3.815337] powernow_k8: Power state transitions not supported
kube# [ 3.815973] powernow_k8: Power state transitions not supported
kube# [ 3.817493] EDAC MC: Ver: 3.0.0
kube# [ 3.819987] MCE: In-kernel MCE decoding enabled.
kube# [ 3.851964] powernow_k8: Power state transitions not supported
kube# [ 3.852672] powernow_k8: Power state transitions not supported
kube# [ 3.853564] powernow_k8: Power state transitions not supported
kube# [ 3.854246] powernow_k8: Power state transitions not supported
kube# [ 3.854996] powernow_k8: Power state transitions not supported
kube# [ 3.855955] powernow_k8: Power state transitions not supported
kube# [ 3.857011] powernow_k8: Power state transitions not supported
kube# [ 3.858049] powernow_k8: Power state transitions not supported
kube# [ 3.858863] powernow_k8: Power state transitions not supported
kube# [ 3.859784] powernow_k8: Power state transitions not supported
kube# [ 3.860667] powernow_k8: Power state transitions not supported
kube# [ 3.861746] powernow_k8: Power state transitions not supported
kube# [ 3.863125] powernow_k8: Power state transitions not supported
kube# [ 3.864194] powernow_k8: Power state transitions not supported
kube# [ 3.865037] powernow_k8: Power state transitions not supported
kube# [ 3.865661] powernow_k8: Power state transitions not supported
kube# [ 3.812488] systemd[1]: Started Firewall.
kube# [ 3.910085] powernow_k8: Power state transitions not supported
kube# [ 3.910708] powernow_k8: Power state transitions not supported
kube# [ 3.911328] powernow_k8: Power state transitions not supported
kube# [ 3.911961] powernow_k8: Power state transitions not supported
kube# [ 3.912561] powernow_k8: Power state transitions not supported
kube# [ 3.913185] powernow_k8: Power state transitions not supported
kube# [ 3.913826] powernow_k8: Power state transitions not supported
kube# [ 3.914469] powernow_k8: Power state transitions not supported
kube# [ 3.915106] powernow_k8: Power state transitions not supported
kube# [ 3.915788] powernow_k8: Power state transitions not supported
kube# [ 3.916479] powernow_k8: Power state transitions not supported
kube# [ 3.917097] powernow_k8: Power state transitions not supported
kube# [ 3.917738] powernow_k8: Power state transitions not supported
kube# [ 3.918430] powernow_k8: Power state transitions not supported
kube# [ 3.919042] powernow_k8: Power state transitions not supported
kube# [ 3.919806] powernow_k8: Power state transitions not supported
kube# [ 3.961012] powernow_k8: Power state transitions not supported
kube# [ 3.961680] powernow_k8: Power state transitions not supported
kube# [ 3.962398] powernow_k8: Power state transitions not supported
kube# [ 3.963118] powernow_k8: Power state transitions not supported
kube# [ 3.963734] powernow_k8: Power state transitions not supported
kube# [ 3.964433] powernow_k8: Power state transitions not supported
kube# [ 3.965119] powernow_k8: Power state transitions not supported
kube# [ 3.965812] powernow_k8: Power state transitions not supported
kube# [ 3.966422] powernow_k8: Power state transitions not supported
kube# [ 3.967051] powernow_k8: Power state transitions not supported
kube# [ 3.967663] powernow_k8: Power state transitions not supported
kube# [ 3.968294] powernow_k8: Power state transitions not supported
kube# [ 3.969040] powernow_k8: Power state transitions not supported
kube# [ 3.969867] powernow_k8: Power state transitions not supported
kube# [ 3.970487] powernow_k8: Power state transitions not supported
kube# [ 3.971262] powernow_k8: Power state transitions not supported
kube# [ 4.018123] powernow_k8: Power state transitions not supported
kube# [ 4.018763] powernow_k8: Power state transitions not supported
kube# [ 4.019388] powernow_k8: Power state transitions not supported
kube# [ 4.020107] powernow_k8: Power state transitions not supported
kube# [ 4.020750] powernow_k8: Power state transitions not supported
kube# [ 4.021367] powernow_k8: Power state transitions not supported
kube# [ 4.021994] powernow_k8: Power state transitions not supported
kube# [ 4.022626] powernow_k8: Power state transitions not supported
kube# [ 4.023233] powernow_k8: Power state transitions not supported
kube# [ 4.023861] powernow_k8: Power state transitions not supported
kube# [ 4.024511] powernow_k8: Power state transitions not supported
kube# [ 4.025228] powernow_k8: Power state transitions not supported
kube# [ 4.025889] powernow_k8: Power state transitions not supported
kube# [ 4.026529] powernow_k8: Power state transitions not supported
kube# [ 4.027154] powernow_k8: Power state transitions not supported
kube# [ 4.027924] powernow_k8: Power state transitions not supported
kube# [ 4.066237] powernow_k8: Power state transitions not supported
kube# [ 4.066849] powernow_k8: Power state transitions not supported
kube# [ 4.067459] powernow_k8: Power state transitions not supported
kube# [ 4.068091] powernow_k8: Power state transitions not supported
kube# [ 4.068765] powernow_k8: Power state transitions not supported
kube# [ 4.069379] powernow_k8: Power state transitions not supported
kube# [ 4.069998] powernow_k8: Power state transitions not supported
kube# [ 4.070671] powernow_k8: Power state transitions not supported
kube# [ 4.071269] powernow_k8: Power state transitions not supported
kube# [ 4.071907] powernow_k8: Power state transitions not supported
kube# [ 4.072515] powernow_k8: Power state transitions not supported
kube# [ 4.073155] powernow_k8: Power state transitions not supported
kube# [ 4.073752] powernow_k8: Power state transitions not supported
kube# [ 4.074353] powernow_k8: Power state transitions not supported
kube# [ 4.074983] powernow_k8: Power state transitions not supported
kube# [ 4.075750] powernow_k8: Power state transitions not supported
kube# [ 4.125978] powernow_k8: Power state transitions not supported
kube# [ 4.126831] powernow_k8: Power state transitions not supported
kube# [ 4.127424] powernow_k8: Power state transitions not supported
kube# [ 4.128055] powernow_k8: Power state transitions not supported
kube# [ 4.128799] powernow_k8: Power state transitions not supported
kube# [ 4.129403] powernow_k8: Power state transitions not supported
kube# [ 4.130032] powernow_k8: Power state transitions not supported
kube# [ 4.130724] powernow_k8: Power state transitions not supported
kube# [ 4.131326] powernow_k8: Power state transitions not supported
kube# [ 4.131950] powernow_k8: Power state transitions not supported
kube# [ 4.132542] powernow_k8: Power state transitions not supported
kube# [ 4.133185] powernow_k8: Power state transitions not supported
kube# [ 4.133806] powernow_k8: Power state transitions not supported
kube# [ 4.134488] powernow_k8: Power state transitions not supported
kube# [ 4.135110] powernow_k8: Power state transitions not supported
kube# [ 4.135880] powernow_k8: Power state transitions not supported
kube# [ 4.166284] powernow_k8: Power state transitions not supported
kube# [ 4.166894] powernow_k8: Power state transitions not supported
kube# [ 4.167509] powernow_k8: Power state transitions not supported
kube# [ 4.168154] powernow_k8: Power state transitions not supported
kube# [ 4.168773] powernow_k8: Power state transitions not supported
kube# [ 4.169385] powernow_k8: Power state transitions not supported
kube# [ 4.170002] powernow_k8: Power state transitions not supported
kube# [ 4.170675] powernow_k8: Power state transitions not supported
kube# [ 4.171274] powernow_k8: Power state transitions not supported
kube# [ 4.171882] powernow_k8: Power state transitions not supported
kube# [ 4.172486] powernow_k8: Power state transitions not supported
kube# [ 4.173232] powernow_k8: Power state transitions not supported
kube# [ 4.173854] powernow_k8: Power state transitions not supported
kube# [ 4.174536] powernow_k8: Power state transitions not supported
kube# [ 4.175173] powernow_k8: Power state transitions not supported
kube# [ 4.175913] powernow_k8: Power state transitions not supported
kube# [ 4.202011] powernow_k8: Power state transitions not supported
kube# [ 4.202683] powernow_k8: Power state transitions not supported
kube# [ 4.203293] powernow_k8: Power state transitions not supported
kube# [ 4.203902] powernow_k8: Power state transitions not supported
kube# [ 4.204638] powernow_k8: Power state transitions not supported
kube# [ 4.205464] powernow_k8: Power state transitions not supported
kube# [ 4.206346] powernow_k8: Power state transitions not supported
kube# [ 4.207184] powernow_k8: Power state transitions not supported
kube# [ 4.208020] powernow_k8: Power state transitions not supported
kube# [ 4.149983] systemd-udevd[698]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
kube# [ 4.208754] powernow_k8: Power state transitions not supported
kube# [ 4.209547] powernow_k8: Power state transitions not supported
kube# [ 4.210261] powernow_k8: Power state transitions not supported
kube# [ 4.210979] powernow_k8: Power state transitions not supported
kube# [ 4.211781] powernow_k8: Power state transitions not supported
kube# [ 4.212476] powernow_k8: Power state transitions not supported
kube# [ 4.213313] powernow_k8: Power state transitions not supported
kube# [ 4.161000] systemd[1]: Found device /dev/hvc0.
kube# [ 4.240443] powernow_k8: Power state transitions not supported
kube# [ 4.241441] powernow_k8: Power state transitions not supported
kube# [ 4.242525] powernow_k8: Power state transitions not supported
kube# [ 4.243183] powernow_k8: Power state transitions not supported
kube# [ 4.244012] powernow_k8: Power state transitions not supported
kube# [ 4.244878] powernow_k8: Power state transitions not supported
kube# [ 4.245765] powernow_k8: Power state transitions not supported
kube# [ 4.246404] powernow_k8: Power state transitions not supported
kube# [ 4.247126] powernow_k8: Power state transitions not supported
kube# [ 4.247679] ppdev: user-space parallel port driver
kube# [ 4.247954] powernow_k8: Power state transitions not supported
kube# [ 4.249125] powernow_k8: Power state transitions not supported
kube# [ 4.250216] powernow_k8: Power state transitions not supported
kube# [ 4.250241] powernow_k8: Power state transitions not supported
kube# [ 4.251738] powernow_k8: Power state transitions not supported
kube# [ 4.251762] powernow_k8: Power state transitions not supported
kube# [ 4.253186] powernow_k8: Power state transitions not supported
kube# [ 4.204951] udevadm[640]: systemd-udev-settle.service is deprecated.
kube# [ 4.282532] powernow_k8: Power state transitions not supported
kube# [ 4.283547] powernow_k8: Power state transitions not supported
kube# [ 4.284700] powernow_k8: Power state transitions not supported
kube# [ 4.285381] powernow_k8: Power state transitions not supported
kube# [ 4.286636] powernow_k8: Power state transitions not supported
kube# [ 4.287724] powernow_k8: Power state transitions not supported
kube# [ 4.289118] powernow_k8: Power state transitions not supported
kube# [ 4.290237] powernow_k8: Power state transitions not supported
kube# [ 4.291224] powernow_k8: Power state transitions not supported
kube# [ 4.292287] powernow_k8: Power state transitions not supported
kube# [ 4.293503] powernow_k8: Power state transitions not supported
kube# [ 4.294246] powernow_k8: Power state transitions not supported
kube# [ 4.295262] powernow_k8: Power state transitions not supported
kube# [ 4.296254] powernow_k8: Power state transitions not supported
kube# [ 4.297568] powernow_k8: Power state transitions not supported
kube# [ 4.298765] powernow_k8: Power state transitions not supported
kube# [ 4.385692] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
kube# [ 4.487052] powernow_k8: Power state transitions not supported
kube# [ 4.488122] powernow_k8: Power state transitions not supported
kube# [ 4.489389] powernow_k8: Power state transitions not supported
kube# [ 4.490114] powernow_k8: Power state transitions not supported
kube# [ 4.491234] powernow_k8: Power state transitions not supported
kube# [ 4.492392] powernow_k8: Power state transitions not supported
kube# [ 4.493518 .. 4.609854] powernow_k8: Power state transitions not supported (same message repeated 58 more times over this interval)
kube# [ 4.603607] systemd[1]: Started udev Wait for Complete Device Initialization.
kube# [ 4.604679] systemd[1]: Reached target System Initialization.
kube# [ 4.605420] systemd[1]: Started Daily Cleanup of Temporary Directories.
kube# [ 4.606209] systemd[1]: Reached target Timers.
kube# [ 4.606795] systemd[1]: Listening on D-Bus System Message Bus Socket.
kube# [ 4.608621] systemd[1]: Starting Docker Socket for the API.
kube# [ 4.609633] systemd[1]: Listening on Nix Daemon Socket.
kube# [ 4.610819] systemd[1]: Listening on Docker Socket for the API.
kube# [ 4.611522] systemd[1]: Reached target Sockets.
kube# [ 4.612064] systemd[1]: Reached target Basic System.
kube# [ 4.612789] systemd[1]: Starting Kernel Auditing...
kube# [ 4.613771] systemd[1]: Started backdoor.service.
kube# [ 4.614820] systemd[1]: Starting DHCP Client...
kube# [ 4.616149] systemd[1]: Started Kubernetes certmgr bootstrapper.
kube# [ 4.617941] systemd[1]: Starting Name Service Cache Daemon...
kube# [ 4.619562] systemd[1]: Starting resolvconf update...
kube# connecting to host...
kube# [ 4.630352] nscd[793]: 793 monitoring file `/etc/passwd` (1)
kube# [ 4.630736] nscd[793]: 793 monitoring directory `/etc` (2)
kube# [ 4.631260] nscd[793]: 793 monitoring file `/etc/group` (3)
kube# [ 4.631524] nscd[793]: 793 monitoring directory `/etc` (2)
kube# [ 4.632575] nscd[793]: 793 monitoring file `/etc/hosts` (4)
kube# [ 4.632839] nscd[793]: 793 monitoring directory `/etc` (2)
kube# [ 4.633389] s2mx5lspihy60gwff025p4dbs08yzgyi-unit-script-kube-certmgr-bootstrap-start[788]: touch: cannot touch '/var/lib/kubernetes/secrets/apitoken.secret': No such file or directory
kube# [ 4.633724] nscd[793]: 793 disabled inotify-based monitoring for file `/etc/resolv.conf': No such file or directory
kube# [ 4.634209] nscd[793]: 793 stat failed for file `/etc/resolv.conf'; will try again later: No such file or directory
kube# [ 4.637206] nscd[793]: 793 monitoring file `/etc/services` (5)
kube# [ 4.637470] nscd[793]: 793 monitoring directory `/etc` (2)
kube# [ 4.637786] nscd[793]: 793 disabled inotify-based monitoring for file `/etc/netgroup': No such file or directory
kube# [ 4.638162] nscd[793]: 793 stat failed for file `/etc/netgroup'; will try again later: No such file or directory
kube# [ 4.641386] s2mx5lspihy60gwff025p4dbs08yzgyi-unit-script-kube-certmgr-bootstrap-start[788]: /nix/store/s2mx5lspihy60gwff025p4dbs08yzgyi-unit-script-kube-certmgr-bootstrap-start: line 16: /var/lib/kubernetes/secrets/ca.pem: No such file or directory
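The two failures above (`touch: cannot touch '/var/lib/kubernetes/secrets/apitoken.secret'` and the `ca.pem` redirect on line 16 of the bootstrap script) both suggest the parent directory `/var/lib/kubernetes/secrets` does not exist when kube-certmgr-bootstrap first runs. A minimal sketch of a guard, assuming that diagnosis — not the actual unit script; `SECRETS_DIR` and the `./secrets-demo` fallback are stand-ins for illustration:

```shell
# Hedged sketch: ensure the secrets directory exists before the bootstrap
# script tries to touch/write files inside it. SECRETS_DIR stands in for
# /var/lib/kubernetes/secrets so the sketch can run unprivileged.
secrets_dir="${SECRETS_DIR:-./secrets-demo}"
mkdir -p "$secrets_dir"
touch "$secrets_dir/apitoken.secret"
echo "bootstrap target ready: $secrets_dir/apitoken.secret"
```

In the real module this would belong before the first `touch`, or as a systemd `StateDirectory=`/tmpfiles rule, so later writes like `> .../ca.pem` stop failing with ENOENT.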
kube# [ 4.644082] 3j5xawpr21sl93gg17ng2xhw943msvhn-audit-disable[785]: No rules
kube# [ 4.647370] dhcpcd[787]: dev: loaded udev
kube# [ 4.650146] systemd[1]: Started Kernel Auditing.
kube# [ 4.712807] 8021q: 802.1Q VLAN Support v1.8
kube# [ 4.669202] systemd[1]: Started D-Bus System Message Bus.
kube# [ 4.683783] s2mx5lspihy60gwff025p4dbs08yzgyi-unit-script-kube-certmgr-bootstrap-start[788]: % Total % Received % Xferd Average Speed Time Time Time Current
kube# [ 4.684008] s2mx5lspihy60gwff025p4dbs08yzgyi-unit-script-kube-certmgr-bootstrap-start[788]: Dload Upload Total Spent Left Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (7) Couldn't connect to server
kube: connected to guest root shell
kube# [ 4.753364] cfg80211: Loading compiled-in X.509 certificates for regulatory database
kube# sh: cannot set terminal process group (-1): Inappropriate ioctl for device
kube# sh: no job control in this shell
kube# [ 4.767990] cfg80211: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
kube: (connecting took 5.32 seconds)
(5.32 seconds)
kube# [ 4.770270] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
kube# [ 4.771798] cfg80211: failed to load regulatory.db
kube# [ 4.716208] dbus-daemon[820]: dbus[820]: Unknown username "systemd-timesync" in message bus configuration file
kube# [ 4.755330] systemd[1]: kube-certmgr-bootstrap.service: Main process exited, code=exited, status=1/FAILURE
kube# [ 4.755636] systemd[1]: kube-certmgr-bootstrap.service: Failed with result 'exit-code'.
kube# [ 4.760581] systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
kube# [ 4.766536] systemd[1]: nscd.service: Succeeded.
kube# [ 4.766972] systemd[1]: Stopped Name Service Cache Daemon.
kube# [ 4.768600] systemd[1]: Starting Name Service Cache Daemon...
kube# [ 4.774434] systemd[1]: Started resolvconf update.
kube# [ 4.774761] systemd[1]: Reached target Network (Pre).
kube# [ 4.776214] systemd[1]: Starting Address configuration of eth1...
kube# [ 4.777301] nscd[855]: 855 monitoring file `/etc/passwd` (1)
kube# [ 4.777566] nscd[855]: 855 monitoring directory `/etc` (2)
kube# [ 4.778079] systemd[1]: Starting Link configuration of eth1...
kube# [ 4.778546] nscd[855]: 855 monitoring file `/etc/group` (3)
kube# [ 4.779021] nscd[855]: 855 monitoring directory `/etc` (2)
kube# [ 4.779256] nscd[855]: 855 monitoring file `/etc/hosts` (4)
kube# [ 4.779577] nscd[855]: 855 monitoring directory `/etc` (2)
kube# [ 4.779935] nscd[855]: 855 monitoring file `/etc/resolv.conf` (5)
kube# [ 4.780295] nscd[855]: 855 monitoring directory `/etc` (2)
kube# [ 4.780561] nscd[855]: 855 monitoring file `/etc/services` (6)
kube# [ 4.780935] nscd[855]: 855 monitoring directory `/etc` (2)
kube# [ 4.782418] nscd[855]: 855 disabled inotify-based monitoring for file `/etc/netgroup': No such file or directory
kube# [ 4.782657] nscd[855]: 855 stat failed for file `/etc/netgroup'; will try again later: No such file or directory
kube# [ 4.789239] systemd[1]: Started Name Service Cache Daemon.
kube# [ 4.789455] systemd[1]: Reached target Host and Network Name Lookups.
kube# [ 4.790068] hyzgkj4862kyjdfrp1qq8vmmrm85zlm6-unit-script-network-link-eth1-start[857]: Configuring link...
kube# [ 4.790425] systemd[1]: Reached target User and Group Name Lookups.
kube# [ 4.792495] systemd[1]: Starting Login Service...
kube# [ 4.805087] mn1g2a6qvkb8wddqmf7bgnb00q634fh2-unit-script-network-addresses-eth1-start[856]: adding address 192.168.1.1/24... done
kube# [ 4.861123] 8021q: adding VLAN 0 to HW filter on device eth1
kube# [ 4.808215] hyzgkj4862kyjdfrp1qq8vmmrm85zlm6-unit-script-network-link-eth1-start[857]: bringing up interface... done
kube# [ 4.810264] systemd[1]: Started Link configuration of eth1.
kube# [ 4.810593] systemd[1]: Reached target All Network Interfaces (deprecated).
kube# [ 4.816061] systemd[1]: Started Address configuration of eth1.
kube# [ 4.817167] systemd[1]: Starting Networking Setup...
kube# [ 4.864730] nscd[855]: 855 monitored file `/etc/resolv.conf` was written to
kube# [ 4.875312] systemd[1]: Stopping Name Service Cache Daemon...
kube# [ 4.886059] systemd[1]: Started Networking Setup.
kube# [ 4.887303] systemd[1]: Starting Extra networking commands....
kube# [ 4.889726] systemd[1]: nscd.service: Succeeded.
kube# [ 4.891120] systemd[1]: Stopped Name Service Cache Daemon.
kube# [ 4.892480] systemd[1]: Starting Name Service Cache Daemon...
kube# [ 4.894073] systemd[1]: Started Extra networking commands..
kube# [ 4.895475] systemd[1]: Reached target Network.
kube# [ 4.897045] systemd[1]: Starting CFSSL CA API server...
kube# [ 4.899375] systemd[1]: Starting etcd key-value store...
kube# [ 4.902498] nscd[929]: 929 monitoring file `/etc/passwd` (1)
kube# [ 4.905727] systemd[1]: Started Kubernetes APIServer Service.
kube# [ 4.910218] nscd[929]: 929 monitoring directory `/etc` (2)
kube# [ 4.913075] systemd[1]: Starting Kubernetes addon manager...
kube# [ 4.917111] nscd[929]: 929 monitoring file `/etc/group` (3)
kube# [ 4.921134] systemd[1]: Started Kubernetes Controller Manager Service.
kube# [ 4.924352] nscd[929]: 929 monitoring directory `/etc` (2)
kube# [ 4.927099] systemd[1]: Started Kubernetes Proxy Service.
kube# [ 4.930352] nscd[929]: 929 monitoring file `/etc/hosts` (4)
kube# [ 4.932714] systemd[1]: Started Kubernetes Scheduler Service.
kube# [ 4.936066] nscd[929]: 929 monitoring directory `/etc` (2)
kube# [ 4.938336] systemd[1]: Starting Permit User Sessions...
kube# [ 4.941290] nscd[929]: 929 monitoring file `/etc/resolv.conf` (5)
kube# [ 4.943684] systemd[1]: Started Name Service Cache Daemon.
kube# [ 4.946808] nscd[929]: 929 monitoring directory `/etc` (2)
kube# [ 4.949388] systemd[1]: Started Permit User Sessions.
kube# [ 4.952429] nscd[929]: 929 monitoring file `/etc/services` (6)
kube# [ 4.954811] systemd[1]: Started Getty on tty1.
kube# [ 4.957415] nscd[929]: 929 monitoring directory `/etc` (2)
kube# [ 4.959684] systemd[1]: Reached target Login Prompts.
kube# [ 4.962065] nscd[929]: 929 disabled inotify-based monitoring for file `/etc/netgroup': No such file or directory
kube# [ 4.964403] nscd[929]: 929 stat failed for file `/etc/netgroup'; will try again later: No such file or directory
kube# [ 5.021791] systemd[870]: systemd-logind.service: Executable /sbin/modprobe missing, skipping: No such file or directory
kube# [ 5.188402] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[926]: 2020/01/27 01:28:33 [INFO] generate received request
kube# [ 5.191120] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[926]: 2020/01/27 01:28:33 [INFO] received CSR
kube# [ 5.193421] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[926]: 2020/01/27 01:28:33 [INFO] generating key: rsa-2048
kube# [ 5.280408] Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube# systemd-logind[962]: New seat seat0.
kube# [ 5.287362] systemd-logind[962]: Watching system buttons on /dev/input/event2 (Power Button)
kube# [ 5.289996] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[930]: Error in configuration:
kube# [ 5.292259] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[930]: * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# [ 5.295528] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[930]: * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# [ 5.300540] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[930]: * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube# [ 5.303825] systemd-logind[962]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
kube# [ 5.307114] systemd[1]: Started Login Service.
kube# [ 5.309515] systemd[1]: kube-addon-manager.service: Control process exited, code=exited, status=1/FAILURE
kube: exit status 1
(5.92 seconds)
kube# [ 5.312599] systemd[1]: kube-addon-manager.service: Failed with result 'exit-code'.
kube# [ 5.315036] systemd[1]: Failed to start Kubernetes addon manager.
kube# [ 5.343432] etcd[927]: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd.local:2379
kube# [ 5.346171] etcd[927]: recognized and used environment variable ETCD_CERT_FILE=/var/lib/kubernetes/secrets/etcd.pem
kube# [ 5.349092] etcd[927]: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=1
kube# [ 5.351653] etcd[927]: recognized and used environment variable ETCD_DATA_DIR=/var/lib/etcd
kube# [ 5.354148] etcd[927]: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd.local:2380
kube# [ 5.356535] etcd[927]: recognized and used environment variable ETCD_INITIAL_CLUSTER=kube.my.xzy=https://etcd.local:2380
kube# [ 5.359857] etcd[927]: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=new
kube# [ 5.362273] etcd[927]: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
kube# [ 5.364527] etcd[927]: recognized and used environment variable ETCD_KEY_FILE=/var/lib/kubernetes/secrets/etcd-key.pem
kube# [ 5.366597] etcd[927]: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://127.0.0.1:2379
kube# [ 5.368627] etcd[927]: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://127.0.0.1:2380
kube# [ 5.370696] etcd[927]: recognized and used environment variable ETCD_NAME=kube.my.xzy
kube# [ 5.372478] etcd[927]: recognized and used environment variable ETCD_PEER_CERT_FILE=/var/lib/kubernetes/secrets/etcd.pem
kube# [ 5.374549] etcd[927]: recognized and used environment variable ETCD_PEER_KEY_FILE=/var/lib/kubernetes/secrets/etcd-key.pem
kube# [ 5.376750] etcd[927]: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/var/lib/kubernetes/secrets/ca.pem
kube# [ 5.378844] etcd[927]: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/var/lib/kubernetes/secrets/ca.pem
kube# [ 5.380844] etcd[927]: unrecognized environment variable ETCD_DISCOVERY=
kube# [ 5.382629] etcd[927]: etcd Version: 3.3.13
kube# [ 5.384329] etcd[927]: Git SHA: Not provided (use ./build instead of go build)
kube# [ 5.386077] etcd[927]: Go Version: go1.12.9
kube# [ 5.387578] etcd[927]: Go OS/Arch: linux/amd64
kube# [ 5.389093] etcd[927]: setting maximum number of CPUs to 16, total number of available CPUs is 16
kube# [ 5.390965] etcd[927]: failed to detect default host (could not find default route)
kube# [ 5.392685] etcd[927]: peerTLS: cert = /var/lib/kubernetes/secrets/etcd.pem, key = /var/lib/kubernetes/secrets/etcd-key.pem, ca = , trusted-ca = /var/lib/kubernetes/secrets/ca.pem, client-cert-auth = false, crl-file =
kube# [ 5.395495] etcd[927]: open /var/lib/kubernetes/secrets/etcd.pem: no such file or directory
kube# [ 5.397564] systemd[1]: etcd.service: Main process exited, code=exited, status=1/FAILURE
kube# [ 5.425177] systemd[1]: etcd.service: Failed with result 'exit-code'.
kube# [ 5.426682] systemd[1]: Failed to start etcd key-value store.
kube# [ 5.435016] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[926]: 2020/01/27 01:28:33 [INFO] encoded CSR
kube# [ 5.436827] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[926]: 2020/01/27 01:28:33 [INFO] signed certificate with serial number 208733668384399971802246807256512139257152651023
kube# [ 5.452232] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[926]: 2020/01/27 01:28:33 [INFO] generate received request
kube# [ 5.454380] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[926]: 2020/01/27 01:28:33 [INFO] received CSR
kube# [ 5.456071] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[926]: 2020/01/27 01:28:33 [INFO] generating key: rsa-2048
kube# [ 5.493493] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[926]: 2020/01/27 01:28:33 [INFO] encoded CSR
kube# [ 5.495979] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[926]: 2020/01/27 01:28:33 [INFO] signed certificate with serial number 234173146315212111525032217680913031046921330612
kube# [ 5.499420] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[926]: 2020/01/27 01:28:33 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
kube# [ 5.502585] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[926]: websites. For more information see the Baseline Requirements for the Issuance and Management
kube# [ 5.505887] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[926]: of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
kube# [ 5.508981] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[926]: specifically, section 10.2.3 ("Information Requirements").
kube# [ 5.521729] systemd[1]: Started CFSSL CA API server.
kube# [ 5.529738] cfssl[1041]: 2020/01/27 01:28:33 [INFO] Initializing signer
kube# [ 5.532007] cfssl[1041]: 2020/01/27 01:28:33 [WARNING] couldn't initialize ocsp signer: open : no such file or directory
kube# [ 5.534255] cfssl[1041]: 2020/01/27 01:28:33 [INFO] endpoint '/api/v1/cfssl/sign' is enabled
kube# [ 5.535979] cfssl[1041]: 2020/01/27 01:28:33 [INFO] endpoint '/api/v1/cfssl/authsign' is enabled
kube# [ 5.537591] cfssl[1041]: 2020/01/27 01:28:33 [INFO] bundler API ready
kube# [ 5.539155] cfssl[1041]: 2020/01/27 01:28:33 [INFO] endpoint '/api/v1/cfssl/bundle' is enabled
kube# [ 5.541062] cfssl[1041]: 2020/01/27 01:28:33 [WARNING] endpoint 'revoke' is disabled: cert db not configured (missing -db-config)
kube# [ 5.543005] cfssl[1041]: 2020/01/27 01:28:33 [INFO] endpoint '/api/v1/cfssl/info' is enabled
kube# [ 5.544730] cfssl[1041]: 2020/01/27 01:28:33 [INFO] endpoint '/api/v1/cfssl/scaninfo' is enabled
kube# [ 5.546429] cfssl[1041]: 2020/01/27 01:28:33 [WARNING] endpoint 'ocspsign' is disabled: signer not initialized
kube# [ 5.548234] cfssl[1041]: 2020/01/27 01:28:33 [WARNING] endpoint 'crl' is disabled: cert db not configured (missing -db-config)
kube# [ 5.550075] cfssl[1041]: 2020/01/27 01:28:33 [INFO] endpoint '/api/v1/cfssl/certinfo' is enabled
kube# [ 5.551717] cfssl[1041]: 2020/01/27 01:28:33 [WARNING] endpoint '/' is disabled: could not locate box "static"
kube# [ 5.553603] cfssl[1041]: 2020/01/27 01:28:33 [INFO] endpoint '/api/v1/cfssl/gencrl' is enabled
kube# [ 5.555422] cfssl[1041]: 2020/01/27 01:28:33 [INFO] endpoint '/api/v1/cfssl/newcert' is enabled
kube# [ 5.557168] cfssl[1041]: 2020/01/27 01:28:33 [INFO] setting up key / CSR generator
kube# [ 5.559036] cfssl[1041]: 2020/01/27 01:28:33 [INFO] endpoint '/api/v1/cfssl/newkey' is enabled
kube# [ 5.561217] cfssl[1041]: 2020/01/27 01:28:33 [INFO] endpoint '/api/v1/cfssl/init_ca' is enabled
kube# [ 5.563503] cfssl[1041]: 2020/01/27 01:28:33 [INFO] endpoint '/api/v1/cfssl/scan' is enabled
kube# [ 5.565552] cfssl[1041]: 2020/01/27 01:28:33 [INFO] Handler set up complete.
kube# [ 5.567722] cfssl[1041]: 2020/01/27 01:28:33 [INFO] Now listening on https://0.0.0.0:8888
kube# [ 5.633408] kube-proxy[932]: W0127 01:28:33.583381 932 server.go:216] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
kube# [ 5.650533] kube-proxy[932]: W0127 01:28:33.600820 932 proxier.go:500] Failed to read file /lib/modules/4.19.95/modules.builtin with error open /lib/modules/4.19.95/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 5.654781] kube-proxy[932]: W0127 01:28:33.605130 932 proxier.go:513] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 5.657263] kube-proxy[932]: W0127 01:28:33.605470 932 proxier.go:513] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 5.660020] kube-proxy[932]: W0127 01:28:33.605784 932 proxier.go:513] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 5.662524] kube-proxy[932]: W0127 01:28:33.607360 932 proxier.go:513] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 5.665057] kube-proxy[932]: W0127 01:28:33.607593 932 proxier.go:513] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 5.672543] kube-proxy[932]: F0127 01:28:33.622863 932 server.go:449] invalid configuration: [unable to read client-cert /var/lib/kubernetes/secrets/kube-proxy-client.pem for kube-proxy due to open /var/lib/kubernetes/secrets/kube-proxy-client.pem: no such file or directory, unable to read client-key /var/lib/kubernetes/secrets/kube-proxy-client-key.pem for kube-proxy due to open /var/lib/kubernetes/secrets/kube-proxy-client-key.pem: no such file or directory, unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory]
kube# [ 5.678284] systemd[1]: kube-proxy.service: Main process exited, code=exited, status=255/EXCEPTION
kube# [ 5.679775] systemd[1]: kube-proxy.service: Failed with result 'exit-code'.
kube# [ 5.826991] 8021q: adding VLAN 0 to HW filter on device eth0
kube# [ 5.774430] dhcpcd[787]: eth0: waiting for carrier
kube# [ 5.776264] dhcpcd[787]: eth0: carrier acquired
kube# [ 5.781461] dhcpcd[787]: DUID 00:01:00:01:25:c0:f9:41:52:54:00:12:34:56
kube# [ 5.782886] dhcpcd[787]: eth0: IAID 00:12:34:56
kube# [ 5.784113] dhcpcd[787]: eth0: adding address fe80::5054:ff:fe12:3456
kube# [ 5.790892] dhcpcd[787]: eth0: soliciting a DHCP lease
kube# [ 5.854268] NET: Registered protocol family 17
kube# [ 5.813191] dhcpcd[787]: eth0: offered 10.0.2.15 from 10.0.2.2
kube# [ 5.814676] dhcpcd[787]: eth0: leased 10.0.2.15 for 86400 seconds
kube# [ 5.816095] dhcpcd[787]: eth0: adding route to 10.0.2.0/24
kube# [ 5.817417] dhcpcd[787]: eth0: adding default route via 10.0.2.2
kube# [ 5.867394] nscd[929]: 929 monitored file `/etc/resolv.conf` was written to
kube# [ 5.878748] systemd[1]: Stopping Name Service Cache Daemon...
kube# [ 5.891981] kube-scheduler[935]: I0127 01:28:33.841884 935 serving.go:319] Generated self-signed cert in-memory
kube# [ 5.894616] systemd[1]: nscd.service: Succeeded.
kube# [ 5.896293] systemd[1]: Stopped Name Service Cache Daemon.
kube# [ 5.897856] systemd[1]: Starting Name Service Cache Daemon...
kube# [ 5.904618] nscd[1133]: 1133 monitoring file `/etc/passwd` (1)
kube# [ 5.906021] nscd[1133]: 1133 monitoring directory `/etc` (2)
kube# [ 5.908152] nscd[1133]: 1133 monitoring file `/etc/group` (3)
kube# [ 5.909572] dhcpcd[787]: Failed to reload-or-try-restart ntpd.service: Unit ntpd.service not found.
kube# [ 5.911099] dhcpcd[787]: Failed to reload-or-try-restart openntpd.service: Unit openntpd.service not found.
kube# [ 5.912514] dhcpcd[787]: Failed to reload-or-try-restart chronyd.service: Unit chronyd.service not found.
kube# [ 5.913948] nscd[1133]: 1133 monitoring directory `/etc` (2)
kube# [ 5.915096] systemd[1]: Started Name Service Cache Daemon.
kube# [ 5.916295] nscd[1133]: 1133 monitoring file `/etc/hosts` (4)
kube# [ 5.917462] nscd[1133]: 1133 monitoring directory `/etc` (2)
kube# [ 5.918642] nscd[1133]: 1133 monitoring file `/etc/resolv.conf` (5)
kube# [ 5.919825] nscd[1133]: 1133 monitoring directory `/etc` (2)
kube# [ 5.921183] nscd[1133]: 1133 monitoring file `/etc/services` (6)
kube# [ 5.922451] nscd[1133]: 1133 monitoring directory `/etc` (2)
kube# [ 5.923610] nscd[1133]: 1133 disabled inotify-based monitoring for file `/etc/netgroup': No such file or directory
kube# [ 5.925104] nscd[1133]: 1133 stat failed for file `/etc/netgroup'; will try again later: No such file or directory
kube# [ 5.926619] dhcpcd[787]: forked to background, child pid 1140
kube# [ 5.938355] systemd[1]: Started DHCP Client.
kube# [ 5.939721] systemd[1]: Reached target Network is Online.
kube# [ 5.941693] systemd[1]: Starting certmgr...
kube# [ 5.943588] systemd[1]: Starting Docker Application Container Engine...
kube# [ 6.113450] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1145]: 2020/01/27 01:28:34 [INFO] certmgr: loading from config file /nix/store/bmm143bjzpgvrw7k50r36c5smy1n4pqm-certmgr.yaml
kube# [ 6.116367] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1145]: 2020/01/27 01:28:34 [INFO] manager: loading certificates from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d
kube# [ 6.118774] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1145]: 2020/01/27 01:28:34 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/addonManager.json
kube# [ 6.122073] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1145]: 2020/01/27 01:28:34 [ERROR] cert: failed to fetch remote CA: open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube# [ 6.124457] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1145]: Failed: open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube# [ 6.126608] systemd[1]: certmgr.service: Control process exited, code=exited, status=1/FAILURE
kube# [ 6.128264] systemd[1]: certmgr.service: Failed with result 'exit-code'.
kube# [ 6.129709] systemd[1]: Failed to start certmgr.
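At this point kube-addon-manager, etcd, kube-proxy, and certmgr have all failed, every one of them on a missing file under `/var/lib/kubernetes/secrets`. A small hypothetical triage helper (not part of the test driver) that pulls the failed units out of a dump like this one; the `log` sample is pasted from the lines above:

```shell
# Hedged sketch: list every systemd unit that reported
# "Failed with result" in a pasted console log.
log="kube# [    5.312599] systemd[1]: kube-addon-manager.service: Failed with result 'exit-code'.
kube# [    5.425177] systemd[1]: etcd.service: Failed with result 'exit-code'.
kube# [    5.679775] systemd[1]: kube-proxy.service: Failed with result 'exit-code'."

# Extract the unit name from each failure line, de-duplicated and sorted.
failed=$(printf '%s\n' "$log" \
  | sed -n 's/.*systemd\[1\]: \([^ ]*\.service\): Failed with result.*/\1/p' \
  | sort -u)
printf '%s\n' "$failed"
```

Seeing them grouped makes the common root cause (the secrets directory never being populated, because kube-certmgr-bootstrap and certmgr themselves failed first) easier to spot than scanning the interleaved log.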
kube# [ 6.159199] kube-controller-manager[931]: Flag --port has been deprecated, see --secure-port instead.
kube# [ 6.174339] kube-scheduler[935]: W0127 01:28:34.124526 935 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
kube# [ 6.177422] kube-scheduler[935]: W0127 01:28:34.124562 935 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
kube# [ 6.180219] kube-scheduler[935]: W0127 01:28:34.124579 935 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
kube# [ 6.185579] kube-scheduler[935]: invalid configuration: [unable to read client-cert /var/lib/kubernetes/secrets/kube-scheduler-client.pem for kube-scheduler due to open /var/lib/kubernetes/secrets/kube-scheduler-client.pem: no such file or directory, unable to read client-key /var/lib/kubernetes/secrets/kube-scheduler-client-key.pem for kube-scheduler due to open /var/lib/kubernetes/secrets/kube-scheduler-client-key.pem: no such file or directory, unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory]
kube# [ 6.190563] systemd[1]: kube-scheduler.service: Main process exited, code=exited, status=1/FAILURE
kube# [ 6.192572] systemd[1]: kube-scheduler.service: Failed with result 'exit-code'.
kube# [ 6.199061] kube-apiserver[928]: Flag --insecure-bind-address has been deprecated, This flag will be removed in a future version.
kube# [ 6.200676] kube-apiserver[928]: Flag --insecure-port has been deprecated, This flag will be removed in a future version.
kube# [ 6.202633] kube-apiserver[928]: I0127 01:28:34.149106 928 server.go:560] external host was not specified, using 192.168.1.1
kube# [ 6.204154] kube-apiserver[928]: I0127 01:28:34.149293 928 server.go:147] Version: v1.15.6
kube# [ 6.205415] kube-apiserver[928]: Error: unable to load server certificate: open /var/lib/kubernetes/secrets/kube-apiserver.pem: no such file or directory
kube# [ 6.207093] kube-apiserver[928]: Usage:
kube# [ 6.208043] kube-apiserver[928]: kube-apiserver [flags]
kube# [ 6.208983] kube-apiserver[928]: Generic flags:
kube# [ 6.209943] kube-apiserver[928]: --advertise-address ip The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.
kube# [ 6.212785] kube-apiserver[928]: --cloud-provider-gce-lb-src-cidrs cidrs CIDRs opened in GCE firewall for LB traffic proxy & health checks (default 130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16)
kube# [ 6.214980] kube-apiserver[928]: --cors-allowed-origins strings List of allowed origins for CORS, comma separated. An allowed origin can be a regular expression to support subdomain matching. If this list is empty CORS will not be enabled.
kube# [ 6.217307] kube-apiserver[928]: --default-not-ready-toleration-seconds int Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration. (default 300)
kube# [ 6.219732] kube-apiserver[928]: --default-unreachable-toleration-seconds int Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration. (default 300)
kube# [ 6.222496] kube-apiserver[928]: --enable-inflight-quota-handler If true, replace the max-in-flight handler with an enhanced one that queues and dispatches with priority and fairness
kube# [ 6.224840] kube-apiserver[928]: --external-hostname string The hostname to use when generating externalized URLs for this master (e.g. Swagger API Docs).
kube# [ 6.227105] kube-apiserver[928]: --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
kube# [ 6.229278] kube-apiserver[928]: APIListChunking=true|false (BETA - default=true)
kube# [ 6.231068] kube-apiserver[928]: APIResponseCompression=true|false (ALPHA - default=false)
kube# [ 6.232649] kube-apiserver[928]: AllAlpha=true|false (ALPHA - default=false)
kube# [ 6.234281] systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=1/FAILURE
kube# [ 6.235726] kube-apiserver[928]: AppArmor=true|false (BETA - default=true)
kube# [ 6.237160] kube-apiserver[928]: AttachVolumeLimit=true|false (BETA - default=true)
kube# [ 6.238649] kube-apiserver[928]: BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
kube# [ 6.240277] kube-apiserver[928]: BlockVolume=true|false (BETA - default=true)
kube# [ 6.241936] kube-apiserver[928]: BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
kube# [ 6.243553] kube-apiserver[928]: CPUManager=true|false (BETA - default=true)
kube# [ 6.244998] kube-apiserver[928]: CRIContainerLogRotation=true|false (BETA - default=true)
kube# [ 6.246553] kube-apiserver[928]: CSIBlockVolume=true|false (BETA - default=true)
kube# [ 6.248076] kube-apiserver[928]: CSIDriverRegistry=true|false (BETA - default=true)
kube# [ 6.249516] kube-apiserver[928]: CSIInlineVolume=true|false (ALPHA - default=false)
kube# [ 6.250993] kube-apiserver[928]: CSIMigration=true|false (ALPHA - default=false)
kube# [ 6.252477] kube-apiserver[928]: CSIMigrationAWS=true|false (ALPHA - default=false)
kube# [ 6.253958] kube-apiserver[928]: CSIMigrationAzureDisk=true|false (ALPHA - default=false)
kube# [ 6.255544] kube-apiserver[928]: CSIMigrationAzureFile=true|false (ALPHA - default=false)
kube# [ 6.257134] kube-apiserver[928]: CSIMigrationGCE=true|false (ALPHA - default=false)
kube# [ 6.258934] kube-apiserver[928]: CSIMigrationOpenStack=true|false (ALPHA - default=false)
kube# [ 6.260542] kube-apiserver[928]: CSINodeInfo=true|false (BETA - default=true)
kube# [ 6.261957] kube-apiserver[928]: CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
kube# [ 6.263423] kube-apiserver[928]: CustomResourceDefaulting=true|false (ALPHA - default=false)
kube# [ 6.264975] systemd[1]: kube-apiserver.service: Failed with result 'exit-code'.
kube# [ 6.266086] kube-apiserver[928]: CustomResourcePublishOpenAPI=true|false (BETA - default=true)
kube# [ 6.267513] kube-apiserver[928]: CustomResourceSubresources=true|false (BETA - default=true)
kube# [ 6.269037] kube-apiserver[928]: CustomResourceValidation=true|false (BETA - default=true)
kube# [ 6.270467] kube-apiserver[928]: CustomResourceWebhookConversion=true|false (BETA - default=true)
kube# [ 6.271951] kube-apiserver[928]: DebugContainers=true|false (ALPHA - default=false)
kube# [ 6.273312] kube-apiserver[928]: DevicePlugins=true|false (BETA - default=true)
kube# [ 6.274647] kube-apiserver[928]: DryRun=true|false (BETA - default=true)
kube# [ 6.276007] kube-apiserver[928]: DynamicAuditing=true|false (ALPHA - default=false)
kube# [ 6.277373] kube-apiserver[928]: DynamicKubeletConfig=true|false (BETA - default=true)
kube# [ 6.278729] kube-apiserver[928]: ExpandCSIVolumes=true|false (ALPHA - default=false)
kube# [ 6.280118] kube-apiserver[928]: ExpandInUsePersistentVolumes=true|false (BETA - default=true)
kube# [ 6.281544] kube-apiserver[928]: ExpandPersistentVolumes=true|false (BETA - default=true)
kube# [ 6.282967] kube-apiserver[928]: ExperimentalCriticalPodAnnotation=true|false (ALPHA - default=false)
kube# [ 6.284453] kube-apiserver[928]: ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
kube# [ 6.285993] kube-apiserver[928]: HyperVContainer=true|false (ALPHA - default=false)
kube# [ 6.287375] kube-apiserver[928]: KubeletPodResources=true|false (BETA - default=true)
kube# [ 6.288754] kube-apiserver[928]: LocalStorageCapacityIsolation=true|false (BETA - default=true)
kube# [ 6.290235] kube-apiserver[928]: LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
kube# [ 6.291878] kube-apiserver[928]: MountContainers=true|false (ALPHA - default=false)
kube# [ 6.293256] kube-apiserver[928]: NodeLease=true|false (BETA - default=true)
kube# [ 6.294553] kube-apiserver[928]: NonPreemptingPriority=true|false (ALPHA - default=false)
kube# [ 6.295985] kube-apiserver[928]: PodShareProcessNamespace=true|false (BETA - default=true)
kube# [ 6.297407] kube-apiserver[928]: ProcMountType=true|false (ALPHA - default=false)
kube# [ 6.298769] kube-apiserver[928]: QOSReserved=true|false (ALPHA - default=false)
kube# [ 6.300190] kube-apiserver[928]: RemainingItemCount=true|false (ALPHA - default=false)
kube# [ 6.301588] kube-apiserver[928]: RequestManagement=true|false (ALPHA - default=false)
kube# [ 6.303066] kube-apiserver[928]: ResourceLimitsPriorityFunction=true|false (ALPHA - default=false)
kube# [ 6.304584] kube-apiserver[928]: ResourceQuotaScopeSelectors=true|false (BETA - default=true)
kube# [ 6.306071] kube-apiserver[928]: RotateKubeletClientCertificate=true|false (BETA - default=true)
kube# [ 6.307564] kube-apiserver[928]: RotateKubeletServerCertificate=true|false (BETA - default=true)
kube# [ 6.309431] kube-apiserver[928]: RunAsGroup=true|false (BETA - default=true)
kube# [ 6.310759] kube-apiserver[928]: RuntimeClass=true|false (BETA - default=true)
kube# [ 6.312131] kube-apiserver[928]: SCTPSupport=true|false (ALPHA - default=false)
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# [ 6.313490] kube-apiserver[928]: ScheduleDaemonSetPods=true|false (BETA - default=true)
kube# [ 6.314945] kube-apiserver[928]: ServerSideApply=true|false (ALPHA - default=false)
kube# [ 6.316317] kube-apiserver[928]: ServiceLoadBalancerFinalizer=true|false (ALPHA - default=false)
kube# [ 6.317772] kube-apiserver[928]: ServiceNodeExclusion=true|false (ALPHA - default=false)
kube# [ 6.319281] kube-apiserver[928]: StorageVersionHash=true|false (BETA - default=true)
kube# [ 6.321024] kube-apiserver[928]: StreamingProxyRedirects=true|false (BETA - default=true)
kube# [ 6.322947] kube-apiserver[928]: SupportNodePidsLimit=true|false (BETA - default=true)
kube# [ 6.324824] kube-apiserver[928]: SupportPodPidsLimit=true|false (BETA - default=true)
kube# [ 6.326688] kube-apiserver[928]: Sysctls=true|false (BETA - default=true)
kube# [ 6.328418] kube-apiserver[928]: TTLAfterFinished=true|false (ALPHA - default=false)
kube# [ 6.330147] kube-apiserver[928]: TaintBasedEvictions=true|false (BETA - default=true)
kube# [ 6.331827] kube-apiserver[928]: TaintNodesByCondition=true|false (BETA - default=true)
kube# [ 6.333349] kube-apiserver[928]: TokenRequest=true|false (BETA - default=true)
kube# [ 6.334894] kube-apiserver[928]: TokenRequestProjection=true|false (BETA - default=true)
kube# [ 6.336463] kube-apiserver[928]: ValidateProxyRedirects=true|false (BETA - default=true)
kube# [ 6.337938] kube-apiserver[928]: VolumePVCDataSource=true|false (ALPHA - default=false)
kube# [ 6.339385] kube-apiserver[928]: VolumeSnapshotDataSource=true|false (ALPHA - default=false)
kube# [ 6.340848] kube-apiserver[928]: VolumeSubpathEnvExpansion=true|false (BETA - default=true)
kube# [ 6.342304] kube-apiserver[928]: WatchBookmark=true|false (ALPHA - default=false)
kube# [ 6.343825] kube-apiserver[928]: WinDSR=true|false (ALPHA - default=false)
kube# [ 6.345206] kube-apiserver[928]: WinOverlay=true|false (ALPHA - default=false)
kube# [ 6.346577] kube-apiserver[928]: WindowsGMSA=true|false (ALPHA - default=false)
kube# [ 6.348158] kube-apiserver[928]: --master-service-namespace string DEPRECATED: the namespace from which the kubernetes master services should be injected into pods. (default "default")
kube# [ 6.350123] kube-apiserver[928]: --max-mutating-requests-inflight int The maximum number of mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. (default 200)
kube# [ 6.352389] kube-apiserver[928]: --max-requests-inflight int The maximum number of non-mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. (default 400)
kube# [ 6.354428] kube-apiserver[928]: --min-request-timeout int An optional field indicating the minimum number of seconds a handler must keep a request open before timing it out. Currently only honored by the watch request handler, which picks a randomized value above this number as the connection timeout, to spread out load. (default 1800)
kube# [ 6.357369] kube-apiserver[928]: --request-timeout duration An optional field indicating the duration a handler must keep a request open before timing it out. This is the default request timeout for requests but may be overridden by flags such as --min-request-timeout for specific types of requests. (default 1m0s)
kube# [ 6.360678] kube-apiserver[928]: --target-ram-mb int Memory limit for apiserver in MB (used to configure sizes of caches, etc.)
kube# [ 6.362449] kube-apiserver[928]: Etcd flags:
kube# [ 6.363434] kube-apiserver[928]: --default-watch-cache-size int Default watch cache size. If zero, watch cache will be disabled for resources that do not have a default watch size set. (default 100)
kube# [ 6.365536] kube-apiserver[928]: --delete-collection-workers int Number of workers spawned for DeleteCollection call. These are used to speed up namespace cleanup. (default 1)
kube# [ 6.367361] kube-apiserver[928]: --enable-garbage-collector Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-controller-manager. (default true)
kube# [ 6.369263] kube-apiserver[928]: --encryption-provider-config string The file containing configuration for encryption providers to be used for storing secrets in etcd
kube# [ 6.371358] kube-apiserver[928]: --etcd-cafile string SSL Certificate Authority file used to secure etcd communication.
kube# [ 6.373020] kube-apiserver[928]: --etcd-certfile string SSL certification file used to secure etcd communication.
kube# [ 6.374620] kube-apiserver[928]: --etcd-compaction-interval duration The interval of compaction requests. If 0, the compaction request from apiserver is disabled. (default 5m0s)
kube# [ 6.376550] kube-apiserver[928]: --etcd-count-metric-poll-period duration Frequency of polling etcd for number of resources per type. 0 disables the metric collection. (default 1m0s)
kube# [ 6.378497] kube-apiserver[928]: --etcd-keyfile string SSL key file used to secure etcd communication.
kube# [ 6.380080] kube-apiserver[928]: --etcd-prefix string The prefix to prepend to all resource paths in etcd. (default "/registry")
kube# [    6.381746] kube-apiserver[928]:       --etcd-servers strings                     List of etcd servers to connect with (scheme://ip:port), comma separated.
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube#
kube: exit status 1
(0.07 seconds)
kube# [ 6.387150] kube-apiserver[928]: --etcd-servers-overrides strings Per-resource etcd servers overrides, comma separated. The individual override format: group/resource#servers, where servers are URLs, semicolon separated.
kube# [ 6.389288] kube-apiserver[928]: --storage-backend string The storage backend for persistence. Options: 'etcd3' (default).
kube# [ 6.390759] kube-apiserver[928]: --storage-media-type string The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting. (default "application/vnd.kubernetes.protobuf")
kube# [ 6.393195] kube-apiserver[928]: --watch-cache Enable watch caching in the apiserver (default true)
kube# [ 6.395017] dhcpcd[1140]: eth0: soliciting an IPv6 router
kube# [ 6.396068] kube-apiserver[928]: --watch-cache-sizes strings Watch cache size settings for some resources (pods, nodes, etc.), comma separated. The individual setting format: resource[.group]#size, where resource is lowercase plural (no version), group is omitted for resources of apiVersion v1 (the legacy core API) and included for others, and size is a number. It takes effect when watch-cache is enabled. Some resources (replicationcontrollers, endpoints, nodes, pods, services, apiservices.apiregistration.k8s.io) have system defaults set by heuristics, others default to default-watch-cache-size
kube# [ 6.400948] kube-apiserver[928]: Secure serving flags:
kube# [ 6.401980] kube-apiserver[928]: --bind-address ip The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank, all interfaces will be used (0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 0.0.0.0)
kube# [ 6.405069] kube-apiserver[928]: --cert-dir string The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored. (default "/var/run/kubernetes")
kube# [ 6.407288] kube-apiserver[928]: --http2-max-streams-per-connection int The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
kube# [ 6.409319] kube-apiserver[928]: --secure-port int The port on which to serve HTTPS with authentication and authorization.It cannot be switched off with 0. (default 6443)
kube# [ 6.411236] kube-apiserver[928]: --tls-cert-file string File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
kube# [ 6.414477] kube-apiserver[928]: --tls-cipher-suites strings Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be use. Possible values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA
kube# [ 6.421513] kube-apiserver[928]: --tls-min-version string Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
kube# [ 6.423293] kube-apiserver[928]: --tls-private-key-file string File containing the default x509 private key matching --tls-cert-file.
kube# [ 6.424828] kube-apiserver[928]: --tls-sni-cert-key namedCertKey A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])
kube# [ 6.429707] kube-apiserver[928]: Insecure serving flags:
kube# [ 6.430730] kube-apiserver[928]: --address ip The IP address on which to serve the insecure --port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 127.0.0.1) (DEPRECATED: see --bind-address instead.)
kube# [ 6.433106] kube-apiserver[928]: --insecure-bind-address ip The IP address on which to serve the --insecure-port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 127.0.0.1) (DEPRECATED: This flag will be removed in a future version.)
kube# [ 6.435471] kube-apiserver[928]: --insecure-port int The port on which to serve unsecured, unauthenticated access. (default 8080) (DEPRECATED: This flag will be removed in a future version.)
kube# [ 6.437360] kube-apiserver[928]: --port int The port on which to serve unsecured, unauthenticated access. Set to 0 to disable. (default 8080) (DEPRECATED: see --secure-port instead.)
kube# [ 6.439225] kube-apiserver[928]: Auditing flags:
kube# [ 6.440071] kube-apiserver[928]: --audit-dynamic-configuration Enables dynamic audit configuration. This feature also requires the DynamicAuditing feature flag
kube# [ 6.441821] kube-apiserver[928]: --audit-log-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
kube# [ 6.443630] kube-apiserver[928]: --audit-log-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 1)
kube# [ 6.445220] kube-apiserver[928]: --audit-log-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.
kube# [ 6.447087] kube-apiserver[928]: --audit-log-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.
kube# [ 6.448985] kube-apiserver[928]: --audit-log-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode.
kube# [ 6.450495] kube-apiserver[928]: --audit-log-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode.
kube# [ 6.452034] kube-apiserver[928]: --audit-log-format string Format of saved audits. "legacy" indicates 1-line text format for each event. "json" indicates structured json format. Known formats are legacy,json. (default "json")
kube# [ 6.454207] kube-apiserver[928]: --audit-log-maxage int The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.
kube# [ 6.456068] kube-apiserver[928]: --audit-log-maxbackup int The maximum number of old audit log files to retain.
kube# [ 6.457498] kube-apiserver[928]: --audit-log-maxsize int The maximum size in megabytes of the audit log file before it gets rotated.
kube# [ 6.459150] kube-apiserver[928]: --audit-log-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "blocking")
kube# [ 6.461806] kube-apiserver[928]: --audit-log-path string If set, all requests coming to the apiserver will be logged to this file. '-' means standard out.
kube# [ 6.463597] kube-apiserver[928]: --audit-log-truncate-enabled Whether event and batch truncating is enabled.
kube# [ 6.465122] kube-apiserver[928]: --audit-log-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
kube# [ 6.467725] kube-apiserver[928]: --audit-log-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
kube# [ 6.470393] kube-apiserver[928]: --audit-log-version string API group and version used for serializing audit events written to log. (default "audit.k8s.io/v1")
kube# [ 6.472168] kube-apiserver[928]: --audit-policy-file string Path to the file that defines the audit policy configuration.
kube# [ 6.473676] kube-apiserver[928]: --audit-webhook-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
kube# [ 6.475578] kube-apiserver[928]: --audit-webhook-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 400)
kube# [ 6.477301] kube-apiserver[928]: --audit-webhook-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode. (default 30s)
kube# [ 6.479228] kube-apiserver[928]: --audit-webhook-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode. (default 15)
kube# [ 6.481218] kube-apiserver[928]: --audit-webhook-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode. (default true)
kube# [ 6.482856] kube-apiserver[928]: --audit-webhook-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode. (default 10)
kube# [ 6.484539] kube-apiserver[928]: --audit-webhook-config-file string Path to a kubeconfig formatted file that defines the audit webhook configuration.
kube# [ 6.486205] kube-apiserver[928]: --audit-webhook-initial-backoff duration The amount of time to wait before retrying the first failed request. (default 10s)
kube# [ 6.487815] kube-apiserver[928]: --audit-webhook-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "batch")
kube# [ 6.490470] kube-apiserver[928]: --audit-webhook-truncate-enabled Whether event and batch truncating is enabled.
kube# [ 6.491934] kube-apiserver[928]: --audit-webhook-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
kube# [ 6.494424] kube-apiserver[928]: --audit-webhook-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
kube# [ 6.497125] kube-apiserver[928]: --audit-webhook-version string API group and version used for serializing audit events written to webhook. (default "audit.k8s.io/v1")
kube# [ 6.498922] kube-apiserver[928]: Features flags:
kube# [ 6.499833] kube-apiserver[928]: --contention-profiling Enable lock contention profiling, if profiling is enabled
kube# [ 6.501172] kube-apiserver[928]: --profiling Enable profiling via web interface host:port/debug/pprof/ (default true)
kube# [ 6.502568] kube-apiserver[928]: Authentication flags:
kube# [ 6.503433] kube-apiserver[928]: --anonymous-auth Enables anonymous requests to the secure port of the API server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated. (default true)
kube# [ 6.506313] kube-apiserver[928]: --api-audiences strings Identifiers of the API. The service account token authenticator will validate that tokens used against the API are bound to at least one of these audiences. If the --service-account-issuer flag is configured and this flag is not, this field defaults to a single element list containing the issuer URL .
kube# [ 6.509796] kube-apiserver[928]: --authentication-token-webhook-cache-ttl duration The duration to cache responses from the webhook token authenticator. (default 2m0s)
kube# [ 6.512046] kube-apiserver[928]: --authentication-token-webhook-config-file string File with webhook configuration for token authentication in kubeconfig format. The API server will query the remote service to determine authentication for bearer tokens.
kube# [ 6.514653] kube-apiserver[928]: --basic-auth-file string If set, the file that will be used to admit requests to the secure port of the API server via http basic authentication.
kube# [ 6.516707] kube-apiserver[928]: --client-ca-file string If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
kube# [ 6.519140] kube-apiserver[928]: --enable-bootstrap-token-auth Enable to allow secrets of type 'bootstrap.kubernetes.io/token' in the 'kube-system' namespace to be used for TLS bootstrapping authentication.
kube# [ 6.521287] kube-apiserver[928]: --oidc-ca-file string If set, the OpenID server's certificate will be verified by one of the authorities in the oidc-ca-file, otherwise the host's root CA set will be used.
kube# [ 6.523610] kube-apiserver[928]: --oidc-client-id string The client ID for the OpenID Connect client, must be set if oidc-issuer-url is set.
kube# [ 6.525291] kube-apiserver[928]: --oidc-groups-claim string If provided, the name of a custom OpenID Connect claim for specifying user groups. The claim value is expected to be a string or array of strings. This flag is experimental, please see the authentication documentation for further details.
kube# [ 6.527964] kube-apiserver[928]: --oidc-groups-prefix string If provided, all groups will be prefixed with this value to prevent conflicts with other authentication strategies.
kube# [ 6.529838] kube-apiserver[928]: --oidc-issuer-url string The URL of the OpenID issuer, only HTTPS scheme will be accepted. If set, it will be used to verify the OIDC JSON Web Token (JWT).
kube# [ 6.531838] kube-apiserver[928]: --oidc-required-claim mapStringString A key=value pair that describes a required claim in the ID Token. If set, the claim is verified to be present in the ID Token with a matching value. Repeat this flag to specify multiple claims.
kube# [ 6.534255] kube-apiserver[928]: --oidc-signing-algs strings Comma-separated list of allowed JOSE asymmetric signing algorithms. JWTs with a 'alg' header value not in this list will be rejected. Values are defined by RFC 7518 https://tools.ietf.org/html/rfc7518#section-3.1. (default [RS256])
kube# [ 6.536920] kube-apiserver[928]: --oidc-username-claim string The OpenID claim to use as the user name. Note that claims other than the default ('sub') is not guaranteed to be unique and immutable. This flag is experimental, please see the authentication documentation for further details. (default "sub")
kube# [ 6.539853] kube-apiserver[928]: --oidc-username-prefix string If provided, all usernames will be prefixed with this value. If not provided, username claims other than 'email' are prefixed by the issuer URL to avoid clashes. To skip any prefixing, provide the value '-'.
kube# [ 6.542652] kube-apiserver[928]: --requestheader-allowed-names strings List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
kube# [ 6.545587] kube-apiserver[928]: --requestheader-client-ca-file string Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
kube# [ 6.548430] kube-apiserver[928]: --requestheader-extra-headers-prefix strings List of request header prefixes to inspect. X-Remote-Extra- is suggested.
kube# [ 6.550238] kube-apiserver[928]: --requestheader-group-headers strings List of request headers to inspect for groups. X-Remote-Group is suggested.
kube# [ 6.552053] kube-apiserver[928]: --requestheader-username-headers strings List of request headers to inspect for usernames. X-Remote-User is common.
kube# [ 6.553808] kube-apiserver[928]: --service-account-issuer string Identifier of the service account token issuer. The issuer will assert this identifier in "iss" claim of issued tokens. This value is a string or URI.
kube# [ 6.556028] kube-apiserver[928]: --service-account-key-file stringArray File containing PEM-encoded x509 RSA or ECDSA private or public keys, used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. If unspecified, --tls-private-key-file is used. Must be specified when --service-account-signing-key is provided
kube# [ 6.559408] kube-apiserver[928]: --service-account-lookup If true, validate ServiceAccount tokens exist in etcd as part of authentication. (default true)
kube# [ 6.561352] kube-apiserver[928]: --service-account-max-token-expiration duration The maximum validity duration of a token created by the service account token issuer. If an otherwise valid TokenRequest with a validity duration larger than this value is requested, a token will be issued with a validity duration of this value.
kube# [ 6.564197] kube-apiserver[928]: --token-auth-file string If set, the file that will be used to secure the secure port of the API server via token authentication.
kube# [ 6.566179] kube-apiserver[928]: Authorization flags:
kube# [ 6.567239] kube-apiserver[928]: --authorization-mode strings Ordered list of plug-ins to do authorization on secure port. Comma-delimited list of: AlwaysAllow,AlwaysDeny,ABAC,Webhook,RBAC,Node. (default [AlwaysAllow])
kube# [ 6.569650] kube-apiserver[928]: --authorization-policy-file string File with authorization policy in json line by line format, used with --authorization-mode=ABAC, on the secure port.
kube# [ 6.571698] kube-apiserver[928]: --authorization-webhook-cache-authorized-ttl duration The duration to cache 'authorized' responses from the webhook authorizer. (default 5m0s)
kube# [ 6.573498] kube-apiserver[928]: --authorization-webhook-cache-unauthorized-ttl duration The duration to cache 'unauthorized' responses from the webhook authorizer. (default 30s)
kube# [ 6.575355] kube-apiserver[928]: --authorization-webhook-config-file string File with webhook configuration in kubeconfig format, used with --authorization-mode=Webhook. The API server will query the remote service to determine access on the API server's secure port.
kube# [ 6.577775] kube-apiserver[928]: Cloud provider flags:
kube# [ 6.578687] kube-apiserver[928]: --cloud-config string The path to the cloud provider configuration file. Empty string for no configuration file.
kube# [ 6.580335] kube-apiserver[928]: --cloud-provider string The provider for cloud services. Empty string for no provider.
kube# [ 6.581743] kube-apiserver[928]: Api enablement flags:
kube# [ 6.582758] kube-controller-manager[931]: I0127 01:28:34.493158 931 serving.go:319] Generated self-signed cert in-memory
kube# [ 6.584247] kube-controller-manager[931]: invalid configuration: [unable to read client-cert /var/lib/kubernetes/secrets/kube-controller-manager-client.pem for kube-controller-manager due to open /var/lib/kubernetes/secrets/kube-controller-manager-client.pem: no such file or directory, unable to read client-key /var/lib/kubernetes/secrets/kube-controller-manager-client-key.pem for kube-controller-manager due to open /var/lib/kubernetes/secrets/kube-controller-manager-client-key.pem: no such file or directory, unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory]
kube# [ 6.589536] systemd[1]: kube-controller-manager.service: Main process exited, code=exited, status=1/FAILURE
kube# [ 6.590969] kube-apiserver[928]: --runtime-config mapStringString A set of key=value pairs that describe runtime configuration that may be passed to apiserver. <group>/<version> (or <version> for the core group) key can be used to turn on/off specific api versions. api/all is special key to control all api versions, be careful setting it false, unless you know what you do. api/legacy is deprecated, we will remove it in the future, so stop using it. (default )
kube# [ 6.594687] kube-apiserver[928]: Admission flags:
kube# [ 6.595967] kube-apiserver[928]: --admission-control strings Admission is divided into two phases. In the first phase, only mutating admission plugins run. In the second phase, only validating admission plugins run. The names in the below list may represent a validating plugin, a mutating plugin, or both. The order of plugins in which they are passed to this flag does not matter. Comma-delimited list of: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. (DEPRECATED: Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.)
kube# [ 6.605277] kube-apiserver[928]: --admission-control-config-file string File with admission control configuration.
kube# [ 6.606817] systemd[1]: kube-controller-manager.service: Failed with result 'exit-code'.
kube# [ 6.608089] kube-apiserver[928]: --disable-admission-plugins strings admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
kube# [ 6.616339] kube-apiserver[928]: --enable-admission-plugins strings admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
kube# [ 6.624614] kube-apiserver[928]: Misc flags:
kube# [ 6.625456] kube-apiserver[928]: --allow-privileged If true, allow privileged containers. [default=false]
kube# [ 6.626946] kube-apiserver[928]: --apiserver-count int The number of apiservers running in the cluster, must be a positive number. (In use when --endpoint-reconciler-type=master-count is enabled.) (default 1)
kube# [ 6.629119] kube-apiserver[928]: --enable-aggregator-routing Turns on aggregator routing requests to endpoints IP rather than cluster IP.
kube# [ 6.630703] kube-apiserver[928]: --endpoint-reconciler-type string Use an endpoint reconciler (master-count, lease, none) (default "lease")
kube# [ 6.632305] kube-apiserver[928]: --event-ttl duration Amount of time to retain events. (default 1h0m0s)
kube# [ 6.633716] kube-apiserver[928]: --kubelet-certificate-authority string Path to a cert file for the certificate authority.
kube# [ 6.635168] kube-apiserver[928]: --kubelet-client-certificate string Path to a client cert file for TLS.
kube# [ 6.636606] kube-apiserver[928]: --kubelet-client-key string Path to a client key file for TLS.
kube# [ 6.637988] kube-apiserver[928]: --kubelet-https Use https for kubelet connections. (default true)
kube# [ 6.639417] kube-apiserver[928]: --kubelet-preferred-address-types strings List of the preferred NodeAddressTypes to use for kubelet connections. (default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP])
kube# [ 6.641467] kube-apiserver[928]: --kubelet-read-only-port uint DEPRECATED: kubelet port. (default 10255)
kube# [ 6.642795] kube-apiserver[928]: --kubelet-timeout duration Timeout for kubelet operations. (default 5s)
kube# [ 6.644195] kube-apiserver[928]: --kubernetes-service-node-port int If non-zero, the Kubernetes master service (which apiserver creates/maintains) will be of type NodePort, using this as the value of the port. If zero, the Kubernetes master service will be of type ClusterIP.
kube# [ 6.646682] kube-apiserver[928]: --max-connection-bytes-per-sec int If non-zero, throttle each user connection to this number of bytes/sec. Currently only applies to long-running requests.
kube# [ 6.648590] kube-apiserver[928]: --proxy-client-cert-file string Client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins. It is expected that this cert includes a signature from the CA in the --requestheader-client-ca-file flag. That CA is published in the 'extension-apiserver-authentication' configmap in the kube-system namespace. Components receiving calls from kube-aggregator should use that CA to perform their half of the mutual TLS verification.
kube# [ 6.653295] kube-apiserver[928]: --proxy-client-key-file string Private key for the client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins.
kube# [ 6.656060] kube-apiserver[928]: --service-account-signing-key-file string Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)
kube# [ 6.658433] kube-apiserver[928]: --service-cluster-ip-range ipNet A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods. (default 10.0.0.0/24)
kube# [ 6.660565] kube-apiserver[928]: --service-node-port-range portRange A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)
kube# [ 6.662674] kube-apiserver[928]: Global flags:
kube# [ 6.663489] kube-apiserver[928]: --alsologtostderr log to standard error as well as files
kube# [ 6.664966] kube-apiserver[928]: -h, --help help for kube-apiserver
kube# [ 6.666621] kube-apiserver[928]: --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
kube# [ 6.668535] kube-apiserver[928]: --log-dir string If non-empty, write log files in this directory
kube# [ 6.670082] kube-apiserver[928]: --log-file string If non-empty, use this log file
kube# [ 6.671332] kube-apiserver[928]: --log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
kube# [ 6.673226] kube-apiserver[928]: --log-flush-frequency duration Maximum number of seconds between log flushes (default 5s)
kube# [ 6.674592] kube-apiserver[928]: --logtostderr log to standard error instead of files (default true)
kube# [ 6.675982] kube-apiserver[928]: --skip-headers If true, avoid header prefixes in the log messages
kube# [ 6.677283] kube-apiserver[928]: --skip-log-headers If true, avoid headers when opening log files
kube# [ 6.678593] kube-apiserver[928]: --stderrthreshold severity logs at or above this threshold go to stderr (default 2)
kube# [ 6.679988] kube-apiserver[928]: -v, --v Level number for the log level verbosity
kube# [ 6.681224] kube-apiserver[928]: --version version[=true] Print version information and quit
kube# [ 6.682428] kube-apiserver[928]: --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
kube# [ 6.684114] kube-apiserver[928]: error: unable to load server certificate: open /var/lib/kubernetes/secrets/kube-apiserver.pem: no such file or directory
kube# [ 7.154534] dhcpcd[1140]: eth0: Router Advertisement from fe80::2
kube# [ 7.155674] dhcpcd[1140]: eth0: adding address fec0::5054:ff:fe12:3456/64
kube# [ 7.156793] dhcpcd[1140]: eth0: adding route to fec0::/64
kube# [ 7.157805] dhcpcd[1140]: eth0: adding default route via fe80::2
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# [    7.431962] dockerd[1146]: time="2020-01-27T01:28:35.381797596Z" level=info msg="Starting up"
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube#
kube: exit status 1
(0.05 seconds)
kube# [ 7.447784] dockerd[1146]: time="2020-01-27T01:28:35.398104417Z" level=info msg="libcontainerd: started new containerd process" pid=1206
kube# [ 7.449558] dockerd[1146]: time="2020-01-27T01:28:35.398712316Z" level=info msg="parsed scheme: \"unix\"" module=grpc
kube# [ 7.450998] dockerd[1146]: time="2020-01-27T01:28:35.398733268Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
kube# [ 7.452698] dockerd[1146]: time="2020-01-27T01:28:35.398757293Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
kube# [ 7.454698] dockerd[1146]: time="2020-01-27T01:28:35.398776570Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
kube# [ 7.998359] dockerd[1146]: time="2020-01-27T01:28:35.948679141Z" level=info msg="starting containerd" revision=.m version=
kube# [ 8.000265] dockerd[1146]: time="2020-01-27T01:28:35.948897046Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
kube# [ 8.001984] dockerd[1146]: time="2020-01-27T01:28:35.948968284Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
kube# [ 8.003606] dockerd[1146]: time="2020-01-27T01:28:35.949132550Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
kube# [ 8.006067] dockerd[1146]: time="2020-01-27T01:28:35.949167471Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
kube# [ 8.010790] dockerd[1146]: time="2020-01-27T01:28:35.961176260Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /run/current-system/kernel-modules/lib/modules/4.19.95\n": exit status 1"
kube# [ 8.013327] dockerd[1146]: time="2020-01-27T01:28:35.961195815Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
kube# [ 8.014806] dockerd[1146]: time="2020-01-27T01:28:35.961256158Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
kube# [ 8.016351] dockerd[1146]: time="2020-01-27T01:28:35.961425174Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
kube# [ 8.017780] dockerd[1146]: time="2020-01-27T01:28:35.961538317Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
kube# [ 8.019961] dockerd[1146]: time="2020-01-27T01:28:35.961555358Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
kube# [ 8.021353] dockerd[1146]: time="2020-01-27T01:28:35.961591676Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
kube# [ 8.023494] dockerd[1146]: time="2020-01-27T01:28:35.961605085Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /run/current-system/kernel-modules/lib/modules/4.19.95\n": exit status 1"
kube# [ 8.025690] dockerd[1146]: time="2020-01-27T01:28:35.961616819Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
kube# [ 8.046214] dockerd[1146]: time="2020-01-27T01:28:35.996588023Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
kube# [ 8.047983] dockerd[1146]: time="2020-01-27T01:28:35.996613445Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
kube# [ 8.049727] dockerd[1146]: time="2020-01-27T01:28:35.996641941Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
kube# [ 8.051612] dockerd[1146]: time="2020-01-27T01:28:35.996655350Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
kube# [ 8.053320] dockerd[1146]: time="2020-01-27T01:28:35.996667922Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
kube# [ 8.054989] dockerd[1146]: time="2020-01-27T01:28:35.996681890Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
kube# [ 8.056599] dockerd[1146]: time="2020-01-27T01:28:35.996701166Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
kube# [ 8.058179] dockerd[1146]: time="2020-01-27T01:28:35.996719325Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
kube# [ 8.059945] dockerd[1146]: time="2020-01-27T01:28:35.996738601Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
kube# [ 8.062075] dockerd[1146]: time="2020-01-27T01:28:35.996752010Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
kube# [ 8.063827] dockerd[1146]: time="2020-01-27T01:28:35.996830512Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
kube# [ 8.065300] dockerd[1146]: time="2020-01-27T01:28:35.996874093Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
kube# [ 8.066711] dockerd[1146]: time="2020-01-27T01:28:35.998909547Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
kube# [ 8.068350] dockerd[1146]: time="2020-01-27T01:28:35.998945865Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
kube# [ 8.070005] dockerd[1146]: time="2020-01-27T01:28:35.998979109Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
kube# [ 8.071598] dockerd[1146]: time="2020-01-27T01:28:35.999002576Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
kube# [ 8.073319] dockerd[1146]: time="2020-01-27T01:28:35.999023808Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
kube# [ 8.074834] dockerd[1146]: time="2020-01-27T01:28:35.999036100Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
kube# [ 8.076401] dockerd[1146]: time="2020-01-27T01:28:35.999048392Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
kube# [ 8.077960] dockerd[1146]: time="2020-01-27T01:28:35.999067109Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
kube# [ 8.079532] dockerd[1146]: time="2020-01-27T01:28:35.999086944Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
kube# [ 8.081119] dockerd[1146]: time="2020-01-27T01:28:35.999112366Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
kube# [ 8.082778] dockerd[1146]: time="2020-01-27T01:28:35.999131642Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
kube# [ 8.084419] dockerd[1146]: time="2020-01-27T01:28:35.999223833Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
kube# [ 8.086058] dockerd[1146]: time="2020-01-27T01:28:35.999241712Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
kube# [ 8.087758] dockerd[1146]: time="2020-01-27T01:28:35.999254563Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
kube# [ 8.089446] dockerd[1146]: time="2020-01-27T01:28:35.999272722Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
kube# [ 8.091273] dockerd[1146]: time="2020-01-27T01:28:36.002608621Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
kube# [ 8.092806] dockerd[1146]: time="2020-01-27T01:28:36.002639910Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
kube# [ 8.094551] dockerd[1146]: time="2020-01-27T01:28:36.002658906Z" level=info msg="containerd successfully booted in 0.054546s"
kube# [ 8.096270] dockerd[1146]: time="2020-01-27T01:28:36.034805450Z" level=info msg="parsed scheme: \"unix\"" module=grpc
kube# [ 8.097568] dockerd[1146]: time="2020-01-27T01:28:36.034830593Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
kube# [ 8.099234] dockerd[1146]: time="2020-01-27T01:28:36.034851545Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
kube# [ 8.101292] dockerd[1146]: time="2020-01-27T01:28:36.034868866Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
kube# [ 8.102870] dockerd[1146]: time="2020-01-27T01:28:36.042936931Z" level=info msg="parsed scheme: \"unix\"" module=grpc
kube# [ 8.104323] dockerd[1146]: time="2020-01-27T01:28:36.042968778Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
kube# [ 8.105976] dockerd[1146]: time="2020-01-27T01:28:36.042989451Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
kube# [ 8.107985] dockerd[1146]: time="2020-01-27T01:28:36.043016550Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
kube# [ 8.153344] dockerd[1146]: time="2020-01-27T01:28:36.103706665Z" level=warning msg="Your kernel does not support cgroup rt period"
kube# [ 8.154729] dockerd[1146]: time="2020-01-27T01:28:36.103734043Z" level=warning msg="Your kernel does not support cgroup rt runtime"
kube# [ 8.156180] dockerd[1146]: time="2020-01-27T01:28:36.103833776Z" level=info msg="Loading containers: start."
kube# [ 8.307350] Initializing XFRM netlink socket
kube# [ 8.293378] dockerd[1146]: time="2020-01-27T01:28:36.243727534Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
kube# [ 8.349961] IPv6: ADDRCONF(NETDEV_UP): docker0: link is not ready
kube# [ 8.295679] systemd-udevd[701]: Using default interface naming scheme 'v243'.
kube# [ 8.297414] systemd-udevd[701]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
kube# [ 8.329083] dockerd[1146]: time="2020-01-27T01:28:36.279422294Z" level=info msg="Loading containers: done."
kube# [ 8.345421] dhcpcd[1140]: docker0: waiting for carrier
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube: exit status 1
(0.06 seconds)
kube# [ 8.495515] dockerd[1146]: time="2020-01-27T01:28:36.445538099Z" level=info msg="Docker daemon" commit=633a0ea838f10e000b7c6d6eed1623e6e988b5bc graphdriver(s)=overlay2 version=19.03.5
kube# [ 8.497341] dockerd[1146]: time="2020-01-27T01:28:36.445641185Z" level=info msg="Daemon has completed initialization"
kube# [ 8.550382] systemd[1]: Started Docker Application Container Engine.
kube# [ 8.551775] systemd[1]: Starting Kubernetes Kubelet Service...
kube# [ 8.553275] dockerd[1146]: time="2020-01-27T01:28:36.500291985Z" level=info msg="API listen on /run/docker.sock"
kube# [ 8.557341] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1304]: Seeding docker image: /nix/store/zfkrn9hmwiay3fwj7ilwv52rd6myn4i1-docker-image-pause.tar.gz
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube: exit status 1
(0.05 seconds)
kube# [ 9.587619] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1304]: Loaded image: pause:latest
kube# [ 9.590017] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1304]: Seeding docker image: /nix/store/ggrzs3gzv69xzk02ckzijc2caqv738kk-docker-image-coredns-coredns-1.5.0.tar
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube: exit status 1
(0.05 seconds)
kube# [ 10.613021] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1304]: Loaded image: coredns/coredns:1.5.0
kube# [ 10.622304] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1304]: rm: cannot remove '/opt/cni/bin/*': No such file or directory
kube# [ 10.623814] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1304]: Linking cni package: /nix/store/9pqia3j6lxz57qa36w2niphr1f5vsirr-cni-plugins-0.8.2
kube# [ 10.631782] systemd[1]: Started Kubernetes Kubelet Service.
kube# [ 10.633183] systemd[1]: Reached target Kubernetes.
kube# [ 10.722662] systemd[1]: Reached target Multi-User System.
kube# [ 10.724246] systemd[1]: Startup finished in 2.462s (kernel) + 8.169s (userspace) = 10.632s.
kube# [ 10.726004] systemd[1]: kube-proxy.service: Service RestartSec=5s expired, scheduling restart.
kube# [ 10.727570] systemd[1]: kube-proxy.service: Scheduled restart job, restart counter is at 1.
kube# [ 10.729518] systemd[1]: Stopped Kubernetes Proxy Service.
kube# [ 10.730949] systemd[1]: Started Kubernetes Proxy Service.
kube# [ 10.769495] kube-proxy[1463]: W0127 01:28:38.719393 1463 server.go:216] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
kube# [ 10.783290] kube-proxy[1463]: W0127 01:28:38.733640 1463 proxier.go:500] Failed to read file /lib/modules/4.19.95/modules.builtin with error open /lib/modules/4.19.95/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 10.786139] kube-proxy[1463]: W0127 01:28:38.733968 1463 proxier.go:513] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 10.788397] kube-proxy[1463]: W0127 01:28:38.734232 1463 proxier.go:513] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 10.790604] kube-proxy[1463]: W0127 01:28:38.736266 1463 proxier.go:513] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 10.792987] kube-proxy[1463]: W0127 01:28:38.736502 1463 proxier.go:513] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 10.795273] kube-proxy[1463]: W0127 01:28:38.736805 1463 proxier.go:513] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 10.803095] kube-proxy[1463]: F0127 01:28:38.753441 1463 server.go:449] invalid configuration: [unable to read client-cert /var/lib/kubernetes/secrets/kube-proxy-client.pem for kube-proxy due to open /var/lib/kubernetes/secrets/kube-proxy-client.pem: no such file or directory, unable to read client-key /var/lib/kubernetes/secrets/kube-proxy-client-key.pem for kube-proxy due to open /var/lib/kubernetes/secrets/kube-proxy-client-key.pem: no such file or directory, unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory]
kube# [ 10.807754] systemd[1]: kube-proxy.service: Main process exited, code=exited, status=255/EXCEPTION
kube# [ 10.809096] systemd[1]: kube-proxy.service: Failed with result 'exit-code'.
kube# [ 11.314410] systemd[1]: kube-scheduler.service: Service RestartSec=5s expired, scheduling restart.
kube# [ 11.315906] systemd[1]: kube-scheduler.service: Scheduled restart job, restart counter is at 1.
kube# [ 11.317254] systemd[1]: kube-apiserver.service: Service RestartSec=5s expired, scheduling restart.
kube# [ 11.318749] systemd[1]: kube-apiserver.service: Scheduled restart job, restart counter is at 1.
kube# [ 11.320268] systemd[1]: Stopped Kubernetes Scheduler Service.
kube# [ 11.321401] systemd[1]: Stopped Kubernetes APIServer Service.
kube# [ 11.322291] systemd[1]: Started Kubernetes APIServer Service.
kube# [ 11.324129] systemd[1]: Started Kubernetes Scheduler Service.
kube# [ 11.374085] kube-apiserver[1497]: Flag --insecure-bind-address has been deprecated, This flag will be removed in a future version.
kube# [ 11.375784] kube-apiserver[1497]: Flag --insecure-port has been deprecated, This flag will be removed in a future version.
kube# [ 11.377415] kube-apiserver[1497]: I0127 01:28:39.324107 1497 server.go:560] external host was not specified, using 192.168.1.1
kube# [ 11.379056] kube-apiserver[1497]: I0127 01:28:39.324281 1497 server.go:147] Version: v1.15.6
kube# [ 11.380395] kube-apiserver[1497]: Error: unable to load server certificate: open /var/lib/kubernetes/secrets/kube-apiserver.pem: no such file or directory
kube# [ 11.382115] kube-apiserver[1497]: Usage:
kube# [ 11.383088] kube-apiserver[1497]: kube-apiserver [flags]
kube# [ 11.384108] kube-apiserver[1497]: Generic flags:
kube# [ 11.385040] kube-apiserver[1497]: --advertise-address ip The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.
kube# [ 11.387773] kube-apiserver[1497]: --cloud-provider-gce-lb-src-cidrs cidrs CIDRs opened in GCE firewall for LB traffic proxy & health checks (default 130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16)
kube# [ 11.389766] kube-apiserver[1497]: --cors-allowed-origins strings List of allowed origins for CORS, comma separated. An allowed origin can be a regular expression to support subdomain matching. If this list is empty CORS will not be enabled.
kube# [ 11.392344] kube-apiserver[1497]: --default-not-ready-toleration-seconds int Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration. (default 300)
kube# [ 11.394946] kube-apiserver[1497]: --default-unreachable-toleration-seconds int Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration. (default 300)
kube# [ 11.397461] kube-apiserver[1497]: --enable-inflight-quota-handler If true, replace the max-in-flight handler with an enhanced one that queues and dispatches with priority and fairness
kube# [ 11.399693] kube-apiserver[1497]: --external-hostname string The hostname to use when generating externalized URLs for this master (e.g. Swagger API Docs).
kube# [ 11.401481] kube-apiserver[1497]: --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
kube# [ 11.403275] systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=1/FAILURE
kube# [ 11.404640] kube-apiserver[1497]: APIListChunking=true|false (BETA - default=true)
kube# [ 11.406162] kube-apiserver[1497]: APIResponseCompression=true|false (ALPHA - default=false)
kube# [ 11.407612] kube-apiserver[1497]: AllAlpha=true|false (ALPHA - default=false)
kube# [ 11.408990] kube-apiserver[1497]: AppArmor=true|false (BETA - default=true)
kube# [ 11.410392] kube-apiserver[1497]: AttachVolumeLimit=true|false (BETA - default=true)
kube# [ 11.411827] kube-apiserver[1497]: BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
kube# [ 11.413541] kube-apiserver[1497]: BlockVolume=true|false (BETA - default=true)
kube# [ 11.415138] kube-apiserver[1497]: BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
kube# [ 11.416742] kube-apiserver[1497]: CPUManager=true|false (BETA - default=true)
kube# [ 11.418221] kube-apiserver[1497]: CRIContainerLogRotation=true|false (BETA - default=true)
kube# [ 11.419770] kube-apiserver[1497]: CSIBlockVolume=true|false (BETA - default=true)
kube# [ 11.421261] kube-apiserver[1497]: CSIDriverRegistry=true|false (BETA - default=true)
kube# [ 11.422609] kube-apiserver[1497]: CSIInlineVolume=true|false (ALPHA - default=false)
kube# [ 11.424081] kube-apiserver[1497]: CSIMigration=true|false (ALPHA - default=false)
kube# [ 11.425521] kube-apiserver[1497]: CSIMigrationAWS=true|false (ALPHA - default=false)
kube# [ 11.426902] kube-apiserver[1497]: CSIMigrationAzureDisk=true|false (ALPHA - default=false)
kube# [ 11.428392] kube-apiserver[1497]: CSIMigrationAzureFile=true|false (ALPHA - default=false)
kube# [ 11.429953] kube-apiserver[1497]: CSIMigrationGCE=true|false (ALPHA - default=false)
kube# [ 11.431372] kube-apiserver[1497]: CSIMigrationOpenStack=true|false (ALPHA - default=false)
kube# [ 11.432909] systemd[1]: kube-apiserver.service: Failed with result 'exit-code'.
kube# [ 11.434082] kube-apiserver[1497]: CSINodeInfo=true|false (BETA - default=true)
kube# [ 11.435410] kube-apiserver[1497]: CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
kube# [ 11.436961] kube-apiserver[1497]: CustomResourceDefaulting=true|false (ALPHA - default=false)
kube# [ 11.438412] kube-apiserver[1497]: CustomResourcePublishOpenAPI=true|false (BETA - default=true)
kube# [ 11.439894] kube-apiserver[1497]: CustomResourceSubresources=true|false (BETA - default=true)
kube# [ 11.441739] kube-apiserver[1497]: CustomResourceValidation=true|false (BETA - default=true)
kube# [ 11.443512] kube-apiserver[1497]: CustomResourceWebhookConversion=true|false (BETA - default=true)
kube# [ 11.445380] kube-apiserver[1497]: DebugContainers=true|false (ALPHA - default=false)
kube# [ 11.447074] kube-apiserver[1497]: DevicePlugins=true|false (BETA - default=true)
kube# [ 11.448428] kube-apiserver[1497]: DryRun=true|false (BETA - default=true)
kube# [ 11.449822] kube-apiserver[1497]: DynamicAuditing=true|false (ALPHA - default=false)
kube# [ 11.451271] kube-apiserver[1497]: DynamicKubeletConfig=true|false (BETA - default=true)
kube# [ 11.452691] kube-apiserver[1497]: ExpandCSIVolumes=true|false (ALPHA - default=false)
kube# [ 11.454136] kube-apiserver[1497]: ExpandInUsePersistentVolumes=true|false (BETA - default=true)
kube# [ 11.455626] kube-apiserver[1497]: ExpandPersistentVolumes=true|false (BETA - default=true)
kube# [ 11.457028] kube-apiserver[1497]: ExperimentalCriticalPodAnnotation=true|false (ALPHA - default=false)
kube# [ 11.458545] kube-apiserver[1497]: ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
kube# [ 11.460319] kube-apiserver[1497]: HyperVContainer=true|false (ALPHA - default=false)
kube# [ 11.461991] kube-apiserver[1497]: KubeletPodResources=true|false (BETA - default=true)
kube# [ 11.463553] kube-apiserver[1497]: LocalStorageCapacityIsolation=true|false (BETA - default=true)
kube# [ 11.465233] kube-apiserver[1497]: LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
kube# [ 11.466819] kube-apiserver[1497]: MountContainers=true|false (ALPHA - default=false)
kube# [ 11.468246] kube-apiserver[1497]: NodeLease=true|false (BETA - default=true)
kube# [ 11.469601] kube-apiserver[1497]: NonPreemptingPriority=true|false (ALPHA - default=false)
kube# [ 11.471032] kube-apiserver[1497]: PodShareProcessNamespace=true|false (BETA - default=true)
kube# [ 11.472411] kube-apiserver[1497]: ProcMountType=true|false (ALPHA - default=false)
kube# [ 11.473739] kube-apiserver[1497]: QOSReserved=true|false (ALPHA - default=false)
kube# [ 11.475152] kube-apiserver[1497]: RemainingItemCount=true|false (ALPHA - default=false)
kube# [ 11.476539] kube-apiserver[1497]: RequestManagement=true|false (ALPHA - default=false)
kube# [ 11.477948] kube-apiserver[1497]: ResourceLimitsPriorityFunction=true|false (ALPHA - default=false)
kube# [ 11.479435] kube-apiserver[1497]: ResourceQuotaScopeSelectors=true|false (BETA - default=true)
kube# [ 11.481011] kube-apiserver[1497]: RotateKubeletClientCertificate=true|false (BETA - default=true)
kube# [ 11.482497] kube-apiserver[1497]: RotateKubeletServerCertificate=true|false (BETA - default=true)
kube# [ 11.483989] kube-apiserver[1497]: RunAsGroup=true|false (BETA - default=true)
kube# [ 11.485378] kube-apiserver[1497]: RuntimeClass=true|false (BETA - default=true)
kube# [ 11.486839] kube-apiserver[1497]: SCTPSupport=true|false (ALPHA - default=false)
kube# [ 11.488310] kube-apiserver[1497]: ScheduleDaemonSetPods=true|false (BETA - default=true)
kube# [ 11.489739] kube-apiserver[1497]: ServerSideApply=true|false (ALPHA - default=false)
kube# [ 11.491190] kube-apiserver[1497]: ServiceLoadBalancerFinalizer=true|false (ALPHA - default=false)
kube# [ 11.492647] kube-apiserver[1497]: ServiceNodeExclusion=true|false (ALPHA - default=false)
kube# [ 11.494182] kube-apiserver[1497]: StorageVersionHash=true|false (BETA - default=true)
kube# [ 11.495627] kube-apiserver[1497]: StreamingProxyRedirects=true|false (BETA - default=true)
kube# [ 11.497182] kube-apiserver[1497]: SupportNodePidsLimit=true|false (BETA - default=true)
kube# [ 11.498657] kube-apiserver[1497]: SupportPodPidsLimit=true|false (BETA - default=true)
kube# [ 11.500076] kube-apiserver[1497]: Sysctls=true|false (BETA - default=true)
kube# [ 11.501410] kube-apiserver[1497]: TTLAfterFinished=true|false (ALPHA - default=false)
kube# [ 11.503230] kube-apiserver[1497]: TaintBasedEvictions=true|false (BETA - default=true)
kube# [ 11.504743] kube-apiserver[1497]: TaintNodesByCondition=true|false (BETA - default=true)
kube# [ 11.506164] kube-apiserver[1497]: TokenRequest=true|false (BETA - default=true)
kube# [ 11.507555] kube-apiserver[1497]: TokenRequestProjection=true|false (BETA - default=true)
kube# [ 11.509002] kube-apiserver[1497]: ValidateProxyRedirects=true|false (BETA - default=true)
kube# [ 11.510435] kube-apiserver[1497]: VolumePVCDataSource=true|false (ALPHA - default=false)
kube# [ 11.511851] kube-apiserver[1497]: VolumeSnapshotDataSource=true|false (ALPHA - default=false)
kube# [ 11.513383] kube-apiserver[1497]: VolumeSubpathEnvExpansion=true|false (BETA - default=true)
kube# [ 11.514834] kube-apiserver[1497]: WatchBookmark=true|false (ALPHA - default=false)
kube# [ 11.516233] kube-apiserver[1497]: WinDSR=true|false (ALPHA - default=false)
kube# [ 11.517585] kube-apiserver[1497]: WinOverlay=true|false (ALPHA - default=false)
kube# [ 11.518949] kube-apiserver[1497]: WindowsGMSA=true|false (ALPHA - default=false)
kube# [ 11.520335] kube-apiserver[1497]: --master-service-namespace string DEPRECATED: the namespace from which the kubernetes master services should be injected into pods. (default "default")
kube# [ 11.522130] kube-apiserver[1497]: --max-mutating-requests-inflight int The maximum number of mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. (default 200)
kube# [ 11.524165] kube-apiserver[1497]: --max-requests-inflight int The maximum number of non-mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. (default 400)
kube# [ 11.526223] kube-apiserver[1497]: --min-request-timeout int An optional field indicating the minimum number of seconds a handler must keep a request open before timing it out. Currently only honored by the watch request handler, which picks a randomized value above this number as the connection timeout, to spread out load. (default 1800)
kube# [ 11.528995] kube-apiserver[1497]: --request-timeout duration An optional field indicating the duration a handler must keep a request open before timing it out. This is the default request timeout for requests but may be overridden by flags such as --min-request-timeout for specific types of requests. (default 1m0s)
kube# [ 11.531668] kube-apiserver[1497]: --target-ram-mb int Memory limit for apiserver in MB (used to configure sizes of caches, etc.)
kube# [ 11.533231] kube-apiserver[1497]: Etcd flags:
kube# [ 11.534076] kube-apiserver[1497]: --default-watch-cache-size int Default watch cache size. If zero, watch cache will be disabled for resources that do not have a default watch size set. (default 100)
kube# [ 11.535974] kube-apiserver[1497]: --delete-collection-workers int Number of workers spawned for DeleteCollection call. These are used to speed up namespace cleanup. (default 1)
kube# [ 11.537740] kube-apiserver[1497]: --enable-garbage-collector Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-controller-manager. (default true)
kube# [ 11.539611] kube-apiserver[1497]: --encryption-provider-config string The file containing configuration for encryption providers to be used for storing secrets in etcd
kube# [ 11.541219] kube-apiserver[1497]: --etcd-cafile string SSL Certificate Authority file used to secure etcd communication.
kube# [ 11.542625] kube-apiserver[1497]: --etcd-certfile string SSL certification file used to secure etcd communication.
kube# [ 11.544036] kube-apiserver[1497]: --etcd-compaction-interval duration The interval of compaction requests. If 0, the compaction request from apiserver is disabled. (default 5m0s)
kube# [ 11.545836] kube-apiserver[1497]: --etcd-count-metric-poll-period duration Frequency of polling etcd for number of resources per type. 0 disables the metric collection. (default 1m0s)
kube# [ 11.547744] kube-apiserver[1497]: --etcd-keyfile string SSL key file used to secure etcd communication.
kube# [ 11.549236] kube-apiserver[1497]: --etcd-prefix string The prefix to prepend to all resource paths in etcd. (default "/registry")
kube# [ 11.550938] kube-apiserver[1497]: --etcd-servers strings List of etcd servers to connect with (scheme://ip:port), comma separated.
kube# [ 11.552574] kube-apiserver[1497]: --etcd-servers-overrides strings Per-resource etcd servers overrides, comma separated. The individual override format: group/resource#servers, where servers are URLs, semicolon separated.
kube# [ 11.554762] kube-apiserver[1497]: --storage-backend string The storage backend for persistence. Options: 'etcd3' (default).
kube# [ 11.556406] kube-apiserver[1497]: --storage-media-type string The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting. (default "application/vnd.kubernetes.protobuf")
kube# [ 11.559028] kube-apiserver[1497]: --watch-cache Enable watch caching in the apiserver (default true)
kube# [ 11.560554] kube-apiserver[1497]: --watch-cache-sizes strings Watch cache size settings for some resources (pods, nodes, etc.), comma separated. The individual setting format: resource[.group]#size, where resource is lowercase plural (no version), group is omitted for resources of apiVersion v1 (the legacy core API) and included for others, and size is a number. It takes effect when watch-cache is enabled. Some resources (replicationcontrollers, endpoints, nodes, pods, services, apiservices.apiregistration.k8s.io) have system defaults set by heuristics, others default to default-watch-cache-size
kube# [ 11.565300] kube-apiserver[1497]: Secure serving flags:
kube# [ 11.566170] kube-apiserver[1497]: --bind-address ip The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank, all interfaces will be used (0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 0.0.0.0)
kube# [ 11.568940] kube-apiserver[1497]: --cert-dir string The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored. (default "/var/run/kubernetes")
kube# [ 11.570982] kube-apiserver[1497]: --http2-max-streams-per-connection int The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
kube# [ 11.572824] kube-apiserver[1497]: --secure-port int The port on which to serve HTTPS with authentication and authorization.It cannot be switched off with 0. (default 6443)
kube# [ 11.574619] kube-apiserver[1497]: --tls-cert-file string File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
kube# [ 11.577655] kube-apiserver[1497]: --tls-cipher-suites strings Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be use. Possible values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA
kube# [ 11.584089] kube-apiserver[1497]: --tls-min-version string Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
kube# [ 11.586189] kube-apiserver[1497]: --tls-private-key-file string File containing the default x509 private key matching --tls-cert-file.
kube# [ 11.587610] kube-apiserver[1497]: --tls-sni-cert-key namedCertKey A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])
kube# [ 11.591976] kube-apiserver[1497]: Insecure serving flags:
kube# [ 11.592857] kube-apiserver[1497]: --address ip The IP address on which to serve the insecure --port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 127.0.0.1) (DEPRECATED: see --bind-address instead.)
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# [ 11.595020] kube-apiserver[1497]: --insecure-bind-address ip The IP address on which to serve the --insecure-port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 127.0.0.1) (DEPRECATED: This flag will be removed in a future version.)
kube# [ 11.597210] kube-apiserver[1497]: --insecure-port int The port on which to serve unsecured, unauthenticated access. (default 8080) (DEPRECATED: This flag will be removed in a future version.)
kube# [ 11.598985] kube-apiserver[1497]: --port int The port on which to serve unsecured, unauthenticated access. Set to 0 to disable. (default 8080) (DEPRECATED: see --secure-port instead.)
kube# [ 11.600816] kube-apiserver[1497]: Auditing flags:
kube# [ 11.601987] kube-apiserver[1497]: --audit-dynamic-configuration Enables dynamic audit configuration. This feature also requires the DynamicAuditing feature flag
kube# [ 11.604043] kube-apiserver[1497]: --audit-log-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
kube# [ 11.606161] kube-apiserver[1497]: --audit-log-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 1)
kube# [ 11.608035] kube-apiserver[1497]: --audit-log-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.
kube# [ 11.610574] kube-apiserver[1497]: --audit-log-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.
kube# [ 11.612537] kube-apiserver[1497]: --audit-log-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode.
kube# [ 11.614115] kube-apiserver[1497]: --audit-log-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode.
kube# [ 11.615553] kube-apiserver[1497]: --audit-log-format string Format of saved audits. "legacy" indicates 1-line text format for each event. "json" indicates structured json format. Known formats are legacy,json. (default "json")
kube# [ 11.617595] kube-apiserver[1497]: --audit-log-maxage int The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.
kube# [ 11.619716] kube-apiserver[1497]: --audit-log-maxbackup int The maximum number of old audit log files to retain.
kube# [ 11.621203] kube-apiserver[1497]: --audit-log-maxsize int The maximum size in megabytes of the audit log file before it gets rotated.
kube# [ 11.622775] kube-apiserver[1497]: --audit-log-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "blocking")
kube# [ 11.625388] kube-apiserver[1497]: --audit-log-path string If set, all requests coming to the apiserver will be logged to this file. '-' means standard out.
kube# [ 11.627113] kube-apiserver[1497]: --audit-log-truncate-enabled Whether event and batch truncating is enabled.
kube# [ 11.628629] kube-apiserver[1497]: --audit-log-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
kube# [ 11.631086] kube-apiserver[1497]: --audit-log-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
kube# [ 11.634057] kube-apiserver[1497]: --audit-log-version string API group and version used for serializing audit events written to log. (default "audit.k8s.io/v1")
kube# [ 11.635939] kube-apiserver[1497]: --audit-policy-file string Path to the file that defines the audit policy configuration.
kube# [ 11.637409] kube-apiserver[1497]: --audit-webhook-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
kube# [ 11.639298] kube-apiserver[1497]: --audit-webhook-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 400)
kube# [ 11.640966] kube-apiserver[1497]: --audit-webhook-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode. (default 30s)
kube# [ 11.642998] kube-apiserver[1497]: --audit-webhook-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode. (default 15)
kube# [ 11.645025] kube-apiserver[1497]: --audit-webhook-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode. (default true)
kube# [ 11.646988] kube-apiserver[1497]: --audit-webhook-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode. (default 10)
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube#
kube# [ 11.652837] kube-apiserver[1497]: --audit-webhook-config-file string Path to a kubeconfig formatted file that defines the audit webhook configuration.
kube# [ 11.654677] kube-apiserver[1497]: --audit-webhook-initial-backoff duration The amount of time to wait before retrying the first failed request. (default 10s)
kube: exit status 1
(0.06 seconds)
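The `kubectl` attempt above fails because the kubeconfig references certificate files that do not exist yet (the errors list `cluster-admin.pem`, `cluster-admin-key.pem`, and `ca.pem` under `/var/lib/kubernetes/secrets`). A minimal sketch of how one might confirm that from inside the VM, using only the paths taken from the log:

```shell
# Check whether the certs named in the kubeconfig errors actually exist.
# Paths are copied verbatim from the log output above.
for f in /var/lib/kubernetes/secrets/cluster-admin.pem \
         /var/lib/kubernetes/secrets/cluster-admin-key.pem \
         /var/lib/kubernetes/secrets/ca.pem; do
  if [ -e "$f" ]; then
    echo "present: $f"
  else
    echo "missing: $f"
  fi
done
```

On this VM all three would report missing, matching the "no such file or directory" errors, which in turn explains why the test driver's `kubectl get node` loop keeps retrying.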
kube# [ 11.656410] kube-apiserver[1497]: --audit-webhook-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "batch")
kube# [ 11.659241] kube-apiserver[1497]: --audit-webhook-truncate-enabled Whether event and batch truncating is enabled.
kube# [ 11.660717] kube-apiserver[1497]: --audit-webhook-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
kube# [ 11.663503] kube-apiserver[1497]: --audit-webhook-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
kube# [ 11.666404] kube-apiserver[1497]: --audit-webhook-version string API group and version used for serializing audit events written to webhook. (default "audit.k8s.io/v1")
kube# [ 11.668326] kube-apiserver[1497]: Features flags:
kube# [ 11.669283] kube-apiserver[1497]: --contention-profiling Enable lock contention profiling, if profiling is enabled
kube# [ 11.670664] kube-apiserver[1497]: --profiling Enable profiling via web interface host:port/debug/pprof/ (default true)
kube# [ 11.672248] kube-apiserver[1497]: Authentication flags:
kube# [ 11.673162] kube-apiserver[1497]: --anonymous-auth Enables anonymous requests to the secure port of the API server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated. (default true)
kube# [ 11.676313] kube-apiserver[1497]: --api-audiences strings Identifiers of the API. The service account token authenticator will validate that tokens used against the API are bound to at least one of these audiences. If the --service-account-issuer flag is configured and this flag is not, this field defaults to a single element list containing the issuer URL .
kube# [ 11.679631] kube-apiserver[1497]: --authentication-token-webhook-cache-ttl duration The duration to cache responses from the webhook token authenticator. (default 2m0s)
kube# [ 11.681464] kube-apiserver[1497]: --authentication-token-webhook-config-file string File with webhook configuration for token authentication in kubeconfig format. The API server will query the remote service to determine authentication for bearer tokens.
kube# [ 11.683885] kube-apiserver[1497]: --basic-auth-file string If set, the file that will be used to admit requests to the secure port of the API server via http basic authentication.
kube# [ 11.686038] kube-apiserver[1497]: --client-ca-file string If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
kube# [ 11.688633] kube-apiserver[1497]: --enable-bootstrap-token-auth Enable to allow secrets of type 'bootstrap.kubernetes.io/token' in the 'kube-system' namespace to be used for TLS bootstrapping authentication.
kube# [ 11.691187] kube-scheduler[1498]: I0127 01:28:39.615040 1498 serving.go:319] Generated self-signed cert in-memory
kube# [ 11.692695] kube-apiserver[1497]: --oidc-ca-file string If set, the OpenID server's certificate will be verified by one of the authorities in the oidc-ca-file, otherwise the host's root CA set will be used.
kube# [ 11.695054] kube-apiserver[1497]: --oidc-client-id string The client ID for the OpenID Connect client, must be set if oidc-issuer-url is set.
kube# [ 11.696945] kube-apiserver[1497]: --oidc-groups-claim string If provided, the name of a custom OpenID Connect claim for specifying user groups. The claim value is expected to be a string or array of strings. This flag is experimental, please see the authentication documentation for further details.
kube# [ 11.699761] kube-apiserver[1497]: --oidc-groups-prefix string If provided, all groups will be prefixed with this value to prevent conflicts with other authentication strategies.
kube# [ 11.701766] kube-apiserver[1497]: --oidc-issuer-url string The URL of the OpenID issuer, only HTTPS scheme will be accepted. If set, it will be used to verify the OIDC JSON Web Token (JWT).
kube# [ 11.703959] kube-apiserver[1497]: --oidc-required-claim mapStringString A key=value pair that describes a required claim in the ID Token. If set, the claim is verified to be present in the ID Token with a matching value. Repeat this flag to specify multiple claims.
kube# [ 11.706451] kube-apiserver[1497]: --oidc-signing-algs strings Comma-separated list of allowed JOSE asymmetric signing algorithms. JWTs with a 'alg' header value not in this list will be rejected. Values are defined by RFC 7518 https://tools.ietf.org/html/rfc7518#section-3.1. (default [RS256])
kube# [ 11.709196] kube-apiserver[1497]: --oidc-username-claim string The OpenID claim to use as the user name. Note that claims other than the default ('sub') is not guaranteed to be unique and immutable. This flag is experimental, please see the authentication documentation for further details. (default "sub")
kube# [ 11.712164] kube-apiserver[1497]: --oidc-username-prefix string If provided, all usernames will be prefixed with this value. If not provided, username claims other than 'email' are prefixed by the issuer URL to avoid clashes. To skip any prefixing, provide the value '-'.
kube# [ 11.714787] kube-apiserver[1497]: --requestheader-allowed-names strings List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
kube# [ 11.717729] kube-apiserver[1497]: --requestheader-client-ca-file string Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
kube# [ 11.720696] kube-apiserver[1497]: --requestheader-extra-headers-prefix strings List of request header prefixes to inspect. X-Remote-Extra- is suggested.
kube# [ 11.722446] kube-apiserver[1497]: --requestheader-group-headers strings List of request headers to inspect for groups. X-Remote-Group is suggested.
kube# [ 11.724177] kube-apiserver[1497]: --requestheader-username-headers strings List of request headers to inspect for usernames. X-Remote-User is common.
kube# [ 11.725856] kube-apiserver[1497]: --service-account-issuer string Identifier of the service account token issuer. The issuer will assert this identifier in "iss" claim of issued tokens. This value is a string or URI.
kube# [ 11.728138] kube-apiserver[1497]: --service-account-key-file stringArray File containing PEM-encoded x509 RSA or ECDSA private or public keys, used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. If unspecified, --tls-private-key-file is used. Must be specified when --service-account-signing-key is provided
kube# [ 11.731647] kube-apiserver[1497]: --service-account-lookup If true, validate ServiceAccount tokens exist in etcd as part of authentication. (default true)
kube# [ 11.733587] kube-apiserver[1497]: --service-account-max-token-expiration duration The maximum validity duration of a token created by the service account token issuer. If an otherwise valid TokenRequest with a validity duration larger than this value is requested, a token will be issued with a validity duration of this value.
kube# [ 11.736474] kube-apiserver[1497]: --token-auth-file string If set, the file that will be used to secure the secure port of the API server via token authentication.
kube# [ 11.738460] kube-apiserver[1497]: Authorization flags:
kube# [ 11.739369] kube-apiserver[1497]: --authorization-mode strings Ordered list of plug-ins to do authorization on secure port. Comma-delimited list of: AlwaysAllow,AlwaysDeny,ABAC,Webhook,RBAC,Node. (default [AlwaysAllow])
kube# [ 11.741710] kube-apiserver[1497]: --authorization-policy-file string File with authorization policy in json line by line format, used with --authorization-mode=ABAC, on the secure port.
kube# [ 11.743778] kube-apiserver[1497]: --authorization-webhook-cache-authorized-ttl duration The duration to cache 'authorized' responses from the webhook authorizer. (default 5m0s)
kube# [ 11.745651] kube-apiserver[1497]: --authorization-webhook-cache-unauthorized-ttl duration The duration to cache 'unauthorized' responses from the webhook authorizer. (default 30s)
kube# [ 11.747459] kube-apiserver[1497]: --authorization-webhook-config-file string File with webhook configuration in kubeconfig format, used with --authorization-mode=Webhook. The API server will query the remote service to determine access on the API server's secure port.
kube# [ 11.750039] kube-apiserver[1497]: Cloud provider flags:
kube# [ 11.750946] kube-apiserver[1497]: --cloud-config string The path to the cloud provider configuration file. Empty string for no configuration file.
kube# [ 11.752531] kube-apiserver[1497]: --cloud-provider string The provider for cloud services. Empty string for no provider.
kube# [ 11.753845] kube-apiserver[1497]: Api enablement flags:
kube# [ 11.754726] kube-apiserver[1497]: --runtime-config mapStringString A set of key=value pairs that describe runtime configuration that may be passed to apiserver. <group>/<version> (or <version> for the core group) key can be used to turn on/off specific api versions. api/all is special key to control all api versions, be careful setting it false, unless you know what you do. api/legacy is deprecated, we will remove it in the future, so stop using it. (default )
kube# [ 11.758098] kube-apiserver[1497]: Admission flags:
kube# [ 11.758848] kube-apiserver[1497]: --admission-control strings Admission is divided into two phases. In the first phase, only mutating admission plugins run. In the second phase, only validating admission plugins run. The names in the below list may represent a validating plugin, a mutating plugin, or both. The order of plugins in which they are passed to this flag does not matter. Comma-delimited list of: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. (DEPRECATED: Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.)
kube# [ 11.766561] kube-apiserver[1497]: --admission-control-config-file string File with admission control configuration.
kube# [ 11.767825] kube-apiserver[1497]: --disable-admission-plugins strings admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
kube# [ 11.775441] kube-apiserver[1497]: --enable-admission-plugins strings admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
kube# [ 11.782849] kube-apiserver[1497]: Misc flags:
kube# [ 11.783583] kube-apiserver[1497]: --allow-privileged If true, allow privileged containers. [default=false]
kube# [ 11.784914] kube-apiserver[1497]: --apiserver-count int The number of apiservers running in the cluster, must be a positive number. (In use when --endpoint-reconciler-type=master-count is enabled.) (default 1)
kube# [ 11.786775] kube-apiserver[1497]: --enable-aggregator-routing Turns on aggregator routing requests to endpoints IP rather than cluster IP.
kube# [ 11.788292] kube-apiserver[1497]: --endpoint-reconciler-type string Use an endpoint reconciler (master-count, lease, none) (default "lease")
kube# [ 11.789926] kube-apiserver[1497]: --event-ttl duration Amount of time to retain events. (default 1h0m0s)
kube# [ 11.791411] kube-apiserver[1497]: --kubelet-certificate-authority string Path to a cert file for the certificate authority.
kube# [ 11.792953] kube-apiserver[1497]: --kubelet-client-certificate string Path to a client cert file for TLS.
kube# [ 11.794409] kube-apiserver[1497]: --kubelet-client-key string Path to a client key file for TLS.
kube# [ 11.795812] kube-apiserver[1497]: --kubelet-https Use https for kubelet connections. (default true)
kube# [ 11.797293] kube-apiserver[1497]: --kubelet-preferred-address-types strings List of the preferred NodeAddressTypes to use for kubelet connections. (default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP])
kube# [ 11.799314] kube-apiserver[1497]: --kubelet-read-only-port uint DEPRECATED: kubelet port. (default 10255)
kube# [ 11.800660] kube-apiserver[1497]: --kubelet-timeout duration Timeout for kubelet operations. (default 5s)
kube# [ 11.802034] kube-apiserver[1497]: --kubernetes-service-node-port int If non-zero, the Kubernetes master service (which apiserver creates/maintains) will be of type NodePort, using this as the value of the port. If zero, the Kubernetes master service will be of type ClusterIP.
kube# [ 11.804535] kube-apiserver[1497]: --max-connection-bytes-per-sec int If non-zero, throttle each user connection to this number of bytes/sec. Currently only applies to long-running requests.
kube# [ 11.806455] kube-apiserver[1497]: --proxy-client-cert-file string Client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins. It is expected that this cert includes a signature from the CA in the --requestheader-client-ca-file flag. That CA is published in the 'extension-apiserver-authentication' configmap in the kube-system namespace. Components receiving calls from kube-aggregator should use that CA to perform their half of the mutual TLS verification.
kube# [ 11.810773] kube-apiserver[1497]: --proxy-client-key-file string Private key for the client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins.
kube# [ 11.813238] kube-apiserver[1497]: --service-account-signing-key-file string Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)
kube# [ 11.815344] kube-apiserver[1497]: --service-cluster-ip-range ipNet A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods. (default 10.0.0.0/24)
kube# [ 11.817231] kube-apiserver[1497]: --service-node-port-range portRange A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)
kube# [ 11.819093] kube-apiserver[1497]: Global flags:
kube# [ 11.819836] kube-apiserver[1497]: --alsologtostderr log to standard error as well as files
kube# [ 11.821006] kube-apiserver[1497]: -h, --help help for kube-apiserver
kube# [ 11.822090] kube-apiserver[1497]: --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
kube# [ 11.823415] kube-apiserver[1497]: --log-dir string If non-empty, write log files in this directory
kube# [ 11.824641] kube-apiserver[1497]: --log-file string If non-empty, use this log file
kube# [ 11.825746] kube-apiserver[1497]: --log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
kube# [ 11.827587] kube-apiserver[1497]: --log-flush-frequency duration Maximum number of seconds between log flushes (default 5s)
kube# [ 11.828946] kube-apiserver[1497]: --logtostderr log to standard error instead of files (default true)
kube# [ 11.830191] kube-apiserver[1497]: --skip-headers If true, avoid header prefixes in the log messages
kube# [ 11.831420] kube-apiserver[1497]: --skip-log-headers If true, avoid headers when opening log files
kube# [ 11.832650] kube-apiserver[1497]: --stderrthreshold severity logs at or above this threshold go to stderr (default 2)
kube# [ 11.833953] kube-apiserver[1497]: -v, --v Level number for the log level verbosity
kube# [ 11.835081] kube-apiserver[1497]: --version version[=true] Print version information and quit
kube# [ 11.836273] kube-apiserver[1497]: --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
kube# [ 11.837745] kube-apiserver[1497]: error: unable to load server certificate: open /var/lib/kubernetes/secrets/kube-apiserver.pem: no such file or directory
kube# [ 11.839510] kubelet[1462]: Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 11.841616] kubelet[1462]: Flag --authentication-token-webhook has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 11.843819] kubelet[1462]: Flag --authentication-token-webhook-cache-ttl has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 11.846101] kubelet[1462]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 11.848143] kubelet[1462]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 11.850172] kubelet[1462]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 11.852226] kubelet[1462]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 11.854469] kubelet[1462]: Flag --hairpin-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 11.856786] systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
kube# [ 11.857816] kubelet[1462]: Flag --healthz-bind-address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 11.859847] kubelet[1462]: Flag --healthz-port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 11.861804] kubelet[1462]: Flag --port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 11.863743] kubelet[1462]: Flag --tls-cert-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 11.865936] kubelet[1462]: Flag --tls-private-key-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 11.867932] kubelet[1462]: F0127 01:28:39.790139 1462 server.go:253] unable to load client CA file /var/lib/kubernetes/secrets/ca.pem: open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube# [ 11.869821] systemd[1]: kubelet.service: Failed with result 'exit-code'.
kube# [ 11.870711] kube-scheduler[1498]: W0127 01:28:39.818363 1498 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
kube# [ 11.872788] kube-scheduler[1498]: W0127 01:28:39.818398 1498 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
kube# [ 11.875076] kube-scheduler[1498]: W0127 01:28:39.818414 1498 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
kube# [ 11.876542] kube-scheduler[1498]: invalid configuration: [unable to read client-cert /var/lib/kubernetes/secrets/kube-scheduler-client.pem for kube-scheduler due to open /var/lib/kubernetes/secrets/kube-scheduler-client.pem: no such file or directory, unable to read client-key /var/lib/kubernetes/secrets/kube-scheduler-client-key.pem for kube-scheduler due to open /var/lib/kubernetes/secrets/kube-scheduler-client-key.pem: no such file or directory, unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory]
kube# [ 11.880734] systemd[1]: kube-scheduler.service: Main process exited, code=exited, status=1/FAILURE
kube# [ 11.881804] systemd[1]: kube-scheduler.service: Failed with result 'exit-code'.
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube: exit status 1
(0.05 seconds)
kube# [ 13.064635] systemd[1]: kubelet.service: Service RestartSec=1s expired, scheduling restart.
kube# [ 13.065781] systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
kube# [ 13.066918] systemd[1]: Stopped Kubernetes Kubelet Service.
kube# [ 13.068041] systemd[1]: Starting Kubernetes Kubelet Service...
kube# [ 13.071037] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1569]: Seeding docker image: /nix/store/zfkrn9hmwiay3fwj7ilwv52rd6myn4i1-docker-image-pause.tar.gz
kube# [ 13.341893] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1569]: Loaded image: pause:latest
kube# [ 13.344841] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1569]: Seeding docker image: /nix/store/ggrzs3gzv69xzk02ckzijc2caqv738kk-docker-image-coredns-coredns-1.5.0.tar
kube# [ 13.457525] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1569]: Loaded image: coredns/coredns:1.5.0
kube# [ 13.465994] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1569]: Linking cni package: /nix/store/9pqia3j6lxz57qa36w2niphr1f5vsirr-cni-plugins-0.8.2
kube# [ 13.472510] systemd[1]: Started Kubernetes Kubelet Service.
kube# [ 13.551007] kubelet[1628]: Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.553175] kubelet[1628]: Flag --authentication-token-webhook has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.555372] kubelet[1628]: Flag --authentication-token-webhook-cache-ttl has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.557511] kubelet[1628]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.559525] kubelet[1628]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.561536] kubelet[1628]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.563535] kubelet[1628]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.565556] kubelet[1628]: Flag --hairpin-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.567585] kubelet[1628]: Flag --healthz-bind-address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.569658] kubelet[1628]: Flag --healthz-port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.571653] kubelet[1628]: Flag --port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.573594] kubelet[1628]: Flag --tls-cert-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.575605] kubelet[1628]: Flag --tls-private-key-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.577669] kubelet[1628]: F0127 01:28:41.501213 1628 server.go:253] unable to load client CA file /var/lib/kubernetes/secrets/ca.pem: open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube# [ 13.579412] systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
kube# [ 13.580509] systemd[1]: kubelet.service: Failed with result 'exit-code'.
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube: exit status 1
(0.04 seconds)
kube# [ 14.564633] systemd[1]: kubelet.service: Service RestartSec=1s expired, scheduling restart.
kube# [ 14.565943] systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
kube# [ 14.567201] systemd[1]: Stopped Kubernetes Kubelet Service.
kube# [ 14.568475] systemd[1]: Starting Kubernetes Kubelet Service...
kube# [ 14.571255] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1665]: Seeding docker image: /nix/store/zfkrn9hmwiay3fwj7ilwv52rd6myn4i1-docker-image-pause.tar.gz
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube: exit status 1
(0.04 seconds)
kube# [ 14.814387] systemd[1]: kube-certmgr-bootstrap.service: Service RestartSec=10s expired, scheduling restart.
kube# [ 14.815762] systemd[1]: kube-certmgr-bootstrap.service: Scheduled restart job, restart counter is at 1.
kube# [ 14.817147] systemd[1]: Stopped Kubernetes certmgr bootstrapper.
kube# [ 14.818251] systemd[1]: Started Kubernetes certmgr bootstrapper.
kube# [ 14.833046] s2mx5lspihy60gwff025p4dbs08yzgyi-unit-script-kube-certmgr-bootstrap-start[1715]: % Total % Received % Xferd Average Speed Time Time Time Current
kube# [ 14.834899] s2mx5lspihy60gwff025p4dbs08yzgyi-unit-script-kube-certmgr-bootstrap-start[1715]: Dload Upload Total Spent Left Speed
kube# [ 14.851818] cfssl[1041]: 2020/01/27 01:28:42 [INFO] 192.168.1.1:54228 - "POST /api/v1/cfssl/info" 200
kube# [ 14.853052] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1665]: Loaded image: pause:latest
kube# s2mx5lspihy60gwff025p4dbs08yzgyi-unit-script-kube-certmgr-bootstrap-start[1715]: 100  1434  100  1432  100     2  75368    105 --:--:-- --:--:-- --:--:-- 75473
kube# [ 14.857160] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1665]: Seeding docker image: /nix/store/ggrzs3gzv69xzk02ckzijc2caqv738kk-docker-image-coredns-coredns-1.5.0.tar
kube# [ 14.859767] systemd[1]: kube-certmgr-bootstrap.service: Succeeded.
kube# [ 14.860788] systemd[1]: kube-certmgr-bootstrap.service: Consumed 16ms CPU time, received 3.5K IP traffic, sent 1.6K IP traffic.
kube# [ 14.968510] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1665]: Loaded image: coredns/coredns:1.5.0
kube# [ 14.976640] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1665]: Linking cni package: /nix/store/9pqia3j6lxz57qa36w2niphr1f5vsirr-cni-plugins-0.8.2
kube# [ 14.982962] systemd[1]: Started Kubernetes Kubelet Service.
kube# [ 15.027067] kubelet[1757]: Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.029136] kubelet[1757]: Flag --authentication-token-webhook has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.031315] kubelet[1757]: Flag --authentication-token-webhook-cache-ttl has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.033575] kubelet[1757]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.035612] kubelet[1757]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.037761] kubelet[1757]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.039813] kubelet[1757]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.042030] kubelet[1757]: Flag --hairpin-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.044093] kubelet[1757]: Flag --healthz-bind-address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.046227] kubelet[1757]: Flag --healthz-port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.048331] kubelet[1757]: Flag --port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.050421] kubelet[1757]: Flag --tls-cert-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.052641] kubelet[1757]: Flag --tls-private-key-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.061341] systemd[1]: Started Kubernetes systemd probe.
kube# [ 15.066740] kubelet[1757]: I0127 01:28:43.017043 1757 server.go:425] Version: v1.15.6
kube# [ 15.067978] kubelet[1757]: I0127 01:28:43.017248 1757 plugins.go:103] No cloud provider specified.
kube# [ 15.070595] kubelet[1757]: F0127 01:28:43.020977 1757 server.go:273] failed to run Kubelet: invalid kubeconfig: invalid configuration: [unable to read client-cert /var/lib/kubernetes/secrets/kubelet-client.pem for kubelet due to open /var/lib/kubernetes/secrets/kubelet-client.pem: no such file or directory, unable to read client-key /var/lib/kubernetes/secrets/kubelet-client-key.pem for kubelet due to open /var/lib/kubernetes/secrets/kubelet-client-key.pem: no such file or directory]
kube# [ 15.074384] systemd[1]: run-ra08c813caffa4e5382383c6b9f5203bf.scope: Succeeded.
kube# [ 15.075352] systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
kube# [ 15.076403] systemd[1]: kubelet.service: Failed with result 'exit-code'.
kube# [ 15.314385] systemd[1]: kube-addon-manager.service: Service RestartSec=10s expired, scheduling restart.
kube# [ 15.315647] systemd[1]: kube-addon-manager.service: Scheduled restart job, restart counter is at 1.
kube# [ 15.316917] systemd[1]: Stopped Kubernetes addon manager.
kube# [ 15.317969] systemd[1]: Starting Kubernetes addon manager...
kube# [ 15.356796] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[1779]: Error in configuration:
kube# [ 15.358074] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[1779]: * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# [ 15.360189] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[1779]: * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# [ 15.362331] systemd[1]: kube-addon-manager.service: Control process exited, code=exited, status=1/FAILURE
kube# [ 15.363392] systemd[1]: kube-addon-manager.service: Failed with result 'exit-code'.
kube# [ 15.364313] systemd[1]: Failed to start Kubernetes addon manager.
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# [ 15.814546] systemd[1]: kube-proxy.service: Service RestartSec=5s expired, scheduling restart.
kube# [ 15.815654] systemd[1]: kube-proxy.service: Scheduled restart job, restart counter is at 2.
kube# [ 15.816772] systemd[1]: Stopped Kubernetes Proxy Service.
kube# [ 15.817652] systemd[1]: Started Kubernetes Proxy Service.
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube: exit status 1
(0.05 seconds)
kube# [ 15.845761] kube-proxy[1810]: W0127 01:28:43.795728 1810 server.go:216] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
kube# [ 15.858058] kube-proxy[1810]: W0127 01:28:43.808428 1810 proxier.go:500] Failed to read file /lib/modules/4.19.95/modules.builtin with error open /lib/modules/4.19.95/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 15.860515] kube-proxy[1810]: W0127 01:28:43.808894 1810 proxier.go:513] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 15.862446] kube-proxy[1810]: W0127 01:28:43.810817 1810 proxier.go:513] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 15.864452] kube-proxy[1810]: W0127 01:28:43.810996 1810 proxier.go:513] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 15.866525] kube-proxy[1810]: W0127 01:28:43.811302 1810 proxier.go:513] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 15.868732] kube-proxy[1810]: W0127 01:28:43.812764 1810 proxier.go:513] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 15.875254] kube-proxy[1810]: F0127 01:28:43.825634 1810 server.go:449] invalid configuration: [unable to read client-cert /var/lib/kubernetes/secrets/kube-proxy-client.pem for kube-proxy due to open /var/lib/kubernetes/secrets/kube-proxy-client.pem: no such file or directory, unable to read client-key /var/lib/kubernetes/secrets/kube-proxy-client-key.pem for kube-proxy due to open /var/lib/kubernetes/secrets/kube-proxy-client-key.pem: no such file or directory]
kube# [ 15.879148] systemd[1]: kube-proxy.service: Main process exited, code=exited, status=255/EXCEPTION
kube# [ 15.880337] systemd[1]: kube-proxy.service: Failed with result 'exit-code'.
kube# [ 16.314452] systemd[1]: kubelet.service: Service RestartSec=1s expired, scheduling restart.
kube# [ 16.315833] systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
kube# [ 16.317126] systemd[1]: certmgr.service: Service RestartSec=10s expired, scheduling restart.
kube# [ 16.319193] systemd[1]: certmgr.service: Scheduled restart job, restart counter is at 1.
kube# [ 16.321293] systemd[1]: Stopped certmgr.
kube# [ 16.322451] systemd[1]: Starting certmgr...
kube# [ 16.323954] systemd[1]: Started Kubernetes certmgr bootstrapper.
kube# [ 16.325635] systemd[1]: Stopped Kubernetes Kubelet Service.
kube# [ 16.327364] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1833]: Seeding docker image: /nix/store/zfkrn9hmwiay3fwj7ilwv52rd6myn4i1-docker-image-pause.tar.gz
kube# [ 16.329174] systemd[1]: Starting Kubernetes Kubelet Service...
kube# [ 16.336252] systemd[1]: kube-certmgr-bootstrap.service: Succeeded.
kube# [ 16.337618] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1831]: 2020/01/27 01:28:44 [INFO] certmgr: loading from config file /nix/store/bmm143bjzpgvrw7k50r36c5smy1n4pqm-certmgr.yaml
kube# [ 16.339955] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1831]: 2020/01/27 01:28:44 [INFO] manager: loading certificates from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d
kube# [ 16.342220] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1831]: 2020/01/27 01:28:44 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/addonManager.json
kube# [ 16.361234] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54230 - "POST /api/v1/cfssl/info" 200
kube# [ 16.379193] systemd[1]: kube-apiserver.service: Service RestartSec=5s expired, scheduling restart.
kube# [ 16.380942] systemd[1]: kube-apiserver.service: Scheduled restart job, restart counter is at 2.
kube# [ 16.382614] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54232 - "POST /api/v1/cfssl/info" 200
kube# [ 16.384233] systemd[1]: Stopped Kubernetes APIServer Service.
kube# [ 16.385363] systemd[1]: Started Kubernetes APIServer Service.
kube# [ 16.386783] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1831]: 2020/01/27 01:28:44 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/apiServer.json
kube# [ 16.394485] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54234 - "POST /api/v1/cfssl/info" 200
kube# [ 16.408169] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54236 - "POST /api/v1/cfssl/info" 200
kube# [ 16.412838] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1831]: 2020/01/27 01:28:44 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/apiserverEtcdClient.json
kube# [ 16.417828] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54238 - "POST /api/v1/cfssl/info" 200
kube# [ 16.430566] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54240 - "POST /api/v1/cfssl/info" 200
kube# [ 16.435195] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1831]: 2020/01/27 01:28:44 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/apiserverKubeletClient.json
kube# [ 16.440488] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54242 - "POST /api/v1/cfssl/info" 200
kube# [ 16.447996] kube-apiserver[1870]: Flag --insecure-bind-address has been deprecated, This flag will be removed in a future version.
kube# [ 16.449572] kube-apiserver[1870]: Flag --insecure-port has been deprecated, This flag will be removed in a future version.
kube# [ 16.451172] kube-apiserver[1870]: I0127 01:28:44.398139 1870 server.go:560] external host was not specified, using 192.168.1.1
kube# [ 16.452932] kube-apiserver[1870]: I0127 01:28:44.403247 1870 server.go:147] Version: v1.15.6
kube# [ 16.454546] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54244 - "POST /api/v1/cfssl/info" 200
kube# [ 16.456323] kube-apiserver[1870]: Error: unable to load server certificate: open /var/lib/kubernetes/secrets/kube-apiserver.pem: no such file or directory
kube# [ 16.458228] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1831]: 2020/01/27 01:28:44 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/apiserverProxyClient.json
kube# [ 16.461308] kube-apiserver[1870]: Usage:
kube# [ 16.461967] kube-apiserver[1870]: kube-apiserver [flags]
kube# [ 16.462673] kube-apiserver[1870]: Generic flags:
kube# [ 16.463365] kube-apiserver[1870]: --advertise-address ip The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.
kube# [ 16.466163] kube-apiserver[1870]: --cloud-provider-gce-lb-src-cidrs cidrs CIDRs opened in GCE firewall for LB traffic proxy & health checks (default 130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16)
kube# [ 16.468012] kube-apiserver[1870]: --cors-allowed-origins strings List of allowed origins for CORS, comma separated. An allowed origin can be a regular expression to support subdomain matching. If this list is empty CORS will not be enabled.
kube# [ 16.470055] kube-apiserver[1870]: --default-not-ready-toleration-seconds int Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration. (default 300)
kube# [ 16.472132] kube-apiserver[1870]: --default-unreachable-toleration-seconds int Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration. (default 300)
kube# [ 16.474194] kube-apiserver[1870]: --enable-inflight-quota-handler If true, replace the max-in-flight handler with an enhanced one that queues and dispatches with priority and fairness
kube# [ 16.476098] kube-apiserver[1870]: --external-hostname string The hostname to use when generating externalized URLs for this master (e.g. Swagger API Docs).
kube# [ 16.477615] kube-apiserver[1870]: --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
kube# [ 16.479320] kube-apiserver[1870]: APIListChunking=true|false (BETA - default=true)
kube# [ 16.480845] kube-apiserver[1870]: APIResponseCompression=true|false (ALPHA - default=false)
kube# [ 16.482327] kube-apiserver[1870]: AllAlpha=true|false (ALPHA - default=false)
kube# [ 16.483696] kube-apiserver[1870]: AppArmor=true|false (BETA - default=true)
kube# [ 16.485034] kube-apiserver[1870]: AttachVolumeLimit=true|false (BETA - default=true)
kube# [ 16.486528] kube-apiserver[1870]: BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
kube# [ 16.487971] kube-apiserver[1870]: BlockVolume=true|false (BETA - default=true)
kube# [ 16.489215] kube-apiserver[1870]: BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
kube# [ 16.491006] kube-apiserver[1870]: CPUManager=true|false (BETA - default=true)
kube# [ 16.492236] kube-apiserver[1870]: CRIContainerLogRotation=true|false (BETA - default=true)
kube# [ 16.493540] kube-apiserver[1870]: CSIBlockVolume=true|false (BETA - default=true)
kube# [ 16.494790] kube-apiserver[1870]: CSIDriverRegistry=true|false (BETA - default=true)
kube# [ 16.496031] kube-apiserver[1870]: CSIInlineVolume=true|false (ALPHA - default=false)
kube# [ 16.497246] kube-apiserver[1870]: CSIMigration=true|false (ALPHA - default=false)
kube# [ 16.498499] kube-apiserver[1870]: CSIMigrationAWS=true|false (ALPHA - default=false)
kube# [ 16.499754] kube-apiserver[1870]: CSIMigrationAzureDisk=true|false (ALPHA - default=false)
kube# [ 16.501165] kube-apiserver[1870]: CSIMigrationAzureFile=true|false (ALPHA - default=false)
kube# [ 16.502658] kube-apiserver[1870]: CSIMigrationGCE=true|false (ALPHA - default=false)
kube# [ 16.504215] kube-apiserver[1870]: CSIMigrationOpenStack=true|false (ALPHA - default=false)
kube# [ 16.505823] kube-apiserver[1870]: CSINodeInfo=true|false (BETA - default=true)
kube# [ 16.507470] kube-apiserver[1870]: CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
kube# [ 16.508892] kube-apiserver[1870]: CustomResourceDefaulting=true|false (ALPHA - default=false)
kube# [ 16.510328] systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=1/FAILURE
kube# [ 16.511462] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54246 - "POST /api/v1/cfssl/info" 200
kube# [ 16.512632] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54248 - "POST /api/v1/cfssl/info" 200
kube# [ 16.513622] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54250 - "POST /api/v1/cfssl/info" 200
kube# [ 16.514616] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54252 - "POST /api/v1/cfssl/info" 200
kube# [ 16.515663] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1831]: 2020/01/27 01:28:44 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/clusterAdmin.json
kube# [ 16.517347] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1831]: 2020/01/27 01:28:44 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/controllerManager.json
kube# [ 16.519145] kube-apiserver[1870]: CustomResourcePublishOpenAPI=true|false (BETA - default=true)
kube# [ 16.520410] kube-apiserver[1870]: CustomResourceSubresources=true|false (BETA - default=true)
kube# [ 16.521737] kube-apiserver[1870]: CustomResourceValidation=true|false (BETA - default=true)
kube# [ 16.523067] kube-apiserver[1870]: CustomResourceWebhookConversion=true|false (BETA - default=true)
kube# [ 16.524582] kube-apiserver[1870]: DebugContainers=true|false (ALPHA - default=false)
kube# [ 16.526184] kube-apiserver[1870]: DevicePlugins=true|false (BETA - default=true)
kube# [ 16.527580] kube-apiserver[1870]: DryRun=true|false (BETA - default=true)
kube# [ 16.528756] kube-apiserver[1870]: DynamicAuditing=true|false (ALPHA - default=false)
kube# [ 16.530133] kube-apiserver[1870]: DynamicKubeletConfig=true|false (BETA - default=true)
kube# [ 16.531561] kube-apiserver[1870]: ExpandCSIVolumes=true|false (ALPHA - default=false)
kube# [ 16.532929] kube-apiserver[1870]: ExpandInUsePersistentVolumes=true|false (BETA - default=true)
kube# [ 16.534260] kube-apiserver[1870]: ExpandPersistentVolumes=true|false (BETA - default=true)
kube# [ 16.535530] kube-apiserver[1870]: ExperimentalCriticalPodAnnotation=true|false (ALPHA - default=false)
kube# [ 16.537017] kube-apiserver[1870]: ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
kube# [ 16.538395] kube-apiserver[1870]: HyperVContainer=true|false (ALPHA - default=false)
kube# [ 16.539641] kube-apiserver[1870]: KubeletPodResources=true|false (BETA - default=true)
kube# [ 16.540927] kube-apiserver[1870]: LocalStorageCapacityIsolation=true|false (BETA - default=true)
kube# [ 16.542253] kube-apiserver[1870]: LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
kube# [ 16.543691] systemd[1]: kube-apiserver.service: Failed with result 'exit-code'.
kube# [ 16.544611] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54254 - "POST /api/v1/cfssl/info" 200
kube# [ 16.545628] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54256 - "POST /api/v1/cfssl/info" 200
kube# [ 16.546672] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54258 - "POST /api/v1/cfssl/info" 200
kube# [ 16.547759] kube-apiserver[1870]: MountContainers=true|false (ALPHA - default=false)
kube# [ 16.549211] kube-apiserver[1870]: NodeLease=true|false (BETA - default=true)
kube# [ 16.550618] kube-apiserver[1870]: NonPreemptingPriority=true|false (ALPHA - default=false)
kube# [ 16.552199] kube-apiserver[1870]: PodShareProcessNamespace=true|false (BETA - default=true)
kube# [ 16.553611] kube-apiserver[1870]: ProcMountType=true|false (ALPHA - default=false)
kube# [ 16.555356] kube-apiserver[1870]: QOSReserved=true|false (ALPHA - default=false)
kube# [ 16.556782] kube-apiserver[1870]: RemainingItemCount=true|false (ALPHA - default=false)
kube# [ 16.558203] kube-apiserver[1870]: RequestManagement=true|false (ALPHA - default=false)
kube# [ 16.559451] kube-apiserver[1870]: ResourceLimitsPriorityFunction=true|false (ALPHA - default=false)
kube# [ 16.560857] kube-apiserver[1870]: ResourceQuotaScopeSelectors=true|false (BETA - default=true)
kube# [ 16.562707] kube-apiserver[1870]: RotateKubeletClientCertificate=true|false (BETA - default=true)
kube# [ 16.564130] kube-apiserver[1870]: RotateKubeletServerCertificate=true|false (BETA - default=true)
kube# [ 16.565553] kube-apiserver[1870]: RunAsGroup=true|false (BETA - default=true)
kube# [ 16.566808] kube-apiserver[1870]: RuntimeClass=true|false (BETA - default=true)
kube# [ 16.568088] kube-apiserver[1870]: SCTPSupport=true|false (ALPHA - default=false)
kube# [ 16.569344] kube-apiserver[1870]: ScheduleDaemonSetPods=true|false (BETA - default=true)
kube# [ 16.570680] kube-apiserver[1870]: ServerSideApply=true|false (ALPHA - default=false)
kube# [ 16.572141] kube-apiserver[1870]: ServiceLoadBalancerFinalizer=true|false (ALPHA - default=false)
kube# [ 16.573678] kube-apiserver[1870]: ServiceNodeExclusion=true|false (ALPHA - default=false)
kube# [ 16.575441] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1831]: 2020/01/27 01:28:44 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/controllerManagerClient.json
kube# [ 16.577623] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1831]: 2020/01/27 01:28:44 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/etcd.json
kube# [ 16.579587] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54260 - "POST /api/v1/cfssl/info" 200
kube# [ 16.581073] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54262 - "POST /api/v1/cfssl/info" 200
kube# [ 16.582359] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54264 - "POST /api/v1/cfssl/info" 200
kube# [ 16.583520] kube-apiserver[1870]: StorageVersionHash=true|false (BETA - default=true)
kube# [ 16.584798] kube-apiserver[1870]: StreamingProxyRedirects=true|false (BETA - default=true)
kube# [ 16.586364] kube-apiserver[1870]: SupportNodePidsLimit=true|false (BETA - default=true)
kube# [ 16.587718] kube-apiserver[1870]: SupportPodPidsLimit=true|false (BETA - default=true)
kube# [ 16.589132] kube-apiserver[1870]: Sysctls=true|false (BETA - default=true)
kube# [ 16.590330] kube-apiserver[1870]: TTLAfterFinished=true|false (ALPHA - default=false)
kube# [ 16.591692] kube-apiserver[1870]: TaintBasedEvictions=true|false (BETA - default=true)
kube# [ 16.593012] kube-apiserver[1870]: TaintNodesByCondition=true|false (BETA - default=true)
kube# [ 16.594376] kube-apiserver[1870]: TokenRequest=true|false (BETA - default=true)
kube# [ 16.595611] kube-apiserver[1870]: TokenRequestProjection=true|false (BETA - default=true)
kube# [ 16.596937] kube-apiserver[1870]: ValidateProxyRedirects=true|false (BETA - default=true)
kube# [ 16.598462] kube-apiserver[1870]: VolumePVCDataSource=true|false (ALPHA - default=false)
kube# [ 16.600130] kube-apiserver[1870]: VolumeSnapshotDataSource=true|false (ALPHA - default=false)
kube# [ 16.602000] kube-apiserver[1870]: VolumeSubpathEnvExpansion=true|false (BETA - default=true)
kube# [ 16.603778] kube-apiserver[1870]: WatchBookmark=true|false (ALPHA - default=false)
kube# [ 16.605345] kube-apiserver[1870]: WinDSR=true|false (ALPHA - default=false)
kube# [ 16.606738] kube-apiserver[1870]: WinOverlay=true|false (ALPHA - default=false)
kube# [ 16.608276] kube-apiserver[1870]: WindowsGMSA=true|false (ALPHA - default=false)
kube# [ 16.609947] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1831]: 2020/01/27 01:28:44 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/kubeProxyClient.json
kube# [ 16.611804] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1831]: 2020/01/27 01:28:44 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/kubelet.json
kube# [ 16.613631] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54266 - "POST /api/v1/cfssl/info" 200
kube# [ 16.614953] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54268 - "POST /api/v1/cfssl/info" 200
kube# [ 16.616004] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54270 - "POST /api/v1/cfssl/info" 200
kube# [ 16.617099] kube-apiserver[1870]: --master-service-namespace string DEPRECATED: the namespace from which the kubernetes master services should be injected into pods. (default "default")
kube# [ 16.618903] kube-apiserver[1870]: --max-mutating-requests-inflight int The maximum number of mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. (default 200)
kube# [ 16.620804] kube-apiserver[1870]: --max-requests-inflight int The maximum number of non-mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. (default 400)
kube# [ 16.622807] kube-apiserver[1870]: --min-request-timeout int An optional field indicating the minimum number of seconds a handler must keep a request open before timing it out. Currently only honored by the watch request handler, which picks a randomized value above this number as the connection timeout, to spread out load. (default 1800)
kube# [ 16.625925] kube-apiserver[1870]: --request-timeout duration An optional field indicating the duration a handler must keep a request open before timing it out. This is the default request timeout for requests but may be overridden by flags such as --min-request-timeout for specific types of requests. (default 1m0s)
kube# [ 16.628997] kube-apiserver[1870]: --target-ram-mb int Memory limit for apiserver in MB (used to configure sizes of caches, etc.)
kube# [ 16.630679] kube-apiserver[1870]: Etcd flags:
kube# [ 16.631511] kube-apiserver[1870]: --default-watch-cache-size int Default watch cache size. If zero, watch cache will be disabled for resources that do not have a default watch size set. (default 100)
kube# [ 16.633336] kube-apiserver[1870]: --delete-collection-workers int Number of workers spawned for DeleteCollection call. These are used to speed up namespace cleanup. (default 1)
kube# [ 16.635230] kube-apiserver[1870]: --enable-garbage-collector Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-controller-manager. (default true)
kube# [ 16.637311] kube-apiserver[1870]: --encryption-provider-config string The file containing configuration for encryption providers to be used for storing secrets in etcd
kube# [ 16.639048] kube-apiserver[1870]: --etcd-cafile string SSL Certificate Authority file used to secure etcd communication.
kube# [ 16.640426] kube-apiserver[1870]: --etcd-certfile string SSL certification file used to secure etcd communication.
kube# [ 16.641779] kube-apiserver[1870]: --etcd-compaction-interval duration The interval of compaction requests. If 0, the compaction request from apiserver is disabled. (default 5m0s)
kube# [ 16.643722] kube-apiserver[1870]: --etcd-count-metric-poll-period duration Frequency of polling etcd for number of resources per type. 0 disables the metric collection. (default 1m0s)
kube# [ 16.645426] kube-apiserver[1870]: --etcd-keyfile string SSL key file used to secure etcd communication.
kube# [ 16.646683] kube-apiserver[1870]: --etcd-prefix string The prefix to prepend to all resource paths in etcd. (default "/registry")
kube# [ 16.648186] kube-apiserver[1870]: --etcd-servers strings List of etcd servers to connect with (scheme://ip:port), comma separated.
kube# [ 16.649566] kube-apiserver[1870]: --etcd-servers-overrides strings Per-resource etcd servers overrides, comma separated. The individual override format: group/resource#servers, where servers are URLs, semicolon separated.
kube# [ 16.651507] kube-apiserver[1870]: --storage-backend string The storage backend for persistence. Options: 'etcd3' (default).
kube# [ 16.652901] kube-apiserver[1870]: --storage-media-type string The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting. (default "application/vnd.kubernetes.protobuf")
kube# [ 16.655230] kube-apiserver[1870]: --watch-cache Enable watch caching in the apiserver (default true)
kube# [ 16.656772] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1831]: 2020/01/27 01:28:44 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/kubeletClient.json
kube# [ 16.659046] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54272 - "POST /api/v1/cfssl/info" 200
kube# [ 16.660351] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54274 - "POST /api/v1/cfssl/info" 200
kube# [ 16.661452] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54276 - "POST /api/v1/cfssl/info" 200
kube# [ 16.662706] kube-apiserver[1870]: --watch-cache-sizes strings Watch cache size settings for some resources (pods, nodes, etc.), comma separated. The individual setting format: resource[.group]#size, where resource is lowercase plural (no version), group is omitted for resources of apiVersion v1 (the legacy core API) and included for others, and size is a number. It takes effect when watch-cache is enabled. Some resources (replicationcontrollers, endpoints, nodes, pods, services, apiservices.apiregistration.k8s.io) have system defaults set by heuristics, others default to default-watch-cache-size
kube# [ 16.667582] kube-apiserver[1870]: Secure serving flags:
kube# [ 16.668497] kube-apiserver[1870]: --bind-address ip The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank, all interfaces will be used (0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 0.0.0.0)
kube# [ 16.671561] kube-apiserver[1870]: --cert-dir string The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored. (default "/var/run/kubernetes")
kube# [ 16.674077] kube-apiserver[1870]: --http2-max-streams-per-connection int The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
kube# [ 16.675895] kube-apiserver[1870]: --secure-port int The port on which to serve HTTPS with authentication and authorization.It cannot be switched off with 0. (default 6443)
kube# [ 16.677583] kube-apiserver[1870]: --tls-cert-file string File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
kube# [ 16.680674] kube-apiserver[1870]: --tls-cipher-suites strings Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be use. Possible values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA
kube# [ 16.687764] kube-apiserver[1870]: --tls-min-version string Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
kube# [ 16.689572] kube-apiserver[1870]: --tls-private-key-file string File containing the default x509 private key matching --tls-cert-file.
kube# [ 16.691140] kube-apiserver[1870]: --tls-sni-cert-key namedCertKey A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])
kube# [ 16.695813] kube-apiserver[1870]: Insecure serving flags:
kube# [ 16.696710] kube-apiserver[1870]: --address ip The IP address on which to serve the insecure --port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 127.0.0.1) (DEPRECATED: see --bind-address instead.)
kube# [ 16.699642] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54278 - "POST /api/v1/cfssl/info" 200
kube# [ 16.701262] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54280 - "POST /api/v1/cfssl/info" 200
kube# [ 16.702480] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1831]: 2020/01/27 01:28:44 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/schedulerClient.json
kube# [ 16.704987] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1831]: 2020/01/27 01:28:44 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/serviceAccount.json
kube# [ 16.706758] kube-apiserver[1870]: --insecure-bind-address ip The IP address on which to serve the --insecure-port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 127.0.0.1) (DEPRECATED: This flag will be removed in a future version.)
kube# [ 16.708983] kube-apiserver[1870]: --insecure-port int The port on which to serve unsecured, unauthenticated access. (default 8080) (DEPRECATED: This flag will be removed in a future version.)
kube# [ 16.710643] kube-apiserver[1870]: --port int The port on which to serve unsecured, unauthenticated access. Set to 0 to disable. (default 8080) (DEPRECATED: see --secure-port instead.)
kube# [ 16.712392] kube-apiserver[1870]: Auditing flags:
kube# [ 16.713076] kube-apiserver[1870]: --audit-dynamic-configuration Enables dynamic audit configuration. This feature also requires the DynamicAuditing feature flag
kube# [ 16.714655] kube-apiserver[1870]: --audit-log-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
kube# [ 16.716552] kube-apiserver[1870]: --audit-log-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 1)
kube# [ 16.718184] kube-apiserver[1870]: --audit-log-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.
kube# [ 16.720069] kube-apiserver[1870]: --audit-log-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.
kube# [ 16.721814] kube-apiserver[1870]: --audit-log-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode.
kube# [ 16.723397] kube-apiserver[1870]: --audit-log-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode.
kube# [ 16.724977] kube-apiserver[1870]: --audit-log-format string Format of saved audits. "legacy" indicates 1-line text format for each event. "json" indicates structured json format. Known formats are legacy,json. (default "json")
kube# [ 16.727188] kube-apiserver[1870]: --audit-log-maxage int The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.
kube# [ 16.729037] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54282 - "POST /api/v1/cfssl/info" 200
kube# [ 16.730374] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54284 - "POST /api/v1/cfssl/info" 200
kube# [ 16.731669] systemd[1]: Started certmgr.
kube# [ 16.732547] kube-apiserver[1870]: --audit-log-maxbackup int The maximum number of old audit log files to retain.
kube# [ 16.734089] kube-apiserver[1870]: --audit-log-maxsize int The maximum size in megabytes of the audit log file before it gets rotated.
kube# [ 16.735731] kube-apiserver[1870]: --audit-log-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "blocking")
kube# [ 16.738591] kube-apiserver[1870]: --audit-log-path string If set, all requests coming to the apiserver will be logged to this file. '-' means standard out.
kube# [ 16.740569] kube-apiserver[1870]: --audit-log-truncate-enabled Whether event and batch truncating is enabled.
kube# [ 16.742011] kube-apiserver[1870]: --audit-log-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
kube# [ 16.744638] kube-apiserver[1870]: --audit-log-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
kube# [ 16.747369] kube-apiserver[1870]: --audit-log-version string API group and version used for serializing audit events written to log. (default "audit.k8s.io/v1")
kube# [ 16.749213] kube-apiserver[1870]: --audit-policy-file string Path to the file that defines the audit policy configuration.
kube# [ 16.750611] kube-apiserver[1870]: --audit-webhook-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
kube# [ 16.752319] kube-apiserver[1870]: --audit-webhook-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 400)
kube# [ 16.754015] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1831]: 2020/01/27 01:28:44 [INFO] manager: watching 14 certificates
kube# [ 16.755339] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1831]: OK
kube# [ 16.756628] certmgr[1923]: 2020/01/27 01:28:44 [INFO] certmgr: loading from config file /nix/store/bmm143bjzpgvrw7k50r36c5smy1n4pqm-certmgr.yaml
kube# [ 16.758041] certmgr[1923]: 2020/01/27 01:28:44 [INFO] manager: loading certificates from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d
kube# [ 16.759438] certmgr[1923]: 2020/01/27 01:28:44 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/addonManager.json
kube# [ 16.761030] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54286 - "POST /api/v1/cfssl/info" 200
kube# [ 16.762207] kube-apiserver[1870]: --audit-webhook-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode. (default 30s)
kube# [ 16.764084] kube-apiserver[1870]: --audit-webhook-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode. (default 15)
kube# [ 16.766191] kube-apiserver[1870]: --audit-webhook-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode. (default true)
kube# [ 16.767989] kube-apiserver[1870]: --audit-webhook-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode. (default 10)
kube# [ 16.769780] kube-apiserver[1870]: --audit-webhook-config-file string Path to a kubeconfig formatted file that defines the audit webhook configuration.
kube# [ 16.771453] kube-apiserver[1870]: --audit-webhook-initial-backoff duration The amount of time to wait before retrying the first failed request. (default 10s)
kube# [ 16.773205] kube-apiserver[1870]: --audit-webhook-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "batch")
kube# [ 16.775921] kube-apiserver[1870]: --audit-webhook-truncate-enabled Whether event and batch truncating is enabled.
kube# [ 16.777194] kube-apiserver[1870]: --audit-webhook-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
kube# [ 16.779853] kube-apiserver[1870]: --audit-webhook-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
kube# [ 16.782755] kube-apiserver[1870]: --audit-webhook-version string API group and version used for serializing audit events written to webhook. (default "audit.k8s.io/v1")
kube# [ 16.784669] kube-apiserver[1870]: Features flags:
kube# [ 16.785784] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1833]: Loaded image: pause:latest
kube# [ 16.787155] certmgr[1923]: 2020/01/27 01:28:44 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/apiServer.json
kube# [ 16.788547] kube-apiserver[1870]: --contention-profiling Enable lock contention profiling, if profiling is enabled
kube# [ 16.789966] kube-apiserver[1870]: --profiling Enable profiling via web interface host:port/debug/pprof/ (default true)
kube# [ 16.791601] kube-apiserver[1870]: Authentication flags:
kube# [ 16.792780] kube-apiserver[1870]: --anonymous-auth Enables anonymous requests to the secure port of the API server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated. (default true)
kube# [ 16.796163] kube-apiserver[1870]: --api-audiences strings Identifiers of the API. The service account token authenticator will validate that tokens used against the API are bound to at least one of these audiences. If the --service-account-issuer flag is configured and this flag is not, this field defaults to a single element list containing the issuer URL .
kube# [ 16.799810] kube-apiserver[1870]: --authentication-token-webhook-cache-ttl duration The duration to cache responses from the webhook token authenticator. (default 2m0s)
kube# [ 16.801781] kube-apiserver[1870]: --authentication-token-webhook-config-file string File with webhook configuration for token authentication in kubeconfig format. The API server will query the remote service to determine authentication for bearer tokens.
kube# [ 16.804149] kube-apiserver[1870]: --basic-auth-file string If set, the file that will be used to admit requests to the secure port of the API server via http basic authentication.
kube# [ 16.806289] kube-apiserver[1870]: --client-ca-file string If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
kube# [ 16.808981] kube-apiserver[1870]: --enable-bootstrap-token-auth Enable to allow secrets of type 'bootstrap.kubernetes.io/token' in the 'kube-system' namespace to be used for TLS bootstrapping authentication.
kube# [ 16.811334] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54288 - "POST /api/v1/cfssl/info" 200
kube# [ 16.812461] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54290 - "POST /api/v1/cfssl/info" 200
kube# [ 16.813718] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54292 - "POST /api/v1/cfssl/info" 200
kube# [ 16.814964] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54294 - "POST /api/v1/cfssl/info" 200
kube# [ 16.816266] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1833]: Seeding docker image: /nix/store/ggrzs3gzv69xzk02ckzijc2caqv738kk-docker-image-coredns-coredns-1.5.0.tar
kube# [ 16.818179] kube-apiserver[1870]: --oidc-ca-file string If set, the OpenID server's certificate will be verified by one of the authorities in the oidc-ca-file, otherwise the host's root CA set will be used.
kube# [ 16.820410] kube-apiserver[1870]: --oidc-client-id string The client ID for the OpenID Connect client, must be set if oidc-issuer-url is set.
kube# [ 16.822384] kube-apiserver[1870]: --oidc-groups-claim string If provided, the name of a custom OpenID Connect claim for specifying user groups. The claim value is expected to be a string or array of strings. This flag is experimental, please see the authentication documentation for further details.
kube# [ 16.825369] kube-apiserver[1870]: --oidc-groups-prefix string If provided, all groups will be prefixed with this value to prevent conflicts with other authentication strategies.
kube# [ 16.827470] kube-apiserver[1870]: --oidc-issuer-url string The URL of the OpenID issuer, only HTTPS scheme will be accepted. If set, it will be used to verify the OIDC JSON Web Token (JWT).
kube# [ 16.829520] kube-apiserver[1870]: --oidc-required-claim mapStringString A key=value pair that describes a required claim in the ID Token. If set, the claim is verified to be present in the ID Token with a matching value. Repeat this flag to specify multiple claims.
kube# [ 16.831935] kube-apiserver[1870]: --oidc-signing-algs strings Comma-separated list of allowed JOSE asymmetric signing algorithms. JWTs with a 'alg' header value not in this list will be rejected. Values are defined by RFC 7518 https://tools.ietf.org/html/rfc7518#section-3.1. (default [RS256])
kube# [ 16.834732] kube-apiserver[1870]: --oidc-username-claim string The OpenID claim to use as the user name. Note that claims other than the default ('sub') is not guaranteed to be unique and immutable. This flag is experimental, please see the authentication documentation for further details. (default "sub")
kube# [ 16.837609] certmgr[1923]: 2020/01/27 01:28:44 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/apiserverEtcdClient.json
kube# [ 16.839407] certmgr[1923]: 2020/01/27 01:28:44 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/apiserverKubeletClient.json
kube# [ 16.841385] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54296 - "POST /api/v1/cfssl/info" 200
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# [ 16.842623] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54298 - "POST /api/v1/cfssl/info" 200
kube# [ 16.844047] kube-apiserver[1870]: --oidc-username-prefix string If provided, all usernames will be prefixed with this value. If not provided, username claims other than 'email' are prefixed by the issuer URL to avoid clashes. To skip any prefixing, provide the value '-'.
kube# [ 16.846598] kube-apiserver[1870]: --requestheader-allowed-names strings List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
kube# [ 16.849580] kube-apiserver[1870]: --requestheader-client-ca-file string Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
kube# [ 16.853751] kube-apiserver[1870]: --requestheader-extra-headers-prefix strings List of request header prefixes to inspect. X-Remote-Extra- is suggested.
kube# [ 16.856302] kube-apiserver[1870]: --requestheader-group-headers strings List of request headers to inspect for groups. X-Remote-Group is suggested.
kube# [ 16.858668] kube-apiserver[1870]: --requestheader-username-headers strings List of request headers to inspect for usernames. X-Remote-User is common.
kube# [ 16.860831] kube-apiserver[1870]: --service-account-issuer string Identifier of the service account token issuer. The issuer will assert this identifier in "iss" claim of issued tokens. This value is a string or URI.
kube# [ 16.863759] kube-apiserver[1870]: --service-account-key-file stringArray File containing PEM-encoded x509 RSA or ECDSA private or public keys, used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. If unspecified, --tls-private-key-file is used. Must be specified when --service-account-signing-key is provided
kube# [ 16.868090] kube-apiserver[1870]: --service-account-lookup If true, validate ServiceAccount tokens exist in etcd as part of authentication. (default true)
kube# [ 16.870238] kube-apiserver[1870]: --service-account-max-token-expiration duration The maximum validity duration of a token created by the service account token issuer. If an otherwise valid TokenRequest with a validity duration larger than this value is requested, a token will be issued with a validity duration of this value.
kube# [ 16.873099] kube-apiserver[1870]: --token-auth-file string If set, the file that will be used to secure the secure port of the API server via token authentication.
kube# [ 16.876100] kube-apiserver[1870]: Authorization flags:
kube# [ 16.877278] kube-apiserver[1870]: --authorization-mode strings Ordered list of plug-ins to do authorization on secure port. Comma-delimited list of: AlwaysAllow,AlwaysDeny,ABAC,Webhook,RBAC,Node. (default [AlwaysAllow])
kube# [ 16.879607] kube-apiserver[1870]: --authorization-policy-file string File with authorization policy in json line by line format, used with --authorization-mode=ABAC, on the secure port.
kube# [ 16.881774] kube-apiserver[1870]: --authorization-webhook-cache-authorized-ttl duration The duration to cache 'authorized' responses from the webhook authorizer. (default 5m0s)
kube# [ 16.884314] kube-apiserver[1870]: --authorization-webhook-cache-unauthorized-ttl duration The duration to cache 'unauthorized' responses from the webhook authorizer. (default 30s)
kube# [ 16.886569] kube-apiserver[1870]: --authorization-webhook-config-file string File with webhook configuration in kubeconfig format, used with --authorization-mode=Webhook. The API server will query the remote service to determine access on the API server's secure port.
kube# [ 16.889598] kube-apiserver[1870]: Cloud provider flags:
kube# [ 16.890720] kube-apiserver[1870]: --cloud-config string The path to the cloud provider configuration file. Empty string for no configuration file.
kube# [ 16.893262] kube-apiserver[1870]: --cloud-provider string The provider for cloud services. Empty string for no provider.
kube# [ 16.894769] kube-apiserver[1870]: Api enablement flags:
kube# [ 16.895799] certmgr[1923]: 2020/01/27 01:28:44 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/apiserverProxyClient.json
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube#
kube# [ 16.900458] certmgr[1923]: 2020/01/27 01:28:44 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/clusterAdmin.json
kube: exit status 1
(0.06 seconds)
kube# [ 16.902360] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54300 - "POST /api/v1/cfssl/info" 200
kube# [ 16.903742] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54302 - "POST /api/v1/cfssl/info" 200
kube# [ 16.905186] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54304 - "POST /api/v1/cfssl/info" 200
kube# [ 16.906416] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54306 - "POST /api/v1/cfssl/info" 200
kube# [ 16.908153] systemd[1]: kube-scheduler.service: Service RestartSec=5s expired, scheduling restart.
kube# [ 16.909364] kube-apiserver[1870]: --runtime-config mapStringString A set of key=value pairs that describe runtime configuration that may be passed to apiserver. <group>/<version> (or <version> for the core group) key can be used to turn on/off specific api versions. api/all is special key to control all api versions, be careful setting it false, unless you know what you do. api/legacy is deprecated, we will remove it in the future, so stop using it. (default )
kube# [ 16.913372] kube-apiserver[1870]: Admission flags:
kube# [ 16.914082] kube-apiserver[1870]: --admission-control strings Admission is divided into two phases. In the first phase, only mutating admission plugins run. In the second phase, only validating admission plugins run. The names in the below list may represent a validating plugin, a mutating plugin, or both. The order of plugins in which they are passed to this flag does not matter. Comma-delimited list of: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. (DEPRECATED: Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.)
kube# [ 16.922850] kube-apiserver[1870]: --admission-control-config-file string File with admission control configuration.
kube# [ 16.924454] certmgr[1923]: 2020/01/27 01:28:44 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/controllerManager.json
kube# [ 16.926201] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54308 - "POST /api/v1/cfssl/info" 200
kube# [ 16.927414] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54310 - "POST /api/v1/cfssl/info" 200
kube# [ 16.928603] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54312 - "POST /api/v1/cfssl/info" 200
kube# [ 16.929837] systemd[1]: kube-scheduler.service: Scheduled restart job, restart counter is at 2.
kube# [ 16.931220] kube-apiserver[1870]: --disable-admission-plugins strings admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
kube# [ 16.939763] certmgr[1923]: 2020/01/27 01:28:44 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/controllerManagerClient.json
kube# [ 16.941364] systemd[1]: Stopped Kubernetes Scheduler Service.
kube# [ 16.942222] kube-apiserver[1870]: --enable-admission-plugins strings admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
kube# [ 16.950573] kube-apiserver[1870]: Misc flags:
kube# [ 16.951292] kube-apiserver[1870]: --allow-privileged If true, allow privileged containers. [default=false]
kube# [ 16.952673] kube-apiserver[1870]: --apiserver-count int The number of apiservers running in the cluster, must be a positive number. (In use when --endpoint-reconciler-type=master-count is enabled.) (default 1)
kube# [ 16.954988] kube-apiserver[1870]: --enable-aggregator-routing Turns on aggregator routing requests to endpoints IP rather than cluster IP.
kube# [ 16.956645] kube-apiserver[1870]: --endpoint-reconciler-type string Use an endpoint reconciler (master-count, lease, none) (default "lease")
kube# [ 16.958622] kube-apiserver[1870]: --event-ttl duration Amount of time to retain events. (default 1h0m0s)
kube# [ 16.959998] kube-apiserver[1870]: --kubelet-certificate-authority string Path to a cert file for the certificate authority.
kube# [ 16.961369] kube-apiserver[1870]: --kubelet-client-certificate string Path to a client cert file for TLS.
kube# [ 16.962596] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54314 - "POST /api/v1/cfssl/info" 200
kube# [ 16.963657] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54316 - "POST /api/v1/cfssl/info" 200
kube# [ 16.965076] systemd[1]: Started Kubernetes Scheduler Service.
kube# [ 16.966160] certmgr[1923]: 2020/01/27 01:28:44 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/etcd.json
kube# [ 16.967915] kube-apiserver[1870]: --kubelet-client-key string Path to a client key file for TLS.
kube# [ 16.969343] kube-apiserver[1870]: --kubelet-https Use https for kubelet connections. (default true)
kube# [ 16.970915] kube-apiserver[1870]: --kubelet-preferred-address-types strings List of the preferred NodeAddressTypes to use for kubelet connections. (default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP])
kube# [ 16.973304] kube-apiserver[1870]: --kubelet-read-only-port uint DEPRECATED: kubelet port. (default 10255)
kube# [ 16.974836] kube-apiserver[1870]: --kubelet-timeout duration Timeout for kubelet operations. (default 5s)
kube# [ 16.976903] kube-apiserver[1870]: --kubernetes-service-node-port int If non-zero, the Kubernetes master service (which apiserver creates/maintains) will be of type NodePort, using this as the value of the port. If zero, the Kubernetes master service will be of type ClusterIP.
kube# [ 16.980409] kube-apiserver[1870]: --max-connection-bytes-per-sec int If non-zero, throttle each user connection to this number of bytes/sec. Currently only applies to long-running requests.
kube# [ 16.983022] kube-apiserver[1870]: --proxy-client-cert-file string Client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins. It is expected that this cert includes a signature from the CA in the --requestheader-client-ca-file flag. That CA is published in the 'extension-apiserver-authentication' configmap in the kube-system namespace. Components receiving calls from kube-aggregator should use that CA to perform their half of the mutual TLS verification.
kube# [ 16.989306] kube-apiserver[1870]: --proxy-client-key-file string Private key for the client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins.
kube# [ 16.992413] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54318 - "POST /api/v1/cfssl/info" 200
kube# [ 16.993717] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54320 - "POST /api/v1/cfssl/info" 200
kube# [ 16.994925] kube-apiserver[1870]: --service-account-signing-key-file string Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)
kube# [ 16.997536] kube-apiserver[1870]: --service-cluster-ip-range ipNet A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods. (default 10.0.0.0/24)
kube# [ 16.999676] kube-apiserver[1870]: --service-node-port-range portRange A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)
kube# [ 17.001711] kube-apiserver[1870]: Global flags:
kube# [ 17.002430] kube-apiserver[1870]: --alsologtostderr log to standard error as well as files
kube# [ 17.003580] kube-apiserver[1870]: -h, --help help for kube-apiserver
kube# [ 17.004645] kube-apiserver[1870]: --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
kube# [ 17.006040] kube-apiserver[1870]: --log-dir string If non-empty, write log files in this directory
kube# [ 17.007352] kube-apiserver[1870]: --log-file string If non-empty, use this log file
kube# [ 17.008710] kube-apiserver[1870]: --log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
kube# [ 17.010853] kube-apiserver[1870]: --log-flush-frequency duration Maximum number of seconds between log flushes (default 5s)
kube# [ 17.012367] kube-apiserver[1870]: --logtostderr log to standard error instead of files (default true)
kube# [ 17.013975] kube-apiserver[1870]: --skip-headers If true, avoid header prefixes in the log messages
kube# [ 17.015362] kube-apiserver[1870]: --skip-log-headers If true, avoid headers when opening log files
kube# [ 17.016670] kube-apiserver[1870]: --stderrthreshold severity logs at or above this threshold go to stderr (default 2)
kube# [ 17.018030] kube-apiserver[1870]: -v, --v Level number for the log level verbosity
kube# [ 17.019160] kube-apiserver[1870]: --version version[=true] Print version information and quit
kube# [ 17.020366] kube-apiserver[1870]: --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
kube# [ 17.021775] certmgr[1923]: 2020/01/27 01:28:44 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/kubeProxyClient.json
kube# [ 17.023167] certmgr[1923]: 2020/01/27 01:28:44 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/kubelet.json
kube# [ 17.024540] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54322 - "POST /api/v1/cfssl/info" 200
kube# [ 17.025566] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54324 - "POST /api/v1/cfssl/info" 200
kube# [ 17.026585] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54326 - "POST /api/v1/cfssl/info" 200
kube# [ 17.027672] kube-apiserver[1870]: error: unable to load server certificate: open /var/lib/kubernetes/secrets/kube-apiserver.pem: no such file or directory
kube# [ 17.030267] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54328 - "POST /api/v1/cfssl/info" 200
kube# [ 17.035012] certmgr[1923]: 2020/01/27 01:28:44 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/kubeletClient.json
kube# [ 17.038597] cfssl[1041]: 2020/01/27 01:28:44 [INFO] 192.168.1.1:54330 - "POST /api/v1/cfssl/info" 200
kube# [ 17.040340] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1833]: Loaded image: coredns/coredns:1.5.0
kube# [ 17.050037] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54332 - "POST /api/v1/cfssl/info" 200
kube# [ 17.052148] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1833]: Linking cni package: /nix/store/9pqia3j6lxz57qa36w2niphr1f5vsirr-cni-plugins-0.8.2
kube# [ 17.055521] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/schedulerClient.json
kube# [ 17.058639] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54334 - "POST /api/v1/cfssl/info" 200
kube# [ 17.060938] systemd[1]: Started Kubernetes Kubelet Service.
kube# [ 17.072619] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54336 - "POST /api/v1/cfssl/info" 200
kube# [ 17.077452] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/serviceAccount.json
kube# [ 17.080800] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54338 - "POST /api/v1/cfssl/info" 200
kube# [ 17.091465] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54340 - "POST /api/v1/cfssl/info" 200
kube# [ 17.096131] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: watching 14 certificates
kube# [ 17.096296] certmgr[1923]: 2020/01/27 01:28:45 [WARNING] metrics: no prometheus address or port configured
kube# [ 17.096564] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: checking certificates
kube# [ 17.096830] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: queue processor is ready
kube# [ 17.098995] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54342 - "POST /api/v1/cfssl/info" 200
kube# [ 17.099224] certmgr[1923]: 2020/01/27 01:28:45 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.099494] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: queueing /system:kube-addon-manager because it isn't ready
kube# [ 17.101797] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: processing certificate spec /system:kube-addon-manager (attempt 1)
kube# [ 17.105124] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54344 - "POST /api/v1/cfssl/info" 200
kube# [ 17.105311] certmgr[1923]: 2020/01/27 01:28:45 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.105620] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: queueing /kubernetes because it isn't ready
kube# [ 17.106044] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: processing certificate spec /kubernetes (attempt 1)
kube# [ 17.108238] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54346 - "POST /api/v1/cfssl/info" 200
kube# [ 17.108674] certmgr[1923]: 2020/01/27 01:28:45 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.108979] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: queueing /etcd-client because it isn't ready
kube# [ 17.109290] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: processing certificate spec /etcd-client (attempt 1)
kube# [ 17.111127] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54348 - "POST /api/v1/cfssl/info" 200
kube# [ 17.111400] certmgr[1923]: 2020/01/27 01:28:45 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.111659] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: queueing /system:kube-apiserver because it isn't ready
kube# [ 17.111912] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: processing certificate spec /system:kube-apiserver (attempt 1)
kube# [ 17.114427] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54350 - "POST /api/v1/cfssl/info" 200
kube# [ 17.114802] certmgr[1923]: 2020/01/27 01:28:45 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.115136] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: queueing /front-proxy-client because it isn't ready
kube# [ 17.115427] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: processing certificate spec /front-proxy-client (attempt 1)
kube# [ 17.117257] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54352 - "POST /api/v1/cfssl/info" 200
kube# [ 17.117604] certmgr[1923]: 2020/01/27 01:28:45 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.117894] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: queueing /cluster-admin/O=system:masters because it isn't ready
kube# [ 17.118231] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: processing certificate spec /cluster-admin/O=system:masters (attempt 1)
kube# [ 17.130090] kubelet[2011]: Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.130292] kubelet[2011]: Flag --authentication-token-webhook has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.130578] kubelet[2011]: Flag --authentication-token-webhook-cache-ttl has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.130885] kubelet[2011]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.131080] kubelet[2011]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.131304] kubelet[2011]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.131526] kubelet[2011]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.131732] kubelet[2011]: Flag --hairpin-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.132047] kubelet[2011]: Flag --healthz-bind-address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.132292] kubelet[2011]: Flag --healthz-port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.132513] kubelet[2011]: Flag --port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.132733] kubelet[2011]: Flag --tls-cert-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.133085] kubelet[2011]: Flag --tls-private-key-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.133447] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54354 - "POST /api/v1/cfssl/info" 200
kube# [ 17.133801] certmgr[1923]: 2020/01/27 01:28:45 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.134149] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: queueing /kube-controller-manager because it isn't ready
kube# [ 17.134427] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: processing certificate spec /kube-controller-manager (attempt 1)
kube# [ 17.135372] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54356 - "POST /api/v1/cfssl/info" 200
kube# [ 17.135672] certmgr[1923]: 2020/01/27 01:28:45 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.135944] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: queueing /system:kube-controller-manager because it isn't ready
kube# [ 17.136258] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: processing certificate spec /system:kube-controller-manager (attempt 1)
kube# [ 17.162721] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54358 - "POST /api/v1/cfssl/info" 200
kube# [ 17.163085] certmgr[1923]: 2020/01/27 01:28:45 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.163386] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: queueing /kube.my.xzy because it isn't ready
kube# [ 17.163704] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: processing certificate spec /kube.my.xzy (attempt 1)
kube# [ 17.178333] systemd[1]: Started Kubernetes systemd probe.
kube# [ 17.178691] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54360 - "POST /api/v1/cfssl/info" 200
kube# [ 17.179115] certmgr[1923]: 2020/01/27 01:28:45 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.179393] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: queueing /system:kube-proxy because it isn't ready
kube# [ 17.179671] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: processing certificate spec /system:kube-proxy (attempt 1)
kube# [ 17.181596] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54362 - "POST /api/v1/cfssl/info" 200
kube# [ 17.181972] certmgr[1923]: 2020/01/27 01:28:45 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.182200] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: queueing /kube.my.xzy because it isn't ready
kube# [ 17.182413] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: processing certificate spec /kube.my.xzy (attempt 1)
kube# [ 17.184594] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54364 - "POST /api/v1/cfssl/info" 200
kube# [ 17.185011] certmgr[1923]: 2020/01/27 01:28:45 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.185237] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: queueing /system:node:kube.my.xzy/O=system:nodes because it isn't ready
kube# [ 17.185462] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: processing certificate spec /system:node:kube.my.xzy/O=system:nodes (attempt 1)
kube# [ 17.187782] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54366 - "POST /api/v1/cfssl/info" 200
kube# [ 17.188195] certmgr[1923]: 2020/01/27 01:28:45 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.188436] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: queueing /system:kube-scheduler because it isn't ready
kube# [ 17.188653] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: processing certificate spec /system:kube-scheduler (attempt 1)
kube# [ 17.188948] certmgr[1923]: 2020/01/27 01:28:45 [INFO] encoded CSR
kube# [ 17.194823] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54368 - "POST /api/v1/cfssl/info" 200
kube# [ 17.195125] certmgr[1923]: 2020/01/27 01:28:45 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.195351] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: queueing /system:service-account-signer because it isn't ready
kube# [ 17.195608] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: processing certificate spec /system:service-account-signer (attempt 1)
kube# [ 17.195972] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signature request received
kube# [ 17.197426] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signed certificate with serial number 601945382806879966609342249584711796788155093345
kube# [ 17.197768] cfssl[1041]: 2020/01/27 01:28:45 [INFO] wrote response
kube# [ 17.198234] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54370 - "POST /api/v1/cfssl/authsign" 200
kube# [ 17.198513] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-apiserver-kubelet-client.pem
kube# [ 17.202775] certmgr[1923]: 2020/01/27 01:28:45 [INFO] encoded CSR
kube# [ 17.206756] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signature request received
kube# [ 17.207105] systemd[1]: run-r4e8e5f6b1186448ea0b23218e7cbcc8e.scope: Succeeded.
kube# [ 17.208839] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signed certificate with serial number 222135681741499210314242564754368255471158688079
kube# [ 17.209259] cfssl[1041]: 2020/01/27 01:28:45 [INFO] wrote response
kube# [ 17.209505] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54372 - "POST /api/v1/cfssl/authsign" 200
kube# [ 17.209800] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-controller-manager-client.pem
kube# [ 17.211523] kubelet[2011]: I0127 01:28:45.161880 2011 server.go:425] Version: v1.15.6
kube# [ 17.211949] kubelet[2011]: I0127 01:28:45.162112 2011 plugins.go:103] No cloud provider specified.
kube# [ 17.215278] kubelet[2011]: F0127 01:28:45.165621 2011 server.go:273] failed to run Kubelet: invalid kubeconfig: invalid configuration: [unable to read client-cert /var/lib/kubernetes/secrets/kubelet-client.pem for kubelet due to open /var/lib/kubernetes/secrets/kubelet-client.pem: no such file or directory, unable to read client-key /var/lib/kubernetes/secrets/kubelet-client-key.pem for kubelet due to open /var/lib/kubernetes/secrets/kubelet-client-key.pem: no such file or directory]
kube# [ 17.219932] certmgr[1923]: 2020/01/27 01:28:45 [INFO] encoded CSR
kube# [ 17.221611] systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
kube# [ 17.221844] systemd[1]: kubelet.service: Failed with result 'exit-code'.
kube# [ 17.223666] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signature request received
kube# [ 17.225649] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signed certificate with serial number 218607042014882019746546071316226537529089767050
kube# [ 17.225945] cfssl[1041]: 2020/01/27 01:28:45 [INFO] wrote response
kube# [ 17.226186] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54374 - "POST /api/v1/cfssl/authsign" 200
kube# [ 17.226579] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-apiserver-etcd-client.pem
kube# [ 17.239940] systemd[1]: Stopped Kubernetes APIServer Service.
kube# [ 17.241374] systemd[1]: Started Kubernetes APIServer Service.
kube# [ 17.243991] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: certificate successfully processed
kube# [ 17.247045] systemd[1]: Stopped Kubernetes Controller Manager Service.
kube# [ 17.247249] systemd[1]: Stopping Kubernetes APIServer Service...
kube# [ 17.247635] systemd[1]: kube-apiserver.service: Succeeded.
kube# [ 17.248186] systemd[1]: Stopped Kubernetes APIServer Service.
kube# [ 17.249679] systemd[1]: Started Kubernetes APIServer Service.
kube# [ 17.250961] systemd[1]: Started Kubernetes Controller Manager Service.
kube# [ 17.254744] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: certificate successfully processed
kube# [ 17.254996] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: certificate successfully processed
kube# [ 17.275608] certmgr[1923]: 2020/01/27 01:28:45 [INFO] encoded CSR
kube# [ 17.277167] certmgr[1923]: 2020/01/27 01:28:45 [INFO] encoded CSR
kube# [ 17.279930] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signature request received
kube# [ 17.282652] certmgr[1923]: 2020/01/27 01:28:45 [INFO] encoded CSR
kube# [ 17.282852] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signed certificate with serial number 341474848538271569432463041834098692008734198950
kube# [ 17.283169] cfssl[1041]: 2020/01/27 01:28:45 [INFO] wrote response
kube# [ 17.283590] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54376 - "POST /api/v1/cfssl/authsign" 200
kube# [ 17.283902] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-addon-manager.pem
kube# [ 17.284218] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signature request received
kube# [ 17.286506] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signature request received
kube# [ 17.286794] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signed certificate with serial number 420140138500454280903522078681331052702057060001
kube# [ 17.287187] cfssl[1041]: 2020/01/27 01:28:45 [INFO] wrote response
kube# [ 17.287410] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54378 - "POST /api/v1/cfssl/authsign" 200
kube# [ 17.287699] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/service-account.pem
kube# [ 17.288322] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signed certificate with serial number 315139239274716544656179481590630482153511247309
kube# [ 17.288616] cfssl[1041]: 2020/01/27 01:28:45 [INFO] wrote response
kube# [ 17.288920] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54380 - "POST /api/v1/cfssl/authsign" 200
kube# [ 17.289276] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/cluster-admin.pem
kube# [ 17.289565] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: certificate successfully processed
kube# [ 17.313356] systemd[1]: Stopped Kubernetes addon manager.
kube# [ 17.314662] systemd[1]: Starting Kubernetes addon manager...
kube# [ 17.317960] certmgr[1923]: 2020/01/27 01:28:45 [ERROR] manager: exit status 3
kube# [ 17.318082] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: certificate successfully processed
kube# [ 17.321245] certmgr[1923]: 2020/01/27 01:28:45 [INFO] encoded CSR
kube# [ 17.322824] kube-controller-manager[2038]: Flag --port has been deprecated, see --secure-port instead.
kube# [ 17.324611] certmgr[1923]: 2020/01/27 01:28:45 [INFO] encoded CSR
kube# [ 17.328214] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signature request received
kube# [ 17.330130] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signature request received
kube# [ 17.330404] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signed certificate with serial number 674451656203374001107963741514483024687981110678
kube# [ 17.330777] cfssl[1041]: 2020/01/27 01:28:45 [INFO] wrote response
kube# [ 17.331175] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54382 - "POST /api/v1/cfssl/authsign" 200
kube# [ 17.334376] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kubelet-client.pem
kube# [ 17.335891] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signed certificate with serial number 140710308877988812430821780117046171776407250199
kube# [ 17.336239] cfssl[1041]: 2020/01/27 01:28:45 [INFO] wrote response
kube# [ 17.336857] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54384 - "POST /api/v1/cfssl/authsign" 200
kube# [ 17.337223] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-apiserver-proxy-client.pem
kube# [ 17.340740] kube-apiserver[2037]: Flag --insecure-bind-address has been deprecated, This flag will be removed in a future version.
kube# [ 17.341086] kube-apiserver[2037]: Flag --insecure-port has been deprecated, This flag will be removed in a future version.
kube# [ 17.341586] kube-apiserver[2037]: I0127 01:28:45.290920 2037 server.go:560] external host was not specified, using 192.168.1.1
kube# [ 17.341856] kube-apiserver[2037]: I0127 01:28:45.291692 2037 server.go:147] Version: v1.15.6
kube# [ 17.342154] kube-apiserver[2037]: Error: unable to load server certificate: open /var/lib/kubernetes/secrets/kube-apiserver.pem: no such file or directory
kube# [ 17.342474] kube-apiserver[2037]: Usage:
kube# [ 17.342677] kube-apiserver[2037]: kube-apiserver [flags]
kube# [ 17.344087] kube-apiserver[2037]: Generic flags:
kube# [ 17.344319] kube-apiserver[2037]: --advertise-address ip The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.
kube# [ 17.344532] kube-apiserver[2037]: --cloud-provider-gce-lb-src-cidrs cidrs CIDRs opened in GCE firewall for LB traffic proxy & health checks (default 130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16)
kube# [ 17.344742] kube-apiserver[2037]: --cors-allowed-origins strings List of allowed origins for CORS, comma separated. An allowed origin can be a regular expression to support subdomain matching. If this list is empty CORS will not be enabled.
kube# [ 17.345135] kube-apiserver[2037]: --default-not-ready-toleration-seconds int Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration. (default 300)
kube# [ 17.345342] kube-apiserver[2037]: --default-unreachable-toleration-seconds int Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration. (default 300)
kube# [ 17.345554] kube-apiserver[2037]: --enable-inflight-quota-handler If true, replace the max-in-flight handler with an enhanced one that queues and dispatches with priority and fairness
kube# [ 17.345801] kube-apiserver[2037]: --external-hostname string The hostname to use when generating externalized URLs for this master (e.g. Swagger API Docs).
kube# [ 17.346159] kube-apiserver[2037]: --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
kube# [ 17.347281] kube-apiserver[2037]: APIListChunking=true|false (BETA - default=true)
kube# [ 17.347558] kube-apiserver[2037]: APIResponseCompression=true|false (ALPHA - default=false)
kube# [ 17.347756] kube-apiserver[2037]: AllAlpha=true|false (ALPHA - default=false)
kube# [ 17.348207] kube-apiserver[2037]: AppArmor=true|false (BETA - default=true)
kube# [ 17.348467] kube-apiserver[2037]: AttachVolumeLimit=true|false (BETA - default=true)
kube# [ 17.348649] kube-apiserver[2037]: BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
kube# [ 17.348855] kube-apiserver[2037]: BlockVolume=true|false (BETA - default=true)
kube# [ 17.349155] kube-apiserver[2037]: BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
kube# [ 17.349333] kube-apiserver[2037]: CPUManager=true|false (BETA - default=true)
kube# [ 17.349543] kube-apiserver[2037]: CRIContainerLogRotation=true|false (BETA - default=true)
kube# [ 17.349752] kube-apiserver[2037]: CSIBlockVolume=true|false (BETA - default=true)
kube# [ 17.350106] kube-apiserver[2037]: CSIDriverRegistry=true|false (BETA - default=true)
kube# [ 17.350342] kube-apiserver[2037]: CSIInlineVolume=true|false (ALPHA - default=false)
kube# [ 17.350568] kube-apiserver[2037]: CSIMigration=true|false (ALPHA - default=false)
kube# [ 17.350763] kube-apiserver[2037]: CSIMigrationAWS=true|false (ALPHA - default=false)
kube# [ 17.351114] kube-apiserver[2037]: CSIMigrationAzureDisk=true|false (ALPHA - default=false)
kube# [ 17.351322] kube-apiserver[2037]: CSIMigrationAzureFile=true|false (ALPHA - default=false)
kube# [ 17.351517] kube-apiserver[2037]: CSIMigrationGCE=true|false (ALPHA - default=false)
kube# [ 17.351724] kube-apiserver[2037]: CSIMigrationOpenStack=true|false (ALPHA - default=false)
kube# [ 17.379748] kube-apiserver[2037]: CSINodeInfo=true|false (BETA - default=true)
kube# [ 17.380140] kube-apiserver[2037]: CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
kube# [ 17.380335] kube-apiserver[2037]: CustomResourceDefaulting=true|false (ALPHA - default=false)
kube# [ 17.380686] certmgr[1923]: 2020/01/27 01:28:45 [INFO] encoded CSR
kube# [ 17.381001] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-controller-manager.pem
kube# [ 17.381284] certmgr[1923]: 2020/01/27 01:28:45 [INFO] encoded CSR
kube# [ 17.381501] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kubelet.pem
kube# [ 17.382227] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signature request received
kube# [ 17.382663] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signed certificate with serial number 354886023617478811752664144931389429121153006154
kube# [ 17.382966] cfssl[1041]: 2020/01/27 01:28:45 [INFO] wrote response
kube# [ 17.383227] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54386 - "POST /api/v1/cfssl/authsign" 200
kube# [ 17.383438] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signature request received
kube# [ 17.383662] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signed certificate with serial number 492105009388774930411205790655875546585992543668
kube# [ 17.384036] cfssl[1041]: 2020/01/27 01:28:45 [INFO] wrote response
kube# [ 17.384278] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54388 - "POST /api/v1/cfssl/authsign" 200
kube# [ 17.384538] kube-apiserver[2037]: CustomResourcePublishOpenAPI=true|false (BETA - default=true)
kube# [ 17.384822] kube-apiserver[2037]: CustomResourceSubresources=true|false (BETA - default=true)
kube# [ 17.385226] kube-apiserver[2037]: CustomResourceValidation=true|false (BETA - default=true)
kube# [ 17.385460] kube-apiserver[2037]: CustomResourceWebhookConversion=true|false (BETA - default=true)
kube# [ 17.385668] kube-apiserver[2037]: DebugContainers=true|false (ALPHA - default=false)
kube# [ 17.385908] kube-apiserver[2037]: DevicePlugins=true|false (BETA - default=true)
kube# [   17.386126] kube-apiserver[2037]:                             DryRun=true|false (BETA - default=true)
kube# [   17.456890] serial8250: too much work for irq4
kube# [ 17.386369] kube-apiserver[2037]: DynamicAuditing=true|false (ALPHA - default=false)
kube# [ 17.386615] kube-apiserver[2037]: DynamicKubeletConfig=true|false (BETA - default=true)
kube# [ 17.386950] kube-apiserver[2037]: ExpandCSIVolumes=true|false (ALPHA - default=false)
kube# [ 17.387263] kube-apiserver[2037]: ExpandInUsePersistentVolumes=true|false (BETA - default=true)
kube# [ 17.387476] kube-apiserver[2037]: ExpandPersistentVolumes=true|false (BETA - default=true)
kube# [ 17.387678] kube-apiserver[2037]: ExperimentalCriticalPodAnnotation=true|false (ALPHA - default=false)
kube# [ 17.387957] kube-apiserver[2037]: ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
kube# [ 17.388142] kube-apiserver[2037]: HyperVContainer=true|false (ALPHA - default=false)
kube# [ 17.388389] kube-apiserver[2037]: KubeletPodResources=true|false (BETA - default=true)
kube# [ 17.388614] kube-apiserver[2037]: LocalStorageCapacityIsolation=true|false (BETA - default=true)
kube# [ 17.388842] kube-apiserver[2037]: LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
kube# [ 17.389182] kube-apiserver[2037]: MountContainers=true|false (ALPHA - default=false)
kube# [ 17.389432] kube-apiserver[2037]: NodeLease=true|false (BETA - default=true)
kube# [ 17.389633] kube-apiserver[2037]: NonPreemptingPriority=true|false (ALPHA - default=false)
kube# [ 17.389854] kube-apiserver[2037]: PodShareProcessNamespace=true|false (BETA - default=true)
kube# [ 17.390149] kube-apiserver[2037]: ProcMountType=true|false (ALPHA - default=false)
kube# [ 17.390351] kube-apiserver[2037]: QOSReserved=true|false (ALPHA - default=false)
kube# [ 17.390601] kube-apiserver[2037]: RemainingItemCount=true|false (ALPHA - default=false)
kube# [ 17.419281] kube-apiserver[2037]: RequestManagement=true|false (ALPHA - default=false)
kube# [ 17.419449] kube-apiserver[2037]: ResourceLimitsPriorityFunction=true|false (ALPHA - default=false)
kube# [ 17.419664] kube-apiserver[2037]: ResourceQuotaScopeSelectors=true|false (BETA - default=true)
kube# [ 17.419923] kube-apiserver[2037]: RotateKubeletClientCertificate=true|false (BETA - default=true)
kube# [ 17.420113] kube-apiserver[2037]: RotateKubeletServerCertificate=true|false (BETA - default=true)
kube# [ 17.420300] kube-apiserver[2037]: RunAsGroup=true|false (BETA - default=true)
kube# [ 17.420513] kube-apiserver[2037]: RuntimeClass=true|false (BETA - default=true)
kube# [ 17.420714] kube-apiserver[2037]: SCTPSupport=true|false (ALPHA - default=false)
kube# [ 17.420985] kube-apiserver[2037]: ScheduleDaemonSetPods=true|false (BETA - default=true)
kube# [ 17.421159] kube-apiserver[2037]: ServerSideApply=true|false (ALPHA - default=false)
kube# [ 17.421355] kube-apiserver[2037]: ServiceLoadBalancerFinalizer=true|false (ALPHA - default=false)
kube# [ 17.421579] kube-apiserver[2037]: ServiceNodeExclusion=true|false (ALPHA - default=false)
kube# [ 17.421984] systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=1/FAILURE
kube# [ 17.422302] certmgr[1923]: 2020/01/27 01:28:45 [INFO] encoded CSR
kube# [ 17.422531] certmgr[1923]: 2020/01/27 01:28:45 [INFO] encoded CSR
kube# [ 17.422734] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-apiserver.pem
kube# [ 17.423085] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-proxy-client.pem
kube# [ 17.423315] kube-apiserver[2037]: StorageVersionHash=true|false (BETA - default=true)
kube# [ 17.423573] kube-apiserver[2037]: StreamingProxyRedirects=true|false (BETA - default=true)
kube# [ 17.423784] kube-apiserver[2037]: SupportNodePidsLimit=true|false (BETA - default=true)
kube# [ 17.424123] kube-apiserver[2037]: SupportPodPidsLimit=true|false (BETA - default=true)
kube# [ 17.424315] kube-apiserver[2037]: Sysctls=true|false (BETA - default=true)
kube# [ 17.424522] kube-apiserver[2037]: TTLAfterFinished=true|false (ALPHA - default=false)
kube# [ 17.424733] kube-apiserver[2037]: TaintBasedEvictions=true|false (BETA - default=true)
kube# [ 17.425074] kube-apiserver[2037]: TaintNodesByCondition=true|false (BETA - default=true)
kube# [ 17.425260] kube-apiserver[2037]: TokenRequest=true|false (BETA - default=true)
kube# [ 17.425462] kube-apiserver[2037]: TokenRequestProjection=true|false (BETA - default=true)
kube# [ 17.425679] kube-apiserver[2037]: ValidateProxyRedirects=true|false (BETA - default=true)
kube# [ 17.425940] kube-apiserver[2037]: VolumePVCDataSource=true|false (ALPHA - default=false)
kube# [ 17.426116] kube-apiserver[2037]: VolumeSnapshotDataSource=true|false (ALPHA - default=false)
kube# [ 17.426354] kube-apiserver[2037]: VolumeSubpathEnvExpansion=true|false (BETA - default=true)
kube# [ 17.426557] kube-apiserver[2037]: WatchBookmark=true|false (ALPHA - default=false)
kube# [ 17.426792] kube-apiserver[2037]: WinDSR=true|false (ALPHA - default=false)
kube# [ 17.427115] kube-apiserver[2037]: WinOverlay=true|false (ALPHA - default=false)
kube# [ 17.427305] kube-apiserver[2037]: WindowsGMSA=true|false (ALPHA - default=false)
kube# [ 17.427554] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signature request received
kube# [ 17.427830] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signature request received
kube# [ 17.428062] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signed certificate with serial number 486675901900981832001884084685621385372869911772
kube# [ 17.454547] cfssl[1041]: 2020/01/27 01:28:45 [INFO] wrote response
kube# [ 17.454727] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54390 - "POST /api/v1/cfssl/authsign" 200
kube# [   17.455102] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signed certificate with serial number 480308169938770328308129873273213385425094375914
kube# [   17.513177] serial8250: too much work for irq4
kube# [ 17.455319] cfssl[1041]: 2020/01/27 01:28:45 [INFO] wrote response
kube# [ 17.455504] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54392 - "POST /api/v1/cfssl/authsign" 200
kube# [ 17.455758] systemd[1]: kube-apiserver.service: Failed with result 'exit-code'.
kube# [ 17.456487] kube-scheduler[1977]: I0127 01:28:45.372786 1977 serving.go:319] Generated self-signed cert in-memory
kube# [ 17.456758] kube-apiserver[2037]: --master-service-namespace string DEPRECATED: the namespace from which the kubernetes master services should be injected into pods. (default "default")
kube# [ 17.457160] kube-apiserver[2037]: --max-mutating-requests-inflight int The maximum number of mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. (default 200)
kube# [ 17.457350] kube-apiserver[2037]: --max-requests-inflight int The maximum number of non-mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. (default 400)
kube# [ 17.457573] kube-apiserver[2037]: --min-request-timeout int An optional field indicating the minimum number of seconds a handler must keep a request open before timing it out. Currently only honored by the watch request handler, which picks a randomized value above this number as the connection timeout, to spread out load. (default 1800)
kube# [ 17.457772] kube-apiserver[2037]: --request-timeout duration An optional field indicating the duration a handler must keep a request open before timing it out. This is the default request timeout for requests but may be overridden by flags such as --min-request-timeout for specific types of requests. (default 1m0s)
kube# [ 17.458142] kube-apiserver[2037]: --target-ram-mb int Memory limit for apiserver in MB (used to configure sizes of caches, etc.)
kube# [ 17.458359] kube-apiserver[2037]: Etcd flags:
kube# [ 17.458563] kube-apiserver[2037]: --default-watch-cache-size int Default watch cache size. If zero, watch cache will be disabled for resources that do not have a default watch size set. (default 100)
kube# [ 17.459242] kube-apiserver[2037]: --delete-collection-workers int Number of workers spawned for DeleteCollection call. These are used to speed up namespace cleanup. (default 1)
kube# [ 17.459365] kube-apiserver[2037]: --enable-garbage-collector Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-controller-manager. (default true)
kube# [ 17.459575] kube-apiserver[2037]: --encryption-provider-config string The file containing configuration for encryption providers to be used for storing secrets in etcd
kube# [ 17.459992] kube-apiserver[2037]: --etcd-cafile string SSL Certificate Authority file used to secure etcd communication.
kube# [ 17.460258] kube-apiserver[2037]: --etcd-certfile string SSL certification file used to secure etcd communication.
kube# [ 17.460456] kube-apiserver[2037]: --etcd-compaction-interval duration The interval of compaction requests. If 0, the compaction request from apiserver is disabled. (default 5m0s)
kube# [ 17.460654] kube-apiserver[2037]: --etcd-count-metric-poll-period duration Frequency of polling etcd for number of resources per type. 0 disables the metric collection. (default 1m0s)
kube# [ 17.460936] kube-apiserver[2037]: --etcd-keyfile string SSL key file used to secure etcd communication.
kube# [ 17.461264] kube-apiserver[2037]: --etcd-prefix string The prefix to prepend to all resource paths in etcd. (default "/registry")
kube# [ 17.461488] kube-apiserver[2037]: --etcd-servers strings List of etcd servers to connect with (scheme://ip:port), comma separated.
kube# [ 17.461686] kube-apiserver[2037]: --etcd-servers-overrides strings Per-resource etcd servers overrides, comma separated. The individual override format: group/resource#servers, where servers are URLs, semicolon separated.
kube# [ 17.461957] kube-apiserver[2037]: --storage-backend string The storage backend for persistence. Options: 'etcd3' (default).
kube# [ 17.462184] kube-apiserver[2037]: --storage-media-type string The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting. (default "application/vnd.kubernetes.protobuf")
kube# [ 17.489180] kube-apiserver[2037]: --watch-cache Enable watch caching in the apiserver (default true)
kube# [ 17.489396] kube-apiserver[2037]: --watch-cache-sizes strings Watch cache size settings for some resources (pods, nodes, etc.), comma separated. The individual setting format: resource[.group]#size, where resource is lowercase plural (no version), group is omitted for resources of apiVersion v1 (the legacy core API) and included for others, and size is a number. It takes effect when watch-cache is enabled. Some resources (replicationcontrollers, endpoints, nodes, pods, services, apiservices.apiregistration.k8s.io) have system defaults set by heuristics, others default to default-watch-cache-size
kube# [ 17.489680] kube-apiserver[2037]: Secure serving flags:
kube# [ 17.489945] kube-apiserver[2037]: --bind-address ip The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank, all interfaces will be used (0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 0.0.0.0)
kube# [ 17.490266] kube-apiserver[2037]: --cert-dir string The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored. (default "/var/run/kubernetes")
kube# [ 17.490531] kube-apiserver[2037]: --http2-max-streams-per-connection int The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
kube# [ 17.490791] kube-apiserver[2037]: --secure-port int The port on which to serve HTTPS with authentication and authorization.It cannot be switched off with 0. (default 6443)
kube# [ 17.491153] kube-apiserver[2037]: --tls-cert-file string File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
kube# [ 17.491387] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signature request received
kube# [ 17.491640] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signed certificate with serial number 391279109554118600268666840150317808280410740122
kube# [ 17.492156] cfssl[1041]: 2020/01/27 01:28:45 [INFO] wrote response
kube# [ 17.492393] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54394 - "POST /api/v1/cfssl/authsign" 200
kube# [ 17.492593] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signature request received
kube# [ 17.492811] cfssl[1041]: 2020/01/27 01:28:45 [INFO] signed certificate with serial number 77674929584260420823904248822316859262163253105
kube# [ 17.493149] cfssl[1041]: 2020/01/27 01:28:45 [INFO] wrote response
kube# [ 17.493339] cfssl[1041]: 2020/01/27 01:28:45 [INFO] 192.168.1.1:54396 - "POST /api/v1/cfssl/authsign" 200
kube# [ 17.493587] certmgr[1923]: 2020/01/27 01:28:45 [INFO] encoded CSR
kube# [ 17.493847] certmgr[1923]: 2020/01/27 01:28:45 [INFO] encoded CSR
kube# [ 17.494099] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/etcd.pem
kube# [ 17.494309] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-scheduler-client.pem
kube# [ 17.570198] serial8250: too much work for irq4
kube# [ 17.494541] kube-apiserver[2037]: --tls-cipher-suites strings Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be use. Possible values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA
kube# [ 17.521215] kube-apiserver[2037]: --tls-min-version string Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
kube# [ 17.521397] kube-apiserver[2037]: --tls-private-key-file string File containing the default x509 private key matching --tls-cert-file.
kube# [ 17.521602] kube-apiserver[2037]: --tls-sni-cert-key namedCertKey A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])
kube# [ 17.521816] kube-apiserver[2037]: Insecure serving flags:
kube# [ 17.522178] kube-apiserver[2037]: --address ip The IP address on which to serve the insecure --port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 127.0.0.1) (DEPRECATED: see --bind-address instead.)
kube# [ 17.522447] kube-apiserver[2037]: --insecure-bind-address ip The IP address on which to serve the --insecure-port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 127.0.0.1) (DEPRECATED: This flag will be removed in a future version.)
kube# [ 17.522630] kube-apiserver[2037]: --insecure-port int The port on which to serve unsecured, unauthenticated access. (default 8080) (DEPRECATED: This flag will be removed in a future version.)
kube# [ 17.522826] kube-apiserver[2037]: --port int The port on which to serve unsecured, unauthenticated access. Set to 0 to disable. (default 8080) (DEPRECATED: see --secure-port instead.)
kube# [ 17.523087] kube-apiserver[2037]: Auditing flags:
kube# [ 17.523291] kube-apiserver[2037]: --audit-dynamic-configuration Enables dynamic audit configuration. This feature also requires the DynamicAuditing feature flag
kube# [ 17.523485] kube-apiserver[2037]: --audit-log-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
kube# [ 17.523689] kube-apiserver[2037]: --audit-log-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 1)
kube# [ 17.523963] kube-apiserver[2037]: --audit-log-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.
kube# [ 17.524242] kube-apiserver[2037]: --audit-log-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.
kube# [ 17.524442] kube-apiserver[2037]: --audit-log-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode.
kube# [ 17.524646] kube-apiserver[2037]: --audit-log-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode.
kube# [ 17.524849] kube-apiserver[2037]: --audit-log-format string Format of saved audits. "legacy" indicates 1-line text format for each event. "json" indicates structured json format. Known formats are legacy,json. (default "json")
kube# [ 17.525093] kube-apiserver[2037]: --audit-log-maxage int The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.
kube# [ 17.525347] kube-controller-manager[2038]: I0127 01:28:45.456102 2038 serving.go:319] Generated self-signed cert in-memory
kube# [ 17.525691] kube-apiserver[2037]: --audit-log-maxbackup int The maximum number of old audit log files to retain.
kube# [ 17.526087] kube-apiserver[2037]: --audit-log-maxsize int The maximum size in megabytes of the audit log file before it gets rotated.
kube# [ 17.526288] kube-apiserver[2037]: --audit-log-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "blocking")
kube# [ 17.526492] kube-apiserver[2037]: --audit-log-path string If set, all requests coming to the apiserver will be logged to this file. '-' means standard out.
kube# [ 17.553203] kube-apiserver[2037]: --audit-log-truncate-enabled Whether event and batch truncating is enabled.
kube# [ 17.553397] kube-apiserver[2037]: --audit-log-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
kube# [ 17.553609] kube-apiserver[2037]: --audit-log-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
kube# [ 17.553820] kube-apiserver[2037]: --audit-log-version string API group and version used for serializing audit events written to log. (default "audit.k8s.io/v1")
kube# [ 17.554096] kube-apiserver[2037]: --audit-policy-file string Path to the file that defines the audit policy configuration.
kube# [ 17.554302] kube-apiserver[2037]: --audit-webhook-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
kube# [ 17.554479] kube-apiserver[2037]: --audit-webhook-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 400)
kube# [ 17.554739] kube-apiserver[2037]: --audit-webhook-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode. (default 30s)
kube# [ 17.555013] kube-apiserver[2037]: --audit-webhook-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode. (default 15)
kube# [ 17.555214] kube-apiserver[2037]: --audit-webhook-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode. (default true)
kube# [ 17.555441] kube-apiserver[2037]: --audit-webhook-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode. (default 10)
kube# [ 17.555683] kube-apiserver[2037]: --audit-webhook-config-file string Path to a kubeconfig formatted file that defines the audit webhook configuration.
kube# [ 17.555998] kube-apiserver[2037]: --audit-webhook-initial-backoff duration The amount of time to wait before retrying the first failed request. (default 10s)
kube# [ 17.627882] serial8250: too much work for irq4
kube# [ 17.556309] kube-apiserver[2037]: --audit-webhook-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "batch")
kube# [ 17.556551] kube-apiserver[2037]: --audit-webhook-truncate-enabled Whether event and batch truncating is enabled.
kube# [ 17.556786] kube-apiserver[2037]: --audit-webhook-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
kube# [ 17.557213] kube-apiserver[2037]: --audit-webhook-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
kube# [ 17.557439] kube-apiserver[2037]: --audit-webhook-version string API group and version used for serializing audit events written to webhook. (default "audit.k8s.io/v1")
kube# [ 17.557678] kube-apiserver[2037]: Features flags:
kube# [ 17.558104] kube-apiserver[2037]: --contention-profiling Enable lock contention profiling, if profiling is enabled
kube# [ 17.558356] kube-apiserver[2037]: --profiling Enable profiling via web interface host:port/debug/pprof/ (default true)
kube# [ 17.558614] kube-apiserver[2037]: Authentication flags:
kube# [ 17.558952] kube-apiserver[2037]: --anonymous-auth Enables anonymous requests to the secure port of the API server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated. (default true)
kube# [ 17.587098] kube-apiserver[2037]: --api-audiences strings Identifiers of the API. The service account token authenticator will validate that tokens used against the API are bound to at least one of these audiences. If the --service-account-issuer flag is configured and this flag is not, this field defaults to a single element list containing the issuer URL .
kube# [ 17.587302] kube-apiserver[2037]: --authentication-token-webhook-cache-ttl duration The duration to cache responses from the webhook token authenticator. (default 2m0s)
kube# [ 17.587499] kube-apiserver[2037]: --authentication-token-webhook-config-file string File with webhook configuration for token authentication in kubeconfig format. The API server will query the remote service to determine authentication for bearer tokens.
kube# [ 17.587704] kube-apiserver[2037]: --basic-auth-file string If set, the file that will be used to admit requests to the secure port of the API server via http basic authentication.
kube# [ 17.587963] kube-apiserver[2037]: --client-ca-file string If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
kube# [ 17.588139] kube-apiserver[2037]: --enable-bootstrap-token-auth Enable to allow secrets of type 'bootstrap.kubernetes.io/token' in the 'kube-system' namespace to be used for TLS bootstrapping authentication.
kube# [ 17.588459] kube-apiserver[2037]: --oidc-ca-file string If set, the OpenID server's certificate will be verified by one of the authorities in the oidc-ca-file, otherwise the host's root CA set will be used.
kube# [ 17.588650] kube-apiserver[2037]: --oidc-client-id string The client ID for the OpenID Connect client, must be set if oidc-issuer-url is set.
kube# [ 17.588845] kube-apiserver[2037]: --oidc-groups-claim string If provided, the name of a custom OpenID Connect claim for specifying user groups. The claim value is expected to be a string or array of strings. This flag is experimental, please see the authentication documentation for further details.
kube# [ 17.589083] kube-apiserver[2037]: --oidc-groups-prefix string If provided, all groups will be prefixed with this value to prevent conflicts with other authentication strategies.
kube# [ 17.589326] kube-apiserver[2037]: --oidc-issuer-url string The URL of the OpenID issuer, only HTTPS scheme will be accepted. If set, it will be used to verify the OIDC JSON Web Token (JWT).
kube# [ 17.589521] kube-apiserver[2037]: --oidc-required-claim mapStringString A key=value pair that describes a required claim in the ID Token. If set, the claim is verified to be present in the ID Token with a matching value. Repeat this flag to specify multiple claims.
kube# [ 17.589718] kube-apiserver[2037]: --oidc-signing-algs strings Comma-separated list of allowed JOSE asymmetric signing algorithms. JWTs with a 'alg' header value not in this list will be rejected. Values are defined by RFC 7518 https://tools.ietf.org/html/rfc7518#section-3.1. (default [RS256])
kube# [ 17.590072] kube-apiserver[2037]: --oidc-username-claim string The OpenID claim to use as the user name. Note that claims other than the default ('sub') is not guaranteed to be unique and immutable. This flag is experimental, please see the authentication documentation for further details. (default "sub")
kube# [ 17.590310] kube-apiserver[2037]: --oidc-username-prefix string If provided, all usernames will be prefixed with this value. If not provided, username claims other than 'email' are prefixed by the issuer URL to avoid clashes. To skip any prefixing, provide the value '-'.
kube# [ 17.590470] kube-apiserver[2037]: --requestheader-allowed-names strings List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
kube# [ 17.617474] kube-apiserver[2037]: --requestheader-client-ca-file string Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
kube# [ 17.617776] kube-apiserver[2037]: --requestheader-extra-headers-prefix strings List of request header prefixes to inspect. X-Remote-Extra- is suggested.
kube# [ 17.618181] kube-apiserver[2037]: --requestheader-group-headers strings List of request headers to inspect for groups. X-Remote-Group is suggested.
kube# [ 17.618413] kube-apiserver[2037]: --requestheader-username-headers strings List of request headers to inspect for usernames. X-Remote-User is common.
kube# [ 17.618653] kube-apiserver[2037]: --service-account-issuer string Identifier of the service account token issuer. The issuer will assert this identifier in "iss" claim of issued tokens. This value is a string or URI.
kube# [ 17.618975] kube-apiserver[2037]: --service-account-key-file stringArray File containing PEM-encoded x509 RSA or ECDSA private or public keys, used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. If unspecified, --tls-private-key-file is used. Must be specified when --service-account-signing-key is provided
kube# [ 17.619197] kube-apiserver[2037]: --service-account-lookup If true, validate ServiceAccount tokens exist in etcd as part of authentication. (default true)
kube# [ 17.686018] serial8250: too much work for irq4
kube# [ 17.619583] kube-apiserver[2037]: --service-account-max-token-expiration duration The maximum validity duration of a token created by the service account token issuer. If an otherwise valid TokenRequest with a validity duration larger than this value is requested, a token will be issued with a validity duration of this value.
kube# [ 17.619837] kube-apiserver[2037]: --token-auth-file string If set, the file that will be used to secure the secure port of the API server via token authentication.
kube# [ 17.620097] kube-apiserver[2037]: Authorization flags:
kube# [ 17.620338] kube-apiserver[2037]: --authorization-mode strings Ordered list of plug-ins to do authorization on secure port. Comma-delimited list of: AlwaysAllow,AlwaysDeny,ABAC,Webhook,RBAC,Node. (default [AlwaysAllow])
kube# [ 17.620587] kube-apiserver[2037]: --authorization-policy-file string File with authorization policy in json line by line format, used with --authorization-mode=ABAC, on the secure port.
kube# [ 17.620845] kube-apiserver[2037]: --authorization-webhook-cache-authorized-ttl duration The duration to cache 'authorized' responses from the webhook authorizer. (default 5m0s)
kube# [ 17.621107] kube-apiserver[2037]: --authorization-webhook-cache-unauthorized-ttl duration The duration to cache 'unauthorized' responses from the webhook authorizer. (default 30s)
kube# [ 17.621349] kube-apiserver[2037]: --authorization-webhook-config-file string File with webhook configuration in kubeconfig format, used with --authorization-mode=Webhook. The API server will query the remote service to determine access on the API server's secure port.
kube# [ 17.621596] kube-apiserver[2037]: Cloud provider flags:
kube# [ 17.621837] kube-apiserver[2037]: --cloud-config string The path to the cloud provider configuration file. Empty string for no configuration file.
kube# [ 17.622122] kube-apiserver[2037]: --cloud-provider string The provider for cloud services. Empty string for no provider.
kube# [ 17.622357] kube-apiserver[2037]: Api enablement flags:
kube# [ 17.622668] kube-apiserver[2037]: --runtime-config mapStringString A set of key=value pairs that describe runtime configuration that may be passed to apiserver. <group>/<version> (or <version> for the core group) key can be used to turn on/off specific api versions. api/all is special key to control all api versions, be careful setting it false, unless you know what you do. api/legacy is deprecated, we will remove it in the future, so stop using it. (default )
kube# [ 17.623132] kube-apiserver[2037]: Admission flags:
kube# [ 17.623374] kube-apiserver[2037]: --admission-control strings Admission is divided into two phases. In the first phase, only mutating admission plugins run. In the second phase, only validating admission plugins run. The names in the below list may represent a validating plugin, a mutating plugin, or both. The order of plugins in which they are passed to this flag does not matter. Comma-delimited list of: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. (DEPRECATED: Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.)
kube# [ 17.650767] kube-apiserver[2037]: --admission-control-config-file string File with admission control configuration.
kube# [ 17.651337] kube-apiserver[2037]: --disable-admission-plugins strings admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
kube# [ 17.651713] kube-apiserver[2037]: --enable-admission-plugins strings admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
kube# [ 17.652320] kube-apiserver[2037]: Misc flags:
kube# [ 17.652556] kube-apiserver[2037]: --allow-privileged If true, allow privileged containers. [default=false]
kube# [ 17.652795] kube-apiserver[2037]: --apiserver-count int The number of apiservers running in the cluster, must be a positive number. (In use when --endpoint-reconciler-type=master-count is enabled.) (default 1)
kube# [ 17.653202] kube-apiserver[2037]: --enable-aggregator-routing Turns on aggregator routing requests to endpoints IP rather than cluster IP.
kube# [ 17.653459] kube-apiserver[2037]: --endpoint-reconciler-type string Use an endpoint reconciler (master-count, lease, none) (default "lease")
kube# [ 17.680216] kube-apiserver[2037]: --event-ttl duration Amount of time to retain events. (default 1h0m0s)
kube# [ 17.680447] kube-apiserver[2037]: --kubelet-certificate-authority string Path to a cert file for the certificate authority.
kube# [ 17.680651] kube-apiserver[2037]: --kubelet-client-certificate string Path to a client cert file for TLS.
kube# [ 17.680971] kube-apiserver[2037]: --kubelet-client-key string Path to a client key file for TLS.
kube# [ 17.681202] kube-apiserver[2037]: --kubelet-https Use https for kubelet connections. (default true)
kube# [ 17.681405] kube-apiserver[2037]: --kubelet-preferred-address-types strings List of the preferred NodeAddressTypes to use for kubelet connections. (default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP])
kube# [ 17.681607] kube-apiserver[2037]: --kubelet-read-only-port uint DEPRECATED: kubelet port. (default 10255)
kube# [ 17.743546] serial8250: too much work for irq4
kube# [ 17.681819] kube-apiserver[2037]: --kubelet-timeout duration Timeout for kubelet operations. (default 5s)
kube# [ 17.682100] kube-apiserver[2037]: --kubernetes-service-node-port int If non-zero, the Kubernetes master service (which apiserver creates/maintains) will be of type NodePort, using this as the value of the port. If zero, the Kubernetes master service will be of type ClusterIP.
kube# [ 17.682278] kube-apiserver[2037]: --max-connection-bytes-per-sec int If non-zero, throttle each user connection to this number of bytes/sec. Currently only applies to long-running requests.
kube# [ 17.682471] kube-apiserver[2037]: --proxy-client-cert-file string Client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins. It is expected that this cert includes a signature from the CA in the --requestheader-client-ca-file flag. That CA is published in the 'extension-apiserver-authentication' configmap in the kube-system namespace. Components receiving calls from kube-aggregator should use that CA to perform their half of the mutual TLS verification.
kube# [ 17.682670] kube-apiserver[2037]: --proxy-client-key-file string Private key for the client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins.
kube# [ 17.683029] kube-apiserver[2037]: --service-account-signing-key-file string Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)
kube# [ 17.683302] kube-apiserver[2037]: --service-cluster-ip-range ipNet A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods. (default 10.0.0.0/24)
kube# [ 17.683522] kube-apiserver[2037]: --service-node-port-range portRange A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)
kube# [ 17.683736] kube-apiserver[2037]: Global flags:
kube# [ 17.684112] kube-apiserver[2037]: --alsologtostderr log to standard error as well as files
kube# [ 17.684309] kube-apiserver[2037]: -h, --help help for kube-apiserver
kube# [ 17.684520] kube-apiserver[2037]: --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
kube# [ 17.684709] kube-apiserver[2037]: --log-dir string If non-empty, write log files in this directory
kube# [ 17.685088] kube-apiserver[2037]: --log-file string If non-empty, use this log file
kube# [ 17.685283] kube-apiserver[2037]: --log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
kube# [ 17.685483] kube-apiserver[2037]: --log-flush-frequency duration Maximum number of seconds between log flushes (default 5s)
kube# [ 17.685698] kube-apiserver[2037]: --logtostderr log to standard error instead of files (default true)
kube# [ 17.686017] kube-apiserver[2037]: --skip-headers If true, avoid header prefixes in the log messages
kube# [ 17.686252] kube-apiserver[2037]: --skip-log-headers If true, avoid headers when opening log files
kube# [ 17.713666] kube-apiserver[2037]: --stderrthreshold severity logs at or above this threshold go to stderr (default 2)
kube# [ 17.713840] kube-apiserver[2037]: -v, --v Level number for the log level verbosity
kube# [ 17.714073] kube-apiserver[2037]: --version version[=true] Print version information and quit
kube# [ 17.714179] kube-apiserver[2037]: --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
kube# [ 17.714510] kube-apiserver[2037]: error: unable to load server certificate: open /var/lib/kubernetes/secrets/kube-apiserver.pem: no such file or directory
kube# [ 17.714731] kube-scheduler[1977]: W0127 01:28:45.637468 1977 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
kube# [ 17.715173] kube-scheduler[1977]: W0127 01:28:45.637532 1977 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
kube# [ 17.715436] kube-scheduler[1977]: W0127 01:28:45.637550 1977 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
kube# [ 17.752823] systemd[1]: Stopping Kubernetes Controller Manager Service...
kube# [ 17.753527] systemd[1]: Stopped Kubernetes Kubelet Service.
kube# [ 17.754457] systemd[1]: kube-controller-manager.service: Succeeded.
kube# [ 17.754742] systemd[1]: Stopped Kubernetes Controller Manager Service.
kube# [ 17.755607] systemd[1]: Stopped Kubernetes APIServer Service.
kube# [ 17.756656] kube-scheduler[1977]: I0127 01:28:45.707028 1977 server.go:142] Version: v1.15.6
kube# [ 17.757123] systemd[1]: Started Kubernetes APIServer Service.
kube# [ 17.758842] kube-scheduler[1977]: I0127 01:28:45.709209 1977 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
kube# [ 17.759436] systemd[1]: Started Kubernetes Controller Manager Service.
kube# [ 17.760128] kube-scheduler[1977]: W0127 01:28:45.710455 1977 authorization.go:47] Authorization is disabled
kube# [ 17.760412] kube-scheduler[1977]: W0127 01:28:45.710482 1977 authentication.go:55] Authentication is disabled
kube# [ 17.760567] kube-scheduler[1977]: I0127 01:28:45.710507 1977 deprecated_insecure_serving.go:51] Serving healthz insecurely on 127.0.0.1:10251
kube# [ 17.765492] systemd[1]: Starting Kubernetes Kubelet Service...
kube# [ 17.768196] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: certificate successfully processed
kube# [ 17.768458] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: certificate successfully processed
kube# [ 17.771455] kube-scheduler[1977]: I0127 01:28:45.721238 1977 secure_serving.go:116] Serving securely on [::]:10259
kube# [ 17.774799] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[2084]: Seeding docker image: /nix/store/zfkrn9hmwiay3fwj7ilwv52rd6myn4i1-docker-image-pause.tar.gz
kube# [ 17.781807] systemd[1]: Starting etcd key-value store...
kube# [ 17.782624] systemd[1]: Stopped Kubernetes Proxy Service.
kube# [ 17.784494] systemd[1]: Stopping Kubernetes Scheduler Service...
kube# [ 17.786234] systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
kube# [ 17.786472] systemd[1]: kubelet.service: Failed with result 'signal'.
kube# [ 17.786823] systemd[1]: Stopped Kubernetes Kubelet Service.
kube# [ 17.788695] systemd[1]: kube-scheduler.service: Succeeded.
kube# [ 17.789130] systemd[1]: Stopped Kubernetes Scheduler Service.
kube# [ 17.790248] systemd[1]: Stopping Kubernetes APIServer Service...
kube# [ 17.793453] systemd[1]: kube-apiserver.service: Succeeded.
kube# [ 17.793742] systemd[1]: Stopped Kubernetes APIServer Service.
kube# [ 17.796383] systemd[1]: Started Kubernetes APIServer Service.
kube# [ 17.798406] systemd[1]: Started Kubernetes Proxy Service.
kube# [ 17.800281] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: certificate successfully processed
kube# [ 17.800542] systemd[1]: Started Kubernetes Scheduler Service.
kube# [ 17.801137] systemd[1]: kubelet.service: Start request repeated too quickly.
kube# [ 17.801576] systemd[1]: kubelet.service: Failed with result 'signal'.
kube# [ 17.802197] systemd[1]: Failed to start Kubernetes Kubelet Service.
kube# [ 17.804916] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: certificate successfully processed
kube# [ 17.807487] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: certificate successfully processed
kube# [ 17.809918] certmgr[1923]: 2020/01/27 01:28:45 [ERROR] manager: exit status 1
kube# [ 17.810130] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: certificate successfully processed
kube# [ 17.810747] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2057]: unable to recognize "/nix/store/dak5nvsj8ab4dywrr2r96mfvvfvmfwav-apiserver-kubelet-api-admin-crb.json": Get https://192.168.1.1/api?timeout=32s: dial tcp 192.168.1.1:443: connect: connection refused
kube# [ 17.811251] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2057]: unable to recognize "/nix/store/q8x42ds4w9azhviqm26k5gzbs0g19wir-coredns-cr.json": Get https://192.168.1.1/api?timeout=32s: dial tcp 192.168.1.1:443: connect: connection refused
kube# [ 17.811540] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2057]: unable to recognize "/nix/store/8zqicjics4fg5dg1g50aqsrllbf5hb41-coredns-crb.json": Get https://192.168.1.1/api?timeout=32s: dial tcp 192.168.1.1:443: connect: connection refused
kube# [ 17.811920] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2057]: unable to recognize "/nix/store/92hgw2wxa9bvyi58akkj8slr5i4pln34-kube-addon-manager-cluster-lister-cr.json": Get https://192.168.1.1/api?timeout=32s: dial tcp 192.168.1.1:443: connect: connection refused
kube# [ 17.812162] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2057]: unable to recognize "/nix/store/n15m5qpvi0asyfi7356idb5ycmf5crcq-kube-addon-manager-cluster-lister-crb.json": Get https://192.168.1.1/api?timeout=32s: dial tcp 192.168.1.1:443: connect: connection refused
kube# [ 17.812385] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2057]: unable to recognize "/nix/store/l7irsk7dw8wrs7c1c4fw52rgh24lrsc7-kube-addon-manager-r.json": Get https://192.168.1.1/api?timeout=32s: dial tcp 192.168.1.1:443: connect: connection refused
kube# [ 17.812591] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2057]: unable to recognize "/nix/store/rk35vqyf2mfdgrjg53swfnv9hdamj6sb-kube-addon-manager-rb.json": Get https://192.168.1.1/api?timeout=32s: dial tcp 192.168.1.1:443: connect: connection refused
kube# [ 17.812903] certmgr[1923]: 2020/01/27 01:28:45 [ERROR] manager: exit status 1
kube# [ 17.813147] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: certificate successfully processed
kube# [ 17.827977] systemd[1]: kube-addon-manager.service: Control process exited, code=exited, status=1/FAILURE
kube# [ 17.828249] systemd[1]: kube-addon-manager.service: Failed with result 'exit-code'.
kube# [ 17.828571] systemd[1]: Failed to start Kubernetes addon manager.
kube# [ 17.829039] systemd[1]: kube-addon-manager.service: Consumed 98ms CPU time, received 320B IP traffic, sent 480B IP traffic.
kube# [ 17.833277] certmgr[1923]: 2020/01/27 01:28:45 [ERROR] manager: exit status 1
kube# [ 17.833472] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: certificate successfully processed
kube# [ 17.839000] etcd[2099]: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd.local:2379
kube# [ 17.839240] etcd[2099]: recognized and used environment variable ETCD_CERT_FILE=/var/lib/kubernetes/secrets/etcd.pem
kube# [ 17.839676] etcd[2099]: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=1
kube# [ 17.840341] etcd[2099]: recognized and used environment variable ETCD_DATA_DIR=/var/lib/etcd
kube# [ 17.840723] etcd[2099]: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd.local:2380
kube# [ 17.841126] etcd[2099]: recognized and used environment variable ETCD_INITIAL_CLUSTER=kube.my.xzy=https://etcd.local:2380
kube# [ 17.841505] etcd[2099]: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=new
kube# [ 17.841902] etcd[2099]: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
kube# [ 17.842234] etcd[2099]: recognized and used environment variable ETCD_KEY_FILE=/var/lib/kubernetes/secrets/etcd-key.pem
kube# [ 17.842597] etcd[2099]: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://127.0.0.1:2379
kube# [ 17.843009] etcd[2099]: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://127.0.0.1:2380
kube# [ 17.843358] etcd[2099]: recognized and used environment variable ETCD_NAME=kube.my.xzy
kube# [ 17.843691] etcd[2099]: recognized and used environment variable ETCD_PEER_CERT_FILE=/var/lib/kubernetes/secrets/etcd.pem
kube# [ 17.844114] etcd[2099]: recognized and used environment variable ETCD_PEER_KEY_FILE=/var/lib/kubernetes/secrets/etcd-key.pem
kube# [ 17.844458] etcd[2099]: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/var/lib/kubernetes/secrets/ca.pem
kube# [ 17.844811] etcd[2099]: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/var/lib/kubernetes/secrets/ca.pem
kube# [ 17.845191] etcd[2099]: unrecognized environment variable ETCD_DISCOVERY=
kube# [ 17.845526] etcd[2099]: etcd Version: 3.3.13
kube# [ 17.845945] etcd[2099]: Git SHA: Not provided (use ./build instead of go build)
kube# [ 17.846285] etcd[2099]: Go Version: go1.12.9
kube# [ 17.846781] etcd[2099]: Go OS/Arch: linux/amd64
kube# [ 17.847116] etcd[2099]: setting maximum number of CPUs to 16, total number of available CPUs is 16
kube# [ 17.847457] etcd[2099]: peerTLS: cert = /var/lib/kubernetes/secrets/etcd.pem, key = /var/lib/kubernetes/secrets/etcd-key.pem, ca = , trusted-ca = /var/lib/kubernetes/secrets/ca.pem, client-cert-auth = false, crl-file =
kube# [ 17.860364] etcd[2099]: listening for peers on https://127.0.0.1:2380
kube# [ 17.860614] etcd[2099]: listening for client requests on 127.0.0.1:2379
kube# [ 17.866150] kube-proxy[2107]: W0127 01:28:45.816191 2107 server.go:216] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
kube# [ 17.882151] etcd[2099]: resolving etcd.local:2380 to 127.0.0.1:2380
kube# [ 17.882453] etcd[2099]: resolving etcd.local:2380 to 127.0.0.1:2380
kube# [ 17.882767] kube-proxy[2107]: W0127 01:28:45.832621 2107 proxier.go:500] Failed to read file /lib/modules/4.19.95/modules.builtin with error open /lib/modules/4.19.95/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 17.883337] etcd[2099]: name = kube.my.xzy
kube# [ 17.883919] kube-controller-manager[2082]: Flag --port has been deprecated, see --secure-port instead.
kube# [ 17.884280] etcd[2099]: data dir = /var/lib/etcd
kube# [ 17.884520] etcd[2099]: member dir = /var/lib/etcd/member
kube# [ 17.884824] etcd[2099]: heartbeat = 100ms
kube# [ 17.885206] etcd[2099]: election = 1000ms
kube# [ 17.885565] etcd[2099]: snapshot count = 100000
kube# [ 17.886190] kube-apiserver[2106]: Flag --insecure-bind-address has been deprecated, This flag will be removed in a future version.
kube# [ 17.886443] kube-apiserver[2106]: Flag --insecure-port has been deprecated, This flag will be removed in a future version.
kube# [ 17.886730] kube-apiserver[2106]: I0127 01:28:45.835776 2106 server.go:560] external host was not specified, using 192.168.1.1
kube# [ 17.886975] kube-apiserver[2106]: I0127 01:28:45.836033 2106 server.go:147] Version: v1.15.6
kube# [ 17.887218] etcd[2099]: advertise client URLs = https://etcd.local:2379
kube# [ 17.887551] etcd[2099]: initial advertise peer URLs = https://etcd.local:2380
kube# [ 17.887936] etcd[2099]: initial cluster = kube.my.xzy=https://etcd.local:2380
kube# [ 17.893688] kube-proxy[2107]: W0127 01:28:45.844026 2107 proxier.go:513] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 17.895598] kube-proxy[2107]: W0127 01:28:45.845957 2107 proxier.go:513] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 17.897490] kube-proxy[2107]: W0127 01:28:45.847846 2107 proxier.go:513] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 17.899356] kube-proxy[2107]: W0127 01:28:45.849737 2107 proxier.go:513] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 17.901199] kube-proxy[2107]: W0127 01:28:45.851556 2107 proxier.go:513] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# [ 17.908990] etcd[2099]: starting member d579d2a9b6a65847 in cluster cd74e8f1b6ca227e
kube# [ 17.909248] etcd[2099]: d579d2a9b6a65847 became follower at term 0
kube# [ 17.909591] etcd[2099]: newRaft d579d2a9b6a65847 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
kube# [ 17.910071] etcd[2099]: d579d2a9b6a65847 became follower at term 1
kube# [ 17.923766] kube-proxy[2107]: W0127 01:28:45.874094 2107 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy
kube# [ 17.929632] etcd[2099]: simple token is not cryptographically signed
kube# [ 17.934297] etcd[2099]: starting server... [version: 3.3.13, cluster version: to_be_decided]
kube# [ 17.937463] etcd[2099]: d579d2a9b6a65847 as single-node; fast-forwarding 9 ticks (election ticks 10)
kube# [ 17.942517] etcd[2099]: added member d579d2a9b6a65847 [https://etcd.local:2380] to cluster cd74e8f1b6ca227e
kube# [ 17.944447] etcd[2099]: ClientTLS: cert = /var/lib/kubernetes/secrets/etcd.pem, key = /var/lib/kubernetes/secrets/etcd-key.pem, ca = , trusted-ca = /var/lib/kubernetes/secrets/ca.pem, client-cert-auth = true, crl-file =
kube# [ 18.016485] etcd[2099]: d579d2a9b6a65847 is starting a new election at term 1
kube# [ 18.016791] etcd[2099]: d579d2a9b6a65847 became candidate at term 2
kube# [ 18.017177] etcd[2099]: d579d2a9b6a65847 received MsgVoteResp from d579d2a9b6a65847 at term 2
kube# [ 18.017553] etcd[2099]: d579d2a9b6a65847 became leader at term 2
kube# [ 18.017933] etcd[2099]: raft.node: d579d2a9b6a65847 elected leader d579d2a9b6a65847 at term 2
kube# [ 18.018437] etcd[2099]: setting up the initial cluster version to 3.3
kube# [ 18.018795] etcd[2099]: set the initial cluster version to 3.3
kube# [ 18.019121] etcd[2099]: enabled capabilities for version 3.3
kube# [ 18.019479] etcd[2099]: published {Name:kube.my.xzy ClientURLs:[https://etcd.local:2379]} to cluster cd74e8f1b6ca227e
kube# [ 18.019807] etcd[2099]: ready to serve client requests
kube# [ 18.024012] systemd[1]: Started etcd key-value store.
kube# [ 18.025200] etcd[2099]: serving client requests on 127.0.0.1:2379
kube# [ 18.026531] certmgr[1923]: 2020/01/27 01:28:45 [INFO] manager: certificate successfully processed
kube# [ 18.101836] kube-scheduler[2112]: I0127 01:28:46.051978 2112 serving.go:319] Generated self-signed cert in-memory
kube# [ 18.280485] kube-apiserver[2106]: I0127 01:28:46.230745 2106 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
kube# [ 18.280667] kube-apiserver[2106]: I0127 01:28:46.230782 2106 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
kube# [ 18.284932] kube-apiserver[2106]: E0127 01:28:46.235271 2106 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.285031] kube-apiserver[2106]: E0127 01:28:46.235339 2106 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.285310] kube-apiserver[2106]: E0127 01:28:46.235367 2106 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.285629] kube-apiserver[2106]: E0127 01:28:46.235396 2106 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.285825] kube-apiserver[2106]: E0127 01:28:46.235431 2106 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.286038] kube-apiserver[2106]: E0127 01:28:46.235456 2106 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.286251] kube-apiserver[2106]: E0127 01:28:46.235482 2106 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.286362] kube-apiserver[2106]: E0127 01:28:46.235517 2106 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.286645] kube-apiserver[2106]: E0127 01:28:46.235562 2106 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.286937] kube-apiserver[2106]: E0127 01:28:46.235599 2106 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.287206] kube-apiserver[2106]: E0127 01:28:46.235619 2106 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.287403] kube-apiserver[2106]: E0127 01:28:46.235650 2106 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.287618] kube-apiserver[2106]: I0127 01:28:46.235669 2106 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
kube# [ 18.288020] kube-apiserver[2106]: I0127 01:28:46.235696 2106 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
kube# [ 18.306395] kube-apiserver[2106]: I0127 01:28:46.256775 2106 client.go:354] parsed scheme: ""
kube# [ 18.306512] kube-apiserver[2106]: I0127 01:28:46.256792 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.306771] kube-apiserver[2106]: I0127 01:28:46.256837 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.309432] kube-apiserver[2106]: I0127 01:28:46.259805 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.323448] kube-apiserver[2106]: I0127 01:28:46.273795 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.323643] kube-apiserver[2106]: I0127 01:28:46.273949 2106 client.go:354] parsed scheme: ""
kube# [ 18.323953] kube-apiserver[2106]: I0127 01:28:46.273985 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.326152] kube-apiserver[2106]: I0127 01:28:46.274040 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.326413] kube-apiserver[2106]: I0127 01:28:46.274075 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.326711] kube-controller-manager[2082]: I0127 01:28:46.275261 2082 serving.go:319] Generated self-signed cert in-memory
kube# [ 18.328680] kube-apiserver[2106]: I0127 01:28:46.279044 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.389343] kube-apiserver[2106]: I0127 01:28:46.339701 2106 master.go:233] Using reconciler: lease
kube# [ 18.389706] kube-apiserver[2106]: I0127 01:28:46.340081 2106 client.go:354] parsed scheme: ""
kube# [ 18.390245] kube-apiserver[2106]: I0127 01:28:46.340102 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.390496] kube-apiserver[2106]: I0127 01:28:46.340159 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.390679] kube-apiserver[2106]: I0127 01:28:46.340202 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.398134] kube-apiserver[2106]: I0127 01:28:46.348513 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.401770] kube-apiserver[2106]: I0127 01:28:46.352146 2106 client.go:354] parsed scheme: ""
kube# [ 18.401908] kube-apiserver[2106]: I0127 01:28:46.352172 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.402202] kube-apiserver[2106]: I0127 01:28:46.352209 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.402896] kube-apiserver[2106]: I0127 01:28:46.352243 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.408306] kube-apiserver[2106]: I0127 01:28:46.358685 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.413561] kube-apiserver[2106]: I0127 01:28:46.363944 2106 client.go:354] parsed scheme: ""
kube# [ 18.413675] kube-apiserver[2106]: I0127 01:28:46.363962 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.414163] kube-apiserver[2106]: I0127 01:28:46.363987 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.414430] kube-apiserver[2106]: I0127 01:28:46.364016 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.419517] kube-apiserver[2106]: I0127 01:28:46.369887 2106 client.go:354] parsed scheme: ""
kube# [ 18.419645] kube-apiserver[2106]: I0127 01:28:46.369910 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.419997] kube-apiserver[2106]: I0127 01:28:46.369950 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.420361] kube-apiserver[2106]: I0127 01:28:46.370025 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.420555] kube-apiserver[2106]: I0127 01:28:46.370137 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.425451] kube-apiserver[2106]: I0127 01:28:46.375814 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.428229] kube-apiserver[2106]: I0127 01:28:46.378604 2106 client.go:354] parsed scheme: ""
kube# [ 18.428377] kube-apiserver[2106]: I0127 01:28:46.378624 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.428748] kube-apiserver[2106]: I0127 01:28:46.378658 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.429130] kube-apiserver[2106]: I0127 01:28:46.378698 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.434814] kube-apiserver[2106]: I0127 01:28:46.385191 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.436572] kube-apiserver[2106]: I0127 01:28:46.386950 2106 client.go:354] parsed scheme: ""
kube# [ 18.436770] kube-apiserver[2106]: I0127 01:28:46.386968 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.437138] kube-apiserver[2106]: I0127 01:28:46.386999 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.437402] kube-apiserver[2106]: I0127 01:28:46.387053 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.443135] kube-apiserver[2106]: I0127 01:28:46.393488 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.443643] kube-apiserver[2106]: I0127 01:28:46.393945 2106 client.go:354] parsed scheme: ""
kube# [ 18.444198] kube-apiserver[2106]: I0127 01:28:46.393972 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.444588] kube-apiserver[2106]: I0127 01:28:46.394053 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.445009] kube-apiserver[2106]: I0127 01:28:46.394134 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.452857] kube-apiserver[2106]: I0127 01:28:46.403207 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.453439] kube-apiserver[2106]: I0127 01:28:46.403784 2106 client.go:354] parsed scheme: ""
kube# [ 18.453747] kube-apiserver[2106]: I0127 01:28:46.403823 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.454281] kube-apiserver[2106]: I0127 01:28:46.403873 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.454576] kube-apiserver[2106]: I0127 01:28:46.403930 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.458642] kube-apiserver[2106]: I0127 01:28:46.409005 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.459067] kube-apiserver[2106]: I0127 01:28:46.409304 2106 client.go:354] parsed scheme: ""
kube# [ 18.459392] kube-apiserver[2106]: I0127 01:28:46.409380 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.459719] kube-apiserver[2106]: I0127 01:28:46.409415 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.460197] kube-apiserver[2106]: I0127 01:28:46.409455 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.465276] kube-apiserver[2106]: I0127 01:28:46.415648 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.465766] kube-apiserver[2106]: I0127 01:28:46.416104 2106 client.go:354] parsed scheme: ""
kube# [ 18.466209] kube-apiserver[2106]: I0127 01:28:46.416132 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.466515] kube-apiserver[2106]: I0127 01:28:46.416199 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.466818] kube-apiserver[2106]: I0127 01:28:46.416267 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.471974] kube-apiserver[2106]: I0127 01:28:46.422356 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.472635] kube-apiserver[2106]: I0127 01:28:46.422943 2106 client.go:354] parsed scheme: ""
kube# [ 18.473212] kube-apiserver[2106]: I0127 01:28:46.422981 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.473528] kube-apiserver[2106]: I0127 01:28:46.423063 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.473734] kube-apiserver[2106]: I0127 01:28:46.423126 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.477630] kube-apiserver[2106]: I0127 01:28:46.428000 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.478159] kube-apiserver[2106]: I0127 01:28:46.428525 2106 client.go:354] parsed scheme: ""
kube# [ 18.478489] kube-apiserver[2106]: I0127 01:28:46.428556 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.478802] kube-apiserver[2106]: I0127 01:28:46.428599 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.479203] kube-apiserver[2106]: I0127 01:28:46.428636 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.483080] kube-apiserver[2106]: I0127 01:28:46.433450 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.483712] kube-apiserver[2106]: I0127 01:28:46.434050 2106 client.go:354] parsed scheme: ""
kube# [ 18.484208] kube-apiserver[2106]: I0127 01:28:46.434084 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.484532] kube-apiserver[2106]: I0127 01:28:46.434158 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.484919] kube-apiserver[2106]: I0127 01:28:46.434222 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.493209] kube-apiserver[2106]: I0127 01:28:46.443570 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.493752] kube-apiserver[2106]: I0127 01:28:46.444127 2106 client.go:354] parsed scheme: ""
kube# [ 18.494155] kube-apiserver[2106]: I0127 01:28:46.444150 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.494436] kube-apiserver[2106]: I0127 01:28:46.444441 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.494638] kube-apiserver[2106]: I0127 01:28:46.444471 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.498906] kube-apiserver[2106]: I0127 01:28:46.449189 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.500200] kube-apiserver[2106]: I0127 01:28:46.450585 2106 client.go:354] parsed scheme: ""
kube# [ 18.500541] kube-apiserver[2106]: I0127 01:28:46.450604 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.500782] kube-apiserver[2106]: I0127 01:28:46.450661 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.501209] kube-apiserver[2106]: I0127 01:28:46.450721 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.505254] kube-apiserver[2106]: I0127 01:28:46.455490 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.505747] kube-apiserver[2106]: I0127 01:28:46.455802 2106 client.go:354] parsed scheme: ""
kube# [ 18.506206] kube-apiserver[2106]: I0127 01:28:46.455828 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.506490] kube-apiserver[2106]: I0127 01:28:46.455862 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.506735] kube-apiserver[2106]: I0127 01:28:46.455934 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.511441] kube-apiserver[2106]: I0127 01:28:46.461808 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.512012] kube-apiserver[2106]: I0127 01:28:46.462359 2106 client.go:354] parsed scheme: ""
kube# [ 18.512403] kube-apiserver[2106]: I0127 01:28:46.462393 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.512707] kube-apiserver[2106]: I0127 01:28:46.462441 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.513143] kube-apiserver[2106]: I0127 01:28:46.462483 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.518018] kube-apiserver[2106]: I0127 01:28:46.468221 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.521547] kube-apiserver[2106]: I0127 01:28:46.471935 2106 client.go:354] parsed scheme: ""
kube# [ 18.521671] kube-apiserver[2106]: I0127 01:28:46.471952 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.522088] kube-apiserver[2106]: I0127 01:28:46.471978 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.522437] kube-apiserver[2106]: I0127 01:28:46.472017 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.526507] kube-apiserver[2106]: I0127 01:28:46.476848 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.591830] kube-apiserver[2106]: I0127 01:28:46.542159 2106 client.go:354] parsed scheme: ""
kube# [ 18.592046] kube-apiserver[2106]: I0127 01:28:46.542201 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.592289] kube-apiserver[2106]: I0127 01:28:46.542250 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.592646] kube-apiserver[2106]: I0127 01:28:46.542288 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.597210] kube-apiserver[2106]: I0127 01:28:46.547572 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.597607] kube-apiserver[2106]: I0127 01:28:46.547956 2106 client.go:354] parsed scheme: ""
kube# [ 18.597953] kube-apiserver[2106]: I0127 01:28:46.547974 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.598318] kube-apiserver[2106]: I0127 01:28:46.548017 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.598652] kube-apiserver[2106]: I0127 01:28:46.548055 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.602462] kube-apiserver[2106]: I0127 01:28:46.552843 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.603342] kube-apiserver[2106]: I0127 01:28:46.553585 2106 client.go:354] parsed scheme: ""
kube# [ 18.603688] kube-apiserver[2106]: I0127 01:28:46.553730 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.604115] kube-apiserver[2106]: I0127 01:28:46.553810 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.604325] kube-apiserver[2106]: I0127 01:28:46.554071 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.608312] kube-apiserver[2106]: I0127 01:28:46.558701 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.608711] kube-apiserver[2106]: I0127 01:28:46.559059 2106 client.go:354] parsed scheme: ""
kube# [ 18.609168] kube-apiserver[2106]: I0127 01:28:46.559112 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.609414] kube-apiserver[2106]: I0127 01:28:46.559143 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.609602] kube-apiserver[2106]: I0127 01:28:46.559213 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.614067] kube-apiserver[2106]: I0127 01:28:46.564449 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.614668] kube-apiserver[2106]: I0127 01:28:46.564999 2106 client.go:354] parsed scheme: ""
kube# [ 18.614919] kube-apiserver[2106]: I0127 01:28:46.565052 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.615265] kube-apiserver[2106]: I0127 01:28:46.565100 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.615548] kube-apiserver[2106]: I0127 01:28:46.565160 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.621181] kube-apiserver[2106]: I0127 01:28:46.571564 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.621382] kube-apiserver[2106]: I0127 01:28:46.571761 2106 client.go:354] parsed scheme: ""
kube# [ 18.621677] kube-apiserver[2106]: I0127 01:28:46.571779 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.622020] kube-apiserver[2106]: I0127 01:28:46.571818 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.622239] kube-apiserver[2106]: I0127 01:28:46.571864 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.627201] kube-apiserver[2106]: I0127 01:28:46.577563 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.630241] kube-apiserver[2106]: I0127 01:28:46.580629 2106 client.go:354] parsed scheme: ""
kube# [ 18.630379] kube-apiserver[2106]: I0127 01:28:46.580646 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.630620] kube-apiserver[2106]: I0127 01:28:46.580674 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.630941] kube-apiserver[2106]: I0127 01:28:46.580711 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.635057] kube-apiserver[2106]: I0127 01:28:46.585395 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.635525] kube-apiserver[2106]: I0127 01:28:46.585900 2106 client.go:354] parsed scheme: ""
kube# [ 18.635726] kube-apiserver[2106]: I0127 01:28:46.585918 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.635977] kube-apiserver[2106]: I0127 01:28:46.585957 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.636216] kube-apiserver[2106]: I0127 01:28:46.586019 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.640436] kube-apiserver[2106]: I0127 01:28:46.590777 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.643386] kube-apiserver[2106]: I0127 01:28:46.593773 2106 client.go:354] parsed scheme: ""
kube# [ 18.643478] kube-apiserver[2106]: I0127 01:28:46.593790 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.643772] kube-apiserver[2106]: I0127 01:28:46.593818 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.644064] kube-apiserver[2106]: I0127 01:28:46.593848 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.648059] kube-apiserver[2106]: I0127 01:28:46.598436 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.648261] kube-apiserver[2106]: I0127 01:28:46.598646 2106 client.go:354] parsed scheme: ""
kube# [ 18.648508] kube-apiserver[2106]: I0127 01:28:46.598663 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.648780] kube-apiserver[2106]: I0127 01:28:46.598705 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.649123] kube-apiserver[2106]: I0127 01:28:46.598733 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.652984] kube-apiserver[2106]: I0127 01:28:46.603254 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.653615] kube-apiserver[2106]: I0127 01:28:46.603999 2106 client.go:354] parsed scheme: ""
kube# [ 18.653926] kube-apiserver[2106]: I0127 01:28:46.604018 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.654216] kube-apiserver[2106]: I0127 01:28:46.604051 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.654497] kube-apiserver[2106]: I0127 01:28:46.604092 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.658247] kube-apiserver[2106]: I0127 01:28:46.608626 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.658522] kube-apiserver[2106]: I0127 01:28:46.608901 2106 client.go:354] parsed scheme: ""
kube# [ 18.658835] kube-apiserver[2106]: I0127 01:28:46.608919 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.659112] kube-apiserver[2106]: I0127 01:28:46.608956 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.659410] kube-apiserver[2106]: I0127 01:28:46.608995 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.663775] kube-apiserver[2106]: I0127 01:28:46.614154 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.664305] kube-apiserver[2106]: I0127 01:28:46.614659 2106 client.go:354] parsed scheme: ""
kube# [ 18.664572] kube-apiserver[2106]: I0127 01:28:46.614676 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.664849] kube-apiserver[2106]: I0127 01:28:46.614718 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.665162] kube-apiserver[2106]: I0127 01:28:46.614753 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.671156] kube-apiserver[2106]: I0127 01:28:46.621458 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.673595] kube-apiserver[2106]: I0127 01:28:46.623983 2106 client.go:354] parsed scheme: ""
kube# [ 18.673717] kube-apiserver[2106]: I0127 01:28:46.623998 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.674347] kube-apiserver[2106]: I0127 01:28:46.624032 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.674674] kube-apiserver[2106]: I0127 01:28:46.624062 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.674960] kube-scheduler[2112]: W0127 01:28:46.624339 2112 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
kube# [ 18.675259] kube-scheduler[2112]: W0127 01:28:46.624392 2112 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
kube# [ 18.675574] kube-scheduler[2112]: W0127 01:28:46.624415 2112 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
kube# [ 18.689423] kube-scheduler[2112]: I0127 01:28:46.639766 2112 server.go:142] Version: v1.15.6
kube# [ 18.689665] kube-scheduler[2112]: I0127 01:28:46.639814 2112 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
kube# [ 18.690117] kube-apiserver[2106]: I0127 01:28:46.640474 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.690537] kube-scheduler[2112]: W0127 01:28:46.640880 2112 authorization.go:47] Authorization is disabled
kube# [ 18.690831] kube-scheduler[2112]: W0127 01:28:46.640913 2112 authentication.go:55] Authentication is disabled
kube# [ 18.691212] kube-scheduler[2112]: I0127 01:28:46.640935 2112 deprecated_insecure_serving.go:51] Serving healthz insecurely on 127.0.0.1:10251
kube# [ 18.691468] kube-apiserver[2106]: I0127 01:28:46.640900 2106 client.go:354] parsed scheme: ""
kube# [ 18.691930] kube-apiserver[2106]: I0127 01:28:46.640920 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.692215] kube-apiserver[2106]: I0127 01:28:46.640951 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.692497] kube-apiserver[2106]: I0127 01:28:46.640988 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.692935] kube-scheduler[2112]: I0127 01:28:46.641378 2112 secure_serving.go:116] Serving securely on [::]:10259
kube# [ 18.696690] kube-apiserver[2106]: I0127 01:28:46.647022 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.697098] kube-apiserver[2106]: I0127 01:28:46.647444 2106 client.go:354] parsed scheme: ""
kube# [ 18.697301] kube-apiserver[2106]: I0127 01:28:46.647469 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.697576] kube-apiserver[2106]: I0127 01:28:46.647543 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.697963] kube-apiserver[2106]: I0127 01:28:46.647585 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.701997] kube-apiserver[2106]: I0127 01:28:46.652377 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.702314] kube-apiserver[2106]: I0127 01:28:46.652671 2106 client.go:354] parsed scheme: ""
kube# [ 18.702582] kube-apiserver[2106]: I0127 01:28:46.652696 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.702907] kube-apiserver[2106]: I0127 01:28:46.652730 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.703206] kube-apiserver[2106]: I0127 01:28:46.652775 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.707026] kube-apiserver[2106]: I0127 01:28:46.657301 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.707357] kube-apiserver[2106]: I0127 01:28:46.657651 2106 client.go:354] parsed scheme: ""
kube# [ 18.707664] kube-apiserver[2106]: I0127 01:28:46.657681 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.707917] kube-apiserver[2106]: I0127 01:28:46.657723 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.708134] kube-apiserver[2106]: I0127 01:28:46.657762 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.712501] kube-apiserver[2106]: I0127 01:28:46.662860 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.712833] kube-apiserver[2106]: I0127 01:28:46.663143 2106 client.go:354] parsed scheme: ""
kube# [ 18.713172] kube-apiserver[2106]: I0127 01:28:46.663161 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.713390] kube-apiserver[2106]: I0127 01:28:46.663200 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.713700] kube-apiserver[2106]: I0127 01:28:46.663262 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.718599] kube-apiserver[2106]: I0127 01:28:46.668964 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.719168] kube-apiserver[2106]: I0127 01:28:46.669504 2106 client.go:354] parsed scheme: ""
kube# [ 18.719458] kube-apiserver[2106]: I0127 01:28:46.669535 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.719735] kube-apiserver[2106]: I0127 01:28:46.669656 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.720182] kube-apiserver[2106]: I0127 01:28:46.669704 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.726243] kube-apiserver[2106]: I0127 01:28:46.676591 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.726547] kube-apiserver[2106]: I0127 01:28:46.676927 2106 client.go:354] parsed scheme: ""
kube# [ 18.727039] kube-apiserver[2106]: I0127 01:28:46.676947 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.727323] kube-apiserver[2106]: I0127 01:28:46.676997 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.727626] kube-apiserver[2106]: I0127 01:28:46.677031 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.731779] kube-apiserver[2106]: I0127 01:28:46.682135 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.732205] kube-apiserver[2106]: I0127 01:28:46.682568 2106 client.go:354] parsed scheme: ""
kube# [ 18.732483] kube-apiserver[2106]: I0127 01:28:46.682600 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.732790] kube-apiserver[2106]: I0127 01:28:46.682636 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.733183] kube-apiserver[2106]: I0127 01:28:46.682679 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.733479] kube-controller-manager[2082]: W0127 01:28:46.683218 2082 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
kube# [ 18.733765] kube-controller-manager[2082]: W0127 01:28:46.683267 2082 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
kube# [ 18.734130] kube-controller-manager[2082]: W0127 01:28:46.683294 2082 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
kube# [ 18.734319] kube-controller-manager[2082]: I0127 01:28:46.683414 2082 controllermanager.go:164] Version: v1.15.6
kube# [ 18.737117] kube-apiserver[2106]: I0127 01:28:46.687471 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.737441] kube-apiserver[2106]: I0127 01:28:46.687796 2106 client.go:354] parsed scheme: ""
kube# [ 18.737724] kube-apiserver[2106]: I0127 01:28:46.687813 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.738127] kube-apiserver[2106]: I0127 01:28:46.687864 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.738411] kube-apiserver[2106]: I0127 01:28:46.687921 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.742817] kube-apiserver[2106]: I0127 01:28:46.693182 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.750554] kube-controller-manager[2082]: I0127 01:28:46.700923 2082 secure_serving.go:116] Serving securely on 127.0.0.1:10252
kube# [ 18.750776] kube-controller-manager[2082]: I0127 01:28:46.700975 2082 leaderelection.go:235] attempting to acquire leader lease kube-system/kube-controller-manager...
kube# [ 18.754609] kube-apiserver[2106]: I0127 01:28:46.704955 2106 client.go:354] parsed scheme: ""
kube# [ 18.754778] kube-apiserver[2106]: I0127 01:28:46.704986 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.755150] kube-apiserver[2106]: I0127 01:28:46.705046 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.755441] kube-apiserver[2106]: I0127 01:28:46.705100 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.759921] kube-apiserver[2106]: I0127 01:28:46.710085 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.760618] kube-apiserver[2106]: I0127 01:28:46.710812 2106 client.go:354] parsed scheme: ""
kube# [ 18.761163] kube-apiserver[2106]: I0127 01:28:46.710928 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.761377] kube-apiserver[2106]: I0127 01:28:46.710977 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.761558] kube-apiserver[2106]: I0127 01:28:46.711003 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.767034] kube-apiserver[2106]: I0127 01:28:46.716806 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.767174] kube-apiserver[2106]: I0127 01:28:46.717102 2106 client.go:354] parsed scheme: ""
kube# [ 18.767418] kube-apiserver[2106]: I0127 01:28:46.717123 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.767614] kube-apiserver[2106]: I0127 01:28:46.717161 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.767854] kube-apiserver[2106]: I0127 01:28:46.717264 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.772584] kube-apiserver[2106]: I0127 01:28:46.722961 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.772749] kube-apiserver[2106]: I0127 01:28:46.723087 2106 client.go:354] parsed scheme: ""
kube# [ 18.773091] kube-apiserver[2106]: I0127 01:28:46.723120 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.773379] kube-apiserver[2106]: I0127 01:28:46.723173 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.773695] kube-apiserver[2106]: I0127 01:28:46.723214 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.778053] kube-apiserver[2106]: I0127 01:28:46.728372 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.778250] kube-apiserver[2106]: I0127 01:28:46.728610 2106 client.go:354] parsed scheme: ""
kube# [ 18.778563] kube-apiserver[2106]: I0127 01:28:46.728630 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.778778] kube-apiserver[2106]: I0127 01:28:46.728658 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.779189] kube-apiserver[2106]: I0127 01:28:46.728708 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.783425] kube-apiserver[2106]: I0127 01:28:46.733742 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.785859] kube-apiserver[2106]: I0127 01:28:46.736241 2106 client.go:354] parsed scheme: ""
kube# [ 18.785995] kube-apiserver[2106]: I0127 01:28:46.736258 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.786268] kube-apiserver[2106]: I0127 01:28:46.736285 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.786459] kube-apiserver[2106]: I0127 01:28:46.736332 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.791844] kube-apiserver[2106]: I0127 01:28:46.742231 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.794598] kube-apiserver[2106]: I0127 01:28:46.744971 2106 client.go:354] parsed scheme: ""
kube# [ 18.794714] kube-apiserver[2106]: I0127 01:28:46.744995 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.795113] kube-apiserver[2106]: I0127 01:28:46.745032 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.795386] kube-apiserver[2106]: I0127 01:28:46.745081 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.799316] kube-apiserver[2106]: I0127 01:28:46.749696 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.799781] kube-apiserver[2106]: I0127 01:28:46.750156 2106 client.go:354] parsed scheme: ""
kube# [ 18.800251] kube-apiserver[2106]: I0127 01:28:46.750173 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.800536] kube-apiserver[2106]: I0127 01:28:46.750231 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.800854] kube-apiserver[2106]: I0127 01:28:46.750258 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.805292] kube-apiserver[2106]: I0127 01:28:46.755674 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.805623] kube-apiserver[2106]: I0127 01:28:46.756002 2106 client.go:354] parsed scheme: ""
kube# [ 18.805950] kube-apiserver[2106]: I0127 01:28:46.756027 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.806331] kube-apiserver[2106]: I0127 01:28:46.756070 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.806621] kube-apiserver[2106]: I0127 01:28:46.756122 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.810787] kube-apiserver[2106]: I0127 01:28:46.761164 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.811132] kube-apiserver[2106]: I0127 01:28:46.761501 2106 client.go:354] parsed scheme: ""
kube# [ 18.811411] kube-apiserver[2106]: I0127 01:28:46.761518 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.811733] kube-apiserver[2106]: I0127 01:28:46.761552 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.811995] kube-apiserver[2106]: I0127 01:28:46.761581 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.817475] kube-apiserver[2106]: I0127 01:28:46.767857 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.817900] kube-apiserver[2106]: I0127 01:28:46.768212 2106 client.go:354] parsed scheme: ""
kube# [ 18.818126] kube-apiserver[2106]: I0127 01:28:46.768234 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.818376] kube-apiserver[2106]: I0127 01:28:46.768290 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.818677] kube-apiserver[2106]: I0127 01:28:46.768373 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.823936] kube-apiserver[2106]: I0127 01:28:46.774296 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.824432] kube-apiserver[2106]: I0127 01:28:46.774813 2106 client.go:354] parsed scheme: ""
kube# [ 18.824653] kube-apiserver[2106]: I0127 01:28:46.774838 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.825003] kube-apiserver[2106]: I0127 01:28:46.774890 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.825249] kube-apiserver[2106]: I0127 01:28:46.774960 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.829503] kube-apiserver[2106]: I0127 01:28:46.779877 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.829911] kube-apiserver[2106]: I0127 01:28:46.780229 2106 client.go:354] parsed scheme: ""
kube# [ 18.830237] kube-apiserver[2106]: I0127 01:28:46.780248 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.830526] kube-apiserver[2106]: I0127 01:28:46.780299 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.830800] kube-apiserver[2106]: I0127 01:28:46.780417 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.834858] kube-apiserver[2106]: I0127 01:28:46.785242 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.835212] kube-apiserver[2106]: I0127 01:28:46.785570 2106 client.go:354] parsed scheme: ""
kube# [ 18.835478] kube-apiserver[2106]: I0127 01:28:46.785595 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.835709] kube-apiserver[2106]: I0127 01:28:46.785622 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.836118] kube-apiserver[2106]: I0127 01:28:46.785676 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.840032] kube-apiserver[2106]: I0127 01:28:46.790412 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.840449] kube-apiserver[2106]: I0127 01:28:46.790784 2106 client.go:354] parsed scheme: ""
kube# [ 18.840762] kube-apiserver[2106]: I0127 01:28:46.790806 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.841200] kube-apiserver[2106]: I0127 01:28:46.790859 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.841469] kube-apiserver[2106]: I0127 01:28:46.790899 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.845149] kube-apiserver[2106]: I0127 01:28:46.795456 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.847251] kube-apiserver[2106]: I0127 01:28:46.797632 2106 client.go:354] parsed scheme: ""
kube# [ 18.847373] kube-apiserver[2106]: I0127 01:28:46.797649 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.847596] kube-apiserver[2106]: I0127 01:28:46.797675 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.847845] kube-apiserver[2106]: I0127 01:28:46.797703 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.852610] kube-apiserver[2106]: I0127 01:28:46.802970 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.853000] kube-apiserver[2106]: I0127 01:28:46.803374 2106 client.go:354] parsed scheme: ""
kube# [ 18.853307] kube-apiserver[2106]: I0127 01:28:46.803396 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.853575] kube-apiserver[2106]: I0127 01:28:46.803467 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.853765] kube-apiserver[2106]: I0127 01:28:46.803550 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.858376] kube-apiserver[2106]: I0127 01:28:46.808758 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.858748] kube-apiserver[2106]: I0127 01:28:46.809100 2106 client.go:354] parsed scheme: ""
kube# [ 18.859137] kube-apiserver[2106]: I0127 01:28:46.809117 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.859336] kube-apiserver[2106]: I0127 01:28:46.809151 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.859638] kube-apiserver[2106]: I0127 01:28:46.809218 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.863771] kube-apiserver[2106]: I0127 01:28:46.814115 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.864213] kube-apiserver[2106]: I0127 01:28:46.814562 2106 client.go:354] parsed scheme: ""
kube# [ 18.864544] kube-apiserver[2106]: I0127 01:28:46.814585 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.864908] kube-apiserver[2106]: I0127 01:28:46.814619 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.865282] kube-apiserver[2106]: I0127 01:28:46.814651 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.868801] kube-apiserver[2106]: I0127 01:28:46.819160 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.869140] kube-apiserver[2106]: I0127 01:28:46.819501 2106 client.go:354] parsed scheme: ""
kube# [ 18.869439] kube-apiserver[2106]: I0127 01:28:46.819521 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.869717] kube-apiserver[2106]: I0127 01:28:46.819562 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.870155] kube-apiserver[2106]: I0127 01:28:46.819616 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.873799] kube-apiserver[2106]: I0127 01:28:46.824106 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.874504] kube-apiserver[2106]: I0127 01:28:46.824865 2106 client.go:354] parsed scheme: ""
kube# [ 18.874919] kube-apiserver[2106]: I0127 01:28:46.824887 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.875316] kube-apiserver[2106]: I0127 01:28:46.824960 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.875560] kube-apiserver[2106]: I0127 01:28:46.825031 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.879513] kube-apiserver[2106]: I0127 01:28:46.829898 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.880083] kube-apiserver[2106]: I0127 01:28:46.830445 2106 client.go:354] parsed scheme: ""
kube# [ 18.880363] kube-apiserver[2106]: I0127 01:28:46.830465 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.880755] kube-apiserver[2106]: I0127 01:28:46.830502 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.881183] kube-apiserver[2106]: I0127 01:28:46.830673 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.884849] kube-apiserver[2106]: I0127 01:28:46.835229 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.885258] kube-apiserver[2106]: I0127 01:28:46.835582 2106 client.go:354] parsed scheme: ""
kube# [ 18.885495] kube-apiserver[2106]: I0127 01:28:46.835610 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.885803] kube-apiserver[2106]: I0127 01:28:46.835654 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.886524] kube-apiserver[2106]: I0127 01:28:46.835696 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.890323] kube-apiserver[2106]: I0127 01:28:46.840646 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.890697] kube-apiserver[2106]: I0127 01:28:46.841049 2106 client.go:354] parsed scheme: ""
kube# [ 18.891201] kube-apiserver[2106]: I0127 01:28:46.841068 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.891550] kube-apiserver[2106]: I0127 01:28:46.841097 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.892108] kube-apiserver[2106]: I0127 01:28:46.841144 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.896772] kube-apiserver[2106]: I0127 01:28:46.847156 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.897573] kube-apiserver[2106]: I0127 01:28:46.847890 2106 client.go:354] parsed scheme: ""
kube# [ 18.897970] kube-apiserver[2106]: I0127 01:28:46.847917 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.898393] kube-apiserver[2106]: I0127 01:28:46.847982 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.898731] kube-apiserver[2106]: I0127 01:28:46.848057 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.904158] kube-apiserver[2106]: I0127 01:28:46.854541 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.904537] kube-apiserver[2106]: I0127 01:28:46.854863 2106 client.go:354] parsed scheme: ""
kube# [ 18.904790] kube-apiserver[2106]: I0127 01:28:46.854880 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.905147] kube-apiserver[2106]: I0127 01:28:46.854940 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.905451] kube-apiserver[2106]: I0127 01:28:46.854997 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.909235] kube-apiserver[2106]: I0127 01:28:46.859615 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.910510] kube-apiserver[2106]: I0127 01:28:46.860436 2106 client.go:354] parsed scheme: ""
kube# [ 18.910751] kube-apiserver[2106]: I0127 01:28:46.860473 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.911199] kube-apiserver[2106]: I0127 01:28:46.860665 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.911433] kube-apiserver[2106]: I0127 01:28:46.860842 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.917421] kube-apiserver[2106]: I0127 01:28:46.867781 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.917704] kube-apiserver[2106]: I0127 01:28:46.868086 2106 client.go:354] parsed scheme: ""
kube# [ 18.918010] kube-apiserver[2106]: I0127 01:28:46.868114 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.918370] kube-apiserver[2106]: I0127 01:28:46.868166 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.918710] kube-apiserver[2106]: I0127 01:28:46.868206 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.922846] kube-apiserver[2106]: I0127 01:28:46.873208 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.923228] kube-apiserver[2106]: I0127 01:28:46.873583 2106 client.go:354] parsed scheme: ""
kube# [ 18.923510] kube-apiserver[2106]: I0127 01:28:46.873602 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.923858] kube-apiserver[2106]: I0127 01:28:46.873629 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.924289] kube-apiserver[2106]: I0127 01:28:46.873677 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.928196] kube-apiserver[2106]: I0127 01:28:46.878567 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.997440] kube-apiserver[2106]: W0127 01:28:46.947773 2106 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
kube# [ 19.001527] kube-apiserver[2106]: W0127 01:28:46.951904 2106 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
kube# [ 19.003518] kube-apiserver[2106]: W0127 01:28:46.953907 2106 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
kube# [ 19.003928] kube-apiserver[2106]: W0127 01:28:46.954287 2106 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
kube# [ 19.005065] kube-apiserver[2106]: W0127 01:28:46.955446 2106 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
kube# [ 19.275947] kube-apiserver[2106]: I0127 01:28:47.226161 2106 client.go:354] parsed scheme: ""
kube# [ 19.276139] kube-apiserver[2106]: I0127 01:28:47.226187 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.276409] kube-apiserver[2106]: I0127 01:28:47.226219 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.276733] kube-apiserver[2106]: I0127 01:28:47.226282 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.280778] kube-apiserver[2106]: I0127 01:28:47.231146 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.445611] kube-apiserver[2106]: E0127 01:28:47.395937 2106 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 19.445816] kube-apiserver[2106]: E0127 01:28:47.395977 2106 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 19.446107] kube-apiserver[2106]: E0127 01:28:47.395996 2106 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 19.446301] kube-apiserver[2106]: E0127 01:28:47.396014 2106 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 19.446557] kube-apiserver[2106]: E0127 01:28:47.396036 2106 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 19.446824] kube-apiserver[2106]: E0127 01:28:47.396056 2106 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 19.447028] kube-apiserver[2106]: E0127 01:28:47.396071 2106 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 19.447262] kube-apiserver[2106]: E0127 01:28:47.396087 2106 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 19.447523] kube-apiserver[2106]: E0127 01:28:47.396152 2106 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 19.447742] kube-apiserver[2106]: E0127 01:28:47.396199 2106 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 19.448070] kube-apiserver[2106]: E0127 01:28:47.396225 2106 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 19.448307] kube-apiserver[2106]: E0127 01:28:47.396243 2106 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 19.448533] kube-apiserver[2106]: I0127 01:28:47.396265 2106 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
kube# [ 19.448840] kube-apiserver[2106]: I0127 01:28:47.396286 2106 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
kube# [ 19.449257] kube-apiserver[2106]: I0127 01:28:47.397300 2106 client.go:354] parsed scheme: ""
kube# [ 19.449528] kube-apiserver[2106]: I0127 01:28:47.397360 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.449731] kube-apiserver[2106]: I0127 01:28:47.397403 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.450069] kube-apiserver[2106]: I0127 01:28:47.397439 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.452094] kube-apiserver[2106]: I0127 01:28:47.402459 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.472155] kube-apiserver[2106]: I0127 01:28:47.422524 2106 client.go:354] parsed scheme: ""
kube# [ 19.472293] kube-apiserver[2106]: I0127 01:28:47.422551 2106 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.472586] kube-apiserver[2106]: I0127 01:28:47.422588 2106 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.472921] kube-apiserver[2106]: I0127 01:28:47.422626 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.477279] kube-apiserver[2106]: I0127 01:28:47.427654 2106 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 20.483743] kube-apiserver[2106]: I0127 01:28:48.433652 2106 secure_serving.go:116] Serving securely on [::]:443
kube# [ 20.484058] kube-apiserver[2106]: I0127 01:28:48.433711 2106 autoregister_controller.go:140] Starting autoregister controller
kube# [ 20.484396] kube-apiserver[2106]: I0127 01:28:48.433722 2106 cache.go:32] Waiting for caches to sync for autoregister controller
kube# [ 20.484595] kube-apiserver[2106]: I0127 01:28:48.433713 2106 controller.go:81] Starting OpenAPI AggregationController
kube# [ 20.484911] kube-apiserver[2106]: I0127 01:28:48.433817 2106 apiservice_controller.go:94] Starting APIServiceRegistrationController
kube# [ 20.485113] kube-apiserver[2106]: I0127 01:28:48.433855 2106 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
kube# [ 20.485322] kube-apiserver[2106]: I0127 01:28:48.433877 2106 available_controller.go:376] Starting AvailableConditionController
kube# [ 20.485528] kube-apiserver[2106]: I0127 01:28:48.433922 2106 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
kube# [ 20.493254] kube-apiserver[2106]: I0127 01:28:48.443600 2106 crd_finalizer.go:255] Starting CRDFinalizer
kube# [ 20.494779] kube-apiserver[2106]: I0127 01:28:48.445124 2106 controller.go:83] Starting OpenAPI controller
kube# [ 20.495156] kube-apiserver[2106]: I0127 01:28:48.445512 2106 customresource_discovery_controller.go:208] Starting DiscoveryController
kube# [ 20.497734] kube-apiserver[2106]: I0127 01:28:48.433972 2106 crdregistration_controller.go:112] Starting crd-autoregister controller
kube# [ 20.497950] kube-apiserver[2106]: I0127 01:28:48.448120 2106 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
kube# [ 20.500144] kube-apiserver[2106]: E0127 01:28:48.450506 2106 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.1.1, ResourceVersion: 0, AdditionalErrorMsg:
kube# [ 20.500385] kube-apiserver[2106]: I0127 01:28:48.450754 2106 naming_controller.go:288] Starting NamingConditionController
kube# [ 20.500853] kube-apiserver[2106]: I0127 01:28:48.450817 2106 establishing_controller.go:73] Starting EstablishingController
kube# [ 20.501157] kube-apiserver[2106]: I0127 01:28:48.450862 2106 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
kube# [ 20.516969] kube-scheduler[2112]: E0127 01:28:48.466158 2112 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
kube# [ 20.517606] kube-proxy[2107]: W0127 01:28:48.467179 2107 node.go:113] Failed to retrieve node info: nodes "kube" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
kube# [ 20.518042] kube-proxy[2107]: I0127 01:28:48.467211 2107 server_others.go:143] Using iptables Proxier.
kube# [ 20.518495] kube-scheduler[2112]: E0127 01:28:48.466225 2112 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
kube# [ 20.518858] kube-scheduler[2112]: E0127 01:28:48.466906 2112 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
kube# [ 20.519004] kube-scheduler[2112]: E0127 01:28:48.466432 2112 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
kube# [ 20.519377] kube-scheduler[2112]: E0127 01:28:48.467056 2112 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
kube# [ 20.519630] kube-scheduler[2112]: E0127 01:28:48.467100 2112 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
kube# [ 20.519997] kube-scheduler[2112]: E0127 01:28:48.467139 2112 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
kube# [ 20.520392] kube-scheduler[2112]: E0127 01:28:48.467754 2112 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
kube# [ 20.542465] kube-proxy[2107]: W0127 01:28:48.492793 2107 proxier.go:316] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
kube# [ 20.544003] kube-controller-manager[2082]: E0127 01:28:48.492982 2082 leaderelection.go:324] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "endpoints" in API group "" in the namespace "kube-system"
kube# [ 20.544615] kube-scheduler[2112]: E0127 01:28:48.494581 2112 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
kube# [ 20.544968] kube-scheduler[2112]: E0127 01:28:48.494582 2112 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
kube# [ 20.554686] kube-proxy[2107]: I0127 01:28:48.505028 2107 server.go:534] Version: v1.15.6
kube# [ 20.558695] etcd[2099]: proto: no coders for int
kube# [ 20.558913] etcd[2099]: proto: no encoder for ValueSize int [GetProperties]
kube# [ 20.565494] kube-proxy[2107]: I0127 01:28:48.515768 2107 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 524288
kube# [ 20.565656] kube-proxy[2107]: I0127 01:28:48.515823 2107 conntrack.go:52] Setting nf_conntrack_max to 524288
kube# [ 20.581233] systemd[1]: Started Kubernetes systemd probe.
kube# [ 20.583639] kube-apiserver[2106]: I0127 01:28:48.533958 2106 cache.go:39] Caches are synced for autoregister controller
kube# [ 20.583978] kube-apiserver[2106]: I0127 01:28:48.534282 2106 cache.go:39] Caches are synced for APIServiceRegistrationController controller
kube# [ 20.584416] kube-apiserver[2106]: I0127 01:28:48.534617 2106 cache.go:39] Caches are synced for AvailableConditionController controller
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube# [ 20.589906] kube-proxy[2107]: I0127 01:28:48.540222 2107 conntrack.go:83] Setting conntrack hashsize to 131072
kube: exit status 1
kube# [ 20.592382] systemd[1]: run-r2fcf878a108b477d83c00c45bf499bd0.scope: Succeeded.
kube# [ 20.597979] kube-apiserver[2106]: I0127 01:28:48.548343 2106 controller_utils.go:1036] Caches are synced for crd-autoregister controller
kube# [ 20.600072] kube-proxy[2107]: I0127 01:28:48.550417 2107 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
kube# [ 20.600366] kube-proxy[2107]: I0127 01:28:48.550462 2107 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
kube# [ 20.601002] kube-proxy[2107]: I0127 01:28:48.550666 2107 config.go:96] Starting endpoints config controller
(2.70 seconds)
kube# [ 20.601270] kube-proxy[2107]: I0127 01:28:48.550704 2107 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
kube# [ 20.601539] kube-proxy[2107]: I0127 01:28:48.550740 2107 config.go:187] Starting service config controller
kube# [ 20.601944] kube-proxy[2107]: I0127 01:28:48.550760 2107 controller_utils.go:1029] Waiting for caches to sync for service config controller
kube# [ 20.611375] kube-proxy[2107]: E0127 01:28:48.561719 2107 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-proxy" cannot list resource "services" in API group "" at the cluster scope
kube# [ 20.611517] kube-proxy[2107]: E0127 01:28:48.561747 2107 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:kube-proxy" cannot list resource "endpoints" in API group "" at the cluster scope
kube# [ 20.619961] kube-proxy[2107]: E0127 01:28:48.570266 2107 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube.15ed9a2521edd406", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kube", UID:"kube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kube-proxy.", Source:v1.EventSource{Component:"kube-proxy", Host:"kube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf83ad1420d1b406, ext:2799994972, loc:(*time.Location)(0x2740d40)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf83ad1420d1b406, ext:2799994972, loc:(*time.Location)(0x2740d40)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:kube-proxy" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
kube# [ 21.482159] kube-apiserver[2106]: I0127 01:28:49.432470 2106 controller.go:107] OpenAPI AggregationController: Processing item
kube# [ 21.482393] kube-apiserver[2106]: I0127 01:28:49.432509 2106 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
kube# [ 21.482723] kube-apiserver[2106]: I0127 01:28:49.432562 2106 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
kube# [ 21.498727] kube-apiserver[2106]: I0127 01:28:49.448741 2106 storage_scheduling.go:119] created PriorityClass system-node-critical with value 2000001000
kube# [ 21.502592] kube-apiserver[2106]: I0127 01:28:49.452971 2106 storage_scheduling.go:119] created PriorityClass system-cluster-critical with value 2000000000
kube# [ 21.502678] kube-apiserver[2106]: I0127 01:28:49.452992 2106 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
kube# [ 21.518934] kube-scheduler[2112]: E0127 01:28:49.468971 2112 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
kube# [ 21.520966] kube-scheduler[2112]: E0127 01:28:49.471344 2112 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
kube# [ 21.522037] kube-scheduler[2112]: E0127 01:28:49.472408 2112 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
kube# [ 21.523231] kube-scheduler[2112]: E0127 01:28:49.473578 2112 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
kube# [ 21.524537] kube-scheduler[2112]: E0127 01:28:49.474888 2112 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
kube# [ 21.525573] kube-scheduler[2112]: E0127 01:28:49.475933 2112 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
kube# [ 21.526944] kube-scheduler[2112]: E0127 01:28:49.477202 2112 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
kube# [ 21.527889] kube-scheduler[2112]: E0127 01:28:49.478236 2112 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
kube# [ 21.545600] kube-scheduler[2112]: E0127 01:28:49.495965 2112 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
kube# [ 21.546693] kube-scheduler[2112]: E0127 01:28:49.497025 2112 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# [ 21.612820] kube-proxy[2107]: E0127 01:28:49.562867 2107 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-proxy" cannot list resource "services" in API group "" at the cluster scope
kube# [ 21.613580] kube-proxy[2107]: E0127 01:28:49.563893 2107 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:kube-proxy" cannot list resource "endpoints" in API group "" at the cluster scope
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.06 seconds)
kube# [ 21.756309] kube-apiserver[2106]: I0127 01:28:49.706591 2106 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
kube# [ 21.780253] kube-apiserver[2106]: I0127 01:28:49.730611 2106 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
kube# [ 21.885895] kube-apiserver[2106]: W0127 01:28:49.836210 2106 lease.go:223] Resetting endpoints for master service "kubernetes" to [192.168.1.1]
kube# [ 21.888213] kube-apiserver[2106]: I0127 01:28:49.838597 2106 controller.go:606] quota admission added evaluator for: endpoints
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# [ 22.700978] kube-proxy[2107]: I0127 01:28:50.650934 2107 controller_utils.go:1036] Caches are synced for endpoints config controller
kube# [ 22.701149] kube-proxy[2107]: I0127 01:28:50.650935 2107 controller_utils.go:1036] Caches are synced for service config controller
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 23.392946] kube-scheduler[2112]: I0127 01:28:51.342906 2112 leaderelection.go:235] attempting to acquire leader lease kube-system/kube-scheduler...
kube# [ 23.401836] kube-scheduler[2112]: I0127 01:28:51.352217 2112 leaderelection.go:245] successfully acquired lease kube-system/kube-scheduler
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 24.712553] kube-controller-manager[2082]: I0127 01:28:52.662521 2082 leaderelection.go:245] successfully acquired lease kube-system/kube-controller-manager
kube# [ 24.712947] kube-controller-manager[2082]: I0127 01:28:52.662742 2082 event.go:258] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"48449fdf-4b36-479d-b94e-63c75aca8017", APIVersion:"v1", ResourceVersion:"150", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube_3a4bd212-5d1c-484c-b3f5-57f8bc4c0ad5 became leader
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 24.910750] nscd[1133]: 1133 checking for monitored file `/etc/netgroup': No such file or directory
kube# [ 24.955661] kube-controller-manager[2082]: I0127 01:28:52.906008 2082 plugins.go:103] No cloud provider specified.
kube# [ 24.957064] kube-controller-manager[2082]: I0127 01:28:52.907291 2082 controller_utils.go:1029] Waiting for caches to sync for tokens controller
kube# [ 24.962829] kube-apiserver[2106]: I0127 01:28:52.912716 2106 controller.go:606] quota admission added evaluator for: serviceaccounts
kube# [ 25.057391] kube-controller-manager[2082]: I0127 01:28:53.007604 2082 controller_utils.go:1036] Caches are synced for tokens controller
kube# [ 25.072711] kube-controller-manager[2082]: I0127 01:28:53.022984 2082 controllermanager.go:532] Started "podgc"
kube# [ 25.072943] kube-controller-manager[2082]: I0127 01:28:53.023076 2082 gc_controller.go:76] Starting GC controller
kube# [ 25.073203] kube-controller-manager[2082]: I0127 01:28:53.023105 2082 controller_utils.go:1029] Waiting for caches to sync for GC controller
kube# [ 25.087921] kube-controller-manager[2082]: I0127 01:28:53.038257 2082 controllermanager.go:532] Started "replicationcontroller"
kube# [ 25.088155] kube-controller-manager[2082]: I0127 01:28:53.038385 2082 replica_set.go:182] Starting replicationcontroller controller
kube# [ 25.088407] kube-controller-manager[2082]: I0127 01:28:53.038412 2082 controller_utils.go:1029] Waiting for caches to sync for ReplicationController controller
kube# [ 25.099266] kube-controller-manager[2082]: I0127 01:28:53.049629 2082 controllermanager.go:532] Started "statefulset"
kube# [ 25.099477] kube-controller-manager[2082]: I0127 01:28:53.049693 2082 stateful_set.go:145] Starting stateful set controller
kube# [ 25.099714] kube-controller-manager[2082]: I0127 01:28:53.049708 2082 controller_utils.go:1029] Waiting for caches to sync for stateful set controller
kube# [ 25.113832] kube-controller-manager[2082]: E0127 01:28:53.064182 2082 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
kube# [ 25.114044] kube-controller-manager[2082]: W0127 01:28:53.064215 2082 controllermanager.go:524] Skipping "service"
kube# [ 25.114331] kube-controller-manager[2082]: W0127 01:28:53.064232 2082 core.go:174] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
kube# [ 25.114550] kube-controller-manager[2082]: W0127 01:28:53.064244 2082 controllermanager.go:524] Skipping "route"
kube# [ 25.126389] kube-controller-manager[2082]: I0127 01:28:53.076722 2082 controllermanager.go:532] Started "pv-protection"
kube# [ 25.128341] kube-controller-manager[2082]: I0127 01:28:53.078719 2082 pv_protection_controller.go:82] Starting PV protection controller
kube# [ 25.128451] kube-controller-manager[2082]: I0127 01:28:53.078746 2082 controller_utils.go:1029] Waiting for caches to sync for PV protection controller
kube# [ 25.138071] kube-controller-manager[2082]: I0127 01:28:53.088187 2082 controllermanager.go:532] Started "job"
kube# [ 25.138254] kube-controller-manager[2082]: I0127 01:28:53.088325 2082 job_controller.go:143] Starting job controller
kube# [ 25.138514] kube-controller-manager[2082]: I0127 01:28:53.088352 2082 controller_utils.go:1029] Waiting for caches to sync for job controller
kube# [ 25.216099] kube-controller-manager[2082]: I0127 01:28:53.166339 2082 controllermanager.go:532] Started "namespace"
kube# [ 25.217454] kube-controller-manager[2082]: I0127 01:28:53.167818 2082 namespace_controller.go:186] Starting namespace controller
kube# [ 25.217634] kube-controller-manager[2082]: I0127 01:28:53.167841 2082 controller_utils.go:1029] Waiting for caches to sync for namespace controller
kube# [ 25.461970] kube-controller-manager[2082]: I0127 01:28:53.412291 2082 controllermanager.go:532] Started "persistentvolume-expander"
kube# [ 25.462184] kube-controller-manager[2082]: I0127 01:28:53.412324 2082 expand_controller.go:300] Starting expand controller
kube# [ 25.462436] kube-controller-manager[2082]: W0127 01:28:53.412339 2082 controllermanager.go:524] Skipping "root-ca-cert-publisher"
kube# [ 25.462711] kube-controller-manager[2082]: I0127 01:28:53.412350 2082 controller_utils.go:1029] Waiting for caches to sync for expand controller
kube# [ 25.709152] kube-controller-manager[2082]: I0127 01:28:53.659515 2082 controllermanager.go:532] Started "endpoint"
kube# [ 25.709324] kube-controller-manager[2082]: I0127 01:28:53.659567 2082 endpoints_controller.go:166] Starting endpoint controller
kube# [ 25.709559] kube-controller-manager[2082]: I0127 01:28:53.659585 2082 controller_utils.go:1029] Waiting for caches to sync for endpoint controller
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 25.959447] kube-controller-manager[2082]: I0127 01:28:53.909436 2082 controllermanager.go:532] Started "csrcleaner"
kube# [ 25.959620] kube-controller-manager[2082]: I0127 01:28:53.909457 2082 cleaner.go:81] Starting CSR cleaner controller
kube# [ 26.208699] kube-controller-manager[2082]: I0127 01:28:54.159049 2082 node_lifecycle_controller.go:291] Sending events to api server.
kube# [ 26.208976] kube-controller-manager[2082]: I0127 01:28:54.159225 2082 node_lifecycle_controller.go:324] Controller is using taint based evictions.
kube# [ 26.209199] kube-controller-manager[2082]: I0127 01:28:54.159276 2082 taint_manager.go:158] Sending events to api server.
kube# [ 26.209552] kube-controller-manager[2082]: I0127 01:28:54.159534 2082 node_lifecycle_controller.go:418] Controller will reconcile labels.
kube# [ 26.209784] kube-controller-manager[2082]: I0127 01:28:54.159560 2082 node_lifecycle_controller.go:431] Controller will taint node by condition.
kube# [ 26.210089] kube-controller-manager[2082]: I0127 01:28:54.159591 2082 controllermanager.go:532] Started "nodelifecycle"
kube# [ 26.210268] kube-controller-manager[2082]: I0127 01:28:54.159655 2082 node_lifecycle_controller.go:455] Starting node controller
kube# [ 26.210467] kube-controller-manager[2082]: I0127 01:28:54.159671 2082 controller_utils.go:1029] Waiting for caches to sync for taint controller
kube# [ 26.459079] kube-controller-manager[2082]: I0127 01:28:54.409354 2082 controllermanager.go:532] Started "persistentvolume-binder"
kube# [ 26.459258] kube-controller-manager[2082]: I0127 01:28:54.409411 2082 pv_controller_base.go:282] Starting persistent volume controller
kube# [ 26.459509] kube-controller-manager[2082]: I0127 01:28:54.409428 2082 controller_utils.go:1029] Waiting for caches to sync for persistent volume controller
kube# [ 26.709036] kube-controller-manager[2082]: I0127 01:28:54.659344 2082 controllermanager.go:532] Started "clusterrole-aggregation"
kube# [ 26.709239] kube-controller-manager[2082]: I0127 01:28:54.659410 2082 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
kube# [ 26.709524] kube-controller-manager[2082]: I0127 01:28:54.659428 2082 controller_utils.go:1029] Waiting for caches to sync for ClusterRoleAggregator controller
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 26.958842] kube-controller-manager[2082]: I0127 01:28:54.909091 2082 controllermanager.go:532] Started "pvc-protection"
kube# [ 26.959423] kube-controller-manager[2082]: I0127 01:28:54.909185 2082 pvc_protection_controller.go:100] Starting PVC protection controller
kube# [ 26.959611] kube-controller-manager[2082]: I0127 01:28:54.909201 2082 controller_utils.go:1029] Waiting for caches to sync for PVC protection controller
kube# [ 27.210662] kube-controller-manager[2082]: I0127 01:28:55.160935 2082 controllermanager.go:532] Started "daemonset"
kube# [ 27.210814] kube-controller-manager[2082]: I0127 01:28:55.160987 2082 daemon_controller.go:267] Starting daemon sets controller
kube# [ 27.211187] kube-controller-manager[2082]: I0127 01:28:55.161004 2082 controller_utils.go:1029] Waiting for caches to sync for daemon sets controller
kube# [ 27.909369] kube-controller-manager[2082]: I0127 01:28:55.859713 2082 controllermanager.go:532] Started "horizontalpodautoscaling"
kube# [ 27.909537] kube-controller-manager[2082]: I0127 01:28:55.859766 2082 horizontal.go:156] Starting HPA controller
kube# [ 27.909724] kube-controller-manager[2082]: I0127 01:28:55.859782 2082 controller_utils.go:1029] Waiting for caches to sync for HPA controller
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 28.064605] systemd[1]: kube-addon-manager.service: Service RestartSec=10s expired, scheduling restart.
kube# [ 28.064949] systemd[1]: kube-addon-manager.service: Scheduled restart job, restart counter is at 2.
kube# [ 28.065277] systemd[1]: Stopped Kubernetes addon manager.
kube# [ 28.065722] systemd[1]: kube-addon-manager.service: Consumed 98ms CPU time, received 320B IP traffic, sent 480B IP traffic.
kube# [ 28.069304] systemd[1]: Starting Kubernetes addon manager...
kube# [ 28.159332] kube-controller-manager[2082]: I0127 01:28:56.109293 2082 controllermanager.go:532] Started "ttl"
kube# [ 28.159510] kube-controller-manager[2082]: I0127 01:28:56.109370 2082 ttl_controller.go:116] Starting TTL controller
kube# [ 28.159733] kube-controller-manager[2082]: I0127 01:28:56.109388 2082 controller_utils.go:1029] Waiting for caches to sync for TTL controller
kube# [ 28.384143] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2458]: clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver:kubelet-api-admin created
kube# [ 28.387918] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2458]: clusterrole.rbac.authorization.k8s.io/system:coredns created
kube# [ 28.392697] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2458]: clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
kube# [ 28.397511] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2458]: clusterrole.rbac.authorization.k8s.io/system:kube-addon-manager:cluster-lister created
kube# [ 28.401729] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2458]: clusterrolebinding.rbac.authorization.k8s.io/system:kube-addon-manager:cluster-lister created
kube# [ 28.407958] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2458]: role.rbac.authorization.k8s.io/system:kube-addon-manager created
kube# [ 28.414613] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2458]: rolebinding.rbac.authorization.k8s.io/system:kube-addon-manager created
kube# [ 28.417400] systemd[1]: Started Kubernetes addon manager.
kube# [ 28.427413] kube-addons[2478]: INFO: == Generated kubectl prune whitelist flags: --prune-whitelist core/v1/ConfigMap --prune-whitelist core/v1/Endpoints --prune-whitelist core/v1/Namespace --prune-whitelist core/v1/PersistentVolumeClaim --prune-whitelist core/v1/PersistentVolume --prune-whitelist core/v1/Pod --prune-whitelist core/v1/ReplicationController --prune-whitelist core/v1/Secret --prune-whitelist core/v1/Service --prune-whitelist batch/v1/Job --prune-whitelist batch/v1beta1/CronJob --prune-whitelist apps/v1/DaemonSet --prune-whitelist apps/v1/Deployment --prune-whitelist apps/v1/ReplicaSet --prune-whitelist apps/v1/StatefulSet --prune-whitelist extensions/v1beta1/Ingress ==
kube# [ 28.435839] kube-addons[2478]: INFO: == Kubernetes addon manager started at 2020-01-27T01:28:56+00:00 with ADDON_CHECK_INTERVAL_SEC=60 ==
kube# [ 28.611483] kube-controller-manager[2082]: W0127 01:28:56.561828 2082 shared_informer.go:364] resyncPeriod 48532713741857 is smaller than resyncCheckPeriod 51490720000679 and the informer has already started. Changing it to 51490720000679
kube# [ 28.611738] kube-controller-manager[2082]: I0127 01:28:56.561954 2082 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
kube# [ 28.612085] kube-controller-manager[2082]: I0127 01:28:56.562060 2082 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
kube# [ 28.612273] kube-controller-manager[2082]: I0127 01:28:56.562109 2082 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
kube# [ 28.612541] kube-controller-manager[2082]: I0127 01:28:56.562153 2082 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
kube# [ 28.612813] kube-controller-manager[2082]: I0127 01:28:56.562196 2082 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
kube# [ 28.613145] kube-controller-manager[2082]: I0127 01:28:56.562250 2082 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
kube# [ 28.613348] kube-controller-manager[2082]: I0127 01:28:56.562337 2082 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
kube# [ 28.613602] kube-controller-manager[2082]: I0127 01:28:56.562370 2082 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
kube# [ 28.613951] kube-controller-manager[2082]: I0127 01:28:56.562398 2082 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
kube# [ 28.614315] kube-controller-manager[2082]: I0127 01:28:56.562444 2082 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
kube# [ 28.614524] kube-controller-manager[2082]: I0127 01:28:56.562488 2082 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
kube# [ 28.614748] kube-controller-manager[2082]: I0127 01:28:56.562552 2082 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.extensions
kube# [ 28.615074] kube-controller-manager[2082]: I0127 01:28:56.562585 2082 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.extensions
kube# [ 28.615471] kube-controller-manager[2082]: I0127 01:28:56.562621 2082 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
kube# [ 28.615767] kube-controller-manager[2082]: I0127 01:28:56.562669 2082 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
kube# [ 28.616081] kube-controller-manager[2082]: I0127 01:28:56.562696 2082 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
kube# [ 28.616282] kube-controller-manager[2082]: I0127 01:28:56.562762 2082 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
kube# [ 28.616484] kube-controller-manager[2082]: I0127 01:28:56.562857 2082 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
kube# [ 28.616751] kube-controller-manager[2082]: I0127 01:28:56.562930 2082 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.extensions
kube# [ 28.617032] kube-controller-manager[2082]: I0127 01:28:56.563001 2082 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
kube# [ 28.617253] kube-controller-manager[2082]: I0127 01:28:56.563038 2082 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
kube# [ 28.617603] kube-controller-manager[2082]: I0127 01:28:56.563095 2082 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
kube# [ 28.617850] kube-controller-manager[2082]: I0127 01:28:56.563145 2082 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.extensions
kube# [ 28.618092] kube-controller-manager[2082]: I0127 01:28:56.563179 2082 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
kube# [ 28.618310] kube-controller-manager[2082]: I0127 01:28:56.563208 2082 controllermanager.go:532] Started "resourcequota"
kube# [ 28.618550] kube-controller-manager[2082]: I0127 01:28:56.563236 2082 resource_quota_controller.go:271] Starting resource quota controller
kube# [ 28.618819] kube-controller-manager[2082]: I0127 01:28:56.563263 2082 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
kube# [ 28.619138] kube-controller-manager[2082]: I0127 01:28:56.563341 2082 resource_quota_monitor.go:303] QuotaMonitor running
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# [ 29.009508] kube-addons[2478]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 29.011032] kube-addons[2478]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.06 seconds)
kube# [ 29.212941] kube-controller-manager[2082]: I0127 01:28:57.162333 2082 garbagecollector.go:128] Starting garbage collector controller
kube# [ 29.213129] kube-controller-manager[2082]: I0127 01:28:57.162347 2082 controllermanager.go:532] Started "garbagecollector"
kube# [ 29.213449] kube-controller-manager[2082]: I0127 01:28:57.162361 2082 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
kube# [ 29.213666] kube-controller-manager[2082]: I0127 01:28:57.162386 2082 graph_builder.go:280] GraphBuilder running
kube# [ 29.229717] kube-controller-manager[2082]: I0127 01:28:57.180081 2082 controllermanager.go:532] Started "deployment"
kube# [ 29.229984] kube-controller-manager[2082]: I0127 01:28:57.180175 2082 deployment_controller.go:152] Starting deployment controller
kube# [ 29.230193] kube-controller-manager[2082]: I0127 01:28:57.180190 2082 controller_utils.go:1029] Waiting for caches to sync for deployment controller
kube# [ 29.459443] kube-controller-manager[2082]: I0127 01:28:57.409785 2082 controllermanager.go:532] Started "disruption"
kube# [ 29.459612] kube-controller-manager[2082]: I0127 01:28:57.409838 2082 disruption.go:333] Starting disruption controller
kube# [ 29.459923] kube-controller-manager[2082]: I0127 01:28:57.409856 2082 controller_utils.go:1029] Waiting for caches to sync for disruption controller
kube# [ 29.583114] kube-addons[2478]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 29.584533] kube-addons[2478]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 29.709270] kube-controller-manager[2082]: I0127 01:28:57.659564 2082 controllermanager.go:532] Started "cronjob"
kube# [ 29.709482] kube-controller-manager[2082]: W0127 01:28:57.659616 2082 controllermanager.go:524] Skipping "csrsigning"
kube# [ 29.709768] kube-controller-manager[2082]: I0127 01:28:57.659652 2082 cronjob_controller.go:96] Starting CronJob Manager
kube# [ 29.858375] kube-controller-manager[2082]: I0127 01:28:57.808697 2082 node_ipam_controller.go:94] Sending events to api server.
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 30.155433] kube-addons[2478]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 30.156753] kube-addons[2478]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 30.724847] kube-addons[2478]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 30.726161] kube-addons[2478]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 31.298040] kube-addons[2478]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 31.299359] kube-addons[2478]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 31.865157] kube-addons[2478]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 31.866549] kube-addons[2478]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 32.432804] kube-addons[2478]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 32.434231] kube-addons[2478]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 33.001486] kube-addons[2478]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 33.002803] kube-addons[2478]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 33.573556] kube-addons[2478]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 33.574992] kube-addons[2478]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 34.143109] kube-addons[2478]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 34.144386] kube-addons[2478]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 34.716463] kube-addons[2478]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 34.717889] kube-addons[2478]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# [ 35.288683] kube-addons[2478]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 35.290075] kube-addons[2478]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 35.857681] kube-addons[2478]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 35.859298] kube-addons[2478]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 36.433244] kube-addons[2478]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 36.434555] kube-addons[2478]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 37.006595] kube-addons[2478]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 37.008096] kube-addons[2478]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 37.577681] kube-addons[2478]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 37.579166] kube-addons[2478]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 38.148201] kube-addons[2478]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 38.149758] kube-addons[2478]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 38.719376] kube-addons[2478]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 38.720746] kube-addons[2478]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 39.305713] kube-addons[2478]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 39.307749] kube-addons[2478]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 39.860827] kube-controller-manager[2082]: I0127 01:29:07.810822 2082 range_allocator.go:78] Sending events to api server.
kube# [ 39.861166] kube-controller-manager[2082]: I0127 01:29:07.810921 2082 range_allocator.go:99] No Service CIDR provided. Skipping filtering out service addresses.
kube# [ 39.861513] kube-controller-manager[2082]: I0127 01:29:07.810965 2082 controllermanager.go:532] Started "nodeipam"
kube# [ 39.861771] kube-controller-manager[2082]: I0127 01:29:07.811047 2082 node_ipam_controller.go:162] Starting ipam controller
kube# [ 39.862256] kube-controller-manager[2082]: I0127 01:29:07.811078 2082 controller_utils.go:1029] Waiting for caches to sync for node controller
kube# [ 39.877540] kube-controller-manager[2082]: W0127 01:29:07.827862 2082 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
kube# [ 39.877791] kube-controller-manager[2082]: E0127 01:29:07.827895 2082 plugins.go:590] Error initializing dynamic plugin prober: Error (re-)creating driver directory: mkdir /usr/libexec: permission denied
kube# [ 39.878144] kube-controller-manager[2082]: I0127 01:29:07.828091 2082 controllermanager.go:532] Started "attachdetach"
kube# [ 39.878422] kube-controller-manager[2082]: I0127 01:29:07.828194 2082 attach_detach_controller.go:335] Starting attach detach controller
kube# [ 39.878672] kube-controller-manager[2082]: I0127 01:29:07.828214 2082 controller_utils.go:1029] Waiting for caches to sync for attach detach controller
kube# [ 39.890202] kube-controller-manager[2082]: I0127 01:29:07.840538 2082 controllermanager.go:532] Started "serviceaccount"
kube# [ 39.890440] kube-controller-manager[2082]: W0127 01:29:07.840569 2082 controllermanager.go:511] "bootstrapsigner" is disabled
kube# [ 39.890814] kube-controller-manager[2082]: W0127 01:29:07.840578 2082 controllermanager.go:511] "tokencleaner" is disabled
kube# [ 39.891178] kube-controller-manager[2082]: I0127 01:29:07.840672 2082 serviceaccounts_controller.go:117] Starting service account controller
kube# [ 39.891507] kube-controller-manager[2082]: I0127 01:29:07.840701 2082 controller_utils.go:1029] Waiting for caches to sync for service account controller
kube# [ 39.893740] kube-addons[2478]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 39.895816] kube-controller-manager[2082]: I0127 01:29:07.846182 2082 node_lifecycle_controller.go:77] Sending events to api server
kube# [ 39.896234] kube-controller-manager[2082]: E0127 01:29:07.846248 2082 core.go:160] failed to start cloud node lifecycle controller: no cloud provider provided
kube# [ 39.896567] kube-controller-manager[2082]: W0127 01:29:07.846261 2082 controllermanager.go:524] Skipping "cloud-node-lifecycle"
kube# [ 39.896819] kube-controller-manager[2082]: W0127 01:29:07.846300 2082 controllermanager.go:524] Skipping "ttl-after-finished"
kube# [ 39.902542] kube-addons[2478]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 39.910040] kube-controller-manager[2082]: I0127 01:29:07.860411 2082 controllermanager.go:532] Started "replicaset"
kube# [ 39.910217] kube-controller-manager[2082]: I0127 01:29:07.860433 2082 replica_set.go:182] Starting replicaset controller
kube# [ 39.910409] kube-controller-manager[2082]: I0127 01:29:07.860461 2082 controller_utils.go:1029] Waiting for caches to sync for ReplicaSet controller
kube# [ 39.915240] kube-controller-manager[2082]: I0127 01:29:07.865618 2082 controllermanager.go:532] Started "csrapproving"
kube# [ 39.915694] kube-controller-manager[2082]: I0127 01:29:07.865868 2082 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
kube# [ 39.919952] kube-controller-manager[2082]: I0127 01:29:07.870275 2082 certificate_controller.go:113] Starting certificate controller
kube# [ 39.920192] kube-controller-manager[2082]: I0127 01:29:07.870297 2082 controller_utils.go:1029] Waiting for caches to sync for certificate controller
kube# [ 39.925119] kube-controller-manager[2082]: I0127 01:29:07.875407 2082 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
kube# [ 39.938661] kube-controller-manager[2082]: I0127 01:29:07.889023 2082 controller_utils.go:1036] Caches are synced for job controller
kube# [ 39.959202] kube-controller-manager[2082]: I0127 01:29:07.909561 2082 controller_utils.go:1036] Caches are synced for TTL controller
kube# [ 39.960889] kube-controller-manager[2082]: I0127 01:29:07.911248 2082 controller_utils.go:1036] Caches are synced for node controller
kube# [ 39.961018] kube-controller-manager[2082]: I0127 01:29:07.911267 2082 range_allocator.go:157] Starting range CIDR allocator
kube# [ 39.961271] kube-controller-manager[2082]: I0127 01:29:07.911281 2082 controller_utils.go:1029] Waiting for caches to sync for cidrallocator controller
kube# [ 39.972912] kube-controller-manager[2082]: I0127 01:29:07.923269 2082 controller_utils.go:1036] Caches are synced for GC controller
kube# [ 39.988280] kube-controller-manager[2082]: I0127 01:29:07.938631 2082 controller_utils.go:1036] Caches are synced for ReplicationController controller
kube# [ 39.990505] kube-controller-manager[2082]: I0127 01:29:07.940896 2082 controller_utils.go:1036] Caches are synced for service account controller
kube# [ 40.009457] kube-controller-manager[2082]: I0127 01:29:07.959717 2082 controller_utils.go:1036] Caches are synced for endpoint controller
kube# [ 40.009808] kube-controller-manager[2082]: I0127 01:29:07.959994 2082 controller_utils.go:1036] Caches are synced for HPA controller
kube# [ 40.010353] kube-controller-manager[2082]: I0127 01:29:07.960701 2082 controller_utils.go:1036] Caches are synced for ReplicaSet controller
kube# [ 40.010905] kube-controller-manager[2082]: I0127 01:29:07.961228 2082 controller_utils.go:1036] Caches are synced for daemon sets controller
kube# [ 40.017652] kube-controller-manager[2082]: I0127 01:29:07.968032 2082 controller_utils.go:1036] Caches are synced for namespace controller
kube# [ 40.020154] kube-controller-manager[2082]: I0127 01:29:07.970538 2082 controller_utils.go:1036] Caches are synced for certificate controller
kube# [ 40.028553] kube-controller-manager[2082]: I0127 01:29:07.978925 2082 controller_utils.go:1036] Caches are synced for PV protection controller
kube# [ 40.061126] kube-controller-manager[2082]: I0127 01:29:08.011445 2082 controller_utils.go:1036] Caches are synced for cidrallocator controller
kube# [ 40.330030] kube-controller-manager[2082]: I0127 01:29:08.280393 2082 controller_utils.go:1036] Caches are synced for deployment controller
kube# [ 40.359693] kube-controller-manager[2082]: I0127 01:29:08.310012 2082 controller_utils.go:1036] Caches are synced for disruption controller
kube# [ 40.359937] kube-controller-manager[2082]: I0127 01:29:08.310043 2082 disruption.go:341] Sending events to api server.
kube# [ 40.475927] kube-addons[2478]: INFO: == Default service account in the kube-system namespace has token default-token-kwf2n ==
kube# [ 40.481383] kube-addons[2478]: find: ‘/etc/kubernetes/admission-controls’: No such file or directory
kube# [ 40.486355] kube-addons[2478]: INFO: == Entering periodical apply loop at 2020-01-27T01:29:08+00:00 ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# [ 40.563364] kube-addons[2478]: INFO: Leader is kube
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 40.609329] kube-controller-manager[2082]: I0127 01:29:08.559585 2082 controller_utils.go:1036] Caches are synced for ClusterRoleAggregator controller
kube# [ 40.678125] kube-controller-manager[2082]: I0127 01:29:08.628416 2082 controller_utils.go:1036] Caches are synced for attach detach controller
kube# [ 40.697833] kube-addons[2478]: error: no objects passed to create
kube# [ 40.699575] kube-controller-manager[2082]: I0127 01:29:08.649922 2082 controller_utils.go:1036] Caches are synced for stateful set controller
kube# [ 40.704042] kube-addons[2478]: INFO: == Kubernetes addon ensure completed at 2020-01-27T01:29:08+00:00 ==
kube# [ 40.704207] kube-addons[2478]: INFO: == Reconciling with deprecated label ==
kube# [ 40.712188] kube-controller-manager[2082]: I0127 01:29:08.662563 2082 controller_utils.go:1036] Caches are synced for garbage collector controller
kube# [ 40.712299] kube-controller-manager[2082]: I0127 01:29:08.662590 2082 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
kube# [ 40.713097] kube-controller-manager[2082]: I0127 01:29:08.663475 2082 controller_utils.go:1036] Caches are synced for resource quota controller
kube# [ 40.715762] kube-controller-manager[2082]: I0127 01:29:08.666122 2082 controller_utils.go:1036] Caches are synced for resource quota controller
kube# [ 40.725271] kube-controller-manager[2082]: I0127 01:29:08.675632 2082 controller_utils.go:1036] Caches are synced for garbage collector controller
kube# [ 40.759047] kube-controller-manager[2082]: I0127 01:29:08.709390 2082 controller_utils.go:1036] Caches are synced for PVC protection controller
kube# [ 40.759367] kube-controller-manager[2082]: I0127 01:29:08.709744 2082 controller_utils.go:1036] Caches are synced for persistent volume controller
kube# [ 40.762172] kube-controller-manager[2082]: I0127 01:29:08.712515 2082 controller_utils.go:1036] Caches are synced for expand controller
kube# [ 40.809504] kube-controller-manager[2082]: I0127 01:29:08.759837 2082 controller_utils.go:1036] Caches are synced for taint controller
kube# [ 40.809727] kube-controller-manager[2082]: I0127 01:29:08.759896 2082 taint_manager.go:182] Starting NoExecuteTaintManager
kube# [ 40.854006] kube-apiserver[2106]: I0127 01:29:08.803984 2106 controller.go:606] quota admission added evaluator for: deployments.extensions
kube# [ 40.860892] kube-apiserver[2106]: I0127 01:29:08.811149 2106 controller.go:606] quota admission added evaluator for: replicasets.apps
kube# [ 40.862943] kube-controller-manager[2082]: I0127 01:29:08.812890 2082 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"45dc5137-f283-4239-9578-2150ded4087e", APIVersion:"apps/v1", ResourceVersion:"270", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-7cb9b6dd8f to 2
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 41.878337] kube-controller-manager[2082]: I0127 01:29:09.828024 2082 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-7cb9b6dd8f", UID:"1b62239c-251a-406a-86c2-d749bd745dcd", APIVersion:"apps/v1", ResourceVersion:"271", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-7cb9b6dd8f-l792r
kube# [ 41.880523] kube-controller-manager[2082]: I0127 01:29:09.830822 2082 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-7cb9b6dd8f", UID:"1b62239c-251a-406a-86c2-d749bd745dcd", APIVersion:"apps/v1", ResourceVersion:"271", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-7cb9b6dd8f-x2slm
kube# [ 41.898476] kube-scheduler[2112]: E0127 01:29:09.848499 2112 scheduler.go:485] error selecting node for pod: no nodes available to schedule pods
kube# [ 41.901778] kube-scheduler[2112]: E0127 01:29:09.852154 2112 scheduler.go:485] error selecting node for pod: no nodes available to schedule pods
kube# [ 42.083612] kube-addons[2478]: configmap/coredns created
kube# [ 42.083752] kube-addons[2478]: deployment.extensions/coredns created
kube# [ 42.084209] kube-addons[2478]: serviceaccount/coredns created
kube# [ 42.084423] kube-addons[2478]: service/kube-dns created
kube# [ 42.085572] kube-addons[2478]: INFO: == Reconciling with addon-manager label ==
kube# [ 42.216726] kube-addons[2478]: error: no objects passed to apply
kube# [ 42.222576] kube-addons[2478]: INFO: == Kubernetes addon reconcile completed at 2020-01-27T01:29:10+00:00 ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube: (previous 4 lines repeated 54 more times with identical output)
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# [ 100.301524] kube-addons[2478]: INFO: Leader is kube
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 100.430468] kube-addons[2478]: error: no objects passed to create
kube# [ 100.435205] kube-addons[2478]: INFO: == Kubernetes addon ensure completed at 2020-01-27T01:30:08+00:00 ==
kube# [ 100.436438] kube-addons[2478]: INFO: == Reconciling with deprecated label ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 101.776484] kube-addons[2478]: configmap/coredns unchanged
kube# [ 101.777700] kube-addons[2478]: deployment.extensions/coredns unchanged
kube# [ 101.778613] kube-addons[2478]: serviceaccount/coredns unchanged
kube# [ 101.779534] kube-addons[2478]: service/kube-dns unchanged
kube# [ 101.780686] kube-addons[2478]: INFO: == Reconciling with addon-manager label ==
kube# [ 101.915095] kube-addons[2478]: error: no objects passed to apply
kube# [ 101.920635] kube-addons[2478]: INFO: == Kubernetes addon reconcile completed at 2020-01-27T01:30:09+00:00 ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube: (previous 4 lines repeated 6 more times with identical output)
kube# [ 108.690722] kube-scheduler[2112]: E0127 01:30:16.640723 2112 scheduler.go:485] error selecting node for pod: no nodes available to schedule pods
kube# [ 108.692798] kube-scheduler[2112]: E0127 01:30:16.640882 2112 scheduler.go:485] error selecting node for pod: no nodes available to schedule pods
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube: (previous 4 lines repeated 48 more times with identical output)
kube# [ 160.999613] kube-addons[2478]: INFO: Leader is kube
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 161.129368] kube-addons[2478]: error: no objects passed to create
kube# [ 161.134321] kube-addons[2478]: INFO: == Kubernetes addon ensure completed at 2020-01-27T01:31:09+00:00 ==
kube# [ 161.135818] kube-addons[2478]: INFO: == Reconciling with deprecated label ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube# [ 162.485947] kube-addons[2478]: configmap/coredns unchanged
kube# [ 162.487027] kube-addons[2478]: deployment.extensions/coredns unchanged
kube# [ 162.488097] kube-addons[2478]: serviceaccount/coredns unchanged
kube# [ 162.489030] kube-addons[2478]: service/kube-dns unchanged
kube# [ 162.489951] kube-addons[2478]: INFO: == Reconciling with addon-manager label ==
kube# [ 162.622947] kube-addons[2478]: error: no objects passed to apply
kube# [ 162.628775] kube-addons[2478]: INFO: == Kubernetes addon reconcile completed at 2020-01-27T01:31:10+00:00 ==
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error from server (NotFound): nodes "kube.my.xzy" not found
kube: exit status 1
(0.05 seconds)
kube: (previous 4 lines repeated 9 more times with identical output)
error: interrupted by the user