@xavierzwirtz
Created January 27, 2020 01:38
success
these derivations will be built:
/nix/store/24lw3dnbif9p6r11mq5nk6z3rgr209cb-bulk-layers.drv
/nix/store/3lgbww299v2mka7p9by84yxdd341wwzx-nginx-config.json.drv
/nix/store/8xsaixzaafay1vrbiif69as8l69jyh9i-nginx-customisation-layer.drv
/nix/store/ycwgfi9bgpq0dnxx8fc7732h1gjz9r8x-closure.drv
/nix/store/vgcij363wdayqvxhh5d7g0db6p1qvvrc-closure-paths.drv
/nix/store/zlbpl3x8s1siq093g34li4f0cxrq8r8n-store-path-to-layer.sh.drv
/nix/store/rasm8f1pr0miss2w0v9p2gb29w5jcwra-nginx-granular-docker-layers.drv
/nix/store/98a62c975gx89jmpy3knx0z276yh036y-docker-image-nginx.tar.gz.drv
/nix/store/ppdf7hillsy84h2l2qb30q1in698lwss-kubenix-generated.json.drv
/nix/store/qv5icsq2i5d8x58bh1d7b8iyiq0f2w21-run-nixos-vm.drv
/nix/store/s9a75xw41s9rv4wbdh7y8gprxg13szg4-nixos-vm.drv
/nix/store/mh8nqz1waq0gj2zapp9lsqszxng04q9r-nixos-test-driver-nginx-deployment.drv
/nix/store/ac0l0kff56ya4bj07gf5a47p97mlgj5z-vm-test-run-nginx-deployment.drv
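The derivation names above (bulk-layers, nginx-granular-docker-layers, docker-image-nginx.tar.gz) match what nixpkgs' dockerTools.buildLayeredImage emits. A minimal sketch of that kind of image expression, assuming stock nixpkgs and hypothetical option values, not the gist's actual code:

  { pkgs ? import <nixpkgs> { } }:

  # Builds a layered Docker image tarball; nix-build prints a plan like
  # the one above, ending in a docker-image-<name>.tar.gz derivation.
  pkgs.dockerTools.buildLayeredImage {
    name = "nginx";
    contents = [ pkgs.nginx ];
    config.Cmd = [ "${pkgs.nginx}/bin/nginx" "-g" "daemon off;" ];
  }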
these paths will be fetched (112.93 MiB download, 449.40 MiB unpacked):
/nix/store/01j21isxi1wn8vsjpbhlplyw1ddyypjm-geoip-1.6.12
/nix/store/06nq4z17fh43wrbn6hl1yq7bzs99lpr1-hook
/nix/store/0dshs4vdqivr9l3cnf244rizk3w6rk20-virglrenderer-0.7.0
/nix/store/2xwxj5qrrc71asdk1wyq19nz9k845pzs-patchelf-0.9
/nix/store/2yj27w7if3m362np4znnyip6v4y44fsz-go-1.12.9
/nix/store/3g2pkmc1s9ycjaxaqc5hrzmq05r5ywbi-stdenv-linux
/nix/store/4rmwdzcypzbs05kbkcxrp6k0ijmqhldv-perl5.30.0-XML-Writer-0.625
/nix/store/4w2zbpv9ihl36kbpp6w5d1x33gp5ivfh-source
/nix/store/62nx464pw43wx3fvg2dnfsaijl7nvvzq-jshon-20160111.2
/nix/store/86kxh5v2mggj4ghy8l7khqdffhwixhhn-jquery-ui-1.11.4
/nix/store/8cgm2dl5grnhddyknc3d06f7f2r30jf0-libxml2-2.9.9-bin
/nix/store/8g88npivsfhzfwzpw2j35wzzf2lbjf71-gd-2.2.5
/nix/store/976mm1v0m126d932c53iqwd7clx3ycka-libxslt-1.1.33-dev
/nix/store/aa7d477nrc0w14lqmib8619bc83csm2m-gnutls-3.6.11.1-dev
/nix/store/apfgni3w7sd7qnnzws0ky8j40sbigy4m-hook
/nix/store/axlxp2c9pqpy196jcncy7i0alpp8q4yn-libxslt-1.1.33-bin
/nix/store/blwx4aab2ygxhall7kwrdyb3nwk04bcm-tarsum
/nix/store/cnrpqd2i7sz8xxxjv3dspn75bhqwv01i-perl5.30.0-Term-ReadLine-Gnu-1.36
/nix/store/cwym8n7lkp02df7qf41j0gldgagzvjn4-netpbm-10.82.01
/nix/store/ggbrpajhaxmzc840ky35zsjva9nilypv-spice-0.14.2
/nix/store/h0bxpn54jvvm4qi0y57im3086flzqj7z-pcre2-10.33-dev
/nix/store/j8fq1ksp37w88rx80blzazldi17f3x7s-gnumake-4.2.1
/nix/store/jg0mniv6b69lfbb4fix0qdlf8fj22pdh-usbredir-0.8.0
/nix/store/jsqrk045m09i136mgcfjfai8i05nq14c-source
/nix/store/k3n5hvqb2lkx1z7cyyb5wsc6q6zhndlp-jquery-1.11.3
/nix/store/k9cgcprirg5zyjsdmd503lqj2mhvxqnc-nginx-1.16.1
/nix/store/kdzap6v930z3bj8h47jfk9hgasrqmhky-pcre2-10.33-bin
/nix/store/l8yj41cr5c6mx3cp4xazgxf49f14adhg-qemu-host-cpu-only-for-vm-tests-4.0.1
/nix/store/m97z0dr68wn36n8860dfvaa7w1qfrk30-vte-0.56.3
/nix/store/n14bjnksgk2phl8n69m4yabmds7f0jj2-source
/nix/store/q17zhi1pbfxr2k5mwc2pif258ib1bwag-autogen-5.18.12
/nix/store/qghrkvk86f9llfkcr1bxsypqbw1a4qmw-stdenv-linux
/nix/store/ryavpa9pbwf4w2j0q8jq7x6scy5igvxw-autogen-5.18.12-lib
/nix/store/s834pvkk1dc10a6f0x5fljvah8rkd6d0-nixos-test-driver
/nix/store/w3zk97m66b45grjabblijbfdhl4s82pc-nettle-3.4.1-dev
/nix/store/wl2iq6bx1k3j8wa5qqygra102k3nlijw-libxml2-2.9.9-dev
/nix/store/wvd3r9r8a2w3v1vcjbw1avfcbzv9aspq-libcacard-2.7.0
/nix/store/x664lr92z3lccfh28p7axk4jv6250fpi-gnutls-3.6.11.1-bin
/nix/store/x7vqi78gkhb3n1n1c4w4bgkakbyv5sq0-lndir-1.0.3
/nix/store/xbf40646brxmk2j59yc5ybq3zfhsdzkk-jq-1.6-dev
/nix/store/xhmbbqfl63slc37fl94h33n6ny6ky69a-pigz-2.4
/nix/store/zbwhp0jrf8y33l187yjs5j002lwl30d7-vde2-2.3.2
copying path '/nix/store/k3n5hvqb2lkx1z7cyyb5wsc6q6zhndlp-jquery-1.11.3' from 'https://cache.nixos.org'...
copying path '/nix/store/86kxh5v2mggj4ghy8l7khqdffhwixhhn-jquery-ui-1.11.4' from 'https://cache.nixos.org'...
copying path '/nix/store/j8fq1ksp37w88rx80blzazldi17f3x7s-gnumake-4.2.1' from 'https://cache.nixos.org'...
copying path '/nix/store/axlxp2c9pqpy196jcncy7i0alpp8q4yn-libxslt-1.1.33-bin' from 'https://cache.nixos.org'...
copying path '/nix/store/2xwxj5qrrc71asdk1wyq19nz9k845pzs-patchelf-0.9' from 'https://cache.nixos.org'...
copying path '/nix/store/apfgni3w7sd7qnnzws0ky8j40sbigy4m-hook' from 'https://cache.nixos.org'...
copying path '/nix/store/8cgm2dl5grnhddyknc3d06f7f2r30jf0-libxml2-2.9.9-bin' from 'https://cache.nixos.org'...
copying path '/nix/store/4rmwdzcypzbs05kbkcxrp6k0ijmqhldv-perl5.30.0-XML-Writer-0.625' from 'https://cache.nixos.org'...
copying path '/nix/store/cwym8n7lkp02df7qf41j0gldgagzvjn4-netpbm-10.82.01' from 'https://cache.nixos.org'...
copying path '/nix/store/cnrpqd2i7sz8xxxjv3dspn75bhqwv01i-perl5.30.0-Term-ReadLine-Gnu-1.36' from 'https://cache.nixos.org'...
copying path '/nix/store/zbwhp0jrf8y33l187yjs5j002lwl30d7-vde2-2.3.2' from 'https://cache.nixos.org'...
copying path '/nix/store/wvd3r9r8a2w3v1vcjbw1avfcbzv9aspq-libcacard-2.7.0' from 'https://cache.nixos.org'...
copying path '/nix/store/0dshs4vdqivr9l3cnf244rizk3w6rk20-virglrenderer-0.7.0' from 'https://cache.nixos.org'...
copying path '/nix/store/jg0mniv6b69lfbb4fix0qdlf8fj22pdh-usbredir-0.8.0' from 'https://cache.nixos.org'...
copying path '/nix/store/ggbrpajhaxmzc840ky35zsjva9nilypv-spice-0.14.2' from 'https://cache.nixos.org'...
copying path '/nix/store/xbf40646brxmk2j59yc5ybq3zfhsdzkk-jq-1.6-dev' from 'https://cache.nixos.org'...
copying path '/nix/store/62nx464pw43wx3fvg2dnfsaijl7nvvzq-jshon-20160111.2' from 'https://cache.nixos.org'...
copying path '/nix/store/xhmbbqfl63slc37fl94h33n6ny6ky69a-pigz-2.4' from 'https://cache.nixos.org'...
copying path '/nix/store/w3zk97m66b45grjabblijbfdhl4s82pc-nettle-3.4.1-dev' from 'https://cache.nixos.org'...
copying path '/nix/store/kdzap6v930z3bj8h47jfk9hgasrqmhky-pcre2-10.33-bin' from 'https://cache.nixos.org'...
copying path '/nix/store/4w2zbpv9ihl36kbpp6w5d1x33gp5ivfh-source' from 'https://cache.nixos.org'...
copying path '/nix/store/q17zhi1pbfxr2k5mwc2pif258ib1bwag-autogen-5.18.12' from 'https://cache.nixos.org'...
copying path '/nix/store/jsqrk045m09i136mgcfjfai8i05nq14c-source' from 'https://cache.nixos.org'...
copying path '/nix/store/n14bjnksgk2phl8n69m4yabmds7f0jj2-source' from 'https://cache.nixos.org'...
copying path '/nix/store/01j21isxi1wn8vsjpbhlplyw1ddyypjm-geoip-1.6.12' from 'https://cache.nixos.org'...
copying path '/nix/store/2yj27w7if3m362np4znnyip6v4y44fsz-go-1.12.9' from 'https://cache.nixos.org'...
copying path '/nix/store/x7vqi78gkhb3n1n1c4w4bgkakbyv5sq0-lndir-1.0.3' from 'https://cache.nixos.org'...
copying path '/nix/store/8g88npivsfhzfwzpw2j35wzzf2lbjf71-gd-2.2.5' from 'https://cache.nixos.org'...
copying path '/nix/store/06nq4z17fh43wrbn6hl1yq7bzs99lpr1-hook' from 'https://cache.nixos.org'...
copying path '/nix/store/wl2iq6bx1k3j8wa5qqygra102k3nlijw-libxml2-2.9.9-dev' from 'https://cache.nixos.org'...
copying path '/nix/store/h0bxpn54jvvm4qi0y57im3086flzqj7z-pcre2-10.33-dev' from 'https://cache.nixos.org'...
copying path '/nix/store/3g2pkmc1s9ycjaxaqc5hrzmq05r5ywbi-stdenv-linux' from 'https://cache.nixos.org'...
copying path '/nix/store/qghrkvk86f9llfkcr1bxsypqbw1a4qmw-stdenv-linux' from 'https://cache.nixos.org'...
copying path '/nix/store/ryavpa9pbwf4w2j0q8jq7x6scy5igvxw-autogen-5.18.12-lib' from 'https://cache.nixos.org'...
copying path '/nix/store/976mm1v0m126d932c53iqwd7clx3ycka-libxslt-1.1.33-dev' from 'https://cache.nixos.org'...
copying path '/nix/store/k9cgcprirg5zyjsdmd503lqj2mhvxqnc-nginx-1.16.1' from 'https://cache.nixos.org'...
building '/nix/store/ppdf7hillsy84h2l2qb30q1in698lwss-kubenix-generated.json.drv'...
building '/nix/store/3lgbww299v2mka7p9by84yxdd341wwzx-nginx-config.json.drv'...
building '/nix/store/zlbpl3x8s1siq093g34li4f0cxrq8r8n-store-path-to-layer.sh.drv'...
copying path '/nix/store/x664lr92z3lccfh28p7axk4jv6250fpi-gnutls-3.6.11.1-bin' from 'https://cache.nixos.org'...
building '/nix/store/24lw3dnbif9p6r11mq5nk6z3rgr209cb-bulk-layers.drv'...
building '/nix/store/ycwgfi9bgpq0dnxx8fc7732h1gjz9r8x-closure.drv'...
copying path '/nix/store/aa7d477nrc0w14lqmib8619bc83csm2m-gnutls-3.6.11.1-dev' from 'https://cache.nixos.org'...
building '/nix/store/vgcij363wdayqvxhh5d7g0db6p1qvvrc-closure-paths.drv'...
copying path '/nix/store/m97z0dr68wn36n8860dfvaa7w1qfrk30-vte-0.56.3' from 'https://cache.nixos.org'...
copying path '/nix/store/l8yj41cr5c6mx3cp4xazgxf49f14adhg-qemu-host-cpu-only-for-vm-tests-4.0.1' from 'https://cache.nixos.org'...
copying path '/nix/store/s834pvkk1dc10a6f0x5fljvah8rkd6d0-nixos-test-driver' from 'https://cache.nixos.org'...
building '/nix/store/qv5icsq2i5d8x58bh1d7b8iyiq0f2w21-run-nixos-vm.drv'...
building '/nix/store/s9a75xw41s9rv4wbdh7y8gprxg13szg4-nixos-vm.drv'...
copying path '/nix/store/blwx4aab2ygxhall7kwrdyb3nwk04bcm-tarsum' from 'https://cache.nixos.org'...
building '/nix/store/8xsaixzaafay1vrbiif69as8l69jyh9i-nginx-customisation-layer.drv'...
building '/nix/store/rasm8f1pr0miss2w0v9p2gb29w5jcwra-nginx-granular-docker-layers.drv'...
Packing layer...
Computing layer checksum...
Creating layer #1 for /nix/store/wx1vk75bpdr65g6xwxbj4rw0pk04v5j3-glibc-2.27
Creating layer #2 for /nix/store/xvxsbvbi7ckccz4pz2j6np7czadgjy2x-zlib-1.2.11
Creating layer #3 for /nix/store/n55nxs8xxdwkwv4kqh99pdnyqxp0d1zg-libpng-apng-1.6.37
Creating layer #4 for /nix/store/0ykbl0k34cfh80gvawqy5f8v1yq7pph8-bzip2-1.0.6.0.1
Creating layer #5 for /nix/store/s7j9n1wccws4kgigknl4rfqpyjxy544y-libjpeg-turbo-2.0.3
Creating layer #6 for /nix/store/w4snc9q1ns3rqg8zykkh9ric1d92akwd-dejavu-fonts-minimal-2.37
Creating layer #7 for /nix/store/nzb33937sf9031ik3v7c8d039lnviglk-freetype-2.10.1
Creating layer #8 for /nix/store/784rh7jrfhagbkydjfrv68h9x3g4gqmk-gcc-8.3.0-lib
Creating layer #9 for /nix/store/blykn8wlxh1n91dzxizyxvkygmd911cx-xz-5.2.4
tar: Removing leading `/' from member names
Creating layer #10 for /nix/store/lp6xmsg44yflzd3rv2qc4dc0m9y0qr2n-expat-2.2.7
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #11 for /nix/store/9r9px061ymn6r8wdzgdhbm7sdb5b0dri-fontconfig-2.12.6
Creating layer #12 for /nix/store/yydyda5cz2x74pqp643q2r3p6ipy6d9b-giflib-5.1.4
tar: Removing leading `/' from member names
Creating layer #13 for /nix/store/nl4l9vkbvpp5jblr7kycx2qqchbnn98a-libtiff-4.0.10
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #14 for /nix/store/5zvqxjp62ahwvgqm4y4x9p9ym112hljj-libxml2-2.9.9
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #15 for /nix/store/6mhw8asq3ciinkky6mqq6qn6sfxrkgks-fontconfig-2.12.6-lib
tar: Removing leading `/' from member names
Creating layer #16 for /nix/store/vwydn02iqfg7xp1a6rhpyhs8vl9v2b6d-libwebp-1.0.3
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #17 for /nix/store/8g88npivsfhzfwzpw2j35wzzf2lbjf71-gd-2.2.5
tar: Removing leading `/' from member names
Creating layer #18 for /nix/store/01j21isxi1wn8vsjpbhlplyw1ddyypjm-geoip-1.6.12
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #19 for /nix/store/g42rl3xfqml0yrh5yjdfy4rfdpk1cc7y-libxslt-1.1.33
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #20 for /nix/store/z9vsvmll45kjdf7j9h0vlxjjya6yxgc0-openssl-1.1.1d
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #21 for /nix/store/6p4kq0v91y90jv5zqb4gri38c47wxglj-pcre-8.43
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #22 for /nix/store/4w2zbpv9ihl36kbpp6w5d1x33gp5ivfh-source
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Creating layer #23 for /nix/store/jsqrk045m09i136mgcfjfai8i05nq14c-source /nix/store/n14bjnksgk2phl8n69m4yabmds7f0jj2-source /nix/store/k9cgcprirg5zyjsdmd503lqj2mhvxqnc-nginx-1.16.1 /nix/store/27hpjxyy26v0bpp7x8g72nddcv6nv3hw-bulk-layers /nix/store/gskazlyrm0f1bbcngy04f8m07lm2wsqf-nginx-config.json /nix/store/n8w8r7z1z962scfcc1h7rsdqnaf5xncc-closure
tar: Removing leading `/' from member names
tar: Removing leading `/' from member names
Finished building layer 'nginx-granular-docker-layers'
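Layers #1-#22 above each hold a single store path from the nginx closure, and layer #23 absorbs everything left over in one bulk layer. That is buildLayeredImage's granular layering hitting its layer cap; a hedged fragment, assuming the stock maxLayers option (the cap actually used here is not shown in the log):

  pkgs.dockerTools.buildLayeredImage {
    name = "nginx";
    contents = [ pkgs.nginx ];
    # Once the closure has more store paths than fit under the cap, the
    # remaining paths are tarred together into one final bulk layer.
    maxLayers = 24;
  }

The resulting tarball, built next by docker-image-nginx.tar.gz.drv, is what docker load consumes.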
building '/nix/store/98a62c975gx89jmpy3knx0z276yh036y-docker-image-nginx.tar.gz.drv'...
Cooking the image...
Finished.
building '/nix/store/mh8nqz1waq0gj2zapp9lsqszxng04q9r-nixos-test-driver-nginx-deployment.drv'...
building '/nix/store/ac0l0kff56ya4bj07gf5a47p97mlgj5z-vm-test-run-nginx-deployment.drv'...
starting VDE switch for network 1
running the VM test script
starting all VMs
kube: starting vm
kube# Formatting '/build/vm-state-kube/kube.qcow2', fmt=qcow2 size=4294967296 cluster_size=65536 lazy_refcounts=off refcount_bits=16
kube: QEMU running (pid 9)
(0.06 seconds)
kube: waiting for success: kubectl get node kube.my.xzy | grep -w Ready
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube: waiting for the VM to finish booting
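The "waiting for success" / "running command" lines above are the NixOS test driver polling a test-script assertion. A minimal sketch of such a test, assuming the 19.09-era Perl test driver and a hypothetical kubernetes module config (kubenix generates the real one):

  import <nixpkgs/nixos/tests/make-test.nix> ({ pkgs, ... }: {
    name = "nginx-deployment";
    nodes.kube = { ... }: {
      # hypothetical: kubernetes master+node configuration goes here
    };
    testScript = ''
      startAll;
      # prints "kube: waiting for success: ..." and retries until it passes
      $kube->waitUntilSucceeds("kubectl get node kube.my.xzy | grep -w Ready");
    '';
  })

The machine name kube is what prefixes every console line below as "kube#".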
kube# SeaBIOS (version rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.org)
kube#
kube#
kube# iPXE (http://ipxe.org) 00:03.0 C980 PCI2.10 PnP PMM+7FF90FD0+7FEF0FD0 C980
kube#
kube#
kube#
kube#
kube# iPXE (http://ipxe.org) 00:08.0 CA80 PCI2.10 PnP PMM+7FF90FD0+7FEF0FD0 CA80
kube#
kube#
kube# Booting from ROM...
kube# Probing EDD (edd=off to disable)... ok
kube# [ 0.000000] Linux version 4.19.95 (nixbld@localhost) (gcc version 8.3.0 (GCC)) #1-NixOS SMP Sun Jan 12 11:17:30 UTC 2020
kube# [ 0.000000] Command line: console=ttyS0 panic=1 boot.panic_on_fail loglevel=7 net.ifnames=0 init=/nix/store/6s71ag4g9kx14hql5snisc48a3l5yj3w-nixos-system-kube-19.09.1861.eb65d1dae62/init regInfo=/nix/store/zafnvn8vcyp713dmyk4qfs4961rp2ysz-closure-info/registration console=ttyS0
kube# [ 0.000000] x86/fpu: x87 FPU will use FXSAVE
kube# [ 0.000000] BIOS-provided physical RAM map:
kube# [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
kube# [ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
kube# [ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
kube# [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
kube# [ 0.000000] BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
kube# [ 0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
kube# [ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
kube# [ 0.000000] NX (Execute Disable) protection: active
kube# [ 0.000000] SMBIOS 2.8 present.
kube# [ 0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.org 04/01/2014
kube# [ 0.000000] Hypervisor detected: KVM
kube# [ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
kube# [ 0.000000] kvm-clock: cpu 0, msr 3c75f001, primary cpu clock
kube# [ 0.000000] kvm-clock: using sched offset of 525717314 cycles
kube# [ 0.000001] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
kube# [ 0.000002] tsc: Detected 3499.998 MHz processor
kube# [ 0.000945] last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
kube# [ 0.000981] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
kube# [ 0.002676] found SMP MP-table at [mem 0x000f5980-0x000f598f]
kube# [ 0.002770] Scanning 1 areas for low memory corruption
kube# [ 0.002861] RAMDISK: [mem 0x7f63e000-0x7ffcffff]
kube# [ 0.002868] ACPI: Early table checksum verification disabled
kube# [ 0.002895] ACPI: RSDP 0x00000000000F5940 000014 (v00 BOCHS )
kube# [ 0.002897] ACPI: RSDT 0x000000007FFE152E 000030 (v01 BOCHS BXPCRSDT 00000001 BXPC 00000001)
kube# [ 0.002900] ACPI: FACP 0x000000007FFE1392 000074 (v01 BOCHS BXPCFACP 00000001 BXPC 00000001)
kube# [ 0.002902] ACPI: DSDT 0x000000007FFDFA80 001912 (v01 BOCHS BXPCDSDT 00000001 BXPC 00000001)
kube# [ 0.002904] ACPI: FACS 0x000000007FFDFA40 000040
kube# [ 0.002905] ACPI: APIC 0x000000007FFE1406 0000F0 (v01 BOCHS BXPCAPIC 00000001 BXPC 00000001)
kube# [ 0.002907] ACPI: HPET 0x000000007FFE14F6 000038 (v01 BOCHS BXPCHPET 00000001 BXPC 00000001)
kube# [ 0.003139] No NUMA configuration found
kube# [ 0.003141] Faking a node at [mem 0x0000000000000000-0x000000007ffdbfff]
kube# [ 0.003143] NODE_DATA(0) allocated [mem 0x7ffd8000-0x7ffdbfff]
kube# [ 0.003157] Zone ranges:
kube# [ 0.003158] DMA [mem 0x0000000000001000-0x0000000000ffffff]
kube# [ 0.003159] DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
kube# [ 0.003160] Normal empty
kube# [ 0.003161] Movable zone start for each node
kube# [ 0.003162] Early memory node ranges
kube# [ 0.003162] node 0: [mem 0x0000000000001000-0x000000000009efff]
kube# [ 0.003163] node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
kube# [ 0.003394] Reserved but unavailable: 98 pages
kube# [ 0.003395] Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
kube# [ 0.013381] ACPI: PM-Timer IO Port: 0x608
kube# [ 0.013392] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
kube# [ 0.013414] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
kube# [ 0.013416] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
kube# [ 0.013417] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
kube# [ 0.013417] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
kube# [ 0.013418] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
kube# [ 0.013419] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
kube# [ 0.013422] Using ACPI (MADT) for SMP configuration information
kube# [ 0.013423] ACPI: HPET id: 0x8086a201 base: 0xfed00000
kube# [ 0.013429] smpboot: Allowing 16 CPUs, 0 hotplug CPUs
kube# [ 0.013443] PM: Registered nosave memory: [mem 0x00000000-0x00000fff]
kube# [ 0.013444] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
kube# [ 0.013445] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
kube# [ 0.013445] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
kube# [ 0.013447] [mem 0x80000000-0xfeffbfff] available for PCI devices
kube# [ 0.013448] Booting paravirtualized kernel on KVM
kube# [ 0.013450] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
kube# [ 0.069418] random: get_random_bytes called from start_kernel+0x93/0x4ca with crng_init=0
kube# [ 0.069424] setup_percpu: NR_CPUS:384 nr_cpumask_bits:384 nr_cpu_ids:16 nr_node_ids:1
kube# [ 0.069992] percpu: Embedded 44 pages/cpu s142424 r8192 d29608 u262144
kube# [ 0.070016] KVM setup async PF for cpu 0
kube# [ 0.070019] kvm-stealtime: cpu 0, msr 7d016180
kube# [ 0.070024] Built 1 zonelists, mobility grouping on. Total pages: 515941
kube# [ 0.070025] Policy zone: DMA32
kube# [ 0.070027] Kernel command line: console=ttyS0 panic=1 boot.panic_on_fail loglevel=7 net.ifnames=0 init=/nix/store/6s71ag4g9kx14hql5snisc48a3l5yj3w-nixos-system-kube-19.09.1861.eb65d1dae62/init regInfo=/nix/store/zafnvn8vcyp713dmyk4qfs4961rp2ysz-closure-info/registration console=ttyS0
kube# [ 0.073851] Memory: 2028748K/2096616K available (10252K kernel code, 1140K rwdata, 1904K rodata, 1448K init, 764K bss, 67868K reserved, 0K cma-reserved)
kube# [ 0.074106] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
kube# [ 0.074110] ftrace: allocating 28577 entries in 112 pages
kube# [ 0.081118] rcu: Hierarchical RCU implementation.
kube# [ 0.081119] rcu: RCU event tracing is enabled.
kube# [ 0.081120] rcu: RCU restricting CPUs from NR_CPUS=384 to nr_cpu_ids=16.
kube# [ 0.081121] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
kube# [ 0.082627] NR_IRQS: 24832, nr_irqs: 552, preallocated irqs: 16
kube# [ 0.086500] Console: colour VGA+ 80x25
kube# [ 0.135797] console [ttyS0] enabled
kube# [ 0.136131] ACPI: Core revision 20180810
kube# [ 0.136641] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
kube# [ 0.137547] APIC: Switch to symmetric I/O mode setup
kube# [ 0.138121] x2apic enabled
kube# [ 0.138509] Switched APIC routing to physical x2apic.
kube# [ 0.139642] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
kube# [ 0.140214] tsc: Marking TSC unstable due to TSCs unsynchronized
kube# [ 0.140791] Calibrating delay loop (skipped) preset value.. 6999.99 BogoMIPS (lpj=3499998)
kube# [ 0.141786] pid_max: default: 32768 minimum: 301
kube# [ 0.142238] Security Framework initialized
kube# [ 0.142624] Yama: becoming mindful.
kube# [ 0.142797] AppArmor: AppArmor initialized
kube# [ 0.143478] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes)
kube# [ 0.144005] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes)
kube# [ 0.144791] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes)
kube# [ 0.145788] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes)
kube# [ 0.146640] Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
kube# [ 0.146786] Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
kube# [ 0.147787] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
kube# [ 0.148594] Spectre V2 : Mitigation: Full AMD retpoline
kube# [ 0.148786] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
kube# [ 0.149900] Freeing SMP alternatives memory: 28K
kube# [ 0.253751] smpboot: CPU0: AMD Common KVM processor (family: 0xf, model: 0x6, stepping: 0x1)
kube# [ 0.253785] Performance Events: AMD PMU driver.
kube# [ 0.253788] ... version: 0
kube# [ 0.254164] ... bit width: 48
kube# [ 0.254786] ... generic registers: 4
kube# [ 0.255160] ... value mask: 0000ffffffffffff
kube# [ 0.255786] ... max period: 00007fffffffffff
kube# [ 0.256278] ... fixed-purpose events: 0
kube# [ 0.256660] ... event mask: 000000000000000f
kube# [ 0.256823] rcu: Hierarchical SRCU implementation.
kube# [ 0.257923] smp: Bringing up secondary CPUs ...
kube# [ 0.258401] x86: Booting SMP configuration:
kube# [ 0.258787] .... node #0, CPUs: #1
kube# [ 0.056028] kvm-clock: cpu 1, msr 3c75f041, secondary cpu clock
kube# [ 0.259685] KVM setup async PF for cpu 1
kube# [ 0.259785] kvm-stealtime: cpu 1, msr 7d056180
kube# [ 0.260833] #2
kube# [ 0.056028] kvm-clock: cpu 2, msr 3c75f081, secondary cpu clock
kube# [ 0.261315] KVM setup async PF for cpu 2
kube# [ 0.261691] kvm-stealtime: cpu 2, msr 7d096180
kube# [ 0.262832] #3
kube# [ 0.056028] kvm-clock: cpu 3, msr 3c75f0c1, secondary cpu clock
kube# [ 0.263314] KVM setup async PF for cpu 3
kube# [ 0.263692] kvm-stealtime: cpu 3, msr 7d0d6180
kube# [ 0.264987] #4
kube# [ 0.056028] kvm-clock: cpu 4, msr 3c75f101, secondary cpu clock
kube# [ 0.265542] KVM setup async PF for cpu 4
kube# [ 0.265712] kvm-stealtime: cpu 4, msr 7d116180
kube# [ 0.266830] #5
kube# [ 0.056028] kvm-clock: cpu 5, msr 3c75f141, secondary cpu clock
kube# [ 0.267331] KVM setup async PF for cpu 5
kube# [ 0.267691] kvm-stealtime: cpu 5, msr 7d156180
kube# [ 0.268822] #6
kube# [ 0.056028] kvm-clock: cpu 6, msr 3c75f181, secondary cpu clock
kube# [ 0.269300] KVM setup async PF for cpu 6
kube# [ 0.269697] kvm-stealtime: cpu 6, msr 7d196180
kube# [ 0.270824] #7
kube# [ 0.056028] kvm-clock: cpu 7, msr 3c75f1c1, secondary cpu clock
kube# [ 0.271300] KVM setup async PF for cpu 7
kube# [ 0.271694] kvm-stealtime: cpu 7, msr 7d1d6180
kube# [ 0.272809] #8
kube# [ 0.056028] kvm-clock: cpu 8, msr 3c75f201, secondary cpu clock
kube# [ 0.273357] KVM setup async PF for cpu 8
kube# [ 0.273723] kvm-stealtime: cpu 8, msr 7d216180
kube# [ 0.274818] #9
kube# [ 0.056028] kvm-clock: cpu 9, msr 3c75f241, secondary cpu clock
kube# [ 0.275297] KVM setup async PF for cpu 9
kube# [ 0.275697] kvm-stealtime: cpu 9, msr 7d256180
kube# [ 0.275825] #10
kube# [ 0.056028] kvm-clock: cpu 10, msr 3c75f281, secondary cpu clock
kube# [ 0.277181] KVM setup async PF for cpu 10
kube# [ 0.277725] kvm-stealtime: cpu 10, msr 7d296180
kube# [ 0.277825] #11
kube# [ 0.056028] kvm-clock: cpu 11, msr 3c75f2c1, secondary cpu clock
kube# [ 0.279101] KVM setup async PF for cpu 11
kube# [ 0.279716] kvm-stealtime: cpu 11, msr 7d2d6180
kube# [ 0.279830] #12
kube# [ 0.056028] kvm-clock: cpu 12, msr 3c75f301, secondary cpu clock
kube# [ 0.281013] KVM setup async PF for cpu 12
kube# [ 0.281707] kvm-stealtime: cpu 12, msr 7d316180
kube# [ 0.281823] #13
kube# [ 0.056028] kvm-clock: cpu 13, msr 3c75f341, secondary cpu clock
kube# [ 0.282905] KVM setup async PF for cpu 13
kube# [ 0.283707] kvm-stealtime: cpu 13, msr 7d356180
kube# [ 0.283823] #14
kube# [ 0.056028] kvm-clock: cpu 14, msr 3c75f381, secondary cpu clock
kube# [ 0.284799] KVM setup async PF for cpu 14
kube# [ 0.285740] kvm-stealtime: cpu 14, msr 7d396180
kube# [ 0.285823] #15
kube# [ 0.056028] kvm-clock: cpu 15, msr 3c75f3c1, secondary cpu clock
kube# [ 0.286319] KVM setup async PF for cpu 15
kube# [ 0.286733] kvm-stealtime: cpu 15, msr 7d3d6180
kube# [ 0.287790] smp: Brought up 1 node, 16 CPUs
kube# [ 0.288191] smpboot: Max logical packages: 16
kube# [ 0.288786] smpboot: Total of 16 processors activated (111999.93 BogoMIPS)
kube# [ 0.289998] devtmpfs: initialized
kube# [ 0.290817] x86/mm: Memory block size: 128MB
kube# [ 0.291337] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
kube# [ 0.291793] futex hash table entries: 4096 (order: 6, 262144 bytes)
kube# [ 0.292899] pinctrl core: initialized pinctrl subsystem
kube# [ 0.293840] NET: Registered protocol family 16
kube# [ 0.294298] audit: initializing netlink subsys (disabled)
kube# [ 0.294800] audit: type=2000 audit(1580088706.841:1): state=initialized audit_enabled=0 res=1
kube# [ 0.295610] cpuidle: using governor menu
kube# [ 0.295964] ACPI: bus type PCI registered
kube# [ 0.296347] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
kube# [ 0.296841] PCI: Using configuration type 1 for base access
kube# [ 0.298181] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
kube# [ 0.299025] ACPI: Added _OSI(Module Device)
kube# [ 0.299786] ACPI: Added _OSI(Processor Device)
kube# [ 0.300200] ACPI: Added _OSI(3.0 _SCP Extensions)
kube# [ 0.300786] ACPI: Added _OSI(Processor Aggregator Device)
kube# [ 0.301286] ACPI: Added _OSI(Linux-Dell-Video)
kube# [ 0.301786] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
kube# [ 0.302645] ACPI: 1 ACPI AML tables successfully acquired and loaded
kube# [ 0.303887] ACPI: Interpreter enabled
kube# [ 0.304245] ACPI: (supports S0 S3 S4 S5)
kube# [ 0.304787] ACPI: Using IOAPIC for interrupt routing
kube# [ 0.305253] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
kube# [ 0.305842] ACPI: Enabled 2 GPEs in block 00 to 0F
kube# [ 0.308036] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
kube# [ 0.308625] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
kube# [ 0.308788] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
kube# [ 0.309789] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
kube# [ 0.310843] acpiphp: Slot [3] registered
kube# [ 0.311253] acpiphp: Slot [4] registered
kube# [ 0.311805] acpiphp: Slot [5] registered
kube# [ 0.312188] acpiphp: Slot [6] registered
kube# [ 0.312578] acpiphp: Slot [7] registered
kube# [ 0.312803] acpiphp: Slot [8] registered
kube# [ 0.313187] acpiphp: Slot [9] registered
kube# [ 0.313577] acpiphp: Slot [10] registered
kube# [ 0.313804] acpiphp: Slot [11] registered
kube# [ 0.314204] acpiphp: Slot [12] registered
kube# [ 0.314803] acpiphp: Slot [13] registered
kube# [ 0.315192] acpiphp: Slot [14] registered
kube# [ 0.315589] acpiphp: Slot [15] registered
kube# [ 0.315803] acpiphp: Slot [16] registered
kube# [ 0.316193] acpiphp: Slot [17] registered
kube# [ 0.316803] acpiphp: Slot [18] registered
kube# [ 0.317191] acpiphp: Slot [19] registered
kube# [ 0.317587] acpiphp: Slot [20] registered
kube# [ 0.317803] acpiphp: Slot [21] registered
kube# [ 0.318193] acpiphp: Slot [22] registered
kube# [ 0.318803] acpiphp: Slot [23] registered
kube# [ 0.319191] acpiphp: Slot [24] registered
kube# [ 0.319585] acpiphp: Slot [25] registered
kube# [ 0.319803] acpiphp: Slot [26] registered
kube# [ 0.320191] acpiphp: Slot [27] registered
kube# [ 0.320803] acpiphp: Slot [28] registered
kube# [ 0.321191] acpiphp: Slot [29] registered
kube# [ 0.321586] acpiphp: Slot [30] registered
kube# [ 0.321803] acpiphp: Slot [31] registered
kube# [ 0.322180] PCI host bridge to bus 0000:00
kube# [ 0.322565] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
kube# [ 0.322786] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
kube# [ 0.323786] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
kube# [ 0.324479] pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
kube# [ 0.324786] pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
kube# [ 0.325786] pci_bus 0000:00: root bus resource [bus 00-ff]
kube# [ 0.330240] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
kube# [ 0.330786] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
kube# [ 0.331393] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
kube# [ 0.331786] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
kube# [ 0.336484] pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
kube# [ 0.336791] pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
kube# [ 0.411254] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
kube# [ 0.411838] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
kube# [ 0.412407] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
kube# [ 0.412831] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
kube# [ 0.413393] ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
kube# [ 0.414239] pci 0000:00:02.0: vgaarb: setting as boot VGA device
kube# [ 0.414357] pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
kube# [ 0.414791] pci 0000:00:02.0: vgaarb: bridge control possible
kube# [ 0.415325] vgaarb: loaded
kube# [ 0.415800] PCI: Using ACPI for IRQ routing
kube# [ 0.416382] NetLabel: Initializing
kube# [ 0.416712] NetLabel: domain hash size = 128
kube# [ 0.416786] NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO
kube# [ 0.417333] NetLabel: unlabeled traffic allowed by default
kube# [ 0.417822] HPET: 3 timers in total, 0 timers will be used for per-cpu timer
kube# [ 0.418798] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
kube# [ 0.419256] hpet0: 3 comparators, 64-bit 100.000000 MHz counter
kube# [ 0.423821] clocksource: Switched to clocksource kvm-clock
kube# [ 0.429217] VFS: Disk quotas dquot_6.6.0
kube# [ 0.429605] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
kube# [ 0.430308] AppArmor: AppArmor Filesystem Enabled
kube# [ 0.430758] pnp: PnP ACPI init
kube# [ 0.431242] pnp: PnP ACPI: found 6 devices
kube# [ 0.438044] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
kube# [ 0.438958] NET: Registered protocol family 2
kube# [ 0.439471] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes)
kube# [ 0.440217] TCP established hash table entries: 16384 (order: 5, 131072 bytes)
kube# [ 0.440919] TCP bind hash table entries: 16384 (order: 6, 262144 bytes)
kube# [ 0.441556] TCP: Hash tables configured (established 16384 bind 16384)
kube# [ 0.442192] UDP hash table entries: 1024 (order: 3, 32768 bytes)
kube# [ 0.442754] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes)
kube# [ 0.443395] NET: Registered protocol family 1
kube# [ 0.443832] pci 0000:00:01.0: PIIX3: Enabling Passive Release
kube# [ 0.444369] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
kube# [ 0.444930] pci 0000:00:01.0: Activating ISA DMA hang workarounds
kube# [ 0.454124] PCI Interrupt Link [LNKD] enabled at IRQ 11
kube# [ 0.463540] pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x6c3 took 17613 usecs
kube# [ 0.464269] pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
kube# [ 0.465130] Trying to unpack rootfs image as initramfs...
kube# [ 0.545238] Freeing initrd memory: 9800K
kube# [ 0.545729] Scanning for low memory corruption every 60 seconds
kube# [ 0.546696] Initialise system trusted keyrings
kube# [ 0.547190] workingset: timestamp_bits=40 max_order=19 bucket_order=0
kube# [ 0.548327] zbud: loaded
kube# [ 0.549362] Key type asymmetric registered
kube# [ 0.549751] Asymmetric key parser 'x509' registered
kube# [ 0.550223] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
kube# [ 0.550982] io scheduler noop registered
kube# [ 0.551375] io scheduler deadline registered
kube# [ 0.551782] io scheduler cfq registered (default)
kube# [ 0.552233] io scheduler mq-deadline registered
kube# [ 0.552652] io scheduler kyber registered
kube# [ 0.553495] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
kube# [ 0.577031] 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
kube# [ 0.579662] brd: module loaded
kube# [ 0.580557] mce: Using 10 MCE banks
kube# [ 0.580910] sched_clock: Marking stable (525870538, 55028847)->(586309977, -5410592)
kube# [ 0.582054] registered taskstats version 1
kube# [ 0.582446] Loading compiled-in X.509 certificates
kube# [ 0.582921] zswap: loaded using pool lzo/zbud
kube# [ 0.583663] AppArmor: AppArmor sha1 policy hashing enabled
kube# [ 0.585829] Freeing unused kernel image memory: 1448K
kube# [ 0.594795] Write protecting the kernel read-only data: 14336k
kube# [ 0.595973] Freeing unused kernel image memory: 2012K
kube# [ 0.596522] Freeing unused kernel image memory: 144K
kube# [ 0.596995] Run /init as init process
kube#
kube# <<< NixOS Stage 1 >>>
kube#
kube# loading module virtio_balloon...
kube# loading module virtio_console...
kube# loading module virtio_rng...
kube# loading module dm_mod...
kube# [ 0.621803] device-mapper: ioctl: 4.39.0-ioctl (2018-04-03) initialised: dm-devel@redhat.com
kube# running udev...
kube# [ 0.625158] systemd-udevd[181]: Starting version 243
kube# [ 0.625978] systemd-udevd[182]: Network interface NamePolicy= disabled on kernel command line, ignoring.
kube# [ 0.627244] systemd-udevd[182]: /nix/store/936zacvhbd3zy281ghpdbrngwxc9h89s-udev-rules/11-dm-lvm.rules:40 Invalid value for OPTIONS key, ignoring: 'event_timeout=180'
kube# [ 0.628626] systemd-udevd[182]: /nix/store/936zacvhbd3zy281ghpdbrngwxc9h89s-udev-rules/11-dm-lvm.rules:40 The line takes no effect, ignoring.
kube# [ 0.639998] rtc_cmos 00:00: RTC can wake from S4
kube# [ 0.641003] rtc_cmos 00:00: registered as rtc0
kube# [ 0.641558] rtc_cmos 00:00: alarms up to one day, y3k, 114 bytes nvram, hpet irqs
kube# [ 0.643065] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
kube# [ 0.644491] serio: i8042 KBD port at 0x60,0x64 irq 1
kube# [ 0.644968] serio: i8042 AUX port at 0x60,0x64 irq 12
kube# [ 0.649326] SCSI subsystem initialized
kube# [ 0.650473] ACPI: bus type USB registered
kube# [ 0.650908] usbcore: registered new interface driver usbfs
kube# [ 0.651424] usbcore: registered new interface driver hub
kube# [ 0.651590] PCI Interrupt Link [LNKC] enabled at IRQ 10
kube# [ 0.652058] usbcore: registered new device driver usb
kube# [ 0.656878] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
kube# [ 0.657934] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
kube# [ 0.660226] uhci_hcd: USB Universal Host Controller Interface driver
kube# [ 0.662662] scsi host0: ata_piix
kube# [ 0.663100] scsi host1: ata_piix
kube# [ 0.663443] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1c0 irq 14
kube# [ 0.664107] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1c8 irq 15
kube# [ 0.671222] uhci_hcd 0000:00:01.2: UHCI Host Controller
kube# [ 0.672056] uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
kube# [ 0.672848] uhci_hcd 0000:00:01.2: detected 2 ports
kube# [ 0.673350] uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c0c0
kube# [ 0.673881] random: fast init done
kube# [ 0.674085] usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 4.19
kube# [ 0.674439] random: crng init done
kube# [ 0.675246] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
kube# [ 0.675248] usb usb1: Product: UHCI Host Controller
kube# [ 0.677186] usb usb1: Manufacturer: Linux 4.19.95 uhci_hcd
kube# [ 0.677697] usb usb1: SerialNumber: 0000:00:01.2
kube# [ 0.678222] hub 1-0:1.0: USB hub found
kube# [ 0.678584] hub 1-0:1.0: 2 ports detected
kube# [ 0.682552] PCI Interrupt Link [LNKA] enabled at IRQ 10
kube# [ 0.692316] PCI Interrupt Link [LNKB] enabled at IRQ 11
kube# [ 0.748890] 9pnet: Installing 9P2000 support
kube# [ 0.752647] virtio_blk virtio8: [vda] 8388608 512-byte logical blocks (4.29 GB/4.00 GiB)
kube# [ 0.817507] ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
kube# [ 0.818934] scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
kube# [ 0.842306] sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
kube# [ 0.843001] cdrom: Uniform CD-ROM driver Revision: 3.20
kube# [ 1.002826] usb 1-1: new full-speed USB device number 2 using uhci_hcd
kube# kbd_mode: KDSKBMODE: Inappropriate ioctl for device
kube# starting device mapper and LVM...
kube# [ 1.111626] clocksource: Switched to clocksource acpi_pm
kube# mke2fs 1.45.3 (14-Jul-2019)
kube# Creating filesystem with 1048576 4k blocks and 262144 inodes
kube# Filesystem UUID: 7bebe9b8-fa3d-4594-8da3-cbe53d432bff
kube# Superblock backups stored on blocks:
kube# 32768, 98304, 163840, 229376, 294912, 819200, 884736
kube#
kube# Allocating group tables: 0/32 done
kube# Writing inode tables: 0/32 done
kube# Creating journal (16384 blocks): done
kube# Writing superblocks and filesystem accounting information: 0/32
kube# [ 1.172268] usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
kube# [ 1.173114] usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
kube# [ 1.173877] usb 1-1: Product: QEMU USB Tablet
kube# [ 1.174328] usb 1-1: Manufacturer: QEMU
kube# [ 1.174709] usb 1-1: SerialNumber: 28754-0000:00:01.2-1
kube# [ 1.183254] hidraw: raw HID events driver (C) Jiri Kosina
kube# [ 1.189512] usbcore: registered new interface driver usbhid
kube# [ 1.190120] usbhid: USB HID core driver
kube# [ 1.191386] input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
kube# [ 1.192610] hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
kube# done
kube#
kube# checking /dev/vda...
kube# fsck (busybox 1.30.1)
kube# [fsck.ext4 (1) -- /mnt-root/] fsck.ext4 -a /dev/vda
kube# /dev/vda: clean, 11/262144 files, 36942/1048576 blocks
kube# mounting /dev/vda on /...
kube# [ 1.296625] EXT4-fs (vda): mounted filesystem with ordered data mode. Opts: (null)
kube# mounting store on /nix/.ro-store...
kube# [ 1.308676] FS-Cache: Loaded
kube# [ 1.311557] 9p: Installing v9fs 9p2000 file system support
kube# [ 1.312256] FS-Cache: Netfs '9p' registered for caching
kube# mounting tmpfs on /nix/.rw-store...
kube# mounting shared on /tmp/shared...
kube# mounting xchg on /tmp/xchg...
kube# mounting overlay filesystem on /nix/store...
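Stage 1 above mounts the host's read-only store over 9p, a tmpfs for writes, and an overlay so the guest sees a writable /nix/store. A hedged sketch of the equivalent fileSystems entries from the qemu-vm module (option values are assumptions from memory, not taken from this log):

  {
    fileSystems."/nix/.ro-store" = {
      device = "store";              # the 9p share tag named in the log
      fsType = "9p";
      options = [ "trans=virtio" "version=9p2000.L" ];
    };
    fileSystems."/nix/.rw-store" = {
      device = "tmpfs";
      fsType = "tmpfs";
    };
    fileSystems."/nix/store" = {
      device = "overlay";
      fsType = "overlay";
      options = [
        "lowerdir=/nix/.ro-store"
        "upperdir=/nix/.rw-store/store"
        "workdir=/nix/.rw-store/work"
      ];
    };
  }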
kube#
kube# <<< NixOS Stage 2 >>>
kube#
kube# [ 1.461253] EXT4-fs (vda): re-mounted. Opts: (null)
kube# [ 1.462378] booting system configuration /nix/store/6s71ag4g9kx14hql5snisc48a3l5yj3w-nixos-system-kube-19.09.1861.eb65d1dae62
kube# running activation script...
kube# setting up /etc...
kube# starting systemd...
kube# [ 2.600424] systemd[1]: Inserted module 'autofs4'
kube# [ 2.623752] NET: Registered protocol family 10
kube# [ 2.624628] Segment Routing with IPv6
kube# [ 2.635719] systemd[1]: systemd 243 running in system mode. (+PAM +AUDIT -SELINUX +IMA +APPARMOR +SMACK -SYSVINIT +UTMP -LIBCRYPTSETUP +GCRYPT -GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID -ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
kube# [ 2.637894] systemd[1]: Detected virtualization kvm.
kube# [ 2.638417] systemd[1]: Detected architecture x86-64.
kube# [ 2.645605] systemd[1]: Set hostname to <kube>.
kube# [ 2.647691] systemd[1]: Initializing machine ID from random generator.
kube# [ 2.687225] systemd-fstab-generator[617]: Checking was requested for "store", but it is not a device.
kube# [ 2.689835] systemd-fstab-generator[617]: Checking was requested for "shared", but it is not a device.
kube# [ 2.691271] systemd-fstab-generator[617]: Checking was requested for "xchg", but it is not a device.
kube# [ 2.916350] systemd[1]: /nix/store/0vscs3kafrn5z3g1bwdgabsdnii8kszz-unit-cfssl.service/cfssl.service:16: StateDirectory= path is absolute, ignoring: /var/lib/cfssl
kube# [ 2.929086] systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
kube# [ 2.930733] systemd[1]: Created slice kubernetes.slice.
kube# [ 2.931937] systemd[1]: Created slice system-getty.slice.
kube# [ 2.932874] systemd[1]: Created slice User and Session Slice.
kube# [ 2.974622] EXT4-fs (vda): re-mounted. Opts: (null)
kube# [ 2.978502] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
kube# [ 2.990573] tun: Universal TUN/TAP device driver, 1.6
kube# [ 2.997396] loop: module loaded
kube# [ 3.001705] Bridge firewalling registered
kube# [ 3.182444] audit: type=1325 audit(1580088709.069:2): table=filter family=2 entries=12
kube# [ 3.196233] audit: type=1325 audit(1580088709.076:3): table=filter family=10 entries=12
kube# [ 3.197132] audit: type=1300 audit(1580088709.076:3): arch=c000003e syscall=54 success=yes exit=0 a0=4 a1=29 a2=40 a3=1faffa0 items=0 ppid=639 pid=677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/nix/store/vvc9a2w2y1fg4xzf1rpxa8jwv5d4amh6-iptables-1.8.3/bin/xtables-legacy-multi" subj==unconfined key=(null)
kube# [ 3.200583] audit: type=1327 audit(1580088709.076:3): proctitle=6970367461626C6573002D77002D41006E69786F732D66772D6C6F672D726566757365002D7000746370002D2D73796E002D6A004C4F47002D2D6C6F672D6C6576656C00696E666F002D2D6C6F672D707265666978007265667573656420636F6E6E656374696F6E3A20
kube# [ 3.215451] audit: type=1325 audit(1580088709.102:4): table=filter family=2 entries=13
kube# [ 3.216405] audit: type=1300 audit(1580088709.102:4): arch=c000003e syscall=54 success=yes exit=0 a0=4 a1=0 a2=40 a3=1b7f850 items=0 ppid=639 pid=679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/nix/store/vvc9a2w2y1fg4xzf1rpxa8jwv5d4amh6-iptables-1.8.3/bin/xtables-legacy-multi" subj==unconfined key=(null)
kube# [ 3.220598] audit: type=1327 audit(1580088709.102:4): proctitle=69707461626C6573002D77002D41006E69786F732D66772D6C6F672D726566757365002D6D00706B74747970650000002D2D706B742D7479706500756E6963617374002D6A006E69786F732D66772D726566757365
kube# [ 3.223092] audit: type=1130 audit(1580088709.103:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj==unconfined msg='unit=systemd-journald comm="systemd" exe="/nix/store/lqhv9pl3cp8vcgfq0w2ms5l3pg7a6ga3-systemd-243.3/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
kube# [ 3.171528] systemd-modules-load[629]: Failed to find module 'gcov-proc'
kube# [ 3.173638] systemd-modules-load[629]: Inserted module 'bridge'
kube# [ 3.174915] systemd-modules-load[629]: Inserted module 'macvlan'
kube# [ 3.176110] systemd-modules-load[629]: Inserted module 'tap'
kube# [ 3.177246] systemd-modules-load[629]: Inserted module 'tun'
kube# [ 3.178342] systemd-modules-load[629]: Inserted module 'loop'
kube# [ 3.233949] audit: type=1325 audit(1580088709.120:6): table=filter family=10 entries=13
kube# [ 3.179476] systemd-modules-load[629]: Inserted module 'br_netfilter'
kube# [ 3.181591] systemd-udevd[636]: Network interface NamePolicy= disabled on kernel command line, ignoring.
kube# [ 3.183166] systemd[1]: Starting Flush Journal to Persistent Storage...
kube# [ 3.184606] systemd-udevd[636]: /nix/store/8w316wmy13r2yblac0lj188704pyimxp-udev-rules/11-dm-lvm.rules:40 Invalid value for OPTIONS key, ignoring: 'event_timeout=180'
kube# [ 3.186688] systemd-udevd[636]: /nix/store/8w316wmy13r2yblac0lj188704pyimxp-udev-rules/11-dm-lvm.rules:40 The line takes no effect, ignoring.
kube# [ 3.250732] systemd-journald[628]: Received client request to flush runtime journal.
kube# [ 3.241817] systemd[1]: Started udev Kernel Device Manager.
kube# [ 3.243724] systemd[1]: Started Flush Journal to Persistent Storage.
kube# [ 3.245439] systemd[1]: Starting Create Volatile Files and Directories...
kube# [ 3.309172] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
kube# [ 3.312079] ACPI: Power Button [PWRF]
kube# [ 3.266782] systemd[1]: Started Create Volatile Files and Directories.
kube# [ 3.268353] systemd[1]: Starting Rebuild Journal Catalog...
kube# [ 3.269681] systemd[1]: Starting Update UTMP about System Boot/Shutdown...
kube# [ 3.329690] Linux agpgart interface v0.103
kube# [ 3.288964] systemd[1]: Started Update UTMP about System Boot/Shutdown.
kube# [ 3.300546] systemd[1]: Started Rebuild Journal Catalog.
kube# [ 3.301692] systemd[1]: Starting Update is Completed...
kube# [ 3.312842] systemd[1]: Started Update is Completed.
kube# [ 3.397975] parport_pc 00:04: reported by Plug and Play ACPI
kube# [ 3.398656] parport0: PC-style at 0x378, irq 7 [PCSPP(,...)]
kube# [ 3.425170] Floppy drive(s): fd0 is 2.88M AMI BIOS
kube# [ 3.438381] FDC 0 is a S82078B
kube# [ 3.488671] piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
kube# [ 3.450565] systemd-udevd[692]: Using default interface naming scheme 'v243'.
kube# [ 3.454172] systemd-udevd[692]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
kube# [ 3.533352] mousedev: PS/2 mouse device common for all mice
kube# [ 3.491545] systemd-udevd[706]: Using default interface naming scheme 'v243'.
kube# [ 3.492988] systemd-udevd[706]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
kube# [ 3.513137] systemd[1]: Found device Virtio network device.
kube# [ 3.570935] [drm] Found bochs VGA, ID 0xb0c0.
kube# [ 3.572018] [drm] Framebuffer size 16384 kB @ 0xfd000000, mmio @ 0xfebd0000.
kube# [ 3.574283] [TTM] Zone kernel: Available graphics memory: 1021090 kiB
kube# [ 3.575899] [TTM] Initializing pool allocator
kube# [ 3.577103] [TTM] Initializing DMA pool allocator
kube# [ 3.549549] systemd[1]: Found device /dev/ttyS0.
kube# [ 3.622815] fbcon: bochsdrmfb (fb0) is primary device
kube# [ 3.637723] powernow_k8: Power state transitions not supported
kube# [ 3.637738] powernow_k8: Power state transitions not supported
kube# [ 3.637754] powernow_k8: Power state transitions not supported
kube# [ 3.637777] powernow_k8: Power state transitions not supported
kube# [ 3.637811] powernow_k8: Power state transitions not supported
kube# [ 3.637829] powernow_k8: Power state transitions not supported
kube# [ 3.637841] powernow_k8: Power state transitions not supported
kube# [ 3.637847] powernow_k8: Power state transitions not supported
kube# [ 3.637859] powernow_k8: Power state transitions not supported
kube# [ 3.637869] powernow_k8: Power state transitions not supported
kube# [ 3.637882] powernow_k8: Power state transitions not supported
kube# [ 3.637894] powernow_k8: Power state transitions not supported
kube# [ 3.637897] powernow_k8: Power state transitions not supported
kube# [ 3.637914] powernow_k8: Power state transitions not supported
kube# [ 3.637932] powernow_k8: Power state transitions not supported
kube# [ 3.637940] powernow_k8: Power state transitions not supported
kube# [ 3.710493] Console: switching to colour frame buffer device 128x48
kube# [ 3.804562] bochs-drm 0000:00:02.0: fb0: bochsdrmfb frame buffer device
kube# [ 3.810807] [drm] Initialized bochs-drm 1.0.0 20130925 for 0000:00:02.0 on minor 0
kube# [ 3.840408] powernow_k8: Power state transitions not supported
kube# [ 3.841152] powernow_k8: Power state transitions not supported
kube# [ 3.842042] powernow_k8: Power state transitions not supported
kube# [ 3.842672] powernow_k8: Power state transitions not supported
kube# [ 3.843418] powernow_k8: Power state transitions not supported
kube# [ 3.844055] powernow_k8: Power state transitions not supported
kube# [ 3.844695] powernow_k8: Power state transitions not supported
kube# [ 3.845364] powernow_k8: Power state transitions not supported
kube# [ 3.846025] powernow_k8: Power state transitions not supported
kube# [ 3.846707] powernow_k8: Power state transitions not supported
kube# [ 3.847512] powernow_k8: Power state transitions not supported
kube# [ 3.848386] powernow_k8: Power state transitions not supported
kube# [ 3.849027] powernow_k8: Power state transitions not supported
kube# [ 3.849906] powernow_k8: Power state transitions not supported
kube# [ 3.850597] powernow_k8: Power state transitions not supported
kube# [ 3.851226] powernow_k8: Power state transitions not supported
kube# [ 3.853117] EDAC MC: Ver: 3.0.0
kube# [ 3.855766] MCE: In-kernel MCE decoding enabled.
kube# [ 3.846189] systemd[1]: Started Firewall.
kube# [ 3.903470] powernow_k8: Power state transitions not supported
kube# [ 3.904412] powernow_k8: Power state transitions not supported
kube# [ 3.905041] powernow_k8: Power state transitions not supported
kube# [ 3.905690] powernow_k8: Power state transitions not supported
kube# [ 3.906312] powernow_k8: Power state transitions not supported
kube# [ 3.906935] powernow_k8: Power state transitions not supported
kube# [ 3.907567] powernow_k8: Power state transitions not supported
kube# [ 3.908272] powernow_k8: Power state transitions not supported
kube# [ 3.908893] powernow_k8: Power state transitions not supported
kube# [ 3.909516] powernow_k8: Power state transitions not supported
kube# [ 3.910137] powernow_k8: Power state transitions not supported
kube# [ 3.910785] powernow_k8: Power state transitions not supported
kube# [ 3.911422] powernow_k8: Power state transitions not supported
kube# [ 3.912174] powernow_k8: Power state transitions not supported
kube# [ 3.912821] powernow_k8: Power state transitions not supported
kube# [ 3.913414] powernow_k8: Power state transitions not supported
kube# [ 3.944282] powernow_k8: Power state transitions not supported
kube# [ 3.945144] powernow_k8: Power state transitions not supported
kube# [ 3.945821] powernow_k8: Power state transitions not supported
kube# [ 3.946592] powernow_k8: Power state transitions not supported
kube# [ 3.947356] powernow_k8: Power state transitions not supported
kube# [ 3.948000] powernow_k8: Power state transitions not supported
kube# [ 3.948619] powernow_k8: Power state transitions not supported
kube# [ 3.949334] powernow_k8: Power state transitions not supported
kube# [ 3.949967] powernow_k8: Power state transitions not supported
kube# [ 3.950571] powernow_k8: Power state transitions not supported
kube# [ 3.951208] powernow_k8: Power state transitions not supported
kube# [ 3.951874] powernow_k8: Power state transitions not supported
kube# [ 3.952526] powernow_k8: Power state transitions not supported
kube# [ 3.953196] powernow_k8: Power state transitions not supported
kube# [ 3.953846] powernow_k8: Power state transitions not supported
kube# [ 3.954421] powernow_k8: Power state transitions not supported
kube# [ 3.993716] powernow_k8: Power state transitions not supported
kube# [ 3.994413] powernow_k8: Power state transitions not supported
kube# [ 3.995034] powernow_k8: Power state transitions not supported
kube# [ 3.995674] powernow_k8: Power state transitions not supported
kube# [ 3.996277] powernow_k8: Power state transitions not supported
kube# [ 3.996918] powernow_k8: Power state transitions not supported
kube# [ 3.997531] powernow_k8: Power state transitions not supported
kube# [ 3.998287] powernow_k8: Power state transitions not supported
kube# [ 3.998958] powernow_k8: Power state transitions not supported
kube# [ 3.999638] powernow_k8: Power state transitions not supported
kube# [ 4.000262] powernow_k8: Power state transitions not supported
kube# [ 4.000907] powernow_k8: Power state transitions not supported
kube# [ 4.001569] powernow_k8: Power state transitions not supported
[67 identical powernow_k8 lines elided, timestamps 4.002206–4.203045]
kube# [ 4.156129] systemd-udevd[698]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
kube# [ 4.164268] systemd[1]: Found device /dev/hvc0.
kube# [ 4.231294] powernow_k8: Power state transitions not supported
[15 identical powernow_k8 lines elided, timestamps 4.232156–4.243683]
kube# [ 4.247400] ppdev: user-space parallel port driver
kube# [ 4.251306] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
kube# [ 4.208029] udevadm[638]: systemd-udev-settle.service is deprecated.
kube# [ 4.276184] powernow_k8: Power state transitions not supported
[15 identical powernow_k8 lines elided, timestamps 4.277483–4.294898]
kube# [ 4.477681] powernow_k8: Power state transitions not supported
[79 identical powernow_k8 lines elided, timestamps 4.478793–4.641003]
kube# [ 4.631672] systemd[1]: Started udev Wait for Complete Device Initialization.
kube# [ 4.632709] systemd[1]: Reached target System Initialization.
kube# [ 4.633399] systemd[1]: Started Daily Cleanup of Temporary Directories.
kube# [ 4.634165] systemd[1]: Reached target Timers.
kube# [ 4.634804] systemd[1]: Listening on D-Bus System Message Bus Socket.
kube# [ 4.635752] systemd[1]: Starting Docker Socket for the API.
kube# [ 4.636697] systemd[1]: Listening on Nix Daemon Socket.
kube# [ 4.637830] systemd[1]: Listening on Docker Socket for the API.
kube# [ 4.638534] systemd[1]: Reached target Sockets.
kube# [ 4.639079] systemd[1]: Reached target Basic System.
kube# [ 4.639868] systemd[1]: Starting Kernel Auditing...
kube# [ 4.640898] systemd[1]: Started backdoor.service.
kube# [ 4.642132] systemd[1]: Starting DHCP Client...
kube# [ 4.643850] systemd[1]: Started Kubernetes certmgr bootstrapper.
kube# [ 4.645313] systemd[1]: Starting Name Service Cache Daemon...
kube# [ 4.646735] systemd[1]: Starting resolvconf update...
kube# connecting to host...
kube# [ 4.656069] nscd[790]: 790 monitoring file `/etc/passwd` (1)
kube# [ 4.656607] s2mx5lspihy60gwff025p4dbs08yzgyi-unit-script-kube-certmgr-bootstrap-start[785]: touch: cannot touch '/var/lib/kubernetes/secrets/apitoken.secret': No such file or directory
kube# [ 4.656984] nscd[790]: 790 monitoring directory `/etc` (2)
kube# [ 4.657372] nscd[790]: 790 monitoring file `/etc/group` (3)
kube# [ 4.657621] nscd[790]: 790 monitoring directory `/etc` (2)
kube# [ 4.659562] nscd[790]: 790 monitoring file `/etc/hosts` (4)
kube# [ 4.660002] nscd[790]: 790 monitoring directory `/etc` (2)
kube# [ 4.660456] nscd[790]: 790 disabled inotify-based monitoring for file `/etc/resolv.conf': No such file or directory
kube# [ 4.660702] nscd[790]: 790 stat failed for file `/etc/resolv.conf'; will try again later: No such file or directory
kube# [ 4.663024] nscd[790]: 790 monitoring file `/etc/services` (5)
kube# [ 4.663289] nscd[790]: 790 monitoring directory `/etc` (2)
kube# [ 4.663540] nscd[790]: 790 disabled inotify-based monitoring for file `/etc/netgroup': No such file or directory
kube# [ 4.663939] nscd[790]: 790 stat failed for file `/etc/netgroup'; will try again later: No such file or directory
kube# [ 4.666218] s2mx5lspihy60gwff025p4dbs08yzgyi-unit-script-kube-certmgr-bootstrap-start[785]: /nix/store/s2mx5lspihy60gwff025p4dbs08yzgyi-unit-script-kube-certmgr-bootstrap-start: line 16: /var/lib/kubernetes/secrets/ca.pem: No such file or directory
kube# [ 4.669419] 3j5xawpr21sl93gg17ng2xhw943msvhn-audit-disable[782]: No rules
kube# [ 4.671449] dhcpcd[784]: dev: loaded udev
kube# [ 4.676105] systemd[1]: Started Kernel Auditing.
kube# [ 4.737141] 8021q: 802.1Q VLAN Support v1.8
kube# [ 4.697722] systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
kube# [ 4.699323] systemd[1]: Started D-Bus System Message Bus.
kube# [ 4.711442] s2mx5lspihy60gwff025p4dbs08yzgyi-unit-script-kube-certmgr-bootstrap-start[785]: % Total % Received % Xferd Average Speed Time Time Time Current
kube# [ 4.711665] s2mx5lspihy60gwff025p4dbs08yzgyi-unit-script-kube-certmgr-bootstrap-start[785]: Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (7) Couldn't connect to server
kube: connected to guest root shell
kube# [ 4.778504] cfg80211: Loading compiled-in X.509 certificates for regulatory database
kube# sh: cannot set terminal process group (-1): Inappropriate ioctl for device
kube# sh: no job control in this shell
kube# [ 4.792968] cfg80211: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
kube: (connecting took 5.35 seconds)
(5.35 seconds)
kube# [ 4.795401] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
kube# [ 4.797653] cfg80211: failed to load regulatory.db
kube# [ 4.747113] dbus-daemon[821]: dbus[821]: Unknown username "systemd-timesync" in message bus configuration file
kube# [ 4.783448] systemd[1]: kube-certmgr-bootstrap.service: Main process exited, code=exited, status=1/FAILURE
kube# [ 4.783874] systemd[1]: kube-certmgr-bootstrap.service: Failed with result 'exit-code'.
kube# [ 4.786421] systemd[1]: nscd.service: Succeeded.
kube# [ 4.786655] systemd[1]: Stopped Name Service Cache Daemon.
kube# [ 4.787992] systemd[1]: Starting Name Service Cache Daemon...
kube# [ 4.795820] nscd[852]: 852 monitoring file `/etc/passwd` (1)
kube# [ 4.796169] nscd[852]: 852 monitoring directory `/etc` (2)
kube# [ 4.796587] nscd[852]: 852 monitoring file `/etc/group` (3)
kube# [ 4.797110] nscd[852]: 852 monitoring directory `/etc` (2)
kube# [ 4.797459] nscd[852]: 852 monitoring file `/etc/hosts` (4)
kube# [ 4.797717] nscd[852]: 852 monitoring directory `/etc` (2)
kube# [ 4.798180] nscd[852]: 852 monitoring file `/etc/resolv.conf` (5)
kube# [ 4.798550] nscd[852]: 852 monitoring directory `/etc` (2)
kube# [ 4.798930] nscd[852]: 852 monitoring file `/etc/services` (6)
kube# [ 4.799292] nscd[852]: 852 monitoring directory `/etc` (2)
kube# [ 4.801698] nscd[852]: 852 disabled inotify-based monitoring for file `/etc/netgroup': No such file or directory
kube# [ 4.801999] nscd[852]: 852 stat failed for file `/etc/netgroup'; will try again later: No such file or directory
kube# [ 4.807898] systemd[1]: Started resolvconf update.
kube# [ 4.808257] systemd[1]: Reached target Network (Pre).
kube# [ 4.810117] systemd[1]: Starting Address configuration of eth1...
kube# [ 4.811847] systemd[1]: Starting Link configuration of eth1...
kube# [ 4.812351] systemd[1]: Started Name Service Cache Daemon.
kube# [ 4.812807] systemd[1]: Reached target Host and Network Name Lookups.
kube# [ 4.813255] systemd[1]: Reached target User and Group Name Lookups.
kube# [ 4.815275] systemd[1]: Starting Login Service...
kube# [ 4.820599] hyzgkj4862kyjdfrp1qq8vmmrm85zlm6-unit-script-network-link-eth1-start[865]: Configuring link...
kube# [ 4.834334] mn1g2a6qvkb8wddqmf7bgnb00q634fh2-unit-script-network-addresses-eth1-start[864]: adding address 192.168.1.1/24... done
kube# [ 4.890719] 8021q: adding VLAN 0 to HW filter on device eth1
kube# [ 4.837657] hyzgkj4862kyjdfrp1qq8vmmrm85zlm6-unit-script-network-link-eth1-start[865]: bringing up interface... done
kube# [ 4.839598] systemd[1]: Started Link configuration of eth1.
kube# [ 4.839964] systemd[1]: Reached target All Network Interfaces (deprecated).
kube# [ 4.845240] systemd[1]: Started Address configuration of eth1.
kube# [ 4.846433] systemd[1]: Starting Networking Setup...
kube# [ 4.895399] nscd[852]: 852 monitored file `/etc/resolv.conf` was written to
kube# [ 4.906406] systemd[1]: Stopping Name Service Cache Daemon...
kube# [ 4.917594] systemd[1]: Started Networking Setup.
kube# [ 4.918990] systemd[1]: Starting Extra networking commands....
kube# [ 4.920015] systemd[1]: nscd.service: Succeeded.
kube# [ 4.920369] systemd[1]: Stopped Name Service Cache Daemon.
kube# [ 4.922161] systemd[1]: Starting Name Service Cache Daemon...
kube# [ 4.926863] systemd[1]: Started Extra networking commands..
kube# [ 4.927073] systemd[1]: Reached target Network.
kube# [ 4.928456] systemd[1]: Starting CFSSL CA API server...
kube# [ 4.929730] systemd[1]: Starting etcd key-value store...
kube# [ 4.931207] systemd[1]: Started Kubernetes APIServer Service.
kube# [ 4.932947] nscd[923]: 923 monitoring file `/etc/passwd` (1)
kube# [ 4.933372] systemd[1]: Starting Kubernetes addon manager...
kube# [ 4.933705] nscd[923]: 923 monitoring directory `/etc` (2)
kube# [ 4.934093] nscd[923]: 923 monitoring file `/etc/group` (3)
kube# [ 4.934353] nscd[923]: 923 monitoring directory `/etc` (2)
kube# [ 4.934642] nscd[923]: 923 monitoring file `/etc/hosts` (4)
kube# [ 4.935101] nscd[923]: 923 monitoring directory `/etc` (2)
kube# [ 4.935418] nscd[923]: 923 monitoring file `/etc/resolv.conf` (5)
kube# [ 4.935701] nscd[923]: 923 monitoring directory `/etc` (2)
kube# [ 4.936056] nscd[923]: 923 monitoring file `/etc/services` (6)
kube# [ 4.936355] nscd[923]: 923 monitoring directory `/etc` (2)
kube# [ 4.938241] nscd[923]: 923 disabled inotify-based monitoring for file `/etc/netgroup': No such file or directory
kube# [ 4.938509] nscd[923]: 923 stat failed for file `/etc/netgroup'; will try again later: No such file or directory
kube# [ 4.941932] systemd[1]: Started Kubernetes Controller Manager Service.
kube# [ 4.942342] systemd[1]: Started Kubernetes Proxy Service.
kube# [ 4.944713] systemd[1]: Started Kubernetes Scheduler Service.
kube# [ 4.946691] systemd[1]: Starting Permit User Sessions...
kube# [ 4.952822] systemd[1]: Started Name Service Cache Daemon.
kube# [ 4.966912] systemd[1]: Started Permit User Sessions.
kube# [ 4.968963] systemd[1]: Started Getty on tty1.
kube# [ 4.969198] systemd[1]: Reached target Login Prompts.
kube# [ 5.054057] systemd[866]: systemd-logind.service: Executable /sbin/modprobe missing, skipping: No such file or directory
kube# [ 5.247165] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[922]: 2020/01/27 01:31:51 [INFO] generate received request
kube# [ 5.247424] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[922]: 2020/01/27 01:31:51 [INFO] received CSR
kube# [ 5.247820] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[922]: 2020/01/27 01:31:51 [INFO] generating key: rsa-2048
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube# [ 5.290280] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[926]: Error in configuration:
kube# [ 5.290696] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[926]: * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# [ 5.291115] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[926]: * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# [ 5.291328] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[926]: * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube# [ 5.302640] systemd[1]: kube-addon-manager.service: Control process exited, code=exited, status=1/FAILURE
kube# [ 5.303029] systemd[1]: kube-addon-manager.service: Failed with result 'exit-code'.
kube# [ 5.303367] systemd[1]: Failed to start Kubernetes addon manager.
kube: exit status 1
(5.92 seconds)
kube# [ 5.313072] systemd-logind[959]: New seat seat0.
kube# [ 5.314984] systemd-logind[959]: Watching system buttons on /dev/input/event2 (Power Button)
kube# [ 5.315270] systemd-logind[959]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
kube# [ 5.316430] systemd[1]: Started Login Service.
kube# [ 5.370092] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[922]: 2020/01/27 01:31:51 [INFO] encoded CSR
kube# [ 5.372825] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[922]: 2020/01/27 01:31:51 [INFO] signed certificate with serial number 193351744302464464584122014074867956729930605333
kube# [ 5.382312] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[922]: 2020/01/27 01:31:51 [INFO] generate received request
kube# [ 5.382628] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[922]: 2020/01/27 01:31:51 [INFO] received CSR
kube# [ 5.383009] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[922]: 2020/01/27 01:31:51 [INFO] generating key: rsa-2048
kube# [ 5.399650] etcd[924]: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd.local:2379
kube# [ 5.400068] etcd[924]: recognized and used environment variable ETCD_CERT_FILE=/var/lib/kubernetes/secrets/etcd.pem
kube# [ 5.400488] etcd[924]: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=1
kube# [ 5.400930] etcd[924]: recognized and used environment variable ETCD_DATA_DIR=/var/lib/etcd
kube# [ 5.401271] etcd[924]: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd.local:2380
kube# [ 5.401593] etcd[924]: recognized and used environment variable ETCD_INITIAL_CLUSTER=kube.my.xzy=https://etcd.local:2380
kube# [ 5.402030] etcd[924]: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=new
kube# [ 5.402352] etcd[924]: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
kube# [ 5.402659] etcd[924]: recognized and used environment variable ETCD_KEY_FILE=/var/lib/kubernetes/secrets/etcd-key.pem
kube# [ 5.403548] etcd[924]: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://127.0.0.1:2379
kube# [ 5.403995] etcd[924]: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://127.0.0.1:2380
kube# [ 5.404319] etcd[924]: recognized and used environment variable ETCD_NAME=kube.my.xzy
kube# [ 5.404630] etcd[924]: recognized and used environment variable ETCD_PEER_CERT_FILE=/var/lib/kubernetes/secrets/etcd.pem
kube# [ 5.405120] etcd[924]: recognized and used environment variable ETCD_PEER_KEY_FILE=/var/lib/kubernetes/secrets/etcd-key.pem
kube# [ 5.405533] etcd[924]: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/var/lib/kubernetes/secrets/ca.pem
kube# [ 5.405921] etcd[924]: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/var/lib/kubernetes/secrets/ca.pem
kube# [ 5.406167] etcd[924]: unrecognized environment variable ETCD_DISCOVERY=
kube# [ 5.406517] etcd[924]: etcd Version: 3.3.13
kube# [ 5.406890] etcd[924]: Git SHA: Not provided (use ./build instead of go build)
kube# [ 5.407164] etcd[924]: Go Version: go1.12.9
kube# [ 5.407484] etcd[924]: Go OS/Arch: linux/amd64
kube# [ 5.407807] etcd[924]: setting maximum number of CPUs to 16, total number of available CPUs is 16
kube# [ 5.408135] etcd[924]: failed to detect default host (could not find default route)
kube# [ 5.408464] etcd[924]: peerTLS: cert = /var/lib/kubernetes/secrets/etcd.pem, key = /var/lib/kubernetes/secrets/etcd-key.pem, ca = , trusted-ca = /var/lib/kubernetes/secrets/ca.pem, client-cert-auth = false, crl-file =
kube# [ 5.408843] etcd[924]: open /var/lib/kubernetes/secrets/etcd.pem: no such file or directory
kube# [ 5.419096] systemd[1]: etcd.service: Main process exited, code=exited, status=1/FAILURE
kube# [ 5.445484] systemd[1]: etcd.service: Failed with result 'exit-code'.
kube# [ 5.445717] systemd[1]: Failed to start etcd key-value store.
kube# [ 5.553548] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[922]: 2020/01/27 01:31:51 [INFO] encoded CSR
kube# [ 5.556268] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[922]: 2020/01/27 01:31:51 [INFO] signed certificate with serial number 574851683323024977228611978087759032837913493539
kube# [ 5.556478] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[922]: 2020/01/27 01:31:51 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
kube# [ 5.556712] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[922]: websites. For more information see the Baseline Requirements for the Issuance and Management
kube# [ 5.557122] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[922]: of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
kube# [ 5.557413] a0hdc8swj85g2w9kppyba63kln6l5b3k-unit-script-cfssl-pre-start[922]: specifically, section 10.2.3 ("Information Requirements").
kube# [ 5.579974] systemd[1]: Started CFSSL CA API server.
kube# [ 5.591393] cfssl[1033]: 2020/01/27 01:31:51 [INFO] Initializing signer
kube# [ 5.591638] cfssl[1033]: 2020/01/27 01:31:51 [WARNING] couldn't initialize ocsp signer: open : no such file or directory
kube# [ 5.592031] cfssl[1033]: 2020/01/27 01:31:51 [INFO] endpoint '/api/v1/cfssl/newcert' is enabled
kube# [ 5.592272] cfssl[1033]: 2020/01/27 01:31:51 [INFO] bundler API ready
kube# [ 5.592481] cfssl[1033]: 2020/01/27 01:31:51 [INFO] endpoint '/api/v1/cfssl/bundle' is enabled
kube# [ 5.592693] cfssl[1033]: 2020/01/27 01:31:51 [INFO] endpoint '/api/v1/cfssl/info' is enabled
kube# [ 5.593037] cfssl[1033]: 2020/01/27 01:31:51 [INFO] endpoint '/api/v1/cfssl/scaninfo' is enabled
kube# [ 5.593294] cfssl[1033]: 2020/01/27 01:31:51 [WARNING] endpoint '/' is disabled: could not locate box "static"
kube# [ 5.593489] cfssl[1033]: 2020/01/27 01:31:51 [INFO] endpoint '/api/v1/cfssl/scan' is enabled
kube# [ 5.593685] cfssl[1033]: 2020/01/27 01:31:51 [WARNING] endpoint 'revoke' is disabled: cert db not configured (missing -db-config)
kube# [ 5.594016] cfssl[1033]: 2020/01/27 01:31:51 [INFO] endpoint '/api/v1/cfssl/sign' is enabled
kube# [ 5.594216] cfssl[1033]: 2020/01/27 01:31:51 [INFO] endpoint '/api/v1/cfssl/authsign' is enabled
kube# [ 5.594597] cfssl[1033]: 2020/01/27 01:31:51 [INFO] endpoint '/api/v1/cfssl/gencrl' is enabled
kube# [ 5.594861] cfssl[1033]: 2020/01/27 01:31:51 [INFO] endpoint '/api/v1/cfssl/init_ca' is enabled
kube# [ 5.595199] cfssl[1033]: 2020/01/27 01:31:51 [WARNING] endpoint 'crl' is disabled: cert db not configured (missing -db-config)
kube# [ 5.595475] cfssl[1033]: 2020/01/27 01:31:51 [INFO] setting up key / CSR generator
kube# [ 5.595721] cfssl[1033]: 2020/01/27 01:31:51 [INFO] endpoint '/api/v1/cfssl/newkey' is enabled
kube# [ 5.596026] cfssl[1033]: 2020/01/27 01:31:51 [INFO] endpoint '/api/v1/cfssl/certinfo' is enabled
kube# [ 5.596223] cfssl[1033]: 2020/01/27 01:31:51 [WARNING] endpoint 'ocspsign' is disabled: signer not initialized
kube# [ 5.596416] cfssl[1033]: 2020/01/27 01:31:51 [INFO] Handler set up complete.
kube# [ 5.596602] cfssl[1033]: 2020/01/27 01:31:51 [INFO] Now listening on https://0.0.0.0:8888
kube# [ 5.656919] kube-proxy[928]: W0127 01:31:51.600081 928 server.go:216] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
kube# [ 5.674371] kube-proxy[928]: W0127 01:31:51.617805 928 proxier.go:500] Failed to read file /lib/modules/4.19.95/modules.builtin with error open /lib/modules/4.19.95/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 5.678723] kube-proxy[928]: W0127 01:31:51.622261 928 proxier.go:513] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 5.680424] kube-proxy[928]: W0127 01:31:51.623972 928 proxier.go:513] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 5.682195] kube-proxy[928]: W0127 01:31:51.625714 928 proxier.go:513] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 5.683969] kube-proxy[928]: W0127 01:31:51.627508 928 proxier.go:513] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 5.685702] kube-proxy[928]: W0127 01:31:51.629269 928 proxier.go:513] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 5.697118] kube-proxy[928]: F0127 01:31:51.640644 928 server.go:449] invalid configuration: [unable to read client-cert /var/lib/kubernetes/secrets/kube-proxy-client.pem for kube-proxy due to open /var/lib/kubernetes/secrets/kube-proxy-client.pem: no such file or directory, unable to read client-key /var/lib/kubernetes/secrets/kube-proxy-client-key.pem for kube-proxy due to open /var/lib/kubernetes/secrets/kube-proxy-client-key.pem: no such file or directory, unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory]
kube# [ 5.703060] systemd[1]: kube-proxy.service: Main process exited, code=exited, status=255/EXCEPTION
kube# [ 5.703283] systemd[1]: kube-proxy.service: Failed with result 'exit-code'.
kube# [ 5.855375] 8021q: adding VLAN 0 to HW filter on device eth0
kube# [ 5.802263] dhcpcd[784]: eth0: waiting for carrier
kube# [ 5.802571] dhcpcd[784]: eth0: carrier acquired
kube# [ 5.813417] dhcpcd[784]: DUID 00:01:00:01:25:c0:fa:07:52:54:00:12:34:56
kube# [ 5.813596] dhcpcd[784]: eth0: IAID 00:12:34:56
kube# [ 5.814047] dhcpcd[784]: eth0: adding address fe80::5054:ff:fe12:3456
kube# [ 6.039190] kube-apiserver[925]: Flag --insecure-bind-address has been deprecated, This flag will be removed in a future version.
kube# [ 6.039452] kube-apiserver[925]: Flag --insecure-port has been deprecated, This flag will be removed in a future version.
kube# [ 6.039823] kube-apiserver[925]: I0127 01:31:51.982487 925 server.go:560] external host was not specified, using 192.168.1.1
kube# [ 6.043275] kube-apiserver[925]: I0127 01:31:51.986806 925 server.go:147] Version: v1.15.6
kube# [ 6.045490] kube-apiserver[925]: Error: unable to load server certificate: open /var/lib/kubernetes/secrets/kube-apiserver.pem: no such file or directory
kube# [ 6.053321] kube-apiserver[925]: Usage:
kube# [ 6.053565] kube-apiserver[925]: kube-apiserver [flags]
kube# [ 6.053847] kube-apiserver[925]: Generic flags:
kube# [ 6.054063] kube-apiserver[925]: --advertise-address ip The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.
kube# [ 6.054250] kube-apiserver[925]: --cloud-provider-gce-lb-src-cidrs cidrs CIDRs opened in GCE firewall for LB traffic proxy & health checks (default 130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16)
kube# [ 6.054460] kube-apiserver[925]: --cors-allowed-origins strings List of allowed origins for CORS, comma separated. An allowed origin can be a regular expression to support subdomain matching. If this list is empty CORS will not be enabled.
kube# [ 6.054628] kube-apiserver[925]: --default-not-ready-toleration-seconds int Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration. (default 300)
kube# [ 6.055000] kube-apiserver[925]: --default-unreachable-toleration-seconds int Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration. (default 300)
kube# [ 6.055186] kube-apiserver[925]: --enable-inflight-quota-handler If true, replace the max-in-flight handler with an enhanced one that queues and dispatches with priority and fairness
kube# [ 6.055468] kube-apiserver[925]: --external-hostname string The hostname to use when generating externalized URLs for this master (e.g. Swagger API Docs).
kube# [ 6.055676] kube-apiserver[925]: --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
kube# [ 6.056023] kube-apiserver[925]: APIListChunking=true|false (BETA - default=true)
kube# [ 6.056201] kube-apiserver[925]: APIResponseCompression=true|false (ALPHA - default=false)
kube# [ 6.056408] kube-apiserver[925]: AllAlpha=true|false (ALPHA - default=false)
kube# [ 6.056645] kube-apiserver[925]: AppArmor=true|false (BETA - default=true)
kube# [ 6.056948] kube-apiserver[925]: AttachVolumeLimit=true|false (BETA - default=true)
kube# [ 6.057150] kube-apiserver[925]: BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
kube# [ 6.057348] kube-apiserver[925]: BlockVolume=true|false (BETA - default=true)
kube# [ 6.057542] kube-apiserver[925]: BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
kube# [ 6.057745] kube-apiserver[925]: CPUManager=true|false (BETA - default=true)
kube# [ 6.057979] kube-apiserver[925]: CRIContainerLogRotation=true|false (BETA - default=true)
kube# [ 6.058172] kube-apiserver[925]: CSIBlockVolume=true|false (BETA - default=true)
kube# [ 6.058380] kube-apiserver[925]: CSIDriverRegistry=true|false (BETA - default=true)
kube# [ 6.058588] kube-apiserver[925]: CSIInlineVolume=true|false (ALPHA - default=false)
kube# [ 6.058821] kube-apiserver[925]: CSIMigration=true|false (ALPHA - default=false)
kube# [ 6.059009] kube-apiserver[925]: CSIMigrationAWS=true|false (ALPHA - default=false)
kube# [ 6.059204] kube-apiserver[925]: CSIMigrationAzureDisk=true|false (ALPHA - default=false)
kube# [ 6.059400] kube-apiserver[925]: CSIMigrationAzureFile=true|false (ALPHA - default=false)
kube# [ 6.059592] kube-apiserver[925]: CSIMigrationGCE=true|false (ALPHA - default=false)
kube# [ 6.059843] kube-apiserver[925]: CSIMigrationOpenStack=true|false (ALPHA - default=false)
kube# [ 6.060008] kube-apiserver[925]: CSINodeInfo=true|false (BETA - default=true)
kube# [ 6.060247] kube-apiserver[925]: CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
kube# [ 6.060512] kube-apiserver[925]: CustomResourceDefaulting=true|false (ALPHA - default=false)
kube# [ 6.086440] kube-apiserver[925]: CustomResourcePublishOpenAPI=true|false (BETA - default=true)
kube# [ 6.086539] kube-apiserver[925]: CustomResourceSubresources=true|false (BETA - default=true)
kube# [ 6.086899] kube-apiserver[925]: CustomResourceValidation=true|false (BETA - default=true)
kube# [ 6.087201] kube-apiserver[925]: CustomResourceWebhookConversion=true|false (BETA - default=true)
kube# [ 6.087411] kube-apiserver[925]: DebugContainers=true|false (ALPHA - default=false)
kube# [ 6.087649] kube-apiserver[925]: DevicePlugins=true|false (BETA - default=true)
kube# [ 6.088042] kube-apiserver[925]: DryRun=true|false (BETA - default=true)
kube# [ 6.088244] kube-apiserver[925]: DynamicAuditing=true|false (ALPHA - default=false)
kube# [ 6.088448] kube-apiserver[925]: DynamicKubeletConfig=true|false (BETA - default=true)
kube# [ 6.088643] kube-apiserver[925]: ExpandCSIVolumes=true|false (ALPHA - default=false)
kube# [ 6.089006] kube-apiserver[925]: ExpandInUsePersistentVolumes=true|false (BETA - default=true)
kube# [ 6.089192] kube-apiserver[925]: ExpandPersistentVolumes=true|false (BETA - default=true)
kube# [ 6.089392] kube-apiserver[925]: ExperimentalCriticalPodAnnotation=true|false (ALPHA - default=false)
kube# [ 6.089583] kube-apiserver[925]: ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
kube# [ 6.089861] kube-apiserver[925]: HyperVContainer=true|false (ALPHA - default=false)
kube# [ 6.090151] kube-apiserver[925]: KubeletPodResources=true|false (BETA - default=true)
kube# [ 6.090351] kube-apiserver[925]: LocalStorageCapacityIsolation=true|false (BETA - default=true)
kube# [ 6.090543] kube-apiserver[925]: LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
kube# [ 6.090895] kube-apiserver[925]: MountContainers=true|false (ALPHA - default=false)
kube# [ 6.091088] kube-apiserver[925]: NodeLease=true|false (BETA - default=true)
kube# [ 6.091299] kube-apiserver[925]: NonPreemptingPriority=true|false (ALPHA - default=false)
kube# [ 6.091494] kube-apiserver[925]: PodShareProcessNamespace=true|false (BETA - default=true)
kube# [ 6.091706] kube-apiserver[925]: ProcMountType=true|false (ALPHA - default=false)
kube# [ 6.092060] kube-apiserver[925]: QOSReserved=true|false (ALPHA - default=false)
kube# [ 6.092256] kube-apiserver[925]: RemainingItemCount=true|false (ALPHA - default=false)
kube# [ 6.092452] kube-apiserver[925]: RequestManagement=true|false (ALPHA - default=false)
kube# [ 6.092643] kube-apiserver[925]: ResourceLimitsPriorityFunction=true|false (ALPHA - default=false)
kube# [ 6.092990] kube-apiserver[925]: ResourceQuotaScopeSelectors=true|false (BETA - default=true)
kube# [ 6.093175] kube-apiserver[925]: RotateKubeletClientCertificate=true|false (BETA - default=true)
kube# [ 6.093367] kube-apiserver[925]: RotateKubeletServerCertificate=true|false (BETA - default=true)
kube# [ 6.093566] kube-apiserver[925]: RunAsGroup=true|false (BETA - default=true)
kube# [ 6.093877] kube-apiserver[925]: RuntimeClass=true|false (BETA - default=true)
kube# [ 6.094161] kube-apiserver[925]: SCTPSupport=true|false (ALPHA - default=false)
kube# [ 6.094430] kube-apiserver[925]: ScheduleDaemonSetPods=true|false (BETA - default=true)
kube# [ 6.094647] kube-apiserver[925]: ServerSideApply=true|false (ALPHA - default=false)
kube# [ 6.094901] kube-apiserver[925]: ServiceLoadBalancerFinalizer=true|false (ALPHA - default=false)
kube# [ 6.095083] kube-apiserver[925]: ServiceNodeExclusion=true|false (ALPHA - default=false)
kube# [ 6.119749] kube-apiserver[925]: StorageVersionHash=true|false (BETA - default=true)
kube# [ 6.120000] kube-apiserver[925]: StreamingProxyRedirects=true|false (BETA - default=true)
kube# [ 6.120195] kube-apiserver[925]: SupportNodePidsLimit=true|false (BETA - default=true)
kube# [ 6.120394] kube-apiserver[925]: SupportPodPidsLimit=true|false (BETA - default=true)
kube# [ 6.120582] kube-apiserver[925]: Sysctls=true|false (BETA - default=true)
kube# [ 6.120842] kube-apiserver[925]: TTLAfterFinished=true|false (ALPHA - default=false)
kube# [ 6.121104] kube-apiserver[925]: TaintBasedEvictions=true|false (BETA - default=true)
kube# [ 6.121292] kube-apiserver[925]: TaintNodesByCondition=true|false (BETA - default=true)
kube# [ 6.121485] kube-apiserver[925]: TokenRequest=true|false (BETA - default=true)
kube# [ 6.121689] kube-apiserver[925]: TokenRequestProjection=true|false (BETA - default=true)
kube# [ 6.122007] kube-apiserver[925]: ValidateProxyRedirects=true|false (BETA - default=true)
kube# [ 6.122196] kube-apiserver[925]: VolumePVCDataSource=true|false (ALPHA - default=false)
kube# [ 6.122384] kube-apiserver[925]: VolumeSnapshotDataSource=true|false (ALPHA - default=false)
kube# [ 6.122576] kube-apiserver[925]: VolumeSubpathEnvExpansion=true|false (BETA - default=true)
kube# [ 6.122811] kube-apiserver[925]: WatchBookmark=true|false (ALPHA - default=false)
kube# [ 6.122992] kube-apiserver[925]: WinDSR=true|false (ALPHA - default=false)
kube# [ 6.123179] kube-apiserver[925]: WinOverlay=true|false (ALPHA - default=false)
kube# [ 6.123375] kube-apiserver[925]: WindowsGMSA=true|false (ALPHA - default=false)
kube# [ 6.123626] dhcpcd[784]: eth0: soliciting a DHCP lease
kube# [ 6.124051] kube-apiserver[925]: --master-service-namespace string DEPRECATED: the namespace from which the kubernetes master services should be injected into pods. (default "default")
kube# [ 6.193709] serial8250: too much work for irq4
kube# [ 6.124275] kube-apiserver[925]: --max-mutating-requests-inflight int The maximum number of mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. (default 200)
kube# [ 6.124468] kube-apiserver[925]: --max-requests-inflight int The maximum number of non-mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. (default 400)
kube# [ 6.124655] kube-apiserver[925]: --min-request-timeout int An optional field indicating the minimum number of seconds a handler must keep a request open before timing it out. Currently only honored by the watch request handler, which picks a randomized value above this number as the connection timeout, to spread out load. (default 1800)
kube# [ 6.125003] kube-apiserver[925]: --request-timeout duration An optional field indicating the duration a handler must keep a request open before timing it out. This is the default request timeout for requests but may be overridden by flags such as --min-request-timeout for specific types of requests. (default 1m0s)
kube# [ 6.125187] kube-apiserver[925]: --target-ram-mb int Memory limit for apiserver in MB (used to configure sizes of caches, etc.)
kube# [ 6.125386] kube-apiserver[925]: Etcd flags:
kube# [ 6.125578] kube-apiserver[925]: --default-watch-cache-size int Default watch cache size. If zero, watch cache will be disabled for resources that do not have a default watch size set. (default 100)
kube# [ 6.125842] kube-apiserver[925]: --delete-collection-workers int Number of workers spawned for DeleteCollection call. These are used to speed up namespace cleanup. (default 1)
kube# [ 6.126101] kube-apiserver[925]: --enable-garbage-collector Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-controller-manager. (default true)
kube# [ 6.126287] kube-apiserver[925]: --encryption-provider-config string The file containing configuration for encryption providers to be used for storing secrets in etcd
kube# [ 6.151197] systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=1/FAILURE
kube# [ 6.151552] kube-apiserver[925]: --etcd-cafile string SSL Certificate Authority file used to secure etcd communication.
kube# [ 6.151749] kube-apiserver[925]: --etcd-certfile string SSL certification file used to secure etcd communication.
kube# [ 6.151965] kube-apiserver[925]: --etcd-compaction-interval duration The interval of compaction requests. If 0, the compaction request from apiserver is disabled. (default 5m0s)
kube# [ 6.152176] kube-apiserver[925]: --etcd-count-metric-poll-period duration Frequency of polling etcd for number of resources per type. 0 disables the metric collection. (default 1m0s)
kube# [ 6.152345] kube-apiserver[925]: --etcd-keyfile string SSL key file used to secure etcd communication.
kube# [ 6.152542] kube-apiserver[925]: --etcd-prefix string The prefix to prepend to all resource paths in etcd. (default "/registry")
kube# [ 6.152727] kube-apiserver[925]: --etcd-servers strings List of etcd servers to connect with (scheme://ip:port), comma separated.
kube# [ 6.153029] kube-apiserver[925]: --etcd-servers-overrides strings Per-resource etcd servers overrides, comma separated. The individual override format: group/resource#servers, where servers are URLs, semicolon separated.
kube# [ 6.153240] kube-apiserver[925]: --storage-backend string The storage backend for persistence. Options: 'etcd3' (default).
kube# [ 6.153436] kube-apiserver[925]: --storage-media-type string The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting. (default "application/vnd.kubernetes.protobuf")
kube# [ 6.153655] kube-apiserver[925]: --watch-cache Enable watch caching in the apiserver (default true)
kube# [ 6.154143] systemd[1]: kube-apiserver.service: Failed with result 'exit-code'.
kube# [ 6.154407] kube-apiserver[925]: --watch-cache-sizes strings Watch cache size settings for some resources (pods, nodes, etc.), comma separated. The individual setting format: resource[.group]#size, where resource is lowercase plural (no version), group is omitted for resources of apiVersion v1 (the legacy core API) and included for others, and size is a number. It takes effect when watch-cache is enabled. Some resources (replicationcontrollers, endpoints, nodes, pods, services, apiservices.apiregistration.k8s.io) have system defaults set by heuristics, others default to default-watch-cache-size
kube# [ 6.154665] kube-apiserver[925]: Secure serving flags:
kube# [ 6.154955] kube-apiserver[925]: --bind-address ip The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank, all interfaces will be used (0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 0.0.0.0)
kube# [ 6.155258] kube-apiserver[925]: --cert-dir string The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored. (default "/var/run/kubernetes")
kube# [ 6.155443] kube-apiserver[925]: --http2-max-streams-per-connection int The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
kube# [ 6.155634] kube-apiserver[925]: --secure-port int The port on which to serve HTTPS with authentication and authorization.It cannot be switched off with 0. (default 6443)
kube# [ 6.155940] kube-apiserver[925]: --tls-cert-file string File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
kube# [ 6.156189] kube-apiserver[925]: --tls-cipher-suites strings Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be use. Possible values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA
kube# [ 6.179960] kube-apiserver[925]: --tls-min-version string Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
kube# [ 6.180136] kube-apiserver[925]: --tls-private-key-file string File containing the default x509 private key matching --tls-cert-file.
kube# [ 6.180327] kube-apiserver[925]: --tls-sni-cert-key namedCertKey A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])
kube# [ 6.244729] serial8250: too much work for irq4
kube# [ 6.180552] kube-apiserver[925]: Insecure serving flags:
kube# [ 6.180738] kube-apiserver[925]: --address ip The IP address on which to serve the insecure --port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 127.0.0.1) (DEPRECATED: see --bind-address instead.)
kube# [ 6.181086] kube-apiserver[925]: --insecure-bind-address ip The IP address on which to serve the --insecure-port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 127.0.0.1) (DEPRECATED: This flag will be removed in a future version.)
kube# [ 6.181321] kube-apiserver[925]: --insecure-port int The port on which to serve unsecured, unauthenticated access. (default 8080) (DEPRECATED: This flag will be removed in a future version.)
kube# [ 6.181534] kube-apiserver[925]: --port int The port on which to serve unsecured, unauthenticated access. Set to 0 to disable. (default 8080) (DEPRECATED: see --secure-port instead.)
kube# [ 6.181753] kube-apiserver[925]: Auditing flags:
kube# [ 6.182044] kube-apiserver[925]: --audit-dynamic-configuration Enables dynamic audit configuration. This feature also requires the DynamicAuditing feature flag
kube# [ 6.182223] kube-apiserver[925]: --audit-log-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
kube# [ 6.182414] kube-apiserver[925]: --audit-log-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 1)
kube# [ 6.182523] kube-apiserver[925]: --audit-log-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.
kube# [ 6.182722] kube-apiserver[925]: --audit-log-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.
kube# [ 6.183037] kube-apiserver[925]: --audit-log-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode.
kube# [ 6.183240] kube-apiserver[925]: --audit-log-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode.
kube# [ 6.183426] kube-apiserver[925]: --audit-log-format string Format of saved audits. "legacy" indicates 1-line text format for each event. "json" indicates structured json format. Known formats are legacy,json. (default "json")
kube# [ 6.183638] kube-apiserver[925]: --audit-log-maxage int The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.
kube# [ 6.183993] kube-apiserver[925]: --audit-log-maxbackup int The maximum number of old audit log files to retain.
kube# [ 6.184239] kube-apiserver[925]: --audit-log-maxsize int The maximum size in megabytes of the audit log file before it gets rotated.
kube# [ 6.211015] kube-apiserver[925]: --audit-log-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "blocking")
kube# [ 6.211277] kube-apiserver[925]: --audit-log-path string If set, all requests coming to the apiserver will be logged to this file. '-' means standard out.
kube# [ 6.211475] kube-apiserver[925]: --audit-log-truncate-enabled Whether event and batch truncating is enabled.
kube# [ 6.211702] kube-apiserver[925]: --audit-log-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
kube# [ 6.212055] kube-apiserver[925]: --audit-log-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
kube# [ 6.212260] kube-apiserver[925]: --audit-log-version string API group and version used for serializing audit events written to log. (default "audit.k8s.io/v1")
kube# [ 6.212484] kube-apiserver[925]: --audit-policy-file string Path to the file that defines the audit policy configuration.
kube# [ 6.212754] kube-apiserver[925]: --audit-webhook-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
kube# [ 6.213038] kube-apiserver[925]: --audit-webhook-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 400)
kube# [ 6.213364] kube-apiserver[925]: --audit-webhook-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode. (default 30s)
kube# [ 6.213559] kube-apiserver[925]: --audit-webhook-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode. (default 15)
kube# [ 6.213836] kube-apiserver[925]: --audit-webhook-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode. (default true)
kube# [ 6.214027] kube-apiserver[925]: --audit-webhook-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode. (default 10)
kube# [ 6.214242] kube-apiserver[925]: --audit-webhook-config-file string Path to a kubeconfig formatted file that defines the audit webhook configuration.
kube# [ 6.214461] kube-apiserver[925]: --audit-webhook-initial-backoff duration The amount of time to wait before retrying the first failed request. (default 10s)
kube# [ 6.214711] kube-apiserver[925]: --audit-webhook-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "batch")
kube# [ 6.215059] kube-apiserver[925]: --audit-webhook-truncate-enabled Whether event and batch truncating is enabled.
kube# [ 6.215263] kube-apiserver[925]: --audit-webhook-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
kube# [ 6.215481] kube-apiserver[925]: --audit-webhook-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
kube# [ 6.215700] kube-apiserver[925]: --audit-webhook-version string API group and version used for serializing audit events written to webhook. (default "audit.k8s.io/v1")
kube# [ 6.216055] kube-apiserver[925]: Features flags:
kube# [ 6.216314] kube-apiserver[925]: --contention-profiling Enable lock contention profiling, if profiling is enabled
kube# [ 6.243302] kube-apiserver[925]: --profiling Enable profiling via web interface host:port/debug/pprof/ (default true)
kube# [ 6.243512] kube-apiserver[925]: Authentication flags:
kube# [ 6.243729] kube-apiserver[925]: --anonymous-auth Enables anonymous requests to the secure port of the API server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated. (default true)
kube# [ 6.302457] serial8250: too much work for irq4
kube# [ 6.243958] kube-apiserver[925]: --api-audiences strings Identifiers of the API. The service account token authenticator will validate that tokens used against the API are bound to at least one of these audiences. If the --service-account-issuer flag is configured and this flag is not, this field defaults to a single element list containing the issuer URL .
kube# [ 6.244178] kube-apiserver[925]: --authentication-token-webhook-cache-ttl duration The duration to cache responses from the webhook token authenticator. (default 2m0s)
kube# [ 6.244413] kube-apiserver[925]: --authentication-token-webhook-config-file string File with webhook configuration for token authentication in kubeconfig format. The API server will query the remote service to determine authentication for bearer tokens.
kube# [ 6.244627] kube-apiserver[925]: --basic-auth-file string If set, the file that will be used to admit requests to the secure port of the API server via http basic authentication.
kube# [ 6.244967] kube-apiserver[925]: --client-ca-file string If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
kube# [ 6.245220] kube-apiserver[925]: --enable-bootstrap-token-auth Enable to allow secrets of type 'bootstrap.kubernetes.io/token' in the 'kube-system' namespace to be used for TLS bootstrapping authentication.
kube# [ 6.245474] kube-apiserver[925]: --oidc-ca-file string If set, the OpenID server's certificate will be verified by one of the authorities in the oidc-ca-file, otherwise the host's root CA set will be used.
kube# [ 6.245746] kube-apiserver[925]: --oidc-client-id string The client ID for the OpenID Connect client, must be set if oidc-issuer-url is set.
kube# [ 6.246038] kube-apiserver[925]: --oidc-groups-claim string If provided, the name of a custom OpenID Connect claim for specifying user groups. The claim value is expected to be a string or array of strings. This flag is experimental, please see the authentication documentation for further details.
kube# [ 6.246302] kube-apiserver[925]: --oidc-groups-prefix string If provided, all groups will be prefixed with this value to prevent conflicts with other authentication strategies.
kube# [ 6.246534] kube-apiserver[925]: --oidc-issuer-url string The URL of the OpenID issuer, only HTTPS scheme will be accepted. If set, it will be used to verify the OIDC JSON Web Token (JWT).
kube# [ 6.246751] kube-apiserver[925]: --oidc-required-claim mapStringString A key=value pair that describes a required claim in the ID Token. If set, the claim is verified to be present in the ID Token with a matching value. Repeat this flag to specify multiple claims.
kube# [ 6.246992] kube-apiserver[925]: --oidc-signing-algs strings Comma-separated list of allowed JOSE asymmetric signing algorithms. JWTs with a 'alg' header value not in this list will be rejected. Values are defined by RFC 7518 https://tools.ietf.org/html/rfc7518#section-3.1. (default [RS256])
kube# [ 6.247209] kube-apiserver[925]: --oidc-username-claim string The OpenID claim to use as the user name. Note that claims other than the default ('sub') is not guaranteed to be unique and immutable. This flag is experimental, please see the authentication documentation for further details. (default "sub")
kube# [ 6.247472] kube-apiserver[925]: --oidc-username-prefix string If provided, all usernames will be prefixed with this value. If not provided, username claims other than 'email' are prefixed by the issuer URL to avoid clashes. To skip any prefixing, provide the value '-'.
kube# [ 6.272143] kube-apiserver[925]: --requestheader-allowed-names strings List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
kube# [ 6.272350] kube-apiserver[925]: --requestheader-client-ca-file string Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
kube# [ 6.272544] kube-apiserver[925]: --requestheader-extra-headers-prefix strings List of request header prefixes to inspect. X-Remote-Extra- is suggested.
kube# [ 6.272743] kube-apiserver[925]: --requestheader-group-headers strings List of request headers to inspect for groups. X-Remote-Group is suggested.
kube# [ 6.272957] kube-apiserver[925]: --requestheader-username-headers strings List of request headers to inspect for usernames. X-Remote-User is common.
kube# [ 6.273154] kube-apiserver[925]: --service-account-issuer string Identifier of the service account token issuer. The issuer will assert this identifier in "iss" claim of issued tokens. This value is a string or URI.
kube# [ 6.273352] kube-apiserver[925]: --service-account-key-file stringArray File containing PEM-encoded x509 RSA or ECDSA private or public keys, used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. If unspecified, --tls-private-key-file is used. Must be specified when --service-account-signing-key is provided
kube# [ 6.273557] kube-apiserver[925]: --service-account-lookup If true, validate ServiceAccount tokens exist in etcd as part of authentication. (default true)
kube# [ 6.273918] kube-apiserver[925]: --service-account-max-token-expiration duration The maximum validity duration of a token created by the service account token issuer. If an otherwise valid TokenRequest with a validity duration larger than this value is requested, a token will be issued with a validity duration of this value.
kube# [ 6.274179] kube-apiserver[925]: --token-auth-file string If set, the file that will be used to secure the secure port of the API server via token authentication.
kube# [ 6.274376] kube-apiserver[925]: Authorization flags:
kube# [ 6.274601] kube-apiserver[925]: --authorization-mode strings Ordered list of plug-ins to do authorization on secure port. Comma-delimited list of: AlwaysAllow,AlwaysDeny,ABAC,Webhook,RBAC,Node. (default [AlwaysAllow])
kube# [ 6.274843] kube-apiserver[925]: --authorization-policy-file string File with authorization policy in json line by line format, used with --authorization-mode=ABAC, on the secure port.
kube# [ 6.275021] kube-apiserver[925]: --authorization-webhook-cache-authorized-ttl duration The duration to cache 'authorized' responses from the webhook authorizer. (default 5m0s)
kube# [ 6.275212] kube-apiserver[925]: --authorization-webhook-cache-unauthorized-ttl duration The duration to cache 'unauthorized' responses from the webhook authorizer. (default 30s)
kube# [ 6.275412] kube-apiserver[925]: --authorization-webhook-config-file string File with webhook configuration in kubeconfig format, used with --authorization-mode=Webhook. The API server will query the remote service to determine access on the API server's secure port.
kube# [ 6.275655] kube-apiserver[925]: Cloud provider flags:
kube# [ 6.275957] kube-apiserver[925]: --cloud-config string The path to the cloud provider configuration file. Empty string for no configuration file.
kube# [ 6.355005] serial8250: too much work for irq4
kube# [ 6.276146] kube-apiserver[925]: --cloud-provider string The provider for cloud services. Empty string for no provider.
kube# [ 6.276353] kube-apiserver[925]: Api enablement flags:
kube# [ 6.276591] kube-apiserver[925]: --runtime-config mapStringString A set of key=value pairs that describe runtime configuration that may be passed to apiserver. <group>/<version> (or <version> for the core group) key can be used to turn on/off specific api versions. api/all is special key to control all api versions, be careful setting it false, unless you know what you do. api/legacy is deprecated, we will remove it in the future, so stop using it. (default )
kube# [ 6.302329] kube-apiserver[925]: Admission flags:
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# [ 6.302527] kube-apiserver[925]: --admission-control strings Admission is divided into two phases. In the first phase, only mutating admission plugins run. In the second phase, only validating admission plugins run. The names in the below list may represent a validating plugin, a mutating plugin, or both. The order of plugins in which they are passed to this flag does not matter. Comma-delimited list of: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. (DEPRECATED: Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.)
kube# [ 6.302860] kube-apiserver[925]: --admission-control-config-file string File with admission control configuration.
kube# [ 6.303185] kube-apiserver[925]: --disable-admission-plugins strings admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
kube# [ 6.303461] kube-apiserver[925]: --enable-admission-plugins strings admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
kube# [ 6.303677] kube-apiserver[925]: Misc flags:
kube# [ 6.304005] kube-apiserver[925]: --allow-privileged If true, allow privileged containers. [default=false]
kube# [ 6.328638] kube-apiserver[925]: --apiserver-count int The number of apiservers running in the cluster, must be a positive number. (In use when --endpoint-reconciler-type=master-count is enabled.) (default 1)
kube# [ 6.328955] kube-apiserver[925]: --enable-aggregator-routing Turns on aggregator routing requests to endpoints IP rather than cluster IP.
kube# [ 6.329145] kube-apiserver[925]: --endpoint-reconciler-type string Use an endpoint reconciler (master-count, lease, none) (default "lease")
kube# [ 6.329383] kube-apiserver[925]: --event-ttl duration Amount of time to retain events. (default 1h0m0s)
kube# [ 6.329585] kube-apiserver[925]: --kubelet-certificate-authority string Path to a cert file for the certificate authority.
kube# [ 6.329845] kube-apiserver[925]: --kubelet-client-certificate string Path to a client cert file for TLS.
kube# [ 6.330149] kube-apiserver[925]: --kubelet-client-key string Path to a client key file for TLS.
kube# [ 6.330415] kube-apiserver[925]: --kubelet-https Use https for kubelet connections. (default true)
kube# [ 6.330614] kube-apiserver[925]: --kubelet-preferred-address-types strings List of the preferred NodeAddressTypes to use for kubelet connections. (default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP])
kube# [ 6.330963] kube-apiserver[925]: --kubelet-read-only-port uint DEPRECATED: kubelet port. (default 10255)
kube# [ 6.331145] kube-apiserver[925]: --kubelet-timeout duration Timeout for kubelet operations. (default 5s)
kube# [ 6.331342] kube-apiserver[925]: --kubernetes-service-node-port int If non-zero, the Kubernetes master service (which apiserver creates/maintains) will be of type NodePort, using this as the value of the port. If zero, the Kubernetes master service will be of type ClusterIP.
kube# [ 6.331543] kube-apiserver[925]: --max-connection-bytes-per-sec int If non-zero, throttle each user connection to this number of bytes/sec. Currently only applies to long-running requests.
kube# [ 6.331750] kube-apiserver[925]: --proxy-client-cert-file string Client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins. It is expected that this cert includes a signature from the CA in the --requestheader-client-ca-file flag. That CA is published in the 'extension-apiserver-authentication' configmap in the kube-system namespace. Components receiving calls from kube-aggregator should use that CA to perform their half of the mutual TLS verification.
kube# [ 6.331976] kube-apiserver[925]: --proxy-client-key-file string Private key for the client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins.
kube# [ 6.332216] kube-apiserver[925]: --service-account-signing-key-file string Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)
kube# [ 6.332471] kube-apiserver[925]: --service-cluster-ip-range ipNet A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods. (default 10.0.0.0/24)
kube# [ 6.409628] serial8250: too much work for irq4
kube# [ 6.332665] kube-apiserver[925]: --service-node-port-range portRange A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)
kube# [ 6.332998] kube-apiserver[925]: Global flags:
kube# [ 6.333185] kube-apiserver[925]: --alsologtostderr log to standard error as well as files
kube# [ 6.333377] kube-apiserver[925]: -h, --help help for kube-apiserver
kube# [ 6.333580] kube-apiserver[925]: --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
kube# [ 6.333824] kube-apiserver[925]: --log-dir string If non-empty, write log files in this directory
kube# [ 6.334007] kube-apiserver[925]: --log-file string If non-empty, use this log file
kube# [ 6.334210] kube-apiserver[925]: --log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
kube# [ 6.362463] kube-apiserver[925]: --log-flush-frequency duration Maximum number of seconds between log flushes (default 5s)
kube# [ 6.362662] kube-apiserver[925]: --logtostderr log to standard error instead of files (default true)
kube# [ 6.363024] kube-apiserver[925]: --skip-headers If true, avoid header prefixes in the log messages
kube# [ 6.363217] kube-apiserver[925]: --skip-log-headers If true, avoid headers when opening log files
kube# [ 6.363416] kube-apiserver[925]: --stderrthreshold severity logs at or above this threshold go to stderr (default 2)
kube# [ 6.363619] kube-apiserver[925]: -v, --v Level number for the log level verbosity
kube# [ 6.363857] kube-apiserver[925]: --version version[=true] Print version information and quit
kube# [ 6.364045] kube-apiserver[925]: --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
kube# [ 6.364484] kube-apiserver[925]: error: unable to load server certificate: open /var/lib/kubernetes/secrets/kube-apiserver.pem: no such file or directory
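The usage dump above ends with the real failure: kube-apiserver exits because its serving certificate has not been written yet. Once certmgr has produced the secret, a generic sanity check (the path is taken from the error line; this command is illustrative, not part of the test driver) would be:

    # Inspect the serving certificate kube-apiserver expects at startup.
    openssl x509 -in /var/lib/kubernetes/secrets/kube-apiserver.pem \
      -noout -subject -issuer -dates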
kube# [ 6.374698] kube-scheduler[929]: I0127 01:31:52.317880 929 serving.go:319] Generated self-signed cert in-memory
kube# [ 6.437846] NET: Registered protocol family 17
kube# [ 6.386683] kube-controller-manager[927]: Flag --port has been deprecated, see --secure-port instead.
kube# [ 6.401027] dhcpcd[784]: eth0: soliciting an IPv6 router
kube# [ 6.401384] dhcpcd[784]: eth0: offered 10.0.2.15 from 10.0.2.2
kube# [ 6.401883] dhcpcd[784]: eth0: leased 10.0.2.15 for 86400 seconds
kube# [ 6.402326] dhcpcd[784]: eth0: adding route to 10.0.2.0/24
kube# [ 6.402644] dhcpcd[784]: eth0: adding default route via 10.0.2.2
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube: exit status 1
(0.12 seconds)
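The failing probe is the test driver polling `kubectl get node` against a kubeconfig whose certificate files do not exist yet. As a rough sketch, such a kubeconfig is typically assembled like this (the `local` cluster name, `cluster-admin` user, and file paths come from the error messages; the server URL is an assumption):

    # Hypothetical reconstruction of the client configuration the probe uses.
    kubectl config set-cluster local \
      --certificate-authority=/var/lib/kubernetes/secrets/ca.pem \
      --server=https://kube.my.xzy:6443
    kubectl config set-credentials cluster-admin \
      --client-certificate=/var/lib/kubernetes/secrets/cluster-admin.pem \
      --client-key=/var/lib/kubernetes/secrets/cluster-admin-key.pem
    kubectl config set-context local --cluster=local --user=cluster-admin

The probe keeps retrying, so the same three errors repeat below until the certificates appear.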
kube# [ 6.466832] nscd[923]: 923 monitored file `/etc/resolv.conf` was written to
kube# [ 6.477890] systemd[1]: Stopping Name Service Cache Daemon...
kube# [ 6.489989] systemd[1]: nscd.service: Succeeded.
kube# [ 6.490304] systemd[1]: Stopped Name Service Cache Daemon.
kube# [ 6.491957] systemd[1]: Starting Name Service Cache Daemon...
kube# [ 6.499672] nscd[1159]: 1159 monitoring file `/etc/passwd` (1)
kube# [ 6.499963] nscd[1159]: 1159 monitoring directory `/etc` (2)
kube# [ 6.500413] nscd[1159]: 1159 monitoring file `/etc/group` (3)
kube# [ 6.500749] nscd[1159]: 1159 monitoring directory `/etc` (2)
kube# [ 6.501062] nscd[1159]: 1159 monitoring file `/etc/hosts` (4)
kube# [ 6.501386] nscd[1159]: 1159 monitoring directory `/etc` (2)
kube# [ 6.501695] nscd[1159]: 1159 monitoring file `/etc/resolv.conf` (5)
kube# [ 6.502167] nscd[1159]: 1159 monitoring directory `/etc` (2)
kube# [ 6.502467] nscd[1159]: 1159 monitoring file `/etc/services` (6)
kube# [ 6.502833] nscd[1159]: 1159 monitoring directory `/etc` (2)
kube# [ 6.504279] nscd[1159]: 1159 disabled inotify-based monitoring for file `/etc/netgroup': No such file or directory
kube# [ 6.504497] nscd[1159]: 1159 stat failed for file `/etc/netgroup'; will try again later: No such file or directory
kube# [ 6.504897] dhcpcd[784]: Failed to reload-or-try-restart ntpd.service: Unit ntpd.service not found.
kube# [ 6.506946] dhcpcd[784]: Failed to reload-or-try-restart openntpd.service: Unit openntpd.service not found.
kube# [ 6.507217] dhcpcd[784]: Failed to reload-or-try-restart chronyd.service: Unit chronyd.service not found.
kube# [ 6.511420] dhcpcd[784]: forked to background, child pid 1160
kube# [ 6.512190] systemd[1]: Started Name Service Cache Daemon.
kube# [ 6.519125] systemd[1]: Started DHCP Client.
kube# [ 6.519470] systemd[1]: Reached target Network is Online.
kube# [ 6.520568] systemd[1]: Starting certmgr...
kube# [ 6.521558] systemd[1]: Starting Docker Application Container Engine...
kube# [ 6.630472] kube-controller-manager[927]: I0127 01:31:52.573784 927 serving.go:319] Generated self-signed cert in-memory
kube# [ 6.635266] kube-controller-manager[927]: invalid configuration: [unable to read client-cert /var/lib/kubernetes/secrets/kube-controller-manager-client.pem for kube-controller-manager due to open /var/lib/kubernetes/secrets/kube-controller-manager-client.pem: no such file or directory, unable to read client-key /var/lib/kubernetes/secrets/kube-controller-manager-client-key.pem for kube-controller-manager due to open /var/lib/kubernetes/secrets/kube-controller-manager-client-key.pem: no such file or directory, unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory]
kube# [ 6.640564] systemd[1]: kube-controller-manager.service: Main process exited, code=exited, status=1/FAILURE
kube# [ 6.640880] systemd[1]: kube-controller-manager.service: Failed with result 'exit-code'.
kube# [ 6.671198] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1171]: 2020/01/27 01:31:52 [INFO] certmgr: loading from config file /nix/store/bmm143bjzpgvrw7k50r36c5smy1n4pqm-certmgr.yaml
kube# [ 6.671403] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1171]: 2020/01/27 01:31:52 [INFO] manager: loading certificates from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d
kube# [ 6.675061] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1171]: 2020/01/27 01:31:52 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/addonManager.json
kube# [ 6.679389] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1171]: 2020/01/27 01:31:52 [ERROR] cert: failed to fetch remote CA: open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube# [ 6.679511] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1171]: Failed: open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube# [ 6.682868] systemd[1]: certmgr.service: Control process exited, code=exited, status=1/FAILURE
kube# [ 6.683130] systemd[1]: certmgr.service: Failed with result 'exit-code'.
kube# [ 6.683433] systemd[1]: Failed to start certmgr.
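certmgr is the root of the cascade: every service above fails on the same missing files under /var/lib/kubernetes/secrets, and certmgr itself cannot fetch the remote CA it needs to populate that directory. A generic way to confirm the state by hand inside the VM (standard tooling only; paths from the log):

    # What has actually been written, and why the unit failed.
    ls -l /var/lib/kubernetes/secrets/
    systemctl status certmgr.service
    journalctl -u certmgr.service --no-pager | tail -n 20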
kube# [ 6.728337] kube-scheduler[929]: W0127 01:31:52.671809 929 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
kube# [ 6.728610] kube-scheduler[929]: W0127 01:31:52.671854 929 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
kube# [ 6.729051] kube-scheduler[929]: W0127 01:31:52.671885 929 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
kube# [ 6.735922] kube-scheduler[929]: invalid configuration: [unable to read client-cert /var/lib/kubernetes/secrets/kube-scheduler-client.pem for kube-scheduler due to open /var/lib/kubernetes/secrets/kube-scheduler-client.pem: no such file or directory, unable to read client-key /var/lib/kubernetes/secrets/kube-scheduler-client-key.pem for kube-scheduler due to open /var/lib/kubernetes/secrets/kube-scheduler-client-key.pem: no such file or directory, unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory]
kube# [ 6.740500] systemd[1]: kube-scheduler.service: Main process exited, code=exited, status=1/FAILURE
kube# [ 6.740680] systemd[1]: kube-scheduler.service: Failed with result 'exit-code'.
kube# [ 6.741086] systemd[1]: kube-scheduler.service: Consumed 1.014s CPU time, no IP traffic.
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube: exit status 1
(0.05 seconds)
kube# [ 7.667112] dhcpcd[1160]: eth0: Router Advertisement from fe80::2
kube# [ 7.667400] dhcpcd[1160]: eth0: adding address fec0::5054:ff:fe12:3456/64
kube# [ 7.667802] dhcpcd[1160]: eth0: adding route to fec0::/64
kube# [ 7.668094] dhcpcd[1160]: eth0: adding default route via fe80::2
kube# [ 7.689955] dockerd[1172]: time="2020-01-27T01:31:53.633153430Z" level=info msg="Starting up"
kube# [ 7.701922] dockerd[1172]: time="2020-01-27T01:31:53.645463375Z" level=info msg="libcontainerd: started new containerd process" pid=1212
kube# [ 7.703267] dockerd[1172]: time="2020-01-27T01:31:53.646840086Z" level=info msg="parsed scheme: \"unix\"" module=grpc
kube# [ 7.703384] dockerd[1172]: time="2020-01-27T01:31:53.646861039Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
kube# [ 7.703655] dockerd[1172]: time="2020-01-27T01:31:53.646892048Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
kube# [ 7.704017] dockerd[1172]: time="2020-01-27T01:31:53.646922499Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
kube# [ 8.253941] dockerd[1172]: time="2020-01-27T01:31:54.197372626Z" level=info msg="starting containerd" revision=.m version=
kube# [ 8.254181] dockerd[1172]: time="2020-01-27T01:31:54.197653667Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
kube# [ 8.254464] dockerd[1172]: time="2020-01-27T01:31:54.197722112Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
kube# [ 8.254734] dockerd[1172]: time="2020-01-27T01:31:54.197847547Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
kube# [ 8.254991] dockerd[1172]: time="2020-01-27T01:31:54.197870734Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
kube# [ 8.266275] dockerd[1172]: time="2020-01-27T01:31:54.209842088Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /run/current-system/kernel-modules/lib/modules/4.19.95\n": exit status 1"
kube# [ 8.266380] dockerd[1172]: time="2020-01-27T01:31:54.209869466Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
kube# [ 8.266621] dockerd[1172]: time="2020-01-27T01:31:54.209939307Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
kube# [ 8.266989] dockerd[1172]: time="2020-01-27T01:31:54.210050494Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
kube# [ 8.267188] dockerd[1172]: time="2020-01-27T01:31:54.210159446Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
kube# [ 8.267424] dockerd[1172]: time="2020-01-27T01:31:54.210178443Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
kube# [ 8.267623] dockerd[1172]: time="2020-01-27T01:31:54.210216158Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
kube# [ 8.267851] dockerd[1172]: time="2020-01-27T01:31:54.210227332Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /run/current-system/kernel-modules/lib/modules/4.19.95\n": exit status 1"
kube# [ 8.268086] dockerd[1172]: time="2020-01-27T01:31:54.210246608Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
kube# [ 8.301141] dockerd[1172]: time="2020-01-27T01:31:54.244703222Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
kube# [ 8.301251] dockerd[1172]: time="2020-01-27T01:31:54.244732276Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
kube# [ 8.301545] dockerd[1172]: time="2020-01-27T01:31:54.244761330Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
kube# [ 8.301820] dockerd[1172]: time="2020-01-27T01:31:54.244775578Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
kube# [ 8.302070] dockerd[1172]: time="2020-01-27T01:31:54.244789825Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
kube# [ 8.302282] dockerd[1172]: time="2020-01-27T01:31:54.244803794Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
kube# [ 8.302467] dockerd[1172]: time="2020-01-27T01:31:54.244818041Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
kube# [ 8.302667] dockerd[1172]: time="2020-01-27T01:31:54.244832848Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
kube# [ 8.303005] dockerd[1172]: time="2020-01-27T01:31:54.244845978Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
kube# [ 8.303197] dockerd[1172]: time="2020-01-27T01:31:54.244873076Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
kube# [ 8.303397] dockerd[1172]: time="2020-01-27T01:31:54.244951578Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
kube# [ 8.303593] dockerd[1172]: time="2020-01-27T01:31:54.245014994Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
kube# [ 8.315394] dockerd[1172]: time="2020-01-27T01:31:54.258961459Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
kube# [ 8.315547] dockerd[1172]: time="2020-01-27T01:31:54.258989954Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
kube# [ 8.315882] dockerd[1172]: time="2020-01-27T01:31:54.259024875Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
kube# [ 8.316142] dockerd[1172]: time="2020-01-27T01:31:54.259038564Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
kube# [ 8.316395] dockerd[1172]: time="2020-01-27T01:31:54.259068456Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
kube# [ 8.316633] dockerd[1172]: time="2020-01-27T01:31:54.259081865Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
kube# [ 8.317046] dockerd[1172]: time="2020-01-27T01:31:54.259099465Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
kube# [ 8.317317] dockerd[1172]: time="2020-01-27T01:31:54.259130196Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
kube# [ 8.317674] dockerd[1172]: time="2020-01-27T01:31:54.259165954Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
kube# [ 8.318029] dockerd[1172]: time="2020-01-27T01:31:54.259198640Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
kube# [ 8.318292] dockerd[1172]: time="2020-01-27T01:31:54.259223224Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
kube# [ 8.329935] dockerd[1172]: time="2020-01-27T01:31:54.273502413Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
kube# [ 8.330025] dockerd[1172]: time="2020-01-27T01:31:54.273526159Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
kube# [ 8.330298] dockerd[1172]: time="2020-01-27T01:31:54.273541245Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
kube# [ 8.330550] dockerd[1172]: time="2020-01-27T01:31:54.273558286Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
kube# [ 8.335288] dockerd[1172]: time="2020-01-27T01:31:54.278860357Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
kube# [ 8.335450] dockerd[1172]: time="2020-01-27T01:31:54.278903938Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
kube# [ 8.335635] dockerd[1172]: time="2020-01-27T01:31:54.278928801Z" level=info msg="containerd successfully booted in 0.082164s"
kube# [ 8.360293] dockerd[1172]: time="2020-01-27T01:31:54.303831128Z" level=info msg="parsed scheme: \"unix\"" module=grpc
kube# [ 8.360397] dockerd[1172]: time="2020-01-27T01:31:54.303866887Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
kube# [ 8.360738] dockerd[1172]: time="2020-01-27T01:31:54.303889795Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
kube# [ 8.361074] dockerd[1172]: time="2020-01-27T01:31:54.303905719Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
kube# [ 8.361700] dockerd[1172]: time="2020-01-27T01:31:54.305233820Z" level=info msg="parsed scheme: \"unix\"" module=grpc
kube# [ 8.362086] dockerd[1172]: time="2020-01-27T01:31:54.305278519Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
kube# [ 8.362479] dockerd[1172]: time="2020-01-27T01:31:54.305312881Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
kube# [ 8.363121] dockerd[1172]: time="2020-01-27T01:31:54.305343332Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
kube# [ 8.403939] dockerd[1172]: time="2020-01-27T01:31:54.347488073Z" level=warning msg="Your kernel does not support cgroup rt period"
kube# [ 8.404055] dockerd[1172]: time="2020-01-27T01:31:54.347513496Z" level=warning msg="Your kernel does not support cgroup rt runtime"
kube# [ 8.404489] dockerd[1172]: time="2020-01-27T01:31:54.347625242Z" level=info msg="Loading containers: start."
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# [ 8.559463] Initializing XFRM netlink socket
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube: exit status 1
(0.05 seconds)
kube# [ 8.550539] dockerd[1172]: time="2020-01-27T01:31:54.494074581Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
kube# [ 8.551408] systemd-udevd[697]: Using default interface naming scheme 'v243'.
kube# [ 8.552972] systemd-udevd[697]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
kube# [ 8.607972] IPv6: ADDRCONF(NETDEV_UP): docker0: link is not ready
kube# [ 8.589619] dockerd[1172]: time="2020-01-27T01:31:54.533188491Z" level=info msg="Loading containers: done."
kube# [ 8.604108] dhcpcd[1160]: docker0: waiting for carrier
kube# [ 8.779899] dockerd[1172]: time="2020-01-27T01:31:54.723131067Z" level=info msg="Docker daemon" commit=633a0ea838f10e000b7c6d6eed1623e6e988b5bc graphdriver(s)=overlay2 version=19.03.5
kube# [ 8.780066] dockerd[1172]: time="2020-01-27T01:31:54.723202026Z" level=info msg="Daemon has completed initialization"
kube# [ 8.837289] dockerd[1172]: time="2020-01-27T01:31:54.780742859Z" level=info msg="API listen on /run/docker.sock"
kube# [ 8.837669] systemd[1]: Started Docker Application Container Engine.
kube# [ 8.839368] systemd[1]: Starting Kubernetes Kubelet Service...
kube# [ 8.844075] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1305]: Seeding docker image: /nix/store/zfkrn9hmwiay3fwj7ilwv52rd6myn4i1-docker-image-pause.tar.gz
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube: exit status 1
(0.04 seconds)
kube# [ 9.933966] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1305]: Loaded image: pause:latest
kube# [ 9.935834] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1305]: Seeding docker image: /nix/store/ggrzs3gzv69xzk02ckzijc2caqv738kk-docker-image-coredns-coredns-1.5.0.tar
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube: exit status 1
(0.05 seconds)
kube# [ 10.748329] systemd[1]: kube-proxy.service: Service RestartSec=5s expired, scheduling restart.
kube# [ 10.748628] systemd[1]: kube-proxy.service: Scheduled restart job, restart counter is at 1.
kube# [ 10.749158] systemd[1]: Stopped Kubernetes Proxy Service.
kube# [ 10.751522] systemd[1]: Started Kubernetes Proxy Service.
kube# [ 10.791274] kube-proxy[1460]: W0127 01:31:56.734535 1460 server.go:216] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
kube# [ 10.805337] kube-proxy[1460]: W0127 01:31:56.748881 1460 proxier.go:500] Failed to read file /lib/modules/4.19.95/modules.builtin with error open /lib/modules/4.19.95/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 10.807695] kube-proxy[1460]: W0127 01:31:56.751254 1460 proxier.go:513] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 10.809732] kube-proxy[1460]: W0127 01:31:56.753290 1460 proxier.go:513] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 10.811511] kube-proxy[1460]: W0127 01:31:56.755079 1460 proxier.go:513] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 10.813667] kube-proxy[1460]: W0127 01:31:56.757212 1460 proxier.go:513] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 10.815487] kube-proxy[1460]: W0127 01:31:56.759038 1460 proxier.go:513] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 10.824457] kube-proxy[1460]: F0127 01:31:56.768014 1460 server.go:449] invalid configuration: [unable to read client-cert /var/lib/kubernetes/secrets/kube-proxy-client.pem for kube-proxy due to open /var/lib/kubernetes/secrets/kube-proxy-client.pem: no such file or directory, unable to read client-key /var/lib/kubernetes/secrets/kube-proxy-client-key.pem for kube-proxy due to open /var/lib/kubernetes/secrets/kube-proxy-client-key.pem: no such file or directory, unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory]
kube# [ 10.829740] systemd[1]: kube-proxy.service: Main process exited, code=exited, status=255/EXCEPTION
kube# [ 10.830130] systemd[1]: kube-proxy.service: Failed with result 'exit-code'.
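systemd treats these failures as transient: each unit sits in a RestartSec=5s retry loop (kube-proxy above is on its first scheduled restart), which is what lets the cluster converge once the certificates show up. The loop can be observed with standard systemd queries (unit name from the log):

    # Restart policy and restart count for the failing unit.
    systemctl show kube-proxy.service -p Restart -p RestartUSec -p NRestarts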
kube# [ 10.980354] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1305]: Loaded image: coredns/coredns:1.5.0
kube# [ 11.061736] systemd[1]: Started Kubernetes Kubelet Service.
kube# [ 11.062192] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1305]: rm: cannot remove '/opt/cni/bin/*': No such file or directory
kube# [ 11.062461] systemd[1]: Reached target Kubernetes.
kube# [ 11.063053] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1305]: Linking cni package: /nix/store/9pqia3j6lxz57qa36w2niphr1f5vsirr-cni-plugins-0.8.2
kube# [ 11.063362] systemd[1]: Reached target Multi-User System.
kube# [ 11.063904] systemd[1]: Startup finished in 2.514s (kernel) + 8.483s (userspace) = 10.998s.
kube# [ 11.390419] systemd[1]: kube-apiserver.service: Service RestartSec=5s expired, scheduling restart.
kube# [ 11.390702] systemd[1]: kube-apiserver.service: Scheduled restart job, restart counter is at 1.
kube# [ 11.391113] systemd[1]: Stopped Kubernetes APIServer Service.
kube# [ 11.393055] systemd[1]: Started Kubernetes APIServer Service.
kube# [ 11.441534] kube-apiserver[1500]: Flag --insecure-bind-address has been deprecated, This flag will be removed in a future version.
kube# [ 11.441828] kube-apiserver[1500]: Flag --insecure-port has been deprecated, This flag will be removed in a future version.
kube# [ 11.442109] kube-apiserver[1500]: I0127 01:31:57.384742 1500 server.go:560] external host was not specified, using 192.168.1.1
kube# [ 11.442420] kube-apiserver[1500]: I0127 01:31:57.384953 1500 server.go:147] Version: v1.15.6
kube# [ 11.442708] kube-apiserver[1500]: Error: unable to load server certificate: open /var/lib/kubernetes/secrets/kube-apiserver.pem: no such file or directory
kube# [ 11.447839] kube-apiserver[1500]: Usage:
kube# [ 11.447979] kube-apiserver[1500]: kube-apiserver [flags]
kube# [ 11.448190] kube-apiserver[1500]: Generic flags:
kube# [ 11.448432] kube-apiserver[1500]: --advertise-address ip The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.
kube# [ 11.448635] kube-apiserver[1500]: --cloud-provider-gce-lb-src-cidrs cidrs CIDRs opened in GCE firewall for LB traffic proxy & health checks (default 130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16)
kube# [ 11.449021] kube-apiserver[1500]: --cors-allowed-origins strings List of allowed origins for CORS, comma separated. An allowed origin can be a regular expression to support subdomain matching. If this list is empty CORS will not be enabled.
kube# [ 11.449237] kube-apiserver[1500]: --default-not-ready-toleration-seconds int Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration. (default 300)
kube# [ 11.449470] kube-apiserver[1500]: --default-unreachable-toleration-seconds int Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration. (default 300)
kube# [ 11.449694] kube-apiserver[1500]: --enable-inflight-quota-handler If true, replace the max-in-flight handler with an enhanced one that queues and dispatches with priority and fairness
kube# [ 11.450079] kube-apiserver[1500]: --external-hostname string The hostname to use when generating externalized URLs for this master (e.g. Swagger API Docs).
kube# [ 11.450290] kube-apiserver[1500]: --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
kube# [ 11.450504] kube-apiserver[1500]: APIListChunking=true|false (BETA - default=true)
kube# [ 11.450730] kube-apiserver[1500]: APIResponseCompression=true|false (ALPHA - default=false)
kube# [ 11.450997] kube-apiserver[1500]: AllAlpha=true|false (ALPHA - default=false)
kube# [ 11.451253] kube-apiserver[1500]: AppArmor=true|false (BETA - default=true)
kube# [ 11.451547] kube-apiserver[1500]: AttachVolumeLimit=true|false (BETA - default=true)
kube# [ 11.451819] kube-apiserver[1500]: BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
kube# [ 11.452033] kube-apiserver[1500]: BlockVolume=true|false (BETA - default=true)
kube# [ 11.452247] kube-apiserver[1500]: BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
kube# [ 11.452376] kube-apiserver[1500]: CPUManager=true|false (BETA - default=true)
kube# [ 11.452620] kube-apiserver[1500]: CRIContainerLogRotation=true|false (BETA - default=true)
kube# [ 11.453054] kube-apiserver[1500]: CSIBlockVolume=true|false (BETA - default=true)
kube# [ 11.453400] kube-apiserver[1500]: CSIDriverRegistry=true|false (BETA - default=true)
kube# [ 11.453599] kube-apiserver[1500]: CSIInlineVolume=true|false (ALPHA - default=false)
kube# [ 11.453955] kube-apiserver[1500]: CSIMigration=true|false (ALPHA - default=false)
kube# [ 11.454166] kube-apiserver[1500]: CSIMigrationAWS=true|false (ALPHA - default=false)
kube# [ 11.454384] kube-apiserver[1500]: CSIMigrationAzureDisk=true|false (ALPHA - default=false)
kube# [ 11.454610] kube-apiserver[1500]: CSIMigrationAzureFile=true|false (ALPHA - default=false)
kube# [ 11.454865] kube-apiserver[1500]: CSIMigrationGCE=true|false (ALPHA - default=false)
kube# [ 11.455105] kube-apiserver[1500]: CSIMigrationOpenStack=true|false (ALPHA - default=false)
kube# [ 11.455292] kube-apiserver[1500]: CSINodeInfo=true|false (BETA - default=true)
kube# [ 11.455524] kube-apiserver[1500]: CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
kube# [ 11.455753] kube-apiserver[1500]: CustomResourceDefaulting=true|false (ALPHA - default=false)
kube# [ 11.480403] kube-apiserver[1500]: CustomResourcePublishOpenAPI=true|false (BETA - default=true)
kube# [ 11.480636] kube-apiserver[1500]: CustomResourceSubresources=true|false (BETA - default=true)
kube# [ 11.480846] kube-apiserver[1500]: CustomResourceValidation=true|false (BETA - default=true)
kube# [ 11.481074] kube-apiserver[1500]: CustomResourceWebhookConversion=true|false (BETA - default=true)
kube# [ 11.481254] kube-apiserver[1500]: DebugContainers=true|false (ALPHA - default=false)
kube# [ 11.481461] kube-apiserver[1500]: DevicePlugins=true|false (BETA - default=true)
kube# [ 11.481654] kube-apiserver[1500]: DryRun=true|false (BETA - default=true)
kube# [ 11.481763] kube-apiserver[1500]: DynamicAuditing=true|false (ALPHA - default=false)
kube# [ 11.482002] kube-apiserver[1500]: DynamicKubeletConfig=true|false (BETA - default=true)
kube# [ 11.482189] kube-apiserver[1500]: ExpandCSIVolumes=true|false (ALPHA - default=false)
kube# [ 11.482383] kube-apiserver[1500]: ExpandInUsePersistentVolumes=true|false (BETA - default=true)
kube# [ 11.482576] kube-apiserver[1500]: ExpandPersistentVolumes=true|false (BETA - default=true)
kube# [ 11.482838] kube-apiserver[1500]: ExperimentalCriticalPodAnnotation=true|false (ALPHA - default=false)
kube# [ 11.483107] kube-apiserver[1500]: ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
kube# [ 11.483285] kube-apiserver[1500]: HyperVContainer=true|false (ALPHA - default=false)
kube# [ 11.483499] kube-apiserver[1500]: KubeletPodResources=true|false (BETA - default=true)
kube# [ 11.483688] kube-apiserver[1500]: LocalStorageCapacityIsolation=true|false (BETA - default=true)
kube# [ 11.484017] kube-apiserver[1500]: LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
kube# [ 11.484244] kube-apiserver[1500]: MountContainers=true|false (ALPHA - default=false)
kube# [ 11.484492] kube-apiserver[1500]: NodeLease=true|false (BETA - default=true)
kube# [ 11.555525] serial8250: too much work for irq4
kube# [ 11.484694] kube-apiserver[1500]: NonPreemptingPriority=true|false (ALPHA - default=false)
kube# [ 11.484852] kube-apiserver[1500]: PodShareProcessNamespace=true|false (BETA - default=true)
kube# [ 11.485124] kube-apiserver[1500]: ProcMountType=true|false (ALPHA - default=false)
kube# [ 11.485313] kube-apiserver[1500]: QOSReserved=true|false (ALPHA - default=false)
kube# [ 11.485493] kube-apiserver[1500]: RemainingItemCount=true|false (ALPHA - default=false)
kube# [ 11.485694] kube-apiserver[1500]: RequestManagement=true|false (ALPHA - default=false)
kube# [ 11.486030] kube-apiserver[1500]: ResourceLimitsPriorityFunction=true|false (ALPHA - default=false)
kube# [ 11.486202] kube-apiserver[1500]: ResourceQuotaScopeSelectors=true|false (BETA - default=true)
kube# [ 11.486396] kube-apiserver[1500]: RotateKubeletClientCertificate=true|false (BETA - default=true)
kube# [ 11.486833] kube-apiserver[1500]: RotateKubeletServerCertificate=true|false (BETA - default=true)
kube# [ 11.487110] kube-apiserver[1500]: RunAsGroup=true|false (BETA - default=true)
kube# [ 11.487291] kube-apiserver[1500]: RuntimeClass=true|false (BETA - default=true)
kube# [ 11.487488] kube-apiserver[1500]: SCTPSupport=true|false (ALPHA - default=false)
kube# [ 11.487672] kube-apiserver[1500]: ScheduleDaemonSetPods=true|false (BETA - default=true)
kube# [ 11.512547] kube-apiserver[1500]: ServerSideApply=true|false (ALPHA - default=false)
kube# [ 11.512855] kube-apiserver[1500]: ServiceLoadBalancerFinalizer=true|false (ALPHA - default=false)
kube# [ 11.513111] kube-apiserver[1500]: ServiceNodeExclusion=true|false (ALPHA - default=false)
kube# [ 11.513475] kube-apiserver[1500]: StorageVersionHash=true|false (BETA - default=true)
kube# [ 11.513695] kube-apiserver[1500]: StreamingProxyRedirects=true|false (BETA - default=true)
kube# [ 11.514006] kube-apiserver[1500]: SupportNodePidsLimit=true|false (BETA - default=true)
kube# [ 11.514207] kube-apiserver[1500]: SupportPodPidsLimit=true|false (BETA - default=true)
kube# [ 11.514397] kube-apiserver[1500]: Sysctls=true|false (BETA - default=true)
kube# [ 11.514590] kube-apiserver[1500]: TTLAfterFinished=true|false (ALPHA - default=false)
kube# [ 11.514830] kube-apiserver[1500]: TaintBasedEvictions=true|false (BETA - default=true)
kube# [ 11.515090] kube-apiserver[1500]: TaintNodesByCondition=true|false (BETA - default=true)
kube# [ 11.515294] kube-apiserver[1500]: TokenRequest=true|false (BETA - default=true)
kube# [ 11.515504] kube-apiserver[1500]: TokenRequestProjection=true|false (BETA - default=true)
kube# [ 11.515699] kube-apiserver[1500]: ValidateProxyRedirects=true|false (BETA - default=true)
kube# [ 11.516015] kube-apiserver[1500]: VolumePVCDataSource=true|false (ALPHA - default=false)
kube# [ 11.516218] kube-apiserver[1500]: VolumeSnapshotDataSource=true|false (ALPHA - default=false)
kube# [ 11.516412] kube-apiserver[1500]: VolumeSubpathEnvExpansion=true|false (BETA - default=true)
kube# [ 11.516608] kube-apiserver[1500]: WatchBookmark=true|false (ALPHA - default=false)
kube# [ 11.516887] kube-apiserver[1500]: WinDSR=true|false (ALPHA - default=false)
kube# [ 11.517110] kube-apiserver[1500]: WinOverlay=true|false (ALPHA - default=false)
kube# [ 11.517317] kube-apiserver[1500]: WindowsGMSA=true|false (ALPHA - default=false)
kube# [ 11.517550] kube-apiserver[1500]: --master-service-namespace string DEPRECATED: the namespace from which the kubernetes master services should be injected into pods. (default "default")
kube# [ 11.517837] kube-apiserver[1500]: --max-mutating-requests-inflight int The maximum number of mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. (default 200)
kube# [ 11.518118] kube-apiserver[1500]: --max-requests-inflight int The maximum number of non-mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. (default 400)
kube# [ 11.518298] kube-apiserver[1500]: --min-request-timeout int An optional field indicating the minimum number of seconds a handler must keep a request open before timing it out. Currently only honored by the watch request handler, which picks a randomized value above this number as the connection timeout, to spread out load. (default 1800)
kube# [ 11.518488] kube-apiserver[1500]: --request-timeout duration An optional field indicating the duration a handler must keep a request open before timing it out. This is the default request timeout for requests but may be overridden by flags such as --min-request-timeout for specific types of requests. (default 1m0s)
kube# [ 11.518687] kube-apiserver[1500]: --target-ram-mb int Memory limit for apiserver in MB (used to configure sizes of caches, etc.)
kube# [ 11.518996] kube-apiserver[1500]: Etcd flags:
kube# [ 11.519193] kube-apiserver[1500]: --default-watch-cache-size int Default watch cache size. If zero, watch cache will be disabled for resources that do not have a default watch size set. (default 100)
kube# [ 11.519373] kube-apiserver[1500]: --delete-collection-workers int Number of workers spawned for DeleteCollection call. These are used to speed up namespace cleanup. (default 1)
kube# [ 11.543070] kube-apiserver[1500]: --enable-garbage-collector Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-controller-manager. (default true)
kube# [ 11.543235] kube-apiserver[1500]: --encryption-provider-config string The file containing configuration for encryption providers to be used for storing secrets in etcd
kube# [ 11.543481] kube-apiserver[1500]: --etcd-cafile string SSL Certificate Authority file used to secure etcd communication.
kube# [ 11.543727] kube-apiserver[1500]: --etcd-certfile string SSL certification file used to secure etcd communication.
kube# [ 11.543956] kube-apiserver[1500]: --etcd-compaction-interval duration The interval of compaction requests. If 0, the compaction request from apiserver is disabled. (default 5m0s)
kube# [ 11.544223] kube-apiserver[1500]: --etcd-count-metric-poll-period duration Frequency of polling etcd for number of resources per type. 0 disables the metric collection. (default 1m0s)
kube# [ 11.544442] kube-apiserver[1500]: --etcd-keyfile string SSL key file used to secure etcd communication.
kube# [ 11.606832] serial8250: too much work for irq4
kube# [ 11.544632] kube-apiserver[1500]: --etcd-prefix string The prefix to prepend to all resource paths in etcd. (default "/registry")
kube# [ 11.544855] kube-apiserver[1500]: --etcd-servers strings List of etcd servers to connect with (scheme://ip:port), comma separated.
kube# [ 11.545035] kube-apiserver[1500]: --etcd-servers-overrides strings Per-resource etcd servers overrides, comma separated. The individual override format: group/resource#servers, where servers are URLs, semicolon separated.
kube# [ 11.545224] kube-apiserver[1500]: --storage-backend string The storage backend for persistence. Options: 'etcd3' (default).
kube# [ 11.545421] kube-apiserver[1500]: --storage-media-type string The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting. (default "application/vnd.kubernetes.protobuf")
kube# [ 11.545615] kube-apiserver[1500]: --watch-cache Enable watch caching in the apiserver (default true)
kube# [ 11.545904] kube-apiserver[1500]: --watch-cache-sizes strings Watch cache size settings for some resources (pods, nodes, etc.), comma separated. The individual setting format: resource[.group]#size, where resource is lowercase plural (no version), group is omitted for resources of apiVersion v1 (the legacy core API) and included for others, and size is a number. It takes effect when watch-cache is enabled. Some resources (replicationcontrollers, endpoints, nodes, pods, services, apiservices.apiregistration.k8s.io) have system defaults set by heuristics, others default to default-watch-cache-size
kube# [ 11.546144] kube-apiserver[1500]: Secure serving flags:
kube# [ 11.546369] kube-apiserver[1500]: --bind-address ip The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank, all interfaces will be used (0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 0.0.0.0)
kube# [ 11.546638] kube-apiserver[1500]: --cert-dir string The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored. (default "/var/run/kubernetes")
kube# [ 11.546950] kube-apiserver[1500]: --http2-max-streams-per-connection int The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
kube# [ 11.547147] kube-apiserver[1500]: --secure-port int The port on which to serve HTTPS with authentication and authorization. It cannot be switched off with 0. (default 6443)
kube# [ 11.547331] kube-apiserver[1500]: --tls-cert-file string File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
kube# [ 11.572011] systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=1/FAILURE
kube# [ 11.572363] kube-apiserver[1500]: --tls-cipher-suites strings Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Possible values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA
kube# [ 11.572659] kube-apiserver[1500]: --tls-min-version string Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
kube# [ 11.573001] kube-apiserver[1500]: --tls-private-key-file string File containing the default x509 private key matching --tls-cert-file.
kube# [ 11.573181] kube-apiserver[1500]: --tls-sni-cert-key namedCertKey A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])
kube# [ 11.573384] kube-apiserver[1500]: Insecure serving flags:
kube# [ 11.573571] kube-apiserver[1500]: --address ip The IP address on which to serve the insecure --port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 127.0.0.1) (DEPRECATED: see --bind-address instead.)
kube# [ 11.573883] systemd[1]: kube-apiserver.service: Failed with result 'exit-code'.
kube# [ 11.574182] kube-apiserver[1500]: --insecure-bind-address ip The IP address on which to serve the --insecure-port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 127.0.0.1) (DEPRECATED: This flag will be removed in a future version.)
kube# [ 11.574374] kube-apiserver[1500]: --insecure-port int The port on which to serve unsecured, unauthenticated access. (default 8080) (DEPRECATED: This flag will be removed in a future version.)
kube# [ 11.574562] kube-apiserver[1500]: --port int The port on which to serve unsecured, unauthenticated access. Set to 0 to disable. (default 8080) (DEPRECATED: see --secure-port instead.)
kube# [ 11.574811] kube-apiserver[1500]: Auditing flags:
kube# [ 11.575098] kube-apiserver[1500]: --audit-dynamic-configuration Enables dynamic audit configuration. This feature also requires the DynamicAuditing feature flag
kube# [ 11.575274] kube-apiserver[1500]: --audit-log-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
kube# [ 11.575466] kube-apiserver[1500]: --audit-log-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 1)
kube# [ 11.575652] kube-apiserver[1500]: --audit-log-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.
kube# [ 11.575989] kube-apiserver[1500]: --audit-log-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.
kube# [ 11.576091] kube-apiserver[1500]: --audit-log-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode.
kube# [ 11.576270] kube-apiserver[1500]: --audit-log-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode.
kube# [ 11.576464] kube-apiserver[1500]: --audit-log-format string Format of saved audits. "legacy" indicates 1-line text format for each event. "json" indicates structured json format. Known formats are legacy,json. (default "json")
kube# [ 11.600094] kube-apiserver[1500]: --audit-log-maxage int The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.
kube# [ 11.657746] serial8250: too much work for irq4
kube# [ 11.600539] kube-apiserver[1500]: --audit-log-maxbackup int The maximum number of old audit log files to retain.
kube# [ 11.600805] kube-apiserver[1500]: --audit-log-maxsize int The maximum size in megabytes of the audit log file before it gets rotated.
kube# [ 11.600991] kube-apiserver[1500]: --audit-log-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "blocking")
kube# [ 11.601181] kube-apiserver[1500]: --audit-log-path string If set, all requests coming to the apiserver will be logged to this file. '-' means standard out.
kube# [ 11.601386] kube-apiserver[1500]: --audit-log-truncate-enabled Whether event and batch truncating is enabled.
kube# [ 11.601581] kube-apiserver[1500]: --audit-log-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
kube# [ 11.601821] kube-apiserver[1500]: --audit-log-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
kube# [ 11.602084] kube-apiserver[1500]: --audit-log-version string API group and version used for serializing audit events written to log. (default "audit.k8s.io/v1")
kube# [ 11.602269] kube-apiserver[1500]: --audit-policy-file string Path to the file that defines the audit policy configuration.
kube# [ 11.602478] kube-apiserver[1500]: --audit-webhook-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
kube# [ 11.602695] kube-apiserver[1500]: --audit-webhook-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 400)
kube# [ 11.603466] kube-apiserver[1500]: --audit-webhook-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode. (default 30s)
kube# [ 11.603677] kube-apiserver[1500]: --audit-webhook-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode. (default 15)
kube# [ 11.603990] kube-apiserver[1500]: --audit-webhook-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode. (default true)
kube# [ 11.604205] kube-apiserver[1500]: --audit-webhook-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode. (default 10)
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# [ 11.604470] kube-apiserver[1500]: --audit-webhook-config-file string Path to a kubeconfig formatted file that defines the audit webhook configuration.
kube# [ 11.604671] kube-apiserver[1500]: --audit-webhook-initial-backoff duration The amount of time to wait before retrying the first failed request. (default 10s)
kube# [ 11.604971] kube-apiserver[1500]: --audit-webhook-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "batch")
kube# [ 11.605156] kube-apiserver[1500]: --audit-webhook-truncate-enabled Whether event and batch truncating is enabled.
kube# [ 11.605352] kube-apiserver[1500]: --audit-webhook-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
kube# [ 11.605555] kube-apiserver[1500]: --audit-webhook-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
kube# [ 11.629668] kube-apiserver[1500]: --audit-webhook-version string API group and version used for serializing audit events written to webhook. (default "audit.k8s.io/v1")
kube# [ 11.629977] kube-apiserver[1500]: Features flags:
kube# [ 11.630235] kube-apiserver[1500]: --contention-profiling Enable lock contention profiling, if profiling is enabled
kube# [ 11.630440] kube-apiserver[1500]: --profiling Enable profiling via web interface host:port/debug/pprof/ (default true)
kube# [ 11.630669] kube-apiserver[1500]: Authentication flags:
kube# [ 11.630981] kube-apiserver[1500]: --anonymous-auth Enables anonymous requests to the secure port of the API server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated. (default true)
kube# [ 11.631167] kube-apiserver[1500]: --api-audiences strings Identifiers of the API. The service account token authenticator will validate that tokens used against the API are bound to at least one of these audiences. If the --service-account-issuer flag is configured and this flag is not, this field defaults to a single element list containing the issuer URL.
kube# [ 11.631365] kube-apiserver[1500]: --authentication-token-webhook-cache-ttl duration The duration to cache responses from the webhook token authenticator. (default 2m0s)
kube# [ 11.631553] kube-apiserver[1500]: --authentication-token-webhook-config-file string File with webhook configuration for token authentication in kubeconfig format. The API server will query the remote service to determine authentication for bearer tokens.
kube# [ 11.631754] kube-apiserver[1500]: --basic-auth-file string If set, the file that will be used to admit requests to the secure port of the API server via http basic authentication.
kube# [ 11.632079] kube-apiserver[1500]: --client-ca-file string If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
kube# [ 11.632269] kube-apiserver[1500]: --enable-bootstrap-token-auth Enable to allow secrets of type 'bootstrap.kubernetes.io/token' in the 'kube-system' namespace to be used for TLS bootstrapping authentication.
kube# [ 11.632514] kube-apiserver[1500]: --oidc-ca-file string If set, the OpenID server's certificate will be verified by one of the authorities in the oidc-ca-file, otherwise the host's root CA set will be used.
kube# [ 11.632834] kube-apiserver[1500]: --oidc-client-id string The client ID for the OpenID Connect client, must be set if oidc-issuer-url is set.
kube# [ 11.633083] kube-apiserver[1500]: --oidc-groups-claim string If provided, the name of a custom OpenID Connect claim for specifying user groups. The claim value is expected to be a string or array of strings. This flag is experimental, please see the authentication documentation for further details.
kube# [ 11.633269] kube-apiserver[1500]: --oidc-groups-prefix string If provided, all groups will be prefixed with this value to prevent conflicts with other authentication strategies.
kube# [ 11.633459] kube-apiserver[1500]: --oidc-issuer-url string The URL of the OpenID issuer, only HTTPS scheme will be accepted. If set, it will be used to verify the OIDC JSON Web Token (JWT).
kube# [ 11.708982] serial8250: too much work for irq4
kube# [ 11.633652] kube-apiserver[1500]: --oidc-required-claim mapStringString A key=value pair that describes a required claim in the ID Token. If set, the claim is verified to be present in the ID Token with a matching value. Repeat this flag to specify multiple claims.
kube# [ 11.633979] kube-apiserver[1500]: --oidc-signing-algs strings Comma-separated list of allowed JOSE asymmetric signing algorithms. JWTs with a 'alg' header value not in this list will be rejected. Values are defined by RFC 7518 https://tools.ietf.org/html/rfc7518#section-3.1. (default [RS256])
kube# [ 11.634169] kube-apiserver[1500]: --oidc-username-claim string The OpenID claim to use as the user name. Note that claims other than the default ('sub') are not guaranteed to be unique and immutable. This flag is experimental, please see the authentication documentation for further details. (default "sub")
kube# [ 11.658721] kube-apiserver[1500]: --oidc-username-prefix string If provided, all usernames will be prefixed with this value. If not provided, username claims other than 'email' are prefixed by the issuer URL to avoid clashes. To skip any prefixing, provide the value '-'.
kube# [ 11.659009] kube-apiserver[1500]: --requestheader-allowed-names strings List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
kube# [ 11.659190] kube-apiserver[1500]: --requestheader-client-ca-file string Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
kube# [ 11.659371] kube-apiserver[1500]: --requestheader-extra-headers-prefix strings List of request header prefixes to inspect. X-Remote-Extra- is suggested.
kube# [ 11.659561] kube-apiserver[1500]: --requestheader-group-headers strings List of request headers to inspect for groups. X-Remote-Group is suggested.
kube# [ 11.659818] kube-apiserver[1500]: --requestheader-username-headers strings List of request headers to inspect for usernames. X-Remote-User is common.
kube# [ 11.660090] kube-apiserver[1500]: --service-account-issuer string Identifier of the service account token issuer. The issuer will assert this identifier in "iss" claim of issued tokens. This value is a string or URI.
kube# [ 11.660282] kube-apiserver[1500]: --service-account-key-file stringArray File containing PEM-encoded x509 RSA or ECDSA private or public keys, used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. If unspecified, --tls-private-key-file is used. Must be specified when --service-account-signing-key is provided
kube# [ 11.660467] kube-apiserver[1500]: --service-account-lookup If true, validate ServiceAccount tokens exist in etcd as part of authentication. (default true)
kube# [ 11.660749] kube-apiserver[1500]: --service-account-max-token-expiration duration The maximum validity duration of a token created by the service account token issuer. If an otherwise valid TokenRequest with a validity duration larger than this value is requested, a token will be issued with a validity duration of this value.
kube# [ 11.661083] kube-apiserver[1500]: --token-auth-file string If set, the file that will be used to secure the secure port of the API server via token authentication.
kube# [ 11.661314] kube-apiserver[1500]: Authorization flags:
kube# [ 11.661510] kube-apiserver[1500]: --authorization-mode strings Ordered list of plug-ins to do authorization on secure port. Comma-delimited list of: AlwaysAllow,AlwaysDeny,ABAC,Webhook,RBAC,Node. (default [AlwaysAllow])
kube# [ 11.661716] kube-apiserver[1500]: --authorization-policy-file string File with authorization policy in json line by line format, used with --authorization-mode=ABAC, on the secure port.
kube# [ 11.662049] kube-apiserver[1500]: --authorization-webhook-cache-authorized-ttl duration The duration to cache 'authorized' responses from the webhook authorizer. (default 5m0s)
kube# [ 11.662225] kube-apiserver[1500]: --authorization-webhook-cache-unauthorized-ttl duration The duration to cache 'unauthorized' responses from the webhook authorizer. (default 30s)
kube# [ 11.662420] kube-apiserver[1500]: --authorization-webhook-config-file string File with webhook configuration in kubeconfig format, used with --authorization-mode=Webhook. The API server will query the remote service to determine access on the API server's secure port.
kube# [ 11.662615] kube-apiserver[1500]: Cloud provider flags:
kube# [ 11.662888] kube-apiserver[1500]: --cloud-config string The path to the cloud provider configuration file. Empty string for no configuration file.
kube# [ 11.686821] kube-apiserver[1500]: --cloud-provider string The provider for cloud services. Empty string for no provider.
kube# [ 11.687091] kube-apiserver[1500]: Api enablement flags:
kube# [ 11.687327] kube-apiserver[1500]: --runtime-config mapStringString A set of key=value pairs that describe runtime configuration that may be passed to apiserver. <group>/<version> (or <version> for the core group) key can be used to turn on/off specific api versions. api/all is special key to control all api versions, be careful setting it false, unless you know what you do. api/legacy is deprecated, we will remove it in the future, so stop using it. (default )
kube# [ 11.687580] kube-apiserver[1500]: Admission flags:
kube# [ 11.687826] kube-apiserver[1500]: --admission-control strings Admission is divided into two phases. In the first phase, only mutating admission plugins run. In the second phase, only validating admission plugins run. The names in the below list may represent a validating plugin, a mutating plugin, or both. The order of plugins in which they are passed to this flag does not matter. Comma-delimited list of: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. (DEPRECATED: Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.)
kube# [ 11.688161] kube-apiserver[1500]: --admission-control-config-file string File with admission control configuration.
kube# [ 11.688384] kube-apiserver[1500]: --disable-admission-plugins strings admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
kube# [ 11.760572] serial8250: too much work for irq4
kube# [ 11.688880] kube-apiserver[1500]: --enable-admission-plugins strings admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
kube# [ 11.714734] kube-apiserver[1500]: Misc flags:
kube# [ 11.714999] kube-apiserver[1500]: --allow-privileged If true, allow privileged containers. [default=false]
kube# [ 11.715201] kube-apiserver[1500]: --apiserver-count int The number of apiservers running in the cluster, must be a positive number. (In use when --endpoint-reconciler-type=master-count is enabled.) (default 1)
kube# [ 11.715436] kube-apiserver[1500]: --enable-aggregator-routing Turns on aggregator routing requests to endpoints IP rather than cluster IP.
kube# [ 11.715638] kube-apiserver[1500]: --endpoint-reconciler-type string Use an endpoint reconciler (master-count, lease, none) (default "lease")
kube# [ 11.715964] kube-apiserver[1500]: --event-ttl duration Amount of time to retain events. (default 1h0m0s)
kube# [ 11.716149] kube-apiserver[1500]: --kubelet-certificate-authority string Path to a cert file for the certificate authority.
kube# [ 11.716339] kube-apiserver[1500]: --kubelet-client-certificate string Path to a client cert file for TLS.
kube# [ 11.716685] kube-apiserver[1500]: --kubelet-client-key string Path to a client key file for TLS.
kube# [ 11.717072] kube-apiserver[1500]: --kubelet-https Use https for kubelet connections. (default true)
kube# [ 11.717254] kube-apiserver[1500]: --kubelet-preferred-address-types strings List of the preferred NodeAddressTypes to use for kubelet connections. (default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP])
kube# [ 11.717450] kube-apiserver[1500]: --kubelet-read-only-port uint DEPRECATED: kubelet port. (default 10255)
kube# [ 11.717631] kube-apiserver[1500]: --kubelet-timeout duration Timeout for kubelet operations. (default 5s)
kube# [ 11.717951] kube-apiserver[1500]: --kubernetes-service-node-port int If non-zero, the Kubernetes master service (which apiserver creates/maintains) will be of type NodePort, using this as the value of the port. If zero, the Kubernetes master service will be of type ClusterIP.
kube# [ 11.718131] kube-apiserver[1500]: --max-connection-bytes-per-sec int If non-zero, throttle each user connection to this number of bytes/sec. Currently only applies to long-running requests.
kube# [ 11.718328] kube-apiserver[1500]: --proxy-client-cert-file string Client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins. It is expected that this cert includes a signature from the CA in the --requestheader-client-ca-file flag. That CA is published in the 'extension-apiserver-authentication' configmap in the kube-system namespace. Components receiving calls from kube-aggregator should use that CA to perform their half of the mutual TLS verification.
kube# [ 11.718525] kube-apiserver[1500]: --proxy-client-key-file string Private key for the client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins.
kube# [ 11.718836] kube-apiserver[1500]: --service-account-signing-key-file string Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)
kube# [ 11.719067] kube-apiserver[1500]: --service-cluster-ip-range ipNet A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods. (default 10.0.0.0/24)
kube# [ 11.719255] kube-apiserver[1500]: --service-node-port-range portRange A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)
kube# [ 11.719441] kube-apiserver[1500]: Global flags:
kube# [ 11.719632] kube-apiserver[1500]: --alsologtostderr log to standard error as well as files
kube# [ 11.719886] kube-apiserver[1500]: -h, --help help for kube-apiserver
kube# [ 11.720172] kube-apiserver[1500]: --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
kube# [ 11.746427] kube-apiserver[1500]: --log-dir string If non-empty, write log files in this directory
kube# [ 11.746617] kube-apiserver[1500]: --log-file string If non-empty, use this log file
kube# [ 11.747002] kube-apiserver[1500]: --log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
kube# [ 11.747222] kube-apiserver[1500]: --log-flush-frequency duration Maximum number of seconds between log flushes (default 5s)
kube# [ 11.747439] kube-apiserver[1500]: --logtostderr log to standard error instead of files (default true)
kube# [ 11.747667] kube-apiserver[1500]: --skip-headers If true, avoid header prefixes in the log messages
kube# [ 11.748037] kube-apiserver[1500]: --skip-log-headers If true, avoid headers when opening log files
kube# [ 11.748249] kube-apiserver[1500]: --stderrthreshold severity logs at or above this threshold go to stderr (default 2)
kube# [ 11.748480] kube-apiserver[1500]: -v, --v Level number for the log level verbosity
kube# [ 11.748694] kube-apiserver[1500]: --version version[=true] Print version information and quit
kube# [ 11.749069] kube-apiserver[1500]: --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
kube# [ 11.749348] kube-apiserver[1500]: error: unable to load server certificate: open /var/lib/kubernetes/secrets/kube-apiserver.pem: no such file or directory
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube: exit status 1
(0.18 seconds)
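The probe above is what the test driver re-runs until the node reports Ready. As a minimal sketch, an equivalent retry loop in shell would look like this (the one-second interval is an assumption; the driver's actual polling cadence is not visible in this log):

  # poll until the node shows up as Ready
  while ! kubectl get node kube.my.xzy | grep -w Ready; do
    sleep 1  # assumed retry interval
  done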
kube# [ 11.991504] systemd[1]: kube-scheduler.service: Service RestartSec=5s expired, scheduling restart.
kube# [ 11.991799] systemd[1]: kube-scheduler.service: Scheduled restart job, restart counter is at 1.
kube# [ 11.992155] systemd[1]: Stopped Kubernetes Scheduler Service.
kube# [ 11.994093] systemd[1]: Started Kubernetes Scheduler Service.
kube# [ 12.204808] kubelet[1489]: Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 12.205062] kubelet[1489]: Flag --authentication-token-webhook has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 12.205306] kubelet[1489]: Flag --authentication-token-webhook-cache-ttl has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 12.205541] kubelet[1489]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 12.205726] kubelet[1489]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 12.205947] kubelet[1489]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 12.206193] kubelet[1489]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 12.206432] kubelet[1489]: Flag --hairpin-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 12.206821] kubelet[1489]: Flag --healthz-bind-address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 12.207014] kubelet[1489]: Flag --healthz-port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 12.207208] kubelet[1489]: Flag --port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 12.207417] kubelet[1489]: Flag --tls-cert-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 12.207604] kubelet[1489]: Flag --tls-private-key-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 12.227517] kubelet[1489]: F0127 01:31:58.171086 1489 server.go:253] unable to load client CA file /var/lib/kubernetes/secrets/ca.pem: open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube# [ 12.232833] systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
kube# [ 12.233013] systemd[1]: kubelet.service: Failed with result 'exit-code'.
kube# [ 12.246593] kube-scheduler[1537]: I0127 01:31:58.189893 1537 serving.go:319] Generated self-signed cert in-memory
kube# [ 12.515691] kube-scheduler[1537]: W0127 01:31:58.459057 1537 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
kube# [ 12.515967] kube-scheduler[1537]: W0127 01:31:58.459086 1537 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
kube# [ 12.516260] kube-scheduler[1537]: W0127 01:31:58.459102 1537 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
kube# [ 12.521530] kube-scheduler[1537]: invalid configuration: [unable to read client-cert /var/lib/kubernetes/secrets/kube-scheduler-client.pem for kube-scheduler due to open /var/lib/kubernetes/secrets/kube-scheduler-client.pem: no such file or directory, unable to read client-key /var/lib/kubernetes/secrets/kube-scheduler-client-key.pem for kube-scheduler due to open /var/lib/kubernetes/secrets/kube-scheduler-client-key.pem: no such file or directory, unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory]
kube# [ 12.525882] systemd[1]: kube-scheduler.service: Main process exited, code=exited, status=1/FAILURE
kube# [ 12.526044] systemd[1]: kube-scheduler.service: Failed with result 'exit-code'.
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube: exit status 1
(0.04 seconds)
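All of the failures so far trace back to the same empty secrets directory. As a hypothetical diagnostic (not part of the test run), listing it on the VM would show which of the paths named in the errors are still missing:

  ls -l /var/lib/kubernetes/secrets/
  # expected once certmgr has delivered the certificates:
  #   ca.pem, cluster-admin.pem, cluster-admin-key.pem,
  #   kube-apiserver.pem, kubelet-client.pem, kubelet-client-key.pem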
kube# [ 13.483640] systemd[1]: kubelet.service: Service RestartSec=1s expired, scheduling restart.
kube# [ 13.484059] systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
kube# [ 13.484542] systemd[1]: Stopped Kubernetes Kubelet Service.
kube# [ 13.486141] systemd[1]: Starting Kubernetes Kubelet Service...
kube# [ 13.490427] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1576]: Seeding docker image: /nix/store/zfkrn9hmwiay3fwj7ilwv52rd6myn4i1-docker-image-pause.tar.gz
kube# [ 13.771870] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1576]: Loaded image: pause:latest
kube# [ 13.774911] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1576]: Seeding docker image: /nix/store/ggrzs3gzv69xzk02ckzijc2caqv738kk-docker-image-coredns-coredns-1.5.0.tar
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube: exit status 1
(0.04 seconds)
kube# [ 13.900486] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1576]: Loaded image: coredns/coredns:1.5.0
kube# [ 13.909329] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1576]: Linking cni package: /nix/store/9pqia3j6lxz57qa36w2niphr1f5vsirr-cni-plugins-0.8.2
kube# [ 13.916952] systemd[1]: Started Kubernetes Kubelet Service.
kube# [ 13.985046] kubelet[1655]: Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.985232] kubelet[1655]: Flag --authentication-token-webhook has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.985442] kubelet[1655]: Flag --authentication-token-webhook-cache-ttl has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.985743] kubelet[1655]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.985964] kubelet[1655]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.986309] kubelet[1655]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.986534] kubelet[1655]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.986867] kubelet[1655]: Flag --hairpin-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.987184] kubelet[1655]: Flag --healthz-bind-address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.987445] kubelet[1655]: Flag --healthz-port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.987705] kubelet[1655]: Flag --port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.988030] kubelet[1655]: Flag --tls-cert-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.988226] kubelet[1655]: Flag --tls-private-key-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 13.988407] kubelet[1655]: F0127 01:31:59.928494 1655 server.go:253] unable to load client CA file /var/lib/kubernetes/secrets/ca.pem: open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube# [ 14.010094] systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
kube# [ 14.010276] systemd[1]: kubelet.service: Failed with result 'exit-code'.
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# * unable to read certificate-authority /var/lib/kubernetes/secrets/ca.pem for local due to open /var/lib/kubernetes/secrets/ca.pem: no such file or directory
kube: exit status 1
(0.05 seconds)
kube# [ 15.034305] systemd[1]: kube-certmgr-bootstrap.service: Service RestartSec=10s expired, scheduling restart.
kube# [ 15.034569] systemd[1]: kube-certmgr-bootstrap.service: Scheduled restart job, restart counter is at 1.
kube# [ 15.035109] systemd[1]: kubelet.service: Service RestartSec=1s expired, scheduling restart.
kube# [ 15.035434] systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
kube# [ 15.035793] systemd[1]: Stopped Kubernetes certmgr bootstrapper.
kube# [ 15.038159] systemd[1]: Started Kubernetes certmgr bootstrapper.
kube# [ 15.038340] systemd[1]: Stopped Kubernetes Kubelet Service.
kube# [ 15.040019] systemd[1]: Starting Kubernetes Kubelet Service...
kube# [ 15.044132] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1692]: Seeding docker image: /nix/store/zfkrn9hmwiay3fwj7ilwv52rd6myn4i1-docker-image-pause.tar.gz
kube# [ 15.056878] s2mx5lspihy60gwff025p4dbs08yzgyi-unit-script-kube-certmgr-bootstrap-start[1691]: % Total % Received % Xferd Average Speed Time Time Time Current
kube# [ 15.057092] s2mx5lspihy60gwff025p4dbs08yzgyi-unit-script-kube-certmgr-bootstrap-start[1691]: Dload Upload Total Spent Left Speed
kube# [ 15.079853] cfssl[1033]: 2020/01/27 01:32:01 [INFO] 192.168.1.1:59232 - "POST /api/v1/cfssl/info" 200
kube# s2mx5lspihy60gwff025p4dbs08yzgyi-unit-script-kube-certmgr-bootstrap-start[1691]: 100 1434 100 1432 100 2 62260 86 --:--:-- --:--:-- --:--:-- 62347
kube# [ 15.085294] systemd[1]: kube-certmgr-bootstrap.service: Succeeded.
kube# [ 15.085708] systemd[1]: kube-certmgr-bootstrap.service: Consumed 18ms CPU time, received 3.5K IP traffic, sent 1.7K IP traffic.
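The bootstrap unit fetched the cluster CA by POSTing to cfssl's info endpoint, which is what the cfssl access-log line above records and what the curl transfer stats (about 1.4K received) correspond to. A hand-run equivalent would look roughly like this; the server address and port are assumptions (192.168.1.1 is inferred from the log, 8888 is cfssl's default serving port):

  # ask the cfssl server for its CA certificate; the response is JSON
  curl -d '{}' http://192.168.1.1:8888/api/v1/cfssl/info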
kube# [ 15.332839] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1692]: Loaded image: pause:latest
kube# [ 15.335904] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1692]: Seeding docker image: /nix/store/ggrzs3gzv69xzk02ckzijc2caqv738kk-docker-image-coredns-coredns-1.5.0.tar
kube# [ 15.381896] systemd[1]: kube-addon-manager.service: Service RestartSec=10s expired, scheduling restart.
kube# [ 15.382153] systemd[1]: kube-addon-manager.service: Scheduled restart job, restart counter is at 1.
kube# [ 15.382543] systemd[1]: Stopped Kubernetes addon manager.
kube# [ 15.384474] systemd[1]: Starting Kubernetes addon manager...
kube# [ 15.431619] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[1752]: Error in configuration:
kube# [ 15.431874] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[1752]: * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# [ 15.432247] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[1752]: * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# [ 15.437250] systemd[1]: kube-addon-manager.service: Control process exited, code=exited, status=1/FAILURE
kube# [ 15.437540] systemd[1]: kube-addon-manager.service: Failed with result 'exit-code'.
kube# [ 15.438024] systemd[1]: Failed to start Kubernetes addon manager.
kube# [ 15.460543] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1692]: Loaded image: coredns/coredns:1.5.0
kube# [ 15.469322] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1692]: Linking cni package: /nix/store/9pqia3j6lxz57qa36w2niphr1f5vsirr-cni-plugins-0.8.2
kube# [ 15.476709] systemd[1]: Started Kubernetes Kubelet Service.
kube# [ 15.524203] kubelet[1776]: Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.524386] kubelet[1776]: Flag --authentication-token-webhook has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.524602] kubelet[1776]: Flag --authentication-token-webhook-cache-ttl has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.524930] kubelet[1776]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.525114] kubelet[1776]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.525303] kubelet[1776]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.525509] kubelet[1776]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.525682] kubelet[1776]: Flag --hairpin-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.525864] kubelet[1776]: Flag --healthz-bind-address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.526103] kubelet[1776]: Flag --healthz-port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.526305] kubelet[1776]: Flag --port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.526474] kubelet[1776]: Flag --tls-cert-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.526700] kubelet[1776]: Flag --tls-private-key-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 15.562209] systemd[1]: Started Kubernetes systemd probe.
kube# [ 15.566984] kubelet[1776]: I0127 01:32:01.510417 1776 server.go:425] Version: v1.15.6
kube# [ 15.567128] kubelet[1776]: I0127 01:32:01.510541 1776 plugins.go:103] No cloud provider specified.
kube# [ 15.570952] kubelet[1776]: F0127 01:32:01.514517 1776 server.go:273] failed to run Kubelet: invalid kubeconfig: invalid configuration: [unable to read client-cert /var/lib/kubernetes/secrets/kubelet-client.pem for kubelet due to open /var/lib/kubernetes/secrets/kubelet-client.pem: no such file or directory, unable to read client-key /var/lib/kubernetes/secrets/kubelet-client-key.pem for kubelet due to open /var/lib/kubernetes/secrets/kubelet-client-key.pem: no such file or directory]
kube# [ 15.572982] systemd[1]: run-r1c69a5d4967740e4b53c1bf6019c2a86.scope: Succeeded.
kube# [ 15.576451] systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
kube# [ 15.576598] systemd[1]: kubelet.service: Failed with result 'exit-code'.
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube: exit status 1
(0.05 seconds)
kube# [ 16.080674] systemd[1]: kube-proxy.service: Service RestartSec=5s expired, scheduling restart.
kube# [ 16.080997] systemd[1]: kube-proxy.service: Scheduled restart job, restart counter is at 2.
kube# [ 16.081344] systemd[1]: Stopped Kubernetes Proxy Service.
kube# [ 16.083033] systemd[1]: Started Kubernetes Proxy Service.
kube# [ 16.113471] kube-proxy[1816]: W0127 01:32:02.056673 1816 server.go:216] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
kube# [ 16.125861] kube-proxy[1816]: W0127 01:32:02.069384 1816 proxier.go:500] Failed to read file /lib/modules/4.19.95/modules.builtin with error open /lib/modules/4.19.95/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 16.128049] kube-proxy[1816]: W0127 01:32:02.071610 1816 proxier.go:513] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 16.129682] kube-proxy[1816]: W0127 01:32:02.073255 1816 proxier.go:513] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 16.131367] kube-proxy[1816]: W0127 01:32:02.074941 1816 proxier.go:513] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 16.133026] kube-proxy[1816]: W0127 01:32:02.076603 1816 proxier.go:513] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 16.134665] kube-proxy[1816]: W0127 01:32:02.078238 1816 proxier.go:513] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 16.146062] kube-proxy[1816]: F0127 01:32:02.089618 1816 server.go:449] invalid configuration: [unable to read client-cert /var/lib/kubernetes/secrets/kube-proxy-client.pem for kube-proxy due to open /var/lib/kubernetes/secrets/kube-proxy-client.pem: no such file or directory, unable to read client-key /var/lib/kubernetes/secrets/kube-proxy-client-key.pem for kube-proxy due to open /var/lib/kubernetes/secrets/kube-proxy-client-key.pem: no such file or directory]
kube# [ 16.150414] systemd[1]: kube-proxy.service: Main process exited, code=exited, status=255/EXCEPTION
kube# [ 16.150585] systemd[1]: kube-proxy.service: Failed with result 'exit-code'.
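The stop/restart churn in this log is plain systemd restart handling. The messages are consistent with units configured roughly like the sketch below; Restart=on-failure is an assumption, and only the RestartSec values actually appear in the log (5s for kube-proxy and kube-apiserver, 1s for kubelet, 10s for certmgr):

  [Service]
  Restart=on-failure
  RestartSec=5s  # produces 'Service RestartSec=5s expired, scheduling restart'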
kube# [ 16.803426] systemd[1]: kube-apiserver.service: Service RestartSec=5s expired, scheduling restart.
kube# [ 16.803702] systemd[1]: kube-apiserver.service: Scheduled restart job, restart counter is at 2.
kube# [ 16.804120] systemd[1]: kubelet.service: Service RestartSec=1s expired, scheduling restart.
kube# [ 16.804459] systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
kube# [ 16.804966] systemd[1]: certmgr.service: Service RestartSec=10s expired, scheduling restart.
kube# [ 16.805315] systemd[1]: certmgr.service: Scheduled restart job, restart counter is at 1.
kube# [ 16.805681] systemd[1]: Stopped certmgr.
kube# [ 16.808805] systemd[1]: Starting certmgr...
kube# [ 16.810029] systemd[1]: Started Kubernetes certmgr bootstrapper.
kube# [ 16.810265] systemd[1]: Stopped Kubernetes Kubelet Service.
kube# [ 16.810685] systemd[1]: Stopped Kubernetes APIServer Service.
kube# [ 16.812326] systemd[1]: Started Kubernetes APIServer Service.
kube# [ 16.813677] systemd[1]: Starting Kubernetes Kubelet Service...
kube# [ 16.819327] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1842]: Seeding docker image: /nix/store/zfkrn9hmwiay3fwj7ilwv52rd6myn4i1-docker-image-pause.tar.gz
kube# [ 16.828625] systemd[1]: kube-certmgr-bootstrap.service: Succeeded.
kube# [ 16.830528] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1839]: 2020/01/27 01:32:02 [INFO] certmgr: loading from config file /nix/store/bmm143bjzpgvrw7k50r36c5smy1n4pqm-certmgr.yaml
kube# [ 16.830747] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1839]: 2020/01/27 01:32:02 [INFO] manager: loading certificates from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d
kube# [ 16.833896] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1839]: 2020/01/27 01:32:02 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/addonManager.json
kube# [ 16.854387] cfssl[1033]: 2020/01/27 01:32:02 [INFO] 192.168.1.1:59234 - "POST /api/v1/cfssl/info" 200
kube# [ 16.871982] cfssl[1033]: 2020/01/27 01:32:02 [INFO] 192.168.1.1:59236 - "POST /api/v1/cfssl/info" 200
kube# [ 16.876411] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1839]: 2020/01/27 01:32:02 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/apiServer.json
kube# [ 16.881457] cfssl[1033]: 2020/01/27 01:32:02 [INFO] 192.168.1.1:59238 - "POST /api/v1/cfssl/info" 200
kube# [ 16.891055] kube-apiserver[1841]: Flag --insecure-bind-address has been deprecated, This flag will be removed in a future version.
kube# [ 16.891214] kube-apiserver[1841]: Flag --insecure-port has been deprecated, This flag will be removed in a future version.
kube# [ 16.891582] kube-apiserver[1841]: I0127 01:32:02.834387 1841 server.go:560] external host was not specified, using 192.168.1.1
kube# [ 16.891984] kube-apiserver[1841]: I0127 01:32:02.834567 1841 server.go:147] Version: v1.15.6
kube# [ 16.892215] kube-apiserver[1841]: Error: unable to load server certificate: open /var/lib/kubernetes/secrets/kube-apiserver.pem: no such file or directory
kube# [ 16.892526] kube-apiserver[1841]: Usage:
kube# [ 16.892836] kube-apiserver[1841]: kube-apiserver [flags]
kube# [ 16.893071] kube-apiserver[1841]: Generic flags:
kube# [ 16.893421] kube-apiserver[1841]: --advertise-address ip The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.
kube# [ 16.893596] kube-apiserver[1841]: --cloud-provider-gce-lb-src-cidrs cidrs CIDRs opened in GCE firewall for LB traffic proxy & health checks (default 130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16)
kube# [ 16.893844] kube-apiserver[1841]: --cors-allowed-origins strings List of allowed origins for CORS, comma separated. An allowed origin can be a regular expression to support subdomain matching. If this list is empty CORS will not be enabled.
kube# [ 16.894048] kube-apiserver[1841]: --default-not-ready-toleration-seconds int Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration. (default 300)
kube# [ 16.894235] kube-apiserver[1841]: --default-unreachable-toleration-seconds int Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration. (default 300)
kube# [ 16.894457] kube-apiserver[1841]: --enable-inflight-quota-handler If true, replace the max-in-flight handler with an enhanced one that queues and dispatches with priority and fairness
kube# [ 16.894697] kube-apiserver[1841]: --external-hostname string The hostname to use when generating externalized URLs for this master (e.g. Swagger API Docs).
kube# [ 16.894982] kube-apiserver[1841]: --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
kube# [ 16.895160] kube-apiserver[1841]: APIListChunking=true|false (BETA - default=true)
kube# [ 16.895363] kube-apiserver[1841]: APIResponseCompression=true|false (ALPHA - default=false)
kube# [ 16.895566] kube-apiserver[1841]: AllAlpha=true|false (ALPHA - default=false)
kube# [ 16.895907] kube-apiserver[1841]: AppArmor=true|false (BETA - default=true)
kube# [ 16.896128] kube-apiserver[1841]: AttachVolumeLimit=true|false (BETA - default=true)
kube# [ 16.896324] kube-apiserver[1841]: BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
kube# [ 16.896515] kube-apiserver[1841]: BlockVolume=true|false (BETA - default=true)
kube# [ 16.896714] kube-apiserver[1841]: BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
kube# [ 16.897011] kube-apiserver[1841]: CPUManager=true|false (BETA - default=true)
kube# [ 16.897204] kube-apiserver[1841]: CRIContainerLogRotation=true|false (BETA - default=true)
kube# [ 16.897397] kube-apiserver[1841]: CSIBlockVolume=true|false (BETA - default=true)
kube# [ 16.897590] kube-apiserver[1841]: CSIDriverRegistry=true|false (BETA - default=true)
kube# [ 16.897847] kube-apiserver[1841]: CSIInlineVolume=true|false (ALPHA - default=false)
kube# [ 16.898098] kube-apiserver[1841]: CSIMigration=true|false (ALPHA - default=false)
kube# [ 16.898285] kube-apiserver[1841]: CSIMigrationAWS=true|false (ALPHA - default=false)
kube# [ 16.898481] kube-apiserver[1841]: CSIMigrationAzureDisk=true|false (ALPHA - default=false)
kube# [ 16.898674] kube-apiserver[1841]: CSIMigrationAzureFile=true|false (ALPHA - default=false)
kube# [ 16.899014] kube-apiserver[1841]: CSIMigrationGCE=true|false (ALPHA - default=false)
kube# [ 16.926108] kube-apiserver[1841]: CSIMigrationOpenStack=true|false (ALPHA - default=false)
kube# [ 16.926292] kube-apiserver[1841]: CSINodeInfo=true|false (BETA - default=true)
kube# [ 16.926475] kube-apiserver[1841]: CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
kube# [ 16.926669] kube-apiserver[1841]: CustomResourceDefaulting=true|false (ALPHA - default=false)
kube# [ 16.927084] cfssl[1033]: 2020/01/27 01:32:02 [INFO] 192.168.1.1:59240 - "POST /api/v1/cfssl/info" 200
kube# [ 16.927432] kube-apiserver[1841]: CustomResourcePublishOpenAPI=true|false (BETA - default=true)
kube# [ 16.927600] kube-apiserver[1841]: CustomResourceSubresources=true|false (BETA - default=true)
kube# [ 16.927933] kube-apiserver[1841]: CustomResourceValidation=true|false (BETA - default=true)
kube# [ 16.928155] kube-apiserver[1841]: CustomResourceWebhookConversion=true|false (BETA - default=true)
kube# [ 16.928343] kube-apiserver[1841]: DebugContainers=true|false (ALPHA - default=false)
kube# [ 16.928526] kube-apiserver[1841]: DevicePlugins=true|false (BETA - default=true)
kube# [ 16.928722] kube-apiserver[1841]: DryRun=true|false (BETA - default=true)
kube# [ 16.928989] kube-apiserver[1841]: DynamicAuditing=true|false (ALPHA - default=false)
kube# [ 16.929161] kube-apiserver[1841]: DynamicKubeletConfig=true|false (BETA - default=true)
kube# [ 16.929344] kube-apiserver[1841]: ExpandCSIVolumes=true|false (ALPHA - default=false)
kube# [ 16.929526] kube-apiserver[1841]: ExpandInUsePersistentVolumes=true|false (BETA - default=true)
kube# [ 16.929743] kube-apiserver[1841]: ExpandPersistentVolumes=true|false (BETA - default=true)
kube# [ 16.929987] kube-apiserver[1841]: ExperimentalCriticalPodAnnotation=true|false (ALPHA - default=false)
kube# [ 16.930162] kube-apiserver[1841]: ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
kube# [ 16.930345] kube-apiserver[1841]: HyperVContainer=true|false (ALPHA - default=false)
kube# [ 17.000442] serial8250: too much work for irq4
kube# [ 16.930561] kube-apiserver[1841]: KubeletPodResources=true|false (BETA - default=true)
kube# [ 16.930751] kube-apiserver[1841]: LocalStorageCapacityIsolation=true|false (BETA - default=true)
kube# [ 16.931015] kube-apiserver[1841]: LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
kube# [ 16.931224] kube-apiserver[1841]: MountContainers=true|false (ALPHA - default=false)
kube# [ 16.931470] kube-apiserver[1841]: NodeLease=true|false (BETA - default=true)
kube# [ 16.931680] kube-apiserver[1841]: NonPreemptingPriority=true|false (ALPHA - default=false)
kube# [ 16.931995] kube-apiserver[1841]: PodShareProcessNamespace=true|false (BETA - default=true)
kube# [ 16.932184] kube-apiserver[1841]: ProcMountType=true|false (ALPHA - default=false)
kube# [ 16.932370] kube-apiserver[1841]: QOSReserved=true|false (ALPHA - default=false)
kube# [ 16.932555] kube-apiserver[1841]: RemainingItemCount=true|false (ALPHA - default=false)
kube# [ 16.932746] kube-apiserver[1841]: RequestManagement=true|false (ALPHA - default=false)
kube# [ 16.932974] kube-apiserver[1841]: ResourceLimitsPriorityFunction=true|false (ALPHA - default=false)
kube# [ 16.933222] kube-apiserver[1841]: ResourceQuotaScopeSelectors=true|false (BETA - default=true)
kube# [ 16.933386] kube-apiserver[1841]: RotateKubeletClientCertificate=true|false (BETA - default=true)
kube# [ 16.933575] kube-apiserver[1841]: RotateKubeletServerCertificate=true|false (BETA - default=true)
kube# [ 16.958599] kube-apiserver[1841]: RunAsGroup=true|false (BETA - default=true)
kube# [ 16.959023] kube-apiserver[1841]: RuntimeClass=true|false (BETA - default=true)
kube# [ 16.959290] kube-apiserver[1841]: SCTPSupport=true|false (ALPHA - default=false)
kube# [ 16.959549] kube-apiserver[1841]: ScheduleDaemonSetPods=true|false (BETA - default=true)
kube# [ 16.959937] kube-apiserver[1841]: ServerSideApply=true|false (ALPHA - default=false)
kube# [ 16.960299] kube-apiserver[1841]: ServiceLoadBalancerFinalizer=true|false (ALPHA - default=false)
kube# [ 16.960550] kube-apiserver[1841]: ServiceNodeExclusion=true|false (ALPHA - default=false)
kube# [ 16.960997] kube-apiserver[1841]: StorageVersionHash=true|false (BETA - default=true)
kube# [ 16.961190] kube-apiserver[1841]: StreamingProxyRedirects=true|false (BETA - default=true)
kube# [ 16.961468] kube-apiserver[1841]: SupportNodePidsLimit=true|false (BETA - default=true)
kube# [ 16.961706] kube-apiserver[1841]: SupportPodPidsLimit=true|false (BETA - default=true)
kube# [ 16.962148] kube-apiserver[1841]: Sysctls=true|false (BETA - default=true)
kube# [ 16.962436] kube-apiserver[1841]: TTLAfterFinished=true|false (ALPHA - default=false)
kube# [ 16.962680] kube-apiserver[1841]: TaintBasedEvictions=true|false (BETA - default=true)
kube# [ 16.963123] kube-apiserver[1841]: TaintNodesByCondition=true|false (BETA - default=true)
kube# [ 16.963363] kube-apiserver[1841]: TokenRequest=true|false (BETA - default=true)
kube# [ 16.963629] kube-apiserver[1841]: TokenRequestProjection=true|false (BETA - default=true)
kube# [ 16.964041] kube-apiserver[1841]: ValidateProxyRedirects=true|false (BETA - default=true)
kube# [ 16.964301] kube-apiserver[1841]: VolumePVCDataSource=true|false (ALPHA - default=false)
kube# [ 16.964581] kube-apiserver[1841]: VolumeSnapshotDataSource=true|false (ALPHA - default=false)
kube# [ 16.965003] kube-apiserver[1841]: VolumeSubpathEnvExpansion=true|false (BETA - default=true)
kube# [ 16.965212] kube-apiserver[1841]: WatchBookmark=true|false (ALPHA - default=false)
kube# [ 16.965425] kube-apiserver[1841]: WinDSR=true|false (ALPHA - default=false)
kube# [ 16.965616] kube-apiserver[1841]: WinOverlay=true|false (ALPHA - default=false)
kube# [ 16.965983] kube-apiserver[1841]: WindowsGMSA=true|false (ALPHA - default=false)
kube# [ 16.966217] kube-apiserver[1841]: --master-service-namespace string DEPRECATED: the namespace from which the kubernetes master services should be injected into pods. (default "default")
kube# [ 16.966469] kube-apiserver[1841]: --max-mutating-requests-inflight int The maximum number of mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. (default 200)
kube# [ 16.966662] kube-apiserver[1841]: --max-requests-inflight int The maximum number of non-mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. (default 400)
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# [ 16.966988] kube-apiserver[1841]: --min-request-timeout int An optional field indicating the minimum number of seconds a handler must keep a request open before timing it out. Currently only honored by the watch request handler, which picks a randomized value above this number as the connection timeout, to spread out load. (default 1800)
kube# [ 16.967166] kube-apiserver[1841]: --request-timeout duration An optional field indicating the duration a handler must keep a request open before timing it out. This is the default request timeout for requests but may be overridden by flags such as --min-request-timeout for specific types of requests. (default 1m0s)
kube# [ 16.992045] kube-apiserver[1841]: --target-ram-mb int Memory limit for apiserver in MB (used to configure sizes of caches, etc.)
kube# [ 16.992261] kube-apiserver[1841]: Etcd flags:
kube# [ 16.992480] kube-apiserver[1841]: --default-watch-cache-size int Default watch cache size. If zero, watch cache will be disabled for resources that do not have a default watch size set. (default 100)
kube# [ 16.992705] kube-apiserver[1841]: --delete-collection-workers int Number of workers spawned for DeleteCollection call. These are used to speed up namespace cleanup. (default 1)
kube# [ 16.992981] kube-apiserver[1841]: --enable-garbage-collector Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-controller-manager. (default true)
kube# [ 16.993187] kube-apiserver[1841]: --encryption-provider-config string The file containing configuration for encryption providers to be used for storing secrets in etcd
kube# [ 17.055797] serial8250: too much work for irq4
kube# [ 16.993457] kube-apiserver[1841]: --etcd-cafile string SSL Certificate Authority file used to secure etcd communication.
kube# [ 16.993743] kube-apiserver[1841]: --etcd-certfile string SSL certification file used to secure etcd communication.
kube# [ 16.994056] kube-apiserver[1841]: --etcd-compaction-interval duration The interval of compaction requests. If 0, the compaction request from apiserver is disabled. (default 5m0s)
kube# [ 16.994261] kube-apiserver[1841]: --etcd-count-metric-poll-period duration Frequency of polling etcd for number of resources per type. 0 disables the metric collection. (default 1m0s)
kube# [ 16.994579] kube-apiserver[1841]: --etcd-keyfile string SSL key file used to secure etcd communication.
kube# [ 16.994890] kube-apiserver[1841]: --etcd-prefix string The prefix to prepend to all resource paths in etcd. (default "/registry")
kube# [ 16.995088] kube-apiserver[1841]: --etcd-servers strings List of etcd servers to connect with (scheme://ip:port), comma separated.
kube# [ 16.995306] kube-apiserver[1841]: --etcd-servers-overrides strings Per-resource etcd servers overrides, comma separated. The individual override format: group/resource#servers, where servers are URLs, semicolon separated.
kube# [ 16.995524] kube-apiserver[1841]: --storage-backend string The storage backend for persistence. Options: 'etcd3' (default).
kube# [ 16.995764] kube-apiserver[1841]: --storage-media-type string The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting. (default "application/vnd.kubernetes.protobuf")
kube# [ 16.996231] kube-apiserver[1841]: --watch-cache Enable watch caching in the apiserver (default true)
kube# [ 16.996489] kube-apiserver[1841]: --watch-cache-sizes strings Watch cache size settings for some resources (pods, nodes, etc.), comma separated. The individual setting format: resource[.group]#size, where resource is lowercase plural (no version), group is omitted for resources of apiVersion v1 (the legacy core API) and included for others, and size is a number. It takes effect when watch-cache is enabled. Some resources (replicationcontrollers, endpoints, nodes, pods, services, apiservices.apiregistration.k8s.io) have system defaults set by heuristics, others default to default-watch-cache-size
kube# [ 16.997015] kube-apiserver[1841]: Secure serving flags:
kube# [ 16.997272] kube-apiserver[1841]: --bind-address ip The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank, all interfaces will be used (0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 0.0.0.0)
kube# [ 16.997484] kube-apiserver[1841]: --cert-dir string The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored. (default "/var/run/kubernetes")
kube# [ 16.997655] kube-apiserver[1841]: --http2-max-streams-per-connection int The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
kube# [ 16.997993] kube-apiserver[1841]: --secure-port int The port on which to serve HTTPS with authentication and authorization.It cannot be switched off with 0. (default 6443)
kube# [ 17.026954] kube-apiserver[1841]: --tls-cert-file string File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
kube# [ 17.027429] systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=1/FAILURE
kube# [ 17.027756] kube-apiserver[1841]: --tls-cipher-suites strings Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be use. Possible values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA
kube# [ 17.028203] kube-apiserver[1841]: --tls-min-version string Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
kube# [ 17.028385] kube-apiserver[1841]: --tls-private-key-file string File containing the default x509 private key matching --tls-cert-file.
kube# [ 17.028578] kube-apiserver[1841]: --tls-sni-cert-key namedCertKey A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])
kube# [ 17.028832] kube-apiserver[1841]: Insecure serving flags:
kube# [ 17.028993] kube-apiserver[1841]: --address ip The IP address on which to serve the insecure --port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 127.0.0.1) (DEPRECATED: see --bind-address instead.)
kube# [ 17.029299] systemd[1]: kube-apiserver.service: Failed with result 'exit-code'.
kube# [ 17.029568] kube-apiserver[1841]: --insecure-bind-address ip The IP address on which to serve the --insecure-port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 127.0.0.1) (DEPRECATED: This flag will be removed in a future version.)
kube# [ 17.029872] kube-apiserver[1841]: --insecure-port int The port on which to serve unsecured, unauthenticated access. (default 8080) (DEPRECATED: This flag will be removed in a future version.)
kube# [ 17.030115] kube-apiserver[1841]: --port int The port on which to serve unsecured, unauthenticated access. Set to 0 to disable. (default 8080) (DEPRECATED: see --secure-port instead.)
kube# [ 17.030306] kube-apiserver[1841]: Auditing flags:
kube# [ 17.030499] kube-apiserver[1841]: --audit-dynamic-configuration Enables dynamic audit configuration. This feature also requires the DynamicAuditing feature flag
kube# [ 17.030678] kube-apiserver[1841]: --audit-log-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
kube# [ 17.030995] kube-apiserver[1841]: --audit-log-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 1)
kube# [ 17.031174] kube-apiserver[1841]: --audit-log-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.
kube# [ 17.031363] kube-apiserver[1841]: --audit-log-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.
kube# [ 17.110802] serial8250: too much work for irq4
kube# [ 17.055021] kube-apiserver[1841]: --audit-log-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode.
kube# [ 17.055192] kube-apiserver[1841]: --audit-log-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode.
kube# [ 17.055371] kube-apiserver[1841]: --audit-log-format string Format of saved audits. "legacy" indicates 1-line text format for each event. "json" indicates structured json format. Known formats are legacy,json. (default "json")
kube# [ 17.055574] kube-apiserver[1841]: --audit-log-maxage int The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.
kube# [ 17.055931] kube-apiserver[1841]: --audit-log-maxbackup int The maximum number of old audit log files to retain.
kube# [ 17.056563] kube-apiserver[1841]: --audit-log-maxsize int The maximum size in megabytes of the audit log file before it gets rotated.
kube# [ 17.056852] kube-apiserver[1841]: --audit-log-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "blocking")
kube# [ 17.057100] kube-apiserver[1841]: --audit-log-path string If set, all requests coming to the apiserver will be logged to this file. '-' means standard out.
kube# [ 17.057280] kube-apiserver[1841]: --audit-log-truncate-enabled Whether event and batch truncating is enabled.
kube# [ 17.057479] kube-apiserver[1841]: --audit-log-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
kube# [ 17.057674] kube-apiserver[1841]: --audit-log-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
kube# [ 17.057991] kube-apiserver[1841]: --audit-log-version string API group and version used for serializing audit events written to log. (default "audit.k8s.io/v1")
kube# [ 17.058176] kube-apiserver[1841]: --audit-policy-file string Path to the file that defines the audit policy configuration.
kube# [ 17.058368] kube-apiserver[1841]: --audit-webhook-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
kube# [ 17.058577] kube-apiserver[1841]: --audit-webhook-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 400)
kube# [ 17.058961] kube-apiserver[1841]: --audit-webhook-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode. (default 30s)
kube# [ 17.059179] kube-apiserver[1841]: --audit-webhook-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode. (default 15)
kube# [ 17.059362] kube-apiserver[1841]: --audit-webhook-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode. (default true)
kube# [ 17.059562] kube-apiserver[1841]: --audit-webhook-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode. (default 10)
kube# [ 17.059752] kube-apiserver[1841]: --audit-webhook-config-file string Path to a kubeconfig formatted file that defines the audit webhook configuration.
kube# [ 17.059981] kube-apiserver[1841]: --audit-webhook-initial-backoff duration The amount of time to wait before retrying the first failed request. (default 10s)
kube# [ 17.060156] kube-apiserver[1841]: --audit-webhook-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "batch")
kube# [ 17.060347] kube-apiserver[1841]: --audit-webhook-truncate-enabled Whether event and batch truncating is enabled.
kube# [ 17.084185] kube-apiserver[1841]: --audit-webhook-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
kube# [ 17.084345] kube-apiserver[1841]: --audit-webhook-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
kube# [ 17.084534] kube-apiserver[1841]: --audit-webhook-version string API group and version used for serializing audit events written to webhook. (default "audit.k8s.io/v1")
kube# [ 17.084730] kube-apiserver[1841]: Features flags:
kube# [ 17.085015] kube-apiserver[1841]: --contention-profiling Enable lock contention profiling, if profiling is enabled
kube# [ 17.085235] kube-apiserver[1841]: --profiling Enable profiling via web interface host:port/debug/pprof/ (default true)
kube# [ 17.085424] kube-apiserver[1841]: Authentication flags:
kube# [ 17.085612] kube-apiserver[1841]: --anonymous-auth Enables anonymous requests to the secure port of the API server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated. (default true)
kube# [ 17.085849] kube-apiserver[1841]: --api-audiences strings Identifiers of the API. The service account token authenticator will validate that tokens used against the API are bound to at least one of these audiences. If the --service-account-issuer flag is configured and this flag is not, this field defaults to a single element list containing the issuer URL .
kube# [ 17.086034] kube-apiserver[1841]: --authentication-token-webhook-cache-ttl duration The duration to cache responses from the webhook token authenticator. (default 2m0s)
kube# [ 17.086223] kube-apiserver[1841]: --authentication-token-webhook-config-file string File with webhook configuration for token authentication in kubeconfig format. The API server will query the remote service to determine authentication for bearer tokens.
kube# [ 17.086412] kube-apiserver[1841]: --basic-auth-file string If set, the file that will be used to admit requests to the secure port of the API server via http basic authentication.
kube# [ 17.086615] kube-apiserver[1841]: --client-ca-file string If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
kube# [ 17.086851] kube-apiserver[1841]: --enable-bootstrap-token-auth Enable to allow secrets of type 'bootstrap.kubernetes.io/token' in the 'kube-system' namespace to be used for TLS bootstrapping authentication.
kube# [ 17.087053] kube-apiserver[1841]: --oidc-ca-file string If set, the OpenID server's certificate will be verified by one of the authorities in the oidc-ca-file, otherwise the host's root CA set will be used.
kube# [ 17.087293] kube-apiserver[1841]: --oidc-client-id string The client ID for the OpenID Connect client, must be set if oidc-issuer-url is set.
kube# [ 17.161636] serial8250: too much work for irq4
kube# [ 17.087490] kube-apiserver[1841]: --oidc-groups-claim string If provided, the name of a custom OpenID Connect claim for specifying user groups. The claim value is expected to be a string or array of strings. This flag is experimental, please see the authentication documentation for further details.
kube# [ 17.087733] kube-apiserver[1841]: --oidc-groups-prefix string If provided, all groups will be prefixed with this value to prevent conflicts with other authentication strategies.
kube# [ 17.087976] kube-apiserver[1841]: --oidc-issuer-url string The URL of the OpenID issuer, only HTTPS scheme will be accepted. If set, it will be used to verify the OIDC JSON Web Token (JWT).
kube# [ 17.088171] kube-apiserver[1841]: --oidc-required-claim mapStringString A key=value pair that describes a required claim in the ID Token. If set, the claim is verified to be present in the ID Token with a matching value. Repeat this flag to specify multiple claims.
kube# [ 17.112424] kube-apiserver[1841]: --oidc-signing-algs strings Comma-separated list of allowed JOSE asymmetric signing algorithms. JWTs with a 'alg' header value not in this list will be rejected. Values are defined by RFC 7518 https://tools.ietf.org/html/rfc7518#section-3.1. (default [RS256])
kube# [ 17.112678] kube-apiserver[1841]: --oidc-username-claim string The OpenID claim to use as the user name. Note that claims other than the default ('sub') is not guaranteed to be unique and immutable. This flag is experimental, please see the authentication documentation for further details. (default "sub")
kube# [ 17.113139] kube-apiserver[1841]: --oidc-username-prefix string If provided, all usernames will be prefixed with this value. If not provided, username claims other than 'email' are prefixed by the issuer URL to avoid clashes. To skip any prefixing, provide the value '-'.
kube# [ 17.113420] kube-apiserver[1841]: --requestheader-allowed-names strings List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
kube# [ 17.113604] kube-apiserver[1841]: --requestheader-client-ca-file string Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
kube# [ 17.113851] kube-apiserver[1841]: --requestheader-extra-headers-prefix strings List of request header prefixes to inspect. X-Remote-Extra- is suggested.
kube# [ 17.114012] kube-apiserver[1841]: --requestheader-group-headers strings List of request headers to inspect for groups. X-Remote-Group is suggested.
kube# [ 17.114211] kube-apiserver[1841]: --requestheader-username-headers strings List of request headers to inspect for usernames. X-Remote-User is common.
kube# [ 17.114457] kube-apiserver[1841]: --service-account-issuer string Identifier of the service account token issuer. The issuer will assert this identifier in "iss" claim of issued tokens. This value is a string or URI.
kube# [ 17.114682] kube-apiserver[1841]: --service-account-key-file stringArray File containing PEM-encoded x509 RSA or ECDSA private or public keys, used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. If unspecified, --tls-private-key-file is used. Must be specified when --service-account-signing-key is provided
kube# [ 17.114927] kube-apiserver[1841]: --service-account-lookup If true, validate ServiceAccount tokens exist in etcd as part of authentication. (default true)
kube# [ 17.115166] kube-apiserver[1841]: --service-account-max-token-expiration duration The maximum validity duration of a token created by the service account token issuer. If an otherwise valid TokenRequest with a validity duration larger than this value is requested, a token will be issued with a validity duration of this value.
kube# [ 17.115417] kube-apiserver[1841]: --token-auth-file string If set, the file that will be used to secure the secure port of the API server via token authentication.
kube# [ 17.115606] kube-apiserver[1841]: Authorization flags:
kube# [ 17.115829] kube-apiserver[1841]: --authorization-mode strings Ordered list of plug-ins to do authorization on secure port. Comma-delimited list of: AlwaysAllow,AlwaysDeny,ABAC,Webhook,RBAC,Node. (default [AlwaysAllow])
kube# [ 17.116002] kube-apiserver[1841]: --authorization-policy-file string File with authorization policy in json line by line format, used with --authorization-mode=ABAC, on the secure port.
kube# [ 17.116189] kube-apiserver[1841]: --authorization-webhook-cache-authorized-ttl duration The duration to cache 'authorized' responses from the webhook authorizer. (default 5m0s)
kube# [ 17.141837] kube-apiserver[1841]: --authorization-webhook-cache-unauthorized-ttl duration The duration to cache 'unauthorized' responses from the webhook authorizer. (default 30s)
kube# [ 17.142056] kube-apiserver[1841]: --authorization-webhook-config-file string File with webhook configuration in kubeconfig format, used with --authorization-mode=Webhook. The API server will query the remote service to determine access on the API server's secure port.
kube# [ 17.142291] kube-apiserver[1841]: Cloud provider flags:
kube# [ 17.142503] kube-apiserver[1841]: --cloud-config string The path to the cloud provider configuration file. Empty string for no configuration file.
kube# [ 17.142712] kube-apiserver[1841]: --cloud-provider string The provider for cloud services. Empty string for no provider.
kube# [ 17.142965] kube-apiserver[1841]: Api enablement flags:
kube# [ 17.143233] kube-apiserver[1841]: --runtime-config mapStringString A set of key=value pairs that describe runtime configuration that may be passed to apiserver. <group>/<version> (or <version> for the core group) key can be used to turn on/off specific api versions. api/all is special key to control all api versions, be careful setting it false, unless you know what you do. api/legacy is deprecated, we will remove it in the future, so stop using it. (default )
kube# [ 17.143521] kube-apiserver[1841]: Admission flags:
kube# [ 17.143753] kube-apiserver[1841]: --admission-control strings Admission is divided into two phases. In the first phase, only mutating admission plugins run. In the second phase, only validating admission plugins run. The names in the below list may represent a validating plugin, a mutating plugin, or both. The order of plugins in which they are passed to this flag does not matter. Comma-delimited list of: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. (DEPRECATED: Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.)
kube# [ 17.144113] kube-apiserver[1841]: --admission-control-config-file string File with admission control configuration.
kube# [ 17.217121] serial8250: too much work for irq4
kube# [ 17.144366] kube-apiserver[1841]: --disable-admission-plugins strings admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
kube# [ 17.144689] kube-apiserver[1841]: --enable-admission-plugins strings admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
kube# [ 17.173025] kube-apiserver[1841]: Misc flags:
kube# [ 17.173325] kube-apiserver[1841]: --allow-privileged If true, allow privileged containers. [default=false]
kube# [ 17.173533] kube-apiserver[1841]: --apiserver-count int The number of apiservers running in the cluster, must be a positive number. (In use when --endpoint-reconciler-type=master-count is enabled.) (default 1)
kube# [ 17.173750] kube-apiserver[1841]: --enable-aggregator-routing Turns on aggregator routing requests to endpoints IP rather than cluster IP.
kube# [ 17.174067] kube-apiserver[1841]: --endpoint-reconciler-type string Use an endpoint reconciler (master-count, lease, none) (default "lease")
kube# [ 17.174345] kube-apiserver[1841]: --event-ttl duration Amount of time to retain events. (default 1h0m0s)
kube# [ 17.174561] kube-apiserver[1841]: --kubelet-certificate-authority string Path to a cert file for the certificate authority.
kube# [ 17.174847] kube-apiserver[1841]: --kubelet-client-certificate string Path to a client cert file for TLS.
kube# [ 17.175259] kube-apiserver[1841]: --kubelet-client-key string Path to a client key file for TLS.
kube# [ 17.175461] kube-apiserver[1841]: --kubelet-https Use https for kubelet connections. (default true)
kube# [ 17.175677] kube-apiserver[1841]: --kubelet-preferred-address-types strings List of the preferred NodeAddressTypes to use for kubelet connections. (default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP])
kube# [ 17.176024] kube-apiserver[1841]: --kubelet-read-only-port uint DEPRECATED: kubelet port. (default 10255)
kube# [ 17.176247] kube-apiserver[1841]: --kubelet-timeout duration Timeout for kubelet operations. (default 5s)
kube# [ 17.176471] kube-apiserver[1841]: --kubernetes-service-node-port int If non-zero, the Kubernetes master service (which apiserver creates/maintains) will be of type NodePort, using this as the value of the port. If zero, the Kubernetes master service will be of type ClusterIP.
kube# [ 17.176693] kube-apiserver[1841]: --max-connection-bytes-per-sec int If non-zero, throttle each user connection to this number of bytes/sec. Currently only applies to long-running requests.
kube# [ 17.177051] kube-apiserver[1841]: --proxy-client-cert-file string Client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins. It is expected that this cert includes a signature from the CA in the --requestheader-client-ca-file flag. That CA is published in the 'extension-apiserver-authentication' configmap in the kube-system namespace. Components receiving calls from kube-aggregator should use that CA to perform their half of the mutual TLS verification.
kube# [ 17.177346] kube-apiserver[1841]: --proxy-client-key-file string Private key for the client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins.
kube# [ 17.177670] kube-apiserver[1841]: --service-account-signing-key-file string Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)
kube# [ 17.177969] kube-apiserver[1841]: --service-cluster-ip-range ipNet A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods. (default 10.0.0.0/24)
kube# [ 17.203303] kube-apiserver[1841]: --service-node-port-range portRange A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)
kube# [ 17.203489] kube-apiserver[1841]: Global flags:
kube# [ 17.203696] kube-apiserver[1841]: --alsologtostderr log to standard error as well as files
kube# [ 17.204004] kube-apiserver[1841]: -h, --help help for kube-apiserver
kube# [ 17.204198] kube-apiserver[1841]: --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
kube# [ 17.204387] kube-apiserver[1841]: --log-dir string If non-empty, write log files in this directory
kube# [ 17.204581] kube-apiserver[1841]: --log-file string If non-empty, use this log file
kube# [ 17.204819] kube-apiserver[1841]: --log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
kube# [ 17.204999] kube-apiserver[1841]: --log-flush-frequency duration Maximum number of seconds between log flushes (default 5s)
kube# [ 17.205273] kube-apiserver[1841]: --logtostderr log to standard error instead of files (default true)
kube# [ 17.205494] kube-apiserver[1841]: --skip-headers If true, avoid header prefixes in the log messages
kube# [ 17.205694] kube-apiserver[1841]: --skip-log-headers If true, avoid headers when opening log files
kube# [ 17.205997] kube-apiserver[1841]: --stderrthreshold severity logs at or above this threshold go to stderr (default 2)
kube# [ 17.206192] kube-apiserver[1841]: -v, --v Level number for the log level verbosity
kube# [ 17.271655] serial8250: too much work for irq4
kube# [ 17.206375] kube-apiserver[1841]: --version version[=true] Print version information and quit
kube# [ 17.206564] kube-apiserver[1841]: --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
kube# [ 17.206848] kube-apiserver[1841]: error: unable to load server certificate: open /var/lib/kubernetes/secrets/kube-apiserver.pem: no such file or directory
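The long usage dump above is just the apiserver printing its full help text alongside a fatal startup error; the only actionable line is the missing /var/lib/kubernetes/secrets/kube-apiserver.pem. The same check can be reproduced out of band with Python's ssl module, which fails loudly on a missing or mismatched serving pair — note the -key.pem name below is an assumption inferred from the other secret names in this log:

    #!/usr/bin/env python3
    # Sketch only: verify the serving cert/key pair the apiserver complained
    # about. load_cert_chain() raises on a missing or mismatched pair, which
    # mirrors the "unable to load server certificate" error above.
    import ssl
    import sys

    CERT = "/var/lib/kubernetes/secrets/kube-apiserver.pem"
    KEY = "/var/lib/kubernetes/secrets/kube-apiserver-key.pem"  # assumed name

    try:
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(certfile=CERT, keyfile=KEY)
    except (FileNotFoundError, ssl.SSLError) as exc:
        sys.exit(f"serving cert not usable yet: {exc}")
    print("cert/key pair loads cleanly")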
kube# [ 17.223713] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1839]: 2020/01/27 01:32:03 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/apiserverEtcdClient.json
kube# [ 17.229566] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59242 - "POST /api/v1/cfssl/info" 200
kube# [ 17.240376] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59244 - "POST /api/v1/cfssl/info" 200
kube# [ 17.244951] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1839]: 2020/01/27 01:32:03 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/apiserverKubeletClient.json
kube# [ 17.251016] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59246 - "POST /api/v1/cfssl/info" 200
kube# [ 17.262330] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59248 - "POST /api/v1/cfssl/info" 200
kube# [ 17.267387] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1839]: 2020/01/27 01:32:03 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/apiserverProxyClient.json
kube# Error in configuration:
kube# * unable to read client-cert /var/lib/kubernetes/secrets/cluster-admin.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin.pem: no such file or directory
kube# * unable to read client-key /var/lib/kubernetes/secrets/cluster-admin-key.pem for cluster-admin due to open /var/lib/kubernetes/secrets/cluster-admin-key.pem: no such file or directory
kube# [ 17.272838] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59250 - "POST /api/v1/cfssl/info" 200
kube: exit status 1
(0.29 seconds)
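The test driver's probe (`kubectl get node kube.my.xzy | grep -w Ready`) exits 1 here because the cluster-admin client certificate has not been issued yet, so kubectl cannot even authenticate; the driver simply reruns the command until it succeeds. A sketch of that poll-until-Ready loop — the node name is taken from the log, while the timeout and the whole-word match approximation are assumptions:

    #!/usr/bin/env python3
    # Sketch of the retry loop the test driver is effectively running:
    # keep invoking kubectl until the node reports Ready or a deadline passes.
    import subprocess
    import sys
    import time

    NODE = "kube.my.xzy"  # node name from the log

    def node_ready(timeout=300.0, poll=2.0):
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            proc = subprocess.run(
                ["kubectl", "get", "node", NODE],
                capture_output=True, text=True,
            )
            # Rough stand-in for `grep -w Ready`: whole word in the output.
            if proc.returncode == 0 and " Ready " in proc.stdout:
                return True
            time.sleep(poll)
        return False

    if __name__ == "__main__":
        sys.exit(0 if node_ready() else "node never became Ready")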
kube# [ 17.288076] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59252 - "POST /api/v1/cfssl/info" 200
kube# [ 17.292936] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1839]: 2020/01/27 01:32:03 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/clusterAdmin.json
kube# [ 17.297918] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59254 - "POST /api/v1/cfssl/info" 200
kube# [ 17.310801] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59256 - "POST /api/v1/cfssl/info" 200
kube# [ 17.311750] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1842]: Loaded image: pause:latest
kube# [ 17.312010] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1839]: 2020/01/27 01:32:03 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/controllerManager.json
kube# [ 17.315327] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1842]: Seeding docker image: /nix/store/ggrzs3gzv69xzk02ckzijc2caqv738kk-docker-image-coredns-coredns-1.5.0.tar
kube# [ 17.317643] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59258 - "POST /api/v1/cfssl/info" 200
kube# [ 17.330408] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59260 - "POST /api/v1/cfssl/info" 200
kube# [ 17.334924] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1839]: 2020/01/27 01:32:03 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/controllerManagerClient.json
kube# [ 17.339434] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59262 - "POST /api/v1/cfssl/info" 200
kube# [ 17.349254] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59264 - "POST /api/v1/cfssl/info" 200
kube# [ 17.354395] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1839]: 2020/01/27 01:32:03 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/etcd.json
kube# [ 17.359859] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59266 - "POST /api/v1/cfssl/info" 200
kube# [ 17.371467] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59268 - "POST /api/v1/cfssl/info" 200
kube# [ 17.376497] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1839]: 2020/01/27 01:32:03 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/kubeProxyClient.json
kube# [ 17.381275] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59270 - "POST /api/v1/cfssl/info" 200
kube# [ 17.391495] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59272 - "POST /api/v1/cfssl/info" 200
kube# [ 17.396399] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1839]: 2020/01/27 01:32:03 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/kubelet.json
kube# [ 17.402237] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59274 - "POST /api/v1/cfssl/info" 200
kube# [ 17.415359] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59276 - "POST /api/v1/cfssl/info" 200
kube# [ 17.420291] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1839]: 2020/01/27 01:32:03 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/kubeletClient.json
kube# [ 17.425315] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59278 - "POST /api/v1/cfssl/info" 200
kube# [ 17.436809] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59280 - "POST /api/v1/cfssl/info" 200
kube# [ 17.441681] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1839]: 2020/01/27 01:32:03 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/schedulerClient.json
kube# [ 17.446804] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59282 - "POST /api/v1/cfssl/info" 200
kube# [ 17.457325] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59284 - "POST /api/v1/cfssl/info" 200
kube# [ 17.460927] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1842]: Loaded image: coredns/coredns:1.5.0
kube# [ 17.463276] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1839]: 2020/01/27 01:32:03 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/serviceAccount.json
kube# [ 17.469548] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59286 - "POST /api/v1/cfssl/info" 200
kube# [ 17.472709] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[1842]: Linking cni package: /nix/store/9pqia3j6lxz57qa36w2niphr1f5vsirr-cni-plugins-0.8.2
kube# [ 17.483340] systemd[1]: Started Kubernetes Kubelet Service.
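The kubelet pre-start script seen above seeds the pause and coredns image tarballs into docker and links the CNI plugin package before the service proper starts. A sketch of that seeding step — the store paths are copied from the log, but the /opt/cni/bin target and the package's bin/ layout are assumptions:

    #!/usr/bin/env python3
    # Sketch of the kubelet pre-start seeding seen above: docker-load each
    # pre-built image tarball from the Nix store, then link the CNI plugins.
    import os
    import subprocess

    IMAGES = [
        "/nix/store/zfkrn9hmwiay3fwj7ilwv52rd6myn4i1-docker-image-pause.tar.gz",
        "/nix/store/ggrzs3gzv69xzk02ckzijc2caqv738kk-docker-image-coredns-coredns-1.5.0.tar",
    ]
    CNI_PKG = "/nix/store/9pqia3j6lxz57qa36w2niphr1f5vsirr-cni-plugins-0.8.2"
    CNI_DIR = "/opt/cni/bin"  # assumed target directory

    for tarball in IMAGES:
        print("Seeding docker image:", tarball)
        subprocess.run(["docker", "load", "-i", tarball], check=True)

    os.makedirs(CNI_DIR, exist_ok=True)
    for name in os.listdir(os.path.join(CNI_PKG, "bin")):  # assumed layout
        src = os.path.join(CNI_PKG, "bin", name)
        dst = os.path.join(CNI_DIR, name)
        if not os.path.lexists(dst):
            os.symlink(src, dst)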
kube# [ 17.484906] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59288 - "POST /api/v1/cfssl/info" 200
kube# [ 17.489576] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1839]: 2020/01/27 01:32:03 [INFO] manager: watching 14 certificates
kube# [ 17.489710] 27h33l9ghbzjvgzjd7dgdjym397v0g3d-unit-script-certmgr-pre-start[1839]: OK
kube# [ 17.493812] systemd[1]: Started certmgr.
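Each spec certmgr loads triggers the paired `POST /api/v1/cfssl/info` calls that cfssl[1033] keeps answering with 200: the info endpoint hands back the CA certificate for a given label/profile so certmgr can validate what it is about to request. A minimal stdlib client sketch — the CA's listen address/port and the empty label/profile are assumptions for this setup:

    #!/usr/bin/env python3
    # Minimal sketch of the call certmgr makes for every spec it loads:
    # POST /api/v1/cfssl/info returns the CA certificate for a label/profile.
    import json
    import urllib.request

    URL = "http://192.168.1.1:8888/api/v1/cfssl/info"  # port is an assumption

    req = urllib.request.Request(
        URL,
        data=json.dumps({"label": "", "profile": ""}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    if body.get("success"):
        print(body["result"]["certificate"])
    else:
        print("cfssl returned errors:", body.get("errors"))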
kube# [ 17.501232] certmgr[1979]: 2020/01/27 01:32:03 [INFO] certmgr: loading from config file /nix/store/bmm143bjzpgvrw7k50r36c5smy1n4pqm-certmgr.yaml
kube# [ 17.501546] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: loading certificates from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d
kube# [ 17.503806] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/addonManager.json
kube# [ 17.509086] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59290 - "POST /api/v1/cfssl/info" 200
kube# [ 17.520734] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59292 - "POST /api/v1/cfssl/info" 200
kube# [ 17.525962] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/apiServer.json
kube# [ 17.530057] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59294 - "POST /api/v1/cfssl/info" 200
kube# [ 17.538877] kubelet[1968]: Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.539070] kubelet[1968]: Flag --authentication-token-webhook has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.539373] kubelet[1968]: Flag --authentication-token-webhook-cache-ttl has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.539534] kubelet[1968]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.539725] kubelet[1968]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.539940] kubelet[1968]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.540149] kubelet[1968]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.540316] kubelet[1968]: Flag --hairpin-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.540553] kubelet[1968]: Flag --healthz-bind-address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.540892] kubelet[1968]: Flag --healthz-port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.541139] kubelet[1968]: Flag --port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.541319] kubelet[1968]: Flag --tls-cert-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 17.541509] kubelet[1968]: Flag --tls-private-key-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
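Every kubelet warning above is the same migration notice: these flags now belong in a KubeletConfiguration file passed via --config. A sketch that renders the flagged settings as such a file, emitted as JSON (which the loader accepts, JSON being valid YAML) — field names follow the kubelet.config.k8s.io/v1beta1 schema, and all values and paths below are placeholders, not taken from this deployment:

    #!/usr/bin/env python3
    # Sketch: render the deprecated kubelet flags from the log as a
    # KubeletConfiguration document for --config. All values are placeholders.
    import json

    config = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        "address": "0.0.0.0",                  # --address
        "port": 10250,                         # --port
        "clusterDNS": ["10.0.0.254"],          # --cluster-dns
        "clusterDomain": "cluster.local",      # --cluster-domain
        "hairpinMode": "hairpin-veth",         # --hairpin-mode
        "healthzBindAddress": "127.0.0.1",     # --healthz-bind-address
        "healthzPort": 10248,                  # --healthz-port
        "authentication": {
            "x509": {"clientCAFile": "/var/lib/kubernetes/secrets/ca.pem"},
            "webhook": {"enabled": True},      # --authentication-token-webhook
        },
        "authorization": {"mode": "Webhook"},  # --authorization-mode
        "tlsCertFile": "/var/lib/kubernetes/secrets/kubelet.pem",
        "tlsPrivateKeyFile": "/var/lib/kubernetes/secrets/kubelet-key.pem",
    }

    with open("kubelet-config.json", "w") as fh:
        json.dump(config, fh, indent=2)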
kube# [ 17.544446] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59296 - "POST /api/v1/cfssl/info" 200
kube# [ 17.564903] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/apiserverEtcdClient.json
kube# [ 17.569077] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59298 - "POST /api/v1/cfssl/info" 200
kube# [ 17.572693] systemd[1]: kube-scheduler.service: Service RestartSec=5s expired, scheduling restart.
kube# [ 17.572980] systemd[1]: kube-scheduler.service: Scheduled restart job, restart counter is at 2.
kube# [ 17.573339] systemd[1]: Stopped Kubernetes Scheduler Service.
kube# [ 17.575220] systemd[1]: Started Kubernetes Scheduler Service.
kube# [ 17.576570] systemd[1]: Started Kubernetes systemd probe.
kube# [ 17.580252] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59300 - "POST /api/v1/cfssl/info" 200
kube# [ 17.582548] kubelet[1968]: I0127 01:32:03.526090 1968 server.go:425] Version: v1.15.6
kube# [ 17.582856] kubelet[1968]: I0127 01:32:03.526245 1968 plugins.go:103] No cloud provider specified.
kube# [ 17.586315] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/apiserverKubeletClient.json
kube# [ 17.588039] systemd[1]: run-rea057ad96ff548dcbfdc888675005a15.scope: Succeeded.
kube# [ 17.591701] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59302 - "POST /api/v1/cfssl/info" 200
kube# [ 17.593496] kubelet[1968]: F0127 01:32:03.537062 1968 server.go:273] failed to run Kubelet: invalid kubeconfig: invalid configuration: [unable to read client-cert /var/lib/kubernetes/secrets/kubelet-client.pem for kubelet due to open /var/lib/kubernetes/secrets/kubelet-client.pem: no such file or directory, unable to read client-key /var/lib/kubernetes/secrets/kubelet-client-key.pem for kubelet due to open /var/lib/kubernetes/secrets/kubelet-client-key.pem: no such file or directory]
kube# [ 17.597155] systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
kube# [ 17.597352] systemd[1]: kubelet.service: Failed with result 'exit-code'.
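
This fatal exit is a startup race rather than a broken config: the kubelet wants its client keypair under /var/lib/kubernetes/secrets/, but certmgr only signs kubelet-client.pem a little further down, after which systemd restarts the unit. A quick, purely illustrative way to check whether the pair has landed yet:

    # do the files certmgr is supposed to write exist yet?
    ls -l /var/lib/kubernetes/secrets/kubelet-client.pem \
          /var/lib/kubernetes/secrets/kubelet-client-key.pem
    # once present, confirm subject and expiry
    openssl x509 -in /var/lib/kubernetes/secrets/kubelet-client.pem -noout -subject -enddate
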
kube# [ 17.606347] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59304 - "POST /api/v1/cfssl/info" 200
kube# [ 17.610996] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/apiserverProxyClient.json
kube# [ 17.616520] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59306 - "POST /api/v1/cfssl/info" 200
kube# [ 17.628010] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59308 - "POST /api/v1/cfssl/info" 200
kube# [ 17.632293] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/clusterAdmin.json
kube# [ 17.636270] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59310 - "POST /api/v1/cfssl/info" 200
kube# [ 17.647168] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59312 - "POST /api/v1/cfssl/info" 200
kube# [ 17.647471] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/controllerManager.json
kube# [ 17.651879] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59314 - "POST /api/v1/cfssl/info" 200
kube# [ 17.661524] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59316 - "POST /api/v1/cfssl/info" 200
kube# [ 17.666109] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/controllerManagerClient.json
kube# [ 17.670135] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59318 - "POST /api/v1/cfssl/info" 200
kube# [ 17.682944] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59320 - "POST /api/v1/cfssl/info" 200
kube# [ 17.687300] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/etcd.json
kube# [ 17.691036] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59322 - "POST /api/v1/cfssl/info" 200
kube# [ 17.701070] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59324 - "POST /api/v1/cfssl/info" 200
kube# [ 17.705418] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/kubeProxyClient.json
kube# [ 17.709329] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59326 - "POST /api/v1/cfssl/info" 200
kube# [ 17.722503] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59328 - "POST /api/v1/cfssl/info" 200
kube# [ 17.727652] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/kubelet.json
kube# [ 17.732741] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59330 - "POST /api/v1/cfssl/info" 200
kube# [ 17.742740] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59332 - "POST /api/v1/cfssl/info" 200
kube# [ 17.746945] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/kubeletClient.json
kube# [ 17.750868] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59334 - "POST /api/v1/cfssl/info" 200
kube# [ 17.761930] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59336 - "POST /api/v1/cfssl/info" 200
kube# [ 17.766285] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/schedulerClient.json
kube# [ 17.770143] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59338 - "POST /api/v1/cfssl/info" 200
kube# [ 17.780252] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59340 - "POST /api/v1/cfssl/info" 200
kube# [ 17.784616] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: loading spec from /nix/store/2sb19kl4ziw5hrsbvb4m68vbmxc6hiz5-certmgr.d/serviceAccount.json
kube# [ 17.789632] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59342 - "POST /api/v1/cfssl/info" 200
kube# [ 17.799955] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59344 - "POST /api/v1/cfssl/info" 200
kube# [ 17.804698] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: watching 14 certificates
kube# [ 17.804887] certmgr[1979]: 2020/01/27 01:32:03 [WARNING] metrics: no prometheus address or port configured
kube# [ 17.805148] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: checking certificates
kube# [ 17.805407] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: queue processor is ready
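
certmgr has now loaded one spec per component from the Nix-built certmgr.d directory and starts reconciling all 14 certificates against cfssl. The JSON below is a sketch of what one of those specs (say kubelet.json) might contain; the field names follow certmgr's spec format, but every value here (remote URL, auth key file, key size, hosts) is an assumption for illustration, not read from the store path above:

    # sketch of a certmgr spec in the style of .../certmgr.d/kubelet.json (values assumed)
    cat > kubelet.json <<'EOF'
    {
      "service": "kubelet.service",
      "action": "restart",
      "authority": {
        "remote": "https://192.168.1.1:8888",
        "auth_key_file": "/var/lib/kubernetes/secrets/apitoken.secret",
        "root_ca": "/var/lib/kubernetes/secrets/ca.pem"
      },
      "certificate": { "path": "/var/lib/kubernetes/secrets/kubelet.pem" },
      "private_key": { "path": "/var/lib/kubernetes/secrets/kubelet-key.pem" },
      "request": {
        "CN": "kube.my.xzy",
        "hosts": ["kube.my.xzy"],
        "key": { "algo": "rsa", "size": 2048 }
      }
    }
    EOF

The "service"/"action" pair is what produces the stop/start churn visible below: each time a key changes, certmgr restarts the unit that consumes it.
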
kube# [ 17.810887] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59346 - "POST /api/v1/cfssl/info" 200
kube# [ 17.811720] certmgr[1979]: 2020/01/27 01:32:03 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.811877] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: queueing /system:kube-addon-manager because it isn't ready
kube# [ 17.813571] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: processing certificate spec /system:kube-addon-manager (attempt 1)
kube# [ 17.817232] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59348 - "POST /api/v1/cfssl/info" 200
kube# [ 17.818050] certmgr[1979]: 2020/01/27 01:32:03 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.818208] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: queueing /kubernetes because it isn't ready
kube# [ 17.818578] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: processing certificate spec /kubernetes (attempt 1)
kube# [ 17.824028] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59350 - "POST /api/v1/cfssl/info" 200
kube# [ 17.824949] certmgr[1979]: 2020/01/27 01:32:03 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.825036] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: queueing /etcd-client because it isn't ready
kube# [ 17.825308] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: processing certificate spec /etcd-client (attempt 1)
kube# [ 17.831042] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59352 - "POST /api/v1/cfssl/info" 200
kube# [ 17.831228] certmgr[1979]: 2020/01/27 01:32:03 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.831352] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: queueing /system:kube-apiserver because it isn't ready
kube# [ 17.831614] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: processing certificate spec /system:kube-apiserver (attempt 1)
kube# [ 17.860419] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59354 - "POST /api/v1/cfssl/info" 200
kube# [ 17.860641] certmgr[1979]: 2020/01/27 01:32:03 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.860998] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: queueing /front-proxy-client because it isn't ready
kube# [ 17.861221] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: processing certificate spec /front-proxy-client (attempt 1)
kube# [ 17.863287] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59356 - "POST /api/v1/cfssl/info" 200
kube# [ 17.863879] certmgr[1979]: 2020/01/27 01:32:03 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.864217] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: queueing /cluster-admin/O=system:masters because it isn't ready
kube# [ 17.864515] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: processing certificate spec /cluster-admin/O=system:masters (attempt 1)
kube# [ 17.891972] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59358 - "POST /api/v1/cfssl/info" 200
kube# [ 17.892160] certmgr[1979]: 2020/01/27 01:32:03 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.892342] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: queueing /kube-controller-manager because it isn't ready
kube# [ 17.892651] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: processing certificate spec /kube-controller-manager (attempt 1)
kube# [ 17.894747] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59360 - "POST /api/v1/cfssl/info" 200
kube# [ 17.895067] certmgr[1979]: 2020/01/27 01:32:03 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.895305] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: queueing /system:kube-controller-manager because it isn't ready
kube# [ 17.897965] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: processing certificate spec /system:kube-controller-manager (attempt 1)
kube# [ 17.905063] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59362 - "POST /api/v1/cfssl/info" 200
kube# [ 17.905243] certmgr[1979]: 2020/01/27 01:32:03 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.905516] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: queueing /kube.my.xzy because it isn't ready
kube# [ 17.905808] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: processing certificate spec /kube.my.xzy (attempt 1)
kube# [ 17.907893] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59364 - "POST /api/v1/cfssl/info" 200
kube# [ 17.908249] certmgr[1979]: 2020/01/27 01:32:03 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.908503] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: queueing /system:kube-proxy because it isn't ready
kube# [ 17.908953] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: processing certificate spec /system:kube-proxy (attempt 1)
kube# [ 17.914216] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59366 - "POST /api/v1/cfssl/info" 200
kube# [ 17.914372] certmgr[1979]: 2020/01/27 01:32:03 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.914625] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: queueing /kube.my.xzy because it isn't ready
kube# [ 17.915069] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: processing certificate spec /kube.my.xzy (attempt 1)
kube# [ 17.920163] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59368 - "POST /api/v1/cfssl/info" 200
kube# [ 17.920993] certmgr[1979]: 2020/01/27 01:32:03 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.921089] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: queueing /system:node:kube.my.xzy/O=system:nodes because it isn't ready
kube# [ 17.921331] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: processing certificate spec /system:node:kube.my.xzy/O=system:nodes (attempt 1)
kube# [ 17.928250] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59370 - "POST /api/v1/cfssl/info" 200
kube# [ 17.929196] certmgr[1979]: 2020/01/27 01:32:03 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.929295] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: queueing /system:kube-scheduler because it isn't ready
kube# [ 17.929559] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: processing certificate spec /system:kube-scheduler (attempt 1)
kube# [ 17.935434] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59372 - "POST /api/v1/cfssl/info" 200
kube# [ 17.935612] certmgr[1979]: 2020/01/27 01:32:03 [INFO] cert: existing CA certificate at /var/lib/kubernetes/secrets/ca.pem is current
kube# [ 17.936071] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: queueing /system:service-account-signer because it isn't ready
kube# [ 17.936337] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: processing certificate spec /system:service-account-signer (attempt 1)
kube# [ 17.995077] certmgr[1979]: 2020/01/27 01:32:03 [INFO] encoded CSR
kube# [ 17.999880] cfssl[1033]: 2020/01/27 01:32:03 [INFO] signature request received
kube# [ 18.003031] cfssl[1033]: 2020/01/27 01:32:03 [INFO] signed certificate with serial number 717600727699268894207900883831213258402318090418
kube# [ 18.003187] cfssl[1033]: 2020/01/27 01:32:03 [INFO] wrote response
kube# [ 18.003452] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59374 - "POST /api/v1/cfssl/authsign" 200
kube# [ 18.003675] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-apiserver-proxy-client.pem
kube# [ 18.004742] certmgr[1979]: 2020/01/27 01:32:03 [INFO] encoded CSR
kube# [ 18.009282] cfssl[1033]: 2020/01/27 01:32:03 [INFO] signature request received
kube# [ 18.010212] certmgr[1979]: 2020/01/27 01:32:03 [INFO] encoded CSR
kube# [ 18.012321] cfssl[1033]: 2020/01/27 01:32:03 [INFO] signed certificate with serial number 143306617946379894410683425174361333324816222034
kube# [ 18.012471] cfssl[1033]: 2020/01/27 01:32:03 [INFO] wrote response
kube# [ 18.012666] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59376 - "POST /api/v1/cfssl/authsign" 200
kube# [ 18.013162] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-controller-manager.pem
kube# [ 18.014585] cfssl[1033]: 2020/01/27 01:32:03 [INFO] signature request received
kube# [ 18.014930] certmgr[1979]: 2020/01/27 01:32:03 [INFO] encoded CSR
kube# [ 18.017589] cfssl[1033]: 2020/01/27 01:32:03 [INFO] signed certificate with serial number 599788676278136899773393660600371310420978524359
kube# [ 18.017875] cfssl[1033]: 2020/01/27 01:32:03 [INFO] wrote response
kube# [ 18.018177] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59378 - "POST /api/v1/cfssl/authsign" 200
kube# [ 18.018400] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kubelet.pem
kube# [ 18.022200] cfssl[1033]: 2020/01/27 01:32:03 [INFO] signature request received
kube# [ 18.024117] cfssl[1033]: 2020/01/27 01:32:03 [INFO] signed certificate with serial number 111777122970743806927299563663841387784007047735
kube# [ 18.024267] cfssl[1033]: 2020/01/27 01:32:03 [INFO] wrote response
kube# [ 18.024612] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59380 - "POST /api/v1/cfssl/authsign" 200
kube# [ 18.025112] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-apiserver-etcd-client.pem
kube# [ 18.040132] systemd[1]: Stopped Kubernetes APIServer Service.
kube# [ 18.041738] systemd[1]: Started Kubernetes APIServer Service.
kube# [ 18.042528] certmgr[1979]: 2020/01/27 01:32:03 [INFO] encoded CSR
kube# [ 18.045496] systemd[1]: Stopped Kubernetes Kubelet Service.
kube# [ 18.046728] systemd[1]: Starting Kubernetes Kubelet Service...
kube# [ 18.047879] cfssl[1033]: 2020/01/27 01:32:03 [INFO] signature request received
kube# [ 18.051014] cfssl[1033]: 2020/01/27 01:32:03 [INFO] signed certificate with serial number 215866916900024012180126267586545485829254385533
kube# [ 18.051186] cfssl[1033]: 2020/01/27 01:32:03 [INFO] wrote response
kube# [ 18.051384] cfssl[1033]: 2020/01/27 01:32:03 [INFO] 192.168.1.1:59382 - "POST /api/v1/cfssl/authsign" 200
kube# [ 18.051715] certmgr[1979]: 2020/01/27 01:32:03 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-addon-manager.pem
kube# [ 18.052863] systemd[1]: Stopped Kubernetes Controller Manager Service.
kube# [ 18.055857] systemd[1]: Started Kubernetes Controller Manager Service.
kube# [ 18.056362] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[2052]: Seeding docker image: /nix/store/zfkrn9hmwiay3fwj7ilwv52rd6myn4i1-docker-image-pause.tar.gz
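
The kubelet pre-start script loads the pause container image straight out of the Nix store, so the node never has to pull it from a registry. The seeding amounts to something like this (the docker invocation is an assumed illustration; the tarball path is the one from the line above):

    # seed the pause image from the Nix store into the local docker daemon
    docker load < /nix/store/zfkrn9hmwiay3fwj7ilwv52rd6myn4i1-docker-image-pause.tar.gz
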
kube# [ 18.062211] certmgr[1979]: 2020/01/27 01:32:04 [INFO] manager: certificate successfully processed
kube# [ 18.062436] certmgr[1979]: 2020/01/27 01:32:04 [INFO] manager: certificate successfully processed
kube# [ 18.070934] systemd[1]: Stopping Kubernetes APIServer Service...
kube# [ 18.076232] systemd[1]: kube-apiserver.service: Succeeded.
kube# [ 18.076660] systemd[1]: Stopped Kubernetes APIServer Service.
kube# [ 18.077954] systemd[1]: Stopped Kubernetes addon manager.
kube# [ 18.079422] systemd[1]: Started Kubernetes APIServer Service.
kube# [ 18.081081] systemd[1]: Starting Kubernetes addon manager...
kube# [ 18.082078] certmgr[1979]: 2020/01/27 01:32:04 [INFO] manager: certificate successfully processed
kube# [ 18.083566] certmgr[1979]: 2020/01/27 01:32:04 [INFO] encoded CSR
kube# [ 18.090258] cfssl[1033]: 2020/01/27 01:32:04 [INFO] signature request received
kube# [ 18.093332] cfssl[1033]: 2020/01/27 01:32:04 [INFO] signed certificate with serial number 663516113913249173368441132262890305670955916860
kube# [ 18.093545] cfssl[1033]: 2020/01/27 01:32:04 [INFO] wrote response
kube# [ 18.093899] cfssl[1033]: 2020/01/27 01:32:04 [INFO] 192.168.1.1:59384 - "POST /api/v1/cfssl/authsign" 200
kube# [ 18.094243] certmgr[1979]: 2020/01/27 01:32:04 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/service-account.pem
kube# [ 18.112515] certmgr[1979]: 2020/01/27 01:32:04 [INFO] encoded CSR
kube# [ 18.115026] certmgr[1979]: 2020/01/27 01:32:04 [INFO] encoded CSR
kube# [ 18.116328] certmgr[1979]: 2020/01/27 01:32:04 [ERROR] manager: exit status 3
kube# [ 18.116507] certmgr[1979]: 2020/01/27 01:32:04 [INFO] manager: certificate successfully processed
kube# [ 18.118921] cfssl[1033]: 2020/01/27 01:32:04 [INFO] signature request received
kube# [ 18.122139] cfssl[1033]: 2020/01/27 01:32:04 [INFO] signed certificate with serial number 61036597471264308028583555742208190386507164261
kube# [ 18.122426] cfssl[1033]: 2020/01/27 01:32:04 [INFO] wrote response
kube# [ 18.122741] cfssl[1033]: 2020/01/27 01:32:04 [INFO] 192.168.1.1:59386 - "POST /api/v1/cfssl/authsign" 200
kube# [ 18.124729] certmgr[1979]: 2020/01/27 01:32:04 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-proxy-client.pem
kube# [ 18.126169] cfssl[1033]: 2020/01/27 01:32:04 [INFO] signature request received
kube# [ 18.128451] cfssl[1033]: 2020/01/27 01:32:04 [INFO] signed certificate with serial number 128968243127390040763500551280839698687965133397
kube# [ 18.128681] cfssl[1033]: 2020/01/27 01:32:04 [INFO] wrote response
kube# [ 18.129063] cfssl[1033]: 2020/01/27 01:32:04 [INFO] 192.168.1.1:59388 - "POST /api/v1/cfssl/authsign" 200
kube# [ 18.132305] certmgr[1979]: 2020/01/27 01:32:04 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-apiserver.pem
kube# [ 18.143021] certmgr[1979]: 2020/01/27 01:32:04 [INFO] encoded CSR
kube# [ 18.147944] cfssl[1033]: 2020/01/27 01:32:04 [INFO] signature request received
kube# [ 18.150924] cfssl[1033]: 2020/01/27 01:32:04 [INFO] signed certificate with serial number 369999476909433541171827255552491227630590022673
kube# [ 18.151243] cfssl[1033]: 2020/01/27 01:32:04 [INFO] wrote response
kube# [ 18.151487] cfssl[1033]: 2020/01/27 01:32:04 [INFO] 192.168.1.1:59390 - "POST /api/v1/cfssl/authsign" 200
kube# [ 18.152115] certmgr[1979]: 2020/01/27 01:32:04 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/cluster-admin.pem
kube# [ 18.152352] certmgr[1979]: 2020/01/27 01:32:04 [INFO] manager: certificate successfully processed
kube# [ 18.155126] certmgr[1979]: 2020/01/27 01:32:04 [INFO] encoded CSR
kube# [ 18.159523] systemd[1]: Stopping Kubernetes APIServer Service...
kube# [ 18.162954] certmgr[1979]: 2020/01/27 01:32:04 [INFO] encoded CSR
kube# [ 18.163403] cfssl[1033]: 2020/01/27 01:32:04 [INFO] signature request received
kube# [ 18.164735] systemd[1]: kube-apiserver.service: Succeeded.
kube# [ 18.165330] systemd[1]: Stopped Kubernetes APIServer Service.
kube# [ 18.166743] cfssl[1033]: 2020/01/27 01:32:04 [INFO] signed certificate with serial number 197563676782575235447839206757336525034174865836
kube# [ 18.167014] cfssl[1033]: 2020/01/27 01:32:04 [INFO] wrote response
kube# [ 18.167308] cfssl[1033]: 2020/01/27 01:32:04 [INFO] 192.168.1.1:59392 - "POST /api/v1/cfssl/authsign" 200
kube# [ 18.167663] cfssl[1033]: 2020/01/27 01:32:04 [INFO] signature request received
kube# [ 18.168224] systemd[1]: Started Kubernetes APIServer Service.
kube# [ 18.168567] certmgr[1979]: 2020/01/27 01:32:04 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-scheduler-client.pem
kube# [ 18.169087] cfssl[1033]: 2020/01/27 01:32:04 [INFO] signed certificate with serial number 657855521356350085241465866279792698323825613146
kube# [ 18.169454] cfssl[1033]: 2020/01/27 01:32:04 [INFO] wrote response
kube# [ 18.169658] cfssl[1033]: 2020/01/27 01:32:04 [INFO] 192.168.1.1:59394 - "POST /api/v1/cfssl/authsign" 200
kube# [ 18.170069] certmgr[1979]: 2020/01/27 01:32:04 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-controller-manager-client.pem
kube# [ 18.171197] systemd[1]: Stopped Kubernetes Proxy Service.
kube# [ 18.178416] systemd[1]: Started Kubernetes Proxy Service.
kube# [ 18.179617] certmgr[1979]: 2020/01/27 01:32:04 [INFO] manager: certificate successfully processed
kube# [ 18.182976] certmgr[1979]: 2020/01/27 01:32:04 [INFO] manager: certificate successfully processed
kube# [ 18.187934] kube-controller-manager[2054]: Flag --port has been deprecated, see --secure-port instead.
kube# [ 18.193263] kube-scheduler[2001]: I0127 01:32:04.136016 2001 serving.go:319] Generated self-signed cert in-memory
kube# [ 18.201610] systemd[1]: Stopping Kubernetes Controller Manager Service...
kube# [ 18.203862] systemd[1]: kube-controller-manager.service: Succeeded.
kube# [ 18.204209] systemd[1]: Stopped Kubernetes Controller Manager Service.
kube# [ 18.206487] systemd[1]: Started Kubernetes Controller Manager Service.
kube# [ 18.209310] certmgr[1979]: 2020/01/27 01:32:04 [INFO] manager: certificate successfully processed
kube# [ 18.210215] systemd[1]: Stopping Kubernetes Scheduler Service...
kube# [ 18.212338] systemd[1]: kube-scheduler.service: Succeeded.
kube# [ 18.212713] systemd[1]: Stopped Kubernetes Scheduler Service.
kube# [ 18.214525] systemd[1]: Started Kubernetes Scheduler Service.
kube# [ 18.217929] certmgr[1979]: 2020/01/27 01:32:04 [INFO] manager: certificate successfully processed
kube# [ 18.221992] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2069]: unable to recognize "/nix/store/dak5nvsj8ab4dywrr2r96mfvvfvmfwav-apiserver-kubelet-api-admin-crb.json": Get https://192.168.1.1/api?timeout=32s: dial tcp 192.168.1.1:443: connect: connection refused
kube# [ 18.222198] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2069]: unable to recognize "/nix/store/q8x42ds4w9azhviqm26k5gzbs0g19wir-coredns-cr.json": Get https://192.168.1.1/api?timeout=32s: dial tcp 192.168.1.1:443: connect: connection refused
kube# [ 18.222508] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2069]: unable to recognize "/nix/store/8zqicjics4fg5dg1g50aqsrllbf5hb41-coredns-crb.json": Get https://192.168.1.1/api?timeout=32s: dial tcp 192.168.1.1:443: connect: connection refused
kube# [ 18.222718] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2069]: unable to recognize "/nix/store/92hgw2wxa9bvyi58akkj8slr5i4pln34-kube-addon-manager-cluster-lister-cr.json": Get https://192.168.1.1/api?timeout=32s: dial tcp 192.168.1.1:443: connect: connection refused
kube# [ 18.223103] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2069]: unable to recognize "/nix/store/n15m5qpvi0asyfi7356idb5ycmf5crcq-kube-addon-manager-cluster-lister-crb.json": Get https://192.168.1.1/api?timeout=32s: dial tcp 192.168.1.1:443: connect: connection refused
kube# [ 18.223283] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2069]: unable to recognize "/nix/store/l7irsk7dw8wrs7c1c4fw52rgh24lrsc7-kube-addon-manager-r.json": Get https://192.168.1.1/api?timeout=32s: dial tcp 192.168.1.1:443: connect: connection refused
kube# [ 18.223570] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[2069]: unable to recognize "/nix/store/rk35vqyf2mfdgrjg53swfnv9hdamj6sb-kube-addon-manager-rb.json": Get https://192.168.1.1/api?timeout=32s: dial tcp 192.168.1.1:443: connect: connection refused
kube# [ 18.228441] kube-proxy[2110]: W0127 01:32:04.171737 2110 server.go:216] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
kube# [ 18.240797] systemd[1]: kube-addon-manager.service: Control process exited, code=exited, status=1/FAILURE
kube# [ 18.241114] systemd[1]: kube-addon-manager.service: Failed with result 'exit-code'.
kube# [ 18.241534] systemd[1]: Failed to start Kubernetes addon manager.
kube# [ 18.241832] systemd[1]: kube-addon-manager.service: Consumed 103ms CPU time, received 320B IP traffic, sent 480B IP traffic.
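
The addon manager's pre-start tried to kubectl-apply its manifests while the apiserver it points at (192.168.1.1:443) was still mid-restart, so every request died with connection refused and the unit failed; systemd will simply retry it once the apiserver is up. A hedged sketch of how such a pre-start could wait instead (the manifest path is taken verbatim from the errors above, the loop itself is illustrative):

    # block until the apiserver answers, then apply the addon manifests
    until kubectl get --raw /healthz >/dev/null 2>&1; do
      sleep 1
    done
    kubectl apply -f /nix/store/q8x42ds4w9azhviqm26k5gzbs0g19wir-coredns-cr.json
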
kube# [ 18.247086] certmgr[1979]: 2020/01/27 01:32:04 [ERROR] manager: exit status 1
kube# [ 18.247284] certmgr[1979]: 2020/01/27 01:32:04 [INFO] manager: certificate successfully processed
kube# [ 18.260446] kube-proxy[2110]: W0127 01:32:04.203955 2110 proxier.go:500] Failed to read file /lib/modules/4.19.95/modules.builtin with error open /lib/modules/4.19.95/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 18.262329] kube-apiserver[2108]: Flag --insecure-bind-address has been deprecated, This flag will be removed in a future version.
kube# [ 18.262575] kube-apiserver[2108]: Flag --insecure-port has been deprecated, This flag will be removed in a future version.
kube# [ 18.263065] kube-apiserver[2108]: I0127 01:32:04.205659 2108 server.go:560] external host was not specified, using 192.168.1.1
kube# [ 18.263325] kube-apiserver[2108]: I0127 01:32:04.205846 2108 server.go:147] Version: v1.15.6
kube# [ 18.266486] kube-proxy[2110]: W0127 01:32:04.209955 2110 proxier.go:513] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 18.268452] kube-proxy[2110]: W0127 01:32:04.211991 2110 proxier.go:513] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 18.270595] kube-proxy[2110]: W0127 01:32:04.214155 2110 proxier.go:513] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 18.272599] kube-proxy[2110]: W0127 01:32:04.216162 2110 proxier.go:513] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube# [ 18.274533] kube-proxy[2110]: W0127 01:32:04.218073 2110 proxier.go:513] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
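
This is the test driver's readiness poll: it reruns the pipeline until the node's STATUS column reads Ready. The -w flag matters because it anchors the match at word boundaries, so a node that is still NotReady does not count as a false positive:

    # -w means "NotReady" does not match, only a standalone "Ready" does
    echo "kube.my.xzy   NotReady   <none>   10s   v1.15.6" | grep -w Ready; echo "exit=$?"   # exit=1
    echo "kube.my.xzy   Ready      <none>   10s   v1.15.6" | grep -w Ready; echo "exit=$?"   # exit=0
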
kube# [ 18.294430] kube-proxy[2110]: W0127 01:32:04.237950 2110 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy
kube# [ 18.295995] certmgr[1979]: 2020/01/27 01:32:04 [INFO] encoded CSR
kube# [ 18.299068] kube-controller-manager[2127]: Flag --port has been deprecated, see --secure-port instead.
kube# [ 18.299335] cfssl[1033]: 2020/01/27 01:32:04 [INFO] signature request received
kube# [ 18.300921] cfssl[1033]: 2020/01/27 01:32:04 [INFO] signed certificate with serial number 216770658272136227633471745950532644506899593635
kube# [ 18.301155] cfssl[1033]: 2020/01/27 01:32:04 [INFO] wrote response
kube# [ 18.301425] cfssl[1033]: 2020/01/27 01:32:04 [INFO] 192.168.1.1:59412 - "POST /api/v1/cfssl/authsign" 200
kube# [ 18.301913] certmgr[1979]: 2020/01/27 01:32:04 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kube-apiserver-kubelet-client.pem
kube# [ 18.315598] certmgr[1979]: 2020/01/27 01:32:04 [INFO] encoded CSR
kube# [ 18.318514] cfssl[1033]: 2020/01/27 01:32:04 [INFO] signature request received
kube# [ 18.319854] systemd[1]: Stopping Kubernetes APIServer Service...
kube# [ 18.320391] cfssl[1033]: 2020/01/27 01:32:04 [INFO] signed certificate with serial number 72784616466058177940005911439682986497182577774
kube# [ 18.320600] cfssl[1033]: 2020/01/27 01:32:04 [INFO] wrote response
kube# [ 18.321083] cfssl[1033]: 2020/01/27 01:32:04 [INFO] 192.168.1.1:59416 - "POST /api/v1/cfssl/authsign" 200
kube# [ 18.321411] certmgr[1979]: 2020/01/27 01:32:04 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/kubelet-client.pem
kube# [ 18.339499] systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
kube# [ 18.339846] systemd[1]: kubelet.service: Failed with result 'signal'.
kube# [ 18.340140] systemd[1]: Stopped Kubernetes Kubelet Service.
kube# [ 18.449102] kube-scheduler[2133]: I0127 01:32:04.391429 2133 serving.go:319] Generated self-signed cert in-memory
kube# [ 18.449349] certmgr[1979]: 2020/01/27 01:32:04 [INFO] encoded CSR
kube# [ 18.453311] cfssl[1033]: 2020/01/27 01:32:04 [INFO] signature request received
kube# [ 18.456168] cfssl[1033]: 2020/01/27 01:32:04 [INFO] signed certificate with serial number 357690666751839157908194987048609328421601147776
kube# [ 18.456302] cfssl[1033]: 2020/01/27 01:32:04 [INFO] wrote response
kube# [ 18.456564] cfssl[1033]: 2020/01/27 01:32:04 [INFO] 192.168.1.1:59420 - "POST /api/v1/cfssl/authsign" 200
kube# [ 18.457085] certmgr[1979]: 2020/01/27 01:32:04 [INFO] manager: executing configured action due to change type key for /var/lib/kubernetes/secrets/etcd.pem
kube# [ 18.473214] systemd[1]: Starting etcd key-value store...
kube# [ 18.485477] etcd[2190]: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd.local:2379
kube# [ 18.485704] etcd[2190]: recognized and used environment variable ETCD_CERT_FILE=/var/lib/kubernetes/secrets/etcd.pem
kube# [ 18.486111] etcd[2190]: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=1
kube# [ 18.486386] etcd[2190]: recognized and used environment variable ETCD_DATA_DIR=/var/lib/etcd
kube# [ 18.486660] etcd[2190]: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd.local:2380
kube# [ 18.487182] etcd[2190]: recognized and used environment variable ETCD_INITIAL_CLUSTER=kube.my.xzy=https://etcd.local:2380
kube# [ 18.487479] etcd[2190]: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=new
kube# [ 18.487745] etcd[2190]: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
kube# [ 18.488088] etcd[2190]: recognized and used environment variable ETCD_KEY_FILE=/var/lib/kubernetes/secrets/etcd-key.pem
kube# [ 18.488360] etcd[2190]: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://127.0.0.1:2379
kube# [ 18.488649] etcd[2190]: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://127.0.0.1:2380
kube# [ 18.489068] etcd[2190]: recognized and used environment variable ETCD_NAME=kube.my.xzy
kube# [ 18.489363] etcd[2190]: recognized and used environment variable ETCD_PEER_CERT_FILE=/var/lib/kubernetes/secrets/etcd.pem
kube# [ 18.489650] etcd[2190]: recognized and used environment variable ETCD_PEER_KEY_FILE=/var/lib/kubernetes/secrets/etcd-key.pem
kube# [ 18.490092] etcd[2190]: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/var/lib/kubernetes/secrets/ca.pem
kube# [ 18.490433] etcd[2190]: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/var/lib/kubernetes/secrets/ca.pem
kube# [ 18.490679] etcd[2190]: unrecognized environment variable ETCD_DISCOVERY=
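
etcd is configured here entirely through ETCD_* environment variables rather than flags; the recognized set pins the cert paths, the data dir, and a single-member cluster whose peer and client URLs both resolve to loopback. The same configuration expressed as shell exports, with every value copied from the log lines above (subset shown):

    # etcd configuration via environment, as recognized above (subset)
    export ETCD_NAME=kube.my.xzy
    export ETCD_DATA_DIR=/var/lib/etcd
    export ETCD_LISTEN_CLIENT_URLS=https://127.0.0.1:2379
    export ETCD_ADVERTISE_CLIENT_URLS=https://etcd.local:2379
    export ETCD_LISTEN_PEER_URLS=https://127.0.0.1:2380
    export ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd.local:2380
    export ETCD_INITIAL_CLUSTER=kube.my.xzy=https://etcd.local:2380
    export ETCD_CERT_FILE=/var/lib/kubernetes/secrets/etcd.pem
    export ETCD_KEY_FILE=/var/lib/kubernetes/secrets/etcd-key.pem
    export ETCD_TRUSTED_CA_FILE=/var/lib/kubernetes/secrets/ca.pem
    export ETCD_CLIENT_CERT_AUTH=1
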
kube# [ 18.491084] etcd[2190]: etcd Version: 3.3.13
kube# [ 18.491383] etcd[2190]: Git SHA: Not provided (use ./build instead of go build)
kube# [ 18.491666] etcd[2190]: Go Version: go1.12.9
kube# [ 18.492071] etcd[2190]: Go OS/Arch: linux/amd64
kube# [ 18.492340] etcd[2190]: setting maximum number of CPUs to 16, total number of available CPUs is 16
kube# [ 18.492631] etcd[2190]: peerTLS: cert = /var/lib/kubernetes/secrets/etcd.pem, key = /var/lib/kubernetes/secrets/etcd-key.pem, ca = , trusted-ca = /var/lib/kubernetes/secrets/ca.pem, client-cert-auth = false, crl-file =
kube# [ 18.503927] etcd[2190]: listening for peers on https://127.0.0.1:2380
kube# [ 18.504134] etcd[2190]: listening for client requests on 127.0.0.1:2379
kube# [ 18.520805] etcd[2190]: resolving etcd.local:2380 to 127.0.0.1:2380
kube# [ 18.521003] etcd[2190]: resolving etcd.local:2380 to 127.0.0.1:2380
kube# [ 18.521300] etcd[2190]: name = kube.my.xzy
kube# [ 18.521501] etcd[2190]: data dir = /var/lib/etcd
kube# [ 18.521827] etcd[2190]: member dir = /var/lib/etcd/member
kube# [ 18.522105] etcd[2190]: heartbeat = 100ms
kube# [ 18.522385] etcd[2190]: election = 1000ms
kube# [ 18.522688] etcd[2190]: snapshot count = 100000
kube# [ 18.523110] etcd[2190]: advertise client URLs = https://etcd.local:2379
kube# [ 18.523392] etcd[2190]: initial advertise peer URLs = https://etcd.local:2380
kube# [ 18.523584] etcd[2190]: initial cluster = kube.my.xzy=https://etcd.local:2380
kube# [ 18.532802] etcd[2190]: starting member d579d2a9b6a65847 in cluster cd74e8f1b6ca227e
kube# [ 18.532977] etcd[2190]: d579d2a9b6a65847 became follower at term 0
kube# [ 18.533288] etcd[2190]: newRaft d579d2a9b6a65847 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
kube# [ 18.533574] etcd[2190]: d579d2a9b6a65847 became follower at term 1
kube# [ 18.548635] etcd[2190]: simple token is not cryptographically signed
kube# [ 18.552926] etcd[2190]: starting server... [version: 3.3.13, cluster version: to_be_decided]
kube# [ 18.556098] etcd[2190]: d579d2a9b6a65847 as single-node; fast-forwarding 9 ticks (election ticks 10)
kube# [ 18.559404] etcd[2190]: added member d579d2a9b6a65847 [https://etcd.local:2380] to cluster cd74e8f1b6ca227e
kube# [ 18.563383] etcd[2190]: ClientTLS: cert = /var/lib/kubernetes/secrets/etcd.pem, key = /var/lib/kubernetes/secrets/etcd-key.pem, ca = , trusted-ca = /var/lib/kubernetes/secrets/ca.pem, client-cert-auth = true, crl-file =
kube# [ 18.610229] kube-controller-manager[2127]: I0127 01:32:04.553514 2127 serving.go:319] Generated self-signed cert in-memory
kube# [ 18.725592] kube-apiserver[2108]: I0127 01:32:04.669117 2108 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
kube# [ 18.725813] kube-apiserver[2108]: I0127 01:32:04.669158 2108 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
kube# [ 18.730620] kube-apiserver[2108]: E0127 01:32:04.674177 2108 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.730744] kube-apiserver[2108]: E0127 01:32:04.674210 2108 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.730967] kube-apiserver[2108]: E0127 01:32:04.674235 2108 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.731214] kube-apiserver[2108]: E0127 01:32:04.674250 2108 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.731465] kube-apiserver[2108]: E0127 01:32:04.674267 2108 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.731671] kube-apiserver[2108]: E0127 01:32:04.674283 2108 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.732038] kube-apiserver[2108]: E0127 01:32:04.674303 2108 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.732275] kube-apiserver[2108]: E0127 01:32:04.674321 2108 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.732582] kube-apiserver[2108]: E0127 01:32:04.674364 2108 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.732976] kube-apiserver[2108]: E0127 01:32:04.674442 2108 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.733166] kube-apiserver[2108]: E0127 01:32:04.674464 2108 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.733394] kube-apiserver[2108]: E0127 01:32:04.674483 2108 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 18.733688] kube-apiserver[2108]: I0127 01:32:04.674498 2108 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
kube# [ 18.734052] kube-apiserver[2108]: I0127 01:32:04.674517 2108 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
kube# [ 18.741753] kube-scheduler[2133]: W0127 01:32:04.685288 2133 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
kube# [ 18.742234] kube-scheduler[2133]: W0127 01:32:04.685328 2133 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
kube# [ 18.742589] kube-scheduler[2133]: W0127 01:32:04.685351 2133 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
kube# [ 18.760914] kube-apiserver[2108]: I0127 01:32:04.704495 2108 client.go:354] parsed scheme: ""
kube# [ 18.761096] kube-apiserver[2108]: I0127 01:32:04.704686 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 18.764677] kube-apiserver[2108]: I0127 01:32:04.708263 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 18.766191] kube-apiserver[2108]: I0127 01:32:04.709779 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 18.769086] kube-scheduler[2133]: I0127 01:32:04.712643 2133 server.go:142] Version: v1.15.6
kube# [ 18.770303] kube-scheduler[2133]: I0127 01:32:04.713869 2133 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
kube# [ 18.771093] kube-scheduler[2133]: W0127 01:32:04.714669 2133 authorization.go:47] Authorization is disabled
kube# [ 18.771330] kube-scheduler[2133]: W0127 01:32:04.714696 2133 authentication.go:55] Authentication is disabled
kube# [ 18.771605] kube-scheduler[2133]: I0127 01:32:04.714723 2133 deprecated_insecure_serving.go:51] Serving healthz insecurely on 127.0.0.1:10251
kube# [ 18.772014] kube-scheduler[2133]: I0127 01:32:04.715031 2133 secure_serving.go:116] Serving securely on [::]:10259
kube# [ 18.983518] kube-controller-manager[2127]: W0127 01:32:04.926929 2127 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
kube# [ 18.983735] kube-controller-manager[2127]: W0127 01:32:04.926969 2127 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
kube# [ 18.984048] kube-controller-manager[2127]: W0127 01:32:04.926987 2127 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
kube# [ 18.984238] kube-controller-manager[2127]: I0127 01:32:04.927005 2127 controllermanager.go:164] Version: v1.15.6
kube# [ 18.990891] kube-controller-manager[2127]: I0127 01:32:04.934468 2127 secure_serving.go:116] Serving securely on 127.0.0.1:10252
kube# [ 18.991077] kube-controller-manager[2127]: I0127 01:32:04.934509 2127 leaderelection.go:235] attempting to acquire leader lease kube-system/kube-controller-manager...
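
Both the controller manager and the scheduler come up, serve their secure ports, and start competing for their leader-election leases; on a single-node cluster each simply wins its own. If you wanted to see who holds the lock in a 1.15 cluster, the record lives in an annotation on an Endpoints object (an assumed illustration, not a command from this test):

    # leader-election record for the controller manager (k8s 1.15 keeps it on an Endpoints annotation)
    kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep leader
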
kube# [ 19.342036] etcd[2190]: d579d2a9b6a65847 is starting a new election at term 1
kube# [ 19.342319] etcd[2190]: d579d2a9b6a65847 became candidate at term 2
kube# [ 19.342622] etcd[2190]: d579d2a9b6a65847 received MsgVoteResp from d579d2a9b6a65847 at term 2
kube# [ 19.343352] etcd[2190]: d579d2a9b6a65847 became leader at term 2
kube# [ 19.343739] etcd[2190]: raft.node: d579d2a9b6a65847 elected leader d579d2a9b6a65847 at term 2
kube# [ 19.344072] etcd[2190]: published {Name:kube.my.xzy ClientURLs:[https://etcd.local:2379]} to cluster cd74e8f1b6ca227e
kube# [ 19.344387] etcd[2190]: setting up the initial cluster version to 3.3
kube# [ 19.344699] etcd[2190]: ready to serve client requests
kube# [ 19.345237] systemd[1]: Started etcd key-value store.
kube# [ 19.347859] etcd[2190]: serving client requests on 127.0.0.1:2379
kube# [ 19.349678] certmgr[1979]: 2020/01/27 01:32:05 [INFO] manager: certificate successfully processed
kube# [ 19.350064] etcd[2190]: set the initial cluster version to 3.3
kube# [ 19.350418] etcd[2190]: enabled capabilities for version 3.3
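
etcd is now a healthy single-member cluster serving TLS with mandatory client certificates on 127.0.0.1:2379 (client-cert-auth = true in the ClientTLS line above). A probe against it would look roughly like this; etcdctl never appears in this log, so treat the command as an assumed illustration with cert paths taken from the etcd configuration:

    # health-check the v3 API with the cluster's own client certs
    ETCDCTL_API=3 etcdctl \
      --endpoints https://127.0.0.1:2379 \
      --cacert /var/lib/kubernetes/secrets/ca.pem \
      --cert   /var/lib/kubernetes/secrets/etcd.pem \
      --key    /var/lib/kubernetes/secrets/etcd-key.pem \
      endpoint health
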
kube# [ 19.362370] kube-apiserver[2108]: I0127 01:32:05.305612 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.362688] kube-apiserver[2108]: I0127 01:32:05.306269 2108 client.go:354] parsed scheme: ""
kube# [ 19.363034] kube-apiserver[2108]: I0127 01:32:05.306619 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.363377] kube-apiserver[2108]: I0127 01:32:05.306968 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.363958] kube-apiserver[2108]: I0127 01:32:05.307260 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.372125] kube-apiserver[2108]: I0127 01:32:05.315700 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.424523] kube-apiserver[2108]: I0127 01:32:05.368052 2108 master.go:233] Using reconciler: lease
kube# [ 19.424912] kube-apiserver[2108]: I0127 01:32:05.368376 2108 client.go:354] parsed scheme: ""
kube# [ 19.425194] kube-apiserver[2108]: I0127 01:32:05.368481 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.425455] kube-apiserver[2108]: I0127 01:32:05.368536 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.425723] kube-apiserver[2108]: I0127 01:32:05.368570 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.433225] kube-apiserver[2108]: I0127 01:32:05.376797 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.436605] kube-apiserver[2108]: I0127 01:32:05.380158 2108 client.go:354] parsed scheme: ""
kube# [ 19.436701] kube-apiserver[2108]: I0127 01:32:05.380198 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.437000] kube-apiserver[2108]: I0127 01:32:05.380225 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.437271] kube-apiserver[2108]: I0127 01:32:05.380276 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.441273] kube-apiserver[2108]: I0127 01:32:05.384808 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.446600] kube-apiserver[2108]: I0127 01:32:05.390166 2108 client.go:354] parsed scheme: ""
kube# [ 19.446734] kube-apiserver[2108]: I0127 01:32:05.390181 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.446986] kube-apiserver[2108]: I0127 01:32:05.390205 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.447317] kube-apiserver[2108]: I0127 01:32:05.390239 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.452407] kube-apiserver[2108]: I0127 01:32:05.395972 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.453731] kube-apiserver[2108]: I0127 01:32:05.397302 2108 client.go:354] parsed scheme: ""
kube# [ 19.453921] kube-apiserver[2108]: I0127 01:32:05.397320 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.454259] kube-apiserver[2108]: I0127 01:32:05.397350 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.454496] kube-apiserver[2108]: I0127 01:32:05.397387 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.463578] kube-apiserver[2108]: I0127 01:32:05.407134 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.467129] kube-apiserver[2108]: I0127 01:32:05.410704 2108 client.go:354] parsed scheme: ""
kube# [ 19.467283] kube-apiserver[2108]: I0127 01:32:05.410722 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.467633] kube-apiserver[2108]: I0127 01:32:05.410750 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.467971] kube-apiserver[2108]: I0127 01:32:05.410789 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.475019] kube-apiserver[2108]: I0127 01:32:05.418551 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.476929] kube-apiserver[2108]: I0127 01:32:05.420488 2108 client.go:354] parsed scheme: ""
kube# [ 19.477073] kube-apiserver[2108]: I0127 01:32:05.420510 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.477298] kube-apiserver[2108]: I0127 01:32:05.420539 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.477575] kube-apiserver[2108]: I0127 01:32:05.420576 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.481609] kube-apiserver[2108]: I0127 01:32:05.425166 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.482038] kube-apiserver[2108]: I0127 01:32:05.425602 2108 client.go:354] parsed scheme: ""
kube# [ 19.482308] kube-apiserver[2108]: I0127 01:32:05.425626 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.482571] kube-apiserver[2108]: I0127 01:32:05.425663 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.483076] kube-apiserver[2108]: I0127 01:32:05.425713 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.490978] kube-apiserver[2108]: I0127 01:32:05.434530 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.491357] kube-apiserver[2108]: I0127 01:32:05.434878 2108 client.go:354] parsed scheme: ""
kube# [ 19.491629] kube-apiserver[2108]: I0127 01:32:05.434902 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.492034] kube-apiserver[2108]: I0127 01:32:05.434945 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.492267] kube-apiserver[2108]: I0127 01:32:05.434977 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.501231] kube-apiserver[2108]: I0127 01:32:05.444782 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.501499] kube-apiserver[2108]: I0127 01:32:05.445062 2108 client.go:354] parsed scheme: ""
kube# [ 19.501811] kube-apiserver[2108]: I0127 01:32:05.445082 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.502179] kube-apiserver[2108]: I0127 01:32:05.445131 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.502456] kube-apiserver[2108]: I0127 01:32:05.445156 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.506248] kube-apiserver[2108]: I0127 01:32:05.449802 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.506931] kube-apiserver[2108]: I0127 01:32:05.450351 2108 client.go:354] parsed scheme: ""
kube# [ 19.507326] kube-apiserver[2108]: I0127 01:32:05.450382 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.507559] kube-apiserver[2108]: I0127 01:32:05.450487 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.508086] kube-apiserver[2108]: I0127 01:32:05.450552 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.511401] kube-apiserver[2108]: I0127 01:32:05.454973 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.511928] kube-apiserver[2108]: I0127 01:32:05.455489 2108 client.go:354] parsed scheme: ""
kube# [ 19.512249] kube-apiserver[2108]: I0127 01:32:05.455513 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.512628] kube-apiserver[2108]: I0127 01:32:05.455556 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.513065] kube-apiserver[2108]: I0127 01:32:05.455595 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.517550] kube-apiserver[2108]: I0127 01:32:05.461103 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.517943] kube-apiserver[2108]: I0127 01:32:05.461514 2108 client.go:354] parsed scheme: ""
kube# [ 19.518063] kube-apiserver[2108]: I0127 01:32:05.461536 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.518358] kube-apiserver[2108]: I0127 01:32:05.461598 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.518723] kube-apiserver[2108]: I0127 01:32:05.461647 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.522564] kube-apiserver[2108]: I0127 01:32:05.466120 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.523111] kube-apiserver[2108]: I0127 01:32:05.466638 2108 client.go:354] parsed scheme: ""
kube# [ 19.523380] kube-apiserver[2108]: I0127 01:32:05.466657 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.523655] kube-apiserver[2108]: I0127 01:32:05.466734 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.524067] kube-apiserver[2108]: I0127 01:32:05.466766 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.527866] kube-apiserver[2108]: I0127 01:32:05.471298 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.532587] kube-apiserver[2108]: I0127 01:32:05.476157 2108 client.go:354] parsed scheme: ""
kube# [ 19.532827] kube-apiserver[2108]: I0127 01:32:05.476176 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.533167] kube-apiserver[2108]: I0127 01:32:05.476209 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.533431] kube-apiserver[2108]: I0127 01:32:05.476261 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.538090] kube-apiserver[2108]: I0127 01:32:05.481652 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.538449] kube-apiserver[2108]: I0127 01:32:05.481990 2108 client.go:354] parsed scheme: ""
kube# [ 19.538678] kube-apiserver[2108]: I0127 01:32:05.482014 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.539089] kube-apiserver[2108]: I0127 01:32:05.482066 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.539332] kube-apiserver[2108]: I0127 01:32:05.482109 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.547044] kube-apiserver[2108]: I0127 01:32:05.490622 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.547337] kube-apiserver[2108]: I0127 01:32:05.490897 2108 client.go:354] parsed scheme: ""
kube# [ 19.547640] kube-apiserver[2108]: I0127 01:32:05.490917 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.548064] kube-apiserver[2108]: I0127 01:32:05.490956 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.548327] kube-apiserver[2108]: I0127 01:32:05.490981 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.552048] kube-apiserver[2108]: I0127 01:32:05.495575 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.552282] kube-apiserver[2108]: I0127 01:32:05.495828 2108 client.go:354] parsed scheme: ""
kube# [ 19.552548] kube-apiserver[2108]: I0127 01:32:05.495853 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.552743] kube-apiserver[2108]: I0127 01:32:05.495878 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.553002] kube-apiserver[2108]: I0127 01:32:05.495905 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.559115] kube-apiserver[2108]: I0127 01:32:05.502298 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.559331] kube-apiserver[2108]: I0127 01:32:05.502748 2108 client.go:354] parsed scheme: ""
kube# [ 19.559596] kube-apiserver[2108]: I0127 01:32:05.502762 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.560026] kube-apiserver[2108]: I0127 01:32:05.502822 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.560337] kube-apiserver[2108]: I0127 01:32:05.502871 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.567966] kube-apiserver[2108]: I0127 01:32:05.511525 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.617306] kube-apiserver[2108]: I0127 01:32:05.560736 2108 client.go:354] parsed scheme: ""
kube# [ 19.617501] kube-apiserver[2108]: I0127 01:32:05.560759 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.617717] kube-apiserver[2108]: I0127 01:32:05.560793 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.618047] kube-apiserver[2108]: I0127 01:32:05.561556 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.625420] kube-apiserver[2108]: I0127 01:32:05.568997 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.625747] kube-apiserver[2108]: I0127 01:32:05.569320 2108 client.go:354] parsed scheme: ""
kube# [ 19.626005] kube-apiserver[2108]: I0127 01:32:05.569337 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.626375] kube-apiserver[2108]: I0127 01:32:05.569451 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.626639] kube-apiserver[2108]: I0127 01:32:05.569481 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.634572] kube-apiserver[2108]: I0127 01:32:05.578151 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.634932] kube-apiserver[2108]: I0127 01:32:05.578479 2108 client.go:354] parsed scheme: ""
kube# [ 19.635212] kube-apiserver[2108]: I0127 01:32:05.578502 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.635565] kube-apiserver[2108]: I0127 01:32:05.578530 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.635810] kube-apiserver[2108]: I0127 01:32:05.578566 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.639667] kube-apiserver[2108]: I0127 01:32:05.583225 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.640091] kube-apiserver[2108]: I0127 01:32:05.583555 2108 client.go:354] parsed scheme: ""
kube# [ 19.640382] kube-apiserver[2108]: I0127 01:32:05.583692 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.640718] kube-apiserver[2108]: I0127 01:32:05.583719 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.640967] kube-apiserver[2108]: I0127 01:32:05.583803 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.648018] kube-apiserver[2108]: I0127 01:32:05.591575 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.649659] kube-apiserver[2108]: I0127 01:32:05.593234 2108 client.go:354] parsed scheme: ""
kube# [ 19.649753] kube-apiserver[2108]: I0127 01:32:05.593256 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.650107] kube-apiserver[2108]: I0127 01:32:05.593362 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.650482] kube-apiserver[2108]: I0127 01:32:05.593788 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.656024] kube-apiserver[2108]: I0127 01:32:05.599589 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.656383] kube-apiserver[2108]: I0127 01:32:05.599942 2108 client.go:354] parsed scheme: ""
kube# [ 19.656620] kube-apiserver[2108]: I0127 01:32:05.599963 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.656907] kube-apiserver[2108]: I0127 01:32:05.600002 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.657148] kube-apiserver[2108]: I0127 01:32:05.600041 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.663252] kube-apiserver[2108]: I0127 01:32:05.606822 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.663600] kube-apiserver[2108]: I0127 01:32:05.607153 2108 client.go:354] parsed scheme: ""
kube# [ 19.663959] kube-apiserver[2108]: I0127 01:32:05.607174 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.664229] kube-apiserver[2108]: I0127 01:32:05.607213 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.664422] kube-apiserver[2108]: I0127 01:32:05.607266 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.668330] kube-apiserver[2108]: I0127 01:32:05.611904 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.668725] kube-apiserver[2108]: I0127 01:32:05.612271 2108 client.go:354] parsed scheme: ""
kube# [ 19.669092] kube-apiserver[2108]: I0127 01:32:05.612292 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.669385] kube-apiserver[2108]: I0127 01:32:05.612323 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.669641] kube-apiserver[2108]: I0127 01:32:05.612358 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.673362] kube-apiserver[2108]: I0127 01:32:05.616929 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.673839] kube-apiserver[2108]: I0127 01:32:05.617300 2108 client.go:354] parsed scheme: ""
kube# [ 19.674119] kube-apiserver[2108]: I0127 01:32:05.617326 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.674421] kube-apiserver[2108]: I0127 01:32:05.617378 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.674662] kube-apiserver[2108]: I0127 01:32:05.617468 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.678539] kube-apiserver[2108]: I0127 01:32:05.622114 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.679046] kube-apiserver[2108]: I0127 01:32:05.622572 2108 client.go:354] parsed scheme: ""
kube# [ 19.679290] kube-apiserver[2108]: I0127 01:32:05.622592 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.679616] kube-apiserver[2108]: I0127 01:32:05.622635 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.680122] kube-apiserver[2108]: I0127 01:32:05.622693 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.683812] kube-apiserver[2108]: I0127 01:32:05.627328 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.684249] kube-apiserver[2108]: I0127 01:32:05.627760 2108 client.go:354] parsed scheme: ""
kube# [ 19.684541] kube-apiserver[2108]: I0127 01:32:05.627788 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.684753] kube-apiserver[2108]: I0127 01:32:05.627826 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.685066] kube-apiserver[2108]: I0127 01:32:05.627864 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.688875] kube-apiserver[2108]: I0127 01:32:05.632424 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.692264] kube-apiserver[2108]: I0127 01:32:05.635841 2108 client.go:354] parsed scheme: ""
kube# [ 19.692371] kube-apiserver[2108]: I0127 01:32:05.635859 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.692640] kube-apiserver[2108]: I0127 01:32:05.635898 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.693065] kube-apiserver[2108]: I0127 01:32:05.635923 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.696949] kube-apiserver[2108]: I0127 01:32:05.640515 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.698467] kube-apiserver[2108]: I0127 01:32:05.642031 2108 client.go:354] parsed scheme: ""
kube# [ 19.698610] kube-apiserver[2108]: I0127 01:32:05.642049 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.698898] kube-apiserver[2108]: I0127 01:32:05.642075 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.699180] kube-apiserver[2108]: I0127 01:32:05.642121 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.706639] kube-apiserver[2108]: I0127 01:32:05.650136 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.708851] kube-apiserver[2108]: I0127 01:32:05.652336 2108 client.go:354] parsed scheme: ""
kube# [ 19.709225] kube-apiserver[2108]: I0127 01:32:05.652350 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.709437] kube-apiserver[2108]: I0127 01:32:05.652375 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.709694] kube-apiserver[2108]: I0127 01:32:05.652434 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.713911] kube-apiserver[2108]: I0127 01:32:05.657390 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.714204] kube-apiserver[2108]: I0127 01:32:05.657766 2108 client.go:354] parsed scheme: ""
kube# [ 19.714523] kube-apiserver[2108]: I0127 01:32:05.657784 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.714833] kube-apiserver[2108]: I0127 01:32:05.657820 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.715078] kube-apiserver[2108]: I0127 01:32:05.657854 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.723672] kube-apiserver[2108]: I0127 01:32:05.667223 2108 client.go:354] parsed scheme: ""
kube# [ 19.723847] kube-apiserver[2108]: I0127 01:32:05.667247 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.724077] kube-apiserver[2108]: I0127 01:32:05.667277 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.724301] kube-apiserver[2108]: I0127 01:32:05.667308 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.729410] kube-apiserver[2108]: I0127 01:32:05.672983 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.729638] kube-apiserver[2108]: I0127 01:32:05.673197 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.730102] kube-apiserver[2108]: I0127 01:32:05.673260 2108 client.go:354] parsed scheme: ""
kube# [ 19.730365] kube-apiserver[2108]: I0127 01:32:05.673295 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.730585] kube-apiserver[2108]: I0127 01:32:05.673343 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.730869] kube-apiserver[2108]: I0127 01:32:05.673384 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.735483] kube-apiserver[2108]: I0127 01:32:05.679055 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.735893] kube-apiserver[2108]: I0127 01:32:05.679386 2108 client.go:354] parsed scheme: ""
kube# [ 19.736159] kube-apiserver[2108]: I0127 01:32:05.679491 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.736428] kube-apiserver[2108]: I0127 01:32:05.679537 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.736642] kube-apiserver[2108]: I0127 01:32:05.679568 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.744684] kube-apiserver[2108]: I0127 01:32:05.688257 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.745031] kube-apiserver[2108]: I0127 01:32:05.688597 2108 client.go:354] parsed scheme: ""
kube# [ 19.745291] kube-apiserver[2108]: I0127 01:32:05.688614 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.745561] kube-apiserver[2108]: I0127 01:32:05.688638 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.745965] kube-apiserver[2108]: I0127 01:32:05.688673 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.753750] kube-apiserver[2108]: I0127 01:32:05.697322 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.754223] kube-apiserver[2108]: I0127 01:32:05.697786 2108 client.go:354] parsed scheme: ""
kube# [ 19.754470] kube-apiserver[2108]: I0127 01:32:05.697817 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.754793] kube-apiserver[2108]: I0127 01:32:05.697860 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.755088] kube-apiserver[2108]: I0127 01:32:05.697904 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.758992] kube-apiserver[2108]: I0127 01:32:05.702565 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.759326] kube-apiserver[2108]: I0127 01:32:05.702899 2108 client.go:354] parsed scheme: ""
kube# [ 19.759598] kube-apiserver[2108]: I0127 01:32:05.702927 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.759949] kube-apiserver[2108]: I0127 01:32:05.702984 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.760174] kube-apiserver[2108]: I0127 01:32:05.703021 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.766153] kube-apiserver[2108]: I0127 01:32:05.709516 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.767464] kube-apiserver[2108]: I0127 01:32:05.711038 2108 client.go:354] parsed scheme: ""
kube# [ 19.767574] kube-apiserver[2108]: I0127 01:32:05.711053 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.767888] kube-apiserver[2108]: I0127 01:32:05.711082 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.770260] kube-apiserver[2108]: I0127 01:32:05.713837 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.775867] kube-apiserver[2108]: I0127 01:32:05.719380 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.780273] kube-apiserver[2108]: I0127 01:32:05.723799 2108 client.go:354] parsed scheme: ""
kube# [ 19.780527] kube-apiserver[2108]: I0127 01:32:05.723825 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.780887] kube-apiserver[2108]: I0127 01:32:05.723866 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.781342] kube-apiserver[2108]: I0127 01:32:05.723943 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.789858] kube-apiserver[2108]: I0127 01:32:05.733315 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.792306] kube-apiserver[2108]: I0127 01:32:05.735842 2108 client.go:354] parsed scheme: ""
kube# [ 19.792536] kube-apiserver[2108]: I0127 01:32:05.735864 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.792884] kube-apiserver[2108]: I0127 01:32:05.735896 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.793249] kube-apiserver[2108]: I0127 01:32:05.735970 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.801073] kube-apiserver[2108]: I0127 01:32:05.744584 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.802622] kube-apiserver[2108]: I0127 01:32:05.746190 2108 client.go:354] parsed scheme: ""
kube# [ 19.802756] kube-apiserver[2108]: I0127 01:32:05.746207 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.803159] kube-apiserver[2108]: I0127 01:32:05.746243 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.805495] kube-apiserver[2108]: I0127 01:32:05.749070 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.811528] kube-apiserver[2108]: I0127 01:32:05.755070 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.813104] kube-apiserver[2108]: I0127 01:32:05.756680 2108 client.go:354] parsed scheme: ""
kube# [ 19.813253] kube-apiserver[2108]: I0127 01:32:05.756724 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.814932] kube-apiserver[2108]: I0127 01:32:05.758457 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.815268] kube-apiserver[2108]: I0127 01:32:05.758557 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.822295] kube-apiserver[2108]: I0127 01:32:05.765820 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.822615] kube-apiserver[2108]: I0127 01:32:05.766184 2108 client.go:354] parsed scheme: ""
kube# [ 19.822857] kube-apiserver[2108]: I0127 01:32:05.766203 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.823108] kube-apiserver[2108]: I0127 01:32:05.766236 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.823416] kube-apiserver[2108]: I0127 01:32:05.766263 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.832402] kube-apiserver[2108]: I0127 01:32:05.775944 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.832684] kube-apiserver[2108]: I0127 01:32:05.776221 2108 client.go:354] parsed scheme: ""
kube# [ 19.833046] kube-apiserver[2108]: I0127 01:32:05.776248 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.833269] kube-apiserver[2108]: I0127 01:32:05.776277 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.833566] kube-apiserver[2108]: I0127 01:32:05.776314 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.838284] kube-apiserver[2108]: I0127 01:32:05.781857 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.838596] kube-apiserver[2108]: I0127 01:32:05.782175 2108 client.go:354] parsed scheme: ""
kube# [ 19.839003] kube-apiserver[2108]: I0127 01:32:05.782196 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.839324] kube-apiserver[2108]: I0127 01:32:05.782226 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.839596] kube-apiserver[2108]: I0127 01:32:05.782324 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.847712] kube-apiserver[2108]: I0127 01:32:05.791271 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.851917] kube-apiserver[2108]: I0127 01:32:05.795488 2108 client.go:354] parsed scheme: ""
kube# [ 19.852019] kube-apiserver[2108]: I0127 01:32:05.795512 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.852283] kube-apiserver[2108]: I0127 01:32:05.795545 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.852566] kube-apiserver[2108]: I0127 01:32:05.795579 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.861013] kube-apiserver[2108]: I0127 01:32:05.804564 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.863689] kube-apiserver[2108]: I0127 01:32:05.807266 2108 client.go:354] parsed scheme: ""
kube# [ 19.863904] kube-apiserver[2108]: I0127 01:32:05.807284 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.864309] kube-apiserver[2108]: I0127 01:32:05.807317 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.864586] kube-apiserver[2108]: I0127 01:32:05.807357 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.869443] kube-apiserver[2108]: I0127 01:32:05.812958 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.870041] kube-apiserver[2108]: I0127 01:32:05.813589 2108 client.go:354] parsed scheme: ""
kube# [ 19.870321] kube-apiserver[2108]: I0127 01:32:05.813625 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.870655] kube-apiserver[2108]: I0127 01:32:05.813717 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.871079] kube-apiserver[2108]: I0127 01:32:05.813778 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.875023] kube-apiserver[2108]: I0127 01:32:05.818598 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.875362] kube-apiserver[2108]: I0127 01:32:05.818926 2108 client.go:354] parsed scheme: ""
kube# [ 19.875737] kube-apiserver[2108]: I0127 01:32:05.818966 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.876007] kube-apiserver[2108]: I0127 01:32:05.819013 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.876270] kube-apiserver[2108]: I0127 01:32:05.819041 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.883970] kube-apiserver[2108]: I0127 01:32:05.827521 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.884321] kube-apiserver[2108]: I0127 01:32:05.827884 2108 client.go:354] parsed scheme: ""
kube# [ 19.884557] kube-apiserver[2108]: I0127 01:32:05.827907 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.884899] kube-apiserver[2108]: I0127 01:32:05.827951 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.885124] kube-apiserver[2108]: I0127 01:32:05.828007 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.893181] kube-apiserver[2108]: I0127 01:32:05.836758 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.893423] kube-apiserver[2108]: I0127 01:32:05.836969 2108 client.go:354] parsed scheme: ""
kube# [ 19.893644] kube-apiserver[2108]: I0127 01:32:05.836989 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.894029] kube-apiserver[2108]: I0127 01:32:05.837043 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.894345] kube-apiserver[2108]: I0127 01:32:05.837231 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.898533] kube-apiserver[2108]: I0127 01:32:05.842112 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.898926] kube-apiserver[2108]: I0127 01:32:05.842490 2108 client.go:354] parsed scheme: ""
kube# [ 19.899173] kube-apiserver[2108]: I0127 01:32:05.842520 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.899440] kube-apiserver[2108]: I0127 01:32:05.842576 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.899732] kube-apiserver[2108]: I0127 01:32:05.842621 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.904103] kube-apiserver[2108]: I0127 01:32:05.847662 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.905573] kube-apiserver[2108]: I0127 01:32:05.849151 2108 client.go:354] parsed scheme: ""
kube# [ 19.905685] kube-apiserver[2108]: I0127 01:32:05.849168 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.906041] kube-apiserver[2108]: I0127 01:32:05.849198 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.908368] kube-apiserver[2108]: I0127 01:32:05.851947 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.913926] kube-apiserver[2108]: I0127 01:32:05.857495 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.914319] kube-apiserver[2108]: I0127 01:32:05.857900 2108 client.go:354] parsed scheme: ""
kube# [ 19.914606] kube-apiserver[2108]: I0127 01:32:05.857918 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.915032] kube-apiserver[2108]: I0127 01:32:05.857954 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.915275] kube-apiserver[2108]: I0127 01:32:05.857997 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.918932] kube-apiserver[2108]: I0127 01:32:05.862510 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.922082] kube-apiserver[2108]: I0127 01:32:05.865663 2108 client.go:354] parsed scheme: ""
kube# [ 19.922186] kube-apiserver[2108]: I0127 01:32:05.865678 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.922454] kube-apiserver[2108]: I0127 01:32:05.865701 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.922728] kube-apiserver[2108]: I0127 01:32:05.865728 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.930240] kube-apiserver[2108]: I0127 01:32:05.873814 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.930647] kube-apiserver[2108]: I0127 01:32:05.874192 2108 client.go:354] parsed scheme: ""
kube# [ 19.931011] kube-apiserver[2108]: I0127 01:32:05.874216 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.931259] kube-apiserver[2108]: I0127 01:32:05.874245 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.931806] kube-apiserver[2108]: I0127 01:32:05.874451 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.937539] kube-apiserver[2108]: I0127 01:32:05.881111 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.937954] kube-apiserver[2108]: I0127 01:32:05.881535 2108 client.go:354] parsed scheme: ""
kube# [ 19.938150] kube-apiserver[2108]: I0127 01:32:05.881552 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.938423] kube-apiserver[2108]: I0127 01:32:05.881635 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.938630] kube-apiserver[2108]: I0127 01:32:05.881807 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.945394] kube-apiserver[2108]: I0127 01:32:05.888973 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.945891] kube-apiserver[2108]: I0127 01:32:05.889457 2108 client.go:354] parsed scheme: ""
kube# [ 19.946121] kube-apiserver[2108]: I0127 01:32:05.889480 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.946390] kube-apiserver[2108]: I0127 01:32:05.889534 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.946681] kube-apiserver[2108]: I0127 01:32:05.889609 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.950891] kube-apiserver[2108]: I0127 01:32:05.894447 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.952394] kube-apiserver[2108]: I0127 01:32:05.895967 2108 client.go:354] parsed scheme: ""
kube# [ 19.952517] kube-apiserver[2108]: I0127 01:32:05.895984 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.952838] kube-apiserver[2108]: I0127 01:32:05.896010 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.953042] kube-apiserver[2108]: I0127 01:32:05.896041 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.960717] kube-apiserver[2108]: I0127 01:32:05.904228 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.963139] kube-apiserver[2108]: I0127 01:32:05.906712 2108 client.go:354] parsed scheme: ""
kube# [ 19.963262] kube-apiserver[2108]: I0127 01:32:05.906734 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.963567] kube-apiserver[2108]: I0127 01:32:05.906773 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.963850] kube-apiserver[2108]: I0127 01:32:05.906813 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.968070] kube-apiserver[2108]: I0127 01:32:05.911637 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.968453] kube-apiserver[2108]: I0127 01:32:05.912018 2108 client.go:354] parsed scheme: ""
kube# [ 19.968874] kube-apiserver[2108]: I0127 01:32:05.912049 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.969215] kube-apiserver[2108]: I0127 01:32:05.912089 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.969486] kube-apiserver[2108]: I0127 01:32:05.912259 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.974364] kube-apiserver[2108]: I0127 01:32:05.917941 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.974696] kube-apiserver[2108]: I0127 01:32:05.918269 2108 client.go:354] parsed scheme: ""
kube# [ 19.974969] kube-apiserver[2108]: I0127 01:32:05.918286 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.975243] kube-apiserver[2108]: I0127 01:32:05.918326 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.975503] kube-apiserver[2108]: I0127 01:32:05.918369 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.981440] kube-apiserver[2108]: I0127 01:32:05.924993 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.981894] kube-apiserver[2108]: I0127 01:32:05.925387 2108 client.go:354] parsed scheme: ""
kube# [ 19.982238] kube-apiserver[2108]: I0127 01:32:05.925459 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.982562] kube-apiserver[2108]: I0127 01:32:05.925522 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.982883] kube-apiserver[2108]: I0127 01:32:05.925565 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.986569] kube-apiserver[2108]: I0127 01:32:05.930134 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.987477] kube-apiserver[2108]: I0127 01:32:05.931042 2108 client.go:354] parsed scheme: ""
kube# [ 19.987810] kube-apiserver[2108]: I0127 01:32:05.931062 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.988145] kube-apiserver[2108]: I0127 01:32:05.931108 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.988400] kube-apiserver[2108]: I0127 01:32:05.931147 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.992210] kube-apiserver[2108]: I0127 01:32:05.935780 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 19.992474] kube-apiserver[2108]: I0127 01:32:05.936044 2108 client.go:354] parsed scheme: ""
kube# [ 19.992672] kube-apiserver[2108]: I0127 01:32:05.936064 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 19.993046] kube-apiserver[2108]: I0127 01:32:05.936088 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 19.995996] kube-apiserver[2108]: I0127 01:32:05.939564 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 20.004696] kube-apiserver[2108]: I0127 01:32:05.948030 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 20.005013] kube-apiserver[2108]: I0127 01:32:05.948535 2108 client.go:354] parsed scheme: ""
kube# [ 20.005365] kube-apiserver[2108]: I0127 01:32:05.948558 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 20.005574] kube-apiserver[2108]: I0127 01:32:05.948602 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 20.005877] kube-apiserver[2108]: I0127 01:32:05.948669 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 20.009691] kube-apiserver[2108]: I0127 01:32:05.953229 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 20.010039] kube-apiserver[2108]: I0127 01:32:05.953613 2108 client.go:354] parsed scheme: ""
kube# [ 20.010313] kube-apiserver[2108]: I0127 01:32:05.953645 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 20.010570] kube-apiserver[2108]: I0127 01:32:05.953694 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 20.010800] kube-apiserver[2108]: I0127 01:32:05.953734 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 20.014983] kube-apiserver[2108]: I0127 01:32:05.958545 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 20.015322] kube-apiserver[2108]: I0127 01:32:05.958879 2108 client.go:354] parsed scheme: ""
kube# [ 20.015559] kube-apiserver[2108]: I0127 01:32:05.958911 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 20.015911] kube-apiserver[2108]: I0127 01:32:05.958971 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 20.016134] kube-apiserver[2108]: I0127 01:32:05.959018 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 20.020151] kube-apiserver[2108]: I0127 01:32:05.963722 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 20.020576] kube-apiserver[2108]: I0127 01:32:05.964140 2108 client.go:354] parsed scheme: ""
kube# [ 20.020986] kube-apiserver[2108]: I0127 01:32:05.964156 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 20.021255] kube-apiserver[2108]: I0127 01:32:05.964194 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 20.021472] kube-apiserver[2108]: I0127 01:32:05.964219 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 20.025508] kube-apiserver[2108]: I0127 01:32:05.969082 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
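[editor's note] The repeated "parsed scheme" / "fallback to default scheme" / ccResolverWrapper / balancerWrapper lines above come from the etcd v3 client inside the apiserver, which opens a separate gRPC connection per storage group, re-logging the same resolver handshake each time. A minimal, hypothetical Go sketch of the generic grpc-go behavior behind those two lines (not the apiserver's own code):

    package main

    import "google.golang.org/grpc"

    func main() {
        // "etcd.local:2379" carries no URI scheme, so grpc-go parses scheme ""
        // and, finding it unregistered, falls back to the default passthrough
        // resolver -- the exact pair of log lines repeated above.
        conn, err := grpc.Dial("etcd.local:2379", grpc.WithInsecure())
        if err != nil {
            panic(err)
        }
        defer conn.Close()
    }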
kube# [ 20.092344] kube-apiserver[2108]: W0127 01:32:06.035804 2108 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
kube# [ 20.096903] kube-apiserver[2108]: W0127 01:32:06.040477 2108 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
kube# [ 20.099097] kube-apiserver[2108]: W0127 01:32:06.042675 2108 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
kube# [ 20.099696] kube-apiserver[2108]: W0127 01:32:06.043272 2108 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
kube# [ 20.103072] kube-apiserver[2108]: W0127 01:32:06.046642 2108 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
kube# [ 20.564923] kube-apiserver[2108]: E0127 01:32:06.507966 2108 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 20.565102] kube-apiserver[2108]: E0127 01:32:06.508008 2108 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 20.565385] kube-apiserver[2108]: E0127 01:32:06.508026 2108 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 20.565611] kube-apiserver[2108]: E0127 01:32:06.508044 2108 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 20.565865] kube-apiserver[2108]: E0127 01:32:06.508067 2108 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 20.566126] kube-apiserver[2108]: E0127 01:32:06.508085 2108 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 20.566372] kube-apiserver[2108]: E0127 01:32:06.508099 2108 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 20.566580] kube-apiserver[2108]: E0127 01:32:06.508113 2108 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 20.566836] kube-apiserver[2108]: E0127 01:32:06.508149 2108 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 20.567034] kube-apiserver[2108]: E0127 01:32:06.508184 2108 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 20.567243] kube-apiserver[2108]: E0127 01:32:06.508207 2108 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 20.567444] kube-apiserver[2108]: E0127 01:32:06.508223 2108 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
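[editor's note] The block of E-level lines above is harmless startup noise: the admission_quota_controller workqueue metrics get registered twice against the same Prometheus registry, and the second attempt is rejected. A hedged sketch reproducing that exact error string with client_golang (illustrative only; the metric name is borrowed from the log):

    package main

    import (
        "fmt"

        "github.com/prometheus/client_golang/prometheus"
    )

    func main() {
        opts := prometheus.GaugeOpts{Name: "admission_quota_controller_depth"}
        // The first registration succeeds...
        prometheus.MustRegister(prometheus.NewGauge(opts))
        // ...a second collector with an identical descriptor is rejected.
        if err := prometheus.Register(prometheus.NewGauge(opts)); err != nil {
            fmt.Println(err) // "duplicate metrics collector registration attempted"
        }
    }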
kube# [ 20.567657] kube-apiserver[2108]: I0127 01:32:06.508245 2108 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
kube# [ 20.568000] kube-apiserver[2108]: I0127 01:32:06.508255 2108 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
kube# [ 20.568211] kube-apiserver[2108]: I0127 01:32:06.509301 2108 client.go:354] parsed scheme: ""
kube# [ 20.568421] kube-apiserver[2108]: I0127 01:32:06.509314 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 20.568631] kube-apiserver[2108]: I0127 01:32:06.509357 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 20.569017] kube-apiserver[2108]: I0127 01:32:06.509442 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 20.570934] kube-apiserver[2108]: I0127 01:32:06.514334 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 20.571322] kube-apiserver[2108]: I0127 01:32:06.514895 2108 client.go:354] parsed scheme: ""
kube# [ 20.571671] kube-apiserver[2108]: I0127 01:32:06.514918 2108 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 20.572056] kube-apiserver[2108]: I0127 01:32:06.515003 2108 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 20.572310] kube-apiserver[2108]: I0127 01:32:06.515055 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 20.576352] kube-apiserver[2108]: I0127 01:32:06.519883 2108 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 21.618748] kube-apiserver[2108]: I0127 01:32:07.561838 2108 secure_serving.go:116] Serving securely on [::]:443
kube# [ 21.618999] kube-apiserver[2108]: I0127 01:32:07.561888 2108 controller.go:176] Shutting down kubernetes service endpoint reconciler
kube# [ 21.619322] kube-apiserver[2108]: I0127 01:32:07.562021 2108 autoregister_controller.go:140] Starting autoregister controller
kube# [ 21.619586] kube-apiserver[2108]: I0127 01:32:07.562045 2108 cache.go:32] Waiting for caches to sync for autoregister controller
kube# [ 21.619852] kube-apiserver[2108]: I0127 01:32:07.562156 2108 crdregistration_controller.go:112] Starting crd-autoregister controller
kube# [ 21.620126] kube-apiserver[2108]: I0127 01:32:07.562180 2108 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
kube# [ 21.620335] kube-apiserver[2108]: I0127 01:32:07.562193 2108 apiservice_controller.go:94] Starting APIServiceRegistrationController
kube# [ 21.620537] kube-apiserver[2108]: I0127 01:32:07.562216 2108 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
kube# [ 21.620737] kube-apiserver[2108]: E0127 01:32:07.562216 2108 cache.go:35] Unable to sync caches for autoregister controller
kube# [ 21.620991] kube-apiserver[2108]: I0127 01:32:07.562252 2108 controller.go:81] Starting OpenAPI AggregationController
kube# [ 21.621188] kube-apiserver[2108]: I0127 01:32:07.562277 2108 controller.go:87] Shutting down OpenAPI AggregationController
kube# [ 21.621394] kube-apiserver[2108]: I0127 01:32:07.562284 2108 autoregister_controller.go:145] Shutting down autoregister controller
kube# [ 21.621632] kube-apiserver[2108]: E0127 01:32:07.562299 2108 cache.go:35] Unable to sync caches for APIServiceRegistrationController controller
kube# [ 21.622090] kube-apiserver[2108]: E0127 01:32:07.562326 2108 controller_utils.go:1032] unable to sync caches for crd-autoregister controller
kube# [ 21.622375] kube-apiserver[2108]: F0127 01:32:07.561979 2108 hooks.go:195] PostStartHook "crd-informer-synced" failed: timed out waiting for the condition
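[editor's note] This is the actual failure in this run: the apiserver starts serving on [::]:443, but the "crd-informer-synced" PostStartHook cannot sync its informer caches in time, the F-level (fatal) line kills PID 2108, and systemd restarts the service below as PID 2253. "timed out waiting for the condition" is the stock error from the k8s.io/apimachinery wait helpers; a minimal hypothetical sketch of how that message arises:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // Stand-in for the hook's HasSynced check, which never becomes true
        // here because the CRD informer cache cannot reach a synced state.
        synced := func() (bool, error) { return false, nil }
        err := wait.PollImmediate(100*time.Millisecond, time.Second, synced)
        fmt.Println(err) // "timed out waiting for the condition"
    }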
kube# [ 21.707949] kube-scheduler[2133]: E0127 01:32:07.650939 2133 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://192.168.1.1/api/v1/services?limit=500&resourceVersion=0: EOF
kube# [ 21.708114] kube-scheduler[2133]: E0127 01:32:07.650982 2133 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://192.168.1.1/api/v1/persistentvolumes?limit=500&resourceVersion=0: EOF
kube# [ 21.708331] kube-scheduler[2133]: E0127 01:32:07.650998 2133 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://192.168.1.1/api/v1/nodes?limit=500&resourceVersion=0: EOF
kube# [ 21.708601] kube-scheduler[2133]: E0127 01:32:07.651045 2133 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://192.168.1.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0: EOF
kube# [ 21.708920] kube-scheduler[2133]: E0127 01:32:07.651059 2133 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://192.168.1.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: EOF
kube# [ 21.709223] kube-scheduler[2133]: E0127 01:32:07.651067 2133 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https://192.168.1.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0: EOF
kube# [ 21.709515] kube-scheduler[2133]: E0127 01:32:07.651053 2133 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: Get https://192.168.1.1/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: EOF
kube# [ 21.709839] kube-scheduler[2133]: E0127 01:32:07.651129 2133 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: Get https://192.168.1.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: EOF
kube# [ 21.710007] kube-scheduler[2133]: E0127 01:32:07.651564 2133 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https://192.168.1.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: EOF
kube# The connection to the server 192.168.1.1 was refused - did you specify the right host or port?
kube# [ 21.710716] kube-proxy[2110]: W0127 01:32:07.651164 2110 node.go:113] Failed to retrieve node info: Get https://192.168.1.1/api/v1/nodes/kube: EOF
kube# [ 21.711042] kube-proxy[2110]: I0127 01:32:07.651195 2110 server_others.go:143] Using iptables Proxier.
kube# [ 21.711242] kube-proxy[2110]: W0127 01:32:07.651265 2110 proxier.go:316] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
kube# [ 21.728139] systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=255/EXCEPTION
kube# [ 21.729553] systemd[1]: kube-apiserver.service: Failed with result 'exit-code'.
kube: exit status 1
kube# [ 21.730943] kube-proxy[2110]: I0127 01:32:07.671625 2110 server.go:534] Version: v1.15.6
kube# [ 21.732401] systemd[1]: Stopped Kubernetes APIServer Service.
kube# [ 21.733415] systemd[1]: kube-apiserver.service: Consumed 3.576s CPU time, received 251.3K IP traffic, sent 229.1K IP traffic.
kube# [ 21.736081] systemd[1]: Started Kubernetes APIServer Service.
kube# [ 21.738041] systemd[1]: Starting Kubernetes Kubelet Service...
kube# [ 21.740336] certmgr[1979]: 2020/01/27 01:32:07 [INFO] manager: certificate successfully processed
kube# [ 21.742536] kube-proxy[2110]: I0127 01:32:07.683286 2110 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 524288
(3.47 seconds)
kube# [ 21.744589] kube-proxy[2110]: I0127 01:32:07.683345 2110 conntrack.go:52] Setting nf_conntrack_max to 524288
kube# [ 21.746705] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[2256]: Seeding docker image: /nix/store/zfkrn9hmwiay3fwj7ilwv52rd6myn4i1-docker-image-pause.tar.gz
kube# [ 21.756200] systemd[1]: Started Kubernetes systemd probe.
kube# [ 21.762180] kube-proxy[2110]: I0127 01:32:07.705675 2110 conntrack.go:83] Setting conntrack hashsize to 131072
kube# [ 21.766943] systemd[1]: run-rd0e3c8f4e4a544d8afa70dcd2989fc8d.scope: Succeeded.
kube# [ 21.769923] kube-proxy[2110]: I0127 01:32:07.713495 2110 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
kube# [ 21.772306] kube-proxy[2110]: I0127 01:32:07.713534 2110 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
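[editor's note] The kube-proxy lines above show its startup conntrack tuning: nf_conntrack_max to 524288, hashsize to 131072, and the two TCP timeout sysctls. Under the hood these are just writes to /proc/sys; a hedged sketch of the equivalent operation (not kube-proxy's actual implementation, and it requires root):

    package main

    import "os"

    func main() {
        // Equivalent of `sysctl -w net.netfilter.nf_conntrack_max=524288`,
        // matching the first kube-proxy conntrack line above.
        err := os.WriteFile("/proc/sys/net/netfilter/nf_conntrack_max",
            []byte("524288"), 0o644)
        if err != nil {
            panic(err)
        }
    }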
kube# [ 21.774387] kube-proxy[2110]: I0127 01:32:07.713668 2110 config.go:96] Starting endpoints config controller
kube# [ 21.776365] kube-proxy[2110]: I0127 01:32:07.713722 2110 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
kube# [ 21.778493] kube-proxy[2110]: I0127 01:32:07.713729 2110 config.go:187] Starting service config controller
kube# [ 21.780724] kube-proxy[2110]: I0127 01:32:07.713786 2110 controller_utils.go:1029] Waiting for caches to sync for service config controller
kube# [ 21.783570] kube-proxy[2110]: E0127 01:32:07.726490 2110 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://192.168.1.1/api/v1/services?labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0: dial tcp 192.168.1.1:443: connect: connection refused
kube# [ 21.787123] kube-proxy[2110]: E0127 01:32:07.726940 2110 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://192.168.1.1/api/v1/endpoints?labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0: dial tcp 192.168.1.1:443: connect: connection refused
kube# [ 21.793015] kube-proxy[2110]: E0127 01:32:07.736565 2110 event.go:249] Unable to write event: 'Post https://192.168.1.1/api/v1/namespaces/default/events: dial tcp 192.168.1.1:443: connect: connection refused' (may retry after sleeping)
kube# [ 21.809628] kube-apiserver[2253]: Flag --insecure-bind-address has been deprecated, This flag will be removed in a future version.
kube# [ 21.810970] kube-apiserver[2253]: Flag --insecure-port has been deprecated, This flag will be removed in a future version.
kube# [ 21.812534] kube-apiserver[2253]: I0127 01:32:07.752942 2253 server.go:560] external host was not specified, using 192.168.1.1
kube# [ 21.813927] kube-apiserver[2253]: I0127 01:32:07.753141 2253 server.go:147] Version: v1.15.6
kube# [ 22.025937] kube-apiserver[2253]: I0127 01:32:07.969472 2253 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
kube# [ 22.029084] kube-apiserver[2253]: I0127 01:32:07.969516 2253 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
kube# [ 22.031905] kube-apiserver[2253]: E0127 01:32:07.969987 2253 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 22.034197] kube-apiserver[2253]: E0127 01:32:07.970016 2253 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 22.034293] kube-apiserver[2253]: E0127 01:32:07.970040 2253 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 22.034524] kube-apiserver[2253]: E0127 01:32:07.970064 2253 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 22.034701] kube-apiserver[2253]: E0127 01:32:07.970088 2253 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 22.035067] kube-apiserver[2253]: E0127 01:32:07.970121 2253 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 22.035278] kube-apiserver[2253]: E0127 01:32:07.970152 2253 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 22.035492] kube-apiserver[2253]: E0127 01:32:07.970176 2253 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 22.035740] kube-apiserver[2253]: E0127 01:32:07.970963 2253 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 22.036003] kube-apiserver[2253]: E0127 01:32:07.971072 2253 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 22.036221] kube-apiserver[2253]: E0127 01:32:07.971114 2253 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 22.036420] kube-apiserver[2253]: E0127 01:32:07.971140 2253 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 22.036661] kube-apiserver[2253]: I0127 01:32:07.971177 2253 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
kube# [ 22.037048] kube-apiserver[2253]: I0127 01:32:07.971195 2253 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
kube# [ 22.037183] kube-apiserver[2253]: I0127 01:32:07.973891 2253 client.go:354] parsed scheme: ""
kube# [ 22.037415] kube-apiserver[2253]: I0127 01:32:07.973918 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.037616] kube-apiserver[2253]: I0127 01:32:07.974045 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.037884] kube-apiserver[2253]: I0127 01:32:07.974113 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.043127] kube-apiserver[2253]: I0127 01:32:07.986672 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.043403] kube-apiserver[2253]: I0127 01:32:07.986976 2253 client.go:354] parsed scheme: ""
kube# [ 22.043655] kube-apiserver[2253]: I0127 01:32:07.987001 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.044046] kube-apiserver[2253]: I0127 01:32:07.987045 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.044326] kube-apiserver[2253]: I0127 01:32:07.987090 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.048322] kube-apiserver[2253]: I0127 01:32:07.991875 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.071705] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[2256]: Loaded image: pause:latest
kube# [ 22.074914] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[2256]: Seeding docker image: /nix/store/ggrzs3gzv69xzk02ckzijc2caqv738kk-docker-image-coredns-coredns-1.5.0.tar
kube# [ 22.078974] kube-apiserver[2253]: I0127 01:32:08.022544 2253 master.go:233] Using reconciler: lease
kube# [ 22.079253] kube-apiserver[2253]: I0127 01:32:08.022829 2253 client.go:354] parsed scheme: ""
kube# [ 22.079576] kube-apiserver[2253]: I0127 01:32:08.022848 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.080069] kube-apiserver[2253]: I0127 01:32:08.022883 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.080374] kube-apiserver[2253]: I0127 01:32:08.022922 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.084559] kube-apiserver[2253]: I0127 01:32:08.028046 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.085676] kube-apiserver[2253]: I0127 01:32:08.029212 2253 client.go:354] parsed scheme: ""
kube# [ 22.086096] kube-apiserver[2253]: I0127 01:32:08.029236 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.086480] kube-apiserver[2253]: I0127 01:32:08.029270 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.086755] kube-apiserver[2253]: I0127 01:32:08.029352 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.090734] kube-apiserver[2253]: I0127 01:32:08.034288 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.091464] kube-apiserver[2253]: I0127 01:32:08.035009 2253 client.go:354] parsed scheme: ""
kube# [ 22.091837] kube-apiserver[2253]: I0127 01:32:08.035027 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.092138] kube-apiserver[2253]: I0127 01:32:08.035065 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.092403] kube-apiserver[2253]: I0127 01:32:08.035092 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.097926] kube-apiserver[2253]: I0127 01:32:08.041346 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.098175] kube-apiserver[2253]: I0127 01:32:08.041650 2253 client.go:354] parsed scheme: ""
kube# [ 22.098552] kube-apiserver[2253]: I0127 01:32:08.041672 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.099142] kube-apiserver[2253]: I0127 01:32:08.042132 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.099534] kube-apiserver[2253]: I0127 01:32:08.042167 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.103922] kube-apiserver[2253]: I0127 01:32:08.047307 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.104506] kube-apiserver[2253]: I0127 01:32:08.047731 2253 client.go:354] parsed scheme: ""
kube# [ 22.105038] kube-apiserver[2253]: I0127 01:32:08.047802 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.105357] kube-apiserver[2253]: I0127 01:32:08.047888 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.105628] kube-apiserver[2253]: I0127 01:32:08.047930 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.110263] kube-apiserver[2253]: I0127 01:32:08.053819 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.110600] kube-apiserver[2253]: I0127 01:32:08.054115 2253 client.go:354] parsed scheme: ""
kube# [ 22.111042] kube-apiserver[2253]: I0127 01:32:08.054129 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.111379] kube-apiserver[2253]: I0127 01:32:08.054152 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.111598] kube-apiserver[2253]: I0127 01:32:08.054191 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.115581] kube-apiserver[2253]: I0127 01:32:08.059133 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.116124] kube-apiserver[2253]: I0127 01:32:08.059581 2253 client.go:354] parsed scheme: ""
kube# [ 22.116529] kube-apiserver[2253]: I0127 01:32:08.059602 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.116822] kube-apiserver[2253]: I0127 01:32:08.059665 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.117188] kube-apiserver[2253]: I0127 01:32:08.059712 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.121548] kube-apiserver[2253]: I0127 01:32:08.065027 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.121999] kube-apiserver[2253]: I0127 01:32:08.065563 2253 client.go:354] parsed scheme: ""
kube# [ 22.122217] kube-apiserver[2253]: I0127 01:32:08.065584 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.122560] kube-apiserver[2253]: I0127 01:32:08.065614 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.122868] kube-apiserver[2253]: I0127 01:32:08.065691 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.128087] kube-apiserver[2253]: I0127 01:32:08.071655 2253 client.go:354] parsed scheme: ""
kube# [ 22.128244] kube-apiserver[2253]: I0127 01:32:08.071672 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.128710] kube-apiserver[2253]: I0127 01:32:08.071697 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.129072] kube-apiserver[2253]: I0127 01:32:08.071744 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.129355] kube-apiserver[2253]: I0127 01:32:08.071883 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.134445] kube-apiserver[2253]: I0127 01:32:08.077987 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.134970] kube-apiserver[2253]: I0127 01:32:08.078498 2253 client.go:354] parsed scheme: ""
kube# [ 22.135349] kube-apiserver[2253]: I0127 01:32:08.078518 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.135719] kube-apiserver[2253]: I0127 01:32:08.078570 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.136103] kube-apiserver[2253]: I0127 01:32:08.078618 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.140331] kube-apiserver[2253]: I0127 01:32:08.083874 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.140862] kube-apiserver[2253]: I0127 01:32:08.084332 2253 client.go:354] parsed scheme: ""
kube# [ 22.141281] kube-apiserver[2253]: I0127 01:32:08.084352 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.141598] kube-apiserver[2253]: I0127 01:32:08.084466 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.142107] kube-apiserver[2253]: I0127 01:32:08.084525 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.146993] kube-apiserver[2253]: I0127 01:32:08.090553 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.147439] kube-apiserver[2253]: I0127 01:32:08.090954 2253 client.go:354] parsed scheme: ""
kube# [ 22.147717] kube-apiserver[2253]: I0127 01:32:08.091032 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.148075] kube-apiserver[2253]: I0127 01:32:08.091068 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.148367] kube-apiserver[2253]: I0127 01:32:08.091243 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.154045] kube-apiserver[2253]: I0127 01:32:08.097588 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.154537] kube-apiserver[2253]: I0127 01:32:08.098087 2253 client.go:354] parsed scheme: ""
kube# [ 22.154899] kube-apiserver[2253]: I0127 01:32:08.098111 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.155284] kube-apiserver[2253]: I0127 01:32:08.098159 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.155522] kube-apiserver[2253]: I0127 01:32:08.098200 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.161200] kube-apiserver[2253]: I0127 01:32:08.104731 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.161398] kube-apiserver[2253]: I0127 01:32:08.104863 2253 client.go:354] parsed scheme: ""
kube# [ 22.161670] kube-apiserver[2253]: I0127 01:32:08.104904 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.162129] kube-apiserver[2253]: I0127 01:32:08.104949 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.162321] kube-apiserver[2253]: I0127 01:32:08.105010 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.170638] kube-apiserver[2253]: I0127 01:32:08.114196 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.171004] kube-apiserver[2253]: I0127 01:32:08.114560 2253 client.go:354] parsed scheme: ""
kube# [ 22.171430] kube-apiserver[2253]: I0127 01:32:08.114600 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.171753] kube-apiserver[2253]: I0127 01:32:08.114627 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.172049] kube-apiserver[2253]: I0127 01:32:08.114658 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.177155] kube-apiserver[2253]: I0127 01:32:08.120717 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.178028] kube-apiserver[2253]: I0127 01:32:08.121388 2253 client.go:354] parsed scheme: ""
kube# [ 22.178415] kube-apiserver[2253]: I0127 01:32:08.121497 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.178923] kube-apiserver[2253]: I0127 01:32:08.121575 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.179275] kube-apiserver[2253]: I0127 01:32:08.121647 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.182945] kube-apiserver[2253]: I0127 01:32:08.126510 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.183282] kube-apiserver[2253]: I0127 01:32:08.126796 2253 client.go:354] parsed scheme: ""
kube# [ 22.183894] kube-apiserver[2253]: I0127 01:32:08.126854 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.184309] kube-apiserver[2253]: I0127 01:32:08.126906 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.184650] kube-apiserver[2253]: I0127 01:32:08.126950 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.189666] kube-apiserver[2253]: I0127 01:32:08.133229 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.190301] kube-apiserver[2253]: I0127 01:32:08.133830 2253 client.go:354] parsed scheme: ""
kube# [ 22.190701] kube-apiserver[2253]: I0127 01:32:08.133865 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.191101] kube-apiserver[2253]: I0127 01:32:08.133906 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.191381] kube-apiserver[2253]: I0127 01:32:08.133939 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.195202] kube-apiserver[2253]: I0127 01:32:08.138733 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.236476] kube-apiserver[2253]: I0127 01:32:08.180014 2253 client.go:354] parsed scheme: ""
kube# [ 22.236652] kube-apiserver[2253]: I0127 01:32:08.180053 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.237141] kube-apiserver[2253]: I0127 01:32:08.180099 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.237472] kube-apiserver[2253]: I0127 01:32:08.180157 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.241618] kube-apiserver[2253]: I0127 01:32:08.185197 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.242242] kube-apiserver[2253]: I0127 01:32:08.185788 2253 client.go:354] parsed scheme: ""
kube# [ 22.242689] kube-apiserver[2253]: I0127 01:32:08.185817 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.243180] kube-apiserver[2253]: I0127 01:32:08.185848 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.243509] kube-apiserver[2253]: I0127 01:32:08.185899 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.247272] kube-apiserver[2253]: I0127 01:32:08.190851 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.248870] kube-apiserver[2253]: I0127 01:32:08.192443 2253 client.go:354] parsed scheme: ""
kube# [ 22.249009] kube-apiserver[2253]: I0127 01:32:08.192463 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.249316] kube-apiserver[2253]: I0127 01:32:08.192494 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.249671] kube-apiserver[2253]: I0127 01:32:08.192525 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.253692] kube-apiserver[2253]: I0127 01:32:08.197268 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.254147] kube-apiserver[2253]: I0127 01:32:08.197690 2253 client.go:354] parsed scheme: ""
kube# [ 22.254470] kube-apiserver[2253]: I0127 01:32:08.197721 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.254904] kube-apiserver[2253]: I0127 01:32:08.197785 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.255166] kube-apiserver[2253]: I0127 01:32:08.197825 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.265280] kube-apiserver[2253]: I0127 01:32:08.208826 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.266756] kube-apiserver[2253]: I0127 01:32:08.210325 2253 client.go:354] parsed scheme: ""
kube# [ 22.266890] kube-apiserver[2253]: I0127 01:32:08.210342 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.267298] kube-apiserver[2253]: I0127 01:32:08.210373 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.267657] kube-apiserver[2253]: I0127 01:32:08.210426 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.272800] kube-apiserver[2253]: I0127 01:32:08.216356 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.273190] kube-apiserver[2253]: I0127 01:32:08.216706 2253 client.go:354] parsed scheme: ""
kube# [ 22.273467] kube-apiserver[2253]: I0127 01:32:08.216734 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.273922] kube-apiserver[2253]: I0127 01:32:08.216769 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.274220] kube-apiserver[2253]: I0127 01:32:08.216820 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.278348] kube-apiserver[2253]: I0127 01:32:08.221697 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.278615] kube-apiserver[2253]: I0127 01:32:08.221930 2253 client.go:354] parsed scheme: ""
kube# [ 22.279109] kube-apiserver[2253]: I0127 01:32:08.221948 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.279443] kube-apiserver[2253]: I0127 01:32:08.222014 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.279720] kube-apiserver[2253]: I0127 01:32:08.222043 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.286033] kube-apiserver[2253]: I0127 01:32:08.229562 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.286411] kube-apiserver[2253]: I0127 01:32:08.229960 2253 client.go:354] parsed scheme: ""
kube# [ 22.286752] kube-apiserver[2253]: I0127 01:32:08.229986 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.287069] kube-apiserver[2253]: I0127 01:32:08.230030 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.287270] kube-apiserver[2253]: I0127 01:32:08.230073 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.297073] kube-apiserver[2253]: I0127 01:32:08.240646 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.297518] kube-apiserver[2253]: I0127 01:32:08.241067 2253 client.go:354] parsed scheme: ""
kube# [ 22.297654] kube-apiserver[2253]: I0127 01:32:08.241090 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.298200] kube-apiserver[2253]: I0127 01:32:08.241149 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.298498] kube-apiserver[2253]: I0127 01:32:08.241196 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.302730] kube-apiserver[2253]: I0127 01:32:08.246265 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.303176] kube-apiserver[2253]: I0127 01:32:08.246624 2253 client.go:354] parsed scheme: ""
kube# [ 22.303595] kube-apiserver[2253]: I0127 01:32:08.246658 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.304011] kube-apiserver[2253]: I0127 01:32:08.246705 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.304347] kube-apiserver[2253]: I0127 01:32:08.246821 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.309554] kube-apiserver[2253]: I0127 01:32:08.253124 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.310049] kube-apiserver[2253]: I0127 01:32:08.253611 2253 client.go:354] parsed scheme: ""
kube# [ 22.310426] kube-apiserver[2253]: I0127 01:32:08.253645 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.310837] kube-apiserver[2253]: I0127 01:32:08.253694 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.311147] kube-apiserver[2253]: I0127 01:32:08.253737 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.315458] kube-apiserver[2253]: I0127 01:32:08.259022 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.315631] kube-apiserver[2253]: I0127 01:32:08.259158 2253 client.go:354] parsed scheme: ""
kube# [ 22.315866] kube-apiserver[2253]: I0127 01:32:08.259194 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.316199] kube-apiserver[2253]: I0127 01:32:08.259231 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.316544] kube-apiserver[2253]: I0127 01:32:08.259288 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.322023] kube-apiserver[2253]: I0127 01:32:08.265595 2253 client.go:354] parsed scheme: ""
kube# [ 22.322127] kube-apiserver[2253]: I0127 01:32:08.265617 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.322388] kube-apiserver[2253]: I0127 01:32:08.265653 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.322703] kube-apiserver[2253]: I0127 01:32:08.265708 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.323084] kube-apiserver[2253]: I0127 01:32:08.265935 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.328544] kube-apiserver[2253]: I0127 01:32:08.272125 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.329731] kube-apiserver[2253]: I0127 01:32:08.273289 2253 client.go:354] parsed scheme: ""
kube# [ 22.330074] kube-apiserver[2253]: I0127 01:32:08.273326 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.330355] kube-apiserver[2253]: I0127 01:32:08.273541 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.330650] kube-apiserver[2253]: I0127 01:32:08.273691 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.335236] kube-apiserver[2253]: I0127 01:32:08.278810 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.335825] kube-apiserver[2253]: I0127 01:32:08.279328 2253 client.go:354] parsed scheme: ""
kube# [ 22.335998] kube-apiserver[2253]: I0127 01:32:08.279374 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.336410] kube-apiserver[2253]: I0127 01:32:08.279497 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.336652] kube-apiserver[2253]: I0127 01:32:08.279554 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.341579] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[2256]: Loaded image: coredns/coredns:1.5.0
kube# [ 22.342616] kube-apiserver[2253]: I0127 01:32:08.286135 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.343081] kube-apiserver[2253]: I0127 01:32:08.286618 2253 client.go:354] parsed scheme: ""
kube# [ 22.343442] kube-apiserver[2253]: I0127 01:32:08.286670 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.343833] kube-apiserver[2253]: I0127 01:32:08.286721 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.344144] kube-apiserver[2253]: I0127 01:32:08.286784 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.348479] kube-apiserver[2253]: I0127 01:32:08.292052 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.348707] kube-apiserver[2253]: I0127 01:32:08.292286 2253 client.go:354] parsed scheme: ""
kube# [ 22.349003] kube-apiserver[2253]: I0127 01:32:08.292303 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.349311] kube-apiserver[2253]: I0127 01:32:08.292327 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.349664] kube-apiserver[2253]: I0127 01:32:08.292362 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.353949] kube-apiserver[2253]: I0127 01:32:08.297527 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.354273] kube-apiserver[2253]: I0127 01:32:08.297839 2253 client.go:354] parsed scheme: ""
kube# [ 22.354537] kube-apiserver[2253]: I0127 01:32:08.297877 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.355004] kube-apiserver[2253]: I0127 01:32:08.297919 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.355274] kube-apiserver[2253]: I0127 01:32:08.297973 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.360949] kube-apiserver[2253]: I0127 01:32:08.304487 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.361419] kube-apiserver[2253]: I0127 01:32:08.304947 2253 client.go:354] parsed scheme: ""
kube# [ 22.361869] kube-apiserver[2253]: I0127 01:32:08.304972 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.362326] kube-apiserver[2253]: I0127 01:32:08.305046 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.362586] kube-apiserver[2253]: I0127 01:32:08.305108 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.366337] kube-apiserver[2253]: I0127 01:32:08.309896 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.366686] kube-apiserver[2253]: I0127 01:32:08.310111 2253 client.go:354] parsed scheme: ""
kube# [ 22.367141] kube-apiserver[2253]: I0127 01:32:08.310124 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.367272] kube-apiserver[2253]: I0127 01:32:08.310154 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.367429] kube-apiserver[2253]: I0127 01:32:08.310246 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.371516] kube-apiserver[2253]: I0127 01:32:08.315011 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.372068] kube-apiserver[2253]: I0127 01:32:08.315461 2253 client.go:354] parsed scheme: ""
kube# [ 22.372174] kube-apiserver[2253]: I0127 01:32:08.315521 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.372405] kube-apiserver[2253]: I0127 01:32:08.315570 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.372880] kube-apiserver[2253]: I0127 01:32:08.315785 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.378862] kube-apiserver[2253]: I0127 01:32:08.322373 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.379130] kube-apiserver[2253]: I0127 01:32:08.322651 2253 client.go:354] parsed scheme: ""
kube# [ 22.379458] kube-apiserver[2253]: I0127 01:32:08.322669 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.379831] kube-apiserver[2253]: I0127 01:32:08.322712 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.380204] kube-apiserver[2253]: I0127 01:32:08.322791 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.385017] kube-apiserver[2253]: I0127 01:32:08.328549 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.385318] kube-apiserver[2253]: I0127 01:32:08.328879 2253 client.go:354] parsed scheme: ""
kube# [ 22.385670] kube-apiserver[2253]: I0127 01:32:08.328901 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.386159] kube-apiserver[2253]: I0127 01:32:08.328947 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.386401] kube-apiserver[2253]: I0127 01:32:08.328988 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.390340] kube-apiserver[2253]: I0127 01:32:08.333860 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.390688] kube-apiserver[2253]: I0127 01:32:08.334142 2253 client.go:354] parsed scheme: ""
kube# [ 22.391073] kube-apiserver[2253]: I0127 01:32:08.334167 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.391363] kube-apiserver[2253]: I0127 01:32:08.334191 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.391602] kube-apiserver[2253]: I0127 01:32:08.334221 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.395545] kube-apiserver[2253]: I0127 01:32:08.339024 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.396198] kube-apiserver[2253]: I0127 01:32:08.339725 2253 client.go:354] parsed scheme: ""
kube# [ 22.396502] kube-apiserver[2253]: I0127 01:32:08.339762 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.396816] kube-apiserver[2253]: I0127 01:32:08.339827 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.397095] kube-apiserver[2253]: I0127 01:32:08.339876 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.403209] kube-apiserver[2253]: I0127 01:32:08.346643 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.403506] mr9jg1sj933r7n8dc6bs62zj3sdnazhk-unit-script-kubelet-pre-start[2256]: Linking cni package: /nix/store/9pqia3j6lxz57qa36w2niphr1f5vsirr-cni-plugins-0.8.2
kube# [ 22.404117] kube-apiserver[2253]: I0127 01:32:08.347094 2253 client.go:354] parsed scheme: ""
kube# [ 22.404500] kube-apiserver[2253]: I0127 01:32:08.347134 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.404751] kube-apiserver[2253]: I0127 01:32:08.347195 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.405042] kube-apiserver[2253]: I0127 01:32:08.347274 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.409232] kube-apiserver[2253]: I0127 01:32:08.352733 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.409683] kube-apiserver[2253]: I0127 01:32:08.353065 2253 client.go:354] parsed scheme: ""
kube# [ 22.410066] kube-apiserver[2253]: I0127 01:32:08.353087 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.410319] kube-apiserver[2253]: I0127 01:32:08.353144 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.410586] kube-apiserver[2253]: I0127 01:32:08.353173 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.414325] kube-apiserver[2253]: I0127 01:32:08.357860 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.414712] kube-apiserver[2253]: I0127 01:32:08.358142 2253 client.go:354] parsed scheme: ""
kube# [ 22.415058] kube-apiserver[2253]: I0127 01:32:08.358168 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.415321] kube-apiserver[2253]: I0127 01:32:08.358205 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.415592] kube-apiserver[2253]: I0127 01:32:08.358276 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.419810] kube-apiserver[2253]: I0127 01:32:08.363305 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.420933] kube-apiserver[2253]: I0127 01:32:08.364494 2253 client.go:354] parsed scheme: ""
kube# [ 22.421245] kube-apiserver[2253]: I0127 01:32:08.364514 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.421580] kube-apiserver[2253]: I0127 01:32:08.364545 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.422026] kube-apiserver[2253]: I0127 01:32:08.364600 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.426593] kube-apiserver[2253]: I0127 01:32:08.370152 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.427174] kube-apiserver[2253]: I0127 01:32:08.370710 2253 client.go:354] parsed scheme: ""
kube# [ 22.427702] kube-apiserver[2253]: I0127 01:32:08.370740 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.428180] kube-apiserver[2253]: I0127 01:32:08.370792 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.428515] kube-apiserver[2253]: I0127 01:32:08.370842 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.432003] kube-apiserver[2253]: I0127 01:32:08.375534 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.432392] kube-apiserver[2253]: I0127 01:32:08.375946 2253 client.go:354] parsed scheme: ""
kube# [ 22.432576] kube-apiserver[2253]: I0127 01:32:08.375984 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.432964] kube-apiserver[2253]: I0127 01:32:08.376113 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.433247] kube-apiserver[2253]: I0127 01:32:08.376180 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.437406] kube-apiserver[2253]: I0127 01:32:08.380921 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.439136] kube-apiserver[2253]: I0127 01:32:08.382544 2253 client.go:354] parsed scheme: ""
kube# [ 22.439276] kube-apiserver[2253]: I0127 01:32:08.382564 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.439490] kube-apiserver[2253]: I0127 01:32:08.382776 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.439887] kube-apiserver[2253]: I0127 01:32:08.382901 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.445927] kube-apiserver[2253]: I0127 01:32:08.389486 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.446482] kube-apiserver[2253]: I0127 01:32:08.390003 2253 client.go:354] parsed scheme: ""
kube# [ 22.447026] kube-apiserver[2253]: I0127 01:32:08.390045 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.447356] kube-apiserver[2253]: I0127 01:32:08.390118 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.447715] kube-apiserver[2253]: I0127 01:32:08.390155 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.451275] kube-apiserver[2253]: I0127 01:32:08.394803 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.451668] kube-apiserver[2253]: I0127 01:32:08.395116 2253 client.go:354] parsed scheme: ""
kube# [ 22.452151] kube-apiserver[2253]: I0127 01:32:08.395135 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.452404] kube-apiserver[2253]: I0127 01:32:08.395191 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.452666] kube-apiserver[2253]: I0127 01:32:08.395252 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.456337] kube-apiserver[2253]: I0127 01:32:08.399874 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.456667] kube-apiserver[2253]: I0127 01:32:08.400172 2253 client.go:354] parsed scheme: ""
kube# [ 22.457026] kube-apiserver[2253]: I0127 01:32:08.400195 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.457322] kube-apiserver[2253]: I0127 01:32:08.400244 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.457641] kube-apiserver[2253]: I0127 01:32:08.400327 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.461444] kube-apiserver[2253]: I0127 01:32:08.404987 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.461959] kube-apiserver[2253]: I0127 01:32:08.405299 2253 client.go:354] parsed scheme: ""
kube# [ 22.462268] kube-apiserver[2253]: I0127 01:32:08.405314 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.462534] kube-apiserver[2253]: I0127 01:32:08.405342 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.462886] kube-apiserver[2253]: I0127 01:32:08.405493 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.466711] kube-apiserver[2253]: I0127 01:32:08.410240 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.467212] kube-apiserver[2253]: I0127 01:32:08.410658 2253 client.go:354] parsed scheme: ""
kube# [ 22.467462] kube-apiserver[2253]: I0127 01:32:08.410690 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.467704] kube-apiserver[2253]: I0127 01:32:08.410717 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.468037] kube-apiserver[2253]: I0127 01:32:08.410765 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.472093] kube-apiserver[2253]: I0127 01:32:08.415625 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.472526] kube-apiserver[2253]: I0127 01:32:08.415966 2253 client.go:354] parsed scheme: ""
kube# [ 22.473039] kube-apiserver[2253]: I0127 01:32:08.415981 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.473316] kube-apiserver[2253]: I0127 01:32:08.416016 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.473599] kube-apiserver[2253]: I0127 01:32:08.416060 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.477376] kube-apiserver[2253]: I0127 01:32:08.420879 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.477716] kube-apiserver[2253]: I0127 01:32:08.421200 2253 client.go:354] parsed scheme: ""
kube# [ 22.478101] kube-apiserver[2253]: I0127 01:32:08.421214 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.478372] kube-apiserver[2253]: I0127 01:32:08.421245 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.478656] kube-apiserver[2253]: I0127 01:32:08.421275 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.484971] kube-apiserver[2253]: I0127 01:32:08.428377 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.485300] kube-apiserver[2253]: I0127 01:32:08.428859 2253 client.go:354] parsed scheme: ""
kube# [ 22.485559] kube-apiserver[2253]: I0127 01:32:08.428877 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.486080] kube-apiserver[2253]: I0127 01:32:08.428926 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.486427] kube-apiserver[2253]: I0127 01:32:08.428982 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.490170] kube-apiserver[2253]: I0127 01:32:08.433719 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.490622] kube-apiserver[2253]: I0127 01:32:08.434140 2253 client.go:354] parsed scheme: ""
kube# [ 22.491131] kube-apiserver[2253]: I0127 01:32:08.434187 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.491470] kube-apiserver[2253]: I0127 01:32:08.434249 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.491904] kube-apiserver[2253]: I0127 01:32:08.434277 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.495510] kube-apiserver[2253]: I0127 01:32:08.439030 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.495957] kube-apiserver[2253]: I0127 01:32:08.439317 2253 client.go:354] parsed scheme: ""
kube# [ 22.496242] kube-apiserver[2253]: I0127 01:32:08.439348 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.496519] kube-apiserver[2253]: I0127 01:32:08.439381 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.496891] kube-apiserver[2253]: I0127 01:32:08.439457 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.500676] kube-apiserver[2253]: I0127 01:32:08.444188 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.501108] kube-apiserver[2253]: I0127 01:32:08.444567 2253 client.go:354] parsed scheme: ""
kube# [ 22.501432] kube-apiserver[2253]: I0127 01:32:08.444603 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.501747] kube-apiserver[2253]: I0127 01:32:08.444672 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.502082] kube-apiserver[2253]: I0127 01:32:08.444725 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.506037] kube-apiserver[2253]: I0127 01:32:08.449578 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.506479] kube-apiserver[2253]: I0127 01:32:08.450016 2253 client.go:354] parsed scheme: ""
kube# [ 22.506896] kube-apiserver[2253]: I0127 01:32:08.450075 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.507305] kube-apiserver[2253]: I0127 01:32:08.450125 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.507655] kube-apiserver[2253]: I0127 01:32:08.450165 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.511368] kube-apiserver[2253]: I0127 01:32:08.454891 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.511857] kube-apiserver[2253]: I0127 01:32:08.455207 2253 client.go:354] parsed scheme: ""
kube# [ 22.512133] kube-apiserver[2253]: I0127 01:32:08.455234 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.512397] kube-apiserver[2253]: I0127 01:32:08.455275 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.512706] kube-apiserver[2253]: I0127 01:32:08.455338 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.516536] kube-apiserver[2253]: I0127 01:32:08.460090 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.517053] kube-apiserver[2253]: I0127 01:32:08.460610 2253 client.go:354] parsed scheme: ""
kube# [ 22.517354] kube-apiserver[2253]: I0127 01:32:08.460630 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.517622] kube-apiserver[2253]: I0127 01:32:08.460662 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.518124] kube-apiserver[2253]: I0127 01:32:08.460700 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.522034] kube-apiserver[2253]: I0127 01:32:08.465507 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.522436] kube-apiserver[2253]: I0127 01:32:08.465938 2253 client.go:354] parsed scheme: ""
kube# [ 22.522709] kube-apiserver[2253]: I0127 01:32:08.465957 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.523082] kube-apiserver[2253]: I0127 01:32:08.465990 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.523345] kube-apiserver[2253]: I0127 01:32:08.466158 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.528700] kube-apiserver[2253]: I0127 01:32:08.472189 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.529279] kube-apiserver[2253]: I0127 01:32:08.472731 2253 client.go:354] parsed scheme: ""
kube# [ 22.529605] kube-apiserver[2253]: I0127 01:32:08.472853 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.530116] kube-apiserver[2253]: I0127 01:32:08.472976 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.530377] kube-apiserver[2253]: I0127 01:32:08.473077 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.534247] kube-apiserver[2253]: I0127 01:32:08.477792 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.535037] kube-apiserver[2253]: I0127 01:32:08.478562 2253 client.go:354] parsed scheme: ""
kube# [ 22.535413] kube-apiserver[2253]: I0127 01:32:08.478586 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.535714] kube-apiserver[2253]: I0127 01:32:08.478648 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.536050] kube-apiserver[2253]: I0127 01:32:08.478711 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.541394] kube-apiserver[2253]: I0127 01:32:08.484944 2253 client.go:354] parsed scheme: ""
kube# [ 22.541518] kube-apiserver[2253]: I0127 01:32:08.484982 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.541913] kube-apiserver[2253]: I0127 01:32:08.485017 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.542149] kube-apiserver[2253]: I0127 01:32:08.485090 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.542485] kube-apiserver[2253]: I0127 01:32:08.485285 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.547480] kube-apiserver[2253]: I0127 01:32:08.491042 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.548014] kube-apiserver[2253]: I0127 01:32:08.491543 2253 client.go:354] parsed scheme: ""
kube# [ 22.548366] kube-apiserver[2253]: I0127 01:32:08.491576 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.548603] kube-apiserver[2253]: I0127 01:32:08.491720 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.549028] kube-apiserver[2253]: I0127 01:32:08.491751 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.552674] kube-apiserver[2253]: I0127 01:32:08.496232 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.552995] kube-apiserver[2253]: I0127 01:32:08.496565 2253 client.go:354] parsed scheme: ""
kube# [ 22.553296] kube-apiserver[2253]: I0127 01:32:08.496591 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 22.553574] kube-apiserver[2253]: I0127 01:32:08.496631 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 22.554004] kube-apiserver[2253]: I0127 01:32:08.496655 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.558428] systemd[1]: Started Kubernetes Kubelet Service.
kube# [ 22.559564] kube-apiserver[2253]: I0127 01:32:08.503091 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 22.562074] certmgr[1979]: 2020/01/27 01:32:08 [INFO] manager: certificate successfully processed
kube# [ 22.562179] certmgr[1979]: 2020/01/27 01:32:08 [INFO] manager: certificate successfully processed
kube# [ 22.611664] kubelet[2346]: Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 22.612031] kubelet[2346]: Flag --authentication-token-webhook has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 22.612297] kubelet[2346]: Flag --authentication-token-webhook-cache-ttl has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 22.612569] kubelet[2346]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 22.612811] kubelet[2346]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 22.612991] kubelet[2346]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 22.613205] kubelet[2346]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 22.613390] kubelet[2346]: Flag --hairpin-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 22.613668] kubelet[2346]: Flag --healthz-bind-address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 22.614028] kubelet[2346]: Flag --healthz-port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 22.614220] kubelet[2346]: Flag --port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 22.614436] kubelet[2346]: Flag --tls-cert-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 22.614634] kubelet[2346]: Flag --tls-private-key-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kube# [ 22.638114] kube-apiserver[2253]: W0127 01:32:08.581666 2253 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
kube# [ 22.643634] kube-apiserver[2253]: W0127 01:32:08.587205 2253 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
kube# [ 22.646041] kube-apiserver[2253]: W0127 01:32:08.589615 2253 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
kube# [ 22.646518] kube-apiserver[2253]: W0127 01:32:08.590082 2253 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
kube# [ 22.647820] kube-apiserver[2253]: W0127 01:32:08.591353 2253 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
kube# [ 22.650562] systemd[1]: Started Kubernetes systemd probe.
kube# [ 22.656217] kubelet[2346]: I0127 01:32:08.599763 2346 server.go:425] Version: v1.15.6
kube# [ 22.656456] kubelet[2346]: I0127 01:32:08.599965 2346 plugins.go:103] No cloud provider specified.
kube# [ 22.660966] systemd[1]: run-r7a5eb5814b1245928a6efb5674a0a422.scope: Succeeded.
kube: running command: kubectl get node kube.my.xzy | grep -w Ready
kube# [ 22.776337] kubelet[2346]: I0127 01:32:08.719873 2346 server.go:659] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
kube# [ 22.776552] kubelet[2346]: I0127 01:32:08.720128 2346 container_manager_linux.go:270] container manager verified user specified cgroup-root exists: []
kube# [ 22.777137] kubelet[2346]: I0127 01:32:08.720151 2346 container_manager_linux.go:275] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubernetes ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms}
kube# [ 22.777571] kubelet[2346]: I0127 01:32:08.720236 2346 container_manager_linux.go:295] Creating device plugin manager: true
kube# [ 22.778011] kubelet[2346]: I0127 01:32:08.720343 2346 state_mem.go:36] [cpumanager] initializing new in-memory state store
kube# [ 22.791989] kubelet[2346]: I0127 01:32:08.735485 2346 kubelet.go:307] Watching apiserver
kube# [ 22.810098] kubelet[2346]: I0127 01:32:08.753631 2346 client.go:75] Connecting to docker on unix:///var/run/docker.sock
kube# [ 22.810335] kubelet[2346]: I0127 01:32:08.753662 2346 client.go:104] Start docker client with request timeout=2m0s
kube# [ 22.819531] kubelet[2346]: I0127 01:32:08.763102 2346 docker_service.go:238] Hairpin mode set to "hairpin-veth"
kube# [ 22.832392] kubelet[2346]: W0127 01:32:08.775946 2346 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
kube# [ 22.835054] kubelet[2346]: I0127 01:32:08.778618 2346 docker_service.go:253] Docker cri networking managed by cni
kube# [ 22.844578] kubelet[2346]: I0127 01:32:08.788099 2346 docker_service.go:258] Docker Info: &{ID:AFL6:CEZ7:GL5J:TT34:WKPN:2RHK:F32T:UBET:KMQH:JTGM:TRNB:7PCR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:42 SystemTime:2020-01-27T01:32:08.779237358Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.19.95 OperatingSystem:NixOS 19.09.1861.eb65d1dae62 (Loris) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc000712150 NCPU:16 MemTotal:2091192320 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:kube Labels:[] ExperimentalBuild:false ServerVersion:19.03.5 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:true Isolation: InitBinary:docker-init ContainerdCommit:{ID:.m Expected:.m} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[]}
kube# [ 22.844707] kubelet[2346]: I0127 01:32:08.788172 2346 docker_service.go:271] Setting cgroupDriver to cgroupfs
kube# [ 22.870109] kubelet[2346]: I0127 01:32:08.813667 2346 remote_runtime.go:59] parsed scheme: ""
kube# [ 22.870453] kubelet[2346]: I0127 01:32:08.813691 2346 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
kube# [ 22.870817] kubelet[2346]: I0127 01:32:08.813733 2346 remote_image.go:50] parsed scheme: ""
kube# [ 22.871045] kubelet[2346]: I0127 01:32:08.813743 2346 remote_image.go:50] scheme "" not registered, fallback to default scheme
kube# [ 22.871225] kubelet[2346]: I0127 01:32:08.813850 2346 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/var/run/dockershim.sock 0 <nil>}]
kube# [ 22.871443] kubelet[2346]: I0127 01:32:08.813877 2346 clientconn.go:796] ClientConn switching balancer to "pick_first"
kube# [ 22.871672] kubelet[2346]: I0127 01:32:08.813935 2346 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0005f16d0, CONNECTING
kube# [ 22.872072] kubelet[2346]: I0127 01:32:08.813940 2346 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/var/run/dockershim.sock 0 <nil>}]
kube# [ 22.872348] kubelet[2346]: I0127 01:32:08.813983 2346 clientconn.go:796] ClientConn switching balancer to "pick_first"
kube# [ 22.872457] kubelet[2346]: I0127 01:32:08.814168 2346 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000175a40, CONNECTING
kube# [ 22.881676] kubelet[2346]: I0127 01:32:08.825190 2346 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0005f16d0, READY
kube# [ 22.881980] kubelet[2346]: I0127 01:32:08.825337 2346 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000175a40, READY
kube# [ 22.884704] kubelet[2346]: E0127 01:32:08.828263 2346 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
kube# [ 22.884917] kubelet[2346]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
kube# [ 22.904466] kubelet[2346]: I0127 01:32:08.848010 2346 kuberuntime_manager.go:205] Container runtime docker initialized, version: 19.03.5, apiVersion: 1.40.0
kube# [ 22.907525] kubelet[2346]: W0127 01:32:08.851102 2346 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
kube# [ 22.919262] kubelet[2346]: I0127 01:32:08.862803 2346 server.go:1081] Started kubelet
kube# [ 22.923480] kubelet[2346]: E0127 01:32:08.867022 2346 kubelet.go:1294] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
kube# [ 22.927692] kubelet[2346]: I0127 01:32:08.871205 2346 server.go:144] Starting to listen on 0.0.0.0:10250
kube# [ 22.935986] kubelet[2346]: I0127 01:32:08.879515 2346 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
kube# [ 22.936144] kubelet[2346]: I0127 01:32:08.879571 2346 status_manager.go:152] Starting to sync pod status with apiserver
kube# [ 22.936378] kubelet[2346]: I0127 01:32:08.879593 2346 kubelet.go:1809] Starting kubelet main sync loop.
kube# [ 22.936821] kubelet[2346]: I0127 01:32:08.879623 2346 kubelet.go:1826] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
kube# [ 22.944649] kubelet[2346]: I0127 01:32:08.887039 2346 desired_state_of_world_populator.go:130] Desired state populator starts to run
kube# [ 22.945937] kubelet[2346]: I0127 01:32:08.889501 2346 volume_manager.go:243] Starting Kubelet Volume Manager
kube# [ 22.951493] kubelet[2346]: I0127 01:32:08.894910 2346 server.go:350] Adding debug handlers to kubelet server.
kube# [ 22.993530] kubelet[2346]: I0127 01:32:08.937076 2346 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
kube# [ 23.024734] kubelet[2346]: I0127 01:32:08.968155 2346 cpu_manager.go:155] [cpumanager] starting with none policy
kube# [ 23.024959] kubelet[2346]: I0127 01:32:08.968225 2346 cpu_manager.go:156] [cpumanager] reconciling every 10s
kube# [ 23.025196] kubelet[2346]: I0127 01:32:08.968240 2346 policy_none.go:42] [cpumanager] none policy: Start
kube# [ 23.025991] kube-apiserver[2253]: I0127 01:32:08.968789 2253 client.go:354] parsed scheme: ""
kube# [ 23.026266] kube-apiserver[2253]: I0127 01:32:08.968818 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 23.026538] kube-apiserver[2253]: I0127 01:32:08.968858 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 23.026754] kube-apiserver[2253]: I0127 01:32:08.968932 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 23.031747] kube-apiserver[2253]: I0127 01:32:08.975311 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 23.034445] kubelet[2346]: W0127 01:32:08.977978 2346 manager.go:546] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
kube# [ 23.034642] kubelet[2346]: I0127 01:32:08.978192 2346 plugin_manager.go:116] Starting Kubelet Plugin Manager
kube# [ 23.034990] kubelet[2346]: E0127 01:32:08.978191 2346 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "kube.my.xzy" not found
kube# [ 23.045512] kubelet[2346]: E0127 01:32:08.988981 2346 kubelet.go:2252] node "kube.my.xzy" not found
kube# [ 23.045904] kubelet[2346]: I0127 01:32:08.989469 2346 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
kube# [ 23.073225] kubelet[2346]: I0127 01:32:09.016518 2346 kubelet_node_status.go:72] Attempting to register node kube.my.xzy
kube# [ 23.137101] kube-apiserver[2253]: E0127 01:32:09.080598 2253 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 23.137378] kube-apiserver[2253]: E0127 01:32:09.080642 2253 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 23.137646] kube-apiserver[2253]: E0127 01:32:09.080668 2253 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 23.138057] kube-apiserver[2253]: E0127 01:32:09.080691 2253 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 23.138328] kube-apiserver[2253]: E0127 01:32:09.080717 2253 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 23.138606] kube-apiserver[2253]: E0127 01:32:09.080740 2253 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 23.139068] kube-apiserver[2253]: E0127 01:32:09.080761 2253 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 23.139361] kube-apiserver[2253]: E0127 01:32:09.080778 2253 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 23.139589] kube-apiserver[2253]: E0127 01:32:09.080862 2253 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 23.139844] kube-apiserver[2253]: E0127 01:32:09.080936 2253 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 23.140110] kube-apiserver[2253]: E0127 01:32:09.080967 2253 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 23.140313] kube-apiserver[2253]: E0127 01:32:09.080994 2253 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
kube# [ 23.140538] kube-apiserver[2253]: I0127 01:32:09.081024 2253 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
kube# [ 23.140744] kube-apiserver[2253]: I0127 01:32:09.081044 2253 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
kube# [ 23.141032] kube-apiserver[2253]: I0127 01:32:09.082309 2253 client.go:354] parsed scheme: ""
kube# [ 23.141269] kube-apiserver[2253]: I0127 01:32:09.082328 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 23.141486] kube-apiserver[2253]: I0127 01:32:09.082368 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 23.141696] kube-apiserver[2253]: I0127 01:32:09.082440 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 23.144198] kube-apiserver[2253]: I0127 01:32:09.087761 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 23.144653] kube-apiserver[2253]: I0127 01:32:09.088212 2253 client.go:354] parsed scheme: ""
kube# [ 23.145059] kube-apiserver[2253]: I0127 01:32:09.088233 2253 client.go:354] scheme "" not registered, fallback to default scheme
kube# [ 23.145279] kube-apiserver[2253]: I0127 01:32:09.088261 2253 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.local:2379 0 <nil>}]
kube# [ 23.145574] kube-apiserver[2253]: I0127 01:32:09.088295 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 23.146048] kubelet[2346]: E0127 01:32:09.089186 2346 kubelet.go:2252] node "kube.my.xzy" not found
kube# [ 23.150368] kube-apiserver[2253]: I0127 01:32:09.093925 2253 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.local:2379 <nil>}]
kube# [ 23.245920] kubelet[2346]: E0127 01:32:09.189427 2346 kubelet.go:2252] node "kube.my.xzy" not found
kube# [ 23.346209] kubelet[2346]: E0127 01:32:09.289651 2346 kubelet.go:2252] node "kube.my.xzy" not found
kube# [ 23.446363] kubelet[2346]: E0127 01:32:09.389888 2346 kubelet.go:2252] node "kube.my.xzy" not found
kube# [ 23.546604] kubelet[2346]: E0127 01:32:09.490119 2346 kubelet.go:2252] node "kube.my.xzy" not found
kube# [ 23.647292] kubelet[2346]: E0127 01:32:09.590339 2346 kubelet.go:2252] node "kube.my.xzy" not found
kube# [ 23.747165] kubelet[2346]: E0127 01:32:09.690658 2346 kubelet.go:2252] node "kube.my.xzy" not found
kube# [ 23.847386] kubelet[2346]: E0127 01:32:09.790902 2346 kubelet.go:2252] node "kube.my.xzy" not found
kube# [ 23.947608] kubelet[2346]: E0127 01:32:09.891098 2346 kubelet.go:2252] node "kube.my.xzy" not found
kube# [ 24.047775] kubelet[2346]: E0127 01:32:09.991280 2346 kubelet.go:2252] node "kube.my.xzy" not found
kube# [ 24.148054] kubelet[2346]: E0127 01:32:10.091560 2346 kubelet.go:2252] node "kube.my.xzy" not found
kube# [ 24.161944] kube-apiserver[2253]: I0127 01:32:10.104975 2253 secure_serving.go:116] Serving securely on [::]:443
kube# [ 24.162260] kube-apiserver[2253]: I0127 01:32:10.105025 2253 apiservice_controller.go:94] Starting APIServiceRegistrationController
kube# [ 24.162489] kube-apiserver[2253]: I0127 01:32:10.105046 2253 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
kube# [ 24.162632] kube-apiserver[2253]: I0127 01:32:10.105072 2253 controller.go:81] Starting OpenAPI AggregationController
kube# [ 24.163037] kube-apiserver[2253]: I0127 01:32:10.105103 2253 crd_finalizer.go:255] Starting CRDFinalizer
kube# [ 24.163227] kube-apiserver[2253]: I0127 01:32:10.105148 2253 controller.go:83] Starting OpenAPI controller
kube# [ 24.163349] kube-apiserver[2253]: I0127 01:32:10.105170 2253 customresource_discovery_controller.go:208] Starting DiscoveryController
kube# [ 24.163553] kube-apiserver[2253]: I0127 01:32:10.105193 2253 naming_controller.go:288] Starting NamingConditionController
kube# [ 24.163811] kube-apiserver[2253]: I0127 01:32:10.105202 2253 autoregister_controller.go:140] Starting autoregister controller
kube# [ 24.163987] kube-apiserver[2253]: I0127 01:32:10.105212 2253 establishing_controller.go:73] Starting EstablishingController
kube# [ 24.164180] kube-apiserver[2253]: I0127 01:32:10.105215 2253 cache.go:32] Waiting for caches to sync for autoregister controller
kube# [ 24.164371] kube-apiserver[2253]: I0127 01:32:10.105236 2253 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
kube# [ 24.164557] kube-apiserver[2253]: I0127 01:32:10.105498 2253 crdregistration_controller.go:112] Starting crd-autoregister controller
kube# [ 24.164739] kube-apiserver[2253]: I0127 01:32:10.105514 2253 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
kube# [ 24.164981] kube-apiserver[2253]: E0127 01:32:10.106600 2253 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.1.1, ResourceVersion: 0, AdditionalErrorMsg:
kube# [ 24.176645] kube-apiserver[2253]: I0127 01:32:10.120043 2253 available_controller.go:376] Starting AvailableConditionController
kube# [ 24.177355] kube-apiserver[2253]: I0127 01:32:10.120087 2253 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
kube# [ 24.193219] kube-controller-manager[2127]: E0127 01:32:10.136264 2127 leaderelection.go:324] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "endpoints" in API group "" in the namespace "kube-system"
kube# [ 24.194086] etcd[2190]: proto: no coders for int
kube# [ 24.194354] etcd[2190]: proto: no encoder for ValueSize int [GetProperties]
kube# [ 24.199600] kube-proxy[2110]: E0127 01:32:10.142359 2110 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube.15ed9a5380f85701", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kube", UID:"kube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kube-proxy.", Source:v1.EventSource{Component:"kube-proxy", Host:"kube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf83ad45ea893101, ext:3590139809, loc:(*time.Location)(0x2740d40)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf83ad45ea893101, ext:3590139809, loc:(*time.Location)(0x2740d40)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:kube-proxy" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
kube# [ 24.215053] kube-proxy[2110]: E0127 01:32:10.158560 2110 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:kube-proxy" cannot list resource "endpoints" in API group "" at the cluster scope
kube# [ 24.215289] kube-proxy[2110]: E0127 01:32:10.158593 2110 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-proxy" cannot list resource "services" in API group "" at the cluster scope
kube# [ 24.216097] kube-scheduler[2133]: E0127 01:32:10.158580 2133 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
kube# [ 24.216532] kube-scheduler[2133]: E0127 01:32:10.158593 2133 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
kube# [ 24.216897] kube-scheduler[2133]: E0127 01:32:10.158686 2133 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
kube# [ 24.217175] kube-scheduler[2133]: E0127 01:32:10.158719 2133 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
kube# [ 24.217419] kube-scheduler[2133]: E0127 01:32:10.159054 2133 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
kube# [ 24.217686] kube-scheduler[2133]: E0127 01:32:10.159336 2133 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
kube# [ 24.218063] kube-scheduler[2133]: E0127 01:32:10.159397 2133 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
kube# [ 24.218270] kube-scheduler[2133]: E0127 01:32:10.159562 2133 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
kube# [ 24.218467] kube-scheduler[2133]: E0127 01:32:10.159592 2133 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
kube# [ 24.218672] kube-scheduler[2133]: E0127 01:32:10.160914 2133 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
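
The burst of "forbidden" errors from kube-scheduler and kube-proxy is transient: both components start before the apiserver has finished bootstrapping RBAC, so their informer list calls are rejected until the roles and rolebindings are admitted (visible around the 25.4s mark further down). The same permissions can be probed by hand, assuming kubectl impersonation is available against this apiserver:

  kubectl auth can-i list endpoints --as=system:kube-proxy
  kubectl auth can-i list nodes --as=system:kube-scheduler
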
kube# [ 24.244906] kubelet[2346]: E0127 01:32:10.188444 2346 controller.go:204] failed to get node "kube.my.xzy" when trying to set owner ref to the node lease: nodes "kube.my.xzy" not found
kube# [ 24.248229] kubelet[2346]: E0127 01:32:10.191757 2346 kubelet.go:2252] node "kube.my.xzy" not found
kube# [ 24.252420] kubelet[2346]: I0127 01:32:10.195988 2346 reconciler.go:150] Reconciler: start to sync state
kube# [ 24.256539] kubelet[2346]: I0127 01:32:10.199914 2346 kubelet_node_status.go:75] Successfully registered node kube.my.xzy
kube# [ 24.261733] kube-apiserver[2253]: I0127 01:32:10.205275 2253 cache.go:39] Caches are synced for APIServiceRegistrationController controller
kube# [ 24.262161] kube-apiserver[2253]: I0127 01:32:10.205597 2253 cache.go:39] Caches are synced for autoregister controller
kube# [ 24.265069] kube-apiserver[2253]: I0127 01:32:10.205799 2253 controller_utils.go:1036] Caches are synced for crd-autoregister controller
kube# [ 24.278716] kube-apiserver[2253]: I0127 01:32:10.220499 2253 cache.go:39] Caches are synced for AvailableConditionController controller
kube# [ 24.296003] kube-apiserver[2253]: I0127 01:32:10.239465 2253 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
kube# [ 24.298285] kubelet[2346]: E0127 01:32:10.241792 2346 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube.my.xzy.15ed9a53c55d17d9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kube.my.xzy", UID:"kube.my.xzy", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"kube.my.xzy"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf83ad46335327d9, ext:353759208, loc:(*time.Location)(0x760a580)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf83ad46335327d9, ext:353759208, loc:(*time.Location)(0x760a580)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
kube: exit status 0
(1.57 seconds)
(24.92 seconds)
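
That exit status 0 closes the first phase of the test: the driver polled until the node reported Ready (the shorter timing is the final kubectl invocation; the longer one appears to cover the whole wait). A stand-alone equivalent of the check, with kubectl wait as a hedged alternative that blocks instead of polling:

  kubectl get node kube.my.xzy | grep -w Ready
  # Alternative (not what this driver uses, but equivalent in effect):
  kubectl wait --for=condition=Ready node/kube.my.xzy --timeout=300s
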
kube: waiting for success: docker load < /nix/store/gbaayxyf38z0f90wd7x9iskfnklaxs6p-docker-image-nginx.tar.gz
kube: running command: docker load < /nix/store/gbaayxyf38z0f90wd7x9iskfnklaxs6p-docker-image-nginx.tar.gz
kube# [ 24.351507] kubelet[2346]: E0127 01:32:10.294976 2346 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube.my.xzy.15ed9a53cbbc8883", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kube.my.xzy", UID:"kube.my.xzy", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kube.my.xzy status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kube.my.xzy"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf83ad4639b29883, ext:460678384, loc:(*time.Location)(0x760a580)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf83ad4639b29883, ext:460678384, loc:(*time.Location)(0x760a580)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
kube# [ 24.404524] kubelet[2346]: E0127 01:32:10.347968 2346 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube.my.xzy.15ed9a53cbbd4436", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kube.my.xzy", UID:"kube.my.xzy", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kube.my.xzy status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kube.my.xzy"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf83ad4639b35436, ext:460726155, loc:(*time.Location)(0x760a580)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf83ad4639b35436, ext:460726155, loc:(*time.Location)(0x760a580)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
kube# [ 24.457347] kubelet[2346]: E0127 01:32:10.400725 2346 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube.my.xzy.15ed9a53cbbdaf28", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kube.my.xzy", UID:"kube.my.xzy", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kube.my.xzy status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kube.my.xzy"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf83ad4639b3bf28, ext:460753254, loc:(*time.Location)(0x760a580)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf83ad4639b3bf28, ext:460753254, loc:(*time.Location)(0x760a580)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
kube# [ 24.510557] kubelet[2346]: E0127 01:32:10.453849 2346 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube.my.xzy.15ed9a53cc46b093", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kube.my.xzy", UID:"kube.my.xzy", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"kube.my.xzy"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf83ad463a3cc093, ext:469734842, loc:(*time.Location)(0x760a580)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf83ad463a3cc093, ext:469734842, loc:(*time.Location)(0x760a580)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
kube# [ 24.565193] kubelet[2346]: E0127 01:32:10.508678 2346 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube.my.xzy.15ed9a53cbbc8883", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kube.my.xzy", UID:"kube.my.xzy", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kube.my.xzy status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kube.my.xzy"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf83ad4639b29883, ext:460678384, loc:(*time.Location)(0x760a580)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf83ad4640f9b041, ext:509042911, loc:(*time.Location)(0x760a580)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
kube# [ 24.619462] kubelet[2346]: E0127 01:32:10.562895 2346 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube.my.xzy.15ed9a53cbbd4436", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kube.my.xzy", UID:"kube.my.xzy", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kube.my.xzy status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kube.my.xzy"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf83ad4639b35436, ext:460726155, loc:(*time.Location)(0x760a580)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf83ad4640fae516, ext:509115266, loc:(*time.Location)(0x760a580)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
kube# [ 24.673711] kubelet[2346]: E0127 01:32:10.616864 2346 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube.my.xzy.15ed9a53cbbdaf28", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kube.my.xzy", UID:"kube.my.xzy", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kube.my.xzy status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kube.my.xzy"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf83ad4639b3bf28, ext:460753254, loc:(*time.Location)(0x760a580)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf83ad4640fb5694, ext:509143761, loc:(*time.Location)(0x760a580)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
kube# [ 25.160170] kube-apiserver[2253]: I0127 01:32:11.103705 2253 controller.go:107] OpenAPI AggregationController: Processing item
kube# [ 25.160452] kube-apiserver[2253]: I0127 01:32:11.103730 2253 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
kube# [ 25.160925] kube-apiserver[2253]: I0127 01:32:11.103906 2253 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
kube# [ 25.168507] kube-apiserver[2253]: I0127 01:32:11.111727 2253 storage_scheduling.go:119] created PriorityClass system-node-critical with value 2000001000
kube# [ 25.171646] kube-apiserver[2253]: I0127 01:32:11.115206 2253 storage_scheduling.go:119] created PriorityClass system-cluster-critical with value 2000000000
kube# [ 25.171898] kube-apiserver[2253]: I0127 01:32:11.115236 2253 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
kube# [ 25.216662] kube-scheduler[2133]: E0127 01:32:11.159705 2133 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
kube# [ 25.217354] kube-proxy[2110]: E0127 01:32:11.159704 2110 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:kube-proxy" cannot list resource "endpoints" in API group "" at the cluster scope
kube# [ 25.218079] kube-proxy[2110]: E0127 01:32:11.160925 2110 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-proxy" cannot list resource "services" in API group "" at the cluster scope
kube# [ 25.218411] kube-scheduler[2133]: E0127 01:32:11.161023 2133 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
kube# [ 25.218958] kube-scheduler[2133]: E0127 01:32:11.162137 2133 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
kube# [ 25.219640] kube-scheduler[2133]: E0127 01:32:11.163181 2133 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
kube# [ 25.220624] kube-scheduler[2133]: E0127 01:32:11.164166 2133 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
kube# [ 25.221852] kube-scheduler[2133]: E0127 01:32:11.165376 2133 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
kube# [ 25.223094] kube-scheduler[2133]: E0127 01:32:11.166624 2133 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
kube# [ 25.224201] kube-scheduler[2133]: E0127 01:32:11.167760 2133 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
kube# [ 25.225256] kube-scheduler[2133]: E0127 01:32:11.168805 2133 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
kube# [ 25.226505] kube-scheduler[2133]: E0127 01:32:11.170036 2133 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
kube# [ 25.422361] kube-apiserver[2253]: I0127 01:32:11.365852 2253 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
kube# [ 25.455109] kube-apiserver[2253]: I0127 01:32:11.398610 2253 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
kube# [ 25.512179] nscd[1159]: 1159 checking for monitored file `/etc/netgroup': No such file or directory
kube# [ 25.553499] kube-apiserver[2253]: W0127 01:32:11.497049 2253 lease.go:223] Resetting endpoints for master service "kubernetes" to [192.168.1.1]
kube# [ 25.554476] kube-apiserver[2253]: I0127 01:32:11.498028 2253 controller.go:606] quota admission added evaluator for: endpoints
kube# [ 26.270820] kube-proxy[2110]: I0127 01:32:12.213968 2110 controller_utils.go:1036] Caches are synced for endpoints config controller
kube# [ 26.272394] kube-proxy[2110]: I0127 01:32:12.213971 2110 controller_utils.go:1036] Caches are synced for service config controller
kube# [ 26.777824] kube-controller-manager[2127]: I0127 01:32:12.720110 2127 leaderelection.go:245] successfully acquired lease kube-system/kube-controller-manager
kube# [ 26.781521] kube-controller-manager[2127]: I0127 01:32:12.720225 2127 event.go:258] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"91e1020a-c6a8-4d0d-b109-dd3eec815bc7", APIVersion:"v1", ResourceVersion:"151", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube_abe8c1cd-eddd-4ab1-afa1-9482e279bdb0 became leader
kube: exit status 0
(2.66 seconds)
(2.66 seconds)
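
With the node Ready, the driver streams the Nix-built image archive straight into the VM's Docker daemon; the two identical timings show the load succeeded on the first attempt. The command as run, plus a hypothetical sanity check that is not part of the test script:

  docker load < /nix/store/gbaayxyf38z0f90wd7x9iskfnklaxs6p-docker-image-nginx.tar.gz
  docker images    # hypothetical follow-up: the loaded nginx image should now be listed
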
kube: waiting for success: kubectl apply -f /nix/store/b6wpc0ii4djrs5fsc4g7hnz806y5m40r-kubenix-generated.json
kube: running command: kubectl apply -f /nix/store/b6wpc0ii4djrs5fsc4g7hnz806y5m40r-kubenix-generated.json
kube# [ 26.988377] kube-controller-manager[2127]: I0127 01:32:12.931913 2127 plugins.go:103] No cloud provider specified.
kube# [ 26.990147] kube-controller-manager[2127]: W0127 01:32:12.933023 2127 controllermanager.go:524] Skipping "csrsigning"
kube# [ 26.992507] kube-controller-manager[2127]: I0127 01:32:12.933119 2127 controller_utils.go:1029] Waiting for caches to sync for tokens controller
kube# [ 26.996441] kube-apiserver[2253]: I0127 01:32:12.939594 2253 controller.go:606] quota admission added evaluator for: serviceaccounts
kube# [ 27.073882] kube-scheduler[2133]: I0127 01:32:13.016940 2133 leaderelection.go:235] attempting to acquire leader lease kube-system/kube-scheduler...
kube# [ 27.081888] kube-scheduler[2133]: I0127 01:32:13.025362 2133 leaderelection.go:245] successfully acquired lease kube-system/kube-scheduler
kube# [ 27.089880] kube-controller-manager[2127]: I0127 01:32:13.033362 2127 controller_utils.go:1036] Caches are synced for tokens controller
kube# [ 27.108258] kube-controller-manager[2127]: I0127 01:32:13.051820 2127 node_ipam_controller.go:94] Sending events to api server.
kube# [ 27.303637] kube-apiserver[2253]: I0127 01:32:13.247184 2253 controller.go:606] quota admission added evaluator for: deployments.apps
kube: exit status 0
(0.35 seconds)
(0.35 seconds)
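
Next the driver applies the kubenix-generated manifest, which, judging from the deployments.apps admission line just above and the checks that follow, defines an nginx Deployment. The same step done by hand, with a hypothetical pre-flight inspection of the generated JSON (jq is an assumption; it is not shown in this VM):

  jq . /nix/store/b6wpc0ii4djrs5fsc4g7hnz806y5m40r-kubenix-generated.json   # hypothetical pre-flight
  kubectl apply -f /nix/store/b6wpc0ii4djrs5fsc4g7hnz806y5m40r-kubenix-generated.json
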
kube: must succeed: kubectl get deployment | grep -i nginx
kube: exit status 0
(0.06 seconds)
kube: waiting for success: kubectl get deployment -o go-template nginx --template={{.status.readyReplicas}} | grep 10
kube: running command: kubectl get deployment -o go-template nginx --template={{.status.readyReplicas}} | grep 10
kube: exit status 1
(0.06 seconds)
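
This check reads the Deployment's .status.readyReplicas through a go-template and greps for 10, so the manifest presumably requests 10 replicas; it exits 1, and the driver keeps retrying below, until all of them are ready. A bare grep 10 would also match 100 or 110; a slightly stricter form of the same probe, assuming identical template output:

  kubectl get deployment nginx -o go-template --template='{{.status.readyReplicas}}' | grep -w 10
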
kube# [ 28.272219] systemd[1]: kube-addon-manager.service: Service RestartSec=10s expired, scheduling restart.
kube# [ 28.273731] systemd[1]: kube-addon-manager.service: Scheduled restart job, restart counter is at 2.
kube# [ 28.274940] systemd[1]: Stopped Kubernetes addon manager.
kube# [ 28.276343] systemd[1]: kube-addon-manager.service: Consumed 0 CPU time, received 320B IP traffic, sent 480B IP traffic.
kube# [ 28.278127] systemd[1]: Starting Kubernetes addon manager...
kube: running command: kubectl get deployment -o go-template nginx --template={{.status.readyReplicas}} | grep 10
kube: exit status 1
(0.05 seconds)
kube# [ 28.581997] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[3210]: clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver:kubelet-api-admin created
kube# [ 28.585748] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[3210]: clusterrole.rbac.authorization.k8s.io/system:coredns created
kube# [ 28.591671] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[3210]: clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
kube# [ 28.599118] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[3210]: clusterrole.rbac.authorization.k8s.io/system:kube-addon-manager:cluster-lister created
kube# [ 28.603075] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[3210]: clusterrolebinding.rbac.authorization.k8s.io/system:kube-addon-manager:cluster-lister created
kube# [ 28.608913] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[3210]: role.rbac.authorization.k8s.io/system:kube-addon-manager created
kube# [ 28.614922] p2sngll72fib43kisyi654959gkjz827-unit-script-kube-addon-manager-pre-start[3210]: rolebinding.rbac.authorization.k8s.io/system:kube-addon-manager created
kube# [ 28.617816] systemd[1]: Started Kubernetes addon manager.
kube# [ 28.627511] kube-addons[3254]: INFO: == Generated kubectl prune whitelist flags: --prune-whitelist core/v1/ConfigMap --prune-whitelist core/v1/Endpoints --prune-whitelist core/v1/Namespace --prune-whitelist core/v1/PersistentVolumeClaim --prune-whitelist core/v1/PersistentVolume --prune-whitelist core/v1/Pod --prune-whitelist core/v1/ReplicationController --prune-whitelist core/v1/Secret --prune-whitelist core/v1/Service --prune-whitelist batch/v1/Job --prune-whitelist batch/v1beta1/CronJob --prune-whitelist apps/v1/DaemonSet --prune-whitelist apps/v1/Deployment --prune-whitelist apps/v1/ReplicaSet --prune-whitelist apps/v1/StatefulSet --prune-whitelist extensions/v1beta1/Ingress ==
kube# [ 28.636654] kube-addons[3254]: INFO: == Kubernetes addon manager started at 2020-01-27T01:32:14+00:00 with ADDON_CHECK_INTERVAL_SEC=60 ==
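
The whitelist printed above is what kube-addon-manager passes to kubectl apply --prune: only the listed group/version/kinds are candidates for pruning when an addon manifest disappears. Each reconcile cycle runs roughly the following (a sketch; the addon path and label are the conventional kube-addon-manager defaults, not confirmed by this log, and the real loop lives in the kube-addons script):

  kubectl apply -f /etc/kubernetes/addons --recursive \
    -l addonmanager.kubernetes.io/mode=Reconcile --prune \
    --prune-whitelist core/v1/ConfigMap --prune-whitelist core/v1/Service
  # (and so on for the other whitelisted kinds printed above)
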
kube# [ 29.212105] kube-addons[3254]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 29.214351] kube-addons[3254]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get deployment -o go-template nginx --template={{.status.readyReplicas}} | grep 10
kube: exit status 1
(0.05 seconds)
kube# [ 29.785193] kube-addons[3254]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 29.787535] kube-addons[3254]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 30.353159] kube-addons[3254]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 30.355446] kube-addons[3254]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get deployment -o go-template nginx --template={{.status.readyReplicas}} | grep 10
kube: exit status 1
(0.05 seconds)
kube# [ 30.924180] kube-addons[3254]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 30.925440] kube-addons[3254]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 31.492000] kube-addons[3254]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 31.494291] kube-addons[3254]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get deployment -o go-template nginx --template={{.status.readyReplicas}} | grep 10
kube: exit status 1
(0.05 seconds)
kube# [ 32.062449] kube-addons[3254]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 32.063680] kube-addons[3254]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 32.632928] kube-addons[3254]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 32.634941] kube-addons[3254]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get deployment -o go-template nginx --template={{.status.readyReplicas}} | grep 10
kube: exit status 1
(0.05 seconds)
kube# [ 33.201878] kube-addons[3254]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 33.204049] kube-addons[3254]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get deployment -o go-template nginx --template={{.status.readyReplicas}} | grep 10
kube: exit status 1
(0.05 seconds)
kube# [ 33.777893] kube-addons[3254]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 33.780027] kube-addons[3254]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 34.353289] kube-addons[3254]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 34.358019] kube-addons[3254]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get deployment -o go-template nginx --template={{.status.readyReplicas}} | grep 10
kube: exit status 1
(0.05 seconds)
kube# [ 34.925027] kube-addons[3254]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 34.926273] kube-addons[3254]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 35.493871] kube-addons[3254]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 35.494983] kube-addons[3254]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get deployment -o go-template nginx --template={{.status.readyReplicas}} | grep 10
kube: exit status 1
(0.05 seconds)
kube# [ 36.066352] kube-addons[3254]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 36.067589] kube-addons[3254]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 36.637176] kube-addons[3254]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 36.639007] kube-addons[3254]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get deployment -o go-template nginx --template={{.status.readyReplicas}} | grep 10
kube: exit status 1
(0.05 seconds)
kube# [ 37.113404] kube-controller-manager[2127]: I0127 01:32:23.056590 2127 range_allocator.go:78] Sending events to api server.
kube# [ 37.114791] kube-controller-manager[2127]: I0127 01:32:23.056741 2127 range_allocator.go:99] No Service CIDR provided. Skipping filtering out service addresses.
kube# [ 37.116564] kube-controller-manager[2127]: I0127 01:32:23.056753 2127 range_allocator.go:105] Node kube.my.xzy has no CIDR, ignoring
kube# [ 37.118488] kube-controller-manager[2127]: I0127 01:32:23.056778 2127 controllermanager.go:532] Started "nodeipam"
kube# [ 37.118623] kube-controller-manager[2127]: I0127 01:32:23.056944 2127 node_ipam_controller.go:162] Starting ipam controller
kube# [ 37.118856] kube-controller-manager[2127]: I0127 01:32:23.056963 2127 controller_utils.go:1029] Waiting for caches to sync for node controller
kube# [ 37.119232] kube-controller-manager[2127]: I0127 01:32:23.062767 2127 node_lifecycle_controller.go:291] Sending events to api server.
kube# [ 37.119653] kube-controller-manager[2127]: I0127 01:32:23.062947 2127 node_lifecycle_controller.go:324] Controller is using taint based evictions.
kube# [ 37.119994] kube-controller-manager[2127]: I0127 01:32:23.063022 2127 taint_manager.go:158] Sending events to api server.
kube# [ 37.120466] kube-controller-manager[2127]: I0127 01:32:23.063379 2127 node_lifecycle_controller.go:418] Controller will reconcile labels.
kube# [ 37.120728] kube-controller-manager[2127]: I0127 01:32:23.063444 2127 node_lifecycle_controller.go:431] Controller will taint node by condition.
kube# [ 37.121110] kube-controller-manager[2127]: I0127 01:32:23.063471 2127 controllermanager.go:532] Started "nodelifecycle"
kube# [ 37.121381] kube-controller-manager[2127]: I0127 01:32:23.063616 2127 node_lifecycle_controller.go:455] Starting node controller
kube# [ 37.121733] kube-controller-manager[2127]: I0127 01:32:23.063641 2127 controller_utils.go:1029] Waiting for caches to sync for taint controller
kube# [ 37.132687] kube-controller-manager[2127]: I0127 01:32:23.076245 2127 controllermanager.go:532] Started "persistentvolume-expander"
kube# [ 37.132958] kube-controller-manager[2127]: I0127 01:32:23.076370 2127 expand_controller.go:300] Starting expand controller
kube# [ 37.136654] kube-controller-manager[2127]: I0127 01:32:23.080213 2127 controller_utils.go:1029] Waiting for caches to sync for expand controller
kube# [ 37.147174] kube-controller-manager[2127]: I0127 01:32:23.090723 2127 controllermanager.go:532] Started "serviceaccount"
kube# [ 37.147385] kube-controller-manager[2127]: I0127 01:32:23.090799 2127 serviceaccounts_controller.go:117] Starting service account controller
kube# [ 37.147704] kube-controller-manager[2127]: I0127 01:32:23.090825 2127 controller_utils.go:1029] Waiting for caches to sync for service account controller
kube# [ 37.161202] kube-controller-manager[2127]: I0127 01:32:23.104765 2127 controllermanager.go:532] Started "daemonset"
kube# [ 37.161396] kube-controller-manager[2127]: I0127 01:32:23.104875 2127 daemon_controller.go:267] Starting daemon sets controller
kube# [ 37.161744] kube-controller-manager[2127]: I0127 01:32:23.104921 2127 controller_utils.go:1029] Waiting for caches to sync for daemon sets controller
kube# [ 37.175716] kube-controller-manager[2127]: I0127 01:32:23.119248 2127 controllermanager.go:532] Started "deployment"
kube# [ 37.175967] kube-controller-manager[2127]: W0127 01:32:23.119269 2127 controllermanager.go:511] "tokencleaner" is disabled
kube# [ 37.176318] kube-controller-manager[2127]: I0127 01:32:23.119378 2127 deployment_controller.go:152] Starting deployment controller
kube# [ 37.176610] kube-controller-manager[2127]: I0127 01:32:23.119418 2127 controller_utils.go:1029] Waiting for caches to sync for deployment controller
kube# [ 37.190176] kube-controller-manager[2127]: I0127 01:32:23.133729 2127 controllermanager.go:532] Started "pvc-protection"
kube# [ 37.190381] kube-controller-manager[2127]: W0127 01:32:23.133753 2127 controllermanager.go:524] Skipping "ttl-after-finished"
kube# [ 37.193371] kube-controller-manager[2127]: I0127 01:32:23.136940 2127 pvc_protection_controller.go:100] Starting PVC protection controller
kube# [ 37.193490] kube-controller-manager[2127]: I0127 01:32:23.136981 2127 controller_utils.go:1029] Waiting for caches to sync for PVC protection controller
kube# [ 37.217532] kube-addons[3254]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 37.218704] kube-addons[3254]: WRN: == Error getting default service account, retry in 0.5 second ==
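
These retries are the addon manager waiting for the default ServiceAccount, which only exists once the serviceaccount controller started above (the Started "serviceaccount" line at ~37.1s) has synced its caches and created it. The probe it keeps re-running is essentially the following, with the namespace assumed to be kube-system as in the stock kube-addons script:

  kubectl get serviceaccount default --namespace=kube-system
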
kube# [ 37.468714] kube-controller-manager[2127]: W0127 01:32:23.412158 2127 shared_informer.go:364] resyncPeriod 64781215055101 is smaller than resyncCheckPeriod 76339078402808 and the informer has already started. Changing it to 76339078402808
kube# [ 37.469084] kube-controller-manager[2127]: I0127 01:32:23.412219 2127 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
kube# [ 37.469294] kube-controller-manager[2127]: I0127 01:32:23.412254 2127 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
kube# [ 37.469484] kube-controller-manager[2127]: I0127 01:32:23.412316 2127 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
kube# [ 37.469755] kube-controller-manager[2127]: I0127 01:32:23.412359 2127 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.extensions
kube# [ 37.470019] kube-controller-manager[2127]: I0127 01:32:23.412382 2127 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
kube# [ 37.470229] kube-controller-manager[2127]: I0127 01:32:23.412453 2127 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
kube# [ 37.479048] kube-controller-manager[2127]: I0127 01:32:23.422623 2127 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
kube# [ 37.479169] kube-controller-manager[2127]: I0127 01:32:23.422662 2127 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
kube# [ 37.479424] kube-controller-manager[2127]: I0127 01:32:23.422688 2127 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
kube# [ 37.479681] kube-controller-manager[2127]: I0127 01:32:23.422752 2127 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.extensions
kube# [ 37.479996] kube-controller-manager[2127]: I0127 01:32:23.422799 2127 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.extensions
kube# [ 37.480203] kube-controller-manager[2127]: I0127 01:32:23.422832 2127 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
kube# [ 37.480438] kube-controller-manager[2127]: I0127 01:32:23.422892 2127 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
kube# [ 37.480625] kube-controller-manager[2127]: I0127 01:32:23.422924 2127 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.extensions
kube# [ 37.480852] kube-controller-manager[2127]: I0127 01:32:23.422957 2127 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
kube# [ 37.481106] kube-controller-manager[2127]: I0127 01:32:23.422991 2127 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
kube# [ 37.481295] kube-controller-manager[2127]: I0127 01:32:23.423032 2127 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
kube# [ 37.481568] kube-controller-manager[2127]: I0127 01:32:23.423060 2127 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
kube# [ 37.481845] kube-controller-manager[2127]: I0127 01:32:23.423087 2127 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
kube# [ 37.482109] kube-controller-manager[2127]: I0127 01:32:23.423121 2127 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
kube# [ 37.482306] kube-controller-manager[2127]: I0127 01:32:23.423151 2127 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
kube# [ 37.482508] kube-controller-manager[2127]: I0127 01:32:23.423187 2127 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
kube# [ 37.482698] kube-controller-manager[2127]: I0127 01:32:23.423226 2127 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
kube# [ 37.483027] kube-controller-manager[2127]: I0127 01:32:23.423257 2127 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
kube# [ 37.483211] kube-controller-manager[2127]: I0127 01:32:23.423304 2127 controllermanager.go:532] Started "resourcequota"
kube# [ 37.483425] kube-controller-manager[2127]: I0127 01:32:23.423340 2127 resource_quota_controller.go:271] Starting resource quota controller
kube# [ 37.483611] kube-controller-manager[2127]: I0127 01:32:23.423372 2127 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
kube# [ 37.483927] kube-controller-manager[2127]: I0127 01:32:23.423443 2127 resource_quota_monitor.go:303] QuotaMonitor running
kube# [ 37.792352] kube-addons[3254]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 37.794219] kube-addons[3254]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get deployment -o go-template nginx --template={{.status.readyReplicas}} | grep 10
kube: exit status 1
(0.05 seconds)
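The repeated "kube: running command: kubectl get deployment ..." / "kube: exit status 1" pairs throughout this log are the test driver polling the cluster until the nginx deployment reports 10 ready replicas. The kubectl pipeline is verbatim from the log; the retry loop around it is only an assumed sketch of what the driver is doing:

    # Assumed retry loop; only the kubectl | grep pipeline appears in the log.
    # grep exits 1 (the "exit status 1" seen here) until readyReplicas prints as 10.
    until kubectl get deployment -o go-template nginx \
        --template='{{.status.readyReplicas}}' | grep 10
    do
        sleep 1
    done

Each failed attempt is logged with its exit status and duration, then the driver tries again.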
kube# [ 38.074226] kube-controller-manager[2127]: I0127 01:32:24.017674 2127 garbagecollector.go:128] Starting garbage collector controller
kube# [ 38.074457] kube-controller-manager[2127]: I0127 01:32:24.017705 2127 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
kube# [ 38.074746] kube-controller-manager[2127]: I0127 01:32:24.017732 2127 graph_builder.go:280] GraphBuilder running
kube# [ 38.075100] kube-controller-manager[2127]: I0127 01:32:24.017756 2127 controllermanager.go:532] Started "garbagecollector"
kube# [ 38.091414] kube-controller-manager[2127]: I0127 01:32:24.034930 2127 controllermanager.go:532] Started "csrcleaner"
kube# [ 38.091752] kube-controller-manager[2127]: I0127 01:32:24.035041 2127 cleaner.go:81] Starting CSR cleaner controller
kube# [ 38.166529] kube-controller-manager[2127]: I0127 01:32:24.109726 2127 controllermanager.go:532] Started "attachdetach"
kube# [ 38.166710] kube-controller-manager[2127]: I0127 01:32:24.109811 2127 attach_detach_controller.go:335] Starting attach detach controller
kube# [ 38.167043] kube-controller-manager[2127]: I0127 01:32:24.109828 2127 controller_utils.go:1029] Waiting for caches to sync for attach detach controller
kube# [ 38.368550] kube-addons[3254]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 38.370015] kube-addons[3254]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 38.415581] kube-controller-manager[2127]: I0127 01:32:24.359130 2127 controllermanager.go:532] Started "clusterrole-aggregation"
kube# [ 38.415693] kube-controller-manager[2127]: W0127 01:32:24.359155 2127 controllermanager.go:524] Skipping "root-ca-cert-publisher"
kube# [ 38.416074] kube-controller-manager[2127]: I0127 01:32:24.359202 2127 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
kube# [ 38.416308] kube-controller-manager[2127]: I0127 01:32:24.359220 2127 controller_utils.go:1029] Waiting for caches to sync for ClusterRoleAggregator controller
kube# [ 38.665609] kube-controller-manager[2127]: I0127 01:32:24.609067 2127 controllermanager.go:532] Started "replicationcontroller"
kube# [ 38.665826] kube-controller-manager[2127]: I0127 01:32:24.609121 2127 replica_set.go:182] Starting replicationcontroller controller
kube# [ 38.666076] kube-controller-manager[2127]: I0127 01:32:24.609138 2127 controller_utils.go:1029] Waiting for caches to sync for ReplicationController controller
kube# [ 38.915592] kube-controller-manager[2127]: I0127 01:32:24.859113 2127 controllermanager.go:532] Started "job"
kube# [ 38.915912] kube-controller-manager[2127]: I0127 01:32:24.859188 2127 job_controller.go:143] Starting job controller
kube# [ 38.916261] kube-controller-manager[2127]: I0127 01:32:24.859219 2127 controller_utils.go:1029] Waiting for caches to sync for job controller
kube# [ 38.944188] kube-addons[3254]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 38.945512] kube-addons[3254]: WRN: == Error getting default service account, retry in 0.5 second ==
kube: running command: kubectl get deployment -o go-template nginx --template={{.status.readyReplicas}} | grep 10
kube: exit status 1
(0.05 seconds)
kube# [ 39.165193] kube-controller-manager[2127]: I0127 01:32:25.108718 2127 controllermanager.go:532] Started "pv-protection"
kube# [ 39.165370] kube-controller-manager[2127]: I0127 01:32:25.108770 2127 pv_protection_controller.go:82] Starting PV protection controller
kube# [ 39.165552] kube-controller-manager[2127]: I0127 01:32:25.108788 2127 controller_utils.go:1029] Waiting for caches to sync for PV protection controller
kube# [ 39.415894] kube-controller-manager[2127]: I0127 01:32:25.359086 2127 controllermanager.go:532] Started "statefulset"
kube# [ 39.416043] kube-controller-manager[2127]: I0127 01:32:25.359110 2127 stateful_set.go:145] Starting stateful set controller
kube# [ 39.416226] kube-controller-manager[2127]: I0127 01:32:25.359140 2127 controller_utils.go:1029] Waiting for caches to sync for stateful set controller
kube# [ 39.522916] kube-addons[3254]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 39.524284] kube-addons[3254]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 39.565081] kube-controller-manager[2127]: I0127 01:32:25.508629 2127 node_lifecycle_controller.go:77] Sending events to api server
kube# [ 39.565198] kube-controller-manager[2127]: E0127 01:32:25.508679 2127 core.go:160] failed to start cloud node lifecycle controller: no cloud provider provided
kube# [ 39.565444] kube-controller-manager[2127]: W0127 01:32:25.508692 2127 controllermanager.go:524] Skipping "cloud-node-lifecycle"
kube# [ 39.565819] kube-controller-manager[2127]: W0127 01:32:25.508704 2127 core.go:174] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
kube# [ 39.566090] kube-controller-manager[2127]: W0127 01:32:25.508714 2127 controllermanager.go:524] Skipping "route"
kube# [ 39.815630] kube-controller-manager[2127]: I0127 01:32:25.759147 2127 controllermanager.go:532] Started "endpoint"
kube# [ 39.815873] kube-controller-manager[2127]: I0127 01:32:25.759204 2127 endpoints_controller.go:166] Starting endpoint controller
kube# [ 39.816111] kube-controller-manager[2127]: I0127 01:32:25.759220 2127 controller_utils.go:1029] Waiting for caches to sync for endpoint controller
kube: running command: kubectl get deployment -o go-template nginx --template={{.status.readyReplicas}} | grep 10
kube: exit status 1
(0.05 seconds)
kube# [ 40.065600] kube-controller-manager[2127]: E0127 01:32:26.009146 2127 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
kube# [ 40.065873] kube-controller-manager[2127]: W0127 01:32:26.009171 2127 controllermanager.go:524] Skipping "service"
kube# [ 40.098618] kube-addons[3254]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 40.100084] kube-addons[3254]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 40.319527] kube-controller-manager[2127]: I0127 01:32:26.263048 2127 controllermanager.go:532] Started "namespace"
kube# [ 40.319715] kube-controller-manager[2127]: I0127 01:32:26.263115 2127 namespace_controller.go:186] Starting namespace controller
kube# [ 40.319996] kube-controller-manager[2127]: I0127 01:32:26.263133 2127 controller_utils.go:1029] Waiting for caches to sync for namespace controller
kube# [ 40.565475] kube-controller-manager[2127]: I0127 01:32:26.508725 2127 controllermanager.go:532] Started "cronjob"
kube# [ 40.565634] kube-controller-manager[2127]: I0127 01:32:26.508785 2127 cronjob_controller.go:96] Starting CronJob Manager
kube# [ 40.675902] kube-addons[3254]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 40.677232] kube-addons[3254]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 40.815491] kube-controller-manager[2127]: I0127 01:32:26.758947 2127 controllermanager.go:532] Started "ttl"
kube# [ 40.815689] kube-controller-manager[2127]: I0127 01:32:26.759002 2127 ttl_controller.go:116] Starting TTL controller
kube# [ 40.816017] kube-controller-manager[2127]: I0127 01:32:26.759021 2127 controller_utils.go:1029] Waiting for caches to sync for TTL controller
kube: running command: kubectl get deployment -o go-template nginx --template={{.status.readyReplicas}} | grep 10
kube# [ 41.065275] kube-controller-manager[2127]: I0127 01:32:27.008797 2127 controllermanager.go:532] Started "persistentvolume-binder"
kube# [ 41.065506] kube-controller-manager[2127]: I0127 01:32:27.008851 2127 pv_controller_base.go:282] Starting persistent volume controller
kube# [ 41.065807] kube-controller-manager[2127]: I0127 01:32:27.008868 2127 controller_utils.go:1029] Waiting for caches to sync for persistent volume controller
kube: exit status 1
(0.06 seconds)
kube# [ 41.245756] kube-addons[3254]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 41.247056] kube-addons[3254]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 41.315323] kube-controller-manager[2127]: I0127 01:32:27.258857 2127 controllermanager.go:532] Started "podgc"
kube# [ 41.315533] kube-controller-manager[2127]: I0127 01:32:27.258920 2127 gc_controller.go:76] Starting GC controller
kube# [ 41.315913] kube-controller-manager[2127]: I0127 01:32:27.258939 2127 controller_utils.go:1029] Waiting for caches to sync for GC controller
kube# [ 41.718966] kube-controller-manager[2127]: I0127 01:32:27.662112 2127 controllermanager.go:532] Started "disruption"
kube# [ 41.719145] kube-controller-manager[2127]: I0127 01:32:27.662203 2127 disruption.go:333] Starting disruption controller
kube# [ 41.719390] kube-controller-manager[2127]: I0127 01:32:27.662223 2127 controller_utils.go:1029] Waiting for caches to sync for disruption controller
kube# [ 41.818967] kube-addons[3254]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 41.820336] kube-addons[3254]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 41.865149] kube-controller-manager[2127]: I0127 01:32:27.808700 2127 controllermanager.go:532] Started "csrapproving"
kube# [ 41.865283] kube-controller-manager[2127]: W0127 01:32:27.808724 2127 controllermanager.go:511] "bootstrapsigner" is disabled
kube# [ 41.865581] kube-controller-manager[2127]: I0127 01:32:27.808773 2127 certificate_controller.go:113] Starting certificate controller
kube# [ 41.865975] kube-controller-manager[2127]: I0127 01:32:27.808791 2127 controller_utils.go:1029] Waiting for caches to sync for certificate controller
kube# [ 42.115267] kube-controller-manager[2127]: I0127 01:32:28.058785 2127 controllermanager.go:532] Started "replicaset"
kube# [ 42.115442] kube-controller-manager[2127]: I0127 01:32:28.058850 2127 replica_set.go:182] Starting replicaset controller
kube# [ 42.115841] kube-controller-manager[2127]: I0127 01:32:28.058866 2127 controller_utils.go:1029] Waiting for caches to sync for ReplicaSet controller
kube: running command: kubectl get deployment -o go-template nginx --template={{.status.readyReplicas}} | grep 10
kube: exit status 1
(0.05 seconds)
kube# [ 42.387461] kube-addons[3254]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 42.388998] kube-addons[3254]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 42.815355] kube-controller-manager[2127]: I0127 01:32:28.758546 2127 controllermanager.go:532] Started "horizontalpodautoscaling"
kube# [ 42.815744] kube-controller-manager[2127]: I0127 01:32:28.758787 2127 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
kube# [ 42.816096] kube-controller-manager[2127]: I0127 01:32:28.758824 2127 horizontal.go:156] Starting HPA controller
kube# [ 42.816374] kube-controller-manager[2127]: I0127 01:32:28.758862 2127 controller_utils.go:1029] Waiting for caches to sync for HPA controller
kube# [ 42.821598] kube-controller-manager[2127]: I0127 01:32:28.765103 2127 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
kube# [ 42.823429] kube-controller-manager[2127]: W0127 01:32:28.766931 2127 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="kube.my.xzy" does not exist
kube# [ 42.836899] kube-controller-manager[2127]: I0127 01:32:28.780463 2127 controller_utils.go:1036] Caches are synced for expand controller
kube# [ 42.865420] kube-controller-manager[2127]: I0127 01:32:28.808970 2127 controller_utils.go:1036] Caches are synced for PV protection controller
kube# [ 42.865549] kube-controller-manager[2127]: I0127 01:32:28.808972 2127 controller_utils.go:1036] Caches are synced for certificate controller
kube# [ 42.866021] kube-controller-manager[2127]: I0127 01:32:28.809086 2127 controller_utils.go:1036] Caches are synced for persistent volume controller
kube# [ 42.893846] kube-controller-manager[2127]: I0127 01:32:28.837345 2127 controller_utils.go:1036] Caches are synced for PVC protection controller
kube# [ 42.913622] kube-controller-manager[2127]: I0127 01:32:28.857163 2127 controller_utils.go:1036] Caches are synced for node controller
kube# [ 42.913741] kube-controller-manager[2127]: I0127 01:32:28.857196 2127 range_allocator.go:157] Starting range CIDR allocator
kube# [ 42.914015] kube-controller-manager[2127]: I0127 01:32:28.857216 2127 controller_utils.go:1029] Waiting for caches to sync for cidrallocator controller
kube# [ 42.915523] kube-controller-manager[2127]: I0127 01:32:28.859074 2127 controller_utils.go:1036] Caches are synced for HPA controller
kube# [ 42.915751] kube-controller-manager[2127]: I0127 01:32:28.859108 2127 controller_utils.go:1036] Caches are synced for ReplicaSet controller
kube# [ 42.916077] kube-controller-manager[2127]: I0127 01:32:28.859145 2127 controller_utils.go:1036] Caches are synced for GC controller
kube# [ 42.916344] kube-controller-manager[2127]: I0127 01:32:28.859257 2127 controller_utils.go:1036] Caches are synced for stateful set controller
kube# [ 42.916572] kube-controller-manager[2127]: I0127 01:32:28.859391 2127 controller_utils.go:1036] Caches are synced for TTL controller
kube# [ 42.957981] kube-addons[3254]: Error from server (NotFound): serviceaccounts "default" not found
kube# [ 42.959351] kube-addons[3254]: WRN: == Error getting default service account, retry in 0.5 second ==
kube# [ 42.976075] kube-controller-manager[2127]: I0127 01:32:28.919628 2127 controller_utils.go:1036] Caches are synced for deployment controller
kube# [ 42.981132] kube-apiserver[2253]: I0127 01:32:28.924361 2253 controller.go:606] quota admission added evaluator for: replicasets.apps
kube# [ 42.982677] kube-controller-manager[2127]: I0127 01:32:28.926203 2127 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"nginx", UID:"8935ef84-f311-460b-a8d6-20f295e51bd0", APIVersion:"apps/v1", ResourceVersion:"158", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7789544485 to 10
kube# [ 43.013905] kube-controller-manager[2127]: I0127 01:32:28.957441 2127 controller_utils.go:1036] Caches are synced for cidrallocator controller
kube# [ 43.016006] kube-controller-manager[2127]: I0127 01:32:28.959500 2127 controller_utils.go:1036] Caches are synced for ClusterRoleAggregator controller
kube# [ 43.017219] kube-controller-manager[2127]: I0127 01:32:28.960739 2127 range_allocator.go:310] Set node kube.my.xzy PodCIDR to 10.1.0.0/24
kube# [ 43.061685] kube-controller-manager[2127]: I0127 01:32:29.005182 2127 controller_utils.go:1036] Caches are synced for daemon sets controller
kube# [ 43.079547] kubelet[2346]: I0127 01:32:29.022629 2346 kuberuntime_manager.go:928] updating runtime config through cri with podcidr 10.1.0.0/24
kube# [ 43.082045] kubelet[2346]: I0127 01:32:29.022823 2346 docker_service.go:353] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.1.0.0/24,},}
kube# [ 43.082257] kubelet[2346]: I0127 01:32:29.025827 2346 kubelet_network.go:77] Setting Pod CIDR: -> 10.1.0.0/24
kube# [ 43.119794] kube-controller-manager[2127]: I0127 01:32:29.063332 2127 controller_utils.go:1036] Caches are synced for namespace controller
kube# [ 43.147468] kube-controller-manager[2127]: I0127 01:32:29.091036 2127 controller_utils.go:1036] Caches are synced for service account controller
kube# [ 43.166438] kube-controller-manager[2127]: I0127 01:32:29.109995 2127 controller_utils.go:1036] Caches are synced for attach detach controller
kube: running command: kubectl get deployment -o go-template nginx --template={{.status.readyReplicas}} | grep 10
kube: exit status 1
(0.05 seconds)
kube# [ 43.315878] kube-controller-manager[2127]: I0127 01:32:29.259385 2127 controller_utils.go:1036] Caches are synced for job controller
kube# [ 43.420639] kube-controller-manager[2127]: I0127 01:32:29.363884 2127 controller_utils.go:1036] Caches are synced for taint controller
kube# [ 43.420920] kube-controller-manager[2127]: I0127 01:32:29.363969 2127 node_lifecycle_controller.go:1189] Initializing eviction metric for zone:
kube# [ 43.421195] kube-controller-manager[2127]: I0127 01:32:29.364012 2127 taint_manager.go:182] Starting NoExecuteTaintManager
kube# [ 43.421419] kube-controller-manager[2127]: I0127 01:32:29.364173 2127 event.go:258] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kube.my.xzy", UID:"c8cbf001-577b-42c5-aa98-6414beca0316", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node kube.my.xzy event: Registered Node kube.my.xzy in Controller
kube# [ 43.426974] kube-controller-manager[2127]: W0127 01:32:29.370543 2127 node_lifecycle_controller.go:863] Missing timestamp for Node kube.my.xzy. Assuming now as a timestamp.
kube# [ 43.427136] kube-controller-manager[2127]: I0127 01:32:29.370615 2127 node_lifecycle_controller.go:1089] Controller detected that zone is now in state Normal.
kube# [ 43.529008] kube-addons[3254]: INFO: == Default service account in the kube-system namespace has token default-token-r2vbc ==
kube# [ 43.533750] kube-addons[3254]: find: ‘/etc/kubernetes/admission-controls’: No such file or directory
kube# [ 43.537934] kube-addons[3254]: INFO: == Entering periodical apply loop at 2020-01-27T01:32:29+00:00 ==
kube# [ 43.605613] kube-addons[3254]: INFO: Leader is kube
kube# [ 43.615847] kube-controller-manager[2127]: I0127 01:32:29.559378 2127 controller_utils.go:1036] Caches are synced for endpoint controller
kube# [ 43.619033] kube-controller-manager[2127]: I0127 01:32:29.562508 2127 controller_utils.go:1036] Caches are synced for disruption controller
kube# [ 43.619131] kube-controller-manager[2127]: I0127 01:32:29.562583 2127 disruption.go:341] Sending events to api server.
kube# [ 43.621869] kube-controller-manager[2127]: I0127 01:32:29.565370 2127 controller_utils.go:1036] Caches are synced for garbage collector controller
kube# [ 43.665813] kube-controller-manager[2127]: I0127 01:32:29.609292 2127 controller_utils.go:1036] Caches are synced for ReplicationController controller
kube# [ 43.674542] kube-controller-manager[2127]: I0127 01:32:29.617980 2127 controller_utils.go:1036] Caches are synced for garbage collector controller
kube# [ 43.674899] kube-controller-manager[2127]: I0127 01:32:29.618367 2127 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
kube# [ 43.680036] kube-controller-manager[2127]: I0127 01:32:29.623570 2127 controller_utils.go:1036] Caches are synced for resource quota controller
kube# [ 43.715556] kube-controller-manager[2127]: I0127 01:32:29.659075 2127 controller_utils.go:1036] Caches are synced for resource quota controller
kube# [ 43.745810] kube-addons[3254]: error: no objects passed to create
kube# [ 43.750512] kube-addons[3254]: INFO: == Kubernetes addon ensure completed at 2020-01-27T01:32:29+00:00 ==
kube# [ 43.750639] kube-addons[3254]: INFO: == Reconciling with deprecated label ==
kube# [ 43.896455] kube-apiserver[2253]: I0127 01:32:29.839973 2253 controller.go:606] quota admission added evaluator for: deployments.extensions
kube# [ 43.904165] kube-controller-manager[2127]: I0127 01:32:29.847241 2127 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"1b1b243e-81dd-4eb1-9268-f05394e2cc85", APIVersion:"apps/v1", ResourceVersion:"288", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-7cb9b6dd8f to 2
kube# [ 43.914722] kube-controller-manager[2127]: I0127 01:32:29.858231 2127 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-7cb9b6dd8f", UID:"903fcfce-6804-4b02-93bb-e63f5627d3aa", APIVersion:"apps/v1", ResourceVersion:"289", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "coredns-7cb9b6dd8f-" is forbidden: error looking up service account kube-system/coredns: serviceaccount "coredns" not found
kube# [ 43.919858] kube-controller-manager[2127]: E0127 01:32:29.863412 2127 replica_set.go:450] Sync "kube-system/coredns-7cb9b6dd8f" failed with pods "coredns-7cb9b6dd8f-" is forbidden: error looking up service account kube-system/coredns: serviceaccount "coredns" not found
kube# [ 44.161639] kube-controller-manager[2127]: I0127 01:32:30.105132 2127 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"nginx-7789544485", UID:"a9097169-02ee-4cd3-91b5-aeef959d28fe", APIVersion:"apps/v1", ResourceVersion:"260", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7789544485-nf2pt
kube# [ 44.165873] kube-controller-manager[2127]: I0127 01:32:30.109380 2127 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"nginx-7789544485", UID:"a9097169-02ee-4cd3-91b5-aeef959d28fe", APIVersion:"apps/v1", ResourceVersion:"260", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7789544485-49jh5
kube# [ 44.167068] kube-controller-manager[2127]: I0127 01:32:30.110597 2127 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"nginx-7789544485", UID:"a9097169-02ee-4cd3-91b5-aeef959d28fe", APIVersion:"apps/v1", ResourceVersion:"260", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7789544485-c8gjr
kube# [ 44.172575] kube-controller-manager[2127]: I0127 01:32:30.116111 2127 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"nginx-7789544485", UID:"a9097169-02ee-4cd3-91b5-aeef959d28fe", APIVersion:"apps/v1", ResourceVersion:"260", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7789544485-8x7xd
kube# [ 44.172906] kube-controller-manager[2127]: I0127 01:32:30.116194 2127 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"nginx-7789544485", UID:"a9097169-02ee-4cd3-91b5-aeef959d28fe", APIVersion:"apps/v1", ResourceVersion:"260", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7789544485-stgzq
kube# [ 44.173128] kube-controller-manager[2127]: I0127 01:32:30.116266 2127 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"nginx-7789544485", UID:"a9097169-02ee-4cd3-91b5-aeef959d28fe", APIVersion:"apps/v1", ResourceVersion:"260", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7789544485-pnq6t
kube# [ 44.173433] kube-controller-manager[2127]: I0127 01:32:30.116574 2127 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"nginx-7789544485", UID:"a9097169-02ee-4cd3-91b5-aeef959d28fe", APIVersion:"apps/v1", ResourceVersion:"260", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7789544485-lxjlr
kube# [ 44.179333] kube-controller-manager[2127]: I0127 01:32:30.122834 2127 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"nginx-7789544485", UID:"a9097169-02ee-4cd3-91b5-aeef959d28fe", APIVersion:"apps/v1", ResourceVersion:"260", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7789544485-7shzx
kube# [ 44.179876] kube-controller-manager[2127]: I0127 01:32:30.123016 2127 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"nginx-7789544485", UID:"a9097169-02ee-4cd3-91b5-aeef959d28fe", APIVersion:"apps/v1", ResourceVersion:"260", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7789544485-92f9g
kube# [ 44.180589] kube-controller-manager[2127]: I0127 01:32:30.124103 2127 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"nginx-7789544485", UID:"a9097169-02ee-4cd3-91b5-aeef959d28fe", APIVersion:"apps/v1", ResourceVersion:"260", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7789544485-znvpx
kube: running command: kubectl get deployment -o go-template nginx --template={{.status.readyReplicas}} | grep 10
kube: exit status 1
(0.05 seconds)
kube# [ 44.285954] kubelet[2346]: I0127 01:32:30.229101 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-rk5c5" (UniqueName: "kubernetes.io/secret/bccd2313-7b09-4554-8eb4-ae71bc2b2dee-default-token-rk5c5") pod "nginx-7789544485-znvpx" (UID: "bccd2313-7b09-4554-8eb4-ae71bc2b2dee")
kube# [ 44.286155] kubelet[2346]: I0127 01:32:30.229142 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-rk5c5" (UniqueName: "kubernetes.io/secret/dd07803f-3919-4c50-b41a-ff1a7dc956b3-default-token-rk5c5") pod "nginx-7789544485-lxjlr" (UID: "dd07803f-3919-4c50-b41a-ff1a7dc956b3")
kube# [ 44.286544] kubelet[2346]: I0127 01:32:30.229172 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-rk5c5" (UniqueName: "kubernetes.io/secret/2d190087-f27b-4a7d-a611-4141d67bad2d-default-token-rk5c5") pod "nginx-7789544485-7shzx" (UID: "2d190087-f27b-4a7d-a611-4141d67bad2d")
kube# [ 44.286812] kubelet[2346]: I0127 01:32:30.229219 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "static" (UniqueName: "kubernetes.io/configmap/bccd2313-7b09-4554-8eb4-ae71bc2b2dee-static") pod "nginx-7789544485-znvpx" (UID: "bccd2313-7b09-4554-8eb4-ae71bc2b2dee")
kube# [ 44.287103] kubelet[2346]: I0127 01:32:30.229321 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "static" (UniqueName: "kubernetes.io/configmap/dd07803f-3919-4c50-b41a-ff1a7dc956b3-static") pod "nginx-7789544485-lxjlr" (UID: "dd07803f-3919-4c50-b41a-ff1a7dc956b3")
kube# [ 44.287384] kubelet[2346]: I0127 01:32:30.229371 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "static" (UniqueName: "kubernetes.io/configmap/2d190087-f27b-4a7d-a611-4141d67bad2d-static") pod "nginx-7789544485-7shzx" (UID: "2d190087-f27b-4a7d-a611-4141d67bad2d")
kube# [ 44.287611] kubelet[2346]: I0127 01:32:30.229474 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-rk5c5" (UniqueName: "kubernetes.io/secret/a82a6530-fd15-4d47-87cb-12be36fc8d21-default-token-rk5c5") pod "nginx-7789544485-49jh5" (UID: "a82a6530-fd15-4d47-87cb-12be36fc8d21")
kube# [ 44.288033] kubelet[2346]: I0127 01:32:30.229515 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-rk5c5" (UniqueName: "kubernetes.io/secret/12d5809c-d379-47b6-abe4-f90090087aff-default-token-rk5c5") pod "nginx-7789544485-8x7xd" (UID: "12d5809c-d379-47b6-abe4-f90090087aff")
kube# [ 44.288409] kubelet[2346]: I0127 01:32:30.229553 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/ce814caf-8dcf-4114-9ccd-53a373024881-config") pod "nginx-7789544485-nf2pt" (UID: "ce814caf-8dcf-4114-9ccd-53a373024881")
kube# [ 44.288633] kubelet[2346]: I0127 01:32:30.229594 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/effd818c-1e7e-4e3d-be8f-2762d3d1f024-config") pod "nginx-7789544485-stgzq" (UID: "effd818c-1e7e-4e3d-be8f-2762d3d1f024")
kube# [ 44.288891] kubelet[2346]: I0127 01:32:30.229633 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "static" (UniqueName: "kubernetes.io/configmap/effd818c-1e7e-4e3d-be8f-2762d3d1f024-static") pod "nginx-7789544485-stgzq" (UID: "effd818c-1e7e-4e3d-be8f-2762d3d1f024")
kube# [ 44.289188] kubelet[2346]: I0127 01:32:30.229689 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-rk5c5" (UniqueName: "kubernetes.io/secret/db8987fb-d101-4234-aeeb-0a8ebcaa5201-default-token-rk5c5") pod "nginx-7789544485-pnq6t" (UID: "db8987fb-d101-4234-aeeb-0a8ebcaa5201")
kube# [ 44.289503] kubelet[2346]: I0127 01:32:30.229746 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-rk5c5" (UniqueName: "kubernetes.io/secret/effd818c-1e7e-4e3d-be8f-2762d3d1f024-default-token-rk5c5") pod "nginx-7789544485-stgzq" (UID: "effd818c-1e7e-4e3d-be8f-2762d3d1f024")
kube# [ 44.290008] kubelet[2346]: I0127 01:32:30.229789 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/a82a6530-fd15-4d47-87cb-12be36fc8d21-config") pod "nginx-7789544485-49jh5" (UID: "a82a6530-fd15-4d47-87cb-12be36fc8d21")
kube# [ 44.290281] kubelet[2346]: I0127 01:32:30.229839 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/43c4c43b-09a9-4370-b9c2-f9198b823a2d-config") pod "nginx-7789544485-c8gjr" (UID: "43c4c43b-09a9-4370-b9c2-f9198b823a2d")
kube# [ 44.315238] kubelet[2346]: I0127 01:32:30.229885 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-rk5c5" (UniqueName: "kubernetes.io/secret/295a59c4-7594-4df8-97d9-804d1ed14de4-default-token-rk5c5") pod "nginx-7789544485-92f9g" (UID: "295a59c4-7594-4df8-97d9-804d1ed14de4")
kube# [ 44.315489] kubelet[2346]: I0127 01:32:30.229960 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-rk5c5" (UniqueName: "kubernetes.io/secret/ce814caf-8dcf-4114-9ccd-53a373024881-default-token-rk5c5") pod "nginx-7789544485-nf2pt" (UID: "ce814caf-8dcf-4114-9ccd-53a373024881")
kube# [ 44.315694] kubelet[2346]: I0127 01:32:30.230013 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/bccd2313-7b09-4554-8eb4-ae71bc2b2dee-config") pod "nginx-7789544485-znvpx" (UID: "bccd2313-7b09-4554-8eb4-ae71bc2b2dee")
kube# [ 44.316035] kubelet[2346]: I0127 01:32:30.230065 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "static" (UniqueName: "kubernetes.io/configmap/43c4c43b-09a9-4370-b9c2-f9198b823a2d-static") pod "nginx-7789544485-c8gjr" (UID: "43c4c43b-09a9-4370-b9c2-f9198b823a2d")
kube# [ 44.316223] kubelet[2346]: I0127 01:32:30.230117 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/295a59c4-7594-4df8-97d9-804d1ed14de4-config") pod "nginx-7789544485-92f9g" (UID: "295a59c4-7594-4df8-97d9-804d1ed14de4")
kube# [ 44.316419] kubelet[2346]: I0127 01:32:30.230168 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/dd07803f-3919-4c50-b41a-ff1a7dc956b3-config") pod "nginx-7789544485-lxjlr" (UID: "dd07803f-3919-4c50-b41a-ff1a7dc956b3")
kube# [ 44.316611] kubelet[2346]: I0127 01:32:30.230236 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/db8987fb-d101-4234-aeeb-0a8ebcaa5201-config") pod "nginx-7789544485-pnq6t" (UID: "db8987fb-d101-4234-aeeb-0a8ebcaa5201")
kube# [ 44.316875] kubelet[2346]: I0127 01:32:30.230306 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/12d5809c-d379-47b6-abe4-f90090087aff-config") pod "nginx-7789544485-8x7xd" (UID: "12d5809c-d379-47b6-abe4-f90090087aff")
kube# [ 44.317130] kubelet[2346]: I0127 01:32:30.230362 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/2d190087-f27b-4a7d-a611-4141d67bad2d-config") pod "nginx-7789544485-7shzx" (UID: "2d190087-f27b-4a7d-a611-4141d67bad2d")
kube# [ 44.317334] kubelet[2346]: I0127 01:32:30.230449 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "static" (UniqueName: "kubernetes.io/configmap/a82a6530-fd15-4d47-87cb-12be36fc8d21-static") pod "nginx-7789544485-49jh5" (UID: "a82a6530-fd15-4d47-87cb-12be36fc8d21")
kube# [ 44.317521] kubelet[2346]: I0127 01:32:30.230491 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-rk5c5" (UniqueName: "kubernetes.io/secret/43c4c43b-09a9-4370-b9c2-f9198b823a2d-default-token-rk5c5") pod "nginx-7789544485-c8gjr" (UID: "43c4c43b-09a9-4370-b9c2-f9198b823a2d")
kube# [ 44.394229] serial8250: too much work for irq4
kube# [ 44.317710] kubelet[2346]: I0127 01:32:30.230524 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "static" (UniqueName: "kubernetes.io/configmap/295a59c4-7594-4df8-97d9-804d1ed14de4-static") pod "nginx-7789544485-92f9g" (UID: "295a59c4-7594-4df8-97d9-804d1ed14de4")
kube# [ 44.318038] kubelet[2346]: I0127 01:32:30.230558 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "static" (UniqueName: "kubernetes.io/configmap/ce814caf-8dcf-4114-9ccd-53a373024881-static") pod "nginx-7789544485-nf2pt" (UID: "ce814caf-8dcf-4114-9ccd-53a373024881")
kube# [ 44.318250] kubelet[2346]: I0127 01:32:30.230594 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "static" (UniqueName: "kubernetes.io/configmap/db8987fb-d101-4234-aeeb-0a8ebcaa5201-static") pod "nginx-7789544485-pnq6t" (UID: "db8987fb-d101-4234-aeeb-0a8ebcaa5201")
kube# [ 44.345368] kubelet[2346]: I0127 01:32:30.230651 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "static" (UniqueName: "kubernetes.io/configmap/12d5809c-d379-47b6-abe4-f90090087aff-static") pod "nginx-7789544485-8x7xd" (UID: "12d5809c-d379-47b6-abe4-f90090087aff")
kube# [ 44.410085] systemd[1]: Started Kubernetes transient mount for /var/lib/kubernetes/pods/43c4c43b-09a9-4370-b9c2-f9198b823a2d/volumes/kubernetes.io~secret/default-token-rk5c5.
kube# [ 44.416867] systemd[1]: Started Kubernetes transient mount for /var/lib/kubernetes/pods/2d190087-f27b-4a7d-a611-4141d67bad2d/volumes/kubernetes.io~secret/default-token-rk5c5.
kube# [ 44.419156] systemd[1]: Started Kubernetes transient mount for /var/lib/kubernetes/pods/bccd2313-7b09-4554-8eb4-ae71bc2b2dee/volumes/kubernetes.io~secret/default-token-rk5c5.
kube# [ 44.422082] systemd[1]: Started Kubernetes transient mount for /var/lib/kubernetes/pods/dd07803f-3919-4c50-b41a-ff1a7dc956b3/volumes/kubernetes.io~secret/default-token-rk5c5.
kube# [ 44.425511] systemd[1]: run-r041864548ac540c1b12e2385de8498d4.scope: Succeeded.
kube# [ 44.428440] systemd[1]: Started Kubernetes transient mount for /var/lib/kubernetes/pods/12d5809c-d379-47b6-abe4-f90090087aff/volumes/kubernetes.io~secret/default-token-rk5c5.
kube# [ 44.437875] systemd[1]: run-r50812508bd7e4ce38276712a25b2e230.scope: Succeeded.
kube# [ 44.438336] systemd[1]: run-r9b32819bb25541ee866341840cae726f.scope: Succeeded.
kube# [ 44.438861] systemd[1]: run-rc5a50ebfa3524f50a64626f57a58dd50.scope: Succeeded.
kube# [ 44.446377] systemd[1]: run-rb7c94480f7f343c7bca870c59711226c.scope: Succeeded.
kube# [ 44.449310] systemd[1]: Started Kubernetes transient mount for /var/lib/kubernetes/pods/db8987fb-d101-4234-aeeb-0a8ebcaa5201/volumes/kubernetes.io~secret/default-token-rk5c5.
kube# [ 44.451340] systemd[1]: Started Kubernetes transient mount for /var/lib/kubernetes/pods/295a59c4-7594-4df8-97d9-804d1ed14de4/volumes/kubernetes.io~secret/default-token-rk5c5.
kube# [ 44.453050] systemd[1]: Started Kubernetes transient mount for /var/lib/kubernetes/pods/effd818c-1e7e-4e3d-be8f-2762d3d1f024/volumes/kubernetes.io~secret/default-token-rk5c5.
kube# [ 44.454782] systemd[1]: Started Kubernetes transient mount for /var/lib/kubernetes/pods/ce814caf-8dcf-4114-9ccd-53a373024881/volumes/kubernetes.io~secret/default-token-rk5c5.
kube# [ 44.456614] systemd[1]: Started Kubernetes transient mount for /var/lib/kubernetes/pods/a82a6530-fd15-4d47-87cb-12be36fc8d21/volumes/kubernetes.io~secret/default-token-rk5c5.
kube# [ 44.473011] systemd[1]: run-re14c6d0870ac446a820bd92f51937fd9.scope: Succeeded.
kube# [ 44.473409] systemd[1]: run-r5ca27a9d9dbb42cab9f49f9a0ddf35e5.scope: Succeeded.
kube# [ 44.473898] systemd[1]: run-r8df08078fcb94e0ea8d81a83d612ea80.scope: Succeeded.
kube# [ 44.474302] systemd[1]: run-r9205188aa2554bcc880613074130dcc0.scope: Succeeded.
kube# [ 44.474727] systemd[1]: run-r576c8148146f4f349fb8b5ece5aca134.scope: Succeeded.
kube# [ 44.507868] systemd[1]: var-lib-docker-overlay2-96881a52e5464fe8b830b00e333828346b1f8d4016c4ccf1884d38b68bd380e7\x2dinit-merged.mount: Succeeded.
kube# [ 44.508454] systemd[1]: var-lib-docker-overlay2-4428f9c209fda3fb81235595b34c9efd1df6a71640e81652c6ebc7bca51d5459\x2dinit-merged.mount: Succeeded.
kube# [ 44.514090] systemd[1]: var-lib-docker-overlay2-24139fc4dbf9aa40c2c29093c37479c6ebb5133cb2165db16b8d50bca022f879\x2dinit-merged.mount: Succeeded.
kube# [ 44.519718] systemd[1]: var-lib-docker-overlay2-b577dd87da641e95dbacedadf745c026a3e7d28bcb2ae58ca449ff45bd6aef63\x2dinit-merged.mount: Succeeded.
kube# [ 44.526406] systemd[1]: var-lib-docker-overlay2-a235ffbdc486838ce3d3a994b8a3f383ead401df786802e53f8b99104e715cdd\x2dinit-merged.mount: Succeeded.
kube# [ 44.529332] systemd[1]: var-lib-docker-overlay2-cbe2ef19a8dafb5229e20836c79b259d2a86af5e8f8aa635fe21b2236650f386\x2dinit-merged.mount: Succeeded.
kube# [ 44.538051] systemd[1]: var-lib-docker-overlay2-83940841dbe2c531ac74018dc3d159b6fe95e17b2e6072808fb2263f1774ca23\x2dinit-merged.mount: Succeeded.
kube# [ 44.750005] dockerd[1172]: time="2020-01-27T01:32:30.692996227Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/38827c862ffcd894b487985be4b954d7db3bdf8cf6ef2002306a6e93c8e8317a/shim.sock" debug=false pid=4681
kube# [ 44.764401] dockerd[1172]: time="2020-01-27T01:32:30.707953994Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c99c42872b1cbee7ec19986b02cefe5059ef25ec0b3ea3cec3be96fcf4d46584/shim.sock" debug=false pid=4682
kube# [ 44.780135] dockerd[1172]: time="2020-01-27T01:32:30.723626098Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/465342a7fa95789348f63d8bcd3c571572897b8a63790816e8394984fcff1837/shim.sock" debug=false pid=4689
kube# [ 44.800156] dockerd[1172]: time="2020-01-27T01:32:30.743245351Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/55bbbbf6f9487b5400fe710b5dc686d0aff167044542a8668c985dc48c022c1c/shim.sock" debug=false pid=4702
kube# [ 44.826981] dockerd[1172]: time="2020-01-27T01:32:30.770503564Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/360a5ee86082526c809518c2e85d99789111bc8e4f1c24c4dc663f4ebb1e40ea/shim.sock" debug=false pid=4740
kube# [ 44.889263] dockerd[1172]: time="2020-01-27T01:32:30.832750582Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0b7fa1649be8208ca461b35ae2f69d818b003d3b4de8d6a1d420779d44fd72e7/shim.sock" debug=false pid=4796
kube# [ 44.912918] dockerd[1172]: time="2020-01-27T01:32:30.856452477Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/63c30338555cb6b62b8aee3e61f842185ea8df4a63bc19e928940eeb544bd344/shim.sock" debug=false pid=4841
kube# [ 44.917630] dockerd[1172]: time="2020-01-27T01:32:30.861181290Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7c0b0adf8f0568feb389195d67dedeeacf40f5c77522a9fc834e5e8cd98420fa/shim.sock" debug=false pid=4845
kube# [ 44.923111] dockerd[1172]: time="2020-01-27T01:32:30.865798637Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5a1057aed74c0d5c4a7536940b6a8ffc6f0d1585e39964e4cbab3d4e2da6a47c/shim.sock" debug=false pid=4860
kube# [ 44.926578] kube-controller-manager[2127]: I0127 01:32:30.868999 2127 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-7cb9b6dd8f", UID:"903fcfce-6804-4b02-93bb-e63f5627d3aa", APIVersion:"apps/v1", ResourceVersion:"295", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-7cb9b6dd8f-2sn52
kube# [ 44.930049] kube-controller-manager[2127]: I0127 01:32:30.873609 2127 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-7cb9b6dd8f", UID:"903fcfce-6804-4b02-93bb-e63f5627d3aa", APIVersion:"apps/v1", ResourceVersion:"295", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-7cb9b6dd8f-tt9br
kube# [ 44.946411] dockerd[1172]: time="2020-01-27T01:32:30.887762601Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a92e2e4a3afb0d649e7d7c807e65f8ca056336393e8c5c94f09db5a64450af0f/shim.sock" debug=false pid=4882
kube# [ 44.994627] kubelet[2346]: I0127 01:32:30.938082 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-xd8ph" (UniqueName: "kubernetes.io/secret/6583ea38-0a30-4394-a6bf-94105a76ff1d-coredns-token-xd8ph") pod "coredns-7cb9b6dd8f-tt9br" (UID: "6583ea38-0a30-4394-a6bf-94105a76ff1d")
kube# [ 44.995256] kubelet[2346]: I0127 01:32:30.938157 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-xd8ph" (UniqueName: "kubernetes.io/secret/f83c0d74-43c3-4ddc-9531-813866292239-coredns-token-xd8ph") pod "coredns-7cb9b6dd8f-2sn52" (UID: "f83c0d74-43c3-4ddc-9531-813866292239")
kube# [ 44.995722] kubelet[2346]: I0127 01:32:30.938199 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6583ea38-0a30-4394-a6bf-94105a76ff1d-config-volume") pod "coredns-7cb9b6dd8f-tt9br" (UID: "6583ea38-0a30-4394-a6bf-94105a76ff1d")
kube# [ 44.996331] kubelet[2346]: I0127 01:32:30.938363 2346 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f83c0d74-43c3-4ddc-9531-813866292239-config-volume") pod "coredns-7cb9b6dd8f-2sn52" (UID: "f83c0d74-43c3-4ddc-9531-813866292239")
kube# [ 45.139984] kube-addons[3254]: configmap/coredns created
kube# [ 45.140260] kube-addons[3254]: deployment.extensions/coredns created
kube# [ 45.140688] kube-addons[3254]: serviceaccount/coredns created
kube# [ 45.141055] kube-addons[3254]: service/kube-dns created
kube# [ 45.141349] kube-addons[3254]: INFO: == Reconciling with addon-manager label ==
kube# [ 45.219954] systemd[1]: Started Kubernetes transient mount for /var/lib/kubernetes/pods/f83c0d74-43c3-4ddc-9531-813866292239/volumes/kubernetes.io~secret/coredns-token-xd8ph.
kube# [ 45.223536] systemd[1]: Started Kubernetes transient mount for /var/lib/kubernetes/pods/6583ea38-0a30-4394-a6bf-94105a76ff1d/volumes/kubernetes.io~secret/coredns-token-xd8ph.
kube# [ 45.228550] systemd-udevd[5203]: Using default interface naming scheme 'v243'.
kube# [ 45.229044] systemd-udevd[5203]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
kube# [ 45.248367] systemd[1]: run-r40beb433bc5040c9bcf76f4ec0fd02a8.scope: Succeeded.
kube# [ 45.249104] systemd[1]: run-r84dfca377ad64dd290a4d0e9845175ee.scope: Succeeded.
kube# [ 45.267365] systemd-udevd[5206]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
kube: running command: kubectl get deployment -o go-template nginx --template={{.status.readyReplicas}} | grep 10
kube# [ 45.351004] cni0: port 1(vethe0b55034) entered blocking state
kube# [ 45.351801] cni0: port 1(vethe0b55034) entered disabled state
kube# [ 45.351853] device vethe0b55034 entered promiscuous mode
kube# [ 45.296558] systemd-udevd[5212]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
kube# [ 45.299646] systemd-udevd[5212]: Using default interface naming scheme 'v243'.
kube# [ 45.353531] cni0: port 1(vethe0b55034) entered blocking state
kube# [ 45.354291] cni0: port 1(vethe0b55034) entered forwarding state
kube# [ 45.356356] cni0: port 2(vethc52503c5) entered blocking state
kube# [ 45.357330] cni0: port 2(vethc52503c5) entered disabled state
kube# [ 45.358336] device vethc52503c5 entered promiscuous mode
kube# [ 45.359258] cni0: port 2(vethc52503c5) entered blocking state
kube# [ 45.360110] cni0: port 2(vethc52503c5) entered forwarding state
kube# [ 45.304715] systemd-udevd[5210]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
kube# [ 45.308282] systemd-udevd[5210]: Using default interface naming scheme 'v243'.
kube# [ 45.376078] cni0: port 3(veth19883ad1) entered blocking state
kube# [ 45.377953] cni0: port 3(veth19883ad1) entered disabled state
kube# [ 45.380059] device veth19883ad1 entered promiscuous mode
kube# [ 45.381734] cni0: port 3(veth19883ad1) entered blocking state
kube# [ 45.384014] cni0: port 3(veth19883ad1) entered forwarding state
kube# [ 45.332858] systemd-udevd[5206]: Using default interface naming scheme 'v243'.
kube# [ 45.333353] systemd-udevd[5203]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
kube# [ 45.389883] cni0: port 4(veth5f245672) entered blocking state
kube# [ 45.390775] cni0: port 4(veth5f245672) entered disabled state
kube# [ 45.394657] device veth5f245672 entered promiscuous mode
kube# [ 45.397588] cni0: port 4(veth5f245672) entered blocking state
kube# [ 45.400618] cni0: port 4(veth5f245672) entered forwarding state
kube# [ 45.351350] systemd-udevd[5269]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
kube# [ 45.351855] systemd-udevd[5269]: Using default interface naming scheme 'v243'.
kube# [ 45.411964] cni0: port 5(vethc70dcaf4) entered blocking state
kube# [ 45.414967] cni0: port 5(vethc70dcaf4) entered disabled state
kube# [ 45.419564] device vethc70dcaf4 entered promiscuous mode
kube# [ 45.424078] cni0: port 5(vethc70dcaf4) entered blocking state
kube# [ 45.424078] cni0: port 5(vethc70dcaf4) entered forwarding state
kube# [ 45.366859] kube-addons[3254]: error: no objects passed to apply
kube# [ 45.384629] kube-addons[3254]: INFO: == Kubernetes addon reconcile completed at 2020-01-27T01:32:31+00:00 ==
kube# [ 45.392436] dhcpcd[1160]: cni0: waiting for carrier
kube: exit status 1
(0.13 seconds)
kube# [ 45.416643] dhcpcd[1160]: vethc52503c5: IAID 7b:8e:16:6c
kube# [ 45.417230] dhcpcd[1160]: vethc52503c5: adding address fe80::6062:7bff:fe8e:166c
kube# [ 45.471863] show_signal_msg: 131 callbacks suppressed
kube# [ 45.471865] dhcpcd[1160]: segfault at 100cc ip 000000000042982e sp 00007ffde75d7ca0 error 4 in dhcpcd[407000+32000]
kube# [ 45.478027] Code: 48 89 ee bf 14 00 00 00 e8 3f a6 00 00 f6 45 68 4c 75 04 83 4d 6c 40 41 8b 47 2c 85 c0 0f 84 09 ff ff ff 49 8b 87 c0 00 00 00 <f6> 80 cc 00 01 00 20 0f 84 f5 fe ff ff f6 43 38 04 0f 85 eb fe ff
kube# [ 45.449907] systemd[1]: Created slice system-systemd\x2dcoredump.slice.
kube# [ 45.450470] systemd[1]: Started Process Core Dump (PID 5389/UID 0).
kube# [ 45.545215] kubelet[2346]: W0127 01:32:31.488051 2346 pod_container_deletor.go:75] Container "0b7fa1649be8208ca461b35ae2f69d818b003d3b4de8d6a1d420779d44fd72e7" not found in pod's containers
kube# [ 45.556723] kubelet[2346]: W0127 01:32:31.500246 2346 pod_container_deletor.go:75] Container "5a1057aed74c0d5c4a7536940b6a8ffc6f0d1585e39964e4cbab3d4e2da6a47c" not found in pod's containers
kube# [ 45.569362] systemd[1]: var-lib-docker-overlay2-61ca680bb003fd461c1b5cef5af186d9977c83e0235e599c07141a81d1d9804f\x2dinit-merged.mount: Succeeded.
kube# [ 45.571594] systemd-udevd[5210]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
kube# [ 45.572993] systemd[1]: var-lib-docker-overlay2-b867ce06733c22667dc461eb4c1a79e7a9d7345d212f2d1c7b83baec35872927\x2dinit-merged.mount: Succeeded.
kube# [ 45.574211] systemd[1]: var-lib-docker-overlay2-19933a7cf6cfd712dbc3b88dcde922d49dc880343d83eb5fdc9a50d36b401d3b\x2dinit-merged.mount: Succeeded.
kube# [ 45.629137] cni0: port 6(vethce1f55e2) entered blocking state
kube# [ 45.629863] cni0: port 6(vethce1f55e2) entered disabled state
kube# [ 45.630639] device vethce1f55e2 entered promiscuous mode
kube# [ 45.631282] cni0: port 6(vethce1f55e2) entered blocking state
kube# [ 45.632044] cni0: port 6(vethce1f55e2) entered forwarding state
kube# [ 45.587323] kubelet[2346]: W0127 01:32:31.530851 2346 pod_container_deletor.go:75] Container "38827c862ffcd894b487985be4b954d7db3bdf8cf6ef2002306a6e93c8e8317a" not found in pod's containers
kube# [ 45.590890] systemd-udevd[5269]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
kube# [ 45.645558] cni0: port 7(veth3c341536) entered blocking state
kube# [ 45.646535] cni0: port 7(veth3c341536) entered disabled state
kube# [ 45.647453] device veth3c341536 entered promiscuous mode
kube# [ 45.648121] cni0: port 7(veth3c341536) entered blocking state
kube# [ 45.649003] cni0: port 7(veth3c341536) entered forwarding state
kube# [ 45.592085] kubelet[2346]: W0127 01:32:31.535515 2346 pod_container_deletor.go:75] Container "7c0b0adf8f0568feb389195d67dedeeacf40f5c77522a9fc834e5e8cd98420fa" not found in pod's containers
kube# [ 45.596541] kubelet[2346]: W0127 01:32:31.538706 2346 pod_container_deletor.go:75] Container "a92e2e4a3afb0d649e7d7c807e65f8ca056336393e8c5c94f09db5a64450af0f" not found in pod's containers
kube# [ 45.616365] systemd-udevd[5203]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
kube# [ 45.687144] cni0: port 8(veth8e8e94d3) entered blocking state
kube# [ 45.688738] cni0: port 8(veth8e8e94d3) entered disabled state
kube# [ 45.688786] device veth8e8e94d3 entered promiscuous mode
kube# [ 45.688809] cni0: port 8(veth8e8e94d3) entered blocking state
kube# [ 45.691820] cni0: port 8(veth8e8e94d3) entered forwarding state
kube# [ 45.632640] systemd-udevd[5210]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
kube# [ 45.694469] cni0: port 9(vethe2d13415) entered blocking state
kube# [ 45.696130] cni0: port 9(vethe2d13415) entered disabled state
kube# [ 45.642078] systemd[1]: var-lib-docker-overlay2-19933a7cf6cfd712dbc3b88dcde922d49dc880343d83eb5fdc9a50d36b401d3b-merged.mount: Succeeded.
kube# [ 45.643606] kubelet[2346]: W0127 01:32:31.585929 2346 pod_container_deletor.go:75] Container "360a5ee86082526c809518c2e85d99789111bc8e4f1c24c4dc663f4ebb1e40ea" not found in pod's containers
kube# [ 45.697659] device vethe2d13415 entered promiscuous mode
kube# [ 45.698980] cni0: port 9(vethe2d13415) entered blocking state
kube# [ 45.699888] cni0: port 9(vethe2d13415) entered forwarding state
kube# [ 45.701255] cni0: port 10(veth80de4b3c) entered blocking state
kube# [ 45.702231] cni0: port 10(veth80de4b3c) entered disabled state
kube# [ 45.702398] device veth80de4b3c entered promiscuous mode
kube# [ 45.703815] cni0: port 10(veth80de4b3c) entered blocking state
kube# [ 45.704712] cni0: port 10(veth80de4b3c) entered forwarding state
kube# [ 45.647019] systemd-udevd[5212]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
kube# [ 45.669220] kubelet[2346]: E0127 01:32:31.612503 2346 remote_runtime.go:295] ContainerStatus "2f6d4205e177c24b024ed1ba6ccc0c2d128ac7a5ded1e2b31a55a48215004270" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 2f6d4205e177c24b024ed1ba6ccc0c2d128ac7a5ded1e2b31a55a48215004270
kube# [ 45.669649] kubelet[2346]: E0127 01:32:31.612821 2346 kuberuntime_manager.go:902] getPodContainerStatuses for pod "nginx-7789544485-stgzq_default(effd818c-1e7e-4e3d-be8f-2762d3d1f024)" failed: rpc error: code = Unknown desc = Error: No such container: 2f6d4205e177c24b024ed1ba6ccc0c2d128ac7a5ded1e2b31a55a48215004270
kube# [ 45.694716] dockerd[1172]: time="2020-01-27T01:32:31.638041478Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0a93574bdc4cf63b74cfa127f763dc551b772975853a925f1ffe64aead5a84c3/shim.sock" debug=false pid=5698
kube# [ 45.749395] dockerd[1172]: time="2020-01-27T01:32:31.692784469Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2f6d4205e177c24b024ed1ba6ccc0c2d128ac7a5ded1e2b31a55a48215004270/shim.sock" debug=false pid=5742
kube# [ 45.753465] dockerd[1172]: time="2020-01-27T01:32:31.696457840Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c50985e215e6ec601738fc72fd5fa2e56191c14fb217ad4dc014dfe5f019c7d7/shim.sock" debug=false pid=5752
kube# [ 45.757263] dockerd[1172]: time="2020-01-27T01:32:31.700615631Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bfa06027add359b67a25ea8d32c41a7306a34e782c45d2501422ac5dee5b7bde/shim.sock" debug=false pid=5763
kube# [ 45.760348] dockerd[1172]: time="2020-01-27T01:32:31.703609308Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b9f36891d590ade74f4c03b83cbc7e3db8dda47338bdcdc12fcdd6224f17b282/shim.sock" debug=false pid=5769
kube# [ 45.769152] systemd[1]: var-lib-docker-overlay2-6568643a3dbcb2f05614dd92be06cb8258831694400f8a814815c37dee795e83\x2dinit-merged.mount: Succeeded.
kube# [ 45.782939] dockerd[1172]: time="2020-01-27T01:32:31.726318619Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3253a5234f5409edf7b787c1639e122a89c3851334eecb8fccaa08e21379b890/shim.sock" debug=false pid=5818
kube# [ 45.786396] dockerd[1172]: time="2020-01-27T01:32:31.729781070Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8ed0817155dd07764f5bedc7c1e033981768c312b19e359cc0578e23ad4ca50e/shim.sock" debug=false pid=5826
kube# [ 45.811293] systemd[1]: var-lib-docker-overlay2-ff80b5ddbd6a6bed00ba95e6a7050b4b442a28a5e663f09e7d4b3717efbf6779\x2dinit-merged.mount: Succeeded.
kube# [ 45.811886] systemd[1]: dhcpcd.service: Main process exited, code=dumped, status=11/SEGV
kube# [ 45.812222] systemd[1]: dhcpcd.service: Failed with result 'core-dump'.
kube# [ 45.812660] systemd[1]: dhcpcd.service: Consumed 137ms CPU time, received 96B IP traffic, sent 272B IP traffic.
kube# [ 45.821381] systemd[1]: var-lib-docker-overlay2-f9712a6c4a890ec9b388e273daa63add7ee81680c91055297bf8268ab77b1076\x2dinit-merged.mount: Succeeded.
kube# [ 45.823144] systemd-coredump[5399]: Cannot resolve systemd-coredump user. Proceeding to dump core as root: No such process
kube# [ 45.823992] systemd-coredump[5399]: Process 1160 (dhcpcd) of user 0 dumped core.
kube# [ 45.838994] systemd[1]: systemd-coredump@0-5389-0.service: Succeeded.
kube# [ 45.875569] dockerd[1172]: time="2020-01-27T01:32:31.819096612Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7a89c8c946332dc8fd1f62e1baa2bd3d6f840c389e88234cf1c01646c177f186/shim.sock" debug=false pid=5935
kube# [ 45.883461] systemd[1]: var-lib-docker-overlay2-bacdb27907a8b2240fedb6bdaacf9434255919ee0128bc59fa6225d3c464caca-merged.mount: Succeeded.
kube# [ 45.911403] dockerd[1172]: time="2020-01-27T01:32:31.854779080Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b393ac3a5af6bc1276c72b6822610c14f8c4b3d7821fe3212cc40ebb57986c67/shim.sock" debug=false pid=5976
kube# [ 45.917095] systemd[1]: dhcpcd.service: Service RestartSec=100ms expired, scheduling restart.
kube# [ 45.917423] systemd[1]: dhcpcd.service: Scheduled restart job, restart counter is at 1.
kube# [ 45.917836] systemd[1]: Stopped DHCP Client.
kube# [ 45.918990] systemd[1]: dhcpcd.service: Consumed 137ms CPU time, received 96B IP traffic, sent 272B IP traffic.
kube# [ 45.921477] systemd[1]: Starting DHCP Client...
kube# [ 45.925917] dhcpcd[6001]: main: control_open: Connection refused
kube# [ 45.926226] dhcpcd[6001]: main: control_open: Connection refused
kube# [ 45.929444] dhcpcd[6001]: dev: loaded udev
kube# [ 45.945976] dockerd[1172]: time="2020-01-27T01:32:31.889304138Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/57c16d20c8839610fa538c2b7fc147cbff0d886ab7e07fd9787a45833fbab011/shim.sock" debug=false pid=6043
kube# [ 45.949220] dockerd[1172]: time="2020-01-27T01:32:31.892623275Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ac69f67929b2c36b8a47033fa888c0b2d7e16157cffe511805fd86906aa5d515/shim.sock" debug=false pid=6044
kube# [ 45.952441] dockerd[1172]: time="2020-01-27T01:32:31.895983478Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d990a39e0239692af3fbc255c824a0622a87ba49f9736c315b0ef1422f4da6c9/shim.sock" debug=false pid=6062
kube# [ 45.973361] kubelet[2346]: W0127 01:32:31.916876 2346 pod_container_deletor.go:75] Container "465342a7fa95789348f63d8bcd3c571572897b8a63790816e8394984fcff1837" not found in pod's containers
kube# [ 46.035850] kubelet[2346]: W0127 01:32:31.979350 2346 pod_container_deletor.go:75] Container "63c30338555cb6b62b8aee3e61f842185ea8df4a63bc19e928940eeb544bd344" not found in pod's containers
kube# [ 46.047545] kubelet[2346]: W0127 01:32:31.991074 2346 pod_container_deletor.go:75] Container "c99c42872b1cbee7ec19986b02cefe5059ef25ec0b3ea3cec3be96fcf4d46584" not found in pod's containers
kube# [ 46.084744] systemd-udevd[5210]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
kube# [ 46.139804] cni0: port 11(veth70c2b7cd) entered blocking state
kube# [ 46.140852] cni0: port 11(veth70c2b7cd) entered disabled state
kube# [ 46.141984] device veth70c2b7cd entered promiscuous mode
kube# [ 46.142830] cni0: port 11(veth70c2b7cd) entered blocking state
kube# [ 46.143663] cni0: port 11(veth70c2b7cd) entered forwarding state
kube# [ 46.166054] cni0: port 12(veth454b8b69) entered blocking state
kube# [ 46.167365] cni0: port 12(veth454b8b69) entered disabled state
kube# [ 46.168875] device veth454b8b69 entered promiscuous mode
kube# [ 46.170596] cni0: port 12(veth454b8b69) entered blocking state
kube# [ 46.172123] cni0: port 12(veth454b8b69) entered forwarding state
kube# [ 46.119065] systemd-udevd[5269]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
kube# [ 46.192445] dhcpcd[6001]: DUID 00:01:00:01:25:c0:fa:07:52:54:00:12:34:56
kube# [ 46.192704] dhcpcd[6001]: eth0: IAID 00:12:34:56
kube# [ 46.193138] dhcpcd[6001]: docker0: waiting for carrier
kube# [ 46.193622] dhcpcd[6001]: cni0: IAID c7:cd:59:9a
kube# [ 46.194279] dhcpcd[6001]: veth19883ad1: IAID d6:92:ee:f6
kube# [ 46.194920] dhcpcd[6001]: vethe0b55034: IAID 50:05:61:25
kube# [ 46.195354] dhcpcd[6001]: vethc52503c5: IAID 7b:8e:16:6c
kube# [ 46.196499] dhcpcd[6001]: veth5f245672: IAID 7a:b0:94:4b
kube# [ 46.197126] dhcpcd[6001]: vethc70dcaf4: IAID ab:47:9a:a4
kube# [ 46.197655] dhcpcd[6001]: vethce1f55e2: IAID fc:a9:a4:79
kube# [ 46.198446] dhcpcd[6001]: veth3c341536: IAID be:7b:04:5d
kube# [ 46.198618] dhcpcd[6001]: vethe2d13415: IAID c6:d4:e5:61
kube# [ 46.200745] dhcpcd[6001]: veth80de4b3c: IAID d6:80:3c:f9
kube# [ 46.201309] dhcpcd[6001]: veth8e8e94d3: IAID 07:7f:a8:22
kube# [ 46.215930] dhcpcd[6001]: veth70c2b7cd: IAID bc:ce:83:eb
kube# [ 46.216139] dhcpcd[6001]: veth70c2b7cd: adding address fe80::6883:bcff:fece:83eb
kube# [ 46.216470] dhcpcd[6001]: vethce1f55e2: soliciting a DHCP lease
kube# [ 46.229934] dhcpcd[6001]: veth70c2b7cd: soliciting a DHCP lease
kube# [ 46.301830] dhcpcd[6001]: segfault at 100cc ip 000000000042982e sp 00007ffd7164d0d0 error 4 in dhcpcd[407000+32000]
kube# [ 46.303427] Code: 48 89 ee bf 14 00 00 00 e8 3f a6 00 00 f6 45 68 4c 75 04 83 4d 6c 40 41 8b 47 2c 85 c0 0f 84 09 ff ff ff 49 8b 87 c0 00 00 00 <f6> 80 cc 00 01 00 20 0f 84 f5 fe ff ff f6 43 38 04 0f 85 eb fe ff
kube# [ 46.265381] systemd[1]: Started Process Core Dump (PID 6382/UID 0).
kube# [ 46.275299] dockerd[1172]: time="2020-01-27T01:32:32.218848815Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3d69f4eac385fba6af7558d3dc6e00adce38c0a2537f620327cfe2a70ad468a9/shim.sock" debug=false pid=6384
kube# [ 46.277992] dockerd[1172]: time="2020-01-27T01:32:32.221543850Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9f01d023cf0398f6dcdc5c8159e4a7c417409eac15cba898a78169cdb688d800/shim.sock" debug=false pid=6393
kube: running command: kubectl get deployment -o go-template nginx --template={{.status.readyReplicas}} | grep 10
kube: exit status 1
(0.05 seconds)
kube# [ 46.602884] systemd[1]: dhcpcd.service: Control process exited, code=dumped, status=11/SEGV
kube# [ 46.603215] systemd[1]: dhcpcd.service: Failed with result 'core-dump'.
kube# [ 46.603637] systemd[1]: Failed to start DHCP Client.
kube# [ 46.612622] systemd-coredump[6383]: Cannot resolve systemd-coredump user. Proceeding to dump core as root: No such process
kube# [ 46.613007] systemd-coredump[6383]: Process 6001 (dhcpcd) of user 0 dumped core.
kube# [ 46.627594] systemd[1]: systemd-coredump@1-6382-0.service: Succeeded.
kube# [ 46.772028] systemd[1]: dhcpcd.service: Service RestartSec=100ms expired, scheduling restart.
kube# [ 46.772278] systemd[1]: dhcpcd.service: Scheduled restart job, restart counter is at 2.
kube# [ 46.772663] systemd[1]: Stopped DHCP Client.
kube# [ 46.774662] systemd[1]: Starting DHCP Client...
kube# [ 46.778244] dhcpcd[6509]: main: control_open: Connection refused
kube# [ 46.778543] dhcpcd[6509]: main: control_open: Connection refused
kube# [ 46.780982] dhcpcd[6509]: dev: loaded udev
kube# [ 46.993005] dhcpcd[6509]: DUID 00:01:00:01:25:c0:fa:07:52:54:00:12:34:56
kube# [ 46.993314] dhcpcd[6509]: eth0: IAID 00:12:34:56
kube# [ 46.993674] dhcpcd[6509]: docker0: waiting for carrier
kube# [ 46.994131] dhcpcd[6509]: cni0: IAID c7:cd:59:9a
kube# [ 46.994394] dhcpcd[6509]: veth19883ad1: IAID d6:92:ee:f6
kube# [ 46.994702] dhcpcd[6509]: vethe0b55034: IAID 50:05:61:25
kube# [ 46.995089] dhcpcd[6509]: vethc52503c5: IAID 7b:8e:16:6c
kube# [ 46.995381] dhcpcd[6509]: veth5f245672: IAID 7a:b0:94:4b
kube# [ 46.995683] dhcpcd[6509]: vethc70dcaf4: IAID ab:47:9a:a4
kube# [ 46.996040] dhcpcd[6509]: vethce1f55e2: IAID fc:a9:a4:79
kube# [ 46.996351] dhcpcd[6509]: veth3c341536: IAID be:7b:04:5d
kube# [ 46.996669] dhcpcd[6509]: vethe2d13415: IAID c6:d4:e5:61
kube# [ 46.997107] dhcpcd[6509]: veth80de4b3c: IAID d6:80:3c:f9
kube# [ 46.997386] dhcpcd[6509]: veth8e8e94d3: IAID 07:7f:a8:22
kube# [ 46.997685] dhcpcd[6509]: veth70c2b7cd: IAID bc:ce:83:eb
kube# [ 46.998043] dhcpcd[6509]: veth454b8b69: IAID ab:83:20:bf
kube# [ 47.173430] dhcpcd[6509]: vethe0b55034: soliciting a DHCP lease
kube# [ 47.183983] dhcpcd[6509]: veth3c341536: soliciting a DHCP lease
kube# [ 47.296284] dhcpcd[6509]: eth0: soliciting an IPv6 router
kube# [ 47.296590] dhcpcd[6509]: eth0: Router Advertisement from fe80::2
kube# [ 47.297483] dhcpcd[6509]: eth0: adding address fec0::5054:ff:fe12:3456/64
kube# [ 47.297897] dhcpcd[6509]: eth0: adding route to fec0::/64
kube# [ 47.298217] dhcpcd[6509]: eth0: adding default route via fe80::2
kube# [ 47.336029] dhcpcd[6509]: veth8e8e94d3: soliciting an IPv6 router
kube# [ 47.368235] dhcpcd[6509]: veth3c341536: soliciting an IPv6 router
kube# [ 47.397474] dhcpcd[6509]: veth19883ad1: soliciting an IPv6 router
kube# [ 47.402623] dhcpcd[6509]: veth8e8e94d3: soliciting a DHCP lease
kube: running command: kubectl get deployment -o go-template nginx --template={{.status.readyReplicas}} | grep 10
kube# [ 47.472264] dhcpcd[6509]: veth454b8b69: soliciting an IPv6 router
kube# [ 47.476404] dhcpcd[6509]: vethc70dcaf4: soliciting a DHCP lease
kube: exit status 1
(0.05 seconds)
kube# [ 47.541240] dhcpcd[6509]: vethe0b55034: soliciting an IPv6 router
kube# [ 47.545315] dhcpcd[6509]: vethce1f55e2: soliciting an IPv6 router
kube# [ 47.660754] dhcpcd[6509]: veth70c2b7cd: soliciting a DHCP lease
kube# [ 47.665878] dhcpcd[6509]: vethe2d13415: soliciting an IPv6 router
kube# [ 47.716108] dhcpcd[6509]: cni0: soliciting an IPv6 router
kube# [ 47.744376] dhcpcd[6509]: veth454b8b69: soliciting a DHCP lease
kube# [ 47.783338] dhcpcd[6509]: vethc52503c5: soliciting an IPv6 router
kube# [ 47.807315] dhcpcd[6509]: veth70c2b7cd: soliciting an IPv6 router
kube# [ 47.833576] dhcpcd[6509]: veth19883ad1: soliciting a DHCP lease
kube# [ 47.841984] dhcpcd[6509]: veth5f245672: soliciting an IPv6 router
kube# [ 47.858226] dhcpcd[6509]: vethc70dcaf4: soliciting an IPv6 router
kube# [ 47.881543] dhcpcd[6509]: cni0: soliciting a DHCP lease
kube# [ 47.894076] dhcpcd[6509]: veth80de4b3c: soliciting an IPv6 router
kube# [ 47.912378] dhcpcd[6509]: vethc52503c5: soliciting a DHCP lease
kube# [ 47.939100] dhcpcd[6509]: vethe2d13415: soliciting a DHCP lease
kube# [ 47.945970] dhcpcd[6509]: veth80de4b3c: soliciting a DHCP lease
kube# [ 47.955080] dhcpcd[6509]: veth5f245672: soliciting a DHCP lease
kube# [ 47.969119] dhcpcd[6509]: eth0: rebinding lease of 10.0.2.15
kube# [ 47.989919] dhcpcd[6509]: vethce1f55e2: soliciting a DHCP lease
kube# [ 48.001987] dhcpcd[6509]: eth0: truncated packet (17) from 10.1.0.12
kube# [ 48.002258] dhcpcd[6509]: eth0: truncated packet (17) from 10.1.0.12
kube# [ 48.002866] dhcpcd[6509]: eth0: leased 10.0.2.15 for 86400 seconds
kube# [ 48.003185] dhcpcd[6509]: eth0: adding route to 10.0.2.0/24
kube# [ 48.003500] dhcpcd[6509]: eth0: adding default route via 10.0.2.2
kube# [ 48.053136] dhcpcd[6509]: Failed to reload-or-try-restart ntpd.service: Unit ntpd.service not found.
kube# [ 48.053430] dhcpcd[6509]: Failed to reload-or-try-restart openntpd.service: Unit openntpd.service not found.
kube# [ 48.054788] dhcpcd[6509]: Failed to reload-or-try-restart chronyd.service: Unit chronyd.service not found.
kube# [ 48.057721] dhcpcd[6509]: forked to background, child pid 6600
kube# [ 48.064667] systemd[1]: Started DHCP Client.
kube: running command: kubectl get deployment -o go-template nginx --template={{.status.readyReplicas}} | grep 10
kube: exit status 1
(0.05 seconds)
kube: running command: kubectl get deployment -o go-template nginx --template={{.status.readyReplicas}} | grep 10
kube: exit status 1
(0.06 seconds)
kube: running command: kubectl get deployment -o go-template nginx --template={{.status.readyReplicas}} | grep 10
kube: exit status 1
(0.05 seconds)
kube: running command: kubectl get deployment -o go-template nginx --template={{.status.readyReplicas}} | grep 10
kube: exit status 1
(0.06 seconds)
kube: running command: kubectl get deployment -o go-template nginx --template={{.status.readyReplicas}} | grep 10
kube: exit status 0
(0.05 seconds)
(25.42 seconds)
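
The repeated "kubectl get deployment ... | grep 10" probes above are the test driver polling until the nginx deployment reports 10 ready replicas; each non-zero exit status keeps the wait loop going, and the condition is met 25.42 seconds in. A minimal sketch of the test-script step that would produce this output, assuming the Perl test driver and a node named kube (the actual testScript is not part of this log):

  # Hypothetical testScript fragment; waitUntilSucceeds retries the
  # shell command until it exits 0.
  testScript = ''
    $kube->waitUntilSucceeds(
      "kubectl get deployment -o go-template nginx " .
      "--template={{.status.readyReplicas}} | grep 10"
    );
  '';

Note that grep 10 is a loose match on the template output; it suffices here because readyReplicas is the only thing printed.
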
kube: waiting for success: /nix/store/a7i4hha4gh7fsbw7bfz51l2dkhgvb59a-curl-7.65.3-bin/bin/curl http://nginx.default.svc.cluster.local | grep -i hello
kube: running command: /nix/store/a7i4hha4gh7fsbw7bfz51l2dkhgvb59a-curl-7.65.3-bin/bin/curl http://nginx.default.svc.cluster.local | grep -i hello
kube# % Total % Received % Xferd Average Speed Time Time Time Current
kube# Dload Upload Total Spent Left Speed
100 52 100 52 0 0 10400 0 --:--:-- --:--:-- --:--:-- 10400
kube: exit status 0
(0.04 seconds)
(0.04 seconds)
(53.50 seconds)
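
With the replica count satisfied, the script's second assertion curls the deployment through its in-cluster DNS name (using curl from the Nix store path shown above) and greps the response body case-insensitively for "hello"; the curl progress table and exit status 0 show the service answering on the first attempt. Sketched in the same assumed form:

  # Hypothetical continuation of the same testScript.
  testScript = ''
    $kube->waitUntilSucceeds(
      "curl http://nginx.default.svc.cluster.local | grep -i hello"
    );
  '';
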
collecting coverage data
kube: running command: test -e /sys/kernel/debug/gcov
kube: exit status 1
(0.00 seconds)
(0.00 seconds)
syncing
kube: running command: sync
kube: exit status 0
(0.07 seconds)
(0.07 seconds)
test script finished in 53.58s
cleaning up
killing kube (pid 9)
(0.00 seconds)
vde_switch: EOF on stdin, cleaning up and exiting
vde_switch: Could not remove ctl dir '/build/vde1.ctl': Directory not empty
/nix/store/biyimfbv7qg6i1wgq0mngvfgf5k0m539-vm-test-run-nginx-deployment