@medyagh
Created July 1, 2020 17:34
Started by upstream project "Build_Cross" build number 12926
originally caused by:
Started by timer
Started by user Medya Ghazizadeh
Rebuilds build #10879
Running as SYSTEM
[EnvInject] - Loading node environment variables.
[EnvInject] - Preparing an environment for the build.
[EnvInject] - Keeping Jenkins system variables.
[EnvInject] - Keeping Jenkins build variables.
[EnvInject] - Evaluating the Groovy script content
[EnvInject] - Injecting contributions.
Building remotely on GCP - Debian Agent 1 (debian10) in workspace /home/jenkins/workspace/KVM_Linux_integration
[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[KVM_Linux_integration] $ /bin/bash -xe /tmp/jenkins10965592934645667523.sh
+ set -e
+ gsutil -m cp -r gs://minikube-builds/master/installers .
Copying gs://minikube-builds/master/installers/check_install_golang.sh...
/ [0/1 files][ 0.0 B/ 2.2 KiB] 0% Done
/ [1/1 files][ 2.2 KiB/ 2.2 KiB] 100% Done
Operation completed over 1 objects/2.2 KiB.
+ chmod +x ./installers/check_install_golang.sh
+ gsutil -m cp -r gs://minikube-builds/master/common.sh .
Copying gs://minikube-builds/master/common.sh...
/ [0/1 files][ 0.0 B/ 13.7 KiB] 0% Done
/ [1/1 files][ 13.7 KiB/ 13.7 KiB] 100% Done
Operation completed over 1 objects/13.7 KiB.
+ gsutil cp gs://minikube-builds/master/linux_integration_tests_kvm.sh .
Copying gs://minikube-builds/master/linux_integration_tests_kvm.sh...
/ [0 files][ 0.0 B/ 1.5 KiB]
/ [1 files][ 1.5 KiB/ 1.5 KiB]
Operation completed over 1 objects/1.5 KiB.
+ sudo gsutil cp gs://minikube-builds/master/docker-machine-driver-kvm2 /usr/local/bin/docker-machine-driver-kvm2
Copying gs://minikube-builds/master/docker-machine-driver-kvm2...
/ [0 files][ 0.0 B/ 13.9 MiB]
/ [1 files][ 13.9 MiB/ 13.9 MiB]
Operation completed over 1 objects/13.9 MiB.
+ sudo chmod +x /usr/local/bin/docker-machine-driver-kvm2
+ bash linux_integration_tests_kvm.sh
+ (( 2 < 2 ))
+ VERSION_TO_INSTALL=1.13.9
+ INSTALL_PATH=/usr/local
+ check_and_install_golang
+ go version
+ echo 'WARNING: No golang installation found in your environment.'
WARNING: No golang installation found in your environment.
+ install_golang 1.13.9 /usr/local
+ echo 'Installing golang version: 1.13.9 on /usr/local'
Installing golang version: 1.13.9 on /usr/local
+ pushd /tmp
+ sudo curl -qL -O https://storage.googleapis.com/golang/go1.13.9.linux-amd64.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  3  114M    3 4096k    0     0  19.8M      0  0:00:05 --:--:--  0:00:05 19.7M
100  114M  100  114M    0     0   184M      0 --:--:-- --:--:-- --:--:--  183M
+ sudo tar xfa go1.13.9.linux-amd64.tar.gz
+ sudo rm -rf /usr/local/go
+ sudo mv go /usr/local/
++ whoami
+ sudo chown -R root: /usr/local/go
+ popd
+ return
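The xtrace above implies an install routine in installers/check_install_golang.sh along these lines. This is a minimal sketch reconstructed from the trace; the function and variable names follow the trace, but the real script may differ.
# sketch of the install flow shown by the xtrace above
VERSION_TO_INSTALL=1.13.9
INSTALL_PATH=/usr/local

install_golang() {
  local version=$1 path=$2
  echo "Installing golang version: ${version} on ${path}"
  pushd /tmp >/dev/null
  sudo curl -qL -O "https://storage.googleapis.com/golang/go${version}.linux-amd64.tar.gz"
  sudo tar xfa "go${version}.linux-amd64.tar.gz"   # extract into /tmp/go
  sudo rm -rf "${path}/go"                         # drop any previous toolchain
  sudo mv go "${path}/"
  sudo chown -R "$(whoami):" "${path}/go"          # resolved to root: in the trace
  popd >/dev/null
}

check_and_install_golang() {
  go version >/dev/null 2>&1 && return
  echo 'WARNING: No golang installation found in your environment.'
  install_golang "$VERSION_TO_INSTALL" "$INSTALL_PATH"
}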
Total reclaimed space: 0B
TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          1         0         973.5MB   973.5MB (100%)
Containers      0         0         0B        0B
Local Volumes   0         0         0B        0B
Build Cache     0         0         0B        0B
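The two blocks above read like a Docker cleanup step run before the tests; an equivalent sketch (assumed, the actual commands live in the harness scripts) is:
docker system prune -f   # prints "Total reclaimed space: ..."
docker system df         # prints the TYPE/TOTAL/ACTIVE/SIZE/RECLAIMABLE table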
>> Starting at Wed Jul 1 02:56:06 UTC 2020
arch: linux-amd64
build: master
driver: kvm2
job: KVM_Linux
test home: /home/jenkins/minikube-integration/linux-amd64-kvm2-master-2503-8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f
sudo:
kernel: #1 SMP Debian 4.19.118-2+deb10u1 (2020-06-07)
uptime: 02:56:06 up 10 min, 0 users, load average: 0.11, 0.04, 0.01
env: ‘kubectl’: No such file or directory
kubectl:
docker: 19.03.12
sudo: podman: command not found
podman:
go: go version go1.13.9 linux/amd64
virsh: 5.0.0
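The environment summary above is presumably collected by simple version probes; a sketch under that assumption (the real probes live in common.sh and may differ) is:
# assumed probes behind the summary above; note kubectl is absent on this agent
echo "kernel: $(uname -v)"
echo "uptime: $(uptime)"
echo "kubectl: $(kubectl version --client --short 2>&1 || true)"
echo "docker: $(docker version --format '{{.Server.Version}}' 2>/dev/null)"
echo "podman: $(sudo podman --version 2>&1 || true)"
echo "go: $(go version)"
echo "virsh: $(virsh --version)"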
>> Downloading test inputs from master ...
minikube version: v1.12.0-beta.0
commit: 8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f
>> Cleaning up after previous test runs ...
/usr/bin/virsh
>> virsh VM list after clean up (should be empty):
 Id   Name   State
--------------------
sudo: lsof: command not found
Sending build context to Docker daemon 199.3MB
Step 1/4 : FROM ubuntu:18.04
18.04: Pulling from library/ubuntu
d7c3167c320d: Pulling fs layer
131f805ec7fd: Pulling fs layer
322ed380e680: Pulling fs layer
6ac240b13098: Pulling fs layer
6ac240b13098: Waiting
322ed380e680: Verifying Checksum
322ed380e680: Download complete
131f805ec7fd: Verifying Checksum
131f805ec7fd: Download complete
d7c3167c320d: Download complete
6ac240b13098: Download complete
d7c3167c320d: Pull complete
131f805ec7fd: Pull complete
322ed380e680: Pull complete
6ac240b13098: Pull complete
Digest: sha256:86510528ab9cd7b64209cbbe6946e094a6d10c6db21def64a93ebdd20011de1d
Status: Downloaded newer image for ubuntu:18.04
---> 8e4ce0a6ce69
Step 2/4 : RUN apt-get update && apt-get install -y kmod gcc wget xz-utils libc6-dev bc libelf-dev bison flex openssl libssl-dev libidn2-0 sudo libcap2 && rm -rf /var/lib/apt/lists/*
---> Running in e82c2391cc0e
Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic InRelease [242 kB]
Get:3 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [977 kB]
Get:4 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 Packages [9012 B]
Get:5 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [863 kB]
Get:6 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [82.2 kB]
Get:7 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:8 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Get:9 http://archive.ubuntu.com/ubuntu bionic/universe amd64 Packages [11.3 MB]
Get:10 http://archive.ubuntu.com/ubuntu bionic/multiverse amd64 Packages [186 kB]
Get:11 http://archive.ubuntu.com/ubuntu bionic/restricted amd64 Packages [13.5 kB]
Get:12 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages [1344 kB]
Get:13 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 Packages [13.4 kB]
Get:14 http://archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [93.8 kB]
Get:15 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [1399 kB]
Get:16 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [1271 kB]
Get:17 http://archive.ubuntu.com/ubuntu bionic-backports/universe amd64 Packages [8158 B]
Get:18 http://archive.ubuntu.com/ubuntu bionic-backports/main amd64 Packages [8286 B]
Fetched 18.1 MB in 3s (5617 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
libidn2-0 is already the newest version (2.0.4-1.1ubuntu0.2).
The following additional packages will be installed:
binutils binutils-common binutils-x86-64-linux-gnu ca-certificates cpp cpp-7
gcc-7 gcc-7-base libasan4 libatomic1 libbinutils libbison-dev libc-dev-bin
libcc1-0 libcilkrts5 libelf1 libfl-dev libfl2 libgcc-7-dev libgomp1 libisl19
libitm1 libkmod2 liblsan0 libmpc3 libmpfr6 libmpx2 libpsl5 libquadmath0
libreadline7 libsigsegv2 libssl1.1 libtsan0 libubsan0 linux-libc-dev m4
manpages manpages-dev publicsuffix readline-common zlib1g-dev
Suggested packages:
binutils-doc bison-doc cpp-doc gcc-7-locales build-essential flex-doc
gcc-multilib make autoconf automake libtool gdb gcc-doc gcc-7-multilib
gcc-7-doc libgcc1-dbg libgomp1-dbg libitm1-dbg libatomic1-dbg libasan4-dbg
liblsan0-dbg libtsan0-dbg libubsan0-dbg libcilkrts5-dbg libmpx2-dbg
libquadmath0-dbg glibc-doc libssl-doc m4-doc man-browser readline-doc
The following NEW packages will be installed:
bc binutils binutils-common binutils-x86-64-linux-gnu bison ca-certificates
cpp cpp-7 flex gcc gcc-7 gcc-7-base kmod libasan4 libatomic1 libbinutils
libbison-dev libc-dev-bin libc6-dev libcap2 libcc1-0 libcilkrts5 libelf-dev
libelf1 libfl-dev libfl2 libgcc-7-dev libgomp1 libisl19 libitm1 libkmod2
liblsan0 libmpc3 libmpfr6 libmpx2 libpsl5 libquadmath0 libreadline7
libsigsegv2 libssl-dev libssl1.1 libtsan0 libubsan0 linux-libc-dev m4
manpages manpages-dev openssl publicsuffix readline-common sudo wget
xz-utils zlib1g-dev
0 upgraded, 54 newly installed, 0 to remove and 1 not upgraded.
Need to get 38.5 MB of archives.
After this operation, 143 MB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic/main amd64 libsigsegv2 amd64 2.12-1 [14.7 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic/main amd64 m4 amd64 1.4.18-1 [197 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic/main amd64 flex amd64 2.6.4-6 [316 kB]
Get:4 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libssl1.1 amd64 1.1.1-1ubuntu2.1~18.04.6 [1301 kB]
Get:5 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 openssl amd64 1.1.1-1ubuntu2.1~18.04.6 [614 kB]
Get:6 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 ca-certificates all 20190110~18.04.1 [146 kB]
Get:7 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libkmod2 amd64 24-1ubuntu3.4 [40.1 kB]
Get:8 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 kmod amd64 24-1ubuntu3.4 [88.7 kB]
Get:9 http://archive.ubuntu.com/ubuntu bionic/main amd64 libcap2 amd64 1:2.25-1.2 [13.0 kB]
Get:10 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libelf1 amd64 0.170-0.4ubuntu0.1 [44.8 kB]
Get:11 http://archive.ubuntu.com/ubuntu bionic/main amd64 readline-common all 7.0-3 [52.9 kB]
Get:12 http://archive.ubuntu.com/ubuntu bionic/main amd64 libreadline7 amd64 7.0-3 [124 kB]
Get:13 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 sudo amd64 1.8.21p2-3ubuntu1.2 [427 kB]
Get:14 http://archive.ubuntu.com/ubuntu bionic/main amd64 xz-utils amd64 5.2.2-1.3 [83.8 kB]
Get:15 http://archive.ubuntu.com/ubuntu bionic/main amd64 libpsl5 amd64 0.19.1-5build1 [41.8 kB]
Get:16 http://archive.ubuntu.com/ubuntu bionic/main amd64 manpages all 4.15-1 [1234 kB]
Get:17 http://archive.ubuntu.com/ubuntu bionic/main amd64 publicsuffix all 20180223.1310-1 [97.6 kB]
Get:18 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 wget amd64 1.19.4-1ubuntu2.2 [316 kB]
Get:19 http://archive.ubuntu.com/ubuntu bionic/main amd64 bc amd64 1.07.1-2 [86.2 kB]
Get:20 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 binutils-common amd64 2.30-21ubuntu1~18.04.3 [196 kB]
Get:21 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libbinutils amd64 2.30-21ubuntu1~18.04.3 [488 kB]
Get:22 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 binutils-x86-64-linux-gnu amd64 2.30-21ubuntu1~18.04.3 [1839 kB]
Get:23 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 binutils amd64 2.30-21ubuntu1~18.04.3 [3388 B]
Get:24 http://archive.ubuntu.com/ubuntu bionic/main amd64 libbison-dev amd64 2:3.0.4.dfsg-1build1 [339 kB]
Get:25 http://archive.ubuntu.com/ubuntu bionic/main amd64 bison amd64 2:3.0.4.dfsg-1build1 [266 kB]
Get:26 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 gcc-7-base amd64 7.5.0-3ubuntu1~18.04 [18.3 kB]
Get:27 http://archive.ubuntu.com/ubuntu bionic/main amd64 libisl19 amd64 0.19-1 [551 kB]
Get:28 http://archive.ubuntu.com/ubuntu bionic/main amd64 libmpfr6 amd64 4.0.1-1 [243 kB]
Get:29 http://archive.ubuntu.com/ubuntu bionic/main amd64 libmpc3 amd64 1.1.0-1 [40.8 kB]
Get:30 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 cpp-7 amd64 7.5.0-3ubuntu1~18.04 [8591 kB]
Get:31 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 cpp amd64 4:7.4.0-1ubuntu2.3 [27.7 kB]
Get:32 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libcc1-0 amd64 8.4.0-1ubuntu1~18.04 [39.4 kB]
Get:33 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libgomp1 amd64 8.4.0-1ubuntu1~18.04 [76.5 kB]
Get:34 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libitm1 amd64 8.4.0-1ubuntu1~18.04 [27.9 kB]
Get:35 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libatomic1 amd64 8.4.0-1ubuntu1~18.04 [9192 B]
Get:36 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libasan4 amd64 7.5.0-3ubuntu1~18.04 [358 kB]
Get:37 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 liblsan0 amd64 8.4.0-1ubuntu1~18.04 [133 kB]
Get:38 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libtsan0 amd64 8.4.0-1ubuntu1~18.04 [288 kB]
Get:39 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libubsan0 amd64 7.5.0-3ubuntu1~18.04 [126 kB]
Get:40 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libcilkrts5 amd64 7.5.0-3ubuntu1~18.04 [42.5 kB]
Get:41 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libmpx2 amd64 8.4.0-1ubuntu1~18.04 [11.6 kB]
Get:42 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libquadmath0 amd64 8.4.0-1ubuntu1~18.04 [134 kB]
Get:43 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libgcc-7-dev amd64 7.5.0-3ubuntu1~18.04 [2378 kB]
Get:44 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 gcc-7 amd64 7.5.0-3ubuntu1~18.04 [9381 kB]
Get:45 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 gcc amd64 4:7.4.0-1ubuntu2.3 [5184 B]
Get:46 http://archive.ubuntu.com/ubuntu bionic/main amd64 libc-dev-bin amd64 2.27-3ubuntu1 [71.8 kB]
Get:47 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 linux-libc-dev amd64 4.15.0-108.109 [991 kB]
Get:48 http://archive.ubuntu.com/ubuntu bionic/main amd64 libc6-dev amd64 2.27-3ubuntu1 [2587 kB]
Get:49 http://archive.ubuntu.com/ubuntu bionic/main amd64 zlib1g-dev amd64 1:1.2.11.dfsg-0ubuntu2 [176 kB]
Get:50 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libelf-dev amd64 0.170-0.4ubuntu0.1 [57.3 kB]
Get:51 http://archive.ubuntu.com/ubuntu bionic/main amd64 libfl2 amd64 2.6.4-6 [11.4 kB]
Get:52 http://archive.ubuntu.com/ubuntu bionic/main amd64 libfl-dev amd64 2.6.4-6 [6320 B]
Get:53 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libssl-dev amd64 1.1.1-1ubuntu2.1~18.04.6 [1566 kB]
Get:54 http://archive.ubuntu.com/ubuntu bionic/main amd64 manpages-dev all 4.15-1 [2217 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 38.5 MB in 3s (11.5 MB/s)
Selecting previously unselected package libsigsegv2:amd64.
(Reading database ... 4046 files and directories currently installed.)
Preparing to unpack .../00-libsigsegv2_2.12-1_amd64.deb ...
Unpacking libsigsegv2:amd64 (2.12-1) ...
Selecting previously unselected package m4.
Preparing to unpack .../01-m4_1.4.18-1_amd64.deb ...
Unpacking m4 (1.4.18-1) ...
Selecting previously unselected package flex.
Preparing to unpack .../02-flex_2.6.4-6_amd64.deb ...
Unpacking flex (2.6.4-6) ...
Selecting previously unselected package libssl1.1:amd64.
Preparing to unpack .../03-libssl1.1_1.1.1-1ubuntu2.1~18.04.6_amd64.deb ...
Unpacking libssl1.1:amd64 (1.1.1-1ubuntu2.1~18.04.6) ...
Selecting previously unselected package openssl.
Preparing to unpack .../04-openssl_1.1.1-1ubuntu2.1~18.04.6_amd64.deb ...
Unpacking openssl (1.1.1-1ubuntu2.1~18.04.6) ...
Selecting previously unselected package ca-certificates.
Preparing to unpack .../05-ca-certificates_20190110~18.04.1_all.deb ...
Unpacking ca-certificates (20190110~18.04.1) ...
Selecting previously unselected package libkmod2:amd64.
Preparing to unpack .../06-libkmod2_24-1ubuntu3.4_amd64.deb ...
Unpacking libkmod2:amd64 (24-1ubuntu3.4) ...
Selecting previously unselected package kmod.
Preparing to unpack .../07-kmod_24-1ubuntu3.4_amd64.deb ...
Unpacking kmod (24-1ubuntu3.4) ...
Selecting previously unselected package libcap2:amd64.
Preparing to unpack .../08-libcap2_1%3a2.25-1.2_amd64.deb ...
Unpacking libcap2:amd64 (1:2.25-1.2) ...
Selecting previously unselected package libelf1:amd64.
Preparing to unpack .../09-libelf1_0.170-0.4ubuntu0.1_amd64.deb ...
Unpacking libelf1:amd64 (0.170-0.4ubuntu0.1) ...
Selecting previously unselected package readline-common.
Preparing to unpack .../10-readline-common_7.0-3_all.deb ...
Unpacking readline-common (7.0-3) ...
Selecting previously unselected package libreadline7:amd64.
Preparing to unpack .../11-libreadline7_7.0-3_amd64.deb ...
Unpacking libreadline7:amd64 (7.0-3) ...
Selecting previously unselected package sudo.
Preparing to unpack .../12-sudo_1.8.21p2-3ubuntu1.2_amd64.deb ...
Unpacking sudo (1.8.21p2-3ubuntu1.2) ...
Selecting previously unselected package xz-utils.
Preparing to unpack .../13-xz-utils_5.2.2-1.3_amd64.deb ...
Unpacking xz-utils (5.2.2-1.3) ...
Selecting previously unselected package libpsl5:amd64.
Preparing to unpack .../14-libpsl5_0.19.1-5build1_amd64.deb ...
Unpacking libpsl5:amd64 (0.19.1-5build1) ...
Selecting previously unselected package manpages.
Preparing to unpack .../15-manpages_4.15-1_all.deb ...
Unpacking manpages (4.15-1) ...
Selecting previously unselected package publicsuffix.
Preparing to unpack .../16-publicsuffix_20180223.1310-1_all.deb ...
Unpacking publicsuffix (20180223.1310-1) ...
Selecting previously unselected package wget.
Preparing to unpack .../17-wget_1.19.4-1ubuntu2.2_amd64.deb ...
Unpacking wget (1.19.4-1ubuntu2.2) ...
Selecting previously unselected package bc.
Preparing to unpack .../18-bc_1.07.1-2_amd64.deb ...
Unpacking bc (1.07.1-2) ...
Selecting previously unselected package binutils-common:amd64.
Preparing to unpack .../19-binutils-common_2.30-21ubuntu1~18.04.3_amd64.deb ...
Unpacking binutils-common:amd64 (2.30-21ubuntu1~18.04.3) ...
Selecting previously unselected package libbinutils:amd64.
Preparing to unpack .../20-libbinutils_2.30-21ubuntu1~18.04.3_amd64.deb ...
Unpacking libbinutils:amd64 (2.30-21ubuntu1~18.04.3) ...
Selecting previously unselected package binutils-x86-64-linux-gnu.
Preparing to unpack .../21-binutils-x86-64-linux-gnu_2.30-21ubuntu1~18.04.3_amd64.deb ...
Unpacking binutils-x86-64-linux-gnu (2.30-21ubuntu1~18.04.3) ...
Selecting previously unselected package binutils.
Preparing to unpack .../22-binutils_2.30-21ubuntu1~18.04.3_amd64.deb ...
Unpacking binutils (2.30-21ubuntu1~18.04.3) ...
Selecting previously unselected package libbison-dev:amd64.
Preparing to unpack .../23-libbison-dev_2%3a3.0.4.dfsg-1build1_amd64.deb ...
Unpacking libbison-dev:amd64 (2:3.0.4.dfsg-1build1) ...
Selecting previously unselected package bison.
Preparing to unpack .../24-bison_2%3a3.0.4.dfsg-1build1_amd64.deb ...
Unpacking bison (2:3.0.4.dfsg-1build1) ...
Selecting previously unselected package gcc-7-base:amd64.
Preparing to unpack .../25-gcc-7-base_7.5.0-3ubuntu1~18.04_amd64.deb ...
Unpacking gcc-7-base:amd64 (7.5.0-3ubuntu1~18.04) ...
Selecting previously unselected package libisl19:amd64.
Preparing to unpack .../26-libisl19_0.19-1_amd64.deb ...
Unpacking libisl19:amd64 (0.19-1) ...
Selecting previously unselected package libmpfr6:amd64.
Preparing to unpack .../27-libmpfr6_4.0.1-1_amd64.deb ...
Unpacking libmpfr6:amd64 (4.0.1-1) ...
Selecting previously unselected package libmpc3:amd64.
Preparing to unpack .../28-libmpc3_1.1.0-1_amd64.deb ...
Unpacking libmpc3:amd64 (1.1.0-1) ...
Selecting previously unselected package cpp-7.
Preparing to unpack .../29-cpp-7_7.5.0-3ubuntu1~18.04_amd64.deb ...
Unpacking cpp-7 (7.5.0-3ubuntu1~18.04) ...
Selecting previously unselected package cpp.
Preparing to unpack .../30-cpp_4%3a7.4.0-1ubuntu2.3_amd64.deb ...
Unpacking cpp (4:7.4.0-1ubuntu2.3) ...
Selecting previously unselected package libcc1-0:amd64.
Preparing to unpack .../31-libcc1-0_8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking libcc1-0:amd64 (8.4.0-1ubuntu1~18.04) ...
Selecting previously unselected package libgomp1:amd64.
Preparing to unpack .../32-libgomp1_8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking libgomp1:amd64 (8.4.0-1ubuntu1~18.04) ...
Selecting previously unselected package libitm1:amd64.
Preparing to unpack .../33-libitm1_8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking libitm1:amd64 (8.4.0-1ubuntu1~18.04) ...
Selecting previously unselected package libatomic1:amd64.
Preparing to unpack .../34-libatomic1_8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking libatomic1:amd64 (8.4.0-1ubuntu1~18.04) ...
Selecting previously unselected package libasan4:amd64.
Preparing to unpack .../35-libasan4_7.5.0-3ubuntu1~18.04_amd64.deb ...
Unpacking libasan4:amd64 (7.5.0-3ubuntu1~18.04) ...
Selecting previously unselected package liblsan0:amd64.
Preparing to unpack .../36-liblsan0_8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking liblsan0:amd64 (8.4.0-1ubuntu1~18.04) ...
Selecting previously unselected package libtsan0:amd64.
Preparing to unpack .../37-libtsan0_8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking libtsan0:amd64 (8.4.0-1ubuntu1~18.04) ...
Selecting previously unselected package libubsan0:amd64.
Preparing to unpack .../38-libubsan0_7.5.0-3ubuntu1~18.04_amd64.deb ...
Unpacking libubsan0:amd64 (7.5.0-3ubuntu1~18.04) ...
Selecting previously unselected package libcilkrts5:amd64.
Preparing to unpack .../39-libcilkrts5_7.5.0-3ubuntu1~18.04_amd64.deb ...
Unpacking libcilkrts5:amd64 (7.5.0-3ubuntu1~18.04) ...
Selecting previously unselected package libmpx2:amd64.
Preparing to unpack .../40-libmpx2_8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking libmpx2:amd64 (8.4.0-1ubuntu1~18.04) ...
Selecting previously unselected package libquadmath0:amd64.
Preparing to unpack .../41-libquadmath0_8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking libquadmath0:amd64 (8.4.0-1ubuntu1~18.04) ...
Selecting previously unselected package libgcc-7-dev:amd64.
Preparing to unpack .../42-libgcc-7-dev_7.5.0-3ubuntu1~18.04_amd64.deb ...
Unpacking libgcc-7-dev:amd64 (7.5.0-3ubuntu1~18.04) ...
Selecting previously unselected package gcc-7.
Preparing to unpack .../43-gcc-7_7.5.0-3ubuntu1~18.04_amd64.deb ...
Unpacking gcc-7 (7.5.0-3ubuntu1~18.04) ...
Selecting previously unselected package gcc.
Preparing to unpack .../44-gcc_4%3a7.4.0-1ubuntu2.3_amd64.deb ...
Unpacking gcc (4:7.4.0-1ubuntu2.3) ...
Selecting previously unselected package libc-dev-bin.
Preparing to unpack .../45-libc-dev-bin_2.27-3ubuntu1_amd64.deb ...
Unpacking libc-dev-bin (2.27-3ubuntu1) ...
Selecting previously unselected package linux-libc-dev:amd64.
Preparing to unpack .../46-linux-libc-dev_4.15.0-108.109_amd64.deb ...
Unpacking linux-libc-dev:amd64 (4.15.0-108.109) ...
Selecting previously unselected package libc6-dev:amd64.
Preparing to unpack .../47-libc6-dev_2.27-3ubuntu1_amd64.deb ...
Unpacking libc6-dev:amd64 (2.27-3ubuntu1) ...
Selecting previously unselected package zlib1g-dev:amd64.
Preparing to unpack .../48-zlib1g-dev_1%3a1.2.11.dfsg-0ubuntu2_amd64.deb ...
Unpacking zlib1g-dev:amd64 (1:1.2.11.dfsg-0ubuntu2) ...
Selecting previously unselected package libelf-dev:amd64.
Preparing to unpack .../49-libelf-dev_0.170-0.4ubuntu0.1_amd64.deb ...
Unpacking libelf-dev:amd64 (0.170-0.4ubuntu0.1) ...
Selecting previously unselected package libfl2:amd64.
Preparing to unpack .../50-libfl2_2.6.4-6_amd64.deb ...
Unpacking libfl2:amd64 (2.6.4-6) ...
Selecting previously unselected package libfl-dev:amd64.
Preparing to unpack .../51-libfl-dev_2.6.4-6_amd64.deb ...
Unpacking libfl-dev:amd64 (2.6.4-6) ...
Selecting previously unselected package libssl-dev:amd64.
Preparing to unpack .../52-libssl-dev_1.1.1-1ubuntu2.1~18.04.6_amd64.deb ...
Unpacking libssl-dev:amd64 (1.1.1-1ubuntu2.1~18.04.6) ...
Selecting previously unselected package manpages-dev.
Preparing to unpack .../53-manpages-dev_4.15-1_all.deb ...
Unpacking manpages-dev (4.15-1) ...
Setting up libquadmath0:amd64 (8.4.0-1ubuntu1~18.04) ...
Setting up libgomp1:amd64 (8.4.0-1ubuntu1~18.04) ...
Setting up libatomic1:amd64 (8.4.0-1ubuntu1~18.04) ...
Setting up readline-common (7.0-3) ...
Setting up manpages (4.15-1) ...
Setting up libcc1-0:amd64 (8.4.0-1ubuntu1~18.04) ...
Setting up sudo (1.8.21p2-3ubuntu1.2) ...
Setting up libsigsegv2:amd64 (2.12-1) ...
Setting up libreadline7:amd64 (7.0-3) ...
Setting up libpsl5:amd64 (0.19.1-5build1) ...
Setting up libelf1:amd64 (0.170-0.4ubuntu0.1) ...
Setting up libtsan0:amd64 (8.4.0-1ubuntu1~18.04) ...
Setting up libcap2:amd64 (1:2.25-1.2) ...
Setting up linux-libc-dev:amd64 (4.15.0-108.109) ...
Setting up libmpfr6:amd64 (4.0.1-1) ...
Setting up m4 (1.4.18-1) ...
Setting up libkmod2:amd64 (24-1ubuntu3.4) ...
Setting up liblsan0:amd64 (8.4.0-1ubuntu1~18.04) ...
Setting up gcc-7-base:amd64 (7.5.0-3ubuntu1~18.04) ...
Setting up binutils-common:amd64 (2.30-21ubuntu1~18.04.3) ...
Setting up libmpx2:amd64 (8.4.0-1ubuntu1~18.04) ...
Setting up publicsuffix (20180223.1310-1) ...
Setting up libssl1.1:amd64 (1.1.1-1ubuntu2.1~18.04.6) ...
debconf: unable to initialize frontend: Dialog
debconf: (TERM is not set, so the dialog frontend is not usable.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.26.1 /usr/local/share/perl/5.26.1 /usr/lib/x86_64-linux-gnu/perl5/5.26 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl/5.26 /usr/share/perl/5.26 /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
debconf: falling back to frontend: Teletype
Setting up xz-utils (5.2.2-1.3) ...
update-alternatives: using /usr/bin/xz to provide /usr/bin/lzma (lzma) in auto mode
update-alternatives: warning: skip creation of /usr/share/man/man1/lzma.1.gz because associated file /usr/share/man/man1/xz.1.gz (of link group lzma) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/man1/unlzma.1.gz because associated file /usr/share/man/man1/unxz.1.gz (of link group lzma) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/man1/lzcat.1.gz because associated file /usr/share/man/man1/xzcat.1.gz (of link group lzma) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/man1/lzmore.1.gz because associated file /usr/share/man/man1/xzmore.1.gz (of link group lzma) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/man1/lzless.1.gz because associated file /usr/share/man/man1/xzless.1.gz (of link group lzma) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/man1/lzdiff.1.gz because associated file /usr/share/man/man1/xzdiff.1.gz (of link group lzma) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/man1/lzcmp.1.gz because associated file /usr/share/man/man1/xzcmp.1.gz (of link group lzma) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/man1/lzgrep.1.gz because associated file /usr/share/man/man1/xzgrep.1.gz (of link group lzma) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/man1/lzegrep.1.gz because associated file /usr/share/man/man1/xzegrep.1.gz (of link group lzma) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/man1/lzfgrep.1.gz because associated file /usr/share/man/man1/xzfgrep.1.gz (of link group lzma) doesn't exist
Setting up openssl (1.1.1-1ubuntu2.1~18.04.6) ...
Setting up wget (1.19.4-1ubuntu2.2) ...
Setting up libbison-dev:amd64 (2:3.0.4.dfsg-1build1) ...
Setting up libfl2:amd64 (2.6.4-6) ...
Setting up libmpc3:amd64 (1.1.0-1) ...
Setting up libc-dev-bin (2.27-3ubuntu1) ...
Setting up bison (2:3.0.4.dfsg-1build1) ...
update-alternatives: using /usr/bin/bison.yacc to provide /usr/bin/yacc (yacc) in auto mode
update-alternatives: warning: skip creation of /usr/share/man/man1/yacc.1.gz because associated file /usr/share/man/man1/bison.yacc.1.gz (of link group yacc) doesn't exist
Setting up bc (1.07.1-2) ...
Setting up ca-certificates (20190110~18.04.1) ...
debconf: unable to initialize frontend: Dialog
debconf: (TERM is not set, so the dialog frontend is not usable.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.26.1 /usr/local/share/perl/5.26.1 /usr/lib/x86_64-linux-gnu/perl5/5.26 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl/5.26 /usr/share/perl/5.26 /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
debconf: falling back to frontend: Teletype
Updating certificates in /etc/ssl/certs...
127 added, 0 removed; done.
Setting up manpages-dev (4.15-1) ...
Setting up libc6-dev:amd64 (2.27-3ubuntu1) ...
Setting up libitm1:amd64 (8.4.0-1ubuntu1~18.04) ...
Setting up zlib1g-dev:amd64 (1:1.2.11.dfsg-0ubuntu2) ...
Setting up libisl19:amd64 (0.19-1) ...
Setting up kmod (24-1ubuntu3.4) ...
Setting up libasan4:amd64 (7.5.0-3ubuntu1~18.04) ...
Setting up libbinutils:amd64 (2.30-21ubuntu1~18.04.3) ...
Setting up flex (2.6.4-6) ...
Setting up libcilkrts5:amd64 (7.5.0-3ubuntu1~18.04) ...
Setting up libubsan0:amd64 (7.5.0-3ubuntu1~18.04) ...
Setting up libssl-dev:amd64 (1.1.1-1ubuntu2.1~18.04.6) ...
Setting up libelf-dev:amd64 (0.170-0.4ubuntu0.1) ...
Setting up libgcc-7-dev:amd64 (7.5.0-3ubuntu1~18.04) ...
Setting up cpp-7 (7.5.0-3ubuntu1~18.04) ...
Setting up libfl-dev:amd64 (2.6.4-6) ...
Setting up binutils-x86-64-linux-gnu (2.30-21ubuntu1~18.04.3) ...
Setting up cpp (4:7.4.0-1ubuntu2.3) ...
Setting up binutils (2.30-21ubuntu1~18.04.3) ...
Setting up gcc-7 (7.5.0-3ubuntu1~18.04) ...
Setting up gcc (4:7.4.0-1ubuntu2.3) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for ca-certificates (20190110~18.04.1) ...
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
Removing intermediate container e82c2391cc0e
---> 35aaaa54cb39
Step 3/4 : COPY gvisor-addon /gvisor-addon
---> 4f2d248c8802
Step 4/4 : CMD ["/gvisor-addon"]
---> Running in af3ad56e1015
Removing intermediate container af3ad56e1015
---> 93a8c861c09e
Successfully built 93a8c861c09e
Successfully tagged gcr.io/k8s-minikube/gvisor-addon:2
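The four steps above (FROM ubuntu:18.04, the apt-get RUN layer, COPY gvisor-addon, CMD) are the gvisor-addon image build. Replaying it by hand would look roughly like the sketch below; only the tag comes from the log, and the Dockerfile path is an assumption.
# assumed rebuild of the image tagged above; the Dockerfile location is a guess
docker build -t gcr.io/k8s-minikube/gvisor-addon:2 -f deploy/gvisor/Dockerfile .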
>> Starting out/e2e-linux-amd64 at Wed Jul 1 02:56:34 UTC 2020
++ test -f /home/jenkins/minikube-integration/linux-amd64-kvm2-master-2503-8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f/testout.txt
++ touch /home/jenkins/minikube-integration/linux-amd64-kvm2-master-2503-8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f/testout.txt
++ out/e2e-linux-amd64 '-minikube-start-args=--driver=kvm2 ' -test.timeout=70m -test.v -gvisor -binary=out/minikube-linux-amd64
++ tee /home/jenkins/minikube-integration/linux-amd64-kvm2-master-2503-8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f/testout.txt
Found 8 cores, limiting parallelism with --test.parallel=4
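out/e2e-linux-amd64 is a compiled Go test binary, so a single suite from the run below can be replayed with the standard -test.run filter; a sketch mirroring the invocation above:
# replay one suite locally (sketch; flags copied from the invocation above)
out/e2e-linux-amd64 \
  '-minikube-start-args=--driver=kvm2 ' \
  -binary=out/minikube-linux-amd64 \
  -test.timeout=70m -test.v \
  -test.run 'TestAddons'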
=== RUN TestDownloadOnly
=== RUN TestDownloadOnly/crio
=== RUN TestDownloadOnly/crio/v1.13.0
=== RUN TestDownloadOnly/crio/v1.18.3
=== RUN TestDownloadOnly/crio/v1.18.4-rc.0
=== RUN TestDownloadOnly/crio/DeleteAll
=== RUN TestDownloadOnly/crio/DeleteAlwaysSucceeds
=== RUN TestDownloadOnly/docker
=== RUN TestDownloadOnly/docker/v1.13.0
=== RUN TestDownloadOnly/docker/v1.18.3
=== RUN TestDownloadOnly/docker/v1.18.4-rc.0
=== RUN TestDownloadOnly/docker/DeleteAll
=== RUN TestDownloadOnly/docker/DeleteAlwaysSucceeds
=== RUN TestDownloadOnly/containerd
=== RUN TestDownloadOnly/containerd/v1.13.0
=== RUN TestDownloadOnly/containerd/v1.18.3
=== RUN TestDownloadOnly/containerd/v1.18.4-rc.0
=== RUN TestDownloadOnly/containerd/DeleteAll
=== RUN TestDownloadOnly/containerd/DeleteAlwaysSucceeds
--- PASS: TestDownloadOnly (77.72s)
--- PASS: TestDownloadOnly/crio (11.10s)
--- PASS: TestDownloadOnly/crio/v1.13.0 (4.98s)
aaa_download_only_test.go:65: (dbg) Run: out/minikube-linux-amd64 start --download-only -p crio-20200701025634-8084 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=crio --driver=kvm2
aaa_download_only_test.go:65: (dbg) Done: out/minikube-linux-amd64 start --download-only -p crio-20200701025634-8084 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=crio --driver=kvm2 : (4.980478001s)
--- PASS: TestDownloadOnly/crio/v1.18.3 (3.03s)
aaa_download_only_test.go:67: (dbg) Run: out/minikube-linux-amd64 start --download-only -p crio-20200701025634-8084 --force --alsologtostderr --kubernetes-version=v1.18.3 --container-runtime=crio --driver=kvm2
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p crio-20200701025634-8084 --force --alsologtostderr --kubernetes-version=v1.18.3 --container-runtime=crio --driver=kvm2 : (3.025520149s)
--- PASS: TestDownloadOnly/crio/v1.18.4-rc.0 (2.62s)
aaa_download_only_test.go:67: (dbg) Run: out/minikube-linux-amd64 start --download-only -p crio-20200701025634-8084 --force --alsologtostderr --kubernetes-version=v1.18.4-rc.0 --container-runtime=crio --driver=kvm2
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p crio-20200701025634-8084 --force --alsologtostderr --kubernetes-version=v1.18.4-rc.0 --container-runtime=crio --driver=kvm2 : (2.616657247s)
--- PASS: TestDownloadOnly/crio/DeleteAll (0.18s)
aaa_download_only_test.go:133: (dbg) Run: out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/crio/DeleteAlwaysSucceeds (0.15s)
aaa_download_only_test.go:145: (dbg) Run: out/minikube-linux-amd64 delete -p crio-20200701025634-8084
helpers_test.go:170: Cleaning up "crio-20200701025634-8084" profile ...
helpers_test.go:171: (dbg) Run: out/minikube-linux-amd64 delete -p crio-20200701025634-8084
--- PASS: TestDownloadOnly/docker (19.49s)
--- PASS: TestDownloadOnly/docker/v1.13.0 (6.87s)
aaa_download_only_test.go:65: (dbg) Run: out/minikube-linux-amd64 start --download-only -p docker-20200701025646-8084 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=docker --driver=kvm2
aaa_download_only_test.go:65: (dbg) Done: out/minikube-linux-amd64 start --download-only -p docker-20200701025646-8084 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=docker --driver=kvm2 : (6.867983204s)
--- PASS: TestDownloadOnly/docker/v1.18.3 (5.77s)
aaa_download_only_test.go:67: (dbg) Run: out/minikube-linux-amd64 start --download-only -p docker-20200701025646-8084 --force --alsologtostderr --kubernetes-version=v1.18.3 --container-runtime=docker --driver=kvm2
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p docker-20200701025646-8084 --force --alsologtostderr --kubernetes-version=v1.18.3 --container-runtime=docker --driver=kvm2 : (5.770595136s)
--- PASS: TestDownloadOnly/docker/v1.18.4-rc.0 (6.37s)
aaa_download_only_test.go:67: (dbg) Run: out/minikube-linux-amd64 start --download-only -p docker-20200701025646-8084 --force --alsologtostderr --kubernetes-version=v1.18.4-rc.0 --container-runtime=docker --driver=kvm2
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p docker-20200701025646-8084 --force --alsologtostderr --kubernetes-version=v1.18.4-rc.0 --container-runtime=docker --driver=kvm2 : (6.372763555s)
--- PASS: TestDownloadOnly/docker/DeleteAll (0.18s)
aaa_download_only_test.go:133: (dbg) Run: out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/docker/DeleteAlwaysSucceeds (0.15s)
aaa_download_only_test.go:145: (dbg) Run: out/minikube-linux-amd64 delete -p docker-20200701025646-8084
helpers_test.go:170: Cleaning up "docker-20200701025646-8084" profile ...
helpers_test.go:171: (dbg) Run: out/minikube-linux-amd64 delete -p docker-20200701025646-8084
--- PASS: TestDownloadOnly/containerd (47.13s)
--- PASS: TestDownloadOnly/containerd/v1.13.0 (8.55s)
aaa_download_only_test.go:65: (dbg) Run: out/minikube-linux-amd64 start --download-only -p containerd-20200701025705-8084 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=containerd --driver=kvm2
aaa_download_only_test.go:65: (dbg) Done: out/minikube-linux-amd64 start --download-only -p containerd-20200701025705-8084 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=containerd --driver=kvm2 : (8.545680308s)
--- PASS: TestDownloadOnly/containerd/v1.18.3 (10.91s)
aaa_download_only_test.go:67: (dbg) Run: out/minikube-linux-amd64 start --download-only -p containerd-20200701025705-8084 --force --alsologtostderr --kubernetes-version=v1.18.3 --container-runtime=containerd --driver=kvm2
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p containerd-20200701025705-8084 --force --alsologtostderr --kubernetes-version=v1.18.3 --container-runtime=containerd --driver=kvm2 : (10.905585979s)
--- PASS: TestDownloadOnly/containerd/v1.18.4-rc.0 (19.35s)
aaa_download_only_test.go:67: (dbg) Run: out/minikube-linux-amd64 start --download-only -p containerd-20200701025705-8084 --force --alsologtostderr --kubernetes-version=v1.18.4-rc.0 --container-runtime=containerd --driver=kvm2
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p containerd-20200701025705-8084 --force --alsologtostderr --kubernetes-version=v1.18.4-rc.0 --container-runtime=containerd --driver=kvm2 : (19.34490696s)
--- PASS: TestDownloadOnly/containerd/DeleteAll (3.04s)
aaa_download_only_test.go:133: (dbg) Run: out/minikube-linux-amd64 delete --all
aaa_download_only_test.go:133: (dbg) Done: out/minikube-linux-amd64 delete --all: (3.039439188s)
--- PASS: TestDownloadOnly/containerd/DeleteAlwaysSucceeds (2.54s)
aaa_download_only_test.go:145: (dbg) Run: out/minikube-linux-amd64 delete -p containerd-20200701025705-8084
aaa_download_only_test.go:145: (dbg) Done: out/minikube-linux-amd64 delete -p containerd-20200701025705-8084: (2.541553081s)
helpers_test.go:170: Cleaning up "containerd-20200701025705-8084" profile ...
helpers_test.go:171: (dbg) Run: out/minikube-linux-amd64 delete -p containerd-20200701025705-8084
helpers_test.go:171: (dbg) Done: out/minikube-linux-amd64 delete -p containerd-20200701025705-8084: (2.753247521s)
=== RUN TestDownloadOnlyKic
--- SKIP: TestDownloadOnlyKic (0.00s)
aaa_download_only_test.go:156: skipping, only for docker or podman driver
=== RUN TestOffline
=== RUN TestOffline/group
=== RUN TestOffline/group/docker
=== PAUSE TestOffline/group/docker
=== RUN TestOffline/group/crio
=== PAUSE TestOffline/group/crio
=== RUN TestOffline/group/containerd
=== PAUSE TestOffline/group/containerd
=== CONT TestOffline/group/docker
=== CONT TestOffline/group/containerd
=== CONT TestOffline/group/crio
--- PASS: TestOffline (253.61s)
--- PASS: TestOffline/group (0.00s)
--- PASS: TestOffline/group/containerd (230.10s)
aab_offline_test.go:53: (dbg) Run: out/minikube-linux-amd64 start -p offline-containerd-20200701025752-8084 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime containerd --driver=kvm2
aab_offline_test.go:53: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20200701025752-8084 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime containerd --driver=kvm2 : (3m49.139342662s)
helpers_test.go:170: Cleaning up "offline-containerd-20200701025752-8084" profile ...
helpers_test.go:171: (dbg) Run: out/minikube-linux-amd64 delete -p offline-containerd-20200701025752-8084
--- PASS: TestOffline/group/docker (243.19s)
aab_offline_test.go:53: (dbg) Run: out/minikube-linux-amd64 start -p offline-docker-20200701025752-8084 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime docker --driver=kvm2
aab_offline_test.go:53: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-20200701025752-8084 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime docker --driver=kvm2 : (4m2.446347584s)
helpers_test.go:170: Cleaning up "offline-docker-20200701025752-8084" profile ...
helpers_test.go:171: (dbg) Run: out/minikube-linux-amd64 delete -p offline-docker-20200701025752-8084
--- PASS: TestOffline/group/crio (253.61s)
aab_offline_test.go:53: (dbg) Run: out/minikube-linux-amd64 start -p offline-crio-20200701025752-8084 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime crio --driver=kvm2
aab_offline_test.go:53: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-20200701025752-8084 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime crio --driver=kvm2 : (4m12.338034102s)
helpers_test.go:170: Cleaning up "offline-crio-20200701025752-8084" profile ...
helpers_test.go:171: (dbg) Run: out/minikube-linux-amd64 delete -p offline-crio-20200701025752-8084
helpers_test.go:171: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-20200701025752-8084: (1.272372307s)
=== RUN TestAddons
=== RUN TestAddons/parallel
=== RUN TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== RUN TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== RUN TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== RUN TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT TestAddons/parallel/Registry
=== CONT TestAddons/parallel/HelmTiller
=== CONT TestAddons/parallel/Olm
=== CONT TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/MetricsServer
2020/07/01 03:04:36 [DEBUG] GET http://192.168.39.105:5000
--- FAIL: TestAddons (609.83s)
addons_test.go:44: (dbg) Run: out/minikube-linux-amd64 start -p addons-20200701030206-8084 --wait=false --memory=2600 --alsologtostderr --addons=ingress --addons=registry --addons=metrics-server --addons=helm-tiller --addons=olm --driver=kvm2
addons_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p addons-20200701030206-8084 --wait=false --memory=2600 --alsologtostderr --addons=ingress --addons=registry --addons=metrics-server --addons=helm-tiller --addons=olm --driver=kvm2 : (2m19.805768991s)
--- FAIL: TestAddons/parallel (0.00s)
--- SKIP: TestAddons/parallel/Olm (0.00s)
addons_test.go:334: Skipping olm test till this timeout issue is solved https://github.com/operator-framework/operator-lifecycle-manager/issues/1534#issuecomment-632342257
--- FAIL: TestAddons/parallel/Registry (12.87s)
addons_test.go:173: registry stabilized in 15.318845ms
addons_test.go:175: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:332: "registry-vdzjt" [f79eaab8-b0fe-446d-a971-cb33624725a8] Running
addons_test.go:175: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.021224498s
addons_test.go:178: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:332: "registry-proxy-7kmmq" [065fd00f-b3cd-4b35-8896-7021306fecbb] Running
addons_test.go:178: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008063588s
addons_test.go:183: (dbg) Run: kubectl --context addons-20200701030206-8084 delete po -l run=registry-test --now
addons_test.go:183: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 delete po -l run=registry-test --now: exec: "kubectl": executable file not found in $PATH (418ns)
addons_test.go:185: pre-cleanup kubectl --context addons-20200701030206-8084 delete po -l run=registry-test --now failed: exec: "kubectl": executable file not found in $PATH (not a problem)
addons_test.go:188: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:188: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exec: "kubectl": executable file not found in $PATH (116ns)
addons_test.go:190: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-20200701030206-8084 run --rm registry-test --restart=Never --image=busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exec: "kubectl": executable file not found in $PATH
addons_test.go:194: expected curl response be "HTTP/1.1 200", but got **
addons_test.go:202: (dbg) Run: out/minikube-linux-amd64 -p addons-20200701030206-8084 ip
addons_test.go:231: (dbg) Run: out/minikube-linux-amd64 -p addons-20200701030206-8084 addons disable registry --alsologtostderr -v=1
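The Registry failure above is environmental rather than a product bug: the test shells out to kubectl, which the environment summary at the top of the run already reported as missing on this agent. A pre-flight check along these lines (a sketch, not part of the harness) would surface that before the suite starts:
command -v kubectl >/dev/null 2>&1 || echo "kubectl not on PATH; subtests that exec kubectl will fail"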
helpers_test.go:215: -----------------------post-mortem--------------------------------
helpers_test.go:232: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-20200701030206-8084 -n addons-20200701030206-8084
helpers_test.go:237: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:238: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 -p addons-20200701030206-8084 logs -n 25
helpers_test.go:240: (dbg) Done: out/minikube-linux-amd64 -p addons-20200701030206-8084 logs -n 25: (1.84531831s)
helpers_test.go:245: TestAddons/parallel/Registry logs:
-- stdout --
* ==> Docker <==
* -- Logs begin at Wed 2020-07-01 03:02:14 UTC, end at Wed 2020-07-01 03:04:38 UTC. --
* Jul 01 03:03:56 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:03:56.880573447Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:03:57 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:03:57.977854661Z" level=info msg="shim reaped" id=a2f179901974b17442fcdc5ec101768baf6d5faae9bbd904ebe16f1e48fe5759
* Jul 01 03:03:57 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:03:57.988208857Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:03:59 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:03:59.042862264Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2453f72f2d3fbbf3211d903f4e35f631dfaa2af789ad1423e4b31f4a2ba3bc0c/shim.sock" debug=false pid=5615
* Jul 01 03:03:59 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:03:59.734848751Z" level=info msg="shim reaped" id=2453f72f2d3fbbf3211d903f4e35f631dfaa2af789ad1423e4b31f4a2ba3bc0c
* Jul 01 03:03:59 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:03:59.745141379Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:04:01 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:01.601972719Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0b30ca3163d6c1a8b981eaef619f6464eff0a43c70ed443a72b3704544816c57/shim.sock" debug=false pid=5721
* Jul 01 03:04:02 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:02.457841311Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/39c3f696531d245b11df91f4c5735d60d1b881e5876c2ba18732619f1d2ea1e5/shim.sock" debug=false pid=5811
* Jul 01 03:04:02 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:02.466947173Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d8512f3c21a09573210467eb25d58e2a6c904e412a403d1d782f87f6c065a40f/shim.sock" debug=false pid=5816
* Jul 01 03:04:04 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:04.757414710Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9d9dffb8b723aeb9f1f601740ed20469534698926a65c547ae1838a89c9cb6d5/shim.sock" debug=false pid=5974
* Jul 01 03:04:06 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:06.038697805Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/47f59199ff8b2542ebb3b8f2df0ec48ce6ba2ab3da623f4dd2e66e8c3d3c34f2/shim.sock" debug=false pid=6057
* Jul 01 03:04:06 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:06.076207506Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/740d3a15da583d6a597ee2c1e43b2c18ba003d2c1b42751671fa3775d5d84d5d/shim.sock" debug=false pid=6073
* Jul 01 03:04:12 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:12.897699882Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/49554f7da428d44645a5ce3bd4f5629c994661cba159835c2bf649bd91a91e87/shim.sock" debug=false pid=6233
* Jul 01 03:04:25 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:25.272317806Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d920f932f040fce34d1156f22795d7e86a77132c51b63cd48afc626354ef6c2a/shim.sock" debug=false pid=6400
* Jul 01 03:04:28 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:28.374226784Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ed93f8777ade8caff27f7b4453aafc2e44589369b308f24e02956d0a482dd602/shim.sock" debug=false pid=6590
* Jul 01 03:04:29 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:29.264160126Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6815cdaec6e0a262bfafc87fa88eab7fcf036190a70f1d5687986c531a42fb9d/shim.sock" debug=false pid=6628
* Jul 01 03:04:29 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:29.953751693Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/636b722f9b872ae2b392ff7f4518d3777f898a782f30fde093c50c90dc789b8f/shim.sock" debug=false pid=6668
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.316757798Z" level=info msg="shim reaped" id=9d9dffb8b723aeb9f1f601740ed20469534698926a65c547ae1838a89c9cb6d5
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.317492282Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.389801388Z" level=info msg="shim reaped" id=49554f7da428d44645a5ce3bd4f5629c994661cba159835c2bf649bd91a91e87
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.408845696Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.935730704Z" level=info msg="shim reaped" id=4f853a2c4fb3a27395c5bbe725cc9e9ff5ac2fb56ce858cb3eae48e6c6f83ccb
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.936306640Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.994717950Z" level=info msg="shim reaped" id=4c6a2b7b2735c289a1fc97f3cc2dac77b43b57c8bd297228be350ad881f72f46
* Jul 01 03:04:38 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:38.010043710Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
* 636b722f9b872 quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 9 seconds ago Running packageserver 0 740d3a15da583
* 6815cdaec6e0a quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 9 seconds ago Running packageserver 0 47f59199ff8b2
* ed93f8777ade8 quay.io/operator-framework/upstream-community-operators@sha256:4bdd1485bffb217bfd06dccd62a899dcce8bc57af971568ba995176c8b1aa464 10 seconds ago Running registry-server 0 d8512f3c21a09
* d920f932f040f quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:251e733bf41cdf726092e079d32eed51791746560fff4d59cf067508ed635287 13 seconds ago Running controller 0 eefa25270d8a6
* 49554f7da428d gcr.io/google_containers/kube-registry-proxy@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da 26 seconds ago Exited registry-proxy 0 4f853a2c4fb3a
* 9d9dffb8b723a registry.hub.docker.com/library/registry@sha256:8be26f81ffea54106bae012c6f349df70f4d5e7e2ec01b143c46e2c03b9e551d 34 seconds ago Exited registry 0 4c6a2b7b2735c
* 39c3f696531d2 quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 36 seconds ago Running olm-operator 0 4a26317d80253
* 0b30ca3163d6c quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 37 seconds ago Running catalog-operator 0 87e032f179b67
* 1a30822b4f9be jettech/kube-webhook-certgen@sha256:c6f018afe5dfce02110b332ea75bb846144e65d4993c7534886d8505a6960357 42 seconds ago Exited patch 0 a2f179901974b
* f34af38da2a24 k8s.gcr.io/metrics-server-amd64@sha256:49a9f12f7067d11f42c803dbe61ed2c1299959ad85cb315b25ff7eef8e6b8892 43 seconds ago Running metrics-server 0 1b8c0d094c10b
* 1bc07123ace4f gcr.io/kubernetes-helm/tiller@sha256:59b6200a822ddb18577ca7120fb644a3297635f47d92db521836b39b74ad19e8 45 seconds ago Running tiller 0 e8c2c1e0e0a62
* d6a261bca5222 67da37a9a360e 47 seconds ago Running coredns 0 d11b454b968e3
* 9e4cfc5738e04 jettech/kube-webhook-certgen@sha256:c6f018afe5dfce02110b332ea75bb846144e65d4993c7534886d8505a6960357 47 seconds ago Exited create 0 de667a00fefb0
* 94232379c1581 4689081edb103 52 seconds ago Running storage-provisioner 0 7896015c69c73
* 40c9a46cf08ab 3439b7546f29b 56 seconds ago Running kube-proxy 0 8df7717a34531
* a8673db5ff2ad 76216c34ed0c7 About a minute ago Running kube-scheduler 0 69d249b151f2d
* 663dada323e98 303ce5db0e90d About a minute ago Running etcd 0 4777c338fb836
* 24d686838dec2 da26705ccb4b5 About a minute ago Running kube-controller-manager 0 ff24f8e852b09
* b7ced5cccc0a4 7e28efa976bd1 About a minute ago Running kube-apiserver 0 1456a98fec87b
*
* ==> coredns [d6a261bca522] <==
* .:53
* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
* CoreDNS-1.6.7
* linux/amd64, go1.13.6, da7f65b
*
* ==> describe nodes <==
* Name: addons-20200701030206-8084
* Roles: master
* Labels: beta.kubernetes.io/arch=amd64
* beta.kubernetes.io/os=linux
* kubernetes.io/arch=amd64
* kubernetes.io/hostname=addons-20200701030206-8084
* kubernetes.io/os=linux
* minikube.k8s.io/commit=8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f
* minikube.k8s.io/name=addons-20200701030206-8084
* minikube.k8s.io/updated_at=2020_07_01T03_03_34_0700
* minikube.k8s.io/version=v1.12.0-beta.0
* node-role.kubernetes.io/master=
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
* node.alpha.kubernetes.io/ttl: 0
* volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp: Wed, 01 Jul 2020 03:03:31 +0000
* Taints: <none>
* Unschedulable: false
* Lease:
* HolderIdentity: addons-20200701030206-8084
* AcquireTime: <unset>
* RenewTime: Wed, 01 Jul 2020 03:04:35 +0000
* Conditions:
* Type Status LastHeartbeatTime LastTransitionTime Reason Message
* ---- ------ ----------------- ------------------ ------ -------
* MemoryPressure False Wed, 01 Jul 2020 03:04:36 +0000 Wed, 01 Jul 2020 03:03:27 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
* DiskPressure False Wed, 01 Jul 2020 03:04:36 +0000 Wed, 01 Jul 2020 03:03:27 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
* PIDPressure False Wed, 01 Jul 2020 03:04:36 +0000 Wed, 01 Jul 2020 03:03:27 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
* Ready True Wed, 01 Jul 2020 03:04:36 +0000 Wed, 01 Jul 2020 03:03:45 +0000 KubeletReady kubelet is posting ready status
* Addresses:
* InternalIP: 192.168.39.105
* Hostname: addons-20200701030206-8084
* Capacity:
* cpu: 2
* ephemeral-storage: 16954224Ki
* hugepages-2Mi: 0
* memory: 2470872Ki
* pods: 110
* Allocatable:
* cpu: 2
* ephemeral-storage: 16954224Ki
* hugepages-2Mi: 0
* memory: 2470872Ki
* pods: 110
* System Info:
* Machine ID: 11d7f8acaa014dd1a88f3c5ba725298e
* System UUID: 11d7f8ac-aa01-4dd1-a88f-3c5ba725298e
* Boot ID: 3a2b8acb-8700-4c04-87f6-71cbb4607c14
* Kernel Version: 4.19.107
* OS Image: Buildroot 2019.02.10
* Operating System: linux
* Architecture: amd64
* Container Runtime Version: docker://19.3.8
* Kubelet Version: v1.18.3
* Kube-Proxy Version: v1.18.3
* Non-terminated Pods: (17 in total)
* Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
* --------- ---- ------------ ---------- --------------- ------------- ---
* kube-system coredns-66bff467f8-hj7n4 100m (5%) 0 (0%) 70Mi (2%) 170Mi (7%) 57s
* kube-system etcd-addons-20200701030206-8084 0 (0%) 0 (0%) 0 (0%) 0 (0%) 62s
* kube-system ingress-nginx-controller-7bb4c67d67-fkjkd 100m (5%) 0 (0%) 90Mi (3%) 0 (0%) 57s
* kube-system kube-apiserver-addons-20200701030206-8084 250m (12%) 0 (0%) 0 (0%) 0 (0%) 62s
* kube-system kube-controller-manager-addons-20200701030206-8084 200m (10%) 0 (0%) 0 (0%) 0 (0%) 62s
* kube-system kube-proxy-8bljr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57s
* kube-system kube-scheduler-addons-20200701030206-8084 100m (5%) 0 (0%) 0 (0%) 0 (0%) 62s
* kube-system metrics-server-7bc6d75975-qxr52 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57s
* kube-system registry-proxy-7kmmq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 53s
* kube-system registry-vdzjt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57s
* kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 62s
* kube-system tiller-deploy-78ff886c54-7kcct 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57s
* olm catalog-operator-86f777cc59-n2z95 10m (0%) 0 (0%) 80Mi (3%) 0 (0%) 57s
* olm olm-operator-5fd48d8cd4-sh5bk 10m (0%) 0 (0%) 160Mi (6%) 0 (0%) 57s
* olm operatorhubio-catalog-9h9sw 10m (0%) 100m (5%) 50Mi (2%) 100Mi (4%) 37s
* olm packageserver-fc86cd5d4-djgms 10m (0%) 0 (0%) 50Mi (2%) 0 (0%) 33s
* olm packageserver-fc86cd5d4-wgfqr 10m (0%) 0 (0%) 50Mi (2%) 0 (0%) 33s
* Allocated resources:
* (Total limits may be over 100 percent, i.e., overcommitted.)
* Resource Requests Limits
* -------- -------- ------
* cpu 800m (40%) 100m (5%)
* memory 550Mi (22%) 270Mi (11%)
* ephemeral-storage 0 (0%) 0 (0%)
* hugepages-2Mi 0 (0%) 0 (0%)
* Events:
* Type Reason Age From Message
* ---- ------ ---- ---- -------
* Normal Starting 63s kubelet, addons-20200701030206-8084 Starting kubelet.
* Normal NodeHasSufficientMemory 63s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 63s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 63s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeHasSufficientPID
* Normal NodeAllocatableEnforced 62s kubelet, addons-20200701030206-8084 Updated Node Allocatable limit across pods
* Normal Starting 56s kube-proxy, addons-20200701030206-8084 Starting kube-proxy.
* Normal NodeReady 53s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeReady
*
* ==> dmesg <==
* "trace_clock=local"
* on the kernel command line
* [ +0.000071] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
* [ +1.825039] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
* [ +0.005760] systemd-fstab-generator[1147]: Ignoring "noauto" for root device
* [ +0.006598] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
* [ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
* [ +1.628469] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
* [ +0.314006] vboxguest: loading out-of-tree module taints kernel.
* [ +0.005336] vboxguest: PCI device not found, probably running on physical hardware.
* [ +4.401958] systemd-fstab-generator[1990]: Ignoring "noauto" for root device
* [ +0.072204] systemd-fstab-generator[2000]: Ignoring "noauto" for root device
* [ +7.410511] systemd-fstab-generator[2190]: Ignoring "noauto" for root device
* [Jul 1 03:03] kauditd_printk_skb: 65 callbacks suppressed
* [ +0.256868] systemd-fstab-generator[2359]: Ignoring "noauto" for root device
* [ +0.295426] systemd-fstab-generator[2430]: Ignoring "noauto" for root device
* [ +1.557931] systemd-fstab-generator[2640]: Ignoring "noauto" for root device
* [ +3.130682] kauditd_printk_skb: 107 callbacks suppressed
* [ +8.920986] systemd-fstab-generator[3723]: Ignoring "noauto" for root device
* [ +8.276427] kauditd_printk_skb: 32 callbacks suppressed
* [ +8.301918] kauditd_printk_skb: 71 callbacks suppressed
* [ +5.560259] kauditd_printk_skb: 29 callbacks suppressed
* [Jul 1 03:04] kauditd_printk_skb: 11 callbacks suppressed
* [ +17.054203] NFSD: Unable to end grace period: -110
* [ +15.586696] kauditd_printk_skb: 29 callbacks suppressed
*
* ==> etcd [663dada323e9] <==
* raft2020/07/01 03:03:28 INFO: 38dbae10e7efb596 became leader at term 2
* raft2020/07/01 03:03:28 INFO: raft.node: 38dbae10e7efb596 elected leader 38dbae10e7efb596 at term 2
* 2020-07-01 03:03:28.332386 I | etcdserver: setting up the initial cluster version to 3.4
* 2020-07-01 03:03:28.332551 I | etcdserver: published {Name:addons-20200701030206-8084 ClientURLs:[https://192.168.39.105:2379]} to cluster f45b5855e490ef48
* 2020-07-01 03:03:28.332600 I | embed: ready to serve client requests
* 2020-07-01 03:03:28.333537 I | embed: serving client requests on 127.0.0.1:2379
* 2020-07-01 03:03:28.334586 I | embed: ready to serve client requests
* 2020-07-01 03:03:28.337193 I | embed: serving client requests on 192.168.39.105:2379
* 2020-07-01 03:03:28.338344 N | etcdserver/membership: set the initial cluster version to 3.4
* 2020-07-01 03:03:28.338411 I | etcdserver/api: enabled capabilities for version 3.4
* 2020-07-01 03:04:04.323170 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/clusterserviceversions/olm/packageserver\" " with result "range_response_count:1 size:9305" took too long (105.645771ms) to execute
* 2020-07-01 03:04:05.236139 W | etcdserver: read-only range request "key:\"/registry/endpointslices/olm/v1-packages-operators-coreos-com-gfbjh\" " with result "range_response_count:1 size:953" took too long (127.168286ms) to execute
* 2020-07-01 03:04:05.805401 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (119.574479ms) to execute
* 2020-07-01 03:04:05.808506 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (122.545256ms) to execute
* 2020-07-01 03:04:05.820836 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/clusterserviceversions/olm/packageserver\" " with result "range_response_count:1 size:9775" took too long (135.01854ms) to execute
* 2020-07-01 03:04:08.142775 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (111.648032ms) to execute
* 2020-07-01 03:04:08.143088 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (231.508641ms) to execute
* 2020-07-01 03:04:08.143309 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/catalogsources/\" range_end:\"/registry/operators.coreos.com/catalogsources0\" " with result "range_response_count:1 size:2019" took too long (159.192904ms) to execute
* 2020-07-01 03:04:08.143552 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (113.041301ms) to execute
* 2020-07-01 03:04:16.739873 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59653" took too long (208.500076ms) to execute
* 2020-07-01 03:04:16.740802 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (228.630795ms) to execute
* 2020-07-01 03:04:23.380208 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (268.088725ms) to execute
* 2020-07-01 03:04:29.198339 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59989" took too long (110.334524ms) to execute
* 2020-07-01 03:04:29.198868 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59989" took too long (115.991158ms) to execute
* 2020-07-01 03:04:29.199825 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59989" took too long (116.97369ms) to execute
*
* ==> kernel <==
* 03:04:38 up 2 min, 0 users, load average: 2.78, 1.08, 0.41
* Linux addons-20200701030206-8084 4.19.107 #1 SMP Thu May 28 15:07:17 PDT 2020 x86_64 GNU/Linux
* PRETTY_NAME="Buildroot 2019.02.10"
*
* ==> kube-apiserver [b7ced5cccc0a] <==
* ERROR $root.definitions.com.coreos.operators.v1alpha1.ClusterServiceVersion.properties.spec.properties.install.properties.spec.properties.deployments.items.<array>.properties.spec.properties.template.properties.spec.properties.initContainers.items.<array>.properties.readinessProbe.properties.tcpSocket.properties.port has invalid property: anyOf
* ERROR $root.definitions.com.coreos.operators.v1alpha1.ClusterServiceVersion.properties.spec.properties.install.properties.spec.properties.deployments.items.<array>.properties.spec.properties.template.properties.spec.properties.initContainers.items.<array>.properties.startupProbe.properties.httpGet.properties.port has invalid property: anyOf
* ERROR $root.definitions.com.coreos.operators.v1alpha1.ClusterServiceVersion.properties.spec.properties.install.properties.spec.properties.deployments.items.<array>.properties.spec.properties.template.properties.spec.properties.initContainers.items.<array>.properties.startupProbe.properties.tcpSocket.properties.port has invalid property: anyOf
* I0701 03:03:39.786250 1 client.go:361] parsed scheme: "endpoint"
* I0701 03:03:39.786317 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}]
* I0701 03:03:39.809953 1 controller.go:606] quota admission added evaluator for: clusterserviceversions.operators.coreos.com
* I0701 03:03:39.838120 1 client.go:361] parsed scheme: "endpoint"
* I0701 03:03:39.838409 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}]
* I0701 03:03:39.851362 1 controller.go:606] quota admission added evaluator for: catalogsources.operators.coreos.com
* I0701 03:03:41.148400 1 controller.go:606] quota admission added evaluator for: replicasets.apps
* I0701 03:03:41.407486 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
* W0701 03:03:42.028314 1 handler_proxy.go:102] no RequestInfo found in the context
* E0701 03:03:42.028373 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
* I0701 03:03:42.028384 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
* E0701 03:03:57.879055 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
* I0701 03:03:57.879070 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
* I0701 03:04:01.803924 1 client.go:361] parsed scheme: "endpoint"
* I0701 03:04:01.803984 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}]
* I0701 03:04:01.886189 1 client.go:361] parsed scheme: "endpoint"
* I0701 03:04:01.886333 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}]
* W0701 03:04:06.880277 1 handler_proxy.go:102] no RequestInfo found in the context
* E0701 03:04:06.880349 1 controller.go:114] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
* I0701 03:04:06.880361 1 controller.go:127] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue.
*
* ==> kube-controller-manager [24d686838dec] <==
* I0701 03:03:41.514749 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
* I0701 03:03:41.520691 1 shared_informer.go:230] Caches are synced for garbage collector
* E0701 03:03:41.534440 1 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
* E0701 03:03:41.972255 1 memcache.go:111] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
* I0701 03:03:46.000600 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"registry-proxy", UID:"fe3c5a0b-c38c-4314-9a05-53037ff158f0", APIVersion:"apps/v1", ResourceVersion:"508", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: registry-proxy-7kmmq
* I0701 03:03:50.871083 1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
* I0701 03:03:52.669012 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"ingress-nginx-admission-create", UID:"1ae488a6-abe8-4bc0-965b-c398118daf32", APIVersion:"batch/v1", ResourceVersion:"387", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
* I0701 03:03:57.805105 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"ingress-nginx-admission-patch", UID:"f6bb03c0-6ca4-47f0-900a-b1e273dbb951", APIVersion:"batch/v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
* I0701 03:04:05.000820 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"olm", Name:"packageserver", UID:"4f1d1f78-eb9b-4bf3-9753-09b07a182891", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set packageserver-fc86cd5d4 to 2
* I0701 03:04:05.059501 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"olm", Name:"packageserver-fc86cd5d4", UID:"c9874dfe-0029-4742-a3d8-1f87cb754bb4", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-fc86cd5d4-djgms
* I0701 03:04:05.170272 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"olm", Name:"packageserver-fc86cd5d4", UID:"c9874dfe-0029-4742-a3d8-1f87cb754bb4", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-fc86cd5d4-wgfqr
* E0701 03:04:11.881078 1 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: packages.operators.coreos.com/v1: the server is currently unable to handle the request
* I0701 03:04:11.881240 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for catalogsources.operators.coreos.com
* I0701 03:04:11.881307 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for operatorgroups.operators.coreos.com
* I0701 03:04:11.881325 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for subscriptions.operators.coreos.com
* I0701 03:04:11.881345 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for installplans.operators.coreos.com
* I0701 03:04:11.881361 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for clusterserviceversions.operators.coreos.com
* I0701 03:04:11.881406 1 shared_informer.go:223] Waiting for caches to sync for resource quota
* I0701 03:04:11.981723 1 shared_informer.go:230] Caches are synced for resource quota
* I0701 03:04:13.123021 1 request.go:621] Throttling request took 1.047575349s, request: GET:https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
* W0701 03:04:13.926487 1 garbagecollector.go:644] failed to discover some groups: map[packages.operators.coreos.com/v1:the server is currently unable to handle the request]
* E0701 03:04:14.127948 1 memcache.go:206] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
* E0701 03:04:14.428219 1 memcache.go:111] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
* I0701 03:04:14.429209 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
* I0701 03:04:14.429275 1 shared_informer.go:230] Caches are synced for garbage collector
*
* ==> kube-proxy [40c9a46cf08a] <==
* W0701 03:03:42.853505 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
* I0701 03:03:42.861890 1 node.go:136] Successfully retrieved node IP: 192.168.39.105
* I0701 03:03:42.861937 1 server_others.go:186] Using iptables Proxier.
* W0701 03:03:42.861945 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
* I0701 03:03:42.861949 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
* I0701 03:03:42.862522 1 server.go:583] Version: v1.18.3
* I0701 03:03:42.863107 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
* I0701 03:03:42.863131 1 conntrack.go:52] Setting nf_conntrack_max to 131072
* I0701 03:03:42.863538 1 conntrack.go:83] Setting conntrack hashsize to 32768
* I0701 03:03:42.867910 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
* I0701 03:03:42.868306 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
* I0701 03:03:42.871109 1 config.go:315] Starting service config controller
* I0701 03:03:42.871148 1 shared_informer.go:223] Waiting for caches to sync for service config
* I0701 03:03:42.871165 1 config.go:133] Starting endpoints config controller
* I0701 03:03:42.871173 1 shared_informer.go:223] Waiting for caches to sync for endpoints config
* I0701 03:03:42.971416 1 shared_informer.go:230] Caches are synced for endpoints config
* I0701 03:03:42.971523 1 shared_informer.go:230] Caches are synced for service config
*
* ==> kube-scheduler [a8673db5ff2a] <==
* W0701 03:03:31.650803 1 authentication.go:40] Authentication is disabled
* I0701 03:03:31.650814 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
* I0701 03:03:31.652329 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
* I0701 03:03:31.652574 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0701 03:03:31.652711 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0701 03:03:31.652730 1 tlsconfig.go:240] Starting DynamicServingCertificateController
* E0701 03:03:31.657008 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0701 03:03:31.658164 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0701 03:03:31.658324 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E0701 03:03:31.658888 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0701 03:03:31.659056 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0701 03:03:31.659357 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E0701 03:03:31.659504 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0701 03:03:31.659723 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E0701 03:03:31.659789 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0701 03:03:32.465153 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0701 03:03:32.497519 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0701 03:03:32.559891 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* I0701 03:03:35.752931 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* E0701 03:03:40.948371 1 factory.go:503] pod: kube-system/ingress-nginx-admission-create-59b72 is already present in the active queue
* E0701 03:03:40.967858 1 factory.go:503] pod: kube-system/ingress-nginx-admission-patch-f8zdn is already present in the active queue
* E0701 03:03:41.332123 1 factory.go:503] pod: kube-system/tiller-deploy-78ff886c54-7kcct is already present in the active queue
* E0701 03:03:41.345197 1 factory.go:503] pod: kube-system/metrics-server-7bc6d75975-qxr52 is already present in the active queue
* E0701 03:03:41.367475 1 factory.go:503] pod: olm/olm-operator-5fd48d8cd4-sh5bk is already present in the active queue
* E0701 03:03:41.389016 1 factory.go:503] pod: kube-system/ingress-nginx-controller-7bb4c67d67-fkjkd is already present in the active queue
*
* ==> kubelet <==
* -- Logs begin at Wed 2020-07-01 03:02:14 UTC, end at Wed 2020-07-01 03:04:38 UTC. --
* Jul 01 03:04:02 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:02.977707 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/olm-operator-5fd48d8cd4-sh5bk through plugin: invalid network status for
* Jul 01 03:04:02 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:02.992739 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for
* Jul 01 03:04:05 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:05.046781 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/registry-vdzjt through plugin: invalid network status for
* Jul 01 03:04:05 addons-20200701030206-8084 kubelet[3731]: I0701 03:04:05.126552 3731 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Jul 01 03:04:05 addons-20200701030206-8084 kubelet[3731]: I0701 03:04:05.287977 3731 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Jul 01 03:04:05 addons-20200701030206-8084 kubelet[3731]: I0701 03:04:05.312864 3731 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5ed5b503-410d-47fe-b817-f526f3df1542-apiservice-cert") pod "packageserver-fc86cd5d4-djgms" (UID: "5ed5b503-410d-47fe-b817-f526f3df1542")
* Jul 01 03:04:05 addons-20200701030206-8084 kubelet[3731]: I0701 03:04:05.312902 3731 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "olm-operator-serviceaccount-token-p7zp2" (UniqueName: "kubernetes.io/secret/5ed5b503-410d-47fe-b817-f526f3df1542-olm-operator-serviceaccount-token-p7zp2") pod "packageserver-fc86cd5d4-djgms" (UID: "5ed5b503-410d-47fe-b817-f526f3df1542")
* Jul 01 03:04:05 addons-20200701030206-8084 kubelet[3731]: I0701 03:04:05.413832 3731 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "olm-operator-serviceaccount-token-p7zp2" (UniqueName: "kubernetes.io/secret/f6a28a72-ed19-4c82-ad04-87a8fc6fdc75-olm-operator-serviceaccount-token-p7zp2") pod "packageserver-fc86cd5d4-wgfqr" (UID: "f6a28a72-ed19-4c82-ad04-87a8fc6fdc75")
* Jul 01 03:04:05 addons-20200701030206-8084 kubelet[3731]: I0701 03:04:05.413923 3731 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/f6a28a72-ed19-4c82-ad04-87a8fc6fdc75-apiservice-cert") pod "packageserver-fc86cd5d4-wgfqr" (UID: "f6a28a72-ed19-4c82-ad04-87a8fc6fdc75")
* Jul 01 03:04:06 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:06.418476 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/packageserver-fc86cd5d4-djgms through plugin: invalid network status for
* Jul 01 03:04:06 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:06.425914 3731 pod_container_deletor.go:77] Container "47f59199ff8b2542ebb3b8f2df0ec48ce6ba2ab3da623f4dd2e66e8c3d3c34f2" not found in pod's containers
* Jul 01 03:04:06 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:06.440427 3731 pod_container_deletor.go:77] Container "740d3a15da583d6a597ee2c1e43b2c18ba003d2c1b42751671fa3775d5d84d5d" not found in pod's containers
* Jul 01 03:04:06 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:06.443913 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/packageserver-fc86cd5d4-wgfqr through plugin: invalid network status for
* Jul 01 03:04:07 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:07.457557 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/packageserver-fc86cd5d4-wgfqr through plugin: invalid network status for
* Jul 01 03:04:07 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:07.463213 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/packageserver-fc86cd5d4-djgms through plugin: invalid network status for
* Jul 01 03:04:13 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:13.563226 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/registry-proxy-7kmmq through plugin: invalid network status for
* Jul 01 03:04:25 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:25.730292 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/ingress-nginx-controller-7bb4c67d67-fkjkd through plugin: invalid network status for
* Jul 01 03:04:28 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:28.779848 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for
* Jul 01 03:04:29 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:29.811228 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/packageserver-fc86cd5d4-djgms through plugin: invalid network status for
* Jul 01 03:04:30 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:30.833048 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/packageserver-fc86cd5d4-wgfqr through plugin: invalid network status for
* Jul 01 03:04:38 addons-20200701030206-8084 kubelet[3731]: I0701 03:04:38.202932 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9d9dffb8b723aeb9f1f601740ed20469534698926a65c547ae1838a89c9cb6d5
* Jul 01 03:04:38 addons-20200701030206-8084 kubelet[3731]: I0701 03:04:38.265162 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 49554f7da428d44645a5ce3bd4f5629c994661cba159835c2bf649bd91a91e87
* Jul 01 03:04:38 addons-20200701030206-8084 kubelet[3731]: I0701 03:04:38.336890 3731 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-8hvxk" (UniqueName: "kubernetes.io/secret/065fd00f-b3cd-4b35-8896-7021306fecbb-default-token-8hvxk") pod "065fd00f-b3cd-4b35-8896-7021306fecbb" (UID: "065fd00f-b3cd-4b35-8896-7021306fecbb")
* Jul 01 03:04:38 addons-20200701030206-8084 kubelet[3731]: I0701 03:04:38.346186 3731 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/065fd00f-b3cd-4b35-8896-7021306fecbb-default-token-8hvxk" (OuterVolumeSpecName: "default-token-8hvxk") pod "065fd00f-b3cd-4b35-8896-7021306fecbb" (UID: "065fd00f-b3cd-4b35-8896-7021306fecbb"). InnerVolumeSpecName "default-token-8hvxk". PluginName "kubernetes.io/secret", VolumeGidValue ""
* Jul 01 03:04:38 addons-20200701030206-8084 kubelet[3731]: I0701 03:04:38.437405 3731 reconciler.go:319] Volume detached for volume "default-token-8hvxk" (UniqueName: "kubernetes.io/secret/065fd00f-b3cd-4b35-8896-7021306fecbb-default-token-8hvxk") on node "addons-20200701030206-8084" DevicePath ""
*
* ==> storage-provisioner [94232379c158] <==
-- /stdout --
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20200701030206-8084 -n addons-20200701030206-8084
helpers_test.go:254: (dbg) Run: kubectl --context addons-20200701030206-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:254: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: exec: "kubectl": executable file not found in $PATH (313ns)
helpers_test.go:256: kubectl --context addons-20200701030206-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: exec: "kubectl": executable file not found in $PATH
--- FAIL: TestAddons/parallel/HelmTiller (139.99s)
addons_test.go:293: tiller-deploy stabilized in 18.029826ms
addons_test.go:295: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:332: "tiller-deploy-78ff886c54-7kcct" [40b7a3ba-bbb3-4355-8399-0e9570a4d0c8] Running
addons_test.go:295: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.019518201s
addons_test.go:310: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version
addons_test.go:310: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exec: "kubectl": executable file not found in $PATH (336ns)
addons_test.go:310: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version
addons_test.go:310: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exec: "kubectl": executable file not found in $PATH (501ns)
addons_test.go:310: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version
addons_test.go:310: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exec: "kubectl": executable file not found in $PATH (424ns)
addons_test.go:310: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version
addons_test.go:310: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exec: "kubectl": executable file not found in $PATH (443ns)
addons_test.go:310: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version
addons_test.go:310: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exec: "kubectl": executable file not found in $PATH (849ns)
addons_test.go:310: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version
addons_test.go:310: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exec: "kubectl": executable file not found in $PATH (442ns)
addons_test.go:310: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version
addons_test.go:310: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exec: "kubectl": executable file not found in $PATH (475ns)
addons_test.go:310: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version
addons_test.go:310: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exec: "kubectl": executable file not found in $PATH (436ns)
addons_test.go:310: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version
addons_test.go:310: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exec: "kubectl": executable file not found in $PATH (433ns)
addons_test.go:310: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version
addons_test.go:310: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exec: "kubectl": executable file not found in $PATH (460ns)
addons_test.go:310: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version
addons_test.go:310: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exec: "kubectl": executable file not found in $PATH (1.283µs)
addons_test.go:310: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version
addons_test.go:310: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exec: "kubectl": executable file not found in $PATH (462ns)
addons_test.go:310: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version
addons_test.go:310: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exec: "kubectl": executable file not found in $PATH (1.302µs)
addons_test.go:324: failed checking helm tiller: exec: "kubectl": executable file not found in $PATH
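Note: every kubectl invocation in this test fails with exec: "kubectl": executable file not found in $PATH, i.e. the test harness shells out to kubectl but no kubectl binary is installed on the CI agent. A minimal pre-flight guard along these lines (a sketch only, not part of the minikube test suite or this Jenkins job) would surface the missing binary before the retry loop above even starts:

    # Hypothetical guard for the CI job: fail fast if kubectl is absent.
    if ! command -v kubectl >/dev/null 2>&1; then
      echo "kubectl not found in PATH; tests that shell out to kubectl will fail" >&2
      exit 1
    fi
    kubectl version --client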
addons_test.go:327: (dbg) Run: out/minikube-linux-amd64 -p addons-20200701030206-8084 addons disable helm-tiller --alsologtostderr -v=1
helpers_test.go:215: -----------------------post-mortem--------------------------------
helpers_test.go:232: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-20200701030206-8084 -n addons-20200701030206-8084
helpers_test.go:237: <<< TestAddons/parallel/HelmTiller FAILED: start of post-mortem logs <<<
helpers_test.go:238: ======> post-mortem[TestAddons/parallel/HelmTiller]: minikube logs <======
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 -p addons-20200701030206-8084 logs -n 25
helpers_test.go:240: (dbg) Done: out/minikube-linux-amd64 -p addons-20200701030206-8084 logs -n 25: (1.124447778s)
helpers_test.go:245: TestAddons/parallel/HelmTiller logs:
-- stdout --
* ==> Docker <==
* -- Logs begin at Wed 2020-07-01 03:02:14 UTC, end at Wed 2020-07-01 03:06:45 UTC. --
* Jul 01 03:04:29 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:29.264160126Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6815cdaec6e0a262bfafc87fa88eab7fcf036190a70f1d5687986c531a42fb9d/shim.sock" debug=false pid=6628
* Jul 01 03:04:29 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:29.953751693Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/636b722f9b872ae2b392ff7f4518d3777f898a782f30fde093c50c90dc789b8f/shim.sock" debug=false pid=6668
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.316757798Z" level=info msg="shim reaped" id=9d9dffb8b723aeb9f1f601740ed20469534698926a65c547ae1838a89c9cb6d5
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.317492282Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.389801388Z" level=info msg="shim reaped" id=49554f7da428d44645a5ce3bd4f5629c994661cba159835c2bf649bd91a91e87
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.408845696Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.935730704Z" level=info msg="shim reaped" id=4f853a2c4fb3a27395c5bbe725cc9e9ff5ac2fb56ce858cb3eae48e6c6f83ccb
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.936306640Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.994717950Z" level=info msg="shim reaped" id=4c6a2b7b2735c289a1fc97f3cc2dac77b43b57c8bd297228be350ad881f72f46
* Jul 01 03:04:38 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:38.010043710Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:04:43 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:43.808158458Z" level=info msg="shim reaped" id=ed93f8777ade8caff27f7b4453aafc2e44589369b308f24e02956d0a482dd602
* Jul 01 03:04:43 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:43.817866003Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:04:44 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:44.401425335Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ddb8a5980fb5b94077adae6392ab6acf22e22db7bb906787cb0e27ad0b2f15a7/shim.sock" debug=false pid=7303
* Jul 01 03:05:06 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:06.772431532Z" level=info msg="shim reaped" id=ddb8a5980fb5b94077adae6392ab6acf22e22db7bb906787cb0e27ad0b2f15a7
* Jul 01 03:05:06 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:06.782075375Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:05:27 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:27.725993923Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/71c43e9c50fe593f8f99accc3700632353f8b367aa99cd5b86a635bbc77b53f8/shim.sock" debug=false pid=7691
* Jul 01 03:05:48 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:48.089305645Z" level=info msg="shim reaped" id=71c43e9c50fe593f8f99accc3700632353f8b367aa99cd5b86a635bbc77b53f8
* Jul 01 03:05:48 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:48.098796476Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:06:15 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:15.747261169Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ccc6490d8f7f46427cfbd5deeccaf8d7cf83cc45ecc655e80a5b76b3ef7bba53/shim.sock" debug=false pid=8047
* Jul 01 03:06:43 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:43.805068045Z" level=info msg="shim reaped" id=ccc6490d8f7f46427cfbd5deeccaf8d7cf83cc45ecc655e80a5b76b3ef7bba53
* Jul 01 03:06:43 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:43.815307819Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:06:44 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:44.987579019Z" level=info msg="shim reaped" id=1bc07123ace4fe46946f8fdf0fd0dadf60278caa4383a10b25433a954068b080
* Jul 01 03:06:44 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:44.997749719Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:06:45 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:45.139852993Z" level=info msg="shim reaped" id=e8c2c1e0e0a62503a8ed73783cc2af78489b9bad9fe471ada17aac4e7bfd938e
* Jul 01 03:06:45 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:45.150300631Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
* ccc6490d8f7f4 65fedb276e53e 30 seconds ago Exited registry-server 3 d8512f3c21a09
* 636b722f9b872 quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 2 minutes ago Running packageserver 0 740d3a15da583
* 6815cdaec6e0a quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 2 minutes ago Running packageserver 0 47f59199ff8b2
* d920f932f040f quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:251e733bf41cdf726092e079d32eed51791746560fff4d59cf067508ed635287 2 minutes ago Running controller 0 eefa25270d8a6
* 39c3f696531d2 quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 2 minutes ago Running olm-operator 0 4a26317d80253
* 0b30ca3163d6c quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 2 minutes ago Running catalog-operator 0 87e032f179b67
* 1a30822b4f9be jettech/kube-webhook-certgen@sha256:c6f018afe5dfce02110b332ea75bb846144e65d4993c7534886d8505a6960357 2 minutes ago Exited patch 0 a2f179901974b
* f34af38da2a24 k8s.gcr.io/metrics-server-amd64@sha256:49a9f12f7067d11f42c803dbe61ed2c1299959ad85cb315b25ff7eef8e6b8892 2 minutes ago Running metrics-server 0 1b8c0d094c10b
* 1bc07123ace4f gcr.io/kubernetes-helm/tiller@sha256:59b6200a822ddb18577ca7120fb644a3297635f47d92db521836b39b74ad19e8 2 minutes ago Exited tiller 0 e8c2c1e0e0a62
* d6a261bca5222 67da37a9a360e 2 minutes ago Running coredns 0 d11b454b968e3
* 9e4cfc5738e04 jettech/kube-webhook-certgen@sha256:c6f018afe5dfce02110b332ea75bb846144e65d4993c7534886d8505a6960357 2 minutes ago Exited create 0 de667a00fefb0
* 94232379c1581 4689081edb103 2 minutes ago Running storage-provisioner 0 7896015c69c73
* 40c9a46cf08ab 3439b7546f29b 3 minutes ago Running kube-proxy 0 8df7717a34531
* a8673db5ff2ad 76216c34ed0c7 3 minutes ago Running kube-scheduler 0 69d249b151f2d
* 663dada323e98 303ce5db0e90d 3 minutes ago Running etcd 0 4777c338fb836
* 24d686838dec2 da26705ccb4b5 3 minutes ago Running kube-controller-manager 0 ff24f8e852b09
* b7ced5cccc0a4 7e28efa976bd1 3 minutes ago Running kube-apiserver 0 1456a98fec87b
*
* ==> coredns [d6a261bca522] <==
* .:53
* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
* CoreDNS-1.6.7
* linux/amd64, go1.13.6, da7f65b
*
* ==> describe nodes <==
* Name: addons-20200701030206-8084
* Roles: master
* Labels: beta.kubernetes.io/arch=amd64
* beta.kubernetes.io/os=linux
* kubernetes.io/arch=amd64
* kubernetes.io/hostname=addons-20200701030206-8084
* kubernetes.io/os=linux
* minikube.k8s.io/commit=8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f
* minikube.k8s.io/name=addons-20200701030206-8084
* minikube.k8s.io/updated_at=2020_07_01T03_03_34_0700
* minikube.k8s.io/version=v1.12.0-beta.0
* node-role.kubernetes.io/master=
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
* node.alpha.kubernetes.io/ttl: 0
* volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp: Wed, 01 Jul 2020 03:03:31 +0000
* Taints: <none>
* Unschedulable: false
* Lease:
* HolderIdentity: addons-20200701030206-8084
* AcquireTime: <unset>
* RenewTime: Wed, 01 Jul 2020 03:06:35 +0000
* Conditions:
* Type Status LastHeartbeatTime LastTransitionTime Reason Message
* ---- ------ ----------------- ------------------ ------ -------
* MemoryPressure False Wed, 01 Jul 2020 03:04:36 +0000 Wed, 01 Jul 2020 03:03:27 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
* DiskPressure False Wed, 01 Jul 2020 03:04:36 +0000 Wed, 01 Jul 2020 03:03:27 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
* PIDPressure False Wed, 01 Jul 2020 03:04:36 +0000 Wed, 01 Jul 2020 03:03:27 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
* Ready True Wed, 01 Jul 2020 03:04:36 +0000 Wed, 01 Jul 2020 03:03:45 +0000 KubeletReady kubelet is posting ready status
* Addresses:
* InternalIP: 192.168.39.105
* Hostname: addons-20200701030206-8084
* Capacity:
* cpu: 2
* ephemeral-storage: 16954224Ki
* hugepages-2Mi: 0
* memory: 2470872Ki
* pods: 110
* Allocatable:
* cpu: 2
* ephemeral-storage: 16954224Ki
* hugepages-2Mi: 0
* memory: 2470872Ki
* pods: 110
* System Info:
* Machine ID: 11d7f8acaa014dd1a88f3c5ba725298e
* System UUID: 11d7f8ac-aa01-4dd1-a88f-3c5ba725298e
* Boot ID: 3a2b8acb-8700-4c04-87f6-71cbb4607c14
* Kernel Version: 4.19.107
* OS Image: Buildroot 2019.02.10
* Operating System: linux
* Architecture: amd64
* Container Runtime Version: docker://19.3.8
* Kubelet Version: v1.18.3
* Kube-Proxy Version: v1.18.3
* Non-terminated Pods: (15 in total)
* Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
* --------- ---- ------------ ---------- --------------- ------------- ---
* kube-system coredns-66bff467f8-hj7n4 100m (5%) 0 (0%) 70Mi (2%) 170Mi (7%) 3m4s
* kube-system etcd-addons-20200701030206-8084 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m9s
* kube-system ingress-nginx-controller-7bb4c67d67-fkjkd 100m (5%) 0 (0%) 90Mi (3%) 0 (0%) 3m4s
* kube-system kube-apiserver-addons-20200701030206-8084 250m (12%) 0 (0%) 0 (0%) 0 (0%) 3m9s
* kube-system kube-controller-manager-addons-20200701030206-8084 200m (10%) 0 (0%) 0 (0%) 0 (0%) 3m9s
* kube-system kube-proxy-8bljr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m4s
* kube-system kube-scheduler-addons-20200701030206-8084 100m (5%) 0 (0%) 0 (0%) 0 (0%) 3m9s
* kube-system metrics-server-7bc6d75975-qxr52 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m4s
* kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m9s
* kube-system tiller-deploy-78ff886c54-7kcct 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m4s
* olm catalog-operator-86f777cc59-n2z95 10m (0%) 0 (0%) 80Mi (3%) 0 (0%) 3m4s
* olm olm-operator-5fd48d8cd4-sh5bk 10m (0%) 0 (0%) 160Mi (6%) 0 (0%) 3m4s
* olm operatorhubio-catalog-9h9sw 10m (0%) 100m (5%) 50Mi (2%) 100Mi (4%) 2m44s
* olm packageserver-fc86cd5d4-djgms 10m (0%) 0 (0%) 50Mi (2%) 0 (0%) 2m40s
* olm packageserver-fc86cd5d4-wgfqr 10m (0%) 0 (0%) 50Mi (2%) 0 (0%) 2m40s
* Allocated resources:
* (Total limits may be over 100 percent, i.e., overcommitted.)
* Resource Requests Limits
* -------- -------- ------
* cpu 800m (40%) 100m (5%)
* memory 550Mi (22%) 270Mi (11%)
* ephemeral-storage 0 (0%) 0 (0%)
* hugepages-2Mi 0 (0%) 0 (0%)
* Events:
* Type Reason Age From Message
* ---- ------ ---- ---- -------
* Normal Starting 3m10s kubelet, addons-20200701030206-8084 Starting kubelet.
* Normal NodeHasSufficientMemory 3m10s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 3m10s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 3m10s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeHasSufficientPID
* Normal NodeAllocatableEnforced 3m9s kubelet, addons-20200701030206-8084 Updated Node Allocatable limit across pods
* Normal Starting 3m3s kube-proxy, addons-20200701030206-8084 Starting kube-proxy.
* Normal NodeReady 3m kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeReady
*
* ==> dmesg <==
* [ +0.000002] ? finish_task_switch+0x6a/0x270
* [ +0.000001] mem_cgroup_try_charge+0x81/0x170
* [ +0.000002] __add_to_page_cache_locked+0x5f/0x200
* [ +0.000025] add_to_page_cache_lru+0x45/0xe0
* [ +0.000002] generic_file_read_iter+0x77f/0x9b0
* [ +0.000002] ? _cond_resched+0x10/0x40
* [ +0.000004] ? __inode_security_revalidate+0x43/0x60
* [ +0.000001] do_iter_readv_writev+0x16b/0x190
* [ +0.000002] do_iter_read+0xc3/0x170
* [ +0.000005] ovl_read_iter+0xb1/0x100 [overlay]
* [ +0.000002] __vfs_read+0x109/0x170
* [ +0.000002] vfs_read+0x84/0x130
* [ +0.000001] ksys_pread64+0x6c/0x80
* [ +0.000003] do_syscall_64+0x49/0x110
* [ +0.000001] entry_SYSCALL_64_after_hwframe+0x44/0xa9
* [ +0.000001] RIP: 0033:0xe1204f
* [ +0.000001] Code: 0f 05 48 83 f8 da 75 08 4c 89 c0 48 89 d6 0f 05 c3 48 89 f8 4d 89 c2 48 89 f7 4d 89 c8 48 89 d6 4c 8b 4c 24 08 48 89 ca 0f 05 <c3> e9 e1 ff ff ff 41 54 49 89 f0 55 53 89 d3 85 c9 74 05 b9 80 00
* [ +0.000001] RSP: 002b:00007f369435c888 EFLAGS: 00000246 ORIG_RAX: 0000000000000011
* [ +0.000001] RAX: ffffffffffffffda RBX: 0000000000e20000 RCX: 0000000000e1204f
* [ +0.000001] RDX: 0000000000001000 RSI: 0000000006448448 RDI: 0000000000000025
* [ +0.000000] RBP: 0000000000001000 R08: 0000000000000000 R09: 0000000000000000
* [ +0.000001] R10: 0000000000e20000 R11: 0000000000000246 R12: 0000000000001000
* [ +0.000000] R13: 0000000006448448 R14: 0000000006448448 R15: 00000000039a0340
* [ +0.000195] Memory cgroup out of memory: Kill process 8065 (registry-server) score 2044 or sacrifice child
* [ +0.000044] Killed process 8065 (registry-server) total-vm:188040kB, anon-rss:96604kB, file-rss:14388kB, shmem-rss:0kB
*
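Note: the memory-cgroup OOM kill in the dmesg output above (registry-server, pid 8065) lines up with the registry-server container listed as Exited (attempt 3) in the container status section; the olm/operatorhubio-catalog-9h9sw pod is the only pod in the node description carrying a 100Mi memory limit, so it is the likely victim. Purely as an illustration (pod name taken from this log; the cluster is torn down when the test ends), a check like this would confirm an OOMKilled termination:

    # Hypothetical follow-up: print the last termination reason of the
    # catalog pod's containers (expected to be "OOMKilled").
    kubectl -n olm get pod operatorhubio-catalog-9h9sw \
      -o jsonpath='{.status.containerStatuses[*].lastState.terminated.reason}'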
* ==> etcd [663dada323e9] <==
* raft2020/07/01 03:03:28 INFO: 38dbae10e7efb596 became leader at term 2
* raft2020/07/01 03:03:28 INFO: raft.node: 38dbae10e7efb596 elected leader 38dbae10e7efb596 at term 2
* 2020-07-01 03:03:28.332386 I | etcdserver: setting up the initial cluster version to 3.4
* 2020-07-01 03:03:28.332551 I | etcdserver: published {Name:addons-20200701030206-8084 ClientURLs:[https://192.168.39.105:2379]} to cluster f45b5855e490ef48
* 2020-07-01 03:03:28.332600 I | embed: ready to serve client requests
* 2020-07-01 03:03:28.333537 I | embed: serving client requests on 127.0.0.1:2379
* 2020-07-01 03:03:28.334586 I | embed: ready to serve client requests
* 2020-07-01 03:03:28.337193 I | embed: serving client requests on 192.168.39.105:2379
* 2020-07-01 03:03:28.338344 N | etcdserver/membership: set the initial cluster version to 3.4
* 2020-07-01 03:03:28.338411 I | etcdserver/api: enabled capabilities for version 3.4
* 2020-07-01 03:04:04.323170 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/clusterserviceversions/olm/packageserver\" " with result "range_response_count:1 size:9305" took too long (105.645771ms) to execute
* 2020-07-01 03:04:05.236139 W | etcdserver: read-only range request "key:\"/registry/endpointslices/olm/v1-packages-operators-coreos-com-gfbjh\" " with result "range_response_count:1 size:953" took too long (127.168286ms) to execute
* 2020-07-01 03:04:05.805401 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (119.574479ms) to execute
* 2020-07-01 03:04:05.808506 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (122.545256ms) to execute
* 2020-07-01 03:04:05.820836 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/clusterserviceversions/olm/packageserver\" " with result "range_response_count:1 size:9775" took too long (135.01854ms) to execute
* 2020-07-01 03:04:08.142775 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (111.648032ms) to execute
* 2020-07-01 03:04:08.143088 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (231.508641ms) to execute
* 2020-07-01 03:04:08.143309 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/catalogsources/\" range_end:\"/registry/operators.coreos.com/catalogsources0\" " with result "range_response_count:1 size:2019" took too long (159.192904ms) to execute
* 2020-07-01 03:04:08.143552 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (113.041301ms) to execute
* 2020-07-01 03:04:16.739873 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59653" took too long (208.500076ms) to execute
* 2020-07-01 03:04:16.740802 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (228.630795ms) to execute
* 2020-07-01 03:04:23.380208 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (268.088725ms) to execute
* 2020-07-01 03:04:29.198339 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59989" took too long (110.334524ms) to execute
* 2020-07-01 03:04:29.198868 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59989" took too long (115.991158ms) to execute
* 2020-07-01 03:04:29.199825 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59989" took too long (116.97369ms) to execute
*
* ==> kernel <==
* 03:06:45 up 4 min, 0 users, load average: 0.45, 0.74, 0.36
* Linux addons-20200701030206-8084 4.19.107 #1 SMP Thu May 28 15:07:17 PDT 2020 x86_64 GNU/Linux
* PRETTY_NAME="Buildroot 2019.02.10"
*
* ==> kube-apiserver [b7ced5cccc0a] <==
* ERROR $root.definitions.com.coreos.operators.v1alpha1.ClusterServiceVersion.properties.spec.properties.install.properties.spec.properties.deployments.items.<array>.properties.spec.properties.template.properties.spec.properties.initContainers.items.<array>.properties.startupProbe.properties.tcpSocket.properties.port has invalid property: anyOf
* I0701 03:03:39.786250 1 client.go:361] parsed scheme: "endpoint"
* I0701 03:03:39.786317 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}]
* I0701 03:03:39.809953 1 controller.go:606] quota admission added evaluator for: clusterserviceversions.operators.coreos.com
* I0701 03:03:39.838120 1 client.go:361] parsed scheme: "endpoint"
* I0701 03:03:39.838409 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}]
* I0701 03:03:39.851362 1 controller.go:606] quota admission added evaluator for: catalogsources.operators.coreos.com
* I0701 03:03:41.148400 1 controller.go:606] quota admission added evaluator for: replicasets.apps
* I0701 03:03:41.407486 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
* W0701 03:03:42.028314 1 handler_proxy.go:102] no RequestInfo found in the context
* E0701 03:03:42.028373 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
* I0701 03:03:42.028384 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
* E0701 03:03:57.879055 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
* I0701 03:03:57.879070 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
* I0701 03:04:01.803924 1 client.go:361] parsed scheme: "endpoint"
* I0701 03:04:01.803984 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}]
* I0701 03:04:01.886189 1 client.go:361] parsed scheme: "endpoint"
* I0701 03:04:01.886333 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}]
* W0701 03:04:06.880277 1 handler_proxy.go:102] no RequestInfo found in the context
* E0701 03:04:06.880349 1 controller.go:114] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
* I0701 03:04:06.880361 1 controller.go:127] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue.
* E0701 03:04:57.883519 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
* I0701 03:04:57.883592 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
*
* ==> kube-controller-manager [24d686838dec] <==
* I0701 03:03:46.000600 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"registry-proxy", UID:"fe3c5a0b-c38c-4314-9a05-53037ff158f0", APIVersion:"apps/v1", ResourceVersion:"508", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: registry-proxy-7kmmq
* I0701 03:03:50.871083 1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
* I0701 03:03:52.669012 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"ingress-nginx-admission-create", UID:"1ae488a6-abe8-4bc0-965b-c398118daf32", APIVersion:"batch/v1", ResourceVersion:"387", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
* I0701 03:03:57.805105 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"ingress-nginx-admission-patch", UID:"f6bb03c0-6ca4-47f0-900a-b1e273dbb951", APIVersion:"batch/v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
* I0701 03:04:05.000820 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"olm", Name:"packageserver", UID:"4f1d1f78-eb9b-4bf3-9753-09b07a182891", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set packageserver-fc86cd5d4 to 2
* I0701 03:04:05.059501 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"olm", Name:"packageserver-fc86cd5d4", UID:"c9874dfe-0029-4742-a3d8-1f87cb754bb4", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-fc86cd5d4-djgms
* I0701 03:04:05.170272 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"olm", Name:"packageserver-fc86cd5d4", UID:"c9874dfe-0029-4742-a3d8-1f87cb754bb4", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-fc86cd5d4-wgfqr
* E0701 03:04:11.881078 1 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: packages.operators.coreos.com/v1: the server is currently unable to handle the request
* I0701 03:04:11.881240 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for catalogsources.operators.coreos.com
* I0701 03:04:11.881307 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for operatorgroups.operators.coreos.com
* I0701 03:04:11.881325 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for subscriptions.operators.coreos.com
* I0701 03:04:11.881345 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for installplans.operators.coreos.com
* I0701 03:04:11.881361 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for clusterserviceversions.operators.coreos.com
* I0701 03:04:11.881406 1 shared_informer.go:223] Waiting for caches to sync for resource quota
* I0701 03:04:11.981723 1 shared_informer.go:230] Caches are synced for resource quota
* I0701 03:04:13.123021 1 request.go:621] Throttling request took 1.047575349s, request: GET:https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
* W0701 03:04:13.926487 1 garbagecollector.go:644] failed to discover some groups: map[packages.operators.coreos.com/v1:the server is currently unable to handle the request]
* E0701 03:04:14.127948 1 memcache.go:206] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
* E0701 03:04:14.428219 1 memcache.go:111] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
* I0701 03:04:14.429209 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
* I0701 03:04:14.429275 1 shared_informer.go:230] Caches are synced for garbage collector
* E0701 03:04:39.175318 1 clusterroleaggregation_controller.go:181] olm-operators-edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "olm-operators-edit": the object has been modified; please apply your changes to the latest version and try again
* E0701 03:04:39.185205 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
* E0701 03:04:39.186128 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
* E0701 03:04:39.204080 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
*
* ==> kube-proxy [40c9a46cf08a] <==
* W0701 03:03:42.853505 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
* I0701 03:03:42.861890 1 node.go:136] Successfully retrieved node IP: 192.168.39.105
* I0701 03:03:42.861937 1 server_others.go:186] Using iptables Proxier.
* W0701 03:03:42.861945 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
* I0701 03:03:42.861949 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
* I0701 03:03:42.862522 1 server.go:583] Version: v1.18.3
* I0701 03:03:42.863107 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
* I0701 03:03:42.863131 1 conntrack.go:52] Setting nf_conntrack_max to 131072
* I0701 03:03:42.863538 1 conntrack.go:83] Setting conntrack hashsize to 32768
* I0701 03:03:42.867910 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
* I0701 03:03:42.868306 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
* I0701 03:03:42.871109 1 config.go:315] Starting service config controller
* I0701 03:03:42.871148 1 shared_informer.go:223] Waiting for caches to sync for service config
* I0701 03:03:42.871165 1 config.go:133] Starting endpoints config controller
* I0701 03:03:42.871173 1 shared_informer.go:223] Waiting for caches to sync for endpoints config
* I0701 03:03:42.971416 1 shared_informer.go:230] Caches are synced for endpoints config
* I0701 03:03:42.971523 1 shared_informer.go:230] Caches are synced for service config
*
* ==> kube-scheduler [a8673db5ff2a] <==
* W0701 03:03:31.650803 1 authentication.go:40] Authentication is disabled
* I0701 03:03:31.650814 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
* I0701 03:03:31.652329 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
* I0701 03:03:31.652574 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0701 03:03:31.652711 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0701 03:03:31.652730 1 tlsconfig.go:240] Starting DynamicServingCertificateController
* E0701 03:03:31.657008 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0701 03:03:31.658164 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0701 03:03:31.658324 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E0701 03:03:31.658888 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0701 03:03:31.659056 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0701 03:03:31.659357 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E0701 03:03:31.659504 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0701 03:03:31.659723 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E0701 03:03:31.659789 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0701 03:03:32.465153 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0701 03:03:32.497519 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0701 03:03:32.559891 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* I0701 03:03:35.752931 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* E0701 03:03:40.948371 1 factory.go:503] pod: kube-system/ingress-nginx-admission-create-59b72 is already present in the active queue
* E0701 03:03:40.967858 1 factory.go:503] pod: kube-system/ingress-nginx-admission-patch-f8zdn is already present in the active queue
* E0701 03:03:41.332123 1 factory.go:503] pod: kube-system/tiller-deploy-78ff886c54-7kcct is already present in the active queue
* E0701 03:03:41.345197 1 factory.go:503] pod: kube-system/metrics-server-7bc6d75975-qxr52 is already present in the active queue
* E0701 03:03:41.367475 1 factory.go:503] pod: olm/olm-operator-5fd48d8cd4-sh5bk is already present in the active queue
* E0701 03:03:41.389016 1 factory.go:503] pod: kube-system/ingress-nginx-controller-7bb4c67d67-fkjkd is already present in the active queue
*
* ==> kubelet <==
* -- Logs begin at Wed 2020-07-01 03:02:14 UTC, end at Wed 2020-07-01 03:06:45 UTC. --
* Jul 01 03:05:27 addons-20200701030206-8084 kubelet[3731]: W0701 03:05:27.859750 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for
* Jul 01 03:05:29 addons-20200701030206-8084 kubelet[3731]: W0701 03:05:29.166691 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for
* Jul 01 03:05:48 addons-20200701030206-8084 kubelet[3731]: W0701 03:05:48.447519 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for
* Jul 01 03:05:48 addons-20200701030206-8084 kubelet[3731]: I0701 03:05:48.453068 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 71c43e9c50fe593f8f99accc3700632353f8b367aa99cd5b86a635bbc77b53f8
* Jul 01 03:05:48 addons-20200701030206-8084 kubelet[3731]: E0701 03:05:48.453371 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 20s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:05:48 addons-20200701030206-8084 kubelet[3731]: I0701 03:05:48.453959 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ddb8a5980fb5b94077adae6392ab6acf22e22db7bb906787cb0e27ad0b2f15a7
* Jul 01 03:05:49 addons-20200701030206-8084 kubelet[3731]: W0701 03:05:49.464107 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for
* Jul 01 03:05:52 addons-20200701030206-8084 kubelet[3731]: I0701 03:05:52.774538 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 71c43e9c50fe593f8f99accc3700632353f8b367aa99cd5b86a635bbc77b53f8
* Jul 01 03:05:52 addons-20200701030206-8084 kubelet[3731]: E0701 03:05:52.775578 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 20s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:06:03 addons-20200701030206-8084 kubelet[3731]: I0701 03:06:03.630022 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 71c43e9c50fe593f8f99accc3700632353f8b367aa99cd5b86a635bbc77b53f8
* Jul 01 03:06:03 addons-20200701030206-8084 kubelet[3731]: E0701 03:06:03.630323 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 20s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:06:15 addons-20200701030206-8084 kubelet[3731]: I0701 03:06:15.630056 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 71c43e9c50fe593f8f99accc3700632353f8b367aa99cd5b86a635bbc77b53f8
* Jul 01 03:06:15 addons-20200701030206-8084 kubelet[3731]: W0701 03:06:15.768744 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for
* Jul 01 03:06:17 addons-20200701030206-8084 kubelet[3731]: W0701 03:06:17.009313 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for
* Jul 01 03:06:44 addons-20200701030206-8084 kubelet[3731]: W0701 03:06:44.338520 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for
* Jul 01 03:06:44 addons-20200701030206-8084 kubelet[3731]: I0701 03:06:44.347309 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 71c43e9c50fe593f8f99accc3700632353f8b367aa99cd5b86a635bbc77b53f8
* Jul 01 03:06:44 addons-20200701030206-8084 kubelet[3731]: I0701 03:06:44.347710 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ccc6490d8f7f46427cfbd5deeccaf8d7cf83cc45ecc655e80a5b76b3ef7bba53
* Jul 01 03:06:44 addons-20200701030206-8084 kubelet[3731]: E0701 03:06:44.350850 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 40s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:06:45 addons-20200701030206-8084 kubelet[3731]: W0701 03:06:45.360047 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for
* Jul 01 03:06:45 addons-20200701030206-8084 kubelet[3731]: I0701 03:06:45.371917 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 1bc07123ace4fe46946f8fdf0fd0dadf60278caa4383a10b25433a954068b080
* Jul 01 03:06:45 addons-20200701030206-8084 kubelet[3731]: I0701 03:06:45.411378 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 1bc07123ace4fe46946f8fdf0fd0dadf60278caa4383a10b25433a954068b080
* Jul 01 03:06:45 addons-20200701030206-8084 kubelet[3731]: E0701 03:06:45.414996 3731 remote_runtime.go:295] ContainerStatus "1bc07123ace4fe46946f8fdf0fd0dadf60278caa4383a10b25433a954068b080" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 1bc07123ace4fe46946f8fdf0fd0dadf60278caa4383a10b25433a954068b080
* Jul 01 03:06:45 addons-20200701030206-8084 kubelet[3731]: I0701 03:06:45.419789 3731 reconciler.go:196] operationExecutor.UnmountVolume started for volume "tiller-token-rw5b2" (UniqueName: "kubernetes.io/secret/40b7a3ba-bbb3-4355-8399-0e9570a4d0c8-tiller-token-rw5b2") pod "40b7a3ba-bbb3-4355-8399-0e9570a4d0c8" (UID: "40b7a3ba-bbb3-4355-8399-0e9570a4d0c8")
* Jul 01 03:06:45 addons-20200701030206-8084 kubelet[3731]: I0701 03:06:45.431558 3731 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40b7a3ba-bbb3-4355-8399-0e9570a4d0c8-tiller-token-rw5b2" (OuterVolumeSpecName: "tiller-token-rw5b2") pod "40b7a3ba-bbb3-4355-8399-0e9570a4d0c8" (UID: "40b7a3ba-bbb3-4355-8399-0e9570a4d0c8"). InnerVolumeSpecName "tiller-token-rw5b2". PluginName "kubernetes.io/secret", VolumeGidValue ""
* Jul 01 03:06:45 addons-20200701030206-8084 kubelet[3731]: I0701 03:06:45.520128 3731 reconciler.go:319] Volume detached for volume "tiller-token-rw5b2" (UniqueName: "kubernetes.io/secret/40b7a3ba-bbb3-4355-8399-0e9570a4d0c8-tiller-token-rw5b2") on node "addons-20200701030206-8084" DevicePath ""
*
* ==> storage-provisioner [94232379c158] <==
-- /stdout --
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20200701030206-8084 -n addons-20200701030206-8084
helpers_test.go:254: (dbg) Run: kubectl --context addons-20200701030206-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:254: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: exec: "kubectl": executable file not found in $PATH (547ns)
helpers_test.go:256: kubectl --context addons-20200701030206-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: exec: "kubectl": executable file not found in $PATH
--- FAIL: TestAddons/parallel/Ingress (343.33s)
addons_test.go:100: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "kube-system" ...
helpers_test.go:332: "ingress-nginx-admission-create-59b72" [6375af40-e914-4b59-8cd2-35cb294ac5a4] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:100: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 13.892629ms
addons_test.go:105: (dbg) Run: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml
addons_test.go:105: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml: exec: "kubectl": executable file not found in $PATH (405ns)
addons_test.go:105: (dbg) Run: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml
addons_test.go:105: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml: exec: "kubectl": executable file not found in $PATH (442ns)
addons_test.go:105: (dbg) Run: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml
addons_test.go:105: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml: exec: "kubectl": executable file not found in $PATH (454ns)
addons_test.go:105: (dbg) Run: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml
addons_test.go:105: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml: exec: "kubectl": executable file not found in $PATH (836ns)
addons_test.go:105: (dbg) Run: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml
addons_test.go:105: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml: exec: "kubectl": executable file not found in $PATH (523ns)
addons_test.go:105: (dbg) Run: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml
addons_test.go:105: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml: exec: "kubectl": executable file not found in $PATH (740ns)
addons_test.go:105: (dbg) Run: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml
addons_test.go:105: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml: exec: "kubectl": executable file not found in $PATH (433ns)
addons_test.go:105: (dbg) Run: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml
addons_test.go:105: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml: exec: "kubectl": executable file not found in $PATH (470ns)
addons_test.go:105: (dbg) Run: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml
addons_test.go:105: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml: exec: "kubectl": executable file not found in $PATH (450ns)
addons_test.go:105: (dbg) Run: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml
addons_test.go:105: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml: exec: "kubectl": executable file not found in $PATH (506ns)
addons_test.go:105: (dbg) Run: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml
addons_test.go:105: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml: exec: "kubectl": executable file not found in $PATH (898ns)
addons_test.go:116: failed to create ingress: exec: "kubectl": executable file not found in $PATH
addons_test.go:119: (dbg) Run: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:119: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-pod-svc.yaml: exec: "kubectl": executable file not found in $PATH (76ns)
addons_test.go:121: failed to kubectl replace nginx-pod-svc. args "kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-pod-svc.yaml". exec: "kubectl": executable file not found in $PATH
addons_test.go:124: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
addons_test.go:124: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 4m0s: timed out waiting for the condition ****
addons_test.go:124: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20200701030206-8084 -n addons-20200701030206-8084
addons_test.go:124: TestAddons/parallel/Ingress: showing logs for failed pods as of 2020-07-01 03:10:07.970072088 +0000 UTC m=+813.071360688
addons_test.go:125: failed waiting for nginx pod: run=nginx within 4m0s: timed out waiting for the condition
helpers_test.go:215: -----------------------post-mortem--------------------------------
helpers_test.go:232: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-20200701030206-8084 -n addons-20200701030206-8084
helpers_test.go:237: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:238: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 -p addons-20200701030206-8084 logs -n 25
helpers_test.go:245: TestAddons/parallel/Ingress logs:
-- stdout --
* ==> Docker <==
* -- Logs begin at Wed 2020-07-01 03:02:14 UTC, end at Wed 2020-07-01 03:10:08 UTC. --
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.936306640Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.994717950Z" level=info msg="shim reaped" id=4c6a2b7b2735c289a1fc97f3cc2dac77b43b57c8bd297228be350ad881f72f46
* Jul 01 03:04:38 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:38.010043710Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:04:43 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:43.808158458Z" level=info msg="shim reaped" id=ed93f8777ade8caff27f7b4453aafc2e44589369b308f24e02956d0a482dd602
* Jul 01 03:04:43 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:43.817866003Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:04:44 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:44.401425335Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ddb8a5980fb5b94077adae6392ab6acf22e22db7bb906787cb0e27ad0b2f15a7/shim.sock" debug=false pid=7303
* Jul 01 03:05:06 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:06.772431532Z" level=info msg="shim reaped" id=ddb8a5980fb5b94077adae6392ab6acf22e22db7bb906787cb0e27ad0b2f15a7
* Jul 01 03:05:06 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:06.782075375Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:05:27 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:27.725993923Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/71c43e9c50fe593f8f99accc3700632353f8b367aa99cd5b86a635bbc77b53f8/shim.sock" debug=false pid=7691
* Jul 01 03:05:48 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:48.089305645Z" level=info msg="shim reaped" id=71c43e9c50fe593f8f99accc3700632353f8b367aa99cd5b86a635bbc77b53f8
* Jul 01 03:05:48 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:48.098796476Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:06:15 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:15.747261169Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ccc6490d8f7f46427cfbd5deeccaf8d7cf83cc45ecc655e80a5b76b3ef7bba53/shim.sock" debug=false pid=8047
* Jul 01 03:06:43 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:43.805068045Z" level=info msg="shim reaped" id=ccc6490d8f7f46427cfbd5deeccaf8d7cf83cc45ecc655e80a5b76b3ef7bba53
* Jul 01 03:06:43 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:43.815307819Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:06:44 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:44.987579019Z" level=info msg="shim reaped" id=1bc07123ace4fe46946f8fdf0fd0dadf60278caa4383a10b25433a954068b080
* Jul 01 03:06:44 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:44.997749719Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:06:45 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:45.139852993Z" level=info msg="shim reaped" id=e8c2c1e0e0a62503a8ed73783cc2af78489b9bad9fe471ada17aac4e7bfd938e
* Jul 01 03:06:45 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:45.150300631Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:07:32 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:07:32.714468798Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/208781ffec9d6ee31318647e8ce976b29384e7b638b2006baff53d73bdd97b9b/shim.sock" debug=false pid=8814
* Jul 01 03:08:15 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:08:15.734798119Z" level=error msg="stream copy error: reading from a closed fifo"
* Jul 01 03:08:15 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:08:15.734807838Z" level=error msg="stream copy error: reading from a closed fifo"
* Jul 01 03:08:15 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:08:15.738961780Z" level=error msg="Error running exec 2f6e2b249139d96c0e8499b70c146bae118aca7838bb26b2fbf9815155067bbb in container: OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused \"read init-p: connection reset by peer\": unknown"
* Jul 01 03:08:15 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:08:15.789835802Z" level=info msg="shim reaped" id=208781ffec9d6ee31318647e8ce976b29384e7b638b2006baff53d73bdd97b9b
* Jul 01 03:08:15 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:08:15.800056647Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:09:40 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:09:40.710837642Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6197a0fa774c158d7e830ac228320439540a8ddaae85f40b359301345a689b5c/shim.sock" debug=false pid=9653
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
* 6197a0fa774c1 65fedb276e53e 28 seconds ago Running registry-server 5 d8512f3c21a09
* 208781ffec9d6 65fedb276e53e 2 minutes ago Exited registry-server 4 d8512f3c21a09
* 636b722f9b872 quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 5 minutes ago Running packageserver 0 740d3a15da583
* 6815cdaec6e0a quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 5 minutes ago Running packageserver 0 47f59199ff8b2
* d920f932f040f quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:251e733bf41cdf726092e079d32eed51791746560fff4d59cf067508ed635287 5 minutes ago Running controller 0 eefa25270d8a6
* 39c3f696531d2 quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 6 minutes ago Running olm-operator 0 4a26317d80253
* 0b30ca3163d6c quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 6 minutes ago Running catalog-operator 0 87e032f179b67
* 1a30822b4f9be jettech/kube-webhook-certgen@sha256:c6f018afe5dfce02110b332ea75bb846144e65d4993c7534886d8505a6960357 6 minutes ago Exited patch 0 a2f179901974b
* f34af38da2a24 k8s.gcr.io/metrics-server-amd64@sha256:49a9f12f7067d11f42c803dbe61ed2c1299959ad85cb315b25ff7eef8e6b8892 6 minutes ago Running metrics-server 0 1b8c0d094c10b
* d6a261bca5222 67da37a9a360e 6 minutes ago Running coredns 0 d11b454b968e3
* 9e4cfc5738e04 jettech/kube-webhook-certgen@sha256:c6f018afe5dfce02110b332ea75bb846144e65d4993c7534886d8505a6960357 6 minutes ago Exited create 0 de667a00fefb0
* 94232379c1581 4689081edb103 6 minutes ago Running storage-provisioner 0 7896015c69c73
* 40c9a46cf08ab 3439b7546f29b 6 minutes ago Running kube-proxy 0 8df7717a34531
* a8673db5ff2ad 76216c34ed0c7 6 minutes ago Running kube-scheduler 0 69d249b151f2d
* 663dada323e98 303ce5db0e90d 6 minutes ago Running etcd 0 4777c338fb836
* 24d686838dec2 da26705ccb4b5 6 minutes ago Running kube-controller-manager 0 ff24f8e852b09
* b7ced5cccc0a4 7e28efa976bd1 6 minutes ago Running kube-apiserver 0 1456a98fec87b
*
* ==> coredns [d6a261bca522] <==
* .:53
* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
* CoreDNS-1.6.7
* linux/amd64, go1.13.6, da7f65b
*
* ==> describe nodes <==
* Name: addons-20200701030206-8084
* Roles: master
* Labels: beta.kubernetes.io/arch=amd64
* beta.kubernetes.io/os=linux
* kubernetes.io/arch=amd64
* kubernetes.io/hostname=addons-20200701030206-8084
* kubernetes.io/os=linux
* minikube.k8s.io/commit=8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f
* minikube.k8s.io/name=addons-20200701030206-8084
* minikube.k8s.io/updated_at=2020_07_01T03_03_34_0700
* minikube.k8s.io/version=v1.12.0-beta.0
* node-role.kubernetes.io/master=
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
* node.alpha.kubernetes.io/ttl: 0
* volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp: Wed, 01 Jul 2020 03:03:31 +0000
* Taints: <none>
* Unschedulable: false
* Lease:
* HolderIdentity: addons-20200701030206-8084
* AcquireTime: <unset>
* RenewTime: Wed, 01 Jul 2020 03:10:05 +0000
* Conditions:
* Type Status LastHeartbeatTime LastTransitionTime Reason Message
* ---- ------ ----------------- ------------------ ------ -------
* MemoryPressure False Wed, 01 Jul 2020 03:09:37 +0000 Wed, 01 Jul 2020 03:03:27 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
* DiskPressure False Wed, 01 Jul 2020 03:09:37 +0000 Wed, 01 Jul 2020 03:03:27 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
* PIDPressure False Wed, 01 Jul 2020 03:09:37 +0000 Wed, 01 Jul 2020 03:03:27 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
* Ready True Wed, 01 Jul 2020 03:09:37 +0000 Wed, 01 Jul 2020 03:03:45 +0000 KubeletReady kubelet is posting ready status
* Addresses:
* InternalIP: 192.168.39.105
* Hostname: addons-20200701030206-8084
* Capacity:
* cpu: 2
* ephemeral-storage: 16954224Ki
* hugepages-2Mi: 0
* memory: 2470872Ki
* pods: 110
* Allocatable:
* cpu: 2
* ephemeral-storage: 16954224Ki
* hugepages-2Mi: 0
* memory: 2470872Ki
* pods: 110
* System Info:
* Machine ID: 11d7f8acaa014dd1a88f3c5ba725298e
* System UUID: 11d7f8ac-aa01-4dd1-a88f-3c5ba725298e
* Boot ID: 3a2b8acb-8700-4c04-87f6-71cbb4607c14
* Kernel Version: 4.19.107
* OS Image: Buildroot 2019.02.10
* Operating System: linux
* Architecture: amd64
* Container Runtime Version: docker://19.3.8
* Kubelet Version: v1.18.3
* Kube-Proxy Version: v1.18.3
* Non-terminated Pods: (14 in total)
* Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
* --------- ---- ------------ ---------- --------------- ------------- ---
* kube-system coredns-66bff467f8-hj7n4 100m (5%) 0 (0%) 70Mi (2%) 170Mi (7%) 6m27s
* kube-system etcd-addons-20200701030206-8084 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m32s
* kube-system ingress-nginx-controller-7bb4c67d67-fkjkd 100m (5%) 0 (0%) 90Mi (3%) 0 (0%) 6m27s
* kube-system kube-apiserver-addons-20200701030206-8084 250m (12%) 0 (0%) 0 (0%) 0 (0%) 6m32s
* kube-system kube-controller-manager-addons-20200701030206-8084 200m (10%) 0 (0%) 0 (0%) 0 (0%) 6m32s
* kube-system kube-proxy-8bljr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m27s
* kube-system kube-scheduler-addons-20200701030206-8084 100m (5%) 0 (0%) 0 (0%) 0 (0%) 6m32s
* kube-system metrics-server-7bc6d75975-qxr52 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m27s
* kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m32s
* olm catalog-operator-86f777cc59-n2z95 10m (0%) 0 (0%) 80Mi (3%) 0 (0%) 6m27s
* olm olm-operator-5fd48d8cd4-sh5bk 10m (0%) 0 (0%) 160Mi (6%) 0 (0%) 6m27s
* olm operatorhubio-catalog-9h9sw 10m (0%) 100m (5%) 50Mi (2%) 100Mi (4%) 6m7s
* olm packageserver-fc86cd5d4-djgms 10m (0%) 0 (0%) 50Mi (2%) 0 (0%) 6m3s
* olm packageserver-fc86cd5d4-wgfqr 10m (0%) 0 (0%) 50Mi (2%) 0 (0%) 6m3s
* Allocated resources:
* (Total limits may be over 100 percent, i.e., overcommitted.)
* Resource Requests Limits
* -------- -------- ------
* cpu 800m (40%) 100m (5%)
* memory 550Mi (22%) 270Mi (11%)
* ephemeral-storage 0 (0%) 0 (0%)
* hugepages-2Mi 0 (0%) 0 (0%)
* Events:
* Type Reason Age From Message
* ---- ------ ---- ---- -------
* Normal Starting 6m33s kubelet, addons-20200701030206-8084 Starting kubelet.
* Normal NodeHasSufficientMemory 6m33s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 6m33s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 6m33s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeHasSufficientPID
* Normal NodeAllocatableEnforced 6m32s kubelet, addons-20200701030206-8084 Updated Node Allocatable limit across pods
* Normal Starting 6m26s kube-proxy, addons-20200701030206-8084 Starting kube-proxy.
* Normal NodeReady 6m23s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeReady
*
* ==> dmesg <==
* [ +0.000001] Call Trace:
* [ +0.000006] dump_stack+0x66/0x8b
* [ +0.000003] dump_header+0x66/0x28e
* [ +0.000002] oom_kill_process+0x251/0x270
* [ +0.000001] out_of_memory+0x10b/0x4a0
* [ +0.000003] mem_cgroup_out_of_memory+0xb0/0xd0
* [ +0.000002] try_charge+0x688/0x770
* [ +0.000002] ? __alloc_pages_nodemask+0x11f/0x2a0
* [ +0.000000] mem_cgroup_try_charge+0x81/0x170
* [ +0.000002] mem_cgroup_try_charge_delay+0x17/0x40
* [ +0.000001] __handle_mm_fault+0x7be/0xe50
* [ +0.000002] handle_mm_fault+0xd7/0x230
* [ +0.000003] __do_page_fault+0x23e/0x4c0
* [ +0.000003] ? async_page_fault+0x8/0x30
* [ +0.000001] async_page_fault+0x1e/0x30
* [ +0.000001] RIP: 0033:0xd65cee
* [ +0.000001] Code: 31 d2 48 c7 83 80 01 00 00 00 00 00 00 66 44 89 b3 64 01 00 00 89 ab 68 01 00 00 85 ff 79 08 eb 31 0f 1f 00 48 89 f0 83 e9 01 <48> 89 10 4a 8d 34 00 48 89 c2 83 f9 ff 75 eb 48 8d 47 01 49 0f af
* [ +0.000001] RSP: 002b:00007f2f5d558d40 EFLAGS: 00010202
* [ +0.000001] RAX: 00000000037c1048 RBX: 0000000002eebce8 RCX: 0000000000000052
* [ +0.000000] RDX: 00000000037c0b98 RSI: 00000000037c1048 RDI: 0000000000000063
* [ +0.000001] RBP: 0000000000000064 R08: 00000000000004b0 R09: 0000000003dfe470
* [ +0.000000] R10: 0000000000000000 R11: 000000000197c600 R12: 0000000000000000
* [ +0.000001] R13: 00000000037bc548 R14: 00000000000004b0 R15: 0000000000000101
* [ +0.000093] Memory cgroup out of memory: Kill process 8832 (registry-server) score 2051 or sacrifice child
* [ +0.000038] Killed process 8832 (registry-server) total-vm:237320kB, anon-rss:97324kB, file-rss:14452kB, shmem-rss:0kB
*
* ==> etcd [663dada323e9] <==
* 2020-07-01 03:03:28.332386 I | etcdserver: setting up the initial cluster version to 3.4
* 2020-07-01 03:03:28.332551 I | etcdserver: published {Name:addons-20200701030206-8084 ClientURLs:[https://192.168.39.105:2379]} to cluster f45b5855e490ef48
* 2020-07-01 03:03:28.332600 I | embed: ready to serve client requests
* 2020-07-01 03:03:28.333537 I | embed: serving client requests on 127.0.0.1:2379
* 2020-07-01 03:03:28.334586 I | embed: ready to serve client requests
* 2020-07-01 03:03:28.337193 I | embed: serving client requests on 192.168.39.105:2379
* 2020-07-01 03:03:28.338344 N | etcdserver/membership: set the initial cluster version to 3.4
* 2020-07-01 03:03:28.338411 I | etcdserver/api: enabled capabilities for version 3.4
* 2020-07-01 03:04:04.323170 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/clusterserviceversions/olm/packageserver\" " with result "range_response_count:1 size:9305" took too long (105.645771ms) to execute
* 2020-07-01 03:04:05.236139 W | etcdserver: read-only range request "key:\"/registry/endpointslices/olm/v1-packages-operators-coreos-com-gfbjh\" " with result "range_response_count:1 size:953" took too long (127.168286ms) to execute
* 2020-07-01 03:04:05.805401 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (119.574479ms) to execute
* 2020-07-01 03:04:05.808506 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (122.545256ms) to execute
* 2020-07-01 03:04:05.820836 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/clusterserviceversions/olm/packageserver\" " with result "range_response_count:1 size:9775" took too long (135.01854ms) to execute
* 2020-07-01 03:04:08.142775 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (111.648032ms) to execute
* 2020-07-01 03:04:08.143088 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (231.508641ms) to execute
* 2020-07-01 03:04:08.143309 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/catalogsources/\" range_end:\"/registry/operators.coreos.com/catalogsources0\" " with result "range_response_count:1 size:2019" took too long (159.192904ms) to execute
* 2020-07-01 03:04:08.143552 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (113.041301ms) to execute
* 2020-07-01 03:04:16.739873 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59653" took too long (208.500076ms) to execute
* 2020-07-01 03:04:16.740802 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (228.630795ms) to execute
* 2020-07-01 03:04:23.380208 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (268.088725ms) to execute
* 2020-07-01 03:04:29.198339 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59989" took too long (110.334524ms) to execute
* 2020-07-01 03:04:29.198868 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59989" took too long (115.991158ms) to execute
* 2020-07-01 03:04:29.199825 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59989" took too long (116.97369ms) to execute
* 2020-07-01 03:09:41.650732 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/catalogsources/olm/operatorhubio-catalog\" " with result "range_response_count:1 size:2026" took too long (196.156523ms) to execute
* 2020-07-01 03:09:41.651243 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (203.221616ms) to execute
*
* ==> kernel <==
* 03:10:08 up 7 min, 0 users, load average: 1.13, 1.07, 0.58
* Linux addons-20200701030206-8084 4.19.107 #1 SMP Thu May 28 15:07:17 PDT 2020 x86_64 GNU/Linux
* PRETTY_NAME="Buildroot 2019.02.10"
*
* ==> kube-apiserver [b7ced5cccc0a] <==
* I0701 03:03:39.851362 1 controller.go:606] quota admission added evaluator for: catalogsources.operators.coreos.com
* I0701 03:03:41.148400 1 controller.go:606] quota admission added evaluator for: replicasets.apps
* I0701 03:03:41.407486 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
* W0701 03:03:42.028314 1 handler_proxy.go:102] no RequestInfo found in the context
* E0701 03:03:42.028373 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
* I0701 03:03:42.028384 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
* E0701 03:03:57.879055 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
* I0701 03:03:57.879070 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
* I0701 03:04:01.803924 1 client.go:361] parsed scheme: "endpoint"
* I0701 03:04:01.803984 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}]
* I0701 03:04:01.886189 1 client.go:361] parsed scheme: "endpoint"
* I0701 03:04:01.886333 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}]
* W0701 03:04:06.880277 1 handler_proxy.go:102] no RequestInfo found in the context
* E0701 03:04:06.880349 1 controller.go:114] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
* I0701 03:04:06.880361 1 controller.go:127] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue.
* E0701 03:04:57.883519 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
* I0701 03:04:57.883592 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
* E0701 03:06:57.886892 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
* I0701 03:06:57.887194 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
* E0701 03:08:32.648453 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
* I0701 03:08:32.648470 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
* E0701 03:09:32.651434 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
* I0701 03:09:32.651549 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
*
* ==> kube-controller-manager [24d686838dec] <==
* I0701 03:03:46.000600 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"registry-proxy", UID:"fe3c5a0b-c38c-4314-9a05-53037ff158f0", APIVersion:"apps/v1", ResourceVersion:"508", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: registry-proxy-7kmmq
* I0701 03:03:50.871083 1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
* I0701 03:03:52.669012 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"ingress-nginx-admission-create", UID:"1ae488a6-abe8-4bc0-965b-c398118daf32", APIVersion:"batch/v1", ResourceVersion:"387", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
* I0701 03:03:57.805105 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"ingress-nginx-admission-patch", UID:"f6bb03c0-6ca4-47f0-900a-b1e273dbb951", APIVersion:"batch/v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
* I0701 03:04:05.000820 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"olm", Name:"packageserver", UID:"4f1d1f78-eb9b-4bf3-9753-09b07a182891", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set packageserver-fc86cd5d4 to 2
* I0701 03:04:05.059501 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"olm", Name:"packageserver-fc86cd5d4", UID:"c9874dfe-0029-4742-a3d8-1f87cb754bb4", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-fc86cd5d4-djgms
* I0701 03:04:05.170272 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"olm", Name:"packageserver-fc86cd5d4", UID:"c9874dfe-0029-4742-a3d8-1f87cb754bb4", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-fc86cd5d4-wgfqr
* E0701 03:04:11.881078 1 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: packages.operators.coreos.com/v1: the server is currently unable to handle the request
* I0701 03:04:11.881240 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for catalogsources.operators.coreos.com
* I0701 03:04:11.881307 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for operatorgroups.operators.coreos.com
* I0701 03:04:11.881325 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for subscriptions.operators.coreos.com
* I0701 03:04:11.881345 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for installplans.operators.coreos.com
* I0701 03:04:11.881361 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for clusterserviceversions.operators.coreos.com
* I0701 03:04:11.881406 1 shared_informer.go:223] Waiting for caches to sync for resource quota
* I0701 03:04:11.981723 1 shared_informer.go:230] Caches are synced for resource quota
* I0701 03:04:13.123021 1 request.go:621] Throttling request took 1.047575349s, request: GET:https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
* W0701 03:04:13.926487 1 garbagecollector.go:644] failed to discover some groups: map[packages.operators.coreos.com/v1:the server is currently unable to handle the request]
* E0701 03:04:14.127948 1 memcache.go:206] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
* E0701 03:04:14.428219 1 memcache.go:111] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
* I0701 03:04:14.429209 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
* I0701 03:04:14.429275 1 shared_informer.go:230] Caches are synced for garbage collector
* E0701 03:04:39.175318 1 clusterroleaggregation_controller.go:181] olm-operators-edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "olm-operators-edit": the object has been modified; please apply your changes to the latest version and try again
* E0701 03:04:39.185205 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
* E0701 03:04:39.186128 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
* E0701 03:04:39.204080 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
*
* ==> kube-proxy [40c9a46cf08a] <==
* W0701 03:03:42.853505 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
* I0701 03:03:42.861890 1 node.go:136] Successfully retrieved node IP: 192.168.39.105
* I0701 03:03:42.861937 1 server_others.go:186] Using iptables Proxier.
* W0701 03:03:42.861945 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
* I0701 03:03:42.861949 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
* I0701 03:03:42.862522 1 server.go:583] Version: v1.18.3
* I0701 03:03:42.863107 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
* I0701 03:03:42.863131 1 conntrack.go:52] Setting nf_conntrack_max to 131072
* I0701 03:03:42.863538 1 conntrack.go:83] Setting conntrack hashsize to 32768
* I0701 03:03:42.867910 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
* I0701 03:03:42.868306 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
* I0701 03:03:42.871109 1 config.go:315] Starting service config controller
* I0701 03:03:42.871148 1 shared_informer.go:223] Waiting for caches to sync for service config
* I0701 03:03:42.871165 1 config.go:133] Starting endpoints config controller
* I0701 03:03:42.871173 1 shared_informer.go:223] Waiting for caches to sync for endpoints config
* I0701 03:03:42.971416 1 shared_informer.go:230] Caches are synced for endpoints config
* I0701 03:03:42.971523 1 shared_informer.go:230] Caches are synced for service config
*
* ==> kube-scheduler [a8673db5ff2a] <==
* W0701 03:03:31.650803 1 authentication.go:40] Authentication is disabled
* I0701 03:03:31.650814 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
* I0701 03:03:31.652329 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
* I0701 03:03:31.652574 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0701 03:03:31.652711 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0701 03:03:31.652730 1 tlsconfig.go:240] Starting DynamicServingCertificateController
* E0701 03:03:31.657008 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0701 03:03:31.658164 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0701 03:03:31.658324 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E0701 03:03:31.658888 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0701 03:03:31.659056 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0701 03:03:31.659357 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E0701 03:03:31.659504 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0701 03:03:31.659723 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E0701 03:03:31.659789 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0701 03:03:32.465153 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0701 03:03:32.497519 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0701 03:03:32.559891 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* I0701 03:03:35.752931 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* E0701 03:03:40.948371 1 factory.go:503] pod: kube-system/ingress-nginx-admission-create-59b72 is already present in the active queue
* E0701 03:03:40.967858 1 factory.go:503] pod: kube-system/ingress-nginx-admission-patch-f8zdn is already present in the active queue
* E0701 03:03:41.332123 1 factory.go:503] pod: kube-system/tiller-deploy-78ff886c54-7kcct is already present in the active queue
* E0701 03:03:41.345197 1 factory.go:503] pod: kube-system/metrics-server-7bc6d75975-qxr52 is already present in the active queue
* E0701 03:03:41.367475 1 factory.go:503] pod: olm/olm-operator-5fd48d8cd4-sh5bk is already present in the active queue
* E0701 03:03:41.389016 1 factory.go:503] pod: kube-system/ingress-nginx-controller-7bb4c67d67-fkjkd is already present in the active queue
*
* ==> kubelet <==
* -- Logs begin at Wed 2020-07-01 03:02:14 UTC, end at Wed 2020-07-01 03:10:09 UTC. --
* Jul 01 03:07:04 addons-20200701030206-8084 kubelet[3731]: E0701 03:07:04.630129 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 40s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:07:19 addons-20200701030206-8084 kubelet[3731]: I0701 03:07:19.630051 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ccc6490d8f7f46427cfbd5deeccaf8d7cf83cc45ecc655e80a5b76b3ef7bba53
* Jul 01 03:07:19 addons-20200701030206-8084 kubelet[3731]: E0701 03:07:19.631116 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 40s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:07:32 addons-20200701030206-8084 kubelet[3731]: I0701 03:07:32.629729 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ccc6490d8f7f46427cfbd5deeccaf8d7cf83cc45ecc655e80a5b76b3ef7bba53
* Jul 01 03:07:32 addons-20200701030206-8084 kubelet[3731]: W0701 03:07:32.870040 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for
* Jul 01 03:07:34 addons-20200701030206-8084 kubelet[3731]: W0701 03:07:34.009338 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for
* Jul 01 03:08:16 addons-20200701030206-8084 kubelet[3731]: W0701 03:08:16.469339 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for
* Jul 01 03:08:16 addons-20200701030206-8084 kubelet[3731]: I0701 03:08:16.474042 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ccc6490d8f7f46427cfbd5deeccaf8d7cf83cc45ecc655e80a5b76b3ef7bba53
* Jul 01 03:08:16 addons-20200701030206-8084 kubelet[3731]: I0701 03:08:16.474339 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 208781ffec9d6ee31318647e8ce976b29384e7b638b2006baff53d73bdd97b9b
* Jul 01 03:08:16 addons-20200701030206-8084 kubelet[3731]: E0701 03:08:16.474701 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 1m20s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:08:17 addons-20200701030206-8084 kubelet[3731]: W0701 03:08:17.484224 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for
* Jul 01 03:08:22 addons-20200701030206-8084 kubelet[3731]: I0701 03:08:22.774423 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 208781ffec9d6ee31318647e8ce976b29384e7b638b2006baff53d73bdd97b9b
* Jul 01 03:08:22 addons-20200701030206-8084 kubelet[3731]: E0701 03:08:22.774897 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 1m20s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:08:36 addons-20200701030206-8084 kubelet[3731]: I0701 03:08:36.629790 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 208781ffec9d6ee31318647e8ce976b29384e7b638b2006baff53d73bdd97b9b
* Jul 01 03:08:36 addons-20200701030206-8084 kubelet[3731]: E0701 03:08:36.630137 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 1m20s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:08:49 addons-20200701030206-8084 kubelet[3731]: I0701 03:08:49.630489 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 208781ffec9d6ee31318647e8ce976b29384e7b638b2006baff53d73bdd97b9b
* Jul 01 03:08:49 addons-20200701030206-8084 kubelet[3731]: E0701 03:08:49.630882 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 1m20s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:09:01 addons-20200701030206-8084 kubelet[3731]: I0701 03:09:01.629892 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 208781ffec9d6ee31318647e8ce976b29384e7b638b2006baff53d73bdd97b9b
* Jul 01 03:09:01 addons-20200701030206-8084 kubelet[3731]: E0701 03:09:01.630798 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 1m20s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:09:12 addons-20200701030206-8084 kubelet[3731]: I0701 03:09:12.630322 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 208781ffec9d6ee31318647e8ce976b29384e7b638b2006baff53d73bdd97b9b
* Jul 01 03:09:12 addons-20200701030206-8084 kubelet[3731]: E0701 03:09:12.631358 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 1m20s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:09:27 addons-20200701030206-8084 kubelet[3731]: I0701 03:09:27.629743 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 208781ffec9d6ee31318647e8ce976b29384e7b638b2006baff53d73bdd97b9b
* Jul 01 03:09:27 addons-20200701030206-8084 kubelet[3731]: E0701 03:09:27.630542 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 1m20s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:09:40 addons-20200701030206-8084 kubelet[3731]: I0701 03:09:40.630138 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 208781ffec9d6ee31318647e8ce976b29384e7b638b2006baff53d73bdd97b9b
* Jul 01 03:09:41 addons-20200701030206-8084 kubelet[3731]: W0701 03:09:41.359091 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for
*
* ==> storage-provisioner [94232379c158] <==
-- /stdout --
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20200701030206-8084 -n addons-20200701030206-8084
helpers_test.go:254: (dbg) Run: kubectl --context addons-20200701030206-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:254: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: exec: "kubectl": executable file not found in $PATH (287ns)
helpers_test.go:256: kubectl --context addons-20200701030206-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: exec: "kubectl": executable file not found in $PATH
--- FAIL: TestAddons/parallel/MetricsServer (454.94s)
addons_test.go:249: metrics-server stabilized in 19.013125ms
addons_test.go:251: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:332: "metrics-server-7bc6d75975-qxr52" [4315b491-aec4-47d9-af19-ba67e84066dc] Running
addons_test.go:251: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.018120755s
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (360ns)
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (493ns)
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (459ns)
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (628ns)
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (474ns)
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (524ns)
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (728ns)
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (473ns)
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (487ns)
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (420ns)
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (573ns)
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (428ns)
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (498ns)
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (425ns)
addons_test.go:272: failed checking metric server: exec: "kubectl": executable file not found in $PATH
addons_test.go:275: (dbg) Run: out/minikube-linux-amd64 -p addons-20200701030206-8084 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:215: -----------------------post-mortem--------------------------------
helpers_test.go:232: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-20200701030206-8084 -n addons-20200701030206-8084
helpers_test.go:237: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:238: ======> post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 -p addons-20200701030206-8084 logs -n 25
helpers_test.go:240: (dbg) Done: out/minikube-linux-amd64 -p addons-20200701030206-8084 logs -n 25: (1.27287553s)
helpers_test.go:245: TestAddons/parallel/MetricsServer logs:
-- stdout --
* ==> Docker <==
* -- Logs begin at Wed 2020-07-01 03:02:14 UTC, end at Wed 2020-07-01 03:12:00 UTC. --
* Jul 01 03:05:06 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:06.772431532Z" level=info msg="shim reaped" id=ddb8a5980fb5b94077adae6392ab6acf22e22db7bb906787cb0e27ad0b2f15a7
* Jul 01 03:05:06 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:06.782075375Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:05:27 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:27.725993923Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/71c43e9c50fe593f8f99accc3700632353f8b367aa99cd5b86a635bbc77b53f8/shim.sock" debug=false pid=7691
* Jul 01 03:05:48 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:48.089305645Z" level=info msg="shim reaped" id=71c43e9c50fe593f8f99accc3700632353f8b367aa99cd5b86a635bbc77b53f8
* Jul 01 03:05:48 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:48.098796476Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:06:15 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:15.747261169Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ccc6490d8f7f46427cfbd5deeccaf8d7cf83cc45ecc655e80a5b76b3ef7bba53/shim.sock" debug=false pid=8047
* Jul 01 03:06:43 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:43.805068045Z" level=info msg="shim reaped" id=ccc6490d8f7f46427cfbd5deeccaf8d7cf83cc45ecc655e80a5b76b3ef7bba53
* Jul 01 03:06:43 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:43.815307819Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:06:44 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:44.987579019Z" level=info msg="shim reaped" id=1bc07123ace4fe46946f8fdf0fd0dadf60278caa4383a10b25433a954068b080
* Jul 01 03:06:44 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:44.997749719Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:06:45 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:45.139852993Z" level=info msg="shim reaped" id=e8c2c1e0e0a62503a8ed73783cc2af78489b9bad9fe471ada17aac4e7bfd938e
* Jul 01 03:06:45 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:45.150300631Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:07:32 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:07:32.714468798Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/208781ffec9d6ee31318647e8ce976b29384e7b638b2006baff53d73bdd97b9b/shim.sock" debug=false pid=8814
* Jul 01 03:08:15 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:08:15.734798119Z" level=error msg="stream copy error: reading from a closed fifo"
* Jul 01 03:08:15 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:08:15.734807838Z" level=error msg="stream copy error: reading from a closed fifo"
* Jul 01 03:08:15 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:08:15.738961780Z" level=error msg="Error running exec 2f6e2b249139d96c0e8499b70c146bae118aca7838bb26b2fbf9815155067bbb in container: OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused \"read init-p: connection reset by peer\": unknown"
* Jul 01 03:08:15 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:08:15.789835802Z" level=info msg="shim reaped" id=208781ffec9d6ee31318647e8ce976b29384e7b638b2006baff53d73bdd97b9b
* Jul 01 03:08:15 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:08:15.800056647Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:09:40 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:09:40.710837642Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6197a0fa774c158d7e830ac228320439540a8ddaae85f40b359301345a689b5c/shim.sock" debug=false pid=9653
* Jul 01 03:10:11 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:10:11.268589252Z" level=info msg="shim reaped" id=6197a0fa774c158d7e830ac228320439540a8ddaae85f40b359301345a689b5c
* Jul 01 03:10:11 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:10:11.283756825Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:11:59 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:11:59.793295024Z" level=info msg="shim reaped" id=f34af38da2a244711c04394f746a798ee2b720389b1e7950ef7a900071a733b6
* Jul 01 03:11:59 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:11:59.802564544Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:12:00 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:12:00.029583706Z" level=info msg="shim reaped" id=1b8c0d094c10b4700bd35471254c00cd98bd77efcab123265e16549fc824452e
* Jul 01 03:12:00 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:12:00.039659383Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
* 6197a0fa774c1 65fedb276e53e 2 minutes ago Exited registry-server 5 d8512f3c21a09
* 6815cdaec6e0a quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 7 minutes ago Running packageserver 0 47f59199ff8b2
* 636b722f9b872 quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 7 minutes ago Running packageserver 0 740d3a15da583
* d920f932f040f quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:251e733bf41cdf726092e079d32eed51791746560fff4d59cf067508ed635287 7 minutes ago Running controller 0 eefa25270d8a6
* 39c3f696531d2 quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 7 minutes ago Running olm-operator 0 4a26317d80253
* 0b30ca3163d6c quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 7 minutes ago Running catalog-operator 0 87e032f179b67
* 1a30822b4f9be jettech/kube-webhook-certgen@sha256:c6f018afe5dfce02110b332ea75bb846144e65d4993c7534886d8505a6960357 8 minutes ago Exited patch 0 a2f179901974b
* 9e4cfc5738e04 jettech/kube-webhook-certgen@sha256:c6f018afe5dfce02110b332ea75bb846144e65d4993c7534886d8505a6960357 8 minutes ago Exited create 0 de667a00fefb0
* d6a261bca5222 67da37a9a360e 8 minutes ago Running coredns 0 d11b454b968e3
* 94232379c1581 4689081edb103 8 minutes ago Running storage-provisioner 0 7896015c69c73
* 40c9a46cf08ab 3439b7546f29b 8 minutes ago Running kube-proxy 0 8df7717a34531
* a8673db5ff2ad 76216c34ed0c7 8 minutes ago Running kube-scheduler 0 69d249b151f2d
* 663dada323e98 303ce5db0e90d 8 minutes ago Running etcd 0 4777c338fb836
* 24d686838dec2 da26705ccb4b5 8 minutes ago Running kube-controller-manager 0 ff24f8e852b09
* b7ced5cccc0a4 7e28efa976bd1 8 minutes ago Running kube-apiserver 0 1456a98fec87b
*
* ==> coredns [d6a261bca522] <==
* .:53
* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
* CoreDNS-1.6.7
* linux/amd64, go1.13.6, da7f65b
*
* ==> describe nodes <==
* Name: addons-20200701030206-8084
* Roles: master
* Labels: beta.kubernetes.io/arch=amd64
* beta.kubernetes.io/os=linux
* kubernetes.io/arch=amd64
* kubernetes.io/hostname=addons-20200701030206-8084
* kubernetes.io/os=linux
* minikube.k8s.io/commit=8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f
* minikube.k8s.io/name=addons-20200701030206-8084
* minikube.k8s.io/updated_at=2020_07_01T03_03_34_0700
* minikube.k8s.io/version=v1.12.0-beta.0
* node-role.kubernetes.io/master=
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
* node.alpha.kubernetes.io/ttl: 0
* volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp: Wed, 01 Jul 2020 03:03:31 +0000
* Taints: <none>
* Unschedulable: false
* Lease:
* HolderIdentity: addons-20200701030206-8084
* AcquireTime: <unset>
* RenewTime: Wed, 01 Jul 2020 03:11:56 +0000
* Conditions:
* Type Status LastHeartbeatTime LastTransitionTime Reason Message
* ---- ------ ----------------- ------------------ ------ -------
* MemoryPressure False Wed, 01 Jul 2020 03:09:37 +0000 Wed, 01 Jul 2020 03:03:27 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
* DiskPressure False Wed, 01 Jul 2020 03:09:37 +0000 Wed, 01 Jul 2020 03:03:27 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
* PIDPressure False Wed, 01 Jul 2020 03:09:37 +0000 Wed, 01 Jul 2020 03:03:27 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
* Ready True Wed, 01 Jul 2020 03:09:37 +0000 Wed, 01 Jul 2020 03:03:45 +0000 KubeletReady kubelet is posting ready status
* Addresses:
* InternalIP: 192.168.39.105
* Hostname: addons-20200701030206-8084
* Capacity:
* cpu: 2
* ephemeral-storage: 16954224Ki
* hugepages-2Mi: 0
* memory: 2470872Ki
* pods: 110
* Allocatable:
* cpu: 2
* ephemeral-storage: 16954224Ki
* hugepages-2Mi: 0
* memory: 2470872Ki
* pods: 110
* System Info:
* Machine ID: 11d7f8acaa014dd1a88f3c5ba725298e
* System UUID: 11d7f8ac-aa01-4dd1-a88f-3c5ba725298e
* Boot ID: 3a2b8acb-8700-4c04-87f6-71cbb4607c14
* Kernel Version: 4.19.107
* OS Image: Buildroot 2019.02.10
* Operating System: linux
* Architecture: amd64
* Container Runtime Version: docker://19.3.8
* Kubelet Version: v1.18.3
* Kube-Proxy Version: v1.18.3
* Non-terminated Pods: (14 in total)
* Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
* --------- ---- ------------ ---------- --------------- ------------- ---
* kube-system coredns-66bff467f8-hj7n4 100m (5%) 0 (0%) 70Mi (2%) 170Mi (7%) 8m19s
* kube-system etcd-addons-20200701030206-8084 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m24s
* kube-system ingress-nginx-controller-7bb4c67d67-fkjkd 100m (5%) 0 (0%) 90Mi (3%) 0 (0%) 8m19s
* kube-system kube-apiserver-addons-20200701030206-8084 250m (12%) 0 (0%) 0 (0%) 0 (0%) 8m24s
* kube-system kube-controller-manager-addons-20200701030206-8084 200m (10%) 0 (0%) 0 (0%) 0 (0%) 8m24s
* kube-system kube-proxy-8bljr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m19s
* kube-system kube-scheduler-addons-20200701030206-8084 100m (5%) 0 (0%) 0 (0%) 0 (0%) 8m24s
* kube-system metrics-server-7bc6d75975-qxr52 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m19s
* kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m24s
* olm catalog-operator-86f777cc59-n2z95 10m (0%) 0 (0%) 80Mi (3%) 0 (0%) 8m19s
* olm olm-operator-5fd48d8cd4-sh5bk 10m (0%) 0 (0%) 160Mi (6%) 0 (0%) 8m19s
* olm operatorhubio-catalog-9h9sw 10m (0%) 100m (5%) 50Mi (2%) 100Mi (4%) 7m59s
* olm packageserver-fc86cd5d4-djgms 10m (0%) 0 (0%) 50Mi (2%) 0 (0%) 7m55s
* olm packageserver-fc86cd5d4-wgfqr 10m (0%) 0 (0%) 50Mi (2%) 0 (0%) 7m55s
* Allocated resources:
* (Total limits may be over 100 percent, i.e., overcommitted.)
* Resource Requests Limits
* -------- -------- ------
* cpu 800m (40%) 100m (5%)
* memory 550Mi (22%) 270Mi (11%)
* ephemeral-storage 0 (0%) 0 (0%)
* hugepages-2Mi 0 (0%) 0 (0%)
* Events:
* Type Reason Age From Message
* ---- ------ ---- ---- -------
* Normal Starting 8m25s kubelet, addons-20200701030206-8084 Starting kubelet.
* Normal NodeHasSufficientMemory 8m25s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 8m25s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 8m25s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeHasSufficientPID
* Normal NodeAllocatableEnforced 8m24s kubelet, addons-20200701030206-8084 Updated Node Allocatable limit across pods
* Normal Starting 8m18s kube-proxy, addons-20200701030206-8084 Starting kube-proxy.
* Normal NodeReady 8m15s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeReady
*
* ==> dmesg <==
* [ +0.000000] Call Trace:
* [ +0.000005] dump_stack+0x66/0x8b
* [ +0.000004] dump_header+0x66/0x28e
* [ +0.000001] oom_kill_process+0x251/0x270
* [ +0.000002] out_of_memory+0x10b/0x4a0
* [ +0.000003] mem_cgroup_out_of_memory+0xb0/0xd0
* [ +0.000002] try_charge+0x728/0x770
* [ +0.000001] ? __alloc_pages_nodemask+0x11f/0x2a0
* [ +0.000001] mem_cgroup_try_charge+0x81/0x170
* [ +0.000001] mem_cgroup_try_charge_delay+0x17/0x40
* [ +0.000002] __handle_mm_fault+0x7be/0xe50
* [ +0.000002] handle_mm_fault+0xd7/0x230
* [ +0.000003] __do_page_fault+0x23e/0x4c0
* [ +0.000003] ? async_page_fault+0x8/0x30
* [ +0.000001] async_page_fault+0x1e/0x30
* [ +0.000002] RIP: 0033:0xe0a8f5
* [ +0.000001] Code: c3 48 8b 47 08 48 89 fa 48 83 e0 fe 48 8d 48 f0 48 39 f1 76 29 48 89 c1 49 89 f0 48 8d 3c 37 48 29 f1 49 83 c8 01 48 83 c9 01 <4c> 89 07 48 89 4f 08 48 89 0c 02 4c 89 42 08 e9 d9 fc ff ff c3 41
* [ +0.000025] RSP: 002b:00007fa815df7b78 EFLAGS: 00010202
* [ +0.000001] RAX: 0000000000016ac0 RBX: 0000000006ca0530 RCX: 0000000000001601
* [ +0.000001] RDX: 0000000006ca0530 RSI: 00000000000154c0 RDI: 0000000006cb59f0
* [ +0.000000] RBP: 0000000006ca1000 R08: 00000000000154c1 R09: 00000000011ce91e
* [ +0.000001] R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000000
* [ +0.000000] R13: fc00000000000000 R14: 0000000006c7fb80 R15: 000000000197cb40
* [ +0.000123] Memory cgroup out of memory: Kill process 9672 (registry-server) score 2054 or sacrifice child
* [ +0.000041] Killed process 9672 (registry-server) total-vm:191280kB, anon-rss:97532kB, file-rss:14516kB, shmem-rss:0kB
*
* ==> etcd [663dada323e9] <==
* 2020-07-01 03:03:28.332386 I | etcdserver: setting up the initial cluster version to 3.4
* 2020-07-01 03:03:28.332551 I | etcdserver: published {Name:addons-20200701030206-8084 ClientURLs:[https://192.168.39.105:2379]} to cluster f45b5855e490ef48
* 2020-07-01 03:03:28.332600 I | embed: ready to serve client requests
* 2020-07-01 03:03:28.333537 I | embed: serving client requests on 127.0.0.1:2379
* 2020-07-01 03:03:28.334586 I | embed: ready to serve client requests
* 2020-07-01 03:03:28.337193 I | embed: serving client requests on 192.168.39.105:2379
* 2020-07-01 03:03:28.338344 N | etcdserver/membership: set the initial cluster version to 3.4
* 2020-07-01 03:03:28.338411 I | etcdserver/api: enabled capabilities for version 3.4
* 2020-07-01 03:04:04.323170 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/clusterserviceversions/olm/packageserver\" " with result "range_response_count:1 size:9305" took too long (105.645771ms) to execute
* 2020-07-01 03:04:05.236139 W | etcdserver: read-only range request "key:\"/registry/endpointslices/olm/v1-packages-operators-coreos-com-gfbjh\" " with result "range_response_count:1 size:953" took too long (127.168286ms) to execute
* 2020-07-01 03:04:05.805401 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (119.574479ms) to execute
* 2020-07-01 03:04:05.808506 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (122.545256ms) to execute
* 2020-07-01 03:04:05.820836 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/clusterserviceversions/olm/packageserver\" " with result "range_response_count:1 size:9775" took too long (135.01854ms) to execute
* 2020-07-01 03:04:08.142775 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (111.648032ms) to execute
* 2020-07-01 03:04:08.143088 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (231.508641ms) to execute
* 2020-07-01 03:04:08.143309 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/catalogsources/\" range_end:\"/registry/operators.coreos.com/catalogsources0\" " with result "range_response_count:1 size:2019" took too long (159.192904ms) to execute
* 2020-07-01 03:04:08.143552 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (113.041301ms) to execute
* 2020-07-01 03:04:16.739873 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59653" took too long (208.500076ms) to execute
* 2020-07-01 03:04:16.740802 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (228.630795ms) to execute
* 2020-07-01 03:04:23.380208 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (268.088725ms) to execute
* 2020-07-01 03:04:29.198339 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59989" took too long (110.334524ms) to execute
* 2020-07-01 03:04:29.198868 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59989" took too long (115.991158ms) to execute
* 2020-07-01 03:04:29.199825 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59989" took too long (116.97369ms) to execute
* 2020-07-01 03:09:41.650732 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/catalogsources/olm/operatorhubio-catalog\" " with result "range_response_count:1 size:2026" took too long (196.156523ms) to execute
* 2020-07-01 03:09:41.651243 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (203.221616ms) to execute
*
* ==> kernel <==
* 03:12:00 up 9 min, 0 users, load average: 0.90, 0.89, 0.56
* Linux addons-20200701030206-8084 4.19.107 #1 SMP Thu May 28 15:07:17 PDT 2020 x86_64 GNU/Linux
* PRETTY_NAME="Buildroot 2019.02.10"
*
* ==> kube-apiserver [b7ced5cccc0a] <==
* I0701 03:03:41.407486 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
* W0701 03:03:42.028314 1 handler_proxy.go:102] no RequestInfo found in the context
* E0701 03:03:42.028373 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
* I0701 03:03:42.028384 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
* E0701 03:03:57.879055 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
* I0701 03:03:57.879070 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
* I0701 03:04:01.803924 1 client.go:361] parsed scheme: "endpoint"
* I0701 03:04:01.803984 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}]
* I0701 03:04:01.886189 1 client.go:361] parsed scheme: "endpoint"
* I0701 03:04:01.886333 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}]
* W0701 03:04:06.880277 1 handler_proxy.go:102] no RequestInfo found in the context
* E0701 03:04:06.880349 1 controller.go:114] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
* I0701 03:04:06.880361 1 controller.go:127] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue.
* E0701 03:04:57.883519 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
* I0701 03:04:57.883592 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
* E0701 03:06:57.886892 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
* I0701 03:06:57.887194 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
* E0701 03:08:32.648453 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
* I0701 03:08:32.648470 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
* E0701 03:09:32.651434 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
* I0701 03:09:32.651549 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
* E0701 03:11:32.655413 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
* I0701 03:11:32.655497 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
*
* ==> kube-controller-manager [24d686838dec] <==
* I0701 03:03:46.000600 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"registry-proxy", UID:"fe3c5a0b-c38c-4314-9a05-53037ff158f0", APIVersion:"apps/v1", ResourceVersion:"508", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: registry-proxy-7kmmq
* I0701 03:03:50.871083 1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
* I0701 03:03:52.669012 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"ingress-nginx-admission-create", UID:"1ae488a6-abe8-4bc0-965b-c398118daf32", APIVersion:"batch/v1", ResourceVersion:"387", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
* I0701 03:03:57.805105 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"ingress-nginx-admission-patch", UID:"f6bb03c0-6ca4-47f0-900a-b1e273dbb951", APIVersion:"batch/v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
* I0701 03:04:05.000820 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"olm", Name:"packageserver", UID:"4f1d1f78-eb9b-4bf3-9753-09b07a182891", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set packageserver-fc86cd5d4 to 2
* I0701 03:04:05.059501 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"olm", Name:"packageserver-fc86cd5d4", UID:"c9874dfe-0029-4742-a3d8-1f87cb754bb4", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-fc86cd5d4-djgms
* I0701 03:04:05.170272 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"olm", Name:"packageserver-fc86cd5d4", UID:"c9874dfe-0029-4742-a3d8-1f87cb754bb4", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-fc86cd5d4-wgfqr
* E0701 03:04:11.881078 1 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: packages.operators.coreos.com/v1: the server is currently unable to handle the request
* I0701 03:04:11.881240 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for catalogsources.operators.coreos.com
* I0701 03:04:11.881307 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for operatorgroups.operators.coreos.com
* I0701 03:04:11.881325 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for subscriptions.operators.coreos.com
* I0701 03:04:11.881345 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for installplans.operators.coreos.com
* I0701 03:04:11.881361 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for clusterserviceversions.operators.coreos.com
* I0701 03:04:11.881406 1 shared_informer.go:223] Waiting for caches to sync for resource quota
* I0701 03:04:11.981723 1 shared_informer.go:230] Caches are synced for resource quota
* I0701 03:04:13.123021 1 request.go:621] Throttling request took 1.047575349s, request: GET:https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
* W0701 03:04:13.926487 1 garbagecollector.go:644] failed to discover some groups: map[packages.operators.coreos.com/v1:the server is currently unable to handle the request]
* E0701 03:04:14.127948 1 memcache.go:206] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
* E0701 03:04:14.428219 1 memcache.go:111] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
* I0701 03:04:14.429209 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
* I0701 03:04:14.429275 1 shared_informer.go:230] Caches are synced for garbage collector
* E0701 03:04:39.175318 1 clusterroleaggregation_controller.go:181] olm-operators-edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "olm-operators-edit": the object has been modified; please apply your changes to the latest version and try again
* E0701 03:04:39.185205 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
* E0701 03:04:39.186128 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
* E0701 03:04:39.204080 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
*
* ==> kube-proxy [40c9a46cf08a] <==
* W0701 03:03:42.853505 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
* I0701 03:03:42.861890 1 node.go:136] Successfully retrieved node IP: 192.168.39.105
* I0701 03:03:42.861937 1 server_others.go:186] Using iptables Proxier.
* W0701 03:03:42.861945 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
* I0701 03:03:42.861949 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
* I0701 03:03:42.862522 1 server.go:583] Version: v1.18.3
* I0701 03:03:42.863107 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
* I0701 03:03:42.863131 1 conntrack.go:52] Setting nf_conntrack_max to 131072
* I0701 03:03:42.863538 1 conntrack.go:83] Setting conntrack hashsize to 32768
* I0701 03:03:42.867910 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
* I0701 03:03:42.868306 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
* I0701 03:03:42.871109 1 config.go:315] Starting service config controller
* I0701 03:03:42.871148 1 shared_informer.go:223] Waiting for caches to sync for service config
* I0701 03:03:42.871165 1 config.go:133] Starting endpoints config controller
* I0701 03:03:42.871173 1 shared_informer.go:223] Waiting for caches to sync for endpoints config
* I0701 03:03:42.971416 1 shared_informer.go:230] Caches are synced for endpoints config
* I0701 03:03:42.971523 1 shared_informer.go:230] Caches are synced for service config
*
* ==> kube-scheduler [a8673db5ff2a] <==
* W0701 03:03:31.650803 1 authentication.go:40] Authentication is disabled
* I0701 03:03:31.650814 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
* I0701 03:03:31.652329 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
* I0701 03:03:31.652574 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0701 03:03:31.652711 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0701 03:03:31.652730 1 tlsconfig.go:240] Starting DynamicServingCertificateController
* E0701 03:03:31.657008 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0701 03:03:31.658164 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0701 03:03:31.658324 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E0701 03:03:31.658888 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0701 03:03:31.659056 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0701 03:03:31.659357 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E0701 03:03:31.659504 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0701 03:03:31.659723 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E0701 03:03:31.659789 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0701 03:03:32.465153 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0701 03:03:32.497519 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0701 03:03:32.559891 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* I0701 03:03:35.752931 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* E0701 03:03:40.948371 1 factory.go:503] pod: kube-system/ingress-nginx-admission-create-59b72 is already present in the active queue
* E0701 03:03:40.967858 1 factory.go:503] pod: kube-system/ingress-nginx-admission-patch-f8zdn is already present in the active queue
* E0701 03:03:41.332123 1 factory.go:503] pod: kube-system/tiller-deploy-78ff886c54-7kcct is already present in the active queue
* E0701 03:03:41.345197 1 factory.go:503] pod: kube-system/metrics-server-7bc6d75975-qxr52 is already present in the active queue
* E0701 03:03:41.367475 1 factory.go:503] pod: olm/olm-operator-5fd48d8cd4-sh5bk is already present in the active queue
* E0701 03:03:41.389016 1 factory.go:503] pod: kube-system/ingress-nginx-controller-7bb4c67d67-fkjkd is already present in the active queue
*
* ==> kubelet <==
* -- Logs begin at Wed 2020-07-01 03:02:14 UTC, end at Wed 2020-07-01 03:12:00 UTC. --
* Jul 01 03:09:41 addons-20200701030206-8084 kubelet[3731]: W0701 03:09:41.359091 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for
* Jul 01 03:10:11 addons-20200701030206-8084 kubelet[3731]: W0701 03:10:11.710194 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for
* Jul 01 03:10:11 addons-20200701030206-8084 kubelet[3731]: I0701 03:10:11.716171 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 208781ffec9d6ee31318647e8ce976b29384e7b638b2006baff53d73bdd97b9b
* Jul 01 03:10:11 addons-20200701030206-8084 kubelet[3731]: I0701 03:10:11.716706 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6197a0fa774c158d7e830ac228320439540a8ddaae85f40b359301345a689b5c
* Jul 01 03:10:11 addons-20200701030206-8084 kubelet[3731]: E0701 03:10:11.724093 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 2m40s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:10:12 addons-20200701030206-8084 kubelet[3731]: W0701 03:10:12.727026 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for
* Jul 01 03:10:12 addons-20200701030206-8084 kubelet[3731]: I0701 03:10:12.774389 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6197a0fa774c158d7e830ac228320439540a8ddaae85f40b359301345a689b5c
* Jul 01 03:10:12 addons-20200701030206-8084 kubelet[3731]: E0701 03:10:12.774757 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 2m40s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:10:24 addons-20200701030206-8084 kubelet[3731]: I0701 03:10:24.629843 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6197a0fa774c158d7e830ac228320439540a8ddaae85f40b359301345a689b5c
* Jul 01 03:10:24 addons-20200701030206-8084 kubelet[3731]: E0701 03:10:24.631034 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 2m40s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:10:37 addons-20200701030206-8084 kubelet[3731]: I0701 03:10:37.631509 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6197a0fa774c158d7e830ac228320439540a8ddaae85f40b359301345a689b5c
* Jul 01 03:10:37 addons-20200701030206-8084 kubelet[3731]: E0701 03:10:37.631889 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 2m40s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:10:51 addons-20200701030206-8084 kubelet[3731]: I0701 03:10:51.630062 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6197a0fa774c158d7e830ac228320439540a8ddaae85f40b359301345a689b5c
* Jul 01 03:10:51 addons-20200701030206-8084 kubelet[3731]: E0701 03:10:51.630968 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 2m40s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:11:04 addons-20200701030206-8084 kubelet[3731]: I0701 03:11:04.629759 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6197a0fa774c158d7e830ac228320439540a8ddaae85f40b359301345a689b5c
* Jul 01 03:11:04 addons-20200701030206-8084 kubelet[3731]: E0701 03:11:04.630117 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 2m40s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:11:16 addons-20200701030206-8084 kubelet[3731]: I0701 03:11:16.629761 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6197a0fa774c158d7e830ac228320439540a8ddaae85f40b359301345a689b5c
* Jul 01 03:11:16 addons-20200701030206-8084 kubelet[3731]: E0701 03:11:16.630187 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 2m40s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:11:30 addons-20200701030206-8084 kubelet[3731]: I0701 03:11:30.630363 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6197a0fa774c158d7e830ac228320439540a8ddaae85f40b359301345a689b5c
* Jul 01 03:11:30 addons-20200701030206-8084 kubelet[3731]: E0701 03:11:30.630749 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 2m40s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:11:43 addons-20200701030206-8084 kubelet[3731]: I0701 03:11:43.629779 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6197a0fa774c158d7e830ac228320439540a8ddaae85f40b359301345a689b5c
* Jul 01 03:11:43 addons-20200701030206-8084 kubelet[3731]: E0701 03:11:43.630702 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 2m40s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:11:58 addons-20200701030206-8084 kubelet[3731]: I0701 03:11:58.629807 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6197a0fa774c158d7e830ac228320439540a8ddaae85f40b359301345a689b5c
* Jul 01 03:11:58 addons-20200701030206-8084 kubelet[3731]: E0701 03:11:58.630146 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 2m40s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:11:59 addons-20200701030206-8084 kubelet[3731]: I0701 03:11:59.995699 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: f34af38da2a244711c04394f746a798ee2b720389b1e7950ef7a900071a733b6
*
* ==> storage-provisioner [94232379c158] <==
-- /stdout --
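The kubelet log above shows olm/operatorhubio-catalog-9h9sw stuck in CrashLoopBackOff on its registry-server container, and the kube-scheduler "forbidden" errors look like the usual transient startup noise (they stop once the client-ca caches sync at 03:03:35). A possible follow-up, once a kubectl binary is available on the agent (it is missing from $PATH just below), would be to pull the pod's details and the previous container log; this is only a sketch, using the context, namespace, pod and container names taken from the log above:

    kubectl --context addons-20200701030206-8084 -n olm describe pod operatorhubio-catalog-9h9sw
    kubectl --context addons-20200701030206-8084 -n olm logs operatorhubio-catalog-9h9sw -c registry-server --previous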
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20200701030206-8084 -n addons-20200701030206-8084
helpers_test.go:254: (dbg) Run: kubectl --context addons-20200701030206-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:254: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: exec: "kubectl": executable file not found in $PATH (281ns)
helpers_test.go:256: kubectl --context addons-20200701030206-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: exec: "kubectl": executable file not found in $PATH
addons_test.go:71: (dbg) Run: out/minikube-linux-amd64 stop -p addons-20200701030206-8084
addons_test.go:71: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20200701030206-8084: (14.093054537s)
addons_test.go:75: (dbg) Run: out/minikube-linux-amd64 addons enable dashboard -p addons-20200701030206-8084
addons_test.go:79: (dbg) Run: out/minikube-linux-amd64 addons disable dashboard -p addons-20200701030206-8084
helpers_test.go:170: Cleaning up "addons-20200701030206-8084" profile ...
helpers_test.go:171: (dbg) Run: out/minikube-linux-amd64 delete -p addons-20200701030206-8084
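Several post-mortem checks above fail only because no kubectl binary is on the agent's $PATH (exec: "kubectl": executable file not found in $PATH). A minimal sketch of an agent-setup fix, assuming the client should be pinned to the v1.18.3 reported for the cluster in the logs:

    # hypothetical Jenkins agent setup step; version pinned to the Kubelet/Kube-Proxy version in the logs
    curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.18.3/bin/linux/amd64/kubectl
    chmod +x kubectl
    sudo mv kubectl /usr/local/bin/
    kubectl version --client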
=== RUN TestCertOptions
=== PAUSE TestCertOptions
=== RUN TestDockerFlags
=== PAUSE TestDockerFlags
=== RUN TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== RUN TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== RUN TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== RUN TestHyperKitDriverInstallOrUpdate
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)
driver_install_or_update_test.go:102: Skip if not darwin.
=== RUN TestErrorSpam
=== PAUSE TestErrorSpam
=== RUN TestFunctional
=== RUN TestFunctional/serial
=== RUN TestFunctional/serial/CopySyncFile
=== RUN TestFunctional/serial/StartWithProxy
=== RUN TestFunctional/serial/SoftStart
=== RUN TestFunctional/serial/KubeContext
=== RUN TestFunctional/serial/KubectlGetPods
=== RUN TestFunctional/serial/CacheCmd
=== RUN TestFunctional/serial/CacheCmd/cache
=== RUN TestFunctional/serial/CacheCmd/cache/add
=== RUN TestFunctional/serial/CacheCmd/cache/delete_busybox:1.28.4-glibc
=== RUN TestFunctional/serial/CacheCmd/cache/list
=== RUN TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
=== RUN TestFunctional/serial/CacheCmd/cache/cache_reload
=== RUN TestFunctional/serial/CacheCmd/cache/delete
=== RUN TestFunctional/serial/MinikubeKubectlCmd
=== PAUSE TestFunctional
=== RUN TestGvisorAddon
=== PAUSE TestGvisorAddon
=== RUN TestMultiNode
=== RUN TestMultiNode/serial
=== RUN TestMultiNode/serial/FreshStart2Nodes
=== RUN TestMultiNode/serial/AddNode
=== RUN TestMultiNode/serial/StopNode
=== RUN TestMultiNode/serial/StartAfterStop
=== RUN TestMultiNode/serial/DeleteNode
=== RUN TestMultiNode/serial/StopMultiNode
=== RUN TestMultiNode/serial/RestartMultiNode
--- FAIL: TestMultiNode (396.72s)
--- FAIL: TestMultiNode/serial (393.41s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (181.13s)
multinode_test.go:65: (dbg) Run: out/minikube-linux-amd64 start -p multinode-20200701031411-8084 --wait=true --memory=2200 --nodes=2 --driver=kvm2
multinode_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20200701031411-8084 --wait=true --memory=2200 --nodes=2 --driver=kvm2 : (3m0.758212273s)
multinode_test.go:71: (dbg) Run: out/minikube-linux-amd64 -p multinode-20200701031411-8084 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (88.72s)
multinode_test.go:89: (dbg) Run: out/minikube-linux-amd64 node add -p multinode-20200701031411-8084 -v 3 --alsologtostderr
multinode_test.go:89: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20200701031411-8084 -v 3 --alsologtostderr: (1m28.212112207s)
multinode_test.go:95: (dbg) Run: out/minikube-linux-amd64 -p multinode-20200701031411-8084 status --alsologtostderr
--- PASS: TestMultiNode/serial/StopNode (3.80s)
multinode_test.go:111: (dbg) Run: out/minikube-linux-amd64 -p multinode-20200701031411-8084 node stop m03
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 -p multinode-20200701031411-8084 node stop m03: (3.07440874s)
multinode_test.go:117: (dbg) Run: out/minikube-linux-amd64 -p multinode-20200701031411-8084 status
multinode_test.go:117: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20200701031411-8084 status: exit status 7 (349.328363ms)
-- stdout --
multinode-20200701031411-8084
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
multinode-20200701031411-8084-m02
type: Worker
host: Running
kubelet: Running
multinode-20200701031411-8084-m03
type: Worker
host: Stopped
kubelet: Stopped
-- /stdout --
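Note that status exits non-zero here (exit status 7) because the m03 host is stopped. For scripting around partially-stopped clusters like this one, the same data can be pulled through the --format Go template that helpers_test.go already uses; the Status struct exposes Name, Host, Kubelet, APIServer and Kubeconfig, as the stderr trace further below shows. A sketch, assuming the template is rendered once per node:

    # e.g. "multinode-20200701031411-8084-m03: host=Stopped kubelet=Stopped"
    out/minikube-linux-amd64 -p multinode-20200701031411-8084 status \
      --format='{{.Name}}: host={{.Host}} kubelet={{.Kubelet}}'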
multinode_test.go:124: (dbg) Run: out/minikube-linux-amd64 -p multinode-20200701031411-8084 status --alsologtostderr
multinode_test.go:124: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20200701031411-8084 status --alsologtostderr: exit status 7 (377.213435ms)
-- stdout --
multinode-20200701031411-8084
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
multinode-20200701031411-8084-m02
type: Worker
host: Running
kubelet: Running
multinode-20200701031411-8084-m03
type: Worker
host: Stopped
kubelet: Stopped
-- /stdout --
** stderr **
I0701 03:18:44.510273 13193 mustload.go:64] Loading cluster: multinode-20200701031411-8084
I0701 03:18:44.510461 13193 status.go:124] checking status of multinode-20200701031411-8084 ...
I0701 03:18:44.510761 13193 main.go:115] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0701 03:18:44.510820 13193 main.go:115] libmachine: Launching plugin server for driver kvm2
I0701 03:18:44.523687 13193 main.go:115] libmachine: Plugin server listening at address 127.0.0.1:34167
I0701 03:18:44.524062 13193 main.go:115] libmachine: () Calling .GetVersion
I0701 03:18:44.524479 13193 main.go:115] libmachine: Using API Version 1
I0701 03:18:44.524513 13193 main.go:115] libmachine: () Calling .SetConfigRaw
I0701 03:18:44.524768 13193 main.go:115] libmachine: () Calling .GetMachineName
I0701 03:18:44.524889 13193 main.go:115] libmachine: (multinode-20200701031411-8084) Calling .GetState
I0701 03:18:44.527159 13193 status.go:188] multinode-20200701031411-8084 host status = "Running" (err=<nil>)
I0701 03:18:44.527172 13193 host.go:65] Checking if "multinode-20200701031411-8084" exists ...
I0701 03:18:44.527429 13193 main.go:115] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0701 03:18:44.527459 13193 main.go:115] libmachine: Launching plugin server for driver kvm2
I0701 03:18:44.540047 13193 main.go:115] libmachine: Plugin server listening at address 127.0.0.1:39119
I0701 03:18:44.540741 13193 main.go:115] libmachine: () Calling .GetVersion
I0701 03:18:44.541067 13193 main.go:115] libmachine: Using API Version 1
I0701 03:18:44.541084 13193 main.go:115] libmachine: () Calling .SetConfigRaw
I0701 03:18:44.541313 13193 main.go:115] libmachine: () Calling .GetMachineName
I0701 03:18:44.541439 13193 main.go:115] libmachine: (multinode-20200701031411-8084) Calling .GetIP
I0701 03:18:44.544944 13193 host.go:65] Checking if "multinode-20200701031411-8084" exists ...
I0701 03:18:44.545201 13193 main.go:115] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0701 03:18:44.545229 13193 main.go:115] libmachine: Launching plugin server for driver kvm2
I0701 03:18:44.556486 13193 main.go:115] libmachine: Plugin server listening at address 127.0.0.1:39935
I0701 03:18:44.556782 13193 main.go:115] libmachine: () Calling .GetVersion
I0701 03:18:44.557080 13193 main.go:115] libmachine: Using API Version 1
I0701 03:18:44.557098 13193 main.go:115] libmachine: () Calling .SetConfigRaw
I0701 03:18:44.557295 13193 main.go:115] libmachine: () Calling .GetMachineName
I0701 03:18:44.557402 13193 main.go:115] libmachine: (multinode-20200701031411-8084) Calling .DriverName
I0701 03:18:44.557480 13193 system_pods.go:161] Checking kubelet status ...
I0701 03:18:44.557525 13193 ssh_runner.go:148] Run: systemctl --version
I0701 03:18:44.557541 13193 main.go:115] libmachine: (multinode-20200701031411-8084) Calling .GetSSHHostname
I0701 03:18:44.561234 13193 main.go:115] libmachine: (multinode-20200701031411-8084) Calling .GetSSHPort
I0701 03:18:44.561337 13193 main.go:115] libmachine: (multinode-20200701031411-8084) Calling .GetSSHKeyPath
I0701 03:18:44.561447 13193 main.go:115] libmachine: (multinode-20200701031411-8084) Calling .GetSSHUsername
I0701 03:18:44.561543 13193 sshutil.go:44] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-master-2503-8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f/.minikube/machines/multinode-20200701031411-8084/id_rsa Username:docker}
I0701 03:18:44.649144 13193 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service kubelet
I0701 03:18:44.657946 13193 status.go:232] multinode-20200701031411-8084 kubelet status = Running
I0701 03:18:44.658632 13193 kubeconfig.go:93] found "multinode-20200701031411-8084" server: "https://192.168.39.89:8443"
I0701 03:18:44.658651 13193 api_server.go:146] Checking apiserver status ...
I0701 03:18:44.658673 13193 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 03:18:44.666550 13193 ssh_runner.go:148] Run: sudo egrep ^[0-9]+:freezer: /proc/3453/cgroup
I0701 03:18:44.695265 13193 api_server.go:162] apiserver freezer: "5:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89e0572a88a3cba62a6f9570b3c55bad.slice/docker-f9c0c6a900dcf3032abae0b484fbe3f468ad57bb34cf953d29303339b1113a12.scope"
I0701 03:18:44.695321 13193 ssh_runner.go:148] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89e0572a88a3cba62a6f9570b3c55bad.slice/docker-f9c0c6a900dcf3032abae0b484fbe3f468ad57bb34cf953d29303339b1113a12.scope/freezer.state
I0701 03:18:44.701508 13193 api_server.go:184] freezer state: "THAWED"
I0701 03:18:44.701523 13193 api_server.go:215] Checking apiserver healthz at https://192.168.39.89:8443/healthz ...
I0701 03:18:44.706501 13193 api_server.go:235] https://192.168.39.89:8443/healthz returned 200:
ok
I0701 03:18:44.706520 13193 status.go:253] multinode-20200701031411-8084 apiserver status = Running (err=<nil>)
I0701 03:18:44.706529 13193 status.go:126] multinode-20200701031411-8084 status: &{Name:multinode-20200701031411-8084 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false}
I0701 03:18:44.706553 13193 status.go:124] checking status of multinode-20200701031411-8084-m02 ...
I0701 03:18:44.706858 13193 main.go:115] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0701 03:18:44.706889 13193 main.go:115] libmachine: Launching plugin server for driver kvm2
I0701 03:18:44.719354 13193 main.go:115] libmachine: Plugin server listening at address 127.0.0.1:39831
I0701 03:18:44.720038 13193 main.go:115] libmachine: () Calling .GetVersion
I0701 03:18:44.720376 13193 main.go:115] libmachine: Using API Version 1
I0701 03:18:44.720393 13193 main.go:115] libmachine: () Calling .SetConfigRaw
I0701 03:18:44.720634 13193 main.go:115] libmachine: () Calling .GetMachineName
I0701 03:18:44.720753 13193 main.go:115] libmachine: (multinode-20200701031411-8084-m02) Calling .GetState
I0701 03:18:44.722893 13193 status.go:188] multinode-20200701031411-8084-m02 host status = "Running" (err=<nil>)
I0701 03:18:44.722905 13193 host.go:65] Checking if "multinode-20200701031411-8084-m02" exists ...
I0701 03:18:44.723163 13193 main.go:115] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0701 03:18:44.723193 13193 main.go:115] libmachine: Launching plugin server for driver kvm2
I0701 03:18:44.735597 13193 main.go:115] libmachine: Plugin server listening at address 127.0.0.1:35829
I0701 03:18:44.735917 13193 main.go:115] libmachine: () Calling .GetVersion
I0701 03:18:44.736210 13193 main.go:115] libmachine: Using API Version 1
I0701 03:18:44.736232 13193 main.go:115] libmachine: () Calling .SetConfigRaw
I0701 03:18:44.736494 13193 main.go:115] libmachine: () Calling .GetMachineName
I0701 03:18:44.736640 13193 main.go:115] libmachine: (multinode-20200701031411-8084-m02) Calling .GetIP
I0701 03:18:44.740492 13193 host.go:65] Checking if "multinode-20200701031411-8084-m02" exists ...
I0701 03:18:44.740779 13193 main.go:115] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0701 03:18:44.740813 13193 main.go:115] libmachine: Launching plugin server for driver kvm2
I0701 03:18:44.752184 13193 main.go:115] libmachine: Plugin server listening at address 127.0.0.1:39379
I0701 03:18:44.752837 13193 main.go:115] libmachine: () Calling .GetVersion
I0701 03:18:44.753135 13193 main.go:115] libmachine: Using API Version 1
I0701 03:18:44.753150 13193 main.go:115] libmachine: () Calling .SetConfigRaw
I0701 03:18:44.753353 13193 main.go:115] libmachine: () Calling .GetMachineName
I0701 03:18:44.753449 13193 main.go:115] libmachine: (multinode-20200701031411-8084-m02) Calling .DriverName
I0701 03:18:44.753526 13193 system_pods.go:161] Checking kubelet status ...
I0701 03:18:44.753556 13193 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service kubelet
I0701 03:18:44.753569 13193 main.go:115] libmachine: (multinode-20200701031411-8084-m02) Calling .GetSSHHostname
I0701 03:18:44.756820 13193 main.go:115] libmachine: (multinode-20200701031411-8084-m02) Calling .GetSSHPort
I0701 03:18:44.756925 13193 main.go:115] libmachine: (multinode-20200701031411-8084-m02) Calling .GetSSHKeyPath
I0701 03:18:44.757027 13193 main.go:115] libmachine: (multinode-20200701031411-8084-m02) Calling .GetSSHUsername
I0701 03:18:44.757093 13193 sshutil.go:44] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-master-2503-8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f/.minikube/machines/multinode-20200701031411-8084-m02/id_rsa Username:docker}
I0701 03:18:44.841510 13193 status.go:232] multinode-20200701031411-8084-m02 kubelet status = Running
I0701 03:18:44.841530 13193 status.go:126] multinode-20200701031411-8084-m02 status: &{Name:multinode-20200701031411-8084-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true}
I0701 03:18:44.841541 13193 status.go:124] checking status of multinode-20200701031411-8084-m03 ...
I0701 03:18:44.841952 13193 main.go:115] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0701 03:18:44.841986 13193 main.go:115] libmachine: Launching plugin server for driver kvm2
I0701 03:18:44.854394 13193 main.go:115] libmachine: Plugin server listening at address 127.0.0.1:44653
I0701 03:18:44.854708 13193 main.go:115] libmachine: () Calling .GetVersion
I0701 03:18:44.855106 13193 main.go:115] libmachine: Using API Version 1
I0701 03:18:44.855126 13193 main.go:115] libmachine: () Calling .SetConfigRaw
I0701 03:18:44.855375 13193 main.go:115] libmachine: () Calling .GetMachineName
I0701 03:18:44.855527 13193 main.go:115] libmachine: (multinode-20200701031411-8084-m03) Calling .GetState
I0701 03:18:44.857689 13193 status.go:188] multinode-20200701031411-8084-m03 host status = "Stopped" (err=<nil>)
I0701 03:18:44.857701 13193 status.go:201] host is not running, skipping remaining checks
I0701 03:18:44.857705 13193 status.go:126] multinode-20200701031411-8084-m03 status: &{Name:multinode-20200701031411-8084-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true}
** /stderr **
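The trace above ends with the apiserver health probe (https://192.168.39.89:8443/healthz returned 200 "ok"). The same check can be reproduced by hand from the Jenkins host; a sketch, assuming kubeadm's default binding that exposes /healthz to unauthenticated clients is in place:

    # -k skips verification against the cluster's self-signed CA; expect the body "ok"
    curl -k https://192.168.39.89:8443/healthz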
--- FAIL: TestMultiNode/serial/StartAfterStop (19.37s)
multinode_test.go:154: (dbg) Run: out/minikube-linux-amd64 -p multinode-20200701031411-8084 node start m03 --alsologtostderr
multinode_test.go:154: (dbg) Done: out/minikube-linux-amd64 -p multinode-20200701031411-8084 node start m03 --alsologtostderr: (17.482218242s)
multinode_test.go:161: (dbg) Run: out/minikube-linux-amd64 -p multinode-20200701031411-8084 status
multinode_test.go:175: (dbg) Run: kubectl get nodes
multinode_test.go:175: (dbg) Non-zero exit: kubectl get nodes: exec: "kubectl": executable file not found in $PATH (272ns)
multinode_test.go:177: failed to kubectl get nodes. args "kubectl get nodes" : exec: "kubectl": executable file not found in $PATH
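As in the addons test, this failure is only the missing kubectl on $PATH. Besides installing a client (see the sketch after the addons cleanup above), minikube's bundled kubectl pass-through could serve as a stop-gap; whether this exact invocation behaves identically with this build is an assumption:

    # built-in kubectl wrapper, scoped to the test profile
    out/minikube-linux-amd64 -p multinode-20200701031411-8084 kubectl -- get nodes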
helpers_test.go:215: -----------------------post-mortem--------------------------------
helpers_test.go:232: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-20200701031411-8084 -n multinode-20200701031411-8084
helpers_test.go:237: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:238: ======> post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 -p multinode-20200701031411-8084 logs -n 25
helpers_test.go:245: TestMultiNode/serial/StartAfterStop logs:
-- stdout --
* ==> Docker <==
* -- Logs begin at Wed 2020-07-01 03:14:19 UTC, end at Wed 2020-07-01 03:19:03 UTC. --
* Jul 01 03:15:24 multinode-20200701031411-8084 dockerd[2209]: time="2020-07-01T03:15:24.064118154Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
* Jul 01 03:15:24 multinode-20200701031411-8084 dockerd[2209]: time="2020-07-01T03:15:24.064350644Z" level=info msg="Loading containers: start."
* Jul 01 03:15:24 multinode-20200701031411-8084 dockerd[2209]: time="2020-07-01T03:15:24.175697325Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
* Jul 01 03:15:24 multinode-20200701031411-8084 dockerd[2209]: time="2020-07-01T03:15:24.220569911Z" level=info msg="Loading containers: done."
* Jul 01 03:15:24 multinode-20200701031411-8084 dockerd[2209]: time="2020-07-01T03:15:24.247569902Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
* Jul 01 03:15:24 multinode-20200701031411-8084 dockerd[2209]: time="2020-07-01T03:15:24.247801950Z" level=info msg="Daemon has completed initialization"
* Jul 01 03:15:24 multinode-20200701031411-8084 dockerd[2209]: time="2020-07-01T03:15:24.268559125Z" level=info msg="API listen on /var/run/docker.sock"
* Jul 01 03:15:24 multinode-20200701031411-8084 systemd[1]: Started Docker Application Container Engine.
* Jul 01 03:15:24 multinode-20200701031411-8084 dockerd[2209]: time="2020-07-01T03:15:24.269653777Z" level=info msg="API listen on [::]:2376"
* Jul 01 03:15:30 multinode-20200701031411-8084 dockerd[2209]: time="2020-07-01T03:15:30.372729293Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ee162b48c7c9614faef6c1a644f125bf30857d8e736c9921b7ff51846b9d3b3a/shim.sock" debug=false pid=3188
* Jul 01 03:15:30 multinode-20200701031411-8084 dockerd[2209]: time="2020-07-01T03:15:30.373079168Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0f30e3e875948530cdb0bd20fe7afd9119166c1847510f0a0c5c85260c2c1e7b/shim.sock" debug=false pid=3193
* Jul 01 03:15:30 multinode-20200701031411-8084 dockerd[2209]: time="2020-07-01T03:15:30.404127618Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2f5fa90d28b6d033972b8ddec1989efd84dbeb8479b1fc29b2db0aeea58e32e2/shim.sock" debug=false pid=3221
* Jul 01 03:15:30 multinode-20200701031411-8084 dockerd[2209]: time="2020-07-01T03:15:30.496706818Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5588b71272198a9db1a7801d2193535c292c4a7ab8b29d6c010c8b35b71fbbcc/shim.sock" debug=false pid=3255
* Jul 01 03:15:30 multinode-20200701031411-8084 dockerd[2209]: time="2020-07-01T03:15:30.802980521Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f9c0c6a900dcf3032abae0b484fbe3f468ad57bb34cf953d29303339b1113a12/shim.sock" debug=false pid=3364
* Jul 01 03:15:30 multinode-20200701031411-8084 dockerd[2209]: time="2020-07-01T03:15:30.813128344Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b7a14319e5eba03444dcdb7c7dc6e64dd0fe5c2c05184537fe0e5a87df7993ca/shim.sock" debug=false pid=3377
* Jul 01 03:15:30 multinode-20200701031411-8084 dockerd[2209]: time="2020-07-01T03:15:30.814322491Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2121efe9ad0922175582bb29ab61b12c53560833d84ad52403eb9fb9153d8d19/shim.sock" debug=false pid=3381
* Jul 01 03:15:30 multinode-20200701031411-8084 dockerd[2209]: time="2020-07-01T03:15:30.814610516Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/697c3f39861e177076b8d43f0e7472228030ae98c9896864329f722f2fe84a17/shim.sock" debug=false pid=3380
* Jul 01 03:15:45 multinode-20200701031411-8084 dockerd[2209]: time="2020-07-01T03:15:45.090788584Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bc4958088bd33c35e0850e24345b3456eb8afeb173535b0fc7a76103914a42f0/shim.sock" debug=false pid=4149
* Jul 01 03:15:45 multinode-20200701031411-8084 dockerd[2209]: time="2020-07-01T03:15:45.426017647Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5d17ab07a631df93cae8ccccfc423a811fa168505c11ecff7d72dbb57e222533/shim.sock" debug=false pid=4223
* Jul 01 03:15:50 multinode-20200701031411-8084 dockerd[2209]: time="2020-07-01T03:15:50.830585917Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dfc88820ccde27404c802b61e94d86de48193c09934dea506426119918dbb14e/shim.sock" debug=false pid=4347
* Jul 01 03:15:51 multinode-20200701031411-8084 dockerd[2209]: time="2020-07-01T03:15:51.022951164Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8175d56511719233659980d1b4b0ac19d7b72f1d2f57b3842d4b9b8618584b76/shim.sock" debug=false pid=4381
* Jul 01 03:15:51 multinode-20200701031411-8084 dockerd[2209]: time="2020-07-01T03:15:51.974308411Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d0df6a575b8f49b11f3c816fcfd6d8e4ca5e2d01dbe271484241e97082895c3e/shim.sock" debug=false pid=4442
* Jul 01 03:15:52 multinode-20200701031411-8084 dockerd[2209]: time="2020-07-01T03:15:52.246222748Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/906b3966fd863b551a1c60c762417cd2fc9c7807a0b42939cd018816a0d0a2dd/shim.sock" debug=false pid=4494
* Jul 01 03:17:13 multinode-20200701031411-8084 dockerd[2209]: time="2020-07-01T03:17:13.426918242Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6e55bfe94bda56bbd57659cc59cef1257634e28b5c4bb4c99508ab7c905741c4/shim.sock" debug=false pid=4898
* Jul 01 03:17:19 multinode-20200701031411-8084 dockerd[2209]: time="2020-07-01T03:17:19.638042041Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4a263f26ce2dc9f507fce07d6dd762471d9bf52c0d1297299f583052f395fb49/shim.sock" debug=false pid=5033
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
* 4a263f26ce2dc kindest/kindnetd@sha256:b33085aafb18b652ce4b3b8c41dbf172dac8b62ffe016d26863f88e7f6bf1c98 About a minute ago Running kindnet-cni 0 6e55bfe94bda5
* 906b3966fd863 67da37a9a360e 3 minutes ago Running coredns 0 d0df6a575b8f4
* 8175d56511719 4689081edb103 3 minutes ago Running storage-provisioner 0 dfc88820ccde2
* 5d17ab07a631d 3439b7546f29b 3 minutes ago Running kube-proxy 0 bc4958088bd33
* 697c3f39861e1 76216c34ed0c7 3 minutes ago Running kube-scheduler 0 5588b71272198
* 2121efe9ad092 da26705ccb4b5 3 minutes ago Running kube-controller-manager 0 2f5fa90d28b6d
* f9c0c6a900dcf 7e28efa976bd1 3 minutes ago Running kube-apiserver 0 0f30e3e875948
* b7a14319e5eba 303ce5db0e90d 3 minutes ago Running etcd 0 ee162b48c7c96
*
* ==> coredns [906b3966fd86] <==
* .:53
* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
* CoreDNS-1.6.7
* linux/amd64, go1.13.6, da7f65b
*
* ==> describe nodes <==
* Name: multinode-20200701031411-8084
* Roles: master
* Labels: beta.kubernetes.io/arch=amd64
* beta.kubernetes.io/os=linux
* kubernetes.io/arch=amd64
* kubernetes.io/hostname=multinode-20200701031411-8084
* kubernetes.io/os=linux
* minikube.k8s.io/commit=8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f
* minikube.k8s.io/name=multinode-20200701031411-8084
* minikube.k8s.io/updated_at=2020_07_01T03_15_38_0700
* minikube.k8s.io/version=v1.12.0-beta.0
* node-role.kubernetes.io/master=
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
* node.alpha.kubernetes.io/ttl: 0
* volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp: Wed, 01 Jul 2020 03:15:35 +0000
* Taints: <none>
* Unschedulable: false
* Lease:
* HolderIdentity: multinode-20200701031411-8084
* AcquireTime: <unset>
* RenewTime: Wed, 01 Jul 2020 03:18:56 +0000
* Conditions:
* Type Status LastHeartbeatTime LastTransitionTime Reason Message
* ---- ------ ----------------- ------------------ ------ -------
* MemoryPressure False Wed, 01 Jul 2020 03:17:39 +0000 Wed, 01 Jul 2020 03:15:31 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
* DiskPressure False Wed, 01 Jul 2020 03:17:39 +0000 Wed, 01 Jul 2020 03:15:31 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
* PIDPressure False Wed, 01 Jul 2020 03:17:39 +0000 Wed, 01 Jul 2020 03:15:31 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
* Ready True Wed, 01 Jul 2020 03:17:39 +0000 Wed, 01 Jul 2020 03:15:48 +0000 KubeletReady kubelet is posting ready status
* Addresses:
* InternalIP: 192.168.39.89
* Hostname: multinode-20200701031411-8084
* Capacity:
* cpu: 2
* ephemeral-storage: 16954224Ki
* hugepages-2Mi: 0
* memory: 2085672Ki
* pods: 110
* Allocatable:
* cpu: 2
* ephemeral-storage: 16954224Ki
* hugepages-2Mi: 0
* memory: 2085672Ki
* pods: 110
* System Info:
* Machine ID: ab95144a8fa94e94a66d45a244abcde3
* System UUID: ab95144a-8fa9-4e94-a66d-45a244abcde3
* Boot ID: 77e0a14d-c5e8-49cc-9d5e-b6fca6f226e6
* Kernel Version: 4.19.107
* OS Image: Buildroot 2019.02.10
* Operating System: linux
* Architecture: amd64
* Container Runtime Version: docker://19.3.8
* Kubelet Version: v1.18.3
* Kube-Proxy Version: v1.18.3
* Non-terminated Pods: (8 in total)
* Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
* --------- ---- ------------ ---------- --------------- ------------- ---
* kube-system coredns-66bff467f8-8nhsz 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 3m19s
* kube-system etcd-multinode-20200701031411-8084 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m25s
* kube-system kindnet-kggl9 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 111s
* kube-system kube-apiserver-multinode-20200701031411-8084 250m (12%) 0 (0%) 0 (0%) 0 (0%) 3m25s
* kube-system kube-controller-manager-multinode-20200701031411-8084 200m (10%) 0 (0%) 0 (0%) 0 (0%) 3m24s
* kube-system kube-proxy-dqqlj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m19s
* kube-system kube-scheduler-multinode-20200701031411-8084 100m (5%) 0 (0%) 0 (0%) 0 (0%) 3m24s
* kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m18s
* Allocated resources:
* (Total limits may be over 100 percent, i.e., overcommitted.)
* Resource Requests Limits
* -------- -------- ------
* cpu 750m (37%) 100m (5%)
* memory 120Mi (5%) 220Mi (10%)
* ephemeral-storage 0 (0%) 0 (0%)
* hugepages-2Mi 0 (0%) 0 (0%)
* Events:
* Type Reason Age From Message
* ---- ------ ---- ---- -------
* Normal NodeHasSufficientMemory 3m34s (x5 over 3m34s) kubelet, multinode-20200701031411-8084 Node multinode-20200701031411-8084 status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 3m34s (x5 over 3m34s) kubelet, multinode-20200701031411-8084 Node multinode-20200701031411-8084 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 3m34s (x4 over 3m34s) kubelet, multinode-20200701031411-8084 Node multinode-20200701031411-8084 status is now: NodeHasSufficientPID
* Normal NodeAllocatableEnforced 3m34s kubelet, multinode-20200701031411-8084 Updated Node Allocatable limit across pods
* Normal Starting 3m25s kubelet, multinode-20200701031411-8084 Starting kubelet.
* Normal NodeHasSufficientMemory 3m25s kubelet, multinode-20200701031411-8084 Node multinode-20200701031411-8084 status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 3m25s kubelet, multinode-20200701031411-8084 Node multinode-20200701031411-8084 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 3m25s kubelet, multinode-20200701031411-8084 Node multinode-20200701031411-8084 status is now: NodeHasSufficientPID
* Normal NodeNotReady 3m25s kubelet, multinode-20200701031411-8084 Node multinode-20200701031411-8084 status is now: NodeNotReady
* Normal NodeAllocatableEnforced 3m25s kubelet, multinode-20200701031411-8084 Updated Node Allocatable limit across pods
* Normal Starting 3m18s kube-proxy, multinode-20200701031411-8084 Starting kube-proxy.
* Normal NodeReady 3m15s kubelet, multinode-20200701031411-8084 Node multinode-20200701031411-8084 status is now: NodeReady
*
*
* Name: multinode-20200701031411-8084-m02
* Roles: <none>
* Labels: beta.kubernetes.io/arch=amd64
* beta.kubernetes.io/os=linux
* kubernetes.io/arch=amd64
* kubernetes.io/hostname=multinode-20200701031411-8084-m02
* kubernetes.io/os=linux
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
* node.alpha.kubernetes.io/ttl: 0
* volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp: Wed, 01 Jul 2020 03:17:11 +0000
* Taints: <none>
* Unschedulable: false
* Lease:
* HolderIdentity: multinode-20200701031411-8084-m02
* AcquireTime: <unset>
* RenewTime: Wed, 01 Jul 2020 03:18:56 +0000
* Conditions:
* Type Status LastHeartbeatTime LastTransitionTime Reason Message
* ---- ------ ----------------- ------------------ ------ -------
* MemoryPressure False Wed, 01 Jul 2020 03:17:41 +0000 Wed, 01 Jul 2020 03:17:10 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
* DiskPressure False Wed, 01 Jul 2020 03:17:41 +0000 Wed, 01 Jul 2020 03:17:10 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
* PIDPressure False Wed, 01 Jul 2020 03:17:41 +0000 Wed, 01 Jul 2020 03:17:10 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
* Ready True Wed, 01 Jul 2020 03:17:41 +0000 Wed, 01 Jul 2020 03:17:21 +0000 KubeletReady kubelet is posting ready status
* Addresses:
* InternalIP: 192.168.39.66
* Hostname: multinode-20200701031411-8084-m02
* Capacity:
* cpu: 2
* ephemeral-storage: 1877108Ki
* hugepages-2Mi: 0
* memory: 2085672Ki
* pods: 110
* Allocatable:
* cpu: 2
* ephemeral-storage: 1877108Ki
* hugepages-2Mi: 0
* memory: 2085672Ki
* pods: 110
* System Info:
* Machine ID: ab87200d6c4e40c09a467a879b5db5a7
* System UUID: ab87200d-6c4e-40c0-9a46-7a879b5db5a7
* Boot ID: d59a8f65-e7de-448a-8765-0f9d4c5bb6ad
* Kernel Version: 4.19.107
* OS Image: Buildroot 2019.02.10
* Operating System: linux
* Architecture: amd64
* Container Runtime Version: docker://19.3.8
* Kubelet Version: v1.18.3
* Kube-Proxy Version: v1.18.3
* Non-terminated Pods: (2 in total)
* Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
* --------- ---- ------------ ---------- --------------- ------------- ---
* kube-system kindnet-m6j4t 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 111s
* kube-system kube-proxy-54r4l 0 (0%) 0 (0%) 0 (0%) 0 (0%) 112s
* Allocated resources:
* (Total limits may be over 100 percent, i.e., overcommitted.)
* Resource Requests Limits
* -------- -------- ------
* cpu 100m (5%) 100m (5%)
* memory 50Mi (2%) 50Mi (2%)
* ephemeral-storage 0 (0%) 0 (0%)
* hugepages-2Mi 0 (0%) 0 (0%)
* Events:
* Type Reason Age From Message
* ---- ------ ---- ---- -------
* Normal Starting 113s kubelet, multinode-20200701031411-8084-m02 Starting kubelet.
* Normal NodeHasSufficientMemory 112s (x2 over 113s) kubelet, multinode-20200701031411-8084-m02 Node multinode-20200701031411-8084-m02 status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 112s (x2 over 113s) kubelet, multinode-20200701031411-8084-m02 Node multinode-20200701031411-8084-m02 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 112s (x2 over 113s) kubelet, multinode-20200701031411-8084-m02 Node multinode-20200701031411-8084-m02 status is now: NodeHasSufficientPID
* Normal NodeAllocatableEnforced 112s kubelet, multinode-20200701031411-8084-m02 Updated Node Allocatable limit across pods
* Normal Starting 111s kube-proxy, multinode-20200701031411-8084-m02 Starting kube-proxy.
* Normal NodeReady 102s kubelet, multinode-20200701031411-8084-m02 Node multinode-20200701031411-8084-m02 status is now: NodeReady
*
*
* Name: multinode-20200701031411-8084-m03
* Roles: <none>
* Labels: beta.kubernetes.io/arch=amd64
* beta.kubernetes.io/os=linux
* kubernetes.io/arch=amd64
* kubernetes.io/hostname=multinode-20200701031411-8084-m03
* kubernetes.io/os=linux
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
* node.alpha.kubernetes.io/ttl: 0
* volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp: Wed, 01 Jul 2020 03:18:39 +0000
* Taints: node.kubernetes.io/not-ready:NoExecute
* Unschedulable: false
* Lease:
* HolderIdentity: multinode-20200701031411-8084-m03
* AcquireTime: <unset>
* RenewTime: Wed, 01 Jul 2020 03:19:01 +0000
* Conditions:
* Type Status LastHeartbeatTime LastTransitionTime Reason Message
* ---- ------ ----------------- ------------------ ------ -------
* MemoryPressure False Wed, 01 Jul 2020 03:19:01 +0000 Wed, 01 Jul 2020 03:18:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
* DiskPressure False Wed, 01 Jul 2020 03:19:01 +0000 Wed, 01 Jul 2020 03:18:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
* PIDPressure False Wed, 01 Jul 2020 03:19:01 +0000 Wed, 01 Jul 2020 03:18:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
* Ready True Wed, 01 Jul 2020 03:19:01 +0000 Wed, 01 Jul 2020 03:19:01 +0000 KubeletReady kubelet is posting ready status
* Addresses:
* InternalIP: 192.168.39.60
* Hostname: multinode-20200701031411-8084-m03
* Capacity:
* cpu: 2
* ephemeral-storage: 1877108Ki
* hugepages-2Mi: 0
* memory: 2085672Ki
* pods: 110
* Allocatable:
* cpu: 2
* ephemeral-storage: 1877108Ki
* hugepages-2Mi: 0
* memory: 2085672Ki
* pods: 110
* System Info:
* Machine ID: 3eb47f79fa0c4affa4e62f1c1f0a5c49
* System UUID: 3eb47f79-fa0c-4aff-a4e6-2f1c1f0a5c49
* Boot ID: 9d5c0837-0627-4633-a592-3867adcfd049
* Kernel Version: 4.19.107
* OS Image: Buildroot 2019.02.10
* Operating System: linux
* Architecture: amd64
* Container Runtime Version: docker://19.3.8
* Kubelet Version: v1.18.3
* Kube-Proxy Version: v1.18.3
* Non-terminated Pods: (2 in total)
* Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
* --------- ---- ------------ ---------- --------------- ------------- ---
* kube-system kindnet-gpk7b 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 24s
* kube-system kube-proxy-mbt6w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24s
* Allocated resources:
* (Total limits may be over 100 percent, i.e., overcommitted.)
* Resource Requests Limits
* -------- -------- ------
* cpu 100m (5%) 100m (5%)
* memory 50Mi (2%) 50Mi (2%)
* ephemeral-storage 0 (0%) 0 (0%)
* hugepages-2Mi 0 (0%) 0 (0%)
* Events:
* Type Reason Age From Message
* ---- ------ ---- ---- -------
* Normal Starting 24s kubelet, multinode-20200701031411-8084-m03 Starting kubelet.
* Normal NodeHasSufficientMemory 24s (x2 over 24s) kubelet, multinode-20200701031411-8084-m03 Node multinode-20200701031411-8084-m03 status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 24s (x2 over 24s) kubelet, multinode-20200701031411-8084-m03 Node multinode-20200701031411-8084-m03 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 24s (x2 over 24s) kubelet, multinode-20200701031411-8084-m03 Node multinode-20200701031411-8084-m03 status is now: NodeHasSufficientPID
* Normal NodeAllocatableEnforced 24s kubelet, multinode-20200701031411-8084-m03 Updated Node Allocatable limit across pods
* Normal NodeHasSufficientMemory 2s kubelet, multinode-20200701031411-8084-m03 Node multinode-20200701031411-8084-m03 status is now: NodeHasSufficientMemory
* Normal Starting 2s kubelet, multinode-20200701031411-8084-m03 Starting kubelet.
* Normal NodeHasNoDiskPressure 2s kubelet, multinode-20200701031411-8084-m03 Node multinode-20200701031411-8084-m03 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 2s kubelet, multinode-20200701031411-8084-m03 Node multinode-20200701031411-8084-m03 status is now: NodeHasSufficientPID
* Normal NodeAllocatableEnforced 2s kubelet, multinode-20200701031411-8084-m03 Updated Node Allocatable limit across pods
* Warning Rebooted 2s kubelet, multinode-20200701031411-8084-m03 Node multinode-20200701031411-8084-m03 has been rebooted, boot id: 9d5c0837-0627-4633-a592-3867adcfd049
* Normal NodeReady 2s kubelet, multinode-20200701031411-8084-m03 Node multinode-20200701031411-8084-m03 status is now: NodeReady
* Normal Starting 1s kube-proxy, multinode-20200701031411-8084-m03 Starting kube-proxy.
*
* ==> dmesg <==
* [ +0.049330] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
* [ +2.646318] Unstable clock detected, switching default tracing clock to "global"
* If you want to keep using the local clock, then add:
* "trace_clock=local"
* on the kernel command line
* [ +0.000193] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
* [ +1.771058] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
* [ +0.006035] systemd-fstab-generator[1147]: Ignoring "noauto" for root device
* [ +0.006538] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
* [ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
* [ +1.502756] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
* [ +0.963535] vboxguest: loading out-of-tree module taints kernel.
* [ +0.004212] vboxguest: PCI device not found, probably running on physical hardware.
* [ +4.055232] systemd-fstab-generator[1992]: Ignoring "noauto" for root device
* [ +0.079973] systemd-fstab-generator[2002]: Ignoring "noauto" for root device
* [ +7.855754] systemd-fstab-generator[2198]: Ignoring "noauto" for root device
* [Jul 1 03:15] kauditd_printk_skb: 65 callbacks suppressed
* [ +0.250262] systemd-fstab-generator[2365]: Ignoring "noauto" for root device
* [ +0.307244] systemd-fstab-generator[2439]: Ignoring "noauto" for root device
* [ +1.516279] systemd-fstab-generator[2652]: Ignoring "noauto" for root device
* [ +3.256469] kauditd_printk_skb: 107 callbacks suppressed
* [ +8.279490] systemd-fstab-generator[3792]: Ignoring "noauto" for root device
* [ +8.062719] kauditd_printk_skb: 32 callbacks suppressed
* [ +6.177931] kauditd_printk_skb: 38 callbacks suppressed
* [Jul 1 03:16] NFSD: Unable to end grace period: -110
*
* ==> etcd [b7a14319e5eb] <==
* 2020-07-01 03:18:29.727152 W | etcdserver: read-only range request "key:\"/registry/ingress\" range_end:\"/registry/ingrest\" count_only:true " with result "range_response_count:0 size:5" took too long (688.901943ms) to execute
* 2020-07-01 03:18:29.727405 W | etcdserver: read-only range request "key:\"/registry/ingress\" range_end:\"/registry/ingrest\" count_only:true " with result "range_response_count:0 size:5" took too long (854.021931ms) to execute
* 2020-07-01 03:18:29.728115 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:2 size:10125" took too long (901.046927ms) to execute
* 2020-07-01 03:18:32.105979 W | wal: sync duration of 2.374591007s, expected less than 1s
* 2020-07-01 03:18:32.325148 W | etcdserver: read-only range request "key:\"/registry/pods\" range_end:\"/registry/podt\" count_only:true " with result "range_response_count:0 size:7" took too long (2.521260977s) to execute
* 2020-07-01 03:18:32.325694 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:289" took too long (2.593934375s) to execute
* 2020-07-01 03:18:32.326221 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-apiserver-multinode-20200701031411-8084.161d829112dbabdd\" " with result "range_response_count:1 size:878" took too long (2.590646152s) to execute
* 2020-07-01 03:18:32.326611 W | etcdserver: read-only range request "key:\"/registry/validatingwebhookconfigurations\" range_end:\"/registry/validatingwebhookconfigurationt\" count_only:true " with result "range_response_count:0 size:5" took too long (607.57694ms) to execute
* 2020-07-01 03:18:32.327524 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.568585503s) to execute
* 2020-07-01 03:18:33.004568 W | etcdserver: read-only range request "key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" " with result "range_response_count:1 size:135" took too long (675.401124ms) to execute
* 2020-07-01 03:18:33.004853 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/etcd-multinode-20200701031411-8084.161d828cda11499f\" " with result "range_response_count:1 size:837" took too long (672.438112ms) to execute
* 2020-07-01 03:18:35.995028 W | wal: sync duration of 1.131054382s, expected less than 1s
* 2020-07-01 03:18:36.481335 W | etcdserver: read-only range request "key:\"/registry/endpointslices/default/kubernetes\" " with result "range_response_count:1 size:485" took too long (3.473563997s) to execute
* 2020-07-01 03:18:36.481655 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-apiserver-multinode-20200701031411-8084.161d829112dbabdd\" " with result "range_response_count:1 size:878" took too long (3.469935089s) to execute
* 2020-07-01 03:18:36.482733 W | etcdserver: request "header:<ID:416427970351102022 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-20200701031411-8084-m02\" mod_revision:523 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-20200701031411-8084-m02\" value_size:611 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-20200701031411-8084-m02\" > >>" with result "size:16" took too long (1.618395387s) to execute
* 2020-07-01 03:18:36.484401 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:2 size:10125" took too long (2.339791443s) to execute
* 2020-07-01 03:18:36.484801 W | etcdserver: read-only range request "key:\"/registry/ranges/serviceips\" " with result "range_response_count:1 size:118" took too long (1.267703684s) to execute
* 2020-07-01 03:18:36.485730 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices\" range_end:\"/registry/apiregistration.k8s.io/apiservicet\" count_only:true " with result "range_response_count:0 size:7" took too long (977.049824ms) to execute
* 2020-07-01 03:18:36.486005 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:263" took too long (1.23460203s) to execute
* 2020-07-01 03:18:36.489308 W | etcdserver: read-only range request "key:\"/registry/ranges/servicenodeports\" " with result "range_response_count:1 size:120" took too long (1.266341853s) to execute
* 2020-07-01 03:18:36.896316 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-public\" " with result "range_response_count:1 size:263" took too long (404.171683ms) to execute
* 2020-07-01 03:18:36.896897 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/etcd-multinode-20200701031411-8084.161d828cda11499f\" " with result "range_response_count:1 size:837" took too long (404.237698ms) to execute
* 2020-07-01 03:18:36.900377 W | etcdserver: read-only range request "key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" " with result "range_response_count:2 size:1762" took too long (405.634479ms) to execute
* 2020-07-01 03:18:36.900953 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (325.115623ms) to execute
* 2020-07-01 03:18:36.902024 W | etcdserver: read-only range request "key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" " with result "range_response_count:2 size:1762" took too long (407.546815ms) to execute
*
* ==> kernel <==
* 03:19:03 up 4 min, 0 users, load average: 2.02, 1.27, 0.55
* Linux multinode-20200701031411-8084 4.19.107 #1 SMP Thu May 28 15:07:17 PDT 2020 x86_64 GNU/Linux
* PRETTY_NAME="Buildroot 2019.02.10"
*
* ==> kube-apiserver [f9c0c6a900dc] <==
* Trace[1128868786]: [2.591706137s] [2.591706137s] initial value restored
* I0701 03:18:32.330638 1 trace.go:116] Trace[1819524587]: "Patch" url:/api/v1/namespaces/kube-system/events/kube-apiserver-multinode-20200701031411-8084.161d829112dbabdd,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:192.168.39.89 (started: 2020-07-01 03:18:29.735085623 +0000 UTC m=+178.689278245) (total time: 2.595537233s):
* Trace[1819524587]: [2.591762972s] [2.591744629s] About to apply patch
* I0701 03:18:33.005171 1 trace.go:116] Trace[2118353963]: "List etcd3" key:/masterleases/,resourceVersion:0,limit:0,continue: (started: 2020-07-01 03:18:32.328716199 +0000 UTC m=+181.282908953) (total time: 676.428245ms):
* Trace[2118353963]: [676.428245ms] [676.428245ms] END
* I0701 03:18:33.009508 1 trace.go:116] Trace[1880965644]: "GuaranteedUpdate etcd3" type:*core.Event (started: 2020-07-01 03:18:32.33185498 +0000 UTC m=+181.286047680) (total time: 677.63086ms):
* Trace[1880965644]: [674.651439ms] [674.651439ms] initial value restored
* I0701 03:18:33.009614 1 trace.go:116] Trace[2141026708]: "Patch" url:/api/v1/namespaces/kube-system/events/etcd-multinode-20200701031411-8084.161d828cda11499f,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:192.168.39.89 (started: 2020-07-01 03:18:32.331795943 +0000 UTC m=+181.285988578) (total time: 677.801431ms):
* Trace[2141026708]: [674.714244ms] [674.693634ms] About to apply patch
* I0701 03:18:36.481991 1 trace.go:116] Trace[100012185]: "Get" url:/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes,user-agent:kube-apiserver/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:127.0.0.1 (started: 2020-07-01 03:18:33.005954114 +0000 UTC m=+181.960146726) (total time: 3.476006509s):
* Trace[100012185]: [3.475959421s] [3.475951572s] About to write a response
* I0701 03:18:36.487157 1 trace.go:116] Trace[979801097]: "GuaranteedUpdate etcd3" type:*core.Event (started: 2020-07-01 03:18:33.01129817 +0000 UTC m=+181.965490840) (total time: 3.475839372s):
* Trace[979801097]: [3.471822164s] [3.471822164s] initial value restored
* I0701 03:18:36.487263 1 trace.go:116] Trace[1090679715]: "Patch" url:/api/v1/namespaces/kube-system/events/kube-apiserver-multinode-20200701031411-8084.161d829112dbabdd,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:192.168.39.89 (started: 2020-07-01 03:18:33.011242007 +0000 UTC m=+181.965434649) (total time: 3.476004762s):
* Trace[1090679715]: [3.471881572s] [3.471859978s] About to apply patch
* I0701 03:18:36.490007 1 trace.go:116] Trace[10631414]: "List etcd3" key:/minions,resourceVersion:,limit:0,continue: (started: 2020-07-01 03:18:34.144151555 +0000 UTC m=+183.098344207) (total time: 2.345838484s):
* Trace[10631414]: [2.345838484s] [2.345838484s] END
* I0701 03:18:36.490524 1 trace.go:116] Trace[279528173]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.39.66 (started: 2020-07-01 03:18:34.144135661 +0000 UTC m=+183.098328282) (total time: 2.34636636s):
* Trace[279528173]: [2.345917038s] [2.345907773s] Listing from storage done
* I0701 03:18:36.491009 1 trace.go:116] Trace[503222798]: "Get" url:/api/v1/namespaces/kube-system,user-agent:kube-apiserver/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:127.0.0.1 (started: 2020-07-01 03:18:35.251087408 +0000 UTC m=+184.205280022) (total time: 1.239899477s):
* Trace[503222798]: [1.23987315s] [1.239862773s] About to write a response
* I0701 03:18:36.492816 1 trace.go:116] Trace[807989985]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2020-07-01 03:18:34.379514349 +0000 UTC m=+183.333707112) (total time: 2.113280996s):
* Trace[807989985]: [2.11325552s] [2.112710053s] Transaction committed
* I0701 03:18:36.493332 1 trace.go:116] Trace[256754072]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20200701031411-8084-m02,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:192.168.39.66 (started: 2020-07-01 03:18:34.379366123 +0000 UTC m=+183.333558758) (total time: 2.113936352s):
* Trace[256754072]: [2.113799696s] [2.113681973s] Object stored in database
*
* ==> kube-controller-manager [2121efe9ad09] <==
* W0701 03:15:44.648273 1 node_lifecycle_controller.go:1048] Missing timestamp for Node multinode-20200701031411-8084. Assuming now as a timestamp.
* I0701 03:15:44.648522 1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
* I0701 03:15:44.648899 1 taint_manager.go:187] Starting NoExecuteTaintManager
* I0701 03:15:44.656153 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20200701031411-8084", UID:"03d9f3b8-01d7-40c9-9652-2e06d63b3bce", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node multinode-20200701031411-8084 event: Registered Node multinode-20200701031411-8084 in Controller
* I0701 03:15:44.657885 1 shared_informer.go:230] Caches are synced for garbage collector
* I0701 03:15:44.658582 1 shared_informer.go:230] Caches are synced for TTL
* I0701 03:15:44.690372 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"dbe9f72f-3835-4ec5-a48d-841f49966ed5", APIVersion:"apps/v1", ResourceVersion:"227", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-dqqlj
* E0701 03:15:44.736729 1 daemon_controller.go:292] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"dbe9f72f-3835-4ec5-a48d-841f49966ed5", ResourceVersion:"227", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729170138, loc:(*time.Location)(0x6d09200)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000a1fe80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000a1fea0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000a1fec0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00082b140), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000a1fee0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), 
FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000a1ff00), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000a1ff60)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000b9d040), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), 
RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000af6888), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0009e1420), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000f340)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000af68f8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
* I0701 03:15:45.285803 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"2a90cea6-3cf0-4757-baef-5fa82110c68e", APIVersion:"apps/v1", ResourceVersion:"361", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
* I0701 03:15:45.315528 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"da35958c-6844-4a7b-bcce-dc204292e5de", APIVersion:"apps/v1", ResourceVersion:"362", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-kpm9z
* I0701 03:15:49.648847 1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
* W0701 03:17:11.142129 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20200701031411-8084-m02" does not exist
* I0701 03:17:11.154790 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"dbe9f72f-3835-4ec5-a48d-841f49966ed5", APIVersion:"apps/v1", ResourceVersion:"377", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-54r4l
* I0701 03:17:12.977125 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"de550a67-6ddb-4ff8-9015-680f13bbc3e7", APIVersion:"apps/v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-kggl9
* I0701 03:17:13.020996 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"de550a67-6ddb-4ff8-9015-680f13bbc3e7", APIVersion:"apps/v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-m6j4t
* W0701 03:17:14.654698 1 node_lifecycle_controller.go:1048] Missing timestamp for Node multinode-20200701031411-8084-m02. Assuming now as a timestamp.
* I0701 03:17:14.655047 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20200701031411-8084-m02", UID:"5a85cb02-f23c-48a4-a04d-5c1d31fd812c", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node multinode-20200701031411-8084-m02 event: Registered Node multinode-20200701031411-8084-m02 in Controller
* E0701 03:18:10.897762 1 cronjob_controller.go:125] Failed to extract job list: etcdserver: request timed out
* W0701 03:18:39.902328 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20200701031411-8084-m03" does not exist
* I0701 03:18:39.920782 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"de550a67-6ddb-4ff8-9015-680f13bbc3e7", APIVersion:"apps/v1", ResourceVersion:"502", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-gpk7b
* I0701 03:18:39.938675 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"dbe9f72f-3835-4ec5-a48d-841f49966ed5", APIVersion:"apps/v1", ResourceVersion:"487", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-mbt6w
* E0701 03:18:39.980915 1 daemon_controller.go:292] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"dbe9f72f-3835-4ec5-a48d-841f49966ed5", ResourceVersion:"487", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729170138, loc:(*time.Location)(0x6d09200)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001c00ca0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001c00cc0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001c00ce0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001c00d00)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001c00d20), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001c99240), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001c00d40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), 
ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001c00d60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001c00da0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", 
ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001c0f0e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001c8e608), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00095efc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000eb00)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001c8e658)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:2, ObservedGeneration:1, UpdatedNumberScheduled:2, NumberAvailable:2, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
* E0701 03:18:39.981556 1 daemon_controller.go:292] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"de550a67-6ddb-4ff8-9015-680f13bbc3e7", ResourceVersion:"553", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729170231, loc:(*time.Location)(0x6d09200)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:0.5.4\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001eb06e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001eb0700)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001eb0720), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001eb0740)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001eb0760), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", 
VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001eb0780), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001eb07a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001eb07c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:0.5.4", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001eb07e0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001eb0820)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001e9cfa0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001e52ff8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00095c9a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), 
TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001bd2bf8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001e53040)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:2, ObservedGeneration:1, UpdatedNumberScheduled:2, NumberAvailable:2, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
* W0701 03:18:44.668007 1 node_lifecycle_controller.go:1048] Missing timestamp for Node multinode-20200701031411-8084-m03. Assuming now as a timestamp.
* I0701 03:18:44.668072 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20200701031411-8084-m03", UID:"bf72541b-a44c-411f-a422-9460a3cb59b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node multinode-20200701031411-8084-m03 event: Registered Node multinode-20200701031411-8084-m03 in Controller
*
* ==> kube-proxy [5d17ab07a631] <==
* W0701 03:15:45.738085 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
* I0701 03:15:45.747368 1 node.go:136] Successfully retrieved node IP: 192.168.39.89
* I0701 03:15:45.747467 1 server_others.go:186] Using iptables Proxier.
* W0701 03:15:45.747477 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
* I0701 03:15:45.747480 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
* I0701 03:15:45.747744 1 server.go:583] Version: v1.18.3
* I0701 03:15:45.748078 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
* I0701 03:15:45.748097 1 conntrack.go:52] Setting nf_conntrack_max to 131072
* I0701 03:15:45.748539 1 conntrack.go:83] Setting conntrack hashsize to 32768
* I0701 03:15:45.753529 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
* I0701 03:15:45.753600 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
* I0701 03:15:45.756097 1 config.go:315] Starting service config controller
* I0701 03:15:45.756109 1 shared_informer.go:223] Waiting for caches to sync for service config
* I0701 03:15:45.756213 1 config.go:133] Starting endpoints config controller
* I0701 03:15:45.756267 1 shared_informer.go:223] Waiting for caches to sync for endpoints config
* I0701 03:15:45.862211 1 shared_informer.go:230] Caches are synced for endpoints config
* I0701 03:15:45.865650 1 shared_informer.go:230] Caches are synced for service config
*
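The kube-proxy entries above record it raising nf_conntrack_max and the related timeout sysctls at startup. Purely as an illustrative follow-up check, not something this run performed, those values could be read back from the node with minikube's ssh command, reusing the profile name from this log (the sysctl keys are the dotted forms of the paths kube-proxy printed):
out/minikube-linux-amd64 -p multinode-20200701031411-8084 ssh "sudo sysctl net.netfilter.nf_conntrack_max net.netfilter.nf_conntrack_tcp_timeout_established"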
* ==> kube-scheduler [697c3f39861e] <==
* W0701 03:15:34.989589 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
* W0701 03:15:34.989865 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
* W0701 03:15:34.989996 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
* I0701 03:15:35.014906 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0701 03:15:35.014951 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* W0701 03:15:35.017210 1 authorization.go:47] Authorization is disabled
* W0701 03:15:35.017258 1 authentication.go:40] Authentication is disabled
* I0701 03:15:35.017274 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
* I0701 03:15:35.018827 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
* I0701 03:15:35.019022 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0701 03:15:35.019239 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0701 03:15:35.019389 1 tlsconfig.go:240] Starting DynamicServingCertificateController
* E0701 03:15:35.027621 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0701 03:15:35.028061 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0701 03:15:35.028180 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0701 03:15:35.044632 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0701 03:15:35.050664 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E0701 03:15:35.064237 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0701 03:15:35.064718 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E0701 03:15:35.065093 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0701 03:15:35.065378 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E0701 03:15:35.862783 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0701 03:15:36.133154 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* I0701 03:15:36.519493 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* E0701 03:15:44.583234 1 factory.go:503] pod: kube-system/coredns-66bff467f8-8nhsz is already present in the active queue
*
* ==> kubelet <==
* -- Logs begin at Wed 2020-07-01 03:14:19 UTC, end at Wed 2020-07-01 03:19:03 UTC. --
* Jul 01 03:15:48 multinode-20200701031411-8084 kubelet[3801]: I0701 03:15:48.934874 3801 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Jul 01 03:15:48 multinode-20200701031411-8084 kubelet[3801]: E0701 03:15:48.936845 3801 reflector.go:178] object-"kube-system"/"storage-provisioner-token-wbqkw": Failed to list *v1.Secret: secrets "storage-provisioner-token-wbqkw" is forbidden: User "system:node:multinode-20200701031411-8084" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "multinode-20200701031411-8084" and this object
* Jul 01 03:15:48 multinode-20200701031411-8084 kubelet[3801]: I0701 03:15:48.960659 3801 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-wbqkw" (UniqueName: "kubernetes.io/secret/541529e3-71e3-4945-989c-5e6208dd025c-storage-provisioner-token-wbqkw") pod "storage-provisioner" (UID: "541529e3-71e3-4945-989c-5e6208dd025c")
* Jul 01 03:15:48 multinode-20200701031411-8084 kubelet[3801]: I0701 03:15:48.960937 3801 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/541529e3-71e3-4945-989c-5e6208dd025c-tmp") pod "storage-provisioner" (UID: "541529e3-71e3-4945-989c-5e6208dd025c")
* Jul 01 03:15:50 multinode-20200701031411-8084 kubelet[3801]: E0701 03:15:50.061967 3801 secret.go:195] Couldn't get secret kube-system/storage-provisioner-token-wbqkw: failed to sync secret cache: timed out waiting for the condition
* Jul 01 03:15:50 multinode-20200701031411-8084 kubelet[3801]: E0701 03:15:50.062726 3801 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/541529e3-71e3-4945-989c-5e6208dd025c-storage-provisioner-token-wbqkw podName:541529e3-71e3-4945-989c-5e6208dd025c nodeName:}" failed. No retries permitted until 2020-07-01 03:15:50.562694653 +0000 UTC m=+12.747686444 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"storage-provisioner-token-wbqkw\" (UniqueName: \"kubernetes.io/secret/541529e3-71e3-4945-989c-5e6208dd025c-storage-provisioner-token-wbqkw\") pod \"storage-provisioner\" (UID: \"541529e3-71e3-4945-989c-5e6208dd025c\") : failed to sync secret cache: timed out waiting for the condition"
* Jul 01 03:15:50 multinode-20200701031411-8084 kubelet[3801]: W0701 03:15:50.803480 3801 pod_container_deletor.go:77] Container "dfc88820ccde27404c802b61e94d86de48193c09934dea506426119918dbb14e" not found in pod's containers
* Jul 01 03:15:51 multinode-20200701031411-8084 kubelet[3801]: I0701 03:15:51.528673 3801 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Jul 01 03:15:51 multinode-20200701031411-8084 kubelet[3801]: I0701 03:15:51.673939 3801 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-qvq87" (UniqueName: "kubernetes.io/secret/be3a8577-efea-4a39-ac5f-27e3d7021273-coredns-token-qvq87") pod "coredns-66bff467f8-8nhsz" (UID: "be3a8577-efea-4a39-ac5f-27e3d7021273")
* Jul 01 03:15:51 multinode-20200701031411-8084 kubelet[3801]: I0701 03:15:51.674005 3801 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/be3a8577-efea-4a39-ac5f-27e3d7021273-config-volume") pod "coredns-66bff467f8-8nhsz" (UID: "be3a8577-efea-4a39-ac5f-27e3d7021273")
* Jul 01 03:15:52 multinode-20200701031411-8084 kubelet[3801]: W0701 03:15:52.176351 3801 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-8nhsz through plugin: invalid network status for
* Jul 01 03:15:52 multinode-20200701031411-8084 kubelet[3801]: W0701 03:15:52.818981 3801 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-8nhsz through plugin: invalid network status for
* Jul 01 03:17:12 multinode-20200701031411-8084 kubelet[3801]: I0701 03:17:12.982608 3801 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Jul 01 03:17:13 multinode-20200701031411-8084 kubelet[3801]: I0701 03:17:13.060112 3801 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/559ae3e0-d54c-4eeb-8feb-cceb9b4a7aa2-lib-modules") pod "kindnet-kggl9" (UID: "559ae3e0-d54c-4eeb-8feb-cceb9b4a7aa2")
* Jul 01 03:17:13 multinode-20200701031411-8084 kubelet[3801]: I0701 03:17:13.060156 3801 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-qvf5d" (UniqueName: "kubernetes.io/secret/559ae3e0-d54c-4eeb-8feb-cceb9b4a7aa2-kindnet-token-qvf5d") pod "kindnet-kggl9" (UID: "559ae3e0-d54c-4eeb-8feb-cceb9b4a7aa2")
* Jul 01 03:17:13 multinode-20200701031411-8084 kubelet[3801]: I0701 03:17:13.060189 3801 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/559ae3e0-d54c-4eeb-8feb-cceb9b4a7aa2-xtables-lock") pod "kindnet-kggl9" (UID: "559ae3e0-d54c-4eeb-8feb-cceb9b4a7aa2")
* Jul 01 03:17:13 multinode-20200701031411-8084 kubelet[3801]: I0701 03:17:13.060212 3801 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/559ae3e0-d54c-4eeb-8feb-cceb9b4a7aa2-cni-cfg") pod "kindnet-kggl9" (UID: "559ae3e0-d54c-4eeb-8feb-cceb9b4a7aa2")
* Jul 01 03:17:56 multinode-20200701031411-8084 kubelet[3801]: E0701 03:17:56.897678 3801 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-multinode-20200701031411-8084.161d828cda11499f", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"438", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"etcd-multinode-20200701031411-8084", UID:"62860af99f3e9000295a050c2dc66fec", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{etcd}"}, Reason:"Unhealthy", Message:"Liveness probe failed: HTTP probe failed with statuscode: 503", Source:v1.EventSource{Component:"kubelet", Host:"multinode-20200701031411-8084"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729170184, loc:(*time.Location)(0x701d4a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfb71df6253ce2f0, ext:126809739524, loc:(*time.Location)(0x701d4a0)}}, Count:4, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: request timed out' (will not retry!)
* Jul 01 03:18:04 multinode-20200701031411-8084 kubelet[3801]: E0701 03:18:04.517163 3801 controller.go:178] failed to update node lease, error: etcdserver: request timed out
* Jul 01 03:18:10 multinode-20200701031411-8084 kubelet[3801]: E0701 03:18:10.900755 3801 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-multinode-20200701031411-8084.161d829112dbabdd", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"518", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-multinode-20200701031411-8084", UID:"89e0572a88a3cba62a6f9570b3c55bad", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Liveness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"multinode-20200701031411-8084"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729170202, loc:(*time.Location)(0x701d4a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfb71df82d3610c6, ext:134943510333, loc:(*time.Location)(0x701d4a0)}}, Count:4, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: request timed out' (will not retry!)
* Jul 01 03:18:11 multinode-20200701031411-8084 kubelet[3801]: E0701 03:18:11.521591 3801 controller.go:178] failed to update node lease, error: etcdserver: request timed out
* Jul 01 03:18:11 multinode-20200701031411-8084 kubelet[3801]: E0701 03:18:11.523247 3801 event.go:269] Unable to write event: 'Patch https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/etcd-multinode-20200701031411-8084.161d828cda11499f: read tcp 192.168.39.89:43352->192.168.39.89:8443: use of closed network connection' (may retry after sleeping)
* Jul 01 03:18:18 multinode-20200701031411-8084 kubelet[3801]: E0701 03:18:18.553931 3801 controller.go:178] failed to update node lease, error: etcdserver: request timed out
* Jul 01 03:18:18 multinode-20200701031411-8084 kubelet[3801]: E0701 03:18:18.554807 3801 event.go:269] Unable to write event: 'Patch https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/etcd-multinode-20200701031411-8084.161d828cda11499f: read tcp 192.168.39.89:43910->192.168.39.89:8443: use of closed network connection' (may retry after sleeping)
* Jul 01 03:18:22 multinode-20200701031411-8084 kubelet[3801]: E0701 03:18:22.193509 3801 controller.go:178] failed to update node lease, error: Operation cannot be fulfilled on leases.coordination.k8s.io "multinode-20200701031411-8084": the object has been modified; please apply your changes to the latest version and try again
*
* ==> storage-provisioner [8175d5651171] <==
-- /stdout --
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-20200701031411-8084 -n multinode-20200701031411-8084
helpers_test.go:254: (dbg) Run: kubectl --context multinode-20200701031411-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:254: (dbg) Non-zero exit: kubectl --context multinode-20200701031411-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: exec: "kubectl": executable file not found in $PATH (429ns)
helpers_test.go:256: kubectl --context multinode-20200701031411-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: exec: "kubectl": executable file not found in $PATH
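The non-zero exit above is environmental: the Jenkins agent has no kubectl on $PATH, so the post-mortem pod listing is skipped rather than failing the cluster check. A hypothetical alternative, not something the harness ran here, would be minikube's bundled kubectl with the same profile and the same query arguments as the original command:
out/minikube-linux-amd64 -p multinode-20200701031411-8084 kubectl -- get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running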
--- PASS: TestMultiNode/serial/DeleteNode (1.39s)
multinode_test.go:245: (dbg) Run: out/minikube-linux-amd64 -p multinode-20200701031411-8084 node delete m03
multinode_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p multinode-20200701031411-8084 node delete m03: (1.05005781s)
multinode_test.go:251: (dbg) Run: out/minikube-linux-amd64 -p multinode-20200701031411-8084 status --alsologtostderr
--- PASS: TestMultiNode/serial/StopMultiNode (7.23s)
multinode_test.go:183: (dbg) Run: out/minikube-linux-amd64 -p multinode-20200701031411-8084 stop
multinode_test.go:183: (dbg) Done: out/minikube-linux-amd64 -p multinode-20200701031411-8084 stop: (7.107438729s)
multinode_test.go:189: (dbg) Run: out/minikube-linux-amd64 -p multinode-20200701031411-8084 status
multinode_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20200701031411-8084 status: exit status 7 (59.622212ms)
-- stdout --
multinode-20200701031411-8084
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
multinode-20200701031411-8084-m02
type: Worker
host: Stopped
kubelet: Stopped
-- /stdout --
multinode_test.go:196: (dbg) Run: out/minikube-linux-amd64 -p multinode-20200701031411-8084 status --alsologtostderr
multinode_test.go:196: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20200701031411-8084 status --alsologtostderr: exit status 7 (60.885795ms)
-- stdout --
multinode-20200701031411-8084
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
multinode-20200701031411-8084-m02
type: Worker
host: Stopped
kubelet: Stopped
-- /stdout --
** stderr **
I0701 03:19:12.818126 13760 mustload.go:64] Loading cluster: multinode-20200701031411-8084
I0701 03:19:12.818302 13760 status.go:124] checking status of multinode-20200701031411-8084 ...
I0701 03:19:12.818600 13760 main.go:115] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0701 03:19:12.818660 13760 main.go:115] libmachine: Launching plugin server for driver kvm2
I0701 03:19:12.830557 13760 main.go:115] libmachine: Plugin server listening at address 127.0.0.1:44455
I0701 03:19:12.830935 13760 main.go:115] libmachine: () Calling .GetVersion
I0701 03:19:12.831286 13760 main.go:115] libmachine: Using API Version 1
I0701 03:19:12.831308 13760 main.go:115] libmachine: () Calling .SetConfigRaw
I0701 03:19:12.831569 13760 main.go:115] libmachine: () Calling .GetMachineName
I0701 03:19:12.831671 13760 main.go:115] libmachine: (multinode-20200701031411-8084) Calling .GetState
I0701 03:19:12.834017 13760 status.go:188] multinode-20200701031411-8084 host status = "Stopped" (err=<nil>)
I0701 03:19:12.834032 13760 status.go:201] host is not running, skipping remaining checks
I0701 03:19:12.834036 13760 status.go:126] multinode-20200701031411-8084 status: &{Name:multinode-20200701031411-8084 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false}
I0701 03:19:12.834048 13760 status.go:124] checking status of multinode-20200701031411-8084-m02 ...
I0701 03:19:12.834293 13760 main.go:115] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0701 03:19:12.834325 13760 main.go:115] libmachine: Launching plugin server for driver kvm2
I0701 03:19:12.846367 13760 main.go:115] libmachine: Plugin server listening at address 127.0.0.1:44561
I0701 03:19:12.847086 13760 main.go:115] libmachine: () Calling .GetVersion
I0701 03:19:12.847390 13760 main.go:115] libmachine: Using API Version 1
I0701 03:19:12.847407 13760 main.go:115] libmachine: () Calling .SetConfigRaw
I0701 03:19:12.847656 13760 main.go:115] libmachine: () Calling .GetMachineName
I0701 03:19:12.847802 13760 main.go:115] libmachine: (multinode-20200701031411-8084-m02) Calling .GetState
I0701 03:19:12.849717 13760 status.go:188] multinode-20200701031411-8084-m02 host status = "Stopped" (err=<nil>)
I0701 03:19:12.849726 13760 status.go:201] host is not running, skipping remaining checks
I0701 03:19:12.849730 13760 status.go:126] multinode-20200701031411-8084-m02 status: &{Name:multinode-20200701031411-8084-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true}
** /stderr **
--- PASS: TestMultiNode/serial/RestartMultiNode (91.76s)
multinode_test.go:222: (dbg) Run: out/minikube-linux-amd64 start -p multinode-20200701031411-8084 --driver=kvm2
multinode_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20200701031411-8084 --driver=kvm2 : (1m31.396352567s)
multinode_test.go:228: (dbg) Run: out/minikube-linux-amd64 -p multinode-20200701031411-8084 status --alsologtostderr
multinode_test.go:60: *** TestMultiNode FAILED at 2020-07-01 03:20:44.608469192 +0000 UTC m=+1449.709757822
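What follows is the harness's post-mortem collection. The same data can be gathered by hand against this profile with the invocations the helpers record below; a sketch, assuming the same binary path and profile name as this job:

    # Reproduce the post-mortem by hand: overall host state first, then a trimmed
    # "minikube logs" dump using the same flags the test helpers pass.
    out/minikube-linux-amd64 status --format={{.Host}} -p multinode-20200701031411-8084 -n multinode-20200701031411-8084
    out/minikube-linux-amd64 -p multinode-20200701031411-8084 logs -n 25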
helpers_test.go:215: -----------------------post-mortem--------------------------------
helpers_test.go:232: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-20200701031411-8084 -n multinode-20200701031411-8084
helpers_test.go:237: <<< TestMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:238: ======> post-mortem[TestMultiNode]: minikube logs <======
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 -p multinode-20200701031411-8084 logs -n 25
helpers_test.go:240: (dbg) Done: out/minikube-linux-amd64 -p multinode-20200701031411-8084 logs -n 25: (1.095203944s)
helpers_test.go:245: TestMultiNode logs:
-- stdout --
* ==> Docker <==
* -- Logs begin at Wed 2020-07-01 03:19:21 UTC, end at Wed 2020-07-01 03:20:45 UTC. --
* Jul 01 03:19:26 multinode-20200701031411-8084 dockerd[1961]: time="2020-07-01T03:19:26.124990049Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
* Jul 01 03:19:26 multinode-20200701031411-8084 dockerd[1961]: time="2020-07-01T03:19:26.125085014Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
* Jul 01 03:19:26 multinode-20200701031411-8084 dockerd[1961]: time="2020-07-01T03:19:26.125215788Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
* Jul 01 03:19:26 multinode-20200701031411-8084 dockerd[1961]: time="2020-07-01T03:19:26.125306832Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
* Jul 01 03:19:26 multinode-20200701031411-8084 dockerd[1961]: time="2020-07-01T03:19:26.125397194Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
* Jul 01 03:19:26 multinode-20200701031411-8084 dockerd[1961]: time="2020-07-01T03:19:26.125693798Z" level=info msg="Loading containers: start."
* Jul 01 03:19:26 multinode-20200701031411-8084 dockerd[1961]: time="2020-07-01T03:19:26.429048840Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
* Jul 01 03:19:26 multinode-20200701031411-8084 dockerd[1961]: time="2020-07-01T03:19:26.510086962Z" level=info msg="Loading containers: done."
* Jul 01 03:19:26 multinode-20200701031411-8084 dockerd[1961]: time="2020-07-01T03:19:26.536717916Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
* Jul 01 03:19:26 multinode-20200701031411-8084 dockerd[1961]: time="2020-07-01T03:19:26.536989189Z" level=info msg="Daemon has completed initialization"
* Jul 01 03:19:26 multinode-20200701031411-8084 systemd[1]: Started Docker Application Container Engine.
* Jul 01 03:19:26 multinode-20200701031411-8084 dockerd[1961]: time="2020-07-01T03:19:26.555677960Z" level=info msg="API listen on /var/run/docker.sock"
* Jul 01 03:19:26 multinode-20200701031411-8084 dockerd[1961]: time="2020-07-01T03:19:26.555757797Z" level=info msg="API listen on [::]:2376"
* Jul 01 03:19:30 multinode-20200701031411-8084 dockerd[1961]: time="2020-07-01T03:19:30.126078172Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/799a62d860b7625a48f630f39f21813791f02396ff215f4931f420264a9b25de/shim.sock" debug=false pid=2908
* Jul 01 03:19:30 multinode-20200701031411-8084 dockerd[1961]: time="2020-07-01T03:19:30.152947717Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ae32ff2c4754f7d55d31c2c9f9fd46be421d807d77d7ce5718cb8921f77c359f/shim.sock" debug=false pid=2919
* Jul 01 03:19:30 multinode-20200701031411-8084 dockerd[1961]: time="2020-07-01T03:19:30.226224891Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3c9370fac43efc802e11143fab46fc3b26656c6e305615239ffdda5d84b1ee2b/shim.sock" debug=false pid=2949
* Jul 01 03:19:30 multinode-20200701031411-8084 dockerd[1961]: time="2020-07-01T03:19:30.258056371Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f6540696ea2b32b1951a50b815cede93c218a0f1ed32f282b82f5ac2cfe62998/shim.sock" debug=false pid=2968
* Jul 01 03:19:30 multinode-20200701031411-8084 dockerd[1961]: time="2020-07-01T03:19:30.530654016Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7ec57bf5df2408c835b27672394b4feadbeecfb7bd61357c4f061a1b4b7d1805/shim.sock" debug=false pid=3048
* Jul 01 03:19:30 multinode-20200701031411-8084 dockerd[1961]: time="2020-07-01T03:19:30.611137651Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/54f75a4fa4afd45441834aca8332368d9fa1ab90beb2ff6858d6c24d211628c2/shim.sock" debug=false pid=3075
* Jul 01 03:19:30 multinode-20200701031411-8084 dockerd[1961]: time="2020-07-01T03:19:30.695421024Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/02692c1d6082fcbf05233342fe13df6bf0a2ffefa4a0d9c18db49377712f5c0b/shim.sock" debug=false pid=3112
* Jul 01 03:19:30 multinode-20200701031411-8084 dockerd[1961]: time="2020-07-01T03:19:30.724511510Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/88938b456dfd07ec0a65a9eb7e175c146d472171b2b6380fce846be780d3ef56/shim.sock" debug=false pid=3123
* Jul 01 03:19:37 multinode-20200701031411-8084 dockerd[1961]: time="2020-07-01T03:19:37.876703559Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9c5065b7aa3a2f23c0e0552e08269e98d940e86b85a62101c3d0b3d5feeee21c/shim.sock" debug=false pid=3524
* Jul 01 03:19:38 multinode-20200701031411-8084 dockerd[1961]: time="2020-07-01T03:19:38.142386277Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/85314c220aff238c685d578cc0c7876f8eeab778b3f430734bafbfe76cda208b/shim.sock" debug=false pid=3570
* Jul 01 03:19:38 multinode-20200701031411-8084 dockerd[1961]: time="2020-07-01T03:19:38.868406625Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9b907194e9391a90ba20d7f089376ce7075f84173385cb1310e59051fbbb5897/shim.sock" debug=false pid=3687
* Jul 01 03:19:39 multinode-20200701031411-8084 dockerd[1961]: time="2020-07-01T03:19:39.352992136Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/cf1327a33a304ddd156dab6dbb53a8aef48218515946ef4096ca8059f46d020e/shim.sock" debug=false pid=3758
*
* ==> container status <==
* CONTAINER       IMAGE           CREATED              STATE    NAME                      ATTEMPT  POD ID
* cf1327a33a304   2186a1a396deb   About a minute ago   Running  kindnet-cni               0        9b907194e9391
* 85314c220aff2   3439b7546f29b   About a minute ago   Running  kube-proxy                0        9c5065b7aa3a2
* 88938b456dfd0   76216c34ed0c7   About a minute ago   Running  kube-scheduler            1        3c9370fac43ef
* 02692c1d6082f   303ce5db0e90d   About a minute ago   Running  etcd                      1        f6540696ea2b3
* 54f75a4fa4afd   da26705ccb4b5   About a minute ago   Running  kube-controller-manager   0        ae32ff2c4754f
* 7ec57bf5df240   7e28efa976bd1   About a minute ago   Running  kube-apiserver            1        799a62d860b76
* 697c3f39861e1   76216c34ed0c7   5 minutes ago        Exited   kube-scheduler            0        5588b71272198
* f9c0c6a900dcf   7e28efa976bd1   5 minutes ago        Exited   kube-apiserver            0        0f30e3e875948
* b7a14319e5eba   303ce5db0e90d   5 minutes ago        Exited   etcd                      0        ee162b48c7c96
*
* ==> describe nodes <==
* Name: multinode-20200701031411-8084
* Roles: master
* Labels: beta.kubernetes.io/arch=amd64
* beta.kubernetes.io/os=linux
* kubernetes.io/arch=amd64
* kubernetes.io/hostname=multinode-20200701031411-8084
* kubernetes.io/os=linux
* minikube.k8s.io/commit=8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f
* minikube.k8s.io/name=multinode-20200701031411-8084
* minikube.k8s.io/updated_at=2020_07_01T03_15_38_0700
* minikube.k8s.io/version=v1.12.0-beta.0
* node-role.kubernetes.io/master=
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
* node.alpha.kubernetes.io/ttl: 0
* volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp: Wed, 01 Jul 2020 03:15:35 +0000
* Taints: node.kubernetes.io/unreachable:NoSchedule
* Unschedulable: false
* Lease:
* HolderIdentity: multinode-20200701031411-8084
* AcquireTime: <unset>
* RenewTime: Wed, 01 Jul 2020 03:18:56 +0000
* Conditions:
* Type Status LastHeartbeatTime LastTransitionTime Reason Message
* ---- ------ ----------------- ------------------ ------ -------
* MemoryPressure Unknown Wed, 01 Jul 2020 03:17:39 +0000 Wed, 01 Jul 2020 03:20:32 +0000 NodeStatusUnknown Kubelet stopped posting node status.
* DiskPressure Unknown Wed, 01 Jul 2020 03:17:39 +0000 Wed, 01 Jul 2020 03:20:32 +0000 NodeStatusUnknown Kubelet stopped posting node status.
* PIDPressure Unknown Wed, 01 Jul 2020 03:17:39 +0000 Wed, 01 Jul 2020 03:20:32 +0000 NodeStatusUnknown Kubelet stopped posting node status.
* Ready Unknown Wed, 01 Jul 2020 03:17:39 +0000 Wed, 01 Jul 2020 03:20:32 +0000 NodeStatusUnknown Kubelet stopped posting node status.
* Addresses:
* InternalIP: 192.168.39.89
* Hostname: multinode-20200701031411-8084
* Capacity:
* cpu: 2
* ephemeral-storage: 16954224Ki
* hugepages-2Mi: 0
* memory: 2085672Ki
* pods: 110
* Allocatable:
* cpu: 2
* ephemeral-storage: 16954224Ki
* hugepages-2Mi: 0
* memory: 2085672Ki
* pods: 110
* System Info:
* Machine ID: ab95144a8fa94e94a66d45a244abcde3
* System UUID: ab95144a-8fa9-4e94-a66d-45a244abcde3
* Boot ID: 77e0a14d-c5e8-49cc-9d5e-b6fca6f226e6
* Kernel Version: 4.19.107
* OS Image: Buildroot 2019.02.10
* Operating System: linux
* Architecture: amd64
* Container Runtime Version: docker://19.3.8
* Kubelet Version: v1.18.3
* Kube-Proxy Version: v1.18.3
* PodCIDR: 10.244.0.0/24
* PodCIDRs: 10.244.0.0/24
* Non-terminated Pods: (8 in total)
* Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
* --------- ---- ------------ ---------- --------------- ------------- ---
* kube-system coredns-66bff467f8-8nhsz 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 5m1s
* kube-system etcd-multinode-20200701031411-8084 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m7s
* kube-system kindnet-kggl9 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 3m33s
* kube-system kube-apiserver-multinode-20200701031411-8084 250m (12%) 0 (0%) 0 (0%) 0 (0%) 5m7s
* kube-system kube-controller-manager-multinode-20200701031411-8084 200m (10%) 0 (0%) 0 (0%) 0 (0%) 5m6s
* kube-system kube-proxy-dqqlj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m1s
* kube-system kube-scheduler-multinode-20200701031411-8084 100m (5%) 0 (0%) 0 (0%) 0 (0%) 5m6s
* kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m
* Allocated resources:
* (Total limits may be over 100 percent, i.e., overcommitted.)
* Resource Requests Limits
* -------- -------- ------
* cpu 750m (37%) 100m (5%)
* memory 120Mi (5%) 220Mi (10%)
* ephemeral-storage 0 (0%) 0 (0%)
* hugepages-2Mi 0 (0%) 0 (0%)
* Events:
* Type Reason Age From Message
* ---- ------ ---- ---- -------
* Normal NodeHasSufficientMemory 5m16s (x5 over 5m16s) kubelet, multinode-20200701031411-8084 Node multinode-20200701031411-8084 status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 5m16s (x5 over 5m16s) kubelet, multinode-20200701031411-8084 Node multinode-20200701031411-8084 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 5m16s (x4 over 5m16s) kubelet, multinode-20200701031411-8084 Node multinode-20200701031411-8084 status is now: NodeHasSufficientPID
* Normal NodeAllocatableEnforced 5m16s kubelet, multinode-20200701031411-8084 Updated Node Allocatable limit across pods
* Normal Starting 5m7s kubelet, multinode-20200701031411-8084 Starting kubelet.
* Normal NodeHasSufficientMemory 5m7s kubelet, multinode-20200701031411-8084 Node multinode-20200701031411-8084 status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 5m7s kubelet, multinode-20200701031411-8084 Node multinode-20200701031411-8084 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 5m7s kubelet, multinode-20200701031411-8084 Node multinode-20200701031411-8084 status is now: NodeHasSufficientPID
* Normal NodeNotReady 5m7s kubelet, multinode-20200701031411-8084 Node multinode-20200701031411-8084 status is now: NodeNotReady
* Normal NodeAllocatableEnforced 5m7s kubelet, multinode-20200701031411-8084 Updated Node Allocatable limit across pods
* Normal Starting 5m kube-proxy, multinode-20200701031411-8084 Starting kube-proxy.
* Normal NodeReady 4m57s kubelet, multinode-20200701031411-8084 Node multinode-20200701031411-8084 status is now: NodeReady
*
*
* Name: multinode-20200701031411-8084-m02
* Roles: <none>
* Labels: beta.kubernetes.io/arch=amd64
* beta.kubernetes.io/os=linux
* kubernetes.io/arch=amd64
* kubernetes.io/hostname=multinode-20200701031411-8084-m02
* kubernetes.io/os=linux
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
* node.alpha.kubernetes.io/ttl: 0
* volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp: Wed, 01 Jul 2020 03:17:11 +0000
* Taints: <none>
* Unschedulable: false
* Lease:
* HolderIdentity: multinode-20200701031411-8084-m02
* AcquireTime: <unset>
* RenewTime: Wed, 01 Jul 2020 03:20:43 +0000
* Conditions:
* Type Status LastHeartbeatTime LastTransitionTime Reason Message
* ---- ------ ----------------- ------------------ ------ -------
* MemoryPressure False Wed, 01 Jul 2020 03:20:43 +0000 Wed, 01 Jul 2020 03:20:43 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
* DiskPressure False Wed, 01 Jul 2020 03:20:43 +0000 Wed, 01 Jul 2020 03:20:43 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
* PIDPressure False Wed, 01 Jul 2020 03:20:43 +0000 Wed, 01 Jul 2020 03:20:43 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
* Ready True Wed, 01 Jul 2020 03:20:43 +0000 Wed, 01 Jul 2020 03:20:43 +0000 KubeletReady kubelet is posting ready status
* Addresses:
* InternalIP: 192.168.39.66
* Hostname: multinode-20200701031411-8084-m02
* Capacity:
* cpu: 2
* ephemeral-storage: 1877108Ki
* hugepages-2Mi: 0
* memory: 2085672Ki
* pods: 110
* Allocatable:
* cpu: 2
* ephemeral-storage: 1877108Ki
* hugepages-2Mi: 0
* memory: 2085672Ki
* pods: 110
* System Info:
* Machine ID: ab87200d6c4e40c09a467a879b5db5a7
* System UUID: ab87200d-6c4e-40c0-9a46-7a879b5db5a7
* Boot ID: b9179d5b-b53a-4853-bba7-f435728cd90a
* Kernel Version: 4.19.107
* OS Image: Buildroot 2019.02.10
* Operating System: linux
* Architecture: amd64
* Container Runtime Version: docker://19.3.8
* Kubelet Version: v1.18.3
* Kube-Proxy Version: v1.18.3
* PodCIDR: 10.244.1.0/24
* PodCIDRs: 10.244.1.0/24
* Non-terminated Pods: (2 in total)
* Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
* --------- ---- ------------ ---------- --------------- ------------- ---
* kube-system kindnet-m6j4t 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 3m33s
* kube-system kube-proxy-54r4l 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m34s
* Allocated resources:
* (Total limits may be over 100 percent, i.e., overcommitted.)
* Resource Requests Limits
* -------- -------- ------
* cpu 100m (5%) 100m (5%)
* memory 50Mi (2%) 50Mi (2%)
* ephemeral-storage 0 (0%) 0 (0%)
* hugepages-2Mi 0 (0%) 0 (0%)
* Events:
* Type Reason Age From Message
* ---- ------ ---- ---- -------
* Normal Starting 3m35s kubelet, multinode-20200701031411-8084-m02 Starting kubelet.
* Normal NodeHasSufficientMemory 3m34s (x2 over 3m35s) kubelet, multinode-20200701031411-8084-m02 Node multinode-20200701031411-8084-m02 status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 3m34s (x2 over 3m35s) kubelet, multinode-20200701031411-8084-m02 Node multinode-20200701031411-8084-m02 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 3m34s (x2 over 3m35s) kubelet, multinode-20200701031411-8084-m02 Node multinode-20200701031411-8084-m02 status is now: NodeHasSufficientPID
* Normal NodeAllocatableEnforced 3m34s kubelet, multinode-20200701031411-8084-m02 Updated Node Allocatable limit across pods
* Normal Starting 3m33s kube-proxy, multinode-20200701031411-8084-m02 Starting kube-proxy.
* Normal NodeReady 3m24s kubelet, multinode-20200701031411-8084-m02 Node multinode-20200701031411-8084-m02 status is now: NodeReady
* Normal NodeAllocatableEnforced 76s kubelet, multinode-20200701031411-8084-m02 Updated Node Allocatable limit across pods
* Normal Starting 76s kubelet, multinode-20200701031411-8084-m02 Starting kubelet.
* Normal NodeHasSufficientMemory 76s (x8 over 76s) kubelet, multinode-20200701031411-8084-m02 Node multinode-20200701031411-8084-m02 status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 76s (x8 over 76s) kubelet, multinode-20200701031411-8084-m02 Node multinode-20200701031411-8084-m02 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 76s (x7 over 76s) kubelet, multinode-20200701031411-8084-m02 Node multinode-20200701031411-8084-m02 status is now: NodeHasSufficientPID
* Normal Starting 67s kube-proxy, multinode-20200701031411-8084-m02 Starting kube-proxy.
* Normal Starting 2s kubelet, multinode-20200701031411-8084-m02 Starting kubelet.
* Normal NodeHasSufficientMemory 2s (x2 over 2s) kubelet, multinode-20200701031411-8084-m02 Node multinode-20200701031411-8084-m02 status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 2s (x2 over 2s) kubelet, multinode-20200701031411-8084-m02 Node multinode-20200701031411-8084-m02 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 2s (x2 over 2s) kubelet, multinode-20200701031411-8084-m02 Node multinode-20200701031411-8084-m02 status is now: NodeHasSufficientPID
* Normal NodeAllocatableEnforced 2s kubelet, multinode-20200701031411-8084-m02 Updated Node Allocatable limit across pods
* Warning Rebooted 2s kubelet, multinode-20200701031411-8084-m02 Node multinode-20200701031411-8084-m02 has been rebooted, boot id: b9179d5b-b53a-4853-bba7-f435728cd90a
* Normal NodeReady 2s kubelet, multinode-20200701031411-8084-m02 Node multinode-20200701031411-8084-m02 status is now: NodeReady
* Normal Starting 0s kube-proxy, multinode-20200701031411-8084-m02 Starting kube-proxy.
*
*
* Name: multinode-20200701031411-8084-m03
* Roles: <none>
* Labels: beta.kubernetes.io/arch=amd64
* beta.kubernetes.io/os=linux
* kubernetes.io/arch=amd64
* kubernetes.io/hostname=multinode-20200701031411-8084-m03
* kubernetes.io/os=linux
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
* node.alpha.kubernetes.io/ttl: 0
* volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp: Wed, 01 Jul 2020 03:18:39 +0000
* Taints: node.kubernetes.io/unreachable:NoSchedule
* Unschedulable: false
* Lease:
* HolderIdentity: multinode-20200701031411-8084-m03
* AcquireTime: <unset>
* RenewTime: Wed, 01 Jul 2020 03:19:01 +0000
* Conditions:
* Type Status LastHeartbeatTime LastTransitionTime Reason Message
* ---- ------ ----------------- ------------------ ------ -------
* MemoryPressure Unknown Wed, 01 Jul 2020 03:19:01 +0000 Wed, 01 Jul 2020 03:20:32 +0000 NodeStatusUnknown Kubelet stopped posting node status.
* DiskPressure Unknown Wed, 01 Jul 2020 03:19:01 +0000 Wed, 01 Jul 2020 03:20:32 +0000 NodeStatusUnknown Kubelet stopped posting node status.
* PIDPressure Unknown Wed, 01 Jul 2020 03:19:01 +0000 Wed, 01 Jul 2020 03:20:32 +0000 NodeStatusUnknown Kubelet stopped posting node status.
* Ready Unknown Wed, 01 Jul 2020 03:19:01 +0000 Wed, 01 Jul 2020 03:20:32 +0000 NodeStatusUnknown Kubelet stopped posting node status.
* Addresses:
* InternalIP: 192.168.39.60
* Hostname: multinode-20200701031411-8084-m03
* Capacity:
* cpu: 2
* ephemeral-storage: 1877108Ki
* hugepages-2Mi: 0
* memory: 2085672Ki
* pods: 110
* Allocatable:
* cpu: 2
* ephemeral-storage: 1877108Ki
* hugepages-2Mi: 0
* memory: 2085672Ki
* pods: 110
* System Info:
* Machine ID: 3eb47f79fa0c4affa4e62f1c1f0a5c49
* System UUID: 3eb47f79-fa0c-4aff-a4e6-2f1c1f0a5c49
* Boot ID: 9d5c0837-0627-4633-a592-3867adcfd049
* Kernel Version: 4.19.107
* OS Image: Buildroot 2019.02.10
* Operating System: linux
* Architecture: amd64
* Container Runtime Version: docker://19.3.8
* Kubelet Version: v1.18.3
* Kube-Proxy Version: v1.18.3
* PodCIDR: 10.244.2.0/24
* PodCIDRs: 10.244.2.0/24
* Non-terminated Pods: (2 in total)
* Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
* --------- ---- ------------ ---------- --------------- ------------- ---
* kube-system kindnet-gpk7b 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 2m6s
* kube-system kube-proxy-mbt6w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m6s
* Allocated resources:
* (Total limits may be over 100 percent, i.e., overcommitted.)
* Resource Requests Limits
* -------- -------- ------
* cpu 100m (5%) 100m (5%)
* memory 50Mi (2%) 50Mi (2%)
* ephemeral-storage 0 (0%) 0 (0%)
* hugepages-2Mi 0 (0%) 0 (0%)
* Events:
* Type Reason Age From Message
* ---- ------ ---- ---- -------
* Normal Starting 2m6s kubelet, multinode-20200701031411-8084-m03 Starting kubelet.
* Normal NodeHasSufficientMemory 2m6s (x2 over 2m6s) kubelet, multinode-20200701031411-8084-m03 Node multinode-20200701031411-8084-m03 status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 2m6s (x2 over 2m6s) kubelet, multinode-20200701031411-8084-m03 Node multinode-20200701031411-8084-m03 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 2m6s (x2 over 2m6s) kubelet, multinode-20200701031411-8084-m03 Node multinode-20200701031411-8084-m03 status is now: NodeHasSufficientPID
* Normal NodeAllocatableEnforced 2m6s kubelet, multinode-20200701031411-8084-m03 Updated Node Allocatable limit across pods
* Normal NodeHasSufficientMemory 104s kubelet, multinode-20200701031411-8084-m03 Node multinode-20200701031411-8084-m03 status is now: NodeHasSufficientMemory
* Normal Starting 104s kubelet, multinode-20200701031411-8084-m03 Starting kubelet.
* Normal NodeHasNoDiskPressure 104s kubelet, multinode-20200701031411-8084-m03 Node multinode-20200701031411-8084-m03 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 104s kubelet, multinode-20200701031411-8084-m03 Node multinode-20200701031411-8084-m03 status is now: NodeHasSufficientPID
* Normal NodeAllocatableEnforced 104s kubelet, multinode-20200701031411-8084-m03 Updated Node Allocatable limit across pods
* Warning Rebooted 104s kubelet, multinode-20200701031411-8084-m03 Node multinode-20200701031411-8084-m03 has been rebooted, boot id: 9d5c0837-0627-4633-a592-3867adcfd049
* Normal NodeReady 104s kubelet, multinode-20200701031411-8084-m03 Node multinode-20200701031411-8084-m03 status is now: NodeReady
* Normal Starting 103s kube-proxy, multinode-20200701031411-8084-m03 Starting kube-proxy.
*
* ==> dmesg <==
* [Jul 1 03:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
* [ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
* [ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
* [ +0.049776] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
* [ +2.678474] Unstable clock detected, switching default tracing clock to "global"
* If you want to keep using the local clock, then add:
* "trace_clock=local"
* on the kernel command line
* [ +0.000064] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
* [ +1.942369] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
* [ +0.008060] systemd-fstab-generator[1148]: Ignoring "noauto" for root device
* [ +0.003198] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
* [ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
* [ +1.533612] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
* [ +0.015209] vboxguest: loading out-of-tree module taints kernel.
* [ +0.014059] vboxguest: PCI device not found, probably running on physical hardware.
* [ +3.457841] systemd-fstab-generator[1941]: Ignoring "noauto" for root device
* [ +0.097092] systemd-fstab-generator[1951]: Ignoring "noauto" for root device
* [ +1.102682] systemd-fstab-generator[2176]: Ignoring "noauto" for root device
* [ +0.311035] systemd-fstab-generator[2248]: Ignoring "noauto" for root device
* [ +0.198261] systemd-fstab-generator[2288]: Ignoring "noauto" for root device
* [ +11.106942] kauditd_printk_skb: 149 callbacks suppressed
* [Jul 1 03:20] kauditd_printk_skb: 50 callbacks suppressed
*
* ==> etcd [02692c1d6082] <==
* 2020-07-01 03:19:31.910913 I | embed: initial advertise peer URLs = https://192.168.39.89:2380
* 2020-07-01 03:19:31.911139 I | embed: initial cluster =
* 2020-07-01 03:19:31.933039 I | etcdserver: restarting member 80c025fbdee905c7 in cluster 4f768a61c1c8bc1 at commit index 671
* raft2020/07/01 03:19:31 INFO: 80c025fbdee905c7 switched to configuration voters=()
* raft2020/07/01 03:19:31 INFO: 80c025fbdee905c7 became follower at term 2
* raft2020/07/01 03:19:31 INFO: newRaft 80c025fbdee905c7 [peers: [], term: 2, commit: 671, applied: 0, lastindex: 671, lastterm: 2]
* 2020-07-01 03:19:31.948149 W | auth: simple token is not cryptographically signed
* 2020-07-01 03:19:31.951113 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
* raft2020/07/01 03:19:31 INFO: 80c025fbdee905c7 switched to configuration voters=(9277456996090054087)
* 2020-07-01 03:19:31.956071 I | etcdserver/membership: added member 80c025fbdee905c7 [https://192.168.39.89:2380] to cluster 4f768a61c1c8bc1
* 2020-07-01 03:19:31.956423 N | etcdserver/membership: set the initial cluster version to 3.4
* 2020-07-01 03:19:31.956714 I | etcdserver/api: enabled capabilities for version 3.4
* 2020-07-01 03:19:31.957995 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
* 2020-07-01 03:19:31.963673 I | embed: listening for peers on 192.168.39.89:2380
* 2020-07-01 03:19:31.966031 I | embed: listening for metrics on http://127.0.0.1:2381
* raft2020/07/01 03:19:33 INFO: 80c025fbdee905c7 is starting a new election at term 2
* raft2020/07/01 03:19:33 INFO: 80c025fbdee905c7 became candidate at term 3
* raft2020/07/01 03:19:33 INFO: 80c025fbdee905c7 received MsgVoteResp from 80c025fbdee905c7 at term 3
* raft2020/07/01 03:19:33 INFO: 80c025fbdee905c7 became leader at term 3
* raft2020/07/01 03:19:33 INFO: raft.node: 80c025fbdee905c7 elected leader 80c025fbdee905c7 at term 3
* 2020-07-01 03:19:33.636279 I | etcdserver: published {Name:multinode-20200701031411-8084 ClientURLs:[https://192.168.39.89:2379]} to cluster 4f768a61c1c8bc1
* 2020-07-01 03:19:33.636375 I | embed: ready to serve client requests
* 2020-07-01 03:19:33.636464 I | embed: ready to serve client requests
* 2020-07-01 03:19:33.638092 I | embed: serving client requests on 127.0.0.1:2379
* 2020-07-01 03:19:33.639314 I | embed: serving client requests on 192.168.39.89:2379
*
* ==> etcd [b7a14319e5eb] <==
* 2020-07-01 03:18:32.105979 W | wal: sync duration of 2.374591007s, expected less than 1s
* 2020-07-01 03:18:32.325148 W | etcdserver: read-only range request "key:\"/registry/pods\" range_end:\"/registry/podt\" count_only:true " with result "range_response_count:0 size:7" took too long (2.521260977s) to execute
* 2020-07-01 03:18:32.325694 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:289" took too long (2.593934375s) to execute
* 2020-07-01 03:18:32.326221 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-apiserver-multinode-20200701031411-8084.161d829112dbabdd\" " with result "range_response_count:1 size:878" took too long (2.590646152s) to execute
* 2020-07-01 03:18:32.326611 W | etcdserver: read-only range request "key:\"/registry/validatingwebhookconfigurations\" range_end:\"/registry/validatingwebhookconfigurationt\" count_only:true " with result "range_response_count:0 size:5" took too long (607.57694ms) to execute
* 2020-07-01 03:18:32.327524 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.568585503s) to execute
* 2020-07-01 03:18:33.004568 W | etcdserver: read-only range request "key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" " with result "range_response_count:1 size:135" took too long (675.401124ms) to execute
* 2020-07-01 03:18:33.004853 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/etcd-multinode-20200701031411-8084.161d828cda11499f\" " with result "range_response_count:1 size:837" took too long (672.438112ms) to execute
* 2020-07-01 03:18:35.995028 W | wal: sync duration of 1.131054382s, expected less than 1s
* 2020-07-01 03:18:36.481335 W | etcdserver: read-only range request "key:\"/registry/endpointslices/default/kubernetes\" " with result "range_response_count:1 size:485" took too long (3.473563997s) to execute
* 2020-07-01 03:18:36.481655 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-apiserver-multinode-20200701031411-8084.161d829112dbabdd\" " with result "range_response_count:1 size:878" took too long (3.469935089s) to execute
* 2020-07-01 03:18:36.482733 W | etcdserver: request "header:<ID:416427970351102022 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-20200701031411-8084-m02\" mod_revision:523 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-20200701031411-8084-m02\" value_size:611 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-20200701031411-8084-m02\" > >>" with result "size:16" took too long (1.618395387s) to execute
* 2020-07-01 03:18:36.484401 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:2 size:10125" took too long (2.339791443s) to execute
* 2020-07-01 03:18:36.484801 W | etcdserver: read-only range request "key:\"/registry/ranges/serviceips\" " with result "range_response_count:1 size:118" took too long (1.267703684s) to execute
* 2020-07-01 03:18:36.485730 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices\" range_end:\"/registry/apiregistration.k8s.io/apiservicet\" count_only:true " with result "range_response_count:0 size:7" took too long (977.049824ms) to execute
* 2020-07-01 03:18:36.486005 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:263" took too long (1.23460203s) to execute
* 2020-07-01 03:18:36.489308 W | etcdserver: read-only range request "key:\"/registry/ranges/servicenodeports\" " with result "range_response_count:1 size:120" took too long (1.266341853s) to execute
* 2020-07-01 03:18:36.896316 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-public\" " with result "range_response_count:1 size:263" took too long (404.171683ms) to execute
* 2020-07-01 03:18:36.896897 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/etcd-multinode-20200701031411-8084.161d828cda11499f\" " with result "range_response_count:1 size:837" took too long (404.237698ms) to execute
* 2020-07-01 03:18:36.900377 W | etcdserver: read-only range request "key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" " with result "range_response_count:2 size:1762" took too long (405.634479ms) to execute
* 2020-07-01 03:18:36.900953 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (325.115623ms) to execute
* 2020-07-01 03:18:36.902024 W | etcdserver: read-only range request "key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" " with result "range_response_count:2 size:1762" took too long (407.546815ms) to execute
* 2020-07-01 03:19:06.247649 N | pkg/osutil: received terminated signal, shutting down...
* 2020-07-01 03:19:06.253107 I | etcdserver: skipped leadership transfer for single voting member cluster
* WARNING: 2020/07/01 03:19:06 grpc: addrConn.createTransport failed to connect to {192.168.39.89:2379 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.39.89:2379: connect: connection refused". Reconnecting...
*
* ==> kernel <==
* 03:20:45 up 1 min, 0 users, load average: 0.40, 0.20, 0.08
* Linux multinode-20200701031411-8084 4.19.107 #1 SMP Thu May 28 15:07:17 PDT 2020 x86_64 GNU/Linux
* PRETTY_NAME="Buildroot 2019.02.10"
*
* ==> kube-apiserver [7ec57bf5df24] <==
* I0701 03:19:36.574197 1 naming_controller.go:291] Starting NamingConditionController
* I0701 03:19:36.577508 1 establishing_controller.go:76] Starting EstablishingController
* I0701 03:19:36.577632 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
* I0701 03:19:36.577646 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
* I0701 03:19:36.577709 1 crdregistration_controller.go:111] Starting crd-autoregister controller
* I0701 03:19:36.577742 1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
* I0701 03:19:36.577773 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
* I0701 03:19:36.578307 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
* E0701 03:19:36.586292 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.89, ResourceVersion: 0, AdditionalErrorMsg:
* I0701 03:19:36.678001 1 shared_informer.go:230] Caches are synced for crd-autoregister
* I0701 03:19:36.720142 1 cache.go:39] Caches are synced for AvailableConditionController controller
* I0701 03:19:36.721506 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
* I0701 03:19:36.723737 1 cache.go:39] Caches are synced for autoregister controller
* I0701 03:19:36.729090 1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller
* I0701 03:19:37.514948 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
* I0701 03:19:37.515057 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
* I0701 03:19:37.531701 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
* I0701 03:19:38.486977 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
* I0701 03:19:38.719800 1 controller.go:606] quota admission added evaluator for: serviceaccounts
* I0701 03:19:38.736392 1 controller.go:606] quota admission added evaluator for: deployments.apps
* I0701 03:19:38.881037 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
* I0701 03:19:38.895300 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
* I0701 03:19:51.700840 1 controller.go:606] quota admission added evaluator for: endpoints
* I0701 03:20:32.333761 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
* I0701 03:20:43.680523 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
*
* ==> kube-apiserver [f9c0c6a900dc] <==
* Trace[807989985]: [2.11325552s] [2.112710053s] Transaction committed
* I0701 03:18:36.493332 1 trace.go:116] Trace[256754072]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20200701031411-8084-m02,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:192.168.39.66 (started: 2020-07-01 03:18:34.379366123 +0000 UTC m=+183.333558758) (total time: 2.113936352s):
* Trace[256754072]: [2.113799696s] [2.113681973s] Object stored in database
* I0701 03:19:06.214721 1 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
* I0701 03:19:06.214768 1 controller.go:181] Shutting down kubernetes service endpoint reconciler
* I0701 03:19:06.215221 1 controller.go:123] Shutting down OpenAPI controller
* I0701 03:19:06.215236 1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
* I0701 03:19:06.215247 1 nonstructuralschema_controller.go:198] Shutting down NonStructuralSchemaConditionController
* I0701 03:19:06.215256 1 establishing_controller.go:87] Shutting down EstablishingController
* I0701 03:19:06.215265 1 naming_controller.go:302] Shutting down NamingConditionController
* I0701 03:19:06.215274 1 customresource_discovery_controller.go:220] Shutting down DiscoveryController
* I0701 03:19:06.215282 1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
* I0701 03:19:06.215291 1 available_controller.go:399] Shutting down AvailableConditionController
* I0701 03:19:06.215306 1 autoregister_controller.go:165] Shutting down autoregister controller
* I0701 03:19:06.215316 1 crd_finalizer.go:278] Shutting down CRDFinalizer
* I0701 03:19:06.215327 1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
* I0701 03:19:06.215335 1 apiservice_controller.go:106] Shutting down APIServiceRegistrationController
* I0701 03:19:06.215832 1 controller.go:87] Shutting down OpenAPI AggregationController
* I0701 03:19:06.215857 1 dynamic_cafile_content.go:182] Shutting down request-header::/var/lib/minikube/certs/front-proxy-ca.crt
* I0701 03:19:06.215866 1 dynamic_cafile_content.go:182] Shutting down request-header::/var/lib/minikube/certs/front-proxy-ca.crt
* I0701 03:19:06.216192 1 tlsconfig.go:255] Shutting down DynamicServingCertificateController
* I0701 03:19:06.216206 1 dynamic_serving_content.go:145] Shutting down serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
* I0701 03:19:06.216222 1 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
* I0701 03:19:06.216581 1 secure_serving.go:222] Stopped listening on [::]:8443
* E0701 03:19:06.223228 1 controller.go:184] Get https://localhost:8443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp 127.0.0.1:8443: connect: connection refused
*
* ==> kube-controller-manager [54f75a4fa4af] <==
* I0701 03:19:52.228192 1 shared_informer.go:230] Caches are synced for daemon sets
* I0701 03:19:52.231251 1 shared_informer.go:230] Caches are synced for TTL
* I0701 03:19:52.233036 1 shared_informer.go:230] Caches are synced for node
* I0701 03:19:52.233662 1 range_allocator.go:172] Starting range CIDR allocator
* I0701 03:19:52.233746 1 shared_informer.go:223] Waiting for caches to sync for cidrallocator
* I0701 03:19:52.233767 1 shared_informer.go:230] Caches are synced for cidrallocator
* I0701 03:19:52.237788 1 shared_informer.go:230] Caches are synced for persistent volume
* I0701 03:19:52.247178 1 range_allocator.go:373] Set node multinode-20200701031411-8084 PodCIDR to [10.244.0.0/24]
* I0701 03:19:52.256376 1 range_allocator.go:373] Set node multinode-20200701031411-8084-m02 PodCIDR to [10.244.1.0/24]
* I0701 03:19:52.256516 1 range_allocator.go:373] Set node multinode-20200701031411-8084-m03 PodCIDR to [10.244.2.0/24]
* I0701 03:19:52.257391 1 shared_informer.go:230] Caches are synced for GC
* I0701 03:19:52.281410 1 shared_informer.go:230] Caches are synced for taint
* I0701 03:19:52.281561 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone:
* W0701 03:19:52.281913 1 node_lifecycle_controller.go:1048] Missing timestamp for Node multinode-20200701031411-8084. Assuming now as a timestamp.
* W0701 03:19:52.282058 1 node_lifecycle_controller.go:1048] Missing timestamp for Node multinode-20200701031411-8084-m02. Assuming now as a timestamp.
* W0701 03:19:52.282182 1 node_lifecycle_controller.go:1048] Missing timestamp for Node multinode-20200701031411-8084-m03. Assuming now as a timestamp.
* I0701 03:19:52.282241 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20200701031411-8084", UID:"03d9f3b8-01d7-40c9-9652-2e06d63b3bce", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node multinode-20200701031411-8084 event: Registered Node multinode-20200701031411-8084 in Controller
* I0701 03:19:52.282259 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20200701031411-8084-m02", UID:"5a85cb02-f23c-48a4-a04d-5c1d31fd812c", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node multinode-20200701031411-8084-m02 event: Registered Node multinode-20200701031411-8084-m02 in Controller
* I0701 03:19:52.282270 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20200701031411-8084-m03", UID:"bf72541b-a44c-411f-a422-9460a3cb59b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node multinode-20200701031411-8084-m03 event: Registered Node multinode-20200701031411-8084-m03 in Controller
* I0701 03:19:52.282278 1 node_lifecycle_controller.go:1249] Controller detected that zone is now in state Normal.
* I0701 03:19:52.281967 1 taint_manager.go:187] Starting NoExecuteTaintManager
* I0701 03:20:32.299231 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20200701031411-8084", UID:"03d9f3b8-01d7-40c9-9652-2e06d63b3bce", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node multinode-20200701031411-8084 status is now: NodeNotReady
* I0701 03:20:32.397365 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20200701031411-8084-m02", UID:"5a85cb02-f23c-48a4-a04d-5c1d31fd812c", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node multinode-20200701031411-8084-m02 status is now: NodeNotReady
* I0701 03:20:32.438019 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20200701031411-8084-m03", UID:"bf72541b-a44c-411f-a422-9460a3cb59b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node multinode-20200701031411-8084-m03 status is now: NodeNotReady
* I0701 03:20:32.447744 1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
*
* ==> kube-proxy [85314c220aff] <==
* W0701 03:19:38.598789 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
* I0701 03:19:38.614268 1 node.go:136] Successfully retrieved node IP: 192.168.39.66
* I0701 03:19:38.614271 1 server_others.go:186] Using iptables Proxier.
* W0701 03:19:38.614294 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
* I0701 03:19:38.614294 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
* I0701 03:19:38.615389 1 server.go:583] Version: v1.18.3
* I0701 03:19:38.616330 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
* I0701 03:19:38.616373 1 conntrack.go:52] Setting nf_conntrack_max to 131072
* I0701 03:19:38.616675 1 conntrack.go:83] Setting conntrack hashsize to 32768
* I0701 03:19:38.621695 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
* I0701 03:19:38.621793 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
* I0701 03:19:38.623503 1 config.go:315] Starting service config controller
* I0701 03:19:38.623514 1 shared_informer.go:223] Waiting for caches to sync for service config
* I0701 03:19:38.623716 1 config.go:133] Starting endpoints config controller
* I0701 03:19:38.623728 1 shared_informer.go:223] Waiting for caches to sync for endpoints config
* I0701 03:19:38.724666 1 shared_informer.go:230] Caches are synced for service config
* I0701 03:19:38.725268 1 shared_informer.go:230] Caches are synced for endpoints config
*
* ==> kube-scheduler [697c3f39861e] <==
* W0701 03:15:34.989589 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
* W0701 03:15:34.989865 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
* W0701 03:15:34.989996 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
* I0701 03:15:35.014906 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0701 03:15:35.014951 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* W0701 03:15:35.017210 1 authorization.go:47] Authorization is disabled
* W0701 03:15:35.017258 1 authentication.go:40] Authentication is disabled
* I0701 03:15:35.017274 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
* I0701 03:15:35.018827 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
* I0701 03:15:35.019022 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0701 03:15:35.019239 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0701 03:15:35.019389 1 tlsconfig.go:240] Starting DynamicServingCertificateController
* E0701 03:15:35.027621 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0701 03:15:35.028061 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0701 03:15:35.028180 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0701 03:15:35.044632 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0701 03:15:35.050664 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E0701 03:15:35.064237 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0701 03:15:35.064718 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E0701 03:15:35.065093 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0701 03:15:35.065378 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E0701 03:15:35.862783 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0701 03:15:36.133154 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* I0701 03:15:36.519493 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* E0701 03:15:44.583234 1 factory.go:503] pod: kube-system/coredns-66bff467f8-8nhsz is already present in the active queue
*
* ==> kube-scheduler [88938b456dfd] <==
* I0701 03:19:31.646827 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0701 03:19:31.646984 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0701 03:19:31.900373 1 serving.go:313] Generated self-signed cert in-memory
* W0701 03:19:36.639700 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
* W0701 03:19:36.639722 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
* W0701 03:19:36.639737 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
* W0701 03:19:36.639742 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
* I0701 03:19:36.662985 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0701 03:19:36.663168 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* W0701 03:19:36.669199 1 authorization.go:47] Authorization is disabled
* W0701 03:19:36.669272 1 authentication.go:40] Authentication is disabled
* I0701 03:19:36.669306 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
* I0701 03:19:36.671409 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
* I0701 03:19:36.672308 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0701 03:19:36.672384 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0701 03:19:36.672647 1 tlsconfig.go:240] Starting DynamicServingCertificateController
* I0701 03:19:36.773707 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Logs begin at Wed 2020-07-01 03:19:21 UTC, end at Wed 2020-07-01 03:20:45 UTC. --
* Jul 01 03:20:29 multinode-20200701031411-8084 kubelet[2370]: I0701 03:20:29.196786 2370 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5d17ab07a631df93cae8ccccfc423a811fa168505c11ecff7d72dbb57e222533
* Jul 01 03:20:29 multinode-20200701031411-8084 kubelet[2370]: I0701 03:20:29.219084 2370 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 2121efe9ad0922175582bb29ab61b12c53560833d84ad52403eb9fb9153d8d19
* Jul 01 03:20:29 multinode-20200701031411-8084 kubelet[2370]: W0701 03:20:29.245745 2370 status_manager.go:572] Failed to update status for pod "kube-proxy-54r4l_kube-system(3c06b8fd-6927-49bb-ac39-b0882634206d)": failed to patch status "{\"metadata\":{\"uid\":\"3c06b8fd-6927-49bb-ac39-b0882634206d\"},\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"Initialized\"},{\"type\":\"Ready\"},{\"type\":\"ContainersReady\"},{\"type\":\"PodScheduled\"}],\"conditions\":[{\"lastTransitionTime\":\"2020-07-01T03:19:38Z\",\"type\":\"Ready\"},{\"lastTransitionTime\":\"2020-07-01T03:19:38Z\",\"type\":\"ContainersReady\"}],\"containerStatuses\":[{\"containerID\":\"docker://85314c220aff238c685d578cc0c7876f8eeab778b3f430734bafbfe76cda208b\",\"image\":\"k8s.gcr.io/kube-proxy:v1.18.3\",\"imageID\":\"docker-pullable://k8s.gcr.io/kube-proxy@sha256:6a093c22e305039b7bd6c3f8eab8f202ad8238066ed210857b25524443aa8aff\",\"lastState\":{},\"name\":\"kube-proxy\",\"ready\":true,\"restartCount\":0,\"started\":true,\"state\":{\"running\":{\"startedAt\":\"2020-07-01T03:19:38Z\"}}}]}}" for pod "kube-system"/"kube-proxy-54r4l": pods "kube-proxy-54r4l" is forbidden: node "multinode-20200701031411-8084" can only update pod status for pods with spec.nodeName set to itself
* Jul 01 03:20:29 multinode-20200701031411-8084 kubelet[2370]: I0701 03:20:29.246762 2370 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 4a263f26ce2dc9f507fce07d6dd762471d9bf52c0d1297299f583052f395fb49
* Jul 01 03:20:29 multinode-20200701031411-8084 kubelet[2370]: W0701 03:20:29.253448 2370 status_manager.go:572] Failed to update status for pod "kindnet-m6j4t_kube-system(2f7ff2ee-9f46-43b7-b4e0-b5e741c927b9)": failed to patch status "{\"metadata\":{\"uid\":\"2f7ff2ee-9f46-43b7-b4e0-b5e741c927b9\"},\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"Initialized\"},{\"type\":\"Ready\"},{\"type\":\"ContainersReady\"},{\"type\":\"PodScheduled\"}],\"conditions\":[{\"lastTransitionTime\":\"2020-07-01T03:19:40Z\",\"type\":\"Ready\"},{\"lastTransitionTime\":\"2020-07-01T03:19:40Z\",\"type\":\"ContainersReady\"}],\"containerStatuses\":[{\"containerID\":\"docker://cf1327a33a304ddd156dab6dbb53a8aef48218515946ef4096ca8059f46d020e\",\"image\":\"kindest/kindnetd:0.5.4\",\"imageID\":\"docker-pullable://kindest/kindnetd@sha256:b33085aafb18b652ce4b3b8c41dbf172dac8b62ffe016d26863f88e7f6bf1c98\",\"lastState\":{},\"name\":\"kindnet-cni\",\"ready\":true,\"restartCount\":0,\"started\":true,\"state\":{\"running\":{\"startedAt\":\"2020-07-01T03:19:39Z\"}}}]}}" for pod "kube-system"/"kindnet-m6j4t": pods "kindnet-m6j4t" is forbidden: node "multinode-20200701031411-8084" can only update pod status for pods with spec.nodeName set to itself
* Jul 01 03:20:29 multinode-20200701031411-8084 kubelet[2370]: I0701 03:20:29.279057 2370 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 906b3966fd863b551a1c60c762417cd2fc9c7807a0b42939cd018816a0d0a2dd
* Jul 01 03:20:29 multinode-20200701031411-8084 kubelet[2370]: I0701 03:20:29.300711 2370 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 8175d56511719233659980d1b4b0ac19d7b72f1d2f57b3842d4b9b8618584b76
* Jul 01 03:20:30 multinode-20200701031411-8084 kubelet[2370]: I0701 03:20:30.184074 2370 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
* Jul 01 03:20:30 multinode-20200701031411-8084 kubelet[2370]: E0701 03:20:30.184455 2370 kubelet_node_status.go:520] Failed to set some node status fields: failed to validate nodeIP: node IP: "192.168.39.66" not found in the host's network interfaces
* Jul 01 03:20:30 multinode-20200701031411-8084 kubelet[2370]: I0701 03:20:30.215899 2370 kubelet_node_status.go:70] Attempting to register node multinode-20200701031411-8084-m02
* Jul 01 03:20:30 multinode-20200701031411-8084 kubelet[2370]: E0701 03:20:30.218100 2370 kubelet_node_status.go:92] Unable to register node "multinode-20200701031411-8084-m02" with API server: nodes "multinode-20200701031411-8084-m02" is forbidden: node "multinode-20200701031411-8084" is not allowed to modify node "multinode-20200701031411-8084-m02"
* Jul 01 03:20:35 multinode-20200701031411-8084 kubelet[2370]: E0701 03:20:35.207487 2370 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: leases.coordination.k8s.io "multinode-20200701031411-8084-m02" is forbidden: User "system:node:multinode-20200701031411-8084" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease": can only access node lease with the same name as the requesting node
* Jul 01 03:20:37 multinode-20200701031411-8084 kubelet[2370]: I0701 03:20:37.218402 2370 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
* Jul 01 03:20:37 multinode-20200701031411-8084 kubelet[2370]: E0701 03:20:37.219168 2370 kubelet_node_status.go:520] Failed to set some node status fields: failed to validate nodeIP: node IP: "192.168.39.66" not found in the host's network interfaces
* Jul 01 03:20:37 multinode-20200701031411-8084 kubelet[2370]: I0701 03:20:37.278234 2370 kubelet_node_status.go:70] Attempting to register node multinode-20200701031411-8084-m02
* Jul 01 03:20:37 multinode-20200701031411-8084 kubelet[2370]: E0701 03:20:37.280329 2370 kubelet_node_status.go:92] Unable to register node "multinode-20200701031411-8084-m02" with API server: nodes "multinode-20200701031411-8084-m02" is forbidden: node "multinode-20200701031411-8084" is not allowed to modify node "multinode-20200701031411-8084-m02"
* Jul 01 03:20:38 multinode-20200701031411-8084 kubelet[2370]: E0701 03:20:38.241254 2370 kubelet.go:1663] Failed creating a mirror pod for "etcd-multinode-20200701031411-8084-m02_kube-system(62860af99f3e9000295a050c2dc66fec)": pods "etcd-multinode-20200701031411-8084-m02" is forbidden: node "multinode-20200701031411-8084" can only create pods with spec.nodeName set to itself
* Jul 01 03:20:39 multinode-20200701031411-8084 kubelet[2370]: W0701 03:20:39.242825 2370 status_manager.go:572] Failed to update status for pod "kindnet-m6j4t_kube-system(2f7ff2ee-9f46-43b7-b4e0-b5e741c927b9)": failed to patch status "{\"metadata\":{\"uid\":\"2f7ff2ee-9f46-43b7-b4e0-b5e741c927b9\"},\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"Initialized\"},{\"type\":\"Ready\"},{\"type\":\"ContainersReady\"},{\"type\":\"PodScheduled\"}],\"conditions\":[{\"lastTransitionTime\":\"2020-07-01T03:19:40Z\",\"status\":\"True\",\"type\":\"Ready\"},{\"lastTransitionTime\":\"2020-07-01T03:19:40Z\",\"type\":\"ContainersReady\"}],\"containerStatuses\":[{\"containerID\":\"docker://cf1327a33a304ddd156dab6dbb53a8aef48218515946ef4096ca8059f46d020e\",\"image\":\"kindest/kindnetd:0.5.4\",\"imageID\":\"docker-pullable://kindest/kindnetd@sha256:b33085aafb18b652ce4b3b8c41dbf172dac8b62ffe016d26863f88e7f6bf1c98\",\"lastState\":{},\"name\":\"kindnet-cni\",\"ready\":true,\"restartCount\":0,\"started\":true,\"state\":{\"running\":{\"startedAt\":\"2020-07-01T03:19:39Z\"}}}]}}" for pod "kube-system"/"kindnet-m6j4t": pods "kindnet-m6j4t" is forbidden: node "multinode-20200701031411-8084" can only update pod status for pods with spec.nodeName set to itself
* Jul 01 03:20:39 multinode-20200701031411-8084 kubelet[2370]: W0701 03:20:39.250058 2370 status_manager.go:572] Failed to update status for pod "kube-proxy-54r4l_kube-system(3c06b8fd-6927-49bb-ac39-b0882634206d)": failed to patch status "{\"metadata\":{\"uid\":\"3c06b8fd-6927-49bb-ac39-b0882634206d\"},\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"Initialized\"},{\"type\":\"Ready\"},{\"type\":\"ContainersReady\"},{\"type\":\"PodScheduled\"}],\"conditions\":[{\"lastTransitionTime\":\"2020-07-01T03:19:38Z\",\"status\":\"True\",\"type\":\"Ready\"},{\"lastTransitionTime\":\"2020-07-01T03:19:38Z\",\"type\":\"ContainersReady\"}],\"containerStatuses\":[{\"containerID\":\"docker://85314c220aff238c685d578cc0c7876f8eeab778b3f430734bafbfe76cda208b\",\"image\":\"k8s.gcr.io/kube-proxy:v1.18.3\",\"imageID\":\"docker-pullable://k8s.gcr.io/kube-proxy@sha256:6a093c22e305039b7bd6c3f8eab8f202ad8238066ed210857b25524443aa8aff\",\"lastState\":{},\"name\":\"kube-proxy\",\"ready\":true,\"restartCount\":0,\"started\":true,\"state\":{\"running\":{\"startedAt\":\"2020-07-01T03:19:38Z\"}}}]}}" for pod "kube-system"/"kube-proxy-54r4l": pods "kube-proxy-54r4l" is forbidden: node "multinode-20200701031411-8084" can only update pod status for pods with spec.nodeName set to itself
* Jul 01 03:20:40 multinode-20200701031411-8084 kubelet[2370]: E0701 03:20:40.236546 2370 kubelet.go:1663] Failed creating a mirror pod for "kube-controller-manager-multinode-20200701031411-8084-m02_kube-system(63bd6055502d129b1c4549fc637836ed)": pods "kube-controller-manager-multinode-20200701031411-8084-m02" is forbidden: node "multinode-20200701031411-8084" can only create pods with spec.nodeName set to itself
* Jul 01 03:20:42 multinode-20200701031411-8084 kubelet[2370]: E0701 03:20:42.209766 2370 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: leases.coordination.k8s.io "multinode-20200701031411-8084-m02" is forbidden: User "system:node:multinode-20200701031411-8084" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease": can only access node lease with the same name as the requesting node
* Jul 01 03:20:44 multinode-20200701031411-8084 kubelet[2370]: I0701 03:20:44.280704 2370 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
* Jul 01 03:20:44 multinode-20200701031411-8084 kubelet[2370]: E0701 03:20:44.281026 2370 kubelet_node_status.go:520] Failed to set some node status fields: failed to validate nodeIP: node IP: "192.168.39.66" not found in the host's network interfaces
* Jul 01 03:20:44 multinode-20200701031411-8084 kubelet[2370]: I0701 03:20:44.316061 2370 kubelet_node_status.go:70] Attempting to register node multinode-20200701031411-8084-m02
* Jul 01 03:20:44 multinode-20200701031411-8084 kubelet[2370]: E0701 03:20:44.318807 2370 kubelet_node_status.go:92] Unable to register node "multinode-20200701031411-8084-m02" with API server: nodes "multinode-20200701031411-8084-m02" is forbidden: node "multinode-20200701031411-8084" is not allowed to modify node "multinode-20200701031411-8084-m02"
-- /stdout --
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-20200701031411-8084 -n multinode-20200701031411-8084
helpers_test.go:254: (dbg) Run: kubectl --context multinode-20200701031411-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:254: (dbg) Non-zero exit: kubectl --context multinode-20200701031411-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: exec: "kubectl": executable file not found in $PATH (468ns)
helpers_test.go:256: kubectl --context multinode-20200701031411-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: exec: "kubectl": executable file not found in $PATH
helpers_test.go:170: Cleaning up "multinode-20200701031411-8084" profile ...
helpers_test.go:171: (dbg) Run: out/minikube-linux-amd64 delete -p multinode-20200701031411-8084
helpers_test.go:171: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20200701031411-8084: (1.779962777s)
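Two distinct problems show up in the multinode output above: the repeated "forbidden" kubelet errors are the Node authorizer rejecting requests aimed at multinode-20200701031411-8084-m02 but made with the primary node's credentials, and every post-mortem kubectl command then fails for an unrelated reason, namely that no kubectl binary is on the agent's $PATH. The error text is the standard Go os/exec lookup failure. A minimal sketch of that failure mode, assuming nothing beyond the command shown in the log (this is not the helpers_test.go code):

    // Running a command whose binary is absent from $PATH returns an
    // *exec.Error wrapping exec.ErrNotFound, which prints exactly as
    // `exec: "kubectl": executable file not found in $PATH`.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("kubectl", "--context", "multinode-20200701031411-8084",
            "get", "po", "-A", "--field-selector=status.phase!=Running")
        out, err := cmd.CombinedOutput()
        if err != nil {
            fmt.Printf("kubectl failed: %v\n%s", err, out)
        }
    }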
=== RUN TestNetworkPlugins
=== PAUSE TestNetworkPlugins
=== RUN TestChangeNoneUser
--- SKIP: TestChangeNoneUser (0.00s)
none_test.go:38: Only test none driver.
=== RUN TestPause
=== PAUSE TestPause
=== RUN TestPreload
--- PASS: TestPreload (170.09s)
preload_test.go:43: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-20200701032047-8084 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --kubernetes-version=v1.17.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20200701032047-8084 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --kubernetes-version=v1.17.0: (1m58.587039647s)
preload_test.go:50: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-20200701032047-8084 -- docker pull busybox
preload_test.go:50: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20200701032047-8084 -- docker pull busybox: (1.836284994s)
preload_test.go:60: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-20200701032047-8084 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --kubernetes-version=v1.17.3
preload_test.go:60: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20200701032047-8084 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --kubernetes-version=v1.17.3: (48.344295188s)
preload_test.go:64: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-20200701032047-8084 -- docker images
helpers_test.go:170: Cleaning up "test-preload-20200701032047-8084" profile ...
helpers_test.go:171: (dbg) Run: out/minikube-linux-amd64 delete -p test-preload-20200701032047-8084
helpers_test.go:171: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20200701032047-8084: (1.08171127s)
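TestPreload passes: it starts the profile with --preload=false on Kubernetes v1.17.0, pulls busybox by hand inside the VM, restarts the same profile on v1.17.3, and finally lists the Docker images to confirm the hand-pulled image survived the restart. A sketch of that final check, reusing the binary path and profile name from the log (the assertion logic here is an assumption, not the preload_test.go source):

    // List images inside the minikube VM after the restart and make sure
    // the manually pulled busybox image is still cached.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        profile := "test-preload-20200701032047-8084"
        out, err := exec.Command("out/minikube-linux-amd64", "ssh", "-p", profile,
            "--", "docker", "images").CombinedOutput()
        if err != nil {
            fmt.Printf("minikube ssh failed: %v\n%s", err, out)
            return
        }
        if !strings.Contains(string(out), "busybox") {
            fmt.Println("busybox image was lost across the restart")
        }
    }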
=== RUN TestStartStop
=== PAUSE TestStartStop
=== RUN TestVersionUpgrade
=== PAUSE TestVersionUpgrade
=== CONT TestCertOptions
=== CONT TestPause
=== RUN TestPause/serial
=== RUN TestPause/serial/Start
=== CONT TestErrorSpam
=== CONT TestVersionUpgrade
--- FAIL: TestCertOptions (201.37s)
cert_options_test.go:46: (dbg) Run: out/minikube-linux-amd64 start -p cert-options-20200701032338-8084 --memory=1900 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2
cert_options_test.go:46: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20200701032338-8084 --memory=1900 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (3m7.588276742s)
cert_options_test.go:57: (dbg) Run: out/minikube-linux-amd64 -p cert-options-20200701032338-8084 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:72: (dbg) Run: kubectl --context cert-options-20200701032338-8084 config view
cert_options_test.go:72: (dbg) Non-zero exit: kubectl --context cert-options-20200701032338-8084 config view: exec: "kubectl": executable file not found in $PATH (351ns)
cert_options_test.go:74: failed to get kubectl config. args "kubectl --context cert-options-20200701032338-8084 config view" : exec: "kubectl": executable file not found in $PATH
cert_options_test.go:77: apiserver server port incorrect. Output of 'kubectl config view' = ""
cert_options_test.go:80: *** TestCertOptions FAILED at 2020-07-01 03:26:46.142633828 +0000 UTC m=+1811.243922407
helpers_test.go:215: -----------------------post-mortem--------------------------------
helpers_test.go:232: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p cert-options-20200701032338-8084 -n cert-options-20200701032338-8084
helpers_test.go:232: (dbg) Done: out/minikube-linux-amd64 status --format={{.Host}} -p cert-options-20200701032338-8084 -n cert-options-20200701032338-8084: (2.678287358s)
helpers_test.go:237: <<< TestCertOptions FAILED: start of post-mortem logs <<<
helpers_test.go:238: ======> post-mortem[TestCertOptions]: minikube logs <======
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 -p cert-options-20200701032338-8084 logs -n 25
helpers_test.go:240: (dbg) Done: out/minikube-linux-amd64 -p cert-options-20200701032338-8084 logs -n 25: (9.275860571s)
helpers_test.go:245: TestCertOptions logs:
-- stdout --
* ==> Docker <==
* -- Logs begin at Wed 2020-07-01 03:23:46 UTC, end at Wed 2020-07-01 03:26:52 UTC. --
* Jul 01 03:25:14 cert-options-20200701032338-8084 dockerd[2199]: time="2020-07-01T03:25:14.207011394Z" level=warning msg="Your kernel does not support cgroup blkio weight"
* Jul 01 03:25:14 cert-options-20200701032338-8084 dockerd[2199]: time="2020-07-01T03:25:14.207063731Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
* Jul 01 03:25:14 cert-options-20200701032338-8084 dockerd[2199]: time="2020-07-01T03:25:14.207073318Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
* Jul 01 03:25:14 cert-options-20200701032338-8084 dockerd[2199]: time="2020-07-01T03:25:14.207079353Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
* Jul 01 03:25:14 cert-options-20200701032338-8084 dockerd[2199]: time="2020-07-01T03:25:14.207100264Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
* Jul 01 03:25:14 cert-options-20200701032338-8084 dockerd[2199]: time="2020-07-01T03:25:14.207100264Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
* Jul 01 03:25:14 cert-options-20200701032338-8084 dockerd[2199]: time="2020-07-01T03:25:14.207260252Z" level=info msg="Loading containers: start."
* Jul 01 03:25:14 cert-options-20200701032338-8084 dockerd[2199]: time="2020-07-01T03:25:14.787854261Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
* Jul 01 03:25:14 cert-options-20200701032338-8084 dockerd[2199]: time="2020-07-01T03:25:14.831677938Z" level=info msg="Loading containers: done."
* Jul 01 03:25:14 cert-options-20200701032338-8084 dockerd[2199]: time="2020-07-01T03:25:14.856076547Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
* Jul 01 03:25:14 cert-options-20200701032338-8084 dockerd[2199]: time="2020-07-01T03:25:14.856243331Z" level=info msg="Daemon has completed initialization"
* Jul 01 03:25:14 cert-options-20200701032338-8084 dockerd[2199]: time="2020-07-01T03:25:14.876102721Z" level=info msg="API listen on /var/run/docker.sock"
* Jul 01 03:25:14 cert-options-20200701032338-8084 systemd[1]: Started Docker Application Container Engine.
* Jul 01 03:25:14 cert-options-20200701032338-8084 dockerd[2199]: time="2020-07-01T03:25:14.876998873Z" level=info msg="API listen on [::]:2376"
* Jul 01 03:25:20 cert-options-20200701032338-8084 dockerd[2199]: time="2020-07-01T03:25:20.934694804Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d0497cfc422022b2eff2abe662124ec54c7b20b43626030b21d8eb73f967846b/shim.sock" debug=false pid=3100
* Jul 01 03:25:20 cert-options-20200701032338-8084 dockerd[2199]: time="2020-07-01T03:25:20.940996806Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f938de9429ddb54475c505dfbb0d319bbbe0b271991e7905fb39562509c14254/shim.sock" debug=false pid=3107
* Jul 01 03:25:21 cert-options-20200701032338-8084 dockerd[2199]: time="2020-07-01T03:25:21.021236120Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/128c504481d9b1f569f15418236229e2a9e316235e4a487e1fc947e3f55553b5/shim.sock" debug=false pid=3148
* Jul 01 03:25:21 cert-options-20200701032338-8084 dockerd[2199]: time="2020-07-01T03:25:21.347039663Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2940984a97129099665538fea2f99c7be56f2052a54182299c2fee4fff379cb3/shim.sock" debug=false pid=3224
* Jul 01 03:25:21 cert-options-20200701032338-8084 dockerd[2199]: time="2020-07-01T03:25:21.648998803Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5b3fd921c67a10f7bcbc4c646e7f40853b810d1ad8a15e1664ef8742666b0edf/shim.sock" debug=false pid=3308
* Jul 01 03:25:21 cert-options-20200701032338-8084 dockerd[2199]: time="2020-07-01T03:25:21.700994588Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1a22db3bbfd8496616d5682826069fa119d55d04c9f63b2ff0cc8675797836de/shim.sock" debug=false pid=3331
* Jul 01 03:25:21 cert-options-20200701032338-8084 dockerd[2199]: time="2020-07-01T03:25:21.704275074Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7ae5b63d266106ffcc33227429162c636ca3db3e5b05f742e0c64c6384cf9167/shim.sock" debug=false pid=3332
* Jul 01 03:25:21 cert-options-20200701032338-8084 dockerd[2199]: time="2020-07-01T03:25:21.985689406Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9901936725e969d6c87681f18e03cbd744189556ba028dca945c5422881cdd2f/shim.sock" debug=false pid=3440
* Jul 01 03:25:37 cert-options-20200701032338-8084 dockerd[2199]: time="2020-07-01T03:25:37.246259663Z" level=info msg="shim reaped" id=5b3fd921c67a10f7bcbc4c646e7f40853b810d1ad8a15e1664ef8742666b0edf
* Jul 01 03:25:37 cert-options-20200701032338-8084 dockerd[2199]: time="2020-07-01T03:25:37.256294134Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:26:40 cert-options-20200701032338-8084 dockerd[2199]: time="2020-07-01T03:26:40.701820395Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c357d61f4331e9d54a4167749da921a31045c8b80546fbd6f79f04e9238ae4cc/shim.sock" debug=false pid=3992
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
* c357d61f4331e da26705ccb4b5 15 seconds ago Running kube-controller-manager 1 128c504481d9b
* 9901936725e96 76216c34ed0c7 About a minute ago Running kube-scheduler 0 2940984a97129
* 7ae5b63d26610 7e28efa976bd1 About a minute ago Running kube-apiserver 0 f938de9429ddb
* 5b3fd921c67a1 da26705ccb4b5 About a minute ago Exited kube-controller-manager 0 128c504481d9b
* 1a22db3bbfd84 303ce5db0e90d About a minute ago Running etcd 0 d0497cfc42202
*
* ==> describe nodes <==
* Name: cert-options-20200701032338-8084
* Roles: master
* Labels: beta.kubernetes.io/arch=amd64
* beta.kubernetes.io/os=linux
* kubernetes.io/arch=amd64
* kubernetes.io/hostname=cert-options-20200701032338-8084
* kubernetes.io/os=linux
* minikube.k8s.io/commit=8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f
* minikube.k8s.io/name=cert-options-20200701032338-8084
* minikube.k8s.io/updated_at=2020_07_01T03_26_43_0700
* minikube.k8s.io/version=v1.12.0-beta.0
* node-role.kubernetes.io/master=
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
* volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp: Wed, 01 Jul 2020 03:25:27 +0000
* Taints: node.kubernetes.io/not-ready:NoSchedule
* Unschedulable: false
* Lease:
* HolderIdentity: cert-options-20200701032338-8084
* AcquireTime: <unset>
* RenewTime: Wed, 01 Jul 2020 03:26:54 +0000
* Conditions:
* Type Status LastHeartbeatTime LastTransitionTime Reason Message
* ---- ------ ----------------- ------------------ ------ -------
* MemoryPressure False Wed, 01 Jul 2020 03:26:55 +0000 Wed, 01 Jul 2020 03:25:23 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
* DiskPressure False Wed, 01 Jul 2020 03:26:55 +0000 Wed, 01 Jul 2020 03:25:23 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
* PIDPressure False Wed, 01 Jul 2020 03:26:55 +0000 Wed, 01 Jul 2020 03:25:23 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
* Ready True Wed, 01 Jul 2020 03:26:55 +0000 Wed, 01 Jul 2020 03:26:55 +0000 KubeletReady kubelet is posting ready status
* Addresses:
* InternalIP: 192.168.39.86
* Hostname: cert-options-20200701032338-8084
* Capacity:
* cpu: 2
* ephemeral-storage: 16954224Ki
* hugepages-2Mi: 0
* memory: 1796600Ki
* pods: 110
* Allocatable:
* cpu: 2
* ephemeral-storage: 16954224Ki
* hugepages-2Mi: 0
* memory: 1796600Ki
* pods: 110
* System Info:
* Machine ID: 936c12ce33bf47a49347bdafacd1e1f9
* System UUID: 936c12ce-33bf-47a4-9347-bdafacd1e1f9
* Boot ID: df67b144-ed1e-49b0-9095-d9c7221d9129
* Kernel Version: 4.19.107
* OS Image: Buildroot 2019.02.10
* Operating System: linux
* Architecture: amd64
* Container Runtime Version: docker://19.3.8
* Kubelet Version: v1.18.3
* Kube-Proxy Version: v1.18.3
* Non-terminated Pods: (4 in total)
* Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
* --------- ---- ------------ ---------- --------------- ------------- ---
* kube-system etcd-cert-options-20200701032338-8084 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12s
* kube-system kube-apiserver-cert-options-20200701032338-8084 250m (12%) 0 (0%) 0 (0%) 0 (0%) 12s
* kube-system kube-controller-manager-cert-options-20200701032338-8084 200m (10%) 0 (0%) 0 (0%) 0 (0%) 17s
* kube-system kube-scheduler-cert-options-20200701032338-8084 100m (5%) 0 (0%) 0 (0%) 0 (0%) 12s
* Allocated resources:
* (Total limits may be over 100 percent, i.e., overcommitted.)
* Resource Requests Limits
* -------- -------- ------
* cpu 550m (27%) 0 (0%)
* memory 0 (0%) 0 (0%)
* ephemeral-storage 0 (0%) 0 (0%)
* hugepages-2Mi 0 (0%) 0 (0%)
* Events:
* Type Reason Age From Message
* ---- ------ ---- ---- -------
* Normal NodeHasSufficientMemory 97s (x6 over 97s) kubelet, cert-options-20200701032338-8084 Node cert-options-20200701032338-8084 status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 97s (x6 over 97s) kubelet, cert-options-20200701032338-8084 Node cert-options-20200701032338-8084 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 97s (x6 over 97s) kubelet, cert-options-20200701032338-8084 Node cert-options-20200701032338-8084 status is now: NodeHasSufficientPID
* Normal Starting 13s kubelet, cert-options-20200701032338-8084 Starting kubelet.
* Normal NodeHasSufficientMemory 13s kubelet, cert-options-20200701032338-8084 Node cert-options-20200701032338-8084 status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 13s kubelet, cert-options-20200701032338-8084 Node cert-options-20200701032338-8084 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 13s kubelet, cert-options-20200701032338-8084 Node cert-options-20200701032338-8084 status is now: NodeHasSufficientPID
* Normal NodeNotReady 12s kubelet, cert-options-20200701032338-8084 Node cert-options-20200701032338-8084 status is now: NodeNotReady
* Normal NodeAllocatableEnforced 12s kubelet, cert-options-20200701032338-8084 Updated Node Allocatable limit across pods
* Normal NodeReady 2s kubelet, cert-options-20200701032338-8084 Node cert-options-20200701032338-8084 status is now: NodeReady
*
* ==> dmesg <==
* [ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
* [ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
* [ +0.051634] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
* [ +2.839248] Unstable clock detected, switching default tracing clock to "global"
* If you want to keep using the local clock, then add:
* "trace_clock=local"
* on the kernel command line
* [ +0.000019] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
* [ +1.760119] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
* [ +0.007652] systemd-fstab-generator[1145]: Ignoring "noauto" for root device
* [ +0.003215] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
* [ +0.000001] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
* [ +1.500346] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
* [ +0.466397] vboxguest: loading out-of-tree module taints kernel.
* [ +0.004474] vboxguest: PCI device not found, probably running on physical hardware.
* [ +4.450293] systemd-fstab-generator[1985]: Ignoring "noauto" for root device
* [ +0.079237] systemd-fstab-generator[1995]: Ignoring "noauto" for root device
* [Jul 1 03:24] systemd-fstab-generator[2188]: Ignoring "noauto" for root device
* [Jul 1 03:25] kauditd_printk_skb: 65 callbacks suppressed
* [ +0.729089] systemd-fstab-generator[2356]: Ignoring "noauto" for root device
* [ +0.311434] systemd-fstab-generator[2428]: Ignoring "noauto" for root device
* [ +1.583815] systemd-fstab-generator[2637]: Ignoring "noauto" for root device
* [ +3.218416] kauditd_printk_skb: 107 callbacks suppressed
* [ +31.128160] NFSD: Unable to end grace period: -110
* [Jul 1 03:26] systemd-fstab-generator[4038]: Ignoring "noauto" for root device
*
* ==> etcd [1a22db3bbfd8] <==
* 2020-07-01 03:26:48.816016 W | etcdserver: read-only range request "key:\"/registry/validatingwebhookconfigurations\" range_end:\"/registry/validatingwebhookconfigurationt\" count_only:true " with result "range_response_count:0 size:5" took too long (1.108279094s) to execute
* 2020-07-01 03:26:48.816645 W | etcdserver: read-only range request "key:\"/registry/persistentvolumeclaims\" range_end:\"/registry/persistentvolumeclaimt\" count_only:true " with result "range_response_count:0 size:5" took too long (1.700271816s) to execute
* 2020-07-01 03:26:48.817186 W | etcdserver: read-only range request "key:\"/registry/mutatingwebhookconfigurations\" range_end:\"/registry/mutatingwebhookconfigurationt\" count_only:true " with result "range_response_count:0 size:5" took too long (387.522103ms) to execute
* 2020-07-01 03:26:48.817614 W | etcdserver: read-only range request "key:\"/registry/configmaps\" range_end:\"/registry/configmapt\" count_only:true " with result "range_response_count:0 size:7" took too long (2.294671465s) to execute
* 2020-07-01 03:26:50.425593 W | wal: sync duration of 1.598345036s, expected less than 1s
* 2020-07-01 03:26:50.426218 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:212" took too long (1.595521511s) to execute
* 2020-07-01 03:26:52.407466 W | wal: sync duration of 1.97572816s, expected less than 1s
* 2020-07-01 03:26:52.624152 W | etcdserver: request "header:<ID:985851848388121833 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/certificate-controller\" mod_revision:279 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/certificate-controller\" value_size:180 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/certificate-controller\" > >>" with result "size:16" took too long (215.870729ms) to execute
* 2020-07-01 03:26:52.624588 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/token-cleaner\" " with result "range_response_count:0 size:5" took too long (2.18610997s) to execute
* 2020-07-01 03:26:54.802723 W | wal: sync duration of 1.105143917s, expected less than 1s
* 2020-07-01 03:26:54.805648 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:263" took too long (2.177496007s) to execute
* 2020-07-01 03:26:54.806324 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (3.125564704s) to execute
* 2020-07-01 03:26:54.806995 W | etcdserver: read-only range request "key:\"/registry/ingress\" range_end:\"/registry/ingrest\" count_only:true " with result "range_response_count:0 size:5" took too long (2.916554692s) to execute
* 2020-07-01 03:26:54.807651 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitiont\" count_only:true " with result "range_response_count:0 size:5" took too long (1.84289099s) to execute
* 2020-07-01 03:26:54.808510 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (3.114172827s) to execute
* 2020-07-01 03:26:54.809246 W | etcdserver: read-only range request "key:\"/registry/persistentvolumes\" range_end:\"/registry/persistentvolumet\" count_only:true " with result "range_response_count:0 size:5" took too long (3.769437663s) to execute
* 2020-07-01 03:26:54.810040 W | etcdserver: read-only range request "key:\"/registry/priorityclasses\" range_end:\"/registry/priorityclasset\" count_only:true " with result "range_response_count:0 size:7" took too long (1.367291804s) to execute
* 2020-07-01 03:26:54.810797 W | etcdserver: request "header:<ID:985851848388121838 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-cert-options-20200701032338-8084.161d831f5192faad\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-cert-options-20200701032338-8084.161d831f5192faad\" value_size:764 lease:985851848388121594 >> failure:<>>" with result "size:16" took too long (1.112955656s) to execute
* 2020-07-01 03:26:54.813823 W | etcdserver: read-only range request "key:\"/registry/controllers\" range_end:\"/registry/controllert\" count_only:true " with result "range_response_count:0 size:5" took too long (901.90604ms) to execute
* 2020-07-01 03:26:54.816158 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-cert-options-20200701032338-8084\" " with result "range_response_count:1 size:3638" took too long (168.74491ms) to execute
* 2020-07-01 03:26:57.615838 W | etcdserver: request "header:<ID:985851848388121861 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/secrets/kube-system/endpointslice-controller-token-4nj9h\" mod_revision:0 > success:<request_put:<key:\"/registry/secrets/kube-system/endpointslice-controller-token-4nj9h\" value_size:2675 >> failure:<>>" with result "size:16" took too long (2.704651346s) to execute
* 2020-07-01 03:26:57.616318 W | wal: sync duration of 2.705976844s, expected less than 1s
* 2020-07-01 03:26:57.616858 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:5116" took too long (2.666117849s) to execute
* 2020-07-01 03:26:57.629644 W | etcdserver: read-only range request "key:\"/registry/csidrivers\" range_end:\"/registry/csidrivert\" count_only:true " with result "range_response_count:0 size:5" took too long (518.637402ms) to execute
* 2020-07-01 03:26:57.629860 W | etcdserver: read-only range request "key:\"/registry/cronjobs\" range_end:\"/registry/cronjobt\" count_only:true " with result "range_response_count:0 size:5" took too long (585.266889ms) to execute
*
* ==> kernel <==
* 03:26:57 up 3 min, 0 users, load average: 3.31, 1.64, 0.64
* Linux cert-options-20200701032338-8084 4.19.107 #1 SMP Thu May 28 15:07:17 PDT 2020 x86_64 GNU/Linux
* PRETTY_NAME="Buildroot 2019.02.10"
*
* ==> kube-apiserver [7ae5b63d2661] <==
* Trace[1221875591]: [1.597385678s] [1.597378804s] About to write a response
* I0701 03:26:52.625595 1 trace.go:116] Trace[991544081]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/token-cleaner,user-agent:kube-controller-manager/v1.18.3 (linux/amd64) kubernetes/2e7996e/kube-controller-manager,client:192.168.39.86 (started: 2020-07-01 03:26:50.4378388 +0000 UTC m=+88.433228380) (total time: 2.187724809s):
* Trace[991544081]: [2.187724809s] [2.187716108s] END
* I0701 03:26:52.625609 1 trace.go:116] Trace[6472669]: "GuaranteedUpdate etcd3" type:*core.ServiceAccount (started: 2020-07-01 03:26:50.434688625 +0000 UTC m=+88.430078321) (total time: 2.19089971s):
* Trace[6472669]: [2.190876207s] [2.190549383s] Transaction committed
* I0701 03:26:52.626397 1 trace.go:116] Trace[1444590178]: "Update" url:/api/v1/namespaces/kube-system/serviceaccounts/certificate-controller,user-agent:kube-controller-manager/v1.18.3 (linux/amd64) kubernetes/2e7996e/tokens-controller,client:192.168.39.86 (started: 2020-07-01 03:26:50.434614987 +0000 UTC m=+88.430004734) (total time: 2.191762382s):
* Trace[1444590178]: [2.191718934s] [2.191677408s] Object stored in database
* I0701 03:26:54.811751 1 trace.go:116] Trace[227557996]: "Get" url:/api/v1/namespaces/kube-system,user-agent:kube-controller-manager/v1.18.3 (linux/amd64) kubernetes/2e7996e/kube-controller-manager,client:192.168.39.86 (started: 2020-07-01 03:26:52.627568798 +0000 UTC m=+90.622958269) (total time: 2.184140538s):
* Trace[227557996]: [2.184083603s] [2.184077327s] About to write a response
* I0701 03:26:54.811751 1 trace.go:116] Trace[292095219]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:127.0.0.1 (started: 2020-07-01 03:26:51.680254928 +0000 UTC m=+89.675644392) (total time: 3.131455062s):
* Trace[292095219]: [3.131398486s] [3.131389883s] About to write a response
* I0701 03:26:54.819176 1 trace.go:116] Trace[515621570]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:192.168.39.86 (started: 2020-07-01 03:26:53.696110938 +0000 UTC m=+91.691500588) (total time: 1.122453002s):
* Trace[515621570]: [1.122414869s] [1.122277902s] Object stored in database
* I0701 03:26:57.618123 1 trace.go:116] Trace[429760266]: "Create" url:/api/v1/namespaces/kube-system/secrets,user-agent:kube-controller-manager/v1.18.3 (linux/amd64) kubernetes/2e7996e/tokens-controller,client:192.168.39.86 (started: 2020-07-01 03:26:54.908650252 +0000 UTC m=+92.904039864) (total time: 2.709442809s):
* Trace[429760266]: [2.709390737s] [2.708799807s] Object stored in database
* I0701 03:26:57.620568 1 trace.go:116] Trace[644188368]: "GuaranteedUpdate etcd3" type:*core.Node (started: 2020-07-01 03:26:55.238645017 +0000 UTC m=+93.234034565) (total time: 2.381904639s):
* Trace[644188368]: [2.381847288s] [2.380277455s] Transaction committed
* I0701 03:26:57.621217 1 trace.go:116] Trace[1088448133]: "Create" url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:192.168.39.86 (started: 2020-07-01 03:26:55.240875833 +0000 UTC m=+93.236265360) (total time: 2.380318773s):
* Trace[1088448133]: [2.380287065s] [2.380028799s] Object stored in database
* I0701 03:26:57.622062 1 trace.go:116] Trace[1516308564]: "List etcd3" key:/minions,resourceVersion:,limit:0,continue: (started: 2020-07-01 03:26:54.950339702 +0000 UTC m=+92.945729212) (total time: 2.671703055s):
* Trace[1516308564]: [2.671703055s] [2.671703055s] END
* I0701 03:26:57.623428 1 trace.go:116] Trace[1263670629]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:127.0.0.1 (started: 2020-07-01 03:26:54.950323479 +0000 UTC m=+92.945712990) (total time: 2.673078951s):
* Trace[1263670629]: [2.671871523s] [2.671863446s] Listing from storage done
* I0701 03:26:57.624747 1 trace.go:116] Trace[1894883565]: "Patch" url:/api/v1/nodes/cert-options-20200701032338-8084/status,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:192.168.39.86 (started: 2020-07-01 03:26:55.238528704 +0000 UTC m=+93.233918215) (total time: 2.386193201s):
* Trace[1894883565]: [2.382098313s] [2.380943796s] Object stored in database
*
* ==> kube-controller-manager [5b3fd921c67a] <==
* I0701 03:25:23.152471 1 serving.go:313] Generated self-signed cert in-memory
* I0701 03:25:23.556219 1 controllermanager.go:161] Version: v1.18.3
* I0701 03:25:23.558152 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
* I0701 03:25:23.558624 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
* I0701 03:25:23.559308 1 secure_serving.go:178] Serving securely on 127.0.0.1:10257
* I0701 03:25:23.559437 1 tlsconfig.go:240] Starting DynamicServingCertificateController
* I0701 03:25:23.560472 1 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
* F0701 03:25:37.176177 1 controllermanager.go:230] error building controller context: failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User "system:kube-controller-manager" cannot get path "/healthz"
*
* ==> kube-controller-manager [c357d61f4331] <==
* I0701 03:26:50.436886 1 controllermanager.go:533] Started "csrapproving"
* I0701 03:26:50.438256 1 certificate_controller.go:119] Starting certificate controller "csrapproving"
* I0701 03:26:50.438313 1 shared_informer.go:223] Waiting for caches to sync for certificate-csrapproving
* I0701 03:26:54.851483 1 controllermanager.go:533] Started "tokencleaner"
* I0701 03:26:54.851704 1 tokencleaner.go:118] Starting token cleaner controller
* I0701 03:26:54.851718 1 shared_informer.go:223] Waiting for caches to sync for token_cleaner
* I0701 03:26:54.851727 1 shared_informer.go:230] Caches are synced for token_cleaner
* I0701 03:26:57.636094 1 controllermanager.go:533] Started "endpointslice"
* I0701 03:26:57.637068 1 endpointslice_controller.go:213] Starting endpoint slice controller
* I0701 03:26:57.637365 1 shared_informer.go:223] Waiting for caches to sync for endpoint_slice
* I0701 03:26:57.697891 1 controllermanager.go:533] Started "replicaset"
* I0701 03:26:57.698380 1 replica_set.go:181] Starting replicaset controller
* I0701 03:26:57.698459 1 shared_informer.go:223] Waiting for caches to sync for ReplicaSet
* I0701 03:26:57.719102 1 controllermanager.go:533] Started "csrcleaner"
* I0701 03:26:57.719198 1 cleaner.go:82] Starting CSR cleaner controller
* I0701 03:26:57.822818 1 controllermanager.go:533] Started "horizontalpodautoscaling"
* I0701 03:26:57.824024 1 horizontal.go:169] Starting HPA controller
* I0701 03:26:57.824897 1 shared_informer.go:223] Waiting for caches to sync for HPA
* I0701 03:26:57.835979 1 controllermanager.go:533] Started "csrsigning"
* I0701 03:26:57.836395 1 certificate_controller.go:119] Starting certificate controller "csrsigning"
* I0701 03:26:57.836496 1 shared_informer.go:223] Waiting for caches to sync for certificate-csrsigning
* I0701 03:26:57.836780 1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
* I0701 03:26:57.866042 1 controllermanager.go:533] Started "disruption"
* I0701 03:26:57.866367 1 disruption.go:331] Starting disruption controller
* I0701 03:26:57.866509 1 shared_informer.go:223] Waiting for caches to sync for disruption
*
* ==> kube-scheduler [9901936725e9] <==
* E0701 03:25:35.515886 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0701 03:25:35.563138 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E0701 03:25:35.609379 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0701 03:25:36.403393 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0701 03:25:36.682778 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0701 03:25:44.075133 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E0701 03:25:44.981615 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E0701 03:25:45.011111 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E0701 03:25:45.661895 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0701 03:25:47.351666 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0701 03:25:47.430407 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0701 03:25:47.707571 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0701 03:25:47.817473 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0701 03:25:48.318938 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0701 03:26:02.455598 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E0701 03:26:03.153550 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0701 03:26:03.696492 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E0701 03:26:03.774077 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E0701 03:26:04.462183 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0701 03:26:05.669316 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0701 03:26:07.824552 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0701 03:26:07.853676 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0701 03:26:09.515789 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0701 03:26:34.498208 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0701 03:26:36.483211 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
*
* ==> kubelet <==
* -- Logs begin at Wed 2020-07-01 03:23:46 UTC, end at Wed 2020-07-01 03:26:58 UTC. --
* Jul 01 03:26:45 cert-options-20200701032338-8084 kubelet[4046]: I0701 03:26:45.133552 4046 setters.go:559] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-07-01 03:26:45.133494018 +0000 UTC m=+1.587454632 LastTransitionTime:2020-07-01 03:26:45.133494018 +0000 UTC m=+1.587454632 Reason:KubeletNotReady Message:container runtime status check may not have completed yet}
* Jul 01 03:26:45 cert-options-20200701032338-8084 kubelet[4046]: I0701 03:26:45.284896 4046 cpu_manager.go:184] [cpumanager] starting with none policy
* Jul 01 03:26:45 cert-options-20200701032338-8084 kubelet[4046]: I0701 03:26:45.285026 4046 cpu_manager.go:185] [cpumanager] reconciling every 10s
* Jul 01 03:26:45 cert-options-20200701032338-8084 kubelet[4046]: I0701 03:26:45.285059 4046 state_mem.go:36] [cpumanager] initializing new in-memory state store
* Jul 01 03:26:45 cert-options-20200701032338-8084 kubelet[4046]: I0701 03:26:45.285275 4046 state_mem.go:88] [cpumanager] updated default cpuset: ""
* Jul 01 03:26:45 cert-options-20200701032338-8084 kubelet[4046]: I0701 03:26:45.285289 4046 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
* Jul 01 03:26:45 cert-options-20200701032338-8084 kubelet[4046]: I0701 03:26:45.285303 4046 policy_none.go:43] [cpumanager] none policy: Start
* Jul 01 03:26:45 cert-options-20200701032338-8084 kubelet[4046]: I0701 03:26:45.287469 4046 plugin_manager.go:114] Starting Kubelet Plugin Manager
* Jul 01 03:26:45 cert-options-20200701032338-8084 kubelet[4046]: I0701 03:26:45.349228 4046 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Jul 01 03:26:45 cert-options-20200701032338-8084 kubelet[4046]: I0701 03:26:45.355760 4046 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Jul 01 03:26:45 cert-options-20200701032338-8084 kubelet[4046]: I0701 03:26:45.366806 4046 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Jul 01 03:26:45 cert-options-20200701032338-8084 kubelet[4046]: I0701 03:26:45.385801 4046 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Jul 01 03:26:45 cert-options-20200701032338-8084 kubelet[4046]: E0701 03:26:45.437652 4046 kubelet.go:1663] Failed creating a mirror pod for "kube-controller-manager-cert-options-20200701032338-8084_kube-system(ba963bc1bff8609dc4fc4d359349c120)": pods "kube-controller-manager-cert-options-20200701032338-8084" already exists
* Jul 01 03:26:45 cert-options-20200701032338-8084 kubelet[4046]: I0701 03:26:45.467677 4046 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/e5841320ff784eb8619d510b62e20abf-k8s-certs") pod "kube-apiserver-cert-options-20200701032338-8084" (UID: "e5841320ff784eb8619d510b62e20abf")
* Jul 01 03:26:45 cert-options-20200701032338-8084 kubelet[4046]: I0701 03:26:45.467802 4046 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/e5841320ff784eb8619d510b62e20abf-usr-share-ca-certificates") pod "kube-apiserver-cert-options-20200701032338-8084" (UID: "e5841320ff784eb8619d510b62e20abf")
* Jul 01 03:26:45 cert-options-20200701032338-8084 kubelet[4046]: I0701 03:26:45.467834 4046 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/ba963bc1bff8609dc4fc4d359349c120-flexvolume-dir") pod "kube-controller-manager-cert-options-20200701032338-8084" (UID: "ba963bc1bff8609dc4fc4d359349c120")
* Jul 01 03:26:45 cert-options-20200701032338-8084 kubelet[4046]: I0701 03:26:45.467860 4046 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/ba963bc1bff8609dc4fc4d359349c120-kubeconfig") pod "kube-controller-manager-cert-options-20200701032338-8084" (UID: "ba963bc1bff8609dc4fc4d359349c120")
* Jul 01 03:26:45 cert-options-20200701032338-8084 kubelet[4046]: I0701 03:26:45.467874 4046 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/ba963bc1bff8609dc4fc4d359349c120-usr-share-ca-certificates") pod "kube-controller-manager-cert-options-20200701032338-8084" (UID: "ba963bc1bff8609dc4fc4d359349c120")
* Jul 01 03:26:45 cert-options-20200701032338-8084 kubelet[4046]: I0701 03:26:45.467913 4046 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/dcddbd0cc8c89e2cbf4de5d3cca8769f-kubeconfig") pod "kube-scheduler-cert-options-20200701032338-8084" (UID: "dcddbd0cc8c89e2cbf4de5d3cca8769f")
* Jul 01 03:26:45 cert-options-20200701032338-8084 kubelet[4046]: I0701 03:26:45.467913 4046 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/b8e0557077bc0db74faf9be2de6103a2-etcd-certs") pod "etcd-cert-options-20200701032338-8084" (UID: "b8e0557077bc0db74faf9be2de6103a2")
* Jul 01 03:26:45 cert-options-20200701032338-8084 kubelet[4046]: I0701 03:26:45.468039 4046 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/e5841320ff784eb8619d510b62e20abf-ca-certs") pod "kube-apiserver-cert-options-20200701032338-8084" (UID: "e5841320ff784eb8619d510b62e20abf")
* Jul 01 03:26:45 cert-options-20200701032338-8084 kubelet[4046]: I0701 03:26:45.468039 4046 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/ba963bc1bff8609dc4fc4d359349c120-k8s-certs") pod "kube-controller-manager-cert-options-20200701032338-8084" (UID: "ba963bc1bff8609dc4fc4d359349c120")
* Jul 01 03:26:45 cert-options-20200701032338-8084 kubelet[4046]: I0701 03:26:45.468080 4046 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/b8e0557077bc0db74faf9be2de6103a2-etcd-data") pod "etcd-cert-options-20200701032338-8084" (UID: "b8e0557077bc0db74faf9be2de6103a2")
* Jul 01 03:26:45 cert-options-20200701032338-8084 kubelet[4046]: I0701 03:26:45.468080 4046 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/ba963bc1bff8609dc4fc4d359349c120-ca-certs") pod "kube-controller-manager-cert-options-20200701032338-8084" (UID: "ba963bc1bff8609dc4fc4d359349c120")
* Jul 01 03:26:45 cert-options-20200701032338-8084 kubelet[4046]: I0701 03:26:45.573342 4046 reconciler.go:157] Reconciler: start to sync state
-- /stdout --
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p cert-options-20200701032338-8084 -n cert-options-20200701032338-8084
helpers_test.go:254: (dbg) Run: kubectl --context cert-options-20200701032338-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:254: (dbg) Non-zero exit: kubectl --context cert-options-20200701032338-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: exec: "kubectl": executable file not found in $PATH (389ns)
helpers_test.go:256: kubectl --context cert-options-20200701032338-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: exec: "kubectl": executable file not found in $PATH
helpers_test.go:170: Cleaning up "cert-options-20200701032338-8084" profile ...
helpers_test.go:171: (dbg) Run: out/minikube-linux-amd64 delete -p cert-options-20200701032338-8084
helpers_test.go:171: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20200701032338-8084: (1.022961833s)
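TestCertOptions did bring the cluster up with the extra SANs and --apiserver-port=8555, and the openssl inspection of apiserver.crt ran; the failure is again the missing kubectl binary, which the test needs for `kubectl config view` to confirm the apiserver port recorded in the kubeconfig. The SAN side can still be checked without kubectl by reusing the same ssh/openssl command from the log; the assertions below are illustrative assumptions, not the cert_options_test.go checks:

    // Inspect the apiserver certificate inside the VM and look for the
    // extra names/IPs that were requested on the start command line.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        profile := "cert-options-20200701032338-8084"
        out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
            "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").CombinedOutput()
        if err != nil {
            fmt.Printf("ssh/openssl failed: %v\n%s", err, out)
            return
        }
        for _, want := range []string{"192.168.15.15", "www.google.com"} {
            if !strings.Contains(string(out), want) {
                fmt.Printf("SAN %q missing from apiserver.crt\n", want)
            }
        }
    }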
=== CONT TestNetworkPlugins
=== RUN TestNetworkPlugins/group
=== RUN TestNetworkPlugins/group/auto
=== PAUSE TestNetworkPlugins/group/auto
=== RUN TestNetworkPlugins/group/kubenet
=== PAUSE TestNetworkPlugins/group/kubenet
=== RUN TestNetworkPlugins/group/bridge
=== PAUSE TestNetworkPlugins/group/bridge
=== RUN TestNetworkPlugins/group/enable-default-cni
=== PAUSE TestNetworkPlugins/group/enable-default-cni
=== RUN TestNetworkPlugins/group/flannel
=== PAUSE TestNetworkPlugins/group/flannel
=== RUN TestNetworkPlugins/group/kindnet
=== PAUSE TestNetworkPlugins/group/kindnet
=== RUN TestNetworkPlugins/group/false
=== PAUSE TestNetworkPlugins/group/false
=== RUN TestNetworkPlugins/group/custom-weave
=== PAUSE TestNetworkPlugins/group/custom-weave
=== CONT TestGvisorAddon
=== RUN TestPause/serial/SecondStartNoReconfiguration
--- FAIL: TestGvisorAddon (182.14s)
gvisor_addon_test.go:51: (dbg) Run: out/minikube-linux-amd64 start -p gvisor-20200701032659-8084 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2
gvisor_addon_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-20200701032659-8084 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (2m34.607175143s)
gvisor_addon_test.go:57: (dbg) Run: out/minikube-linux-amd64 -p gvisor-20200701032659-8084 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:57: (dbg) Done: out/minikube-linux-amd64 -p gvisor-20200701032659-8084 cache add gcr.io/k8s-minikube/gvisor-addon:2: (9.091342518s)
gvisor_addon_test.go:62: (dbg) Run: out/minikube-linux-amd64 -p gvisor-20200701032659-8084 addons enable gvisor
gvisor_addon_test.go:62: (dbg) Done: out/minikube-linux-amd64 -p gvisor-20200701032659-8084 addons enable gvisor: (11.475994162s)
gvisor_addon_test.go:67: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:332: "gvisor" [87086a65-c55f-4c5e-afc5-2012f0877fb5] Running
gvisor_addon_test.go:67: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.015381173s
gvisor_addon_test.go:72: (dbg) Run: kubectl --context gvisor-20200701032659-8084 replace --force -f testdata/nginx-untrusted.yaml
gvisor_addon_test.go:72: (dbg) Non-zero exit: kubectl --context gvisor-20200701032659-8084 replace --force -f testdata/nginx-untrusted.yaml: exec: "kubectl": executable file not found in $PATH (409ns)
gvisor_addon_test.go:74: kubectl --context gvisor-20200701032659-8084 replace --force -f testdata/nginx-untrusted.yaml failed: exec: "kubectl": executable file not found in $PATH
gvisor_addon_test.go:41: (dbg) Run: kubectl --context gvisor-20200701032659-8084 logs gvisor -n kube-system
gvisor_addon_test.go:41: (dbg) Non-zero exit: kubectl --context gvisor-20200701032659-8084 logs gvisor -n kube-system: exec: "kubectl": executable file not found in $PATH (55ns)
gvisor_addon_test.go:43: failed to get gvisor post-mortem logs: exec: "kubectl": executable file not found in $PATH
gvisor_addon_test.go:45: gvisor post-mortem: kubectl --context gvisor-20200701032659-8084 logs gvisor -n kube-system:
gvisor_addon_test.go:47: *** TestGvisorAddon FAILED at 2020-07-01 03:29:59.577795346 +0000 UTC m=+2004.679083922
helpers_test.go:215: -----------------------post-mortem--------------------------------
helpers_test.go:232: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p gvisor-20200701032659-8084 -n gvisor-20200701032659-8084
helpers_test.go:237: <<< TestGvisorAddon FAILED: start of post-mortem logs <<<
helpers_test.go:238: ======> post-mortem[TestGvisorAddon]: minikube logs <======
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 -p gvisor-20200701032659-8084 logs -n 25
helpers_test.go:245: TestGvisorAddon logs:
-- stdout --
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
* 2ef915581a27f 52372175c42a0 6 seconds ago Running gvisor 0 5600c257a5afd
* 69e29210382ce 67da37a9a360e 9 seconds ago Running coredns 0 5e4cf5a7cffd7
* 54b14667a3811 3439b7546f29b 11 seconds ago Running kube-proxy 0 30f7867b5536d
* e0e4ec565e716 4689081edb103 11 seconds ago Running storage-provisioner 0 823a8078efa6a
* 1c0689a244eb1 303ce5db0e90d 37 seconds ago Running etcd 0 9608609744646
* c76a37906e51f 76216c34ed0c7 37 seconds ago Running kube-scheduler 0 6294fdae4dc42
* d4c8e8d6b31ef da26705ccb4b5 37 seconds ago Running kube-controller-manager 0 2665ea8ae5cdb
* 7bbba6ee79e8d 7e28efa976bd1 37 seconds ago Running kube-apiserver 0 ad003d6beb432
*
* ==> containerd <==
* -- Logs begin at Wed 2020-07-01 03:27:12 UTC, end at Wed 2020-07-01 03:30:00 UTC. --
* Jul 01 03:29:48 gvisor-20200701032659-8084 containerd[2064]: time="2020-07-01T03:29:48.866677444Z" level=info msg="StartContainer for "54b14667a381147ef7a90bab68c5e2ebaac74fbdb1a0d51d59a039eaafa2abd6""
* Jul 01 03:29:48 gvisor-20200701032659-8084 containerd[2064]: time="2020-07-01T03:29:48.875660259Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/54b14667a381147ef7a90bab68c5e2ebaac74fbdb1a0d51d59a039eaafa2abd6/shim.sock" debug=false pid=3003
* Jul 01 03:29:49 gvisor-20200701032659-8084 containerd[2064]: time="2020-07-01T03:29:49.059396977Z" level=info msg="StartContainer for "54b14667a381147ef7a90bab68c5e2ebaac74fbdb1a0d51d59a039eaafa2abd6" returns successfully"
* Jul 01 03:29:49 gvisor-20200701032659-8084 containerd[2064]: time="2020-07-01T03:29:49.880844446Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-66bff467f8-bs6rq,Uid:27aee0fc-6cd3-46e0-afb8-ac027b25e8da,Namespace:kube-system,Attempt:0,}"
* Jul 01 03:29:49 gvisor-20200701032659-8084 containerd[2064]: time="2020-07-01T03:29:49.985446310Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/5e4cf5a7cffd7468519b0d654acb9651c335542df4671e79d987665f420e827e/shim.sock" debug=false pid=3699
* Jul 01 03:29:50 gvisor-20200701032659-8084 containerd[2064]: time="2020-07-01T03:29:50.079215475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bff467f8-bs6rq,Uid:27aee0fc-6cd3-46e0-afb8-ac027b25e8da,Namespace:kube-system,Attempt:0,} returns sandbox id "5e4cf5a7cffd7468519b0d654acb9651c335542df4671e79d987665f420e827e""
* Jul 01 03:29:50 gvisor-20200701032659-8084 containerd[2064]: time="2020-07-01T03:29:50.084104986Z" level=info msg="CreateContainer within sandbox "5e4cf5a7cffd7468519b0d654acb9651c335542df4671e79d987665f420e827e" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
* Jul 01 03:29:50 gvisor-20200701032659-8084 containerd[2064]: time="2020-07-01T03:29:50.136107082Z" level=info msg="CreateContainer within sandbox "5e4cf5a7cffd7468519b0d654acb9651c335542df4671e79d987665f420e827e" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id "69e29210382cec23de73ddc2295ba4780399561dfaf23358936ffebcc024ae7d""
* Jul 01 03:29:50 gvisor-20200701032659-8084 containerd[2064]: time="2020-07-01T03:29:50.140116795Z" level=info msg="StartContainer for "69e29210382cec23de73ddc2295ba4780399561dfaf23358936ffebcc024ae7d""
* Jul 01 03:29:50 gvisor-20200701032659-8084 containerd[2064]: time="2020-07-01T03:29:50.145625557Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/69e29210382cec23de73ddc2295ba4780399561dfaf23358936ffebcc024ae7d/shim.sock" debug=false pid=3805
* Jul 01 03:29:50 gvisor-20200701032659-8084 containerd[2064]: time="2020-07-01T03:29:50.277238078Z" level=info msg="StartContainer for "69e29210382cec23de73ddc2295ba4780399561dfaf23358936ffebcc024ae7d" returns successfully"
* Jul 01 03:29:51 gvisor-20200701032659-8084 containerd[2064]: time="2020-07-01T03:29:51.408540105Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:gvisor,Uid:87086a65-c55f-4c5e-afc5-2012f0877fb5,Namespace:kube-system,Attempt:0,}"
* Jul 01 03:29:51 gvisor-20200701032659-8084 containerd[2064]: time="2020-07-01T03:29:51.522701090Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/5600c257a5afdb0520d118c79938df3511edcda6b5382461c0a523f4e2ff2ac6/shim.sock" debug=false pid=4872
* Jul 01 03:29:51 gvisor-20200701032659-8084 containerd[2064]: time="2020-07-01T03:29:51.697905708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:gvisor,Uid:87086a65-c55f-4c5e-afc5-2012f0877fb5,Namespace:kube-system,Attempt:0,} returns sandbox id "5600c257a5afdb0520d118c79938df3511edcda6b5382461c0a523f4e2ff2ac6""
* Jul 01 03:29:51 gvisor-20200701032659-8084 containerd[2064]: time="2020-07-01T03:29:51.709886298Z" level=info msg="PullImage "gcr.io/k8s-minikube/gvisor-addon:3""
* Jul 01 03:29:53 gvisor-20200701032659-8084 containerd[2064]: time="2020-07-01T03:29:53.292757187Z" level=info msg="ImageCreate event &ImageCreate{Name:gcr.io/k8s-minikube/gvisor-addon:3,Labels:map[string]string{io.cri-containerd.image: managed,},}"
* Jul 01 03:29:54 gvisor-20200701032659-8084 containerd[2064]: time="2020-07-01T03:29:54.027643170Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:52372175c42a0577f1aca4f4ab39c6866fd09c2c6be692ec624cf46e597b14cd,Labels:map[string]string{io.cri-containerd.image: managed,},}"
* Jul 01 03:29:54 gvisor-20200701032659-8084 containerd[2064]: time="2020-07-01T03:29:54.030854207Z" level=info msg="ImageUpdate event &ImageUpdate{Name:gcr.io/k8s-minikube/gvisor-addon:3,Labels:map[string]string{io.cri-containerd.image: managed,},}"
* Jul 01 03:29:54 gvisor-20200701032659-8084 containerd[2064]: time="2020-07-01T03:29:54.034043073Z" level=info msg="ImageCreate event &ImageCreate{Name:gcr.io/k8s-minikube/gvisor-addon@sha256:23eb17d48a66fc2b09c31454fb54ecae520c3e9c9197ef17fcb398b4f31d505a,Labels:map[string]string{io.cri-containerd.image: managed,},}"
* Jul 01 03:29:54 gvisor-20200701032659-8084 containerd[2064]: time="2020-07-01T03:29:54.038777610Z" level=info msg="PullImage "gcr.io/k8s-minikube/gvisor-addon:3" returns image reference "sha256:52372175c42a0577f1aca4f4ab39c6866fd09c2c6be692ec624cf46e597b14cd""
* Jul 01 03:29:54 gvisor-20200701032659-8084 containerd[2064]: time="2020-07-01T03:29:54.049949223Z" level=info msg="CreateContainer within sandbox "5600c257a5afdb0520d118c79938df3511edcda6b5382461c0a523f4e2ff2ac6" for container &ContainerMetadata{Name:gvisor,Attempt:0,}"
* Jul 01 03:29:54 gvisor-20200701032659-8084 containerd[2064]: time="2020-07-01T03:29:54.109480163Z" level=info msg="CreateContainer within sandbox "5600c257a5afdb0520d118c79938df3511edcda6b5382461c0a523f4e2ff2ac6" for &ContainerMetadata{Name:gvisor,Attempt:0,} returns container id "2ef915581a27f5fd1beec3783f05089320664b3cf8e0b62fab41953dafecdde9""
* Jul 01 03:29:54 gvisor-20200701032659-8084 containerd[2064]: time="2020-07-01T03:29:54.111875203Z" level=info msg="StartContainer for "2ef915581a27f5fd1beec3783f05089320664b3cf8e0b62fab41953dafecdde9""
* Jul 01 03:29:54 gvisor-20200701032659-8084 containerd[2064]: time="2020-07-01T03:29:54.116148190Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/2ef915581a27f5fd1beec3783f05089320664b3cf8e0b62fab41953dafecdde9/shim.sock" debug=false pid=6952
* Jul 01 03:29:54 gvisor-20200701032659-8084 containerd[2064]: time="2020-07-01T03:29:54.271530700Z" level=info msg="StartContainer for "2ef915581a27f5fd1beec3783f05089320664b3cf8e0b62fab41953dafecdde9" returns successfully"
*
* ==> coredns [69e29210382cec23de73ddc2295ba4780399561dfaf23358936ffebcc024ae7d] <==
* .:53
* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
* CoreDNS-1.6.7
* linux/amd64, go1.13.6, da7f65b
* [INFO] plugin/ready: Still waiting on: "kubernetes"
*
* ==> describe nodes <==
* Name: gvisor-20200701032659-8084
* Roles: master
* Labels: beta.kubernetes.io/arch=amd64
* beta.kubernetes.io/os=linux
* kubernetes.io/arch=amd64
* kubernetes.io/hostname=gvisor-20200701032659-8084
* kubernetes.io/os=linux
* minikube.k8s.io/commit=8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f
* minikube.k8s.io/name=gvisor-20200701032659-8084
* minikube.k8s.io/updated_at=2020_07_01T03_29_32_0700
* minikube.k8s.io/version=v1.12.0-beta.0
* node-role.kubernetes.io/master=
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
* node.alpha.kubernetes.io/ttl: 0
* volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp: Wed, 01 Jul 2020 03:29:28 +0000
* Taints: <none>
* Unschedulable: false
* Lease:
* HolderIdentity: gvisor-20200701032659-8084
* AcquireTime: <unset>
* RenewTime: Wed, 01 Jul 2020 03:29:53 +0000
* Conditions:
* Type             Status   LastHeartbeatTime                 LastTransitionTime                Reason                       Message
* ----             ------   -----------------                 ------------------                ------                       -------
* MemoryPressure   False    Wed, 01 Jul 2020 03:29:43 +0000   Wed, 01 Jul 2020 03:29:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
* DiskPressure     False    Wed, 01 Jul 2020 03:29:43 +0000   Wed, 01 Jul 2020 03:29:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
* PIDPressure      False    Wed, 01 Jul 2020 03:29:43 +0000   Wed, 01 Jul 2020 03:29:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
* Ready            True     Wed, 01 Jul 2020 03:29:43 +0000   Wed, 01 Jul 2020 03:29:43 +0000   KubeletReady                 kubelet is posting ready status
* Addresses:
* InternalIP: 192.168.39.251
* Hostname: gvisor-20200701032659-8084
* Capacity:
* cpu: 2
* ephemeral-storage: 16954224Ki
* hugepages-2Mi: 0
* memory: 2085672Ki
* pods: 110
* Allocatable:
* cpu: 2
* ephemeral-storage: 16954224Ki
* hugepages-2Mi: 0
* memory: 2085672Ki
* pods: 110
* System Info:
* Machine ID: e7299bbfce4f4c648d920b5794c8fdac
* System UUID: e7299bbf-ce4f-4c64-8d92-0b5794c8fdac
* Boot ID: b85139e3-83c3-4de3-a1ff-aa867fc04a39
* Kernel Version: 4.19.107
* OS Image: Buildroot 2019.02.10
* Operating System: linux
* Architecture: amd64
* Container Runtime Version: containerd://1.2.13
* Kubelet Version: v1.18.3
* Kube-Proxy Version: v1.18.3
* PodCIDR: 10.244.0.0/24
* PodCIDRs: 10.244.0.0/24
* Non-terminated Pods: (8 in total)
* Namespace     Name                                                  CPU Requests   CPU Limits   Memory Requests   Memory Limits   AGE
* ---------     ----                                                  ------------   ----------   ---------------   -------------   ---
* kube-system   coredns-66bff467f8-bs6rq                              100m (5%)      0 (0%)       70Mi (3%)         170Mi (8%)      13s
* kube-system   etcd-gvisor-20200701032659-8084                       0 (0%)         0 (0%)       0 (0%)            0 (0%)          27s
* kube-system   gvisor                                                0 (0%)         0 (0%)       0 (0%)            0 (0%)          13s
* kube-system   kube-apiserver-gvisor-20200701032659-8084             250m (12%)     0 (0%)       0 (0%)            0 (0%)          27s
* kube-system   kube-controller-manager-gvisor-20200701032659-8084    200m (10%)     0 (0%)       0 (0%)            0 (0%)          27s
* kube-system   kube-proxy-n2gg4                                      0 (0%)         0 (0%)       0 (0%)            0 (0%)          12s
* kube-system   kube-scheduler-gvisor-20200701032659-8084             100m (5%)      0 (0%)       0 (0%)            0 (0%)          27s
* kube-system   storage-provisioner                                   0 (0%)         0 (0%)       0 (0%)            0 (0%)          27s
* Allocated resources:
* (Total limits may be over 100 percent, i.e., overcommitted.)
* Resource            Requests     Limits
* --------            --------     ------
* cpu                 650m (32%)   0 (0%)
* memory              70Mi (3%)    170Mi (8%)
* ephemeral-storage   0 (0%)       0 (0%)
* hugepages-2Mi       0 (0%)       0 (0%)
* Events:
* Type    Reason                    Age                 From                                     Message
* ----    ------                    ----                ----                                     -------
* Normal  NodeHasSufficientMemory   39s (x5 over 39s)   kubelet, gvisor-20200701032659-8084      Node gvisor-20200701032659-8084 status is now: NodeHasSufficientMemory
* Normal  NodeHasNoDiskPressure     39s (x4 over 39s)   kubelet, gvisor-20200701032659-8084      Node gvisor-20200701032659-8084 status is now: NodeHasNoDiskPressure
* Normal  NodeHasSufficientPID      39s (x4 over 39s)   kubelet, gvisor-20200701032659-8084      Node gvisor-20200701032659-8084 status is now: NodeHasSufficientPID
* Normal  Starting                  28s                 kubelet, gvisor-20200701032659-8084      Starting kubelet.
* Normal  NodeHasSufficientMemory   27s                 kubelet, gvisor-20200701032659-8084      Node gvisor-20200701032659-8084 status is now: NodeHasSufficientMemory
* Normal  NodeHasSufficientPID      27s                 kubelet, gvisor-20200701032659-8084      Node gvisor-20200701032659-8084 status is now: NodeHasSufficientPID
* Normal  NodeAllocatableEnforced   27s                 kubelet, gvisor-20200701032659-8084      Updated Node Allocatable limit across pods
* Normal  NodeReady                 17s                 kubelet, gvisor-20200701032659-8084      Node gvisor-20200701032659-8084 status is now: NodeReady
* Normal  Starting                  6s                  kube-proxy, gvisor-20200701032659-8084   Starting kube-proxy.
*
* ==> dmesg <==
* [Jul 1 03:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
* [ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
* [ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
* [ +0.064867] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
* [ +3.114299] Unstable clock detected, switching default tracing clock to "global"
* If you want to keep using the local clock, then add:
* "trace_clock=local"
* on the kernel command line
* [ +0.000093] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
* [ +1.937116] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
* [ +0.009447] systemd-fstab-generator[1147]: Ignoring "noauto" for root device
* [ +0.001373] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
* [ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
* [ +1.557906] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
* [ +15.214546] vboxguest: loading out-of-tree module taints kernel.
* [ +0.005385] vboxguest: PCI device not found, probably running on physical hardware.
* [ +21.164078] systemd-fstab-generator[2006]: Ignoring "noauto" for root device
* [ +0.187632] systemd-fstab-generator[2053]: Ignoring "noauto" for root device
* [Jul 1 03:29] systemd-fstab-generator[2124]: Ignoring "noauto" for root device
* [ +0.653125] NFSD: Unable to end grace period: -110
* [ +1.439075] systemd-fstab-generator[2285]: Ignoring "noauto" for root device
* [ +14.549613] systemd-fstab-generator[2708]: Ignoring "noauto" for root device
* [ +17.675021] kauditd_printk_skb: 35 callbacks suppressed
*
* ==> etcd [1c0689a244eb180954bc8cfeb81d48d263fc44740590277cd3fa029894794ace] <==
* raft2020/07/01 03:29:23 INFO: 9ebeb2ab026a2136 became follower at term 0
* raft2020/07/01 03:29:23 INFO: newRaft 9ebeb2ab026a2136 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
* raft2020/07/01 03:29:23 INFO: 9ebeb2ab026a2136 became follower at term 1
* raft2020/07/01 03:29:23 INFO: 9ebeb2ab026a2136 switched to configuration voters=(11438776551117300022)
* 2020-07-01 03:29:24.314332 W | auth: simple token is not cryptographically signed
* 2020-07-01 03:29:24.535524 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
* 2020-07-01 03:29:24.552133 I | etcdserver: 9ebeb2ab026a2136 as single-node; fast-forwarding 9 ticks (election ticks 10)
* raft2020/07/01 03:29:24 INFO: 9ebeb2ab026a2136 switched to configuration voters=(11438776551117300022)
* 2020-07-01 03:29:24.552907 I | etcdserver/membership: added member 9ebeb2ab026a2136 [https://192.168.39.251:2380] to cluster e90308ed4eec0237
* 2020-07-01 03:29:24.554703 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
* 2020-07-01 03:29:24.554871 I | embed: listening for peers on 192.168.39.251:2380
* 2020-07-01 03:29:24.555651 I | embed: listening for metrics on http://127.0.0.1:2381
* raft2020/07/01 03:29:25 INFO: 9ebeb2ab026a2136 is starting a new election at term 1
* raft2020/07/01 03:29:25 INFO: 9ebeb2ab026a2136 became candidate at term 2
* raft2020/07/01 03:29:25 INFO: 9ebeb2ab026a2136 received MsgVoteResp from 9ebeb2ab026a2136 at term 2
* raft2020/07/01 03:29:25 INFO: 9ebeb2ab026a2136 became leader at term 2
* raft2020/07/01 03:29:25 INFO: raft.node: 9ebeb2ab026a2136 elected leader 9ebeb2ab026a2136 at term 2
* 2020-07-01 03:29:25.364634 I | etcdserver: published {Name:gvisor-20200701032659-8084 ClientURLs:[https://192.168.39.251:2379]} to cluster e90308ed4eec0237
* 2020-07-01 03:29:25.364660 I | embed: ready to serve client requests
* 2020-07-01 03:29:25.366164 I | embed: serving client requests on 192.168.39.251:2379
* 2020-07-01 03:29:25.366351 I | embed: ready to serve client requests
* 2020-07-01 03:29:25.367605 I | embed: serving client requests on 127.0.0.1:2379
* 2020-07-01 03:29:25.369469 I | etcdserver: setting up the initial cluster version to 3.4
* 2020-07-01 03:29:25.383698 N | etcdserver/membership: set the initial cluster version to 3.4
* 2020-07-01 03:29:25.383811 I | etcdserver/api: enabled capabilities for version 3.4
*
* ==> kernel <==
* 03:30:00 up 2 min, 0 users, load average: 1.72, 0.82, 0.32
* Linux gvisor-20200701032659-8084 4.19.107 #1 SMP Thu May 28 15:07:17 PDT 2020 x86_64 GNU/Linux
* PRETTY_NAME="Buildroot 2019.02.10"
*
* ==> kube-apiserver [7bbba6ee79e8df080a4da293770124102d3eb9f2c0562dd9a01866e3323112c1] <==
* I0701 03:29:28.846967 1 cache.go:39] Caches are synced for autoregister controller
* I0701 03:29:28.851888 1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller
* I0701 03:29:28.852896 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
* I0701 03:29:28.853161 1 cache.go:39] Caches are synced for AvailableConditionController controller
* I0701 03:29:28.892448 1 shared_informer.go:230] Caches are synced for crd-autoregister
* I0701 03:29:29.721396 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
* I0701 03:29:29.721450 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
* I0701 03:29:29.737678 1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
* I0701 03:29:29.754349 1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
* I0701 03:29:29.755234 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
* I0701 03:29:30.470193 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
* I0701 03:29:30.581869 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
* W0701 03:29:30.701125 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.251]
* I0701 03:29:30.702134 1 controller.go:606] quota admission added evaluator for: endpoints
* I0701 03:29:30.706514 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
* I0701 03:29:31.131673 1 controller.go:606] quota admission added evaluator for: serviceaccounts
* I0701 03:29:32.263842 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
* I0701 03:29:32.495169 1 controller.go:606] quota admission added evaluator for: deployments.apps
* I0701 03:29:32.641543 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
* I0701 03:29:44.908601 1 trace.go:116] Trace[1569710455]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubectl/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:127.0.0.1 (started: 2020-07-01 03:29:43.434740527 +0000 UTC m=+20.702734778) (total time: 1.473821606s):
* Trace[1569710455]: [1.473821606s] [1.473661543s] END
* I0701 03:29:46.947823 1 trace.go:116] Trace[128539431]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubectl/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:127.0.0.1 (started: 2020-07-01 03:29:45.323791871 +0000 UTC m=+22.591786028) (total time: 1.624018283s):
* Trace[128539431]: [1.624018283s] [1.623359356s] END
* I0701 03:29:47.836173 1 controller.go:606] quota admission added evaluator for: replicasets.apps
* I0701 03:29:48.244708 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
*
* ==> kube-controller-manager [d4c8e8d6b31ef1a467ac141a9602f41f39f9ee13565a063bd5c6f7d53b47804c] <==
* I0701 03:29:47.943458 1 shared_informer.go:230] Caches are synced for node
* I0701 03:29:47.943858 1 range_allocator.go:172] Starting range CIDR allocator
* I0701 03:29:47.944003 1 shared_informer.go:223] Waiting for caches to sync for cidrallocator
* I0701 03:29:47.944088 1 shared_informer.go:230] Caches are synced for cidrallocator
* I0701 03:29:47.946176 1 shared_informer.go:230] Caches are synced for GC
* I0701 03:29:47.947402 1 shared_informer.go:230] Caches are synced for persistent volume
* I0701 03:29:48.007236 1 range_allocator.go:373] Set node gvisor-20200701032659-8084 PodCIDR to [10.244.0.0/24]
* I0701 03:29:48.015400 1 shared_informer.go:230] Caches are synced for taint
* I0701 03:29:48.015792 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone:
* W0701 03:29:48.016492 1 node_lifecycle_controller.go:1048] Missing timestamp for Node gvisor-20200701032659-8084. Assuming now as a timestamp.
* I0701 03:29:48.016584 1 node_lifecycle_controller.go:1249] Controller detected that zone is now in state Normal.
* I0701 03:29:48.017754 1 taint_manager.go:187] Starting NoExecuteTaintManager
* I0701 03:29:48.018432 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"gvisor-20200701032659-8084", UID:"9b7b92df-74cb-4fdc-b1d9-ae2a3b17b723", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node gvisor-20200701032659-8084 event: Registered Node gvisor-20200701032659-8084 in Controller
* I0701 03:29:48.168574 1 shared_informer.go:230] Caches are synced for job
* I0701 03:29:48.190327 1 shared_informer.go:230] Caches are synced for stateful set
* I0701 03:29:48.231124 1 shared_informer.go:230] Caches are synced for daemon sets
* I0701 03:29:48.239523 1 shared_informer.go:230] Caches are synced for disruption
* I0701 03:29:48.239575 1 disruption.go:339] Sending events to api server.
* I0701 03:29:48.265529 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"62037947-3d9e-4722-85ca-c9cc02056e07", APIVersion:"apps/v1", ResourceVersion:"232", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-n2gg4
* I0701 03:29:48.265973 1 shared_informer.go:230] Caches are synced for resource quota
* I0701 03:29:48.281570 1 shared_informer.go:230] Caches are synced for ReplicationController
* I0701 03:29:48.336232 1 shared_informer.go:230] Caches are synced for resource quota
* I0701 03:29:48.389512 1 shared_informer.go:230] Caches are synced for garbage collector
* I0701 03:29:48.389661 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
* I0701 03:29:48.390114 1 shared_informer.go:230] Caches are synced for garbage collector
*
* ==> kube-proxy [54b14667a381147ef7a90bab68c5e2ebaac74fbdb1a0d51d59a039eaafa2abd6] <==
* W0701 03:29:54.288932 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
* I0701 03:29:54.300743 1 node.go:136] Successfully retrieved node IP: 192.168.39.251
* I0701 03:29:54.300831 1 server_others.go:186] Using iptables Proxier.
* I0701 03:29:54.301393 1 server.go:583] Version: v1.18.3
* I0701 03:29:54.301896 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
* I0701 03:29:54.302005 1 conntrack.go:52] Setting nf_conntrack_max to 131072
* I0701 03:29:54.302555 1 conntrack.go:83] Setting conntrack hashsize to 32768
* I0701 03:29:54.307905 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
* I0701 03:29:54.308915 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
* I0701 03:29:54.311419 1 config.go:315] Starting service config controller
* I0701 03:29:54.311500 1 shared_informer.go:223] Waiting for caches to sync for service config
* I0701 03:29:54.311647 1 config.go:133] Starting endpoints config controller
* I0701 03:29:54.312189 1 shared_informer.go:223] Waiting for caches to sync for endpoints config
* I0701 03:29:54.413061 1 shared_informer.go:230] Caches are synced for service config
* I0701 03:29:54.413581 1 shared_informer.go:230] Caches are synced for endpoints config
*
* ==> kube-scheduler [c76a37906e51f821c45e5bee8980d51301812f935864662de9622115cd3ef963] <==
* W0701 03:29:28.866434 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
* I0701 03:29:28.917977 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0701 03:29:28.918586 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* W0701 03:29:28.927039 1 authorization.go:47] Authorization is disabled
* W0701 03:29:28.927095 1 authentication.go:40] Authentication is disabled
* I0701 03:29:28.927110 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
* I0701 03:29:28.929639 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
* I0701 03:29:28.934561 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0701 03:29:28.935086 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* E0701 03:29:28.946428 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* I0701 03:29:28.936761 1 tlsconfig.go:240] Starting DynamicServingCertificateController
* E0701 03:29:28.947029 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0701 03:29:28.947358 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E0701 03:29:28.948233 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0701 03:29:28.948596 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0701 03:29:28.948825 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0701 03:29:28.949016 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0701 03:29:28.953922 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E0701 03:29:28.954061 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0701 03:29:29.939020 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0701 03:29:29.996669 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E0701 03:29:30.282086 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* I0701 03:29:32.346437 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* E0701 03:29:47.965687 1 factory.go:503] pod: kube-system/coredns-66bff467f8-bs6rq is already present in unschedulable queue
* E0701 03:29:48.059106 1 factory.go:503] pod kube-system/gvisor is already present in the backoff queue
*
* ==> kubelet <==
* -- Logs begin at Wed 2020-07-01 03:27:12 UTC, end at Wed 2020-07-01 03:30:00 UTC. --
* Jul 01 03:29:33 gvisor-20200701032659-8084 kubelet[2717]: I0701 03:29:33.665892 2717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/63bd6055502d129b1c4549fc637836ed-ca-certs") pod "kube-controller-manager-gvisor-20200701032659-8084" (UID: "63bd6055502d129b1c4549fc637836ed")
* Jul 01 03:29:33 gvisor-20200701032659-8084 kubelet[2717]: I0701 03:29:33.665913 2717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/63bd6055502d129b1c4549fc637836ed-k8s-certs") pod "kube-controller-manager-gvisor-20200701032659-8084" (UID: "63bd6055502d129b1c4549fc637836ed")
* Jul 01 03:29:33 gvisor-20200701032659-8084 kubelet[2717]: I0701 03:29:33.665937 2717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/63bd6055502d129b1c4549fc637836ed-kubeconfig") pod "kube-controller-manager-gvisor-20200701032659-8084" (UID: "63bd6055502d129b1c4549fc637836ed")
* Jul 01 03:29:33 gvisor-20200701032659-8084 kubelet[2717]: I0701 03:29:33.665947 2717 reconciler.go:157] Reconciler: start to sync state
* Jul 01 03:29:48 gvisor-20200701032659-8084 kubelet[2717]: I0701 03:29:48.019051 2717 kuberuntime_manager.go:978] updating runtime config through cri with podcidr 10.244.0.0/24
* Jul 01 03:29:48 gvisor-20200701032659-8084 kubelet[2717]: I0701 03:29:48.021134 2717 kubelet_network.go:77] Setting Pod CIDR: -> 10.244.0.0/24
* Jul 01 03:29:48 gvisor-20200701032659-8084 kubelet[2717]: I0701 03:29:48.073419 2717 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Jul 01 03:29:48 gvisor-20200701032659-8084 kubelet[2717]: I0701 03:29:48.119664 2717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/06db70cc-90fc-4f9f-9cdf-6f18e1443962-tmp") pod "storage-provisioner" (UID: "06db70cc-90fc-4f9f-9cdf-6f18e1443962")
* Jul 01 03:29:48 gvisor-20200701032659-8084 kubelet[2717]: I0701 03:29:48.119751 2717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-hrx69" (UniqueName: "kubernetes.io/secret/06db70cc-90fc-4f9f-9cdf-6f18e1443962-storage-provisioner-token-hrx69") pod "storage-provisioner" (UID: "06db70cc-90fc-4f9f-9cdf-6f18e1443962")
* Jul 01 03:29:48 gvisor-20200701032659-8084 kubelet[2717]: I0701 03:29:48.274468 2717 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Jul 01 03:29:48 gvisor-20200701032659-8084 kubelet[2717]: I0701 03:29:48.320391 2717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/84fe169a-bfe0-4bc3-8bff-8baf31dcb1ae-xtables-lock") pod "kube-proxy-n2gg4" (UID: "84fe169a-bfe0-4bc3-8bff-8baf31dcb1ae")
* Jul 01 03:29:48 gvisor-20200701032659-8084 kubelet[2717]: I0701 03:29:48.320452 2717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/84fe169a-bfe0-4bc3-8bff-8baf31dcb1ae-lib-modules") pod "kube-proxy-n2gg4" (UID: "84fe169a-bfe0-4bc3-8bff-8baf31dcb1ae")
* Jul 01 03:29:48 gvisor-20200701032659-8084 kubelet[2717]: I0701 03:29:48.320473 2717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/84fe169a-bfe0-4bc3-8bff-8baf31dcb1ae-kube-proxy") pod "kube-proxy-n2gg4" (UID: "84fe169a-bfe0-4bc3-8bff-8baf31dcb1ae")
* Jul 01 03:29:48 gvisor-20200701032659-8084 kubelet[2717]: I0701 03:29:48.320491 2717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-6fhml" (UniqueName: "kubernetes.io/secret/84fe169a-bfe0-4bc3-8bff-8baf31dcb1ae-kube-proxy-token-6fhml") pod "kube-proxy-n2gg4" (UID: "84fe169a-bfe0-4bc3-8bff-8baf31dcb1ae")
* Jul 01 03:29:49 gvisor-20200701032659-8084 kubelet[2717]: I0701 03:29:49.552111 2717 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Jul 01 03:29:49 gvisor-20200701032659-8084 kubelet[2717]: I0701 03:29:49.583418 2717 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Jul 01 03:29:49 gvisor-20200701032659-8084 kubelet[2717]: E0701 03:29:49.586613 2717 reflector.go:178] object-"kube-system"/"default-token-cqr2d": Failed to list *v1.Secret: secrets "default-token-cqr2d" is forbidden: User "system:node:gvisor-20200701032659-8084" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "gvisor-20200701032659-8084" and this object
* Jul 01 03:29:49 gvisor-20200701032659-8084 kubelet[2717]: I0701 03:29:49.633756 2717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/27aee0fc-6cd3-46e0-afb8-ac027b25e8da-config-volume") pod "coredns-66bff467f8-bs6rq" (UID: "27aee0fc-6cd3-46e0-afb8-ac027b25e8da")
* Jul 01 03:29:49 gvisor-20200701032659-8084 kubelet[2717]: I0701 03:29:49.633905 2717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-8kd9d" (UniqueName: "kubernetes.io/secret/27aee0fc-6cd3-46e0-afb8-ac027b25e8da-coredns-token-8kd9d") pod "coredns-66bff467f8-bs6rq" (UID: "27aee0fc-6cd3-46e0-afb8-ac027b25e8da")
* Jul 01 03:29:49 gvisor-20200701032659-8084 kubelet[2717]: I0701 03:29:49.633996 2717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "node-root" (UniqueName: "kubernetes.io/host-path/87086a65-c55f-4c5e-afc5-2012f0877fb5-node-root") pod "gvisor" (UID: "87086a65-c55f-4c5e-afc5-2012f0877fb5")
* Jul 01 03:29:49 gvisor-20200701032659-8084 kubelet[2717]: I0701 03:29:49.634029 2717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "node-run" (UniqueName: "kubernetes.io/host-path/87086a65-c55f-4c5e-afc5-2012f0877fb5-node-run") pod "gvisor" (UID: "87086a65-c55f-4c5e-afc5-2012f0877fb5")
* Jul 01 03:29:49 gvisor-20200701032659-8084 kubelet[2717]: I0701 03:29:49.634104 2717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "node-tmp" (UniqueName: "kubernetes.io/host-path/87086a65-c55f-4c5e-afc5-2012f0877fb5-node-tmp") pod "gvisor" (UID: "87086a65-c55f-4c5e-afc5-2012f0877fb5")
* Jul 01 03:29:49 gvisor-20200701032659-8084 kubelet[2717]: I0701 03:29:49.634131 2717 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-cqr2d" (UniqueName: "kubernetes.io/secret/87086a65-c55f-4c5e-afc5-2012f0877fb5-default-token-cqr2d") pod "gvisor" (UID: "87086a65-c55f-4c5e-afc5-2012f0877fb5")
* Jul 01 03:29:50 gvisor-20200701032659-8084 kubelet[2717]: E0701 03:29:50.736432 2717 secret.go:195] Couldn't get secret kube-system/default-token-cqr2d: failed to sync secret cache: timed out waiting for the condition
* Jul 01 03:29:50 gvisor-20200701032659-8084 kubelet[2717]: E0701 03:29:50.736633 2717 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/87086a65-c55f-4c5e-afc5-2012f0877fb5-default-token-cqr2d podName:87086a65-c55f-4c5e-afc5-2012f0877fb5 nodeName:}" failed. No retries permitted until 2020-07-01 03:29:51.236582438 +0000 UTC m=+18.837742880 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"default-token-cqr2d\" (UniqueName: \"kubernetes.io/secret/87086a65-c55f-4c5e-afc5-2012f0877fb5-default-token-cqr2d\") pod \"gvisor\" (UID: \"87086a65-c55f-4c5e-afc5-2012f0877fb5\") : failed to sync secret cache: timed out waiting for the condition"
*
* ==> storage-provisioner [e0e4ec565e71691041fa10a50b67460ccaf74c6bc5a1686e44bdefb94801ad7e] <==
-- /stdout --
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p gvisor-20200701032659-8084 -n gvisor-20200701032659-8084
helpers_test.go:254: (dbg) Run: kubectl --context gvisor-20200701032659-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:254: (dbg) Non-zero exit: kubectl --context gvisor-20200701032659-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: exec: "kubectl": executable file not found in $PATH (271ns)
helpers_test.go:256: kubectl --context gvisor-20200701032659-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: exec: "kubectl": executable file not found in $PATH
helpers_test.go:170: Cleaning up "gvisor-20200701032659-8084" profile ...
helpers_test.go:171: (dbg) Run: out/minikube-linux-amd64 delete -p gvisor-20200701032659-8084
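Note that the gvisor replace and the post-mortem kubectl calls above did not fail inside the cluster; they failed because no kubectl binary is on the Jenkins agent's PATH ("executable file not found in $PATH"). A minimal pre-flight check for the agent is sketched below; the download URL, the client version (taken from the v1.18.3 cluster seen above) and the install path are assumptions, not something this job currently does:

    # hedged sketch: ensure a kubectl client exists before running kubectl-based assertions
    if ! command -v kubectl >/dev/null 2>&1; then
      # assumed download location and version; adjust to match the cluster under test
      curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.18.3/bin/linux/amd64/kubectl
      chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl
    fi
    kubectl version --client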
=== CONT TestFunctional
=== RUN TestFunctional/parallel
=== RUN TestFunctional/parallel/ComponentHealth
=== PAUSE TestFunctional/parallel/ComponentHealth
=== RUN TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== RUN TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== RUN TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== RUN TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== RUN TestFunctional/parallel/LogsCmd
=== PAUSE TestFunctional/parallel/LogsCmd
=== RUN TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd
=== RUN TestFunctional/parallel/ProfileCmd
=== PAUSE TestFunctional/parallel/ProfileCmd
=== RUN TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== RUN TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== RUN TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== RUN TestFunctional/parallel/TunnelCmd
=== PAUSE TestFunctional/parallel/TunnelCmd
=== RUN TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== RUN TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== RUN TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== RUN TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== RUN TestFunctional/parallel/UpdateContextCmd
=== PAUSE TestFunctional/parallel/UpdateContextCmd
=== RUN TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== RUN TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT TestStartStop
=== RUN TestStartStop/group
=== RUN TestStartStop/group/old-k8s-version
=== PAUSE TestStartStop/group/old-k8s-version
=== RUN TestStartStop/group/newest-cni
=== PAUSE TestStartStop/group/newest-cni
=== RUN TestStartStop/group/containerd
=== PAUSE TestStartStop/group/containerd
=== RUN TestStartStop/group/crio
=== PAUSE TestStartStop/group/crio
=== RUN TestStartStop/group/embed-certs
=== PAUSE TestStartStop/group/embed-certs
=== CONT TestForceSystemdEnv
=== RUN TestPause/serial/Pause
=== RUN TestPause/serial/Unpause
=== RUN TestPause/serial/PauseAgain
=== RUN TestPause/serial/DeletePaused
=== RUN TestPause/serial/VerifyDeletedResources
--- PASS: TestPause (467.42s)
--- PASS: TestPause/serial (467.16s)
--- PASS: TestPause/serial/Start (378.09s)
pause_test.go:67: (dbg) Run: out/minikube-linux-amd64 start -p pause-20200701032338-8084 --memory=1800 --install-addons=false --wait=all --driver=kvm2
pause_test.go:67: (dbg) Done: out/minikube-linux-amd64 start -p pause-20200701032338-8084 --memory=1800 --install-addons=false --wait=all --driver=kvm2 : (6m18.087485854s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (84.99s)
pause_test.go:78: (dbg) Run: out/minikube-linux-amd64 start -p pause-20200701032338-8084 --alsologtostderr -v=1
pause_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p pause-20200701032338-8084 --alsologtostderr -v=1: (1m24.97421538s)
--- PASS: TestPause/serial/Pause (0.58s)
pause_test.go:95: (dbg) Run: out/minikube-linux-amd64 pause -p pause-20200701032338-8084 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.56s)
pause_test.go:105: (dbg) Run: out/minikube-linux-amd64 unpause -p pause-20200701032338-8084 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (1.36s)
pause_test.go:95: (dbg) Run: out/minikube-linux-amd64 pause -p pause-20200701032338-8084 --alsologtostderr -v=5
pause_test.go:95: (dbg) Done: out/minikube-linux-amd64 pause -p pause-20200701032338-8084 --alsologtostderr -v=5: (1.360534247s)
--- PASS: TestPause/serial/DeletePaused (1.42s)
pause_test.go:115: (dbg) Run: out/minikube-linux-amd64 delete -p pause-20200701032338-8084 --alsologtostderr -v=5
pause_test.go:115: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20200701032338-8084 --alsologtostderr -v=5: (1.416286547s)
--- PASS: TestPause/serial/VerifyDeletedResources (0.17s)
pause_test.go:125: (dbg) Run: out/minikube-linux-amd64 profile list --output json
helpers_test.go:170: Cleaning up "pause-20200701032338-8084" profile ...
helpers_test.go:171: (dbg) Run: out/minikube-linux-amd64 delete -p pause-20200701032338-8084
=== CONT TestForceSystemdFlag
--- PASS: TestForceSystemdEnv (116.66s)
docker_test.go:108: (dbg) Run: out/minikube-linux-amd64 start -p force-systemd-env-20200701033001-8084 --memory=1800 --alsologtostderr -v=5 --driver=kvm2
docker_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20200701033001-8084 --memory=1800 --alsologtostderr -v=5 --driver=kvm2 : (1m45.624627298s)
docker_test.go:113: (dbg) Run: out/minikube-linux-amd64 -p force-systemd-env-20200701033001-8084 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:170: Cleaning up "force-systemd-env-20200701033001-8084" profile ...
helpers_test.go:171: (dbg) Run: out/minikube-linux-amd64 delete -p force-systemd-env-20200701033001-8084
helpers_test.go:171: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20200701033001-8084: (10.79446023s)
=== CONT TestDockerFlags
--- FAIL: TestErrorSpam (619.05s)
error_spam_test.go:58: (dbg) Run: out/minikube-linux-amd64 start -p nospam-20200701032338-8084 -n=1 --memory=2250 --wait=false --driver=kvm2
error_spam_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p nospam-20200701032338-8084 -n=1 --memory=2250 --wait=false --driver=kvm2 : exit status 78 (9m44.324861638s)
-- stdout --
* [nospam-20200701032338-8084] minikube v1.12.0-beta.0 on Debian 10.4
- MINIKUBE_LOCATION=master
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-master-2503-8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-master-2503-8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Using the kvm2 driver based on user configuration
* Starting control plane node nospam-20200701032338-8084 in cluster nospam-20200701032338-8084
* Creating kvm2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
-- /stdout --
** stderr **
! initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [nospam-20200701032338-8084 localhost] and IPs [192.168.39.41 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [nospam-20200701032338-8084 localhost] and IPs [192.168.39.41 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.004495 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
stderr:
W0701 03:26:42.733636 2507 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0701 03:26:46.644242 2507 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0701 03:26:46.653413 2507 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase upload-config/kubeadm: error uploading the kubeadm ClusterConfiguration: unable to create RBAC rolebinding: etcdserver: request timed out
To see the stack trace of this error execute with --v=5 or higher
*
X Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0701 03:29:17.619643 4239 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0701 03:29:19.627807 4239 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0701 03:29:19.629315 4239 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
*
* minikube is exiting due to an error. If the above message is not useful, open an issue:
- https://github.com/kubernetes/minikube/issues/new/choose
*
* [NONE_KUBELET] failed to start node startup failed: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0701 03:29:17.619643 4239 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0701 03:29:19.627807 4239 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0701 03:29:19.629315 4239 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Related issue: https://github.com/kubernetes/minikube/issues/4172
** /stderr **
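The stderr above ends with a concrete hint: the kubelet never became healthy, and the suggested remedy is to force the kubelet's cgroup driver to systemd. A hedged sketch of how one might retry this exact profile with that flag and then inspect the kubelet journal and the Docker cgroup driver inside the VM, reusing the profile name, driver and ssh invocation style already shown in this log (the flag value is only the log's own suggestion, not a verified fix):

    # retry the failed start with the kubelet cgroup driver forced to systemd, as suggested above
    out/minikube-linux-amd64 start -p nospam-20200701032338-8084 -n=1 --memory=2250 --wait=false --driver=kvm2 --extra-config=kubelet.cgroup-driver=systemd
    # pull the kubelet journal and the container runtime's cgroup driver from the VM
    out/minikube-linux-amd64 -p nospam-20200701032338-8084 ssh "sudo journalctl -xeu kubelet | tail -n 50"
    out/minikube-linux-amd64 -p nospam-20200701032338-8084 ssh "docker info --format {{.CgroupDriver}}"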
error_spam_test.go:60: "out/minikube-linux-amd64 start -p nospam-20200701032338-8084 -n=1 --memory=2250 --wait=false --driver=kvm2 " failed: exit status 78
error_spam_test.go:77: unexpected stderr: "! initialization failed, will try again: run: /bin/bash -c \"sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap\": Process exited with status 1"
error_spam_test.go:77: unexpected stderr: "stdout:"
error_spam_test.go:77: unexpected stderr: "[init] Using Kubernetes version: v1.18.3"
error_spam_test.go:77: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:77: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:77: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:77: unexpected stderr: "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'"
error_spam_test.go:77: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:77: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:77: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:77: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:77: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:77: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:77: unexpected stderr: "[certs] Generating \"apiserver-kubelet-client\" certificate and key"
error_spam_test.go:77: unexpected stderr: "[certs] Generating \"front-proxy-ca\" certificate and key"
error_spam_test.go:77: unexpected stderr: "[certs] Generating \"front-proxy-client\" certificate and key"
error_spam_test.go:77: unexpected stderr: "[certs] Generating \"etcd/ca\" certificate and key"
error_spam_test.go:77: unexpected stderr: "[certs] Generating \"etcd/server\" certificate and key"
error_spam_test.go:77: unexpected stderr: "[certs] etcd/server serving cert is signed for DNS names [nospam-20200701032338-8084 localhost] and IPs [192.168.39.41 127.0.0.1 ::1]"
error_spam_test.go:77: unexpected stderr: "[certs] Generating \"etcd/peer\" certificate and key"
error_spam_test.go:77: unexpected stderr: "[certs] etcd/peer serving cert is signed for DNS names [nospam-20200701032338-8084 localhost] and IPs [192.168.39.41 127.0.0.1 ::1]"
error_spam_test.go:77: unexpected stderr: "[certs] Generating \"etcd/healthcheck-client\" certificate and key"
error_spam_test.go:77: unexpected stderr: "[certs] Generating \"apiserver-etcd-client\" certificate and key"
error_spam_test.go:77: unexpected stderr: "[certs] Generating \"sa\" key and public key"
error_spam_test.go:77: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:77: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:77: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:77: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:77: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:77: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:77: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:77: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:77: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:77: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:77: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 4m0s"
error_spam_test.go:77: unexpected stderr: "[apiclient] All control plane components are healthy after 21.004495 seconds"
error_spam_test.go:77: unexpected stderr: "[upload-config] Storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace"
error_spam_test.go:77: unexpected stderr: "stderr:"
error_spam_test.go:77: unexpected stderr: "W0701 03:26:42.733636 2507 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]"
error_spam_test.go:77: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:77: unexpected stderr: "W0701 03:26:46.644242 2507 manifests.go:225] the default kube-apiserver authorization-mode is \"Node,RBAC\"; using \"Node,RBAC\""
error_spam_test.go:77: unexpected stderr: "W0701 03:26:46.653413 2507 manifests.go:225] the default kube-apiserver authorization-mode is \"Node,RBAC\"; using \"Node,RBAC\""
error_spam_test.go:77: unexpected stderr: "error execution phase upload-config/kubeadm: error uploading the kubeadm ClusterConfiguration: unable to create RBAC rolebinding: etcdserver: request timed out"
error_spam_test.go:77: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:77: unexpected stderr: "* "
error_spam_test.go:77: unexpected stderr: "X Error starting cluster: run: /bin/bash -c \"sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap\": Process exited with status 1"
error_spam_test.go:77: unexpected stderr: "stdout:"
error_spam_test.go:77: unexpected stderr: "[init] Using Kubernetes version: v1.18.3"
error_spam_test.go:77: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:77: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:77: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:77: unexpected stderr: "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'"
error_spam_test.go:77: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:77: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:77: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:77: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:77: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:77: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:77: unexpected stderr: "[certs] Using existing apiserver-kubelet-client certificate and key on disk"
error_spam_test.go:77: unexpected stderr: "[certs] Using existing front-proxy-ca certificate authority"
error_spam_test.go:77: unexpected stderr: "[certs] Using existing front-proxy-client certificate and key on disk"
error_spam_test.go:77: unexpected stderr: "[certs] Using existing etcd/ca certificate authority"
error_spam_test.go:77: unexpected stderr: "[certs] Using existing etcd/server certificate and key on disk"
error_spam_test.go:77: unexpected stderr: "[certs] Using existing etcd/peer certificate and key on disk"
error_spam_test.go:77: unexpected stderr: "[certs] Using existing etcd/healthcheck-client certificate and key on disk"
error_spam_test.go:77: unexpected stderr: "[certs] Using existing apiserver-etcd-client certificate and key on disk"
error_spam_test.go:77: unexpected stderr: "[certs] Using the existing \"sa\" key"
error_spam_test.go:77: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:77: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:77: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:77: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:77: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:77: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:77: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:77: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:77: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:77: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:77: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 4m0s"
error_spam_test.go:77: unexpected stderr: "[kubelet-check] Initial timeout of 40s passed."
error_spam_test.go:77: unexpected stderr: "[kubelet-check] It seems like the kubelet isn't running or healthy."
error_spam_test.go:77: unexpected stderr: "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused."
error_spam_test.go:77: unexpected stderr: "\tUnfortunately, an error has occurred:"
error_spam_test.go:77: unexpected stderr: "\t\ttimed out waiting for the condition"
error_spam_test.go:77: unexpected stderr: "\tThis error is likely caused by:"
error_spam_test.go:77: unexpected stderr: "\t\t- The kubelet is not running"
error_spam_test.go:77: unexpected stderr: "\t\t- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)"
error_spam_test.go:77: unexpected stderr: "\tIf you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:"
error_spam_test.go:77: unexpected stderr: "\t\t- 'systemctl status kubelet'"
error_spam_test.go:77: unexpected stderr: "\t\t- 'journalctl -xeu kubelet'"
error_spam_test.go:77: unexpected stderr: "\tAdditionally, a control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:77: unexpected stderr: "\tTo troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:77: unexpected stderr: "\tHere is one example how you may list all Kubernetes containers running in docker:"
error_spam_test.go:77: unexpected stderr: "\t\t- 'docker ps -a | grep kube | grep -v pause'"
error_spam_test.go:77: unexpected stderr: "\t\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:77: unexpected stderr: "\t\t- 'docker logs CONTAINERID'"
error_spam_test.go:77: unexpected stderr: "stderr:"
error_spam_test.go:77: unexpected stderr: "W0701 03:29:17.619643 4239 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]"
error_spam_test.go:77: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:77: unexpected stderr: "W0701 03:29:19.627807 4239 manifests.go:225] the default kube-apiserver authorization-mode is \"Node,RBAC\"; using \"Node,RBAC\""
error_spam_test.go:77: unexpected stderr: "W0701 03:29:19.629315 4239 manifests.go:225] the default kube-apiserver authorization-mode is \"Node,RBAC\"; using \"Node,RBAC\""
error_spam_test.go:77: unexpected stderr: "error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster"
error_spam_test.go:77: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:77: unexpected stderr: "* "
error_spam_test.go:77: unexpected stderr: "* minikube is exiting due to an error. If the above message is not useful, open an issue:"
error_spam_test.go:77: unexpected stderr: " - https://github.com/kubernetes/minikube/issues/new/choose"
error_spam_test.go:77: unexpected stderr: "* "
error_spam_test.go:77: unexpected stderr: "* [NONE_KUBELET] failed to start node startup failed: run: /bin/bash -c \"sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap\": Process exited with status 1"
error_spam_test.go:77: unexpected stderr: "stdout:"
error_spam_test.go:77: unexpected stderr: "[init] Using Kubernetes version: v1.18.3"
error_spam_test.go:77: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:77: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:77: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:77: unexpected stderr: "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'"
error_spam_test.go:77: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:77: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:77: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:77: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:77: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:77: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:77: unexpected stderr: "[certs] Using existing apiserver-kubelet-client certificate and key on disk"
error_spam_test.go:77: unexpected stderr: "[certs] Using existing front-proxy-ca certificate authority"
error_spam_test.go:77: unexpected stderr: "[certs] Using existing front-proxy-client certificate and key on disk"
error_spam_test.go:77: unexpected stderr: "[certs] Using existing etcd/ca certificate authority"
error_spam_test.go:77: unexpected stderr: "[certs] Using existing etcd/server certificate and key on disk"
error_spam_test.go:77: unexpected stderr: "[certs] Using existing etcd/peer certificate and key on disk"
error_spam_test.go:77: unexpected stderr: "[certs] Using existing etcd/healthcheck-client certificate and key on disk"
error_spam_test.go:77: unexpected stderr: "[certs] Using existing apiserver-etcd-client certificate and key on disk"
error_spam_test.go:77: unexpected stderr: "[certs] Using the existing \"sa\" key"
error_spam_test.go:77: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:77: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:77: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:77: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:77: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:77: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:77: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:77: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:77: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:77: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:77: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 4m0s"
error_spam_test.go:77: unexpected stderr: "[kubelet-check] Initial timeout of 40s passed."
error_spam_test.go:77: unexpected stderr: "[kubelet-check] It seems like the kubelet isn't running or healthy."
error_spam_test.go:77: unexpected stderr: "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused."
error_spam_test.go:77: unexpected stderr: "\tUnfortunately, an error has occurred:"
error_spam_test.go:77: unexpected stderr: "\t\ttimed out waiting for the condition"
error_spam_test.go:77: unexpected stderr: "\tThis error is likely caused by:"
error_spam_test.go:77: unexpected stderr: "\t\t- The kubelet is not running"
error_spam_test.go:77: unexpected stderr: "\t\t- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)"
error_spam_test.go:77: unexpected stderr: "\tIf you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:"
error_spam_test.go:77: unexpected stderr: "\t\t- 'systemctl status kubelet'"
error_spam_test.go:77: unexpected stderr: "\t\t- 'journalctl -xeu kubelet'"
error_spam_test.go:77: unexpected stderr: "\tAdditionally, a control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:77: unexpected stderr: "\tTo troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:77: unexpected stderr: "\tHere is one example how you may list all Kubernetes containers running in docker:"
error_spam_test.go:77: unexpected stderr: "\t\t- 'docker ps -a | grep kube | grep -v pause'"
error_spam_test.go:77: unexpected stderr: "\t\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:77: unexpected stderr: "\t\t- 'docker logs CONTAINERID'"
error_spam_test.go:77: unexpected stderr: "stderr:"
error_spam_test.go:77: unexpected stderr: "W0701 03:29:17.619643 4239 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]"
error_spam_test.go:77: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:77: unexpected stderr: "W0701 03:29:19.627807 4239 manifests.go:225] the default kube-apiserver authorization-mode is \"Node,RBAC\"; using \"Node,RBAC\""
error_spam_test.go:77: unexpected stderr: "W0701 03:29:19.629315 4239 manifests.go:225] the default kube-apiserver authorization-mode is \"Node,RBAC\"; using \"Node,RBAC\""
error_spam_test.go:77: unexpected stderr: "error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster"
error_spam_test.go:77: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:77: unexpected stderr: "* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start"
error_spam_test.go:77: unexpected stderr: "* Related issue: https://github.com/kubernetes/minikube/issues/4172"
error_spam_test.go:91: minikube stdout:
* [nospam-20200701032338-8084] minikube v1.12.0-beta.0 on Debian 10.4
- MINIKUBE_LOCATION=master
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-master-2503-8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-master-2503-8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Using the kvm2 driver based on user configuration
* Starting control plane node nospam-20200701032338-8084 in cluster nospam-20200701032338-8084
* Creating kvm2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
error_spam_test.go:92: minikube stderr:
! initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [nospam-20200701032338-8084 localhost] and IPs [192.168.39.41 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [nospam-20200701032338-8084 localhost] and IPs [192.168.39.41 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.004495 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
stderr:
W0701 03:26:42.733636 2507 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0701 03:26:46.644242 2507 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0701 03:26:46.653413 2507 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase upload-config/kubeadm: error uploading the kubeadm ClusterConfiguration: unable to create RBAC rolebinding: etcdserver: request timed out
To see the stack trace of this error execute with --v=5 or higher
*
X Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0701 03:29:17.619643 4239 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0701 03:29:19.627807 4239 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0701 03:29:19.629315 4239 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
*
* minikube is exiting due to an error. If the above message is not useful, open an issue:
- https://github.com/kubernetes/minikube/issues/new/choose
*
* [NONE_KUBELET] failed to start node startup failed: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0701 03:29:17.619643 4239 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0701 03:29:19.627807 4239 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0701 03:29:19.629315 4239 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Related issue: https://github.com/kubernetes/minikube/issues/4172
error_spam_test.go:94: *** TestErrorSpam FAILED at 2020-07-01 03:33:22.342669526 +0000 UTC m=+2207.443958164
helpers_test.go:215: -----------------------post-mortem--------------------------------
helpers_test.go:232: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p nospam-20200701032338-8084 -n nospam-20200701032338-8084
helpers_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p nospam-20200701032338-8084 -n nospam-20200701032338-8084: exit status 6 (192.058701ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E0701 03:33:22.526294 18415 status.go:247] kubeconfig endpoint: extract IP: "nospam-20200701032338-8084" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-master-2503-8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f/kubeconfig
** /stderr **
helpers_test.go:232: status error: exit status 6 (may be ok)
helpers_test.go:234: "nospam-20200701032338-8084" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
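The status output above shows the profile's kubeconfig entry is stale: the endpoint cannot be extracted and kubectl still points at the old minikube-vm context. A minimal sketch of refreshing it, following the command the warning itself suggests and assuming update-context honors the same -p profile flag the other commands in this log use (in this run the profile is simply deleted next, so this is illustrative only):

# Rewrite the kubeconfig entry for this profile so kubectl and `minikube status`
# resolve the current endpoint instead of the stale minikube-vm context.
out/minikube-linux-amd64 -p nospam-20200701032338-8084 update-context
out/minikube-linux-amd64 -p nospam-20200701032338-8084 status --format={{.Host}}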
helpers_test.go:170: Cleaning up "nospam-20200701032338-8084" profile ...
helpers_test.go:171: (dbg) Run: out/minikube-linux-amd64 delete -p nospam-20200701032338-8084
helpers_test.go:171: (dbg) Done: out/minikube-linux-amd64 delete -p nospam-20200701032338-8084: (34.528591584s)
=== CONT TestKVMDriverInstallOrUpdate
> docker-machine-driver-kvm2.sha256: 65 B / 65 B [-------] 100.00% ? p/s 0s
> docker-machine-driver-kvm2: 48.57 MiB / 48.57 MiB 100.00% 42.49 MiB p/s
> docker-machine-driver-kvm2.sha256: 65 B / 65 B [-------] 100.00% ? p/s 0s
> docker-machine-driver-kvm2: 48.57 MiB / 48.57 MiB 100.00% 37.47 MiB p/s
--- PASS: TestKVMDriverInstallOrUpdate (7.26s)
=== CONT TestNetworkPlugins/group/auto
=== RUN TestNetworkPlugins/group/auto/Start
--- PASS: TestForceSystemdFlag (273.86s)
docker_test.go:80: (dbg) Run: out/minikube-linux-amd64 start -p force-systemd-flag-20200701033125-8084 --memory=1800 --force-systemd --alsologtostderr -v=5 --driver=kvm2
docker_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20200701033125-8084 --memory=1800 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (4m32.807081373s)
docker_test.go:85: (dbg) Run: out/minikube-linux-amd64 -p force-systemd-flag-20200701033125-8084 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:170: Cleaning up "force-systemd-flag-20200701033125-8084" profile ...
helpers_test.go:171: (dbg) Run: out/minikube-linux-amd64 delete -p force-systemd-flag-20200701033125-8084
=== CONT TestNetworkPlugins/group/flannel
=== RUN TestNetworkPlugins/group/flannel/Start
--- PASS: TestDockerFlags (241.87s)
docker_test.go:41: (dbg) Run: out/minikube-linux-amd64 start -p docker-flags-20200701033158-8084 --cache-images=false --memory=1800 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2
docker_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-20200701033158-8084 --cache-images=false --memory=1800 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (3m59.968327667s)
docker_test.go:46: (dbg) Run: out/minikube-linux-amd64 -p docker-flags-20200701033158-8084 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:57: (dbg) Run: out/minikube-linux-amd64 -p docker-flags-20200701033158-8084 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:170: Cleaning up "docker-flags-20200701033158-8084" profile ...
helpers_test.go:171: (dbg) Run: out/minikube-linux-amd64 delete -p docker-flags-20200701033158-8084
helpers_test.go:171: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-20200701033158-8084: (1.462013162s)
=== CONT TestNetworkPlugins/group/custom-weave
=== RUN TestNetworkPlugins/group/custom-weave/Start
--- FAIL: TestVersionUpgrade (742.85s)
version_upgrade_test.go:74: (dbg) Run: /tmp/minikube-release.454852847.exe start -p vupgrade-20200701032338-8084 --memory=2200 --iso-url=https://storage.googleapis.com/minikube/iso/integration-test.iso --kubernetes-version=v1.13.0 --alsologtostderr --driver=kvm2
version_upgrade_test.go:74: (dbg) Done: /tmp/minikube-release.454852847.exe start -p vupgrade-20200701032338-8084 --memory=2200 --iso-url=https://storage.googleapis.com/minikube/iso/integration-test.iso --kubernetes-version=v1.13.0 --alsologtostderr --driver=kvm2 : (6m18.22363697s)
version_upgrade_test.go:83: (dbg) Run: /tmp/minikube-release.454852847.exe stop -p vupgrade-20200701032338-8084
version_upgrade_test.go:83: (dbg) Done: /tmp/minikube-release.454852847.exe stop -p vupgrade-20200701032338-8084: (13.512769582s)
version_upgrade_test.go:88: (dbg) Run: /tmp/minikube-release.454852847.exe -p vupgrade-20200701032338-8084 status --format={{.Host}}
version_upgrade_test.go:88: (dbg) Non-zero exit: /tmp/minikube-release.454852847.exe -p vupgrade-20200701032338-8084 status --format={{.Host}}: exit status 7 (51.501233ms)
-- stdout --
Stopped
-- /stdout --
version_upgrade_test.go:90: status error: exit status 7 (may be ok)
version_upgrade_test.go:99: (dbg) Run: out/minikube-linux-amd64 start -p vupgrade-20200701032338-8084 --memory=2200 --kubernetes-version=v1.18.4-rc.0 --alsologtostderr -v=1 --driver=kvm2
version_upgrade_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p vupgrade-20200701032338-8084 --memory=2200 --kubernetes-version=v1.18.4-rc.0 --alsologtostderr -v=1 --driver=kvm2 : (5m47.680115602s)
version_upgrade_test.go:104: (dbg) Run: kubectl --context vupgrade-20200701032338-8084 version --output=json
version_upgrade_test.go:104: (dbg) Non-zero exit: kubectl --context vupgrade-20200701032338-8084 version --output=json: exec: "kubectl": executable file not found in $PATH (465ns)
version_upgrade_test.go:106: error running kubectl: exec: "kubectl": executable file not found in $PATH
panic.go:563: *** TestVersionUpgrade FAILED at 2020-07-01 03:35:58.124947777 +0000 UTC m=+2363.226236410
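TestVersionUpgrade gets through both starts and only fails because the agent has no kubectl on its PATH for the final version check. A minimal sketch of provisioning one before the run, assuming the upstream kubernetes-release download URL pattern and that /usr/local/bin is on the agent's PATH (both are assumptions about this agent, not part of the log):

# Install a kubectl matching the version under test (v1.18.4-rc.0 in this run)
# so `kubectl version --output=json` can be executed against the upgraded cluster.
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.18.4-rc.0/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
kubectl version --client --output=json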
helpers_test.go:215: -----------------------post-mortem--------------------------------
helpers_test.go:232: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p vupgrade-20200701032338-8084 -n vupgrade-20200701032338-8084
helpers_test.go:237: <<< TestVersionUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:238: ======> post-mortem[TestVersionUpgrade]: minikube logs <======
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 -p vupgrade-20200701032338-8084 logs -n 25
helpers_test.go:240: (dbg) Done: out/minikube-linux-amd64 -p vupgrade-20200701032338-8084 logs -n 25: (1.22599703s)
helpers_test.go:245: TestVersionUpgrade logs:
-- stdout --
* ==> Docker <==
* -- Logs begin at Wed 2020-07-01 03:30:26 UTC, end at Wed 2020-07-01 03:35:58 UTC. --
* Jul 01 03:34:36 vupgrade-20200701032338-8084 dockerd[1952]: time="2020-07-01T03:34:36.529149460Z" level=info msg="shim reaped" id=bd78c7852dec179d5429e4509a3141f822169b45e126edbb9fc54bfe17774a98
* Jul 01 03:34:36 vupgrade-20200701032338-8084 dockerd[1952]: time="2020-07-01T03:34:36.547171388Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:34:37 vupgrade-20200701032338-8084 dockerd[1952]: time="2020-07-01T03:34:37.097007850Z" level=info msg="shim reaped" id=9fd83952e2bb21a0a2fbb473d5bbf7f473b89b5a5ac50a907658e0d535371756
* Jul 01 03:34:37 vupgrade-20200701032338-8084 dockerd[1952]: time="2020-07-01T03:34:37.107345623Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:34:37 vupgrade-20200701032338-8084 dockerd[1952]: time="2020-07-01T03:34:37.189523084Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7060380fa63d0a2f11d64203b76dc95f303e4a6a4fd3c111b51b79981101e999/shim.sock" debug=false pid=5723
* Jul 01 03:34:37 vupgrade-20200701032338-8084 dockerd[1952]: time="2020-07-01T03:34:37.212960937Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/129a1e9bbe5d11f0d52ed5d56b91be492bd70bd670b71598fccd8146d1ad24a2/shim.sock" debug=false pid=5725
* Jul 01 03:34:37 vupgrade-20200701032338-8084 dockerd[1952]: time="2020-07-01T03:34:37.400049649Z" level=info msg="shim reaped" id=18739ee1d99476f95ff230aa1335bbf44fa08acccdde738981279f2212596ad9
* Jul 01 03:34:37 vupgrade-20200701032338-8084 dockerd[1952]: time="2020-07-01T03:34:37.425568890Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:34:37 vupgrade-20200701032338-8084 dockerd[1952]: time="2020-07-01T03:34:37.942122677Z" level=info msg="shim reaped" id=3c2dc4d630484cdda8d889193cfe9659cc30dedf507f2d2cbbe7f4aa5f240b21
* Jul 01 03:34:37 vupgrade-20200701032338-8084 dockerd[1952]: time="2020-07-01T03:34:37.943279366Z" level=info msg="shim reaped" id=b6595543f6330c7f8ff263882b762446019181158b681fee802975592512e0b6
* Jul 01 03:34:37 vupgrade-20200701032338-8084 dockerd[1952]: time="2020-07-01T03:34:37.949994576Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:34:37 vupgrade-20200701032338-8084 dockerd[1952]: time="2020-07-01T03:34:37.958292621Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:34:38 vupgrade-20200701032338-8084 dockerd[1952]: time="2020-07-01T03:34:38.167772981Z" level=info msg="shim reaped" id=14426f69e7d432f2a750ca7ab8cc5aaecb0705675fcd5f500fef4daadbdbabd0
* Jul 01 03:34:38 vupgrade-20200701032338-8084 dockerd[1952]: time="2020-07-01T03:34:38.183028415Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:34:38 vupgrade-20200701032338-8084 dockerd[1952]: time="2020-07-01T03:34:38.317875242Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/094ae2631ef81b4ccf6629387b57ab5dde273ad7b1579b37c7b52828eb071829/shim.sock" debug=false pid=5912
* Jul 01 03:34:39 vupgrade-20200701032338-8084 dockerd[1952]: time="2020-07-01T03:34:39.387900570Z" level=info msg="shim reaped" id=7060380fa63d0a2f11d64203b76dc95f303e4a6a4fd3c111b51b79981101e999
* Jul 01 03:34:39 vupgrade-20200701032338-8084 dockerd[1952]: time="2020-07-01T03:34:39.398379666Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:34:56 vupgrade-20200701032338-8084 dockerd[1952]: time="2020-07-01T03:34:56.911967674Z" level=info msg="shim reaped" id=868d21da82e8f1a18e5507927cfc9850adc446f39c4b232cfd5cdb306a4926e1
* Jul 01 03:34:56 vupgrade-20200701032338-8084 dockerd[1952]: time="2020-07-01T03:34:56.921943356Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:35:54 vupgrade-20200701032338-8084 dockerd[1952]: time="2020-07-01T03:35:54.246112651Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4947220c5e56d07e008643587c4b62d323c77d65e27bb8883093c032fa96b812/shim.sock" debug=false pid=6258
* Jul 01 03:35:54 vupgrade-20200701032338-8084 dockerd[1952]: time="2020-07-01T03:35:54.420716028Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/93a8652b8e377f1e14995323fc79516c60b9a3c4f30a3c7ec775b727ee35e16c/shim.sock" debug=false pid=6282
* Jul 01 03:35:54 vupgrade-20200701032338-8084 dockerd[1952]: time="2020-07-01T03:35:54.440995273Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2ca364d7290bc7c0f4f08c185ec472758abeaf1beba03f4fb29d6822425f831a/shim.sock" debug=false pid=6286
* Jul 01 03:35:54 vupgrade-20200701032338-8084 dockerd[1952]: time="2020-07-01T03:35:54.953893530Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a276d19cb73a5dd05be98e04acd0a5bf9166f36f51254549fd79e1fb9c7e12da/shim.sock" debug=false pid=6355
* Jul 01 03:35:55 vupgrade-20200701032338-8084 dockerd[1952]: time="2020-07-01T03:35:55.372527435Z" level=info msg="shim reaped" id=2ca364d7290bc7c0f4f08c185ec472758abeaf1beba03f4fb29d6822425f831a
* Jul 01 03:35:55 vupgrade-20200701032338-8084 dockerd[1952]: time="2020-07-01T03:35:55.383322860Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
* a276d19cb73a5 dd0566861ca7e 4 seconds ago Running kube-proxy 0 93a8652b8e377
* 4947220c5e56d 4689081edb103 8 seconds ago Running storage-provisioner 2 4f0525e76a8a5
* 094ae2631ef81 67da37a9a360e About a minute ago Running coredns 0 129a1e9bbe5d1
* 8e4f1d4dc42f7 3e869c25095c0 About a minute ago Running kube-controller-manager 2 db8b8f9bfe262
* 868d21da82e8f 4689081edb103 About a minute ago Exited storage-provisioner 1 4f0525e76a8a5
* 1766250e2291c 3e869c25095c0 2 minutes ago Exited kube-controller-manager 1 db8b8f9bfe262
* 605e259d1247a 84d647bc30481 2 minutes ago Running kube-apiserver 1 9bee2a63bba6c
* f7e77b020a8d8 303ce5db0e90d 3 minutes ago Running etcd 0 ede6b0a9bb25c
* 1bc7fe00fede9 34534fbf16513 3 minutes ago Running kube-scheduler 0 d329d9412310f
* fc5601ae180af 84d647bc30481 3 minutes ago Exited kube-apiserver 0 9bee2a63bba6c
*
* ==> coredns [094ae2631ef8] <==
* .:53
* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
* CoreDNS-1.6.7
* linux/amd64, go1.13.6, da7f65b
*
* ==> describe nodes <==
* Name: vupgrade-20200701032338-8084
* Roles: master
* Labels: beta.kubernetes.io/arch=amd64
* beta.kubernetes.io/os=linux
* kubernetes.io/arch=amd64
* kubernetes.io/hostname=vupgrade-20200701032338-8084
* kubernetes.io/os=linux
* minikube.k8s.io/commit=57e2f55f47effe9ce396cea42a1e0eb4f611ebbd
* minikube.k8s.io/name=vupgrade-20200701032338-8084
* minikube.k8s.io/updated_at=2020_07_01T03_29_51_0700
* minikube.k8s.io/version=v1.11.0
* node-role.kubernetes.io/master=
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
* node.alpha.kubernetes.io/ttl: 0
* volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp: Wed, 01 Jul 2020 03:29:42 +0000
* Taints: <none>
* Unschedulable: false
* Lease:
* HolderIdentity: vupgrade-20200701032338-8084
* AcquireTime: <unset>
* RenewTime: Wed, 01 Jul 2020 03:35:53 +0000
* Conditions:
* Type Status LastHeartbeatTime LastTransitionTime Reason Message
* ---- ------ ----------------- ------------------ ------ -------
* MemoryPressure False Wed, 01 Jul 2020 03:35:53 +0000 Wed, 01 Jul 2020 03:29:29 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
* DiskPressure False Wed, 01 Jul 2020 03:35:53 +0000 Wed, 01 Jul 2020 03:29:29 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
* PIDPressure False Wed, 01 Jul 2020 03:35:53 +0000 Wed, 01 Jul 2020 03:29:29 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
* Ready True Wed, 01 Jul 2020 03:35:53 +0000 Wed, 01 Jul 2020 03:29:29 +0000 KubeletReady kubelet is posting ready status
* Addresses:
* InternalIP: 192.168.39.129
* Hostname: vupgrade-20200701032338-8084
* Capacity:
* cpu: 2
* ephemeral-storage: 16954224Ki
* hugepages-2Mi: 0
* memory: 2085684Ki
* pods: 110
* Allocatable:
* cpu: 2
* ephemeral-storage: 16954224Ki
* hugepages-2Mi: 0
* memory: 2085684Ki
* pods: 110
* System Info:
* Machine ID: 03b25f97529f4920b126a89d880f8ede
* System UUID: 03b25f97-529f-4920-b126-a89d880f8ede
* Boot ID: dcfc0a84-6def-4a69-a377-29692150c00a
* Kernel Version: 4.19.94
* OS Image: Buildroot 2019.02.9
* Operating System: linux
* Architecture: amd64
* Container Runtime Version: docker://19.3.6
* Kubelet Version: v1.18.4-rc.0
* Kube-Proxy Version: v1.18.4-rc.0
* Non-terminated Pods: (4 in total)
* Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
* --------- ---- ------------ ---------- --------------- ------------- ---
* kube-system coredns-66bff467f8-nb44d 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 83s
* kube-system kube-controller-manager-vupgrade-20200701032338-8084 200m (10%) 0 (0%) 0 (0%) 0 (0%) 95s
* kube-system kube-proxy-xmmd6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 18s
* kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m7s
* Allocated resources:
* (Total limits may be over 100 percent, i.e., overcommitted.)
* Resource Requests Limits
* -------- -------- ------
* cpu 300m (15%) 0 (0%)
* memory 70Mi (3%) 170Mi (8%)
* ephemeral-storage 0 (0%) 0 (0%)
* hugepages-2Mi 0 (0%) 0 (0%)
* Events:
* Type Reason Age From Message
* ---- ------ ---- ---- -------
* Normal NodeHasSufficientMemory 6m31s (x8 over 6m32s) kubelet, vupgrade-20200701032338-8084 Node vupgrade-20200701032338-8084 status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 6m31s (x8 over 6m32s) kubelet, vupgrade-20200701032338-8084 Node vupgrade-20200701032338-8084 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 6m31s (x7 over 6m32s) kubelet, vupgrade-20200701032338-8084 Node vupgrade-20200701032338-8084 status is now: NodeHasSufficientPID
* Normal Starting 3m35s kubelet, vupgrade-20200701032338-8084 Starting kubelet.
* Normal NodeHasSufficientMemory 3m35s kubelet, vupgrade-20200701032338-8084 Node vupgrade-20200701032338-8084 status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 3m35s kubelet, vupgrade-20200701032338-8084 Node vupgrade-20200701032338-8084 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 3m35s kubelet, vupgrade-20200701032338-8084 Node vupgrade-20200701032338-8084 status is now: NodeHasSufficientPID
* Normal Starting 2m2s kubelet, vupgrade-20200701032338-8084 Starting kubelet.
* Normal NodeHasSufficientMemory 2m2s (x8 over 2m2s) kubelet, vupgrade-20200701032338-8084 Node vupgrade-20200701032338-8084 status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 2m2s (x8 over 2m2s) kubelet, vupgrade-20200701032338-8084 Node vupgrade-20200701032338-8084 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 2m2s (x7 over 2m2s) kubelet, vupgrade-20200701032338-8084 Node vupgrade-20200701032338-8084 status is now: NodeHasSufficientPID
* Normal NodeAllocatableEnforced 2m2s kubelet, vupgrade-20200701032338-8084 Updated Node Allocatable limit across pods
* Normal Starting 93s kube-proxy, vupgrade-20200701032338-8084 Starting kube-proxy.
* Normal Starting 4s kube-proxy, vupgrade-20200701032338-8084 Starting kube-proxy.
*
* ==> dmesg <==
* [ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
* [ +0.052972] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
* [ +2.803958] Unstable clock detected, switching default tracing clock to "global"
* If you want to keep using the local clock, then add:
* "trace_clock=local"
* on the kernel command line
* [ +0.000023] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
* [ +2.625958] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
* [ +0.006546] systemd-fstab-generator[1142]: Ignoring "noauto" for root device
* [ +0.006445] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
* [ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
* [ +1.579946] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
* [ +1.587843] vboxguest: loading out-of-tree module taints kernel.
* [ +0.005068] vboxguest: PCI device not found, probably running on physical hardware.
* [ +6.298799] systemd-fstab-generator[1932]: Ignoring "noauto" for root device
* [ +0.087145] systemd-fstab-generator[1942]: Ignoring "noauto" for root device
* [Jul 1 03:31] kauditd_printk_skb: 44 callbacks suppressed
* [ +2.533566] systemd-fstab-generator[2231]: Ignoring "noauto" for root device
* [Jul 1 03:32] systemd-fstab-generator[2709]: Ignoring "noauto" for root device
* [ +1.405230] kauditd_printk_skb: 53 callbacks suppressed
* [ +5.682986] NFSD: Unable to end grace period: -110
* [Jul 1 03:34] kauditd_printk_skb: 32 callbacks suppressed
* [ +6.761962] hrtimer: interrupt took 2375322 ns
* [ +2.493443] kauditd_printk_skb: 50 callbacks suppressed
* [ +5.964605] kauditd_printk_skb: 20 callbacks suppressed
*
* ==> etcd [f7e77b020a8d] <==
* WARNING: 2020/07/01 03:35:50 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
* 2020-07-01 03:35:53.215842 W | etcdserver: request "header:<ID:16204359414313048234 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:60e173086ee2d8a9>" with result "size:41" took too long (2.864487857s) to execute
* 2020-07-01 03:35:53.216319 W | etcdserver: read-only range request "key:\"/registry/certificatesigningrequests\" range_end:\"/registry/certificatesigningrequestt\" count_only:true " with result "range_response_count:0 size:7" took too long (13.243939401s) to execute
* 2020-07-01 03:35:53.216335 W | etcdserver: read-only range request "key:\"/registry/mutatingwebhookconfigurations\" range_end:\"/registry/mutatingwebhookconfigurationt\" count_only:true " with result "range_response_count:0 size:5" took too long (12.67723852s) to execute
* 2020-07-01 03:35:53.216841 W | etcdserver: read-only range request "key:\"/registry/validatingwebhookconfigurations\" range_end:\"/registry/validatingwebhookconfigurationt\" count_only:true " with result "range_response_count:0 size:5" took too long (15.139334973s) to execute
* 2020-07-01 03:35:53.218101 W | etcdserver: read-only range request "key:\"/registry/persistentvolumes\" range_end:\"/registry/persistentvolumet\" count_only:true " with result "range_response_count:0 size:5" took too long (12.389515223s) to execute
* 2020-07-01 03:35:53.219067 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses\" range_end:\"/registry/runtimeclasset\" count_only:true " with result "range_response_count:0 size:5" took too long (12.577365529s) to execute
* 2020-07-01 03:35:53.220117 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (15.596569005s) to execute
* 2020-07-01 03:35:53.616253 W | wal: sync duration of 3.264934186s, expected less than 1s
* 2020-07-01 03:35:53.637343 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/vupgrade-20200701032338-8084\" " with result "range_response_count:1 size:679" took too long (2.442097611s) to execute
* 2020-07-01 03:35:53.639026 W | etcdserver: read-only range request "key:\"/registry/services/endpoints\" range_end:\"/registry/services/endpointt\" count_only:true " with result "range_response_count:0 size:7" took too long (3.386236288s) to execute
* 2020-07-01 03:35:53.645993 W | etcdserver: read-only range request "key:\"/registry/csinodes\" range_end:\"/registry/csinodet\" count_only:true " with result "range_response_count:0 size:7" took too long (3.991540914s) to execute
* 2020-07-01 03:35:53.647203 W | etcdserver: read-only range request "key:\"/registry/csinodes\" range_end:\"/registry/csinodet\" count_only:true " with result "range_response_count:0 size:7" took too long (4.628011114s) to execute
* 2020-07-01 03:35:53.647870 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-86c58d9df4-c9ctq\" " with result "range_response_count:1 size:1891" took too long (5.000084322s) to execute
* 2020-07-01 03:35:53.648215 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:170" took too long (9.597281224s) to execute
* 2020-07-01 03:35:53.651868 W | etcdserver: read-only range request "key:\"/registry/volumeattachments\" range_end:\"/registry/volumeattachmentt\" count_only:true " with result "range_response_count:0 size:5" took too long (8.277671057s) to execute
* 2020-07-01 03:35:53.652717 W | etcdserver: read-only range request "key:\"/registry/minions/vupgrade-20200701032338-8084\" " with result "range_response_count:1 size:2503" took too long (5.987282765s) to execute
* 2020-07-01 03:35:53.653967 W | etcdserver: read-only range request "key:\"/registry/limitranges\" range_end:\"/registry/limitranget\" count_only:true " with result "range_response_count:0 size:5" took too long (8.262097281s) to execute
* 2020-07-01 03:35:53.654897 W | etcdserver: read-only range request "key:\"/registry/replicasets\" range_end:\"/registry/replicasett\" count_only:true " with result "range_response_count:0 size:7" took too long (9.116711156s) to execute
* 2020-07-01 03:35:53.768683 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (198.523537ms) to execute
* 2020-07-01 03:35:53.769279 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (544.342369ms) to execute
* 2020-07-01 03:35:53.793202 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-86c58d9df4-c9ctq\" " with result "range_response_count:1 size:1891" took too long (130.295614ms) to execute
* 2020-07-01 03:35:53.796595 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:291" took too long (132.858475ms) to execute
* 2020-07-01 03:35:53.994114 W | etcdserver: read-only range request "key:\"/registry/masterleases/192.168.39.129\" " with result "range_response_count:0 size:5" took too long (189.221758ms) to execute
* 2020-07-01 03:35:54.008132 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-proxy-pzmdw\" " with result "range_response_count:1 size:3593" took too long (202.329233ms) to execute
*
* ==> kernel <==
* 03:35:59 up 5 min, 0 users, load average: 4.59, 3.34, 1.51
* Linux vupgrade-20200701032338-8084 4.19.94 #1 SMP Thu Feb 20 00:37:50 PST 2020 x86_64 GNU/Linux
* PRETTY_NAME="Buildroot 2019.02.9"
*
* ==> kube-apiserver [605e259d1247] <==
* I0701 03:35:50.792131 1 trace.go:116] Trace[1843943276]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/vupgrade-20200701032338-8084,user-agent:kubelet/v1.18.4 (linux/amd64) kubernetes/e874ceb,client:192.168.39.129 (started: 2020-07-01 03:35:48.647239511 +0000 UTC m=+110.468516173) (total time: 2.144861966s):
* Trace[1843943276]: [2.144861966s] [2.144839728s] END
* E0701 03:35:51.062333 1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}
* I0701 03:35:51.062691 1 trace.go:116] Trace[177357220]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kube-controller-manager/v1.18.4 (linux/amd64) kubernetes/e874ceb/system:serviceaccount:kube-system:daemon-set-controller,client:192.168.39.129 (started: 2020-07-01 03:35:41.150505773 +0000 UTC m=+102.971782780) (total time: 9.912144521s):
* Trace[177357220]: [9.912144521s] [9.912029711s] END
* I0701 03:35:53.222040 1 trace.go:116] Trace[933785349]: "List etcd3" key:/jobs,resourceVersion:,limit:500,continue: (started: 2020-07-01 03:35:37.620635454 +0000 UTC m=+99.441912216) (total time: 15.601369988s):
* Trace[933785349]: [15.601369988s] [15.601369988s] END
* I0701 03:35:53.222222 1 trace.go:116] Trace[1204850284]: "List" url:/apis/batch/v1/jobs,user-agent:kube-controller-manager/v1.18.4 (linux/amd64) kubernetes/e874ceb/system:serviceaccount:kube-system:cronjob-controller,client:192.168.39.129 (started: 2020-07-01 03:35:37.620609925 +0000 UTC m=+99.441886568) (total time: 15.601584685s):
* Trace[1204850284]: [15.60144525s] [15.601425956s] Listing from storage done
* I0701 03:35:53.640109 1 trace.go:116] Trace[161508058]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kube-controller-manager/v1.18.4 (linux/amd64) kubernetes/e874ceb/system:serviceaccount:kube-system:daemon-set-controller,client:192.168.39.129 (started: 2020-07-01 03:35:51.086845265 +0000 UTC m=+112.908122122) (total time: 2.553225166s):
* Trace[161508058]: [2.553151887s] [2.552965989s] Object stored in database
* I0701 03:35:53.656168 1 trace.go:116] Trace[1935286983]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-86c58d9df4-c9ctq,user-agent:kubelet/v1.18.4 (linux/amd64) kubernetes/e874ceb,client:192.168.39.129 (started: 2020-07-01 03:35:48.647303442 +0000 UTC m=+110.468580086) (total time: 5.008833768s):
* Trace[1935286983]: [5.008761466s] [5.008754674s] About to write a response
* I0701 03:35:53.657114 1 trace.go:116] Trace[634573168]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/vupgrade-20200701032338-8084,user-agent:kubelet/v1.18.4 (linux/amd64) kubernetes/e874ceb,client:192.168.39.129 (started: 2020-07-01 03:35:51.193572877 +0000 UTC m=+113.014930127) (total time: 2.463357831s):
* Trace[634573168]: [2.463316673s] [2.463294614s] About to write a response
* I0701 03:35:53.658880 1 trace.go:116] Trace[225971914]: "Get" url:/api/v1/nodes/vupgrade-20200701032338-8084,user-agent:kubelet/v1.18.4 (linux/amd64) kubernetes/e874ceb,client:192.168.39.129 (started: 2020-07-01 03:35:47.660547509 +0000 UTC m=+109.481824282) (total time: 5.998307776s):
* Trace[225971914]: [5.998254631s] [5.998233286s] About to write a response
* I0701 03:35:53.660735 1 trace.go:116] Trace[2055359851]: "Create" url:/api/v1/namespaces,user-agent:kube-apiserver/v1.18.4 (linux/amd64) kubernetes/e874ceb,client:127.0.0.1 (started: 2020-07-01 03:35:37.578360558 +0000 UTC m=+99.399637225) (total time: 16.082348823s):
* Trace[2055359851]: [16.082348823s] [16.082304038s] END
* I0701 03:35:53.771529 1 trace.go:116] Trace[970644458]: "List etcd3" key:/cronjobs,resourceVersion:,limit:500,continue: (started: 2020-07-01 03:35:53.223981389 +0000 UTC m=+115.045258228) (total time: 547.443322ms):
* Trace[970644458]: [547.443322ms] [547.443322ms] END
* I0701 03:35:53.771629 1 trace.go:116] Trace[703650403]: "List" url:/apis/batch/v1beta1/cronjobs,user-agent:kube-controller-manager/v1.18.4 (linux/amd64) kubernetes/e874ceb/system:serviceaccount:kube-system:cronjob-controller,client:192.168.39.129 (started: 2020-07-01 03:35:53.223949634 +0000 UTC m=+115.045226387) (total time: 547.658297ms):
* Trace[703650403]: [547.594173ms] [547.569334ms] Listing from storage done
* I0701 03:35:53.797247 1 trace.go:116] Trace[1015203571]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kube-controller-manager/v1.18.4 (linux/amd64) kubernetes/e874ceb/system:serviceaccount:kube-system:daemon-set-controller,client:192.168.39.129 (started: 2020-07-01 03:35:51.065080202 +0000 UTC m=+112.886356950) (total time: 2.732131538s):
* Trace[1015203571]: [2.732091694s] [2.731938195s] Object stored in database
*
* ==> kube-apiserver [fc5601ae180a] <==
* Trace[1968026727]: [7.006292675s] [7.006269734s] END
* E0701 03:32:57.865083 1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}
* I0701 03:32:57.865611 1 trace.go:116] Trace[2088988543]: "Delete" url:/apis/apiregistration.k8s.io/v1/apiservices/v1beta2.apps,user-agent:kube-apiserver/v1.18.4 (linux/amd64) kubernetes/e874ceb,client:127.0.0.1 (started: 2020-07-01 03:32:40.420093575 +0000 UTC m=+10.333509214) (total time: 17.44546826s):
* Trace[2088988543]: [17.44546826s] [17.44540515s] END
* E0701 03:32:57.866138 1 autoregister_controller.go:195] v1beta2.apps failed with : etcdserver: request timed out
* E0701 03:32:57.866149 1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}
* E0701 03:32:57.866922 1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}
* I0701 03:32:57.867427 1 trace.go:116] Trace[990284754]: "Delete" url:/apis/apiregistration.k8s.io/v1/apiservices/v1beta1.apps,user-agent:kube-apiserver/v1.18.4 (linux/amd64) kubernetes/e874ceb,client:127.0.0.1 (started: 2020-07-01 03:32:40.420677997 +0000 UTC m=+10.334093623) (total time: 17.446730108s):
* Trace[990284754]: [17.446730108s] [17.446634003s] END
* E0701 03:32:57.868063 1 autoregister_controller.go:195] v1beta1.apps failed with : etcdserver: request timed out
* I0701 03:32:57.868635 1 trace.go:116] Trace[139608946]: "Create" url:/apis/storage.k8s.io/v1/csinodes,user-agent:kubelet/v1.18.4 (linux/amd64) kubernetes/e874ceb,client:192.168.39.129 (started: 2020-07-01 03:32:50.860137694 +0000 UTC m=+20.773552740) (total time: 7.008480657s):
* Trace[139608946]: [7.008480657s] [7.008413147s] END
* I0701 03:33:04.862687 1 trace.go:116] Trace[1569870962]: "GuaranteedUpdate etcd3" type:*core.Event (started: 2020-07-01 03:32:50.863532334 +0000 UTC m=+20.776947909) (total time: 13.999129123s):
* Trace[1569870962]: [13.999129123s] [13.999129123s] END
* E0701 03:33:04.862756 1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}
* I0701 03:33:04.862945 1 trace.go:116] Trace[1900699042]: "Patch" url:/api/v1/namespaces/default/events/vupgrade-20200701032338-8084.161d836c50121a73,user-agent:kubelet/v1.18.4 (linux/amd64) kubernetes/e874ceb,client:192.168.39.129 (started: 2020-07-01 03:32:50.863408272 +0000 UTC m=+20.776823830) (total time: 13.999516822s):
* Trace[1900699042]: [13.999516822s] [13.999501519s] END
* I0701 03:33:04.863229 1 trace.go:116] Trace[1497495805]: "List etcd3" key:/services/specs,resourceVersion:,limit:0,continue: (started: 2020-07-01 03:32:50.857994038 +0000 UTC m=+20.771409704) (total time: 14.005218052s):
* Trace[1497495805]: [14.005218052s] [14.005218052s] END
* E0701 03:33:04.863246 1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}
* E0701 03:33:04.863297 1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}
* E0701 03:33:04.863675 1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}
* I0701 03:33:04.864078 1 trace.go:116] Trace[1640943464]: "List" url:/api/v1/services,user-agent:kube-apiserver/v1.18.4 (linux/amd64) kubernetes/e874ceb,client:127.0.0.1 (started: 2020-07-01 03:32:50.857983181 +0000 UTC m=+20.771398740) (total time: 14.006074746s):
* Trace[1640943464]: [14.006074746s] [14.006069842s] END
* F0701 03:33:04.864303 1 controller.go:161] Unable to perform initial IP allocation check: unable to refresh the service IP block: etcdserver: request timed out
*
* ==> kube-controller-manager [1766250e2291] <==
* I0701 03:33:59.343106 1 serving.go:313] Generated self-signed cert in-memory
* I0701 03:33:59.719825 1 controllermanager.go:161] Version: v1.18.4-rc.0
* I0701 03:33:59.721412 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
* I0701 03:33:59.723639 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
* I0701 03:33:59.724484 1 tlsconfig.go:240] Starting DynamicServingCertificateController
* I0701 03:33:59.724807 1 secure_serving.go:178] Serving securely on 127.0.0.1:10257
* I0701 03:33:59.725824 1 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
* F0701 03:34:15.691990 1 controllermanager.go:230] error building controller context: failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server ("[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\nhealthz check failed") has prevented the request from succeeding
*
* ==> kube-controller-manager [8e4f1d4dc42f] <==
* I0701 03:34:36.262052 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"f85bc8d3-c0b8-4f4e-a94f-d1de62df3a6b", APIVersion:"apps/v1", ResourceVersion:"512", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-8z7p2
* I0701 03:34:36.296940 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"1f4f6909-bb4b-11ea-8f20-203e676ba02a", APIVersion:"apps/v1", ResourceVersion:"511", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-86c58d9df4 to 1
* I0701 03:34:36.308365 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-86c58d9df4", UID:"2290a4a0-bb4b-11ea-8f20-203e676ba02a", APIVersion:"apps/v1", ResourceVersion:"521", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-86c58d9df4-9b89t
* I0701 03:34:36.352805 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"1f4f6909-bb4b-11ea-8f20-203e676ba02a", APIVersion:"apps/v1", ResourceVersion:"516", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
* I0701 03:34:36.384838 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"f85bc8d3-c0b8-4f4e-a94f-d1de62df3a6b", APIVersion:"apps/v1", ResourceVersion:"528", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-nb44d
* I0701 03:34:36.552063 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"1f571ffa-bb4b-11ea-8f20-203e676ba02a", APIVersion:"apps/v1", ResourceVersion:"546", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: kube-proxy-8lb8k
* I0701 03:34:36.944678 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"1f4f6909-bb4b-11ea-8f20-203e676ba02a", APIVersion:"apps/v1", ResourceVersion:"554", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
* I0701 03:34:36.971275 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"1f4f6909-bb4b-11ea-8f20-203e676ba02a", APIVersion:"apps/v1", ResourceVersion:"554", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-86c58d9df4 to 0
* I0701 03:34:36.971301 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"f85bc8d3-c0b8-4f4e-a94f-d1de62df3a6b", APIVersion:"apps/v1", ResourceVersion:"555", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-8z7p2
* I0701 03:34:37.004844 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-86c58d9df4", UID:"2290a4a0-bb4b-11ea-8f20-203e676ba02a", APIVersion:"apps/v1", ResourceVersion:"560", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-86c58d9df4-c9ctq
* E0701 03:35:06.613798 1 cronjob_controller.go:147] Failed to extract cronJobs list: etcdserver: request timed out
* E0701 03:35:27.619003 1 cronjob_controller.go:125] Failed to extract job list: etcdserver: request timed out
* E0701 03:35:31.125670 1 daemon_controller.go:957] Timeout: request did not complete within requested timeout 34s
* E0701 03:35:31.125706 1 daemon_controller.go:292] kube-system/kube-proxy failed with : Timeout: request did not complete within requested timeout 34s
* I0701 03:35:31.125764 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"1f571ffa-bb4b-11ea-8f20-203e676ba02a", APIVersion:"apps/v1", ResourceVersion:"590", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Timeout: request did not complete within requested timeout 34s
* E0701 03:35:38.130383 1 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy.161d8397cada95f0", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"1f571ffa-bb4b-11ea-8f20-203e676ba02a", APIVersion:"apps/v1", ResourceVersion:"590", FieldPath:""}, Reason:"FailedCreate", Message:"Error creating: Timeout: request did not complete within requested timeout 34s", Source:v1.EventSource{Component:"daemonset-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfb71f00c77d37f0, ext:63289349610, loc:(*time.Location)(0x6d09200)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfb71f00c77d37f0, ext:63289349610, loc:(*time.Location)(0x6d09200)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: request timed out' (will not retry!)
* E0701 03:35:41.137265 1 daemon_controller.go:957] Internal error occurred: resource quota evaluates timeout
* E0701 03:35:41.137624 1 daemon_controller.go:292] kube-system/kube-proxy failed with : Internal error occurred: resource quota evaluates timeout
* I0701 03:35:41.137372 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"1f571ffa-bb4b-11ea-8f20-203e676ba02a", APIVersion:"apps/v1", ResourceVersion:"590", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: resource quota evaluates timeout
* E0701 03:35:48.142857 1 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy.161d839a1f97664b", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"1f571ffa-bb4b-11ea-8f20-203e676ba02a", APIVersion:"apps/v1", ResourceVersion:"590", FieldPath:""}, Reason:"FailedCreate", Message:"Error creating: Internal error occurred: resource quota evaluates timeout", Source:v1.EventSource{Component:"daemonset-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfb71f03482e244b, ext:73300944436, loc:(*time.Location)(0x6d09200)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfb71f03482e244b, ext:73300944436, loc:(*time.Location)(0x6d09200)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: request timed out' (will not retry!)
* E0701 03:35:51.063615 1 daemon_controller.go:957] etcdserver: request timed out
* E0701 03:35:51.063867 1 daemon_controller.go:292] kube-system/kube-proxy failed with : etcdserver: request timed out
* I0701 03:35:51.064025 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"1f571ffa-bb4b-11ea-8f20-203e676ba02a", APIVersion:"apps/v1", ResourceVersion:"590", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: etcdserver: request timed out
* I0701 03:35:53.642696 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"1f571ffa-bb4b-11ea-8f20-203e676ba02a", APIVersion:"apps/v1", ResourceVersion:"590", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-pzmdw
* I0701 03:35:54.053320 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"1f571ffa-bb4b-11ea-8f20-203e676ba02a", APIVersion:"apps/v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: kube-proxy-pzmdw
*
* ==> kube-proxy [a276d19cb73a] <==
* W0701 03:35:55.487223 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
* I0701 03:35:55.504341 1 node.go:136] Successfully retrieved node IP: 192.168.39.129
* I0701 03:35:55.504596 1 server_others.go:186] Using iptables Proxier.
* W0701 03:35:55.505092 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
* I0701 03:35:55.505335 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
* I0701 03:35:55.509113 1 server.go:583] Version: v1.18.4-rc.0
* I0701 03:35:55.511757 1 conntrack.go:52] Setting nf_conntrack_max to 131072
* I0701 03:35:55.513584 1 config.go:315] Starting service config controller
* I0701 03:35:55.513760 1 shared_informer.go:223] Waiting for caches to sync for service config
* I0701 03:35:55.516283 1 config.go:133] Starting endpoints config controller
* I0701 03:35:55.516336 1 shared_informer.go:223] Waiting for caches to sync for endpoints config
* I0701 03:35:55.614099 1 shared_informer.go:230] Caches are synced for service config
* I0701 03:35:55.616795 1 shared_informer.go:230] Caches are synced for endpoints config
*
* ==> kube-scheduler [1bc7fe00fede] <==
* E0701 03:33:12.460979 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 192.168.39.129:8443: connect: connection refused
* E0701 03:33:12.570658 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?resourceVersion=350: dial tcp 192.168.39.129:8443: connect: connection refused
* E0701 03:33:12.664102 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?resourceVersion=350: dial tcp 192.168.39.129:8443: connect: connection refused
* E0701 03:33:12.784210 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?resourceVersion=350: dial tcp 192.168.39.129:8443: connect: connection refused
* E0701 03:33:13.537822 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?resourceVersion=350: dial tcp 192.168.39.129:8443: connect: connection refused
* E0701 03:33:14.257378 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=350: dial tcp 192.168.39.129:8443: connect: connection refused
* E0701 03:33:15.063373 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 192.168.39.129:8443: connect: connection refused
* E0701 03:33:15.345405 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?resourceVersion=350: dial tcp 192.168.39.129:8443: connect: connection refused
* E0701 03:33:19.519785 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?resourceVersion=350: dial tcp 192.168.39.129:8443: connect: connection refused
* E0701 03:33:22.404101 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=350: dial tcp 192.168.39.129:8443: connect: connection refused
* E0701 03:33:22.639945 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=350: dial tcp 192.168.39.129:8443: connect: connection refused
* E0701 03:33:22.747877 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?resourceVersion=350: dial tcp 192.168.39.129:8443: connect: connection refused
* E0701 03:33:23.673118 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?resourceVersion=350: dial tcp 192.168.39.129:8443: connect: connection refused
* E0701 03:33:24.925760 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?resourceVersion=350: dial tcp 192.168.39.129:8443: connect: connection refused
* E0701 03:33:25.092727 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?resourceVersion=350: dial tcp 192.168.39.129:8443: connect: connection refused
* E0701 03:33:38.640664 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=350: dial tcp 192.168.39.129:8443: connect: connection refused
* E0701 03:33:39.440756 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?resourceVersion=350: dial tcp 192.168.39.129:8443: connect: connection refused
* E0701 03:33:40.537588 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?resourceVersion=350: dial tcp 192.168.39.129:8443: connect: connection refused
* E0701 03:33:42.805106 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=350: dial tcp 192.168.39.129:8443: connect: connection refused
* E0701 03:33:44.053386 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?resourceVersion=350: dial tcp 192.168.39.129:8443: connect: connection refused
* E0701 03:33:44.230761 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?resourceVersion=350: dial tcp 192.168.39.129:8443: connect: connection refused
* E0701 03:33:44.665800 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 192.168.39.129:8443: connect: connection refused
* E0701 03:33:50.032814 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?resourceVersion=350: dial tcp 192.168.39.129:8443: connect: connection refused
* E0701 03:33:58.035558 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 192.168.39.129:8443: connect: connection refused
* I0701 03:34:38.961247 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Logs begin at Wed 2020-07-01 03:30:26 UTC, end at Wed 2020-07-01 03:35:59 UTC. --
* Jul 01 03:35:53 vupgrade-20200701032338-8084 kubelet[4009]: I0701 03:35:53.696823 4009 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-qtpfp" (UniqueName: "kubernetes.io/secret/1a27d8b1-832f-4224-a53b-658fd1e0f491-kube-proxy-token-qtpfp") pod "kube-proxy-xmmd6" (UID: "1a27d8b1-832f-4224-a53b-658fd1e0f491")
* Jul 01 03:35:53 vupgrade-20200701032338-8084 kubelet[4009]: I0701 03:35:53.696895 4009 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/1a27d8b1-832f-4224-a53b-658fd1e0f491-kube-proxy") pod "kube-proxy-xmmd6" (UID: "1a27d8b1-832f-4224-a53b-658fd1e0f491")
* Jul 01 03:35:53 vupgrade-20200701032338-8084 kubelet[4009]: I0701 03:35:53.696916 4009 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/1a27d8b1-832f-4224-a53b-658fd1e0f491-xtables-lock") pod "kube-proxy-xmmd6" (UID: "1a27d8b1-832f-4224-a53b-658fd1e0f491")
* Jul 01 03:35:53 vupgrade-20200701032338-8084 kubelet[4009]: I0701 03:35:53.804008 4009 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Jul 01 03:35:53 vupgrade-20200701032338-8084 kubelet[4009]: I0701 03:35:53.826411 4009 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: e6e5c7d7d0e6ecea017f9943381d6d37ec767a4452618b831a4670f558cb12d9
* Jul 01 03:35:53 vupgrade-20200701032338-8084 kubelet[4009]: I0701 03:35:53.903860 4009 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-qtpfp" (UniqueName: "kubernetes.io/secret/303db952-d79d-486a-a43a-3b597d098f7f-kube-proxy-token-qtpfp") pod "kube-proxy-pzmdw" (UID: "303db952-d79d-486a-a43a-3b597d098f7f")
* Jul 01 03:35:53 vupgrade-20200701032338-8084 kubelet[4009]: I0701 03:35:53.904075 4009 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/303db952-d79d-486a-a43a-3b597d098f7f-kube-proxy") pod "kube-proxy-pzmdw" (UID: "303db952-d79d-486a-a43a-3b597d098f7f")
* Jul 01 03:35:53 vupgrade-20200701032338-8084 kubelet[4009]: I0701 03:35:53.904386 4009 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/303db952-d79d-486a-a43a-3b597d098f7f-lib-modules") pod "kube-proxy-pzmdw" (UID: "303db952-d79d-486a-a43a-3b597d098f7f")
* Jul 01 03:35:53 vupgrade-20200701032338-8084 kubelet[4009]: I0701 03:35:53.904751 4009 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/303db952-d79d-486a-a43a-3b597d098f7f-xtables-lock") pod "kube-proxy-pzmdw" (UID: "303db952-d79d-486a-a43a-3b597d098f7f")
* Jul 01 03:35:54 vupgrade-20200701032338-8084 kubelet[4009]: I0701 03:35:54.222376 4009 reconciler.go:196] operationExecutor.UnmountVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/303db952-d79d-486a-a43a-3b597d098f7f-kube-proxy") pod "303db952-d79d-486a-a43a-3b597d098f7f" (UID: "303db952-d79d-486a-a43a-3b597d098f7f")
* Jul 01 03:35:54 vupgrade-20200701032338-8084 kubelet[4009]: I0701 03:35:54.222583 4009 reconciler.go:196] operationExecutor.UnmountVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/303db952-d79d-486a-a43a-3b597d098f7f-xtables-lock") pod "303db952-d79d-486a-a43a-3b597d098f7f" (UID: "303db952-d79d-486a-a43a-3b597d098f7f")
* Jul 01 03:35:54 vupgrade-20200701032338-8084 kubelet[4009]: I0701 03:35:54.222778 4009 reconciler.go:196] operationExecutor.UnmountVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/303db952-d79d-486a-a43a-3b597d098f7f-lib-modules") pod "303db952-d79d-486a-a43a-3b597d098f7f" (UID: "303db952-d79d-486a-a43a-3b597d098f7f")
* Jul 01 03:35:54 vupgrade-20200701032338-8084 kubelet[4009]: I0701 03:35:54.222822 4009 reconciler.go:196] operationExecutor.UnmountVolume started for volume "kube-proxy-token-qtpfp" (UniqueName: "kubernetes.io/secret/303db952-d79d-486a-a43a-3b597d098f7f-kube-proxy-token-qtpfp") pod "303db952-d79d-486a-a43a-3b597d098f7f" (UID: "303db952-d79d-486a-a43a-3b597d098f7f")
* Jul 01 03:35:54 vupgrade-20200701032338-8084 kubelet[4009]: W0701 03:35:54.225068 4009 empty_dir.go:453] Warning: Failed to clear quota on /var/lib/kubelet/pods/303db952-d79d-486a-a43a-3b597d098f7f/volumes/kubernetes.io~configmap/kube-proxy: ClearQuota called, but quotas disabled
* Jul 01 03:35:54 vupgrade-20200701032338-8084 kubelet[4009]: I0701 03:35:54.229581 4009 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/303db952-d79d-486a-a43a-3b597d098f7f-kube-proxy" (OuterVolumeSpecName: "kube-proxy") pod "303db952-d79d-486a-a43a-3b597d098f7f" (UID: "303db952-d79d-486a-a43a-3b597d098f7f"). InnerVolumeSpecName "kube-proxy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
* Jul 01 03:35:54 vupgrade-20200701032338-8084 kubelet[4009]: I0701 03:35:54.231245 4009 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/303db952-d79d-486a-a43a-3b597d098f7f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "303db952-d79d-486a-a43a-3b597d098f7f" (UID: "303db952-d79d-486a-a43a-3b597d098f7f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
* Jul 01 03:35:54 vupgrade-20200701032338-8084 kubelet[4009]: I0701 03:35:54.231210 4009 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/303db952-d79d-486a-a43a-3b597d098f7f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "303db952-d79d-486a-a43a-3b597d098f7f" (UID: "303db952-d79d-486a-a43a-3b597d098f7f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
* Jul 01 03:35:54 vupgrade-20200701032338-8084 kubelet[4009]: I0701 03:35:54.236771 4009 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/303db952-d79d-486a-a43a-3b597d098f7f-kube-proxy-token-qtpfp" (OuterVolumeSpecName: "kube-proxy-token-qtpfp") pod "303db952-d79d-486a-a43a-3b597d098f7f" (UID: "303db952-d79d-486a-a43a-3b597d098f7f"). InnerVolumeSpecName "kube-proxy-token-qtpfp". PluginName "kubernetes.io/secret", VolumeGidValue ""
* Jul 01 03:35:54 vupgrade-20200701032338-8084 kubelet[4009]: I0701 03:35:54.323182 4009 reconciler.go:319] Volume detached for volume "kube-proxy-token-qtpfp" (UniqueName: "kubernetes.io/secret/303db952-d79d-486a-a43a-3b597d098f7f-kube-proxy-token-qtpfp") on node "vupgrade-20200701032338-8084" DevicePath ""
* Jul 01 03:35:54 vupgrade-20200701032338-8084 kubelet[4009]: I0701 03:35:54.323216 4009 reconciler.go:319] Volume detached for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/303db952-d79d-486a-a43a-3b597d098f7f-kube-proxy") on node "vupgrade-20200701032338-8084" DevicePath ""
* Jul 01 03:35:54 vupgrade-20200701032338-8084 kubelet[4009]: I0701 03:35:54.323229 4009 reconciler.go:319] Volume detached for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/303db952-d79d-486a-a43a-3b597d098f7f-xtables-lock") on node "vupgrade-20200701032338-8084" DevicePath ""
* Jul 01 03:35:54 vupgrade-20200701032338-8084 kubelet[4009]: I0701 03:35:54.323240 4009 reconciler.go:319] Volume detached for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/303db952-d79d-486a-a43a-3b597d098f7f-lib-modules") on node "vupgrade-20200701032338-8084" DevicePath ""
* Jul 01 03:35:54 vupgrade-20200701032338-8084 kubelet[4009]: E0701 03:35:54.911225 4009 kubelet_pods.go:147] Mount cannot be satisfied for container "kube-proxy", because the volume is missing or the volume mounter is nil: {Name:kube-proxy ReadOnly:false MountPath:/var/lib/kube-proxy SubPath: MountPropagation:<nil> SubPathExpr:}
* Jul 01 03:35:54 vupgrade-20200701032338-8084 kubelet[4009]: E0701 03:35:54.911352 4009 kuberuntime_manager.go:801] container start failed: CreateContainerConfigError: cannot find volume "kube-proxy" to mount into container "kube-proxy"
* Jul 01 03:35:54 vupgrade-20200701032338-8084 kubelet[4009]: E0701 03:35:54.911554 4009 pod_workers.go:191] Error syncing pod 303db952-d79d-486a-a43a-3b597d098f7f ("kube-proxy-pzmdw_kube-system(303db952-d79d-486a-a43a-3b597d098f7f)"), skipping: failed to "StartContainer" for "kube-proxy" with CreateContainerConfigError: "cannot find volume \"kube-proxy\" to mount into container \"kube-proxy\""
*
* ==> storage-provisioner [4947220c5e56] <==
*
* ==> storage-provisioner [868d21da82e8] <==
* F0701 03:34:56.843334 1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
-- /stdout --
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p vupgrade-20200701032338-8084 -n vupgrade-20200701032338-8084
helpers_test.go:254: (dbg) Run: kubectl --context vupgrade-20200701032338-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:254: (dbg) Non-zero exit: kubectl --context vupgrade-20200701032338-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: exec: "kubectl": executable file not found in $PATH (471ns)
helpers_test.go:256: kubectl --context vupgrade-20200701032338-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: exec: "kubectl": executable file not found in $PATH
helpers_test.go:170: Cleaning up "vupgrade-20200701032338-8084" profile ...
helpers_test.go:171: (dbg) Run: out/minikube-linux-amd64 delete -p vupgrade-20200701032338-8084
=== CONT TestNetworkPlugins/group/false
=== RUN TestNetworkPlugins/group/false/Start
=== RUN TestNetworkPlugins/group/auto/KubeletFlags
=== RUN TestNetworkPlugins/group/auto/NetCatPod
=== RUN TestNetworkPlugins/group/flannel/ControllerPod
=== RUN TestNetworkPlugins/group/false/KubeletFlags
=== RUN TestNetworkPlugins/group/false/NetCatPod
=== RUN TestNetworkPlugins/group/custom-weave/KubeletFlags
=== RUN TestNetworkPlugins/group/custom-weave/NetCatPod
=== RUN TestNetworkPlugins/group/flannel/KubeletFlags
=== RUN TestNetworkPlugins/group/flannel/NetCatPod
panic: test timed out after 1h10m0s
goroutine 4041 [running]:
testing.(*M).startAlarm.func1()
/usr/local/go/src/testing/testing.go:1377 +0xdf
created by time.goFunc
/usr/local/go/src/time/sleep.go:168 +0x44
goroutine 1 [chan receive, 30 minutes]:
testing.tRunner.func1(0xc000502200)
/usr/local/go/src/testing/testing.go:885 +0x202
testing.tRunner(0xc000502200, 0xc000681d18)
/usr/local/go/src/testing/testing.go:913 +0xd3
testing.runTests(0xc0001bfb00, 0x2668e60, 0x14, 0x14, 0xa)
/usr/local/go/src/testing/testing.go:1200 +0x2a7
testing.(*M).Run(0xc000132380, 0x0)
/usr/local/go/src/testing/testing.go:1117 +0x176
k8s.io/minikube/test/integration.TestMain(0xc000132380)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:59 +0xbe
main.main()
_testmain.go:80 +0x135
goroutine 21 [chan receive]:
k8s.io/klog.(*loggingT).flushDaemon(0x267f380)
/var/lib/jenkins/go/pkg/mod/k8s.io/klog@v1.0.0/klog.go:1010 +0x8b
created by k8s.io/klog.init.0
/var/lib/jenkins/go/pkg/mod/k8s.io/klog@v1.0.0/klog.go:411 +0xd6
goroutine 22 [chan receive]:
github.com/golang/glog.(*loggingT).flushDaemon(0x267f100)
/var/lib/jenkins/go/pkg/mod/github.com/golang/glog@v0.0.0-20160126235308-23def4e6c14b/glog.go:882 +0x8b
created by github.com/golang/glog.init.0
/var/lib/jenkins/go/pkg/mod/github.com/golang/glog@v0.0.0-20160126235308-23def4e6c14b/glog.go:410 +0x26f
goroutine 24 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0000f7a90)
/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.22.3/stats/view/worker.go:154 +0x100
created by go.opencensus.io/stats/view.init.0
/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.22.3/stats/view/worker.go:32 +0x57
goroutine 25 [syscall, 70 minutes]:
os/signal.signal_recv(0x0)
/usr/local/go/src/runtime/sigqueue.go:147 +0x9c
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:23 +0x22
created by os/signal.init.0
/usr/local/go/src/os/signal/signal_unix.go:29 +0x41
goroutine 704 [chan receive, 39 minutes]:
testing.tRunner.func1(0xc00046a300)
/usr/local/go/src/testing/testing.go:885 +0x202
testing.tRunner(0xc00046a300, 0x18f43a0)
/usr/local/go/src/testing/testing.go:913 +0xd3
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 682 [chan receive, 43 minutes]:
testing.runTests.func1.1(0xc000502200)
/usr/local/go/src/testing/testing.go:1207 +0x3b
created by testing.runTests.func1
/usr/local/go/src/testing/testing.go:1207 +0xac
goroutine 775 [chan receive, 28 minutes]:
testing.(*T).Run(0xc000189200, 0x181e61e, 0x9, 0xc00084ca80, 0xc000532301)
/usr/local/go/src/testing/testing.go:961 +0x377
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00046aa00)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:113 +0x9cb
testing.tRunner(0xc00046aa00, 0xc00011c9c0)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 707 [chan send, 35 minutes]:
testing.tRunner.func1(0xc000502300)
/usr/local/go/src/testing/testing.go:904 +0x282
testing.tRunner(0xc000502300, 0x18f43c0)
/usr/local/go/src/testing/testing.go:913 +0xd3
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 1111 [select]:
k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc0006d46c0, 0xc0007cbd20, 0xc00004f680, 0x0, 0x0)
/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.17.3/pkg/util/wait/wait.go:432 +0x12e
k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc0006d46c0, 0xc0008b9d20, 0x0, 0x0)
/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.17.3/pkg/util/wait/wait.go:320 +0x9f
k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0006d46c0, 0xc0008b9d20, 0xc0006d46c0, 0x7e)
/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.17.3/pkg/util/wait/wait.go:345 +0x70
k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x3b9aca00, 0xd18c2e2800, 0xc0008b9d20, 0xc0008b9b70, 0x4)
/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.17.3/pkg/util/wait/wait.go:334 +0x4d
k8s.io/minikube/test/integration.PodWait(0x1aa7b40, 0xc0003fe360, 0xc000914800, 0xc0006120a0, 0x20, 0x181bd27, 0x7, 0x181ffe8, 0xa, 0xd18c2e2800, ...)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:359 +0x415
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.4(0xc000914800)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:128 +0x448
testing.tRunner(0xc000914800, 0xc0008a3a70)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 810 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc000502b00)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc000502b00)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestFunctional.func3.1(0xc000502b00)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:130 +0x58
testing.tRunner(0xc000502b00, 0xc000348f00)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 681 [chan send, 30 minutes]:
testing.tRunner.func1(0xc000502500)
/usr/local/go/src/testing/testing.go:904 +0x282
runtime.Goexit()
/usr/local/go/src/runtime/panic.go:563 +0xec
testing.(*common).FailNow(0xc000502500)
/usr/local/go/src/testing/testing.go:653 +0x39
testing.(*common).Fatalf(0xc000502500, 0x1834036, 0x19, 0xc000239b98, 0x1, 0x1)
/usr/local/go/src/testing/testing.go:716 +0x90
k8s.io/minikube/test/integration.TestVersionUpgrade(0xc000502500)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:106 +0x1144
testing.tRunner(0xc000502500, 0x18f43e0)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 805 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc000502600)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc000502600)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestFunctional.func3.1(0xc000502600)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:130 +0x58
testing.tRunner(0xc000502600, 0xc000348880)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 776 [chan receive, 28 minutes]:
testing.(*T).Run(0xc000914800, 0x181e61e, 0x9, 0xc0008a3a70, 0xc000532001)
/usr/local/go/src/testing/testing.go:961 +0x377
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00046ab00)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:113 +0x9cb
testing.tRunner(0xc00046ab00, 0xc00011ca20)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 599 [chan send, 36 minutes]:
testing.tRunner.func1(0xc00046a100)
/usr/local/go/src/testing/testing.go:904 +0x282
runtime.Goexit()
/usr/local/go/src/runtime/panic.go:563 +0xec
testing.(*common).FailNow(0xc00046a100)
/usr/local/go/src/testing/testing.go:653 +0x39
testing.(*common).Fatalf(0xc00046a100, 0x182307d, 0xd, 0xc0004a9bf0, 0x2, 0x2)
/usr/local/go/src/testing/testing.go:716 +0x90
k8s.io/minikube/test/integration.TestGvisorAddon(0xc00046a100)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/gvisor_addon_test.go:74 +0x160a
testing.tRunner(0xc00046a100, 0x18f4380)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 774 [chan receive, 39 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc00046a900)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc00046a900)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00046a900)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:68 +0x8f
testing.tRunner(0xc00046a900, 0xc00011c960)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 809 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc000502a00)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc000502a00)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestFunctional.func3.1(0xc000502a00)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:130 +0x58
testing.tRunner(0xc000502a00, 0xc000348dc0)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 1080 [select]:
k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc000648140, 0xc00080fd20, 0xc00056c000, 0x0, 0x0)
/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.17.3/pkg/util/wait/wait.go:432 +0x12e
k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc000648140, 0xc0007cbd20, 0x0, 0x0)
/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.17.3/pkg/util/wait/wait.go:320 +0x9f
k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc000648140, 0xc0007cbd20, 0xc000648140, 0x76)
/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.17.3/pkg/util/wait/wait.go:345 +0x70
k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x3b9aca00, 0xd18c2e2800, 0xc0007cbd20, 0xc0007cbb70, 0x4)
/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.17.3/pkg/util/wait/wait.go:334 +0x4d
k8s.io/minikube/test/integration.PodWait(0x1aa7b40, 0xc0007444e0, 0xc00046ae00, 0xc000648200, 0x18, 0x181bd27, 0x7, 0x181ffe8, 0xa, 0xd18c2e2800, ...)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:359 +0x415
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.4(0xc00046ae00)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:128 +0x448
testing.tRunner(0xc00046ae00, 0xc00077eba0)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 811 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc000502c00)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc000502c00)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestFunctional.func3.1(0xc000502c00)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:130 +0x58
testing.tRunner(0xc000502c00, 0xc000349280)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 705 [chan receive, 30 minutes]:
testing.(*T).Run(0xc00046ae00, 0x181e61e, 0x9, 0xc00077eba0, 0xc00040eb01)
/usr/local/go/src/testing/testing.go:961 +0x377
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00046a400)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:113 +0x9cb
testing.tRunner(0xc00046a400, 0xc00011c6c0)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 770 [chan receive, 39 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc00046a500)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc00046a500)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00046a500)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:68 +0x8f
testing.tRunner(0xc00046a500, 0xc00011c7e0)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 773 [chan receive, 26 minutes]:
testing.(*T).Run(0xc000188500, 0x181e61e, 0x9, 0xc00084c090, 0xc00040f201)
/usr/local/go/src/testing/testing.go:961 +0x377
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00046a800)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:113 +0x9cb
testing.tRunner(0xc00046a800, 0xc00011c900)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 812 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc000502d00)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc000502d00)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestFunctional.func3.1(0xc000502d00)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:130 +0x58
testing.tRunner(0xc000502d00, 0xc0003492c0)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 772 [chan receive, 39 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc00046a700)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc00046a700)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00046a700)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:68 +0x8f
testing.tRunner(0xc00046a700, 0xc00011c8a0)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 808 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc000502900)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc000502900)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestFunctional.func3.1(0xc000502900)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:130 +0x58
testing.tRunner(0xc000502900, 0xc000348d80)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 807 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc000502800)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc000502800)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestFunctional.func3.1(0xc000502800)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:130 +0x58
testing.tRunner(0xc000502800, 0xc0003489c0)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 532 [chan receive, 36 minutes]:
testing.(*T).Run(0xc000502400, 0x181dabd, 0x8, 0xc000152e40, 0xc000744000)
/usr/local/go/src/testing/testing.go:961 +0x377
k8s.io/minikube/test/integration.TestFunctional(0xc000189000)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:102 +0x244
testing.tRunner(0xc000189000, 0x18f4370)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 680 [chan receive, 36 minutes]:
testing.(*T).Run(0xc000503a00, 0x181928a, 0x5, 0x18f43d0, 0x2124de844cf)
/usr/local/go/src/testing/testing.go:961 +0x377
k8s.io/minikube/test/integration.TestStartStop(0xc000502100)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:41 +0x5a
testing.tRunner(0xc000502100, 0x18f43d8)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 806 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc000502700)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc000502700)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestFunctional.func3.1(0xc000502700)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:130 +0x58
testing.tRunner(0xc000502700, 0xc0003488c0)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 804 [chan receive, 36 minutes]:
testing.tRunner.func1(0xc000502400)
/usr/local/go/src/testing/testing.go:885 +0x202
testing.tRunner(0xc000502400, 0xc000152e40)
/usr/local/go/src/testing/testing.go:913 +0xd3
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 1084 [IO wait]:
internal/poll.runtime_pollWait(0x7ff9104cac38, 0x72, 0xffffffffffffffff)
/usr/local/go/src/runtime/netpoll.go:184 +0x55
internal/poll.(*pollDesc).wait(0xc00061c398, 0x72, 0x900, 0x90e, 0xffffffffffffffff)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc00061c380, 0xc0001f1500, 0x90e, 0x90e, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:169 +0x1cf
net.(*netFD).Read(0xc00061c380, 0xc0001f1500, 0x90e, 0x90e, 0x203000, 0x0, 0x8ce)
/usr/local/go/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc0002d0098, 0xc0001f1500, 0x90e, 0x90e, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:184 +0x68
crypto/tls.(*atLeastReader).Read(0xc0006a7f40, 0xc0001f1500, 0x90e, 0x90e, 0xa0, 0xc0009ad400, 0xc000222920)
/usr/local/go/src/crypto/tls/conn.go:780 +0x60
bytes.(*Buffer).ReadFrom(0xc000310958, 0x1a6b2a0, 0xc0006a7f40, 0x40bf45, 0x164c2a0, 0x17afb00)
/usr/local/go/src/bytes/buffer.go:204 +0xb4
crypto/tls.(*Conn).readFromUntil(0xc000310700, 0x1a6d6c0, 0xc0002d0098, 0x5, 0xc0002d0098, 0x203000)
/usr/local/go/src/crypto/tls/conn.go:802 +0xec
crypto/tls.(*Conn).readRecordOrCCS(0xc000310700, 0x0, 0x0, 0xc0006a7ee0)
/usr/local/go/src/crypto/tls/conn.go:609 +0x124
crypto/tls.(*Conn).readRecord(...)
/usr/local/go/src/crypto/tls/conn.go:577
crypto/tls.(*Conn).Read(0xc000310700, 0xc000855000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/crypto/tls/conn.go:1255 +0x161
bufio.(*Reader).Read(0xc000483d40, 0xc000566ff8, 0x9, 0x9, 0xc000222cc0, 0x0, 0x8cb145)
/usr/local/go/src/bufio/bufio.go:226 +0x26a
io.ReadAtLeast(0x1a6b060, 0xc000483d40, 0xc000566ff8, 0x9, 0x9, 0x9, 0xc0000c2050, 0x0, 0x1a6b440)
/usr/local/go/src/io/io.go:310 +0x87
io.ReadFull(...)
/usr/local/go/src/io/io.go:329
golang.org/x/net/http2.readFrameHeader(0xc000566ff8, 0x9, 0x9, 0x1a6b060, 0xc000483d40, 0x0, 0x0, 0xc00040cd50, 0x0)
/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.0.0-20200520182314-0ba52f642ac2/http2/frame.go:237 +0x87
golang.org/x/net/http2.(*Framer).ReadFrame(0xc000566fc0, 0xc00040cd50, 0x0, 0x0, 0x0)
/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.0.0-20200520182314-0ba52f642ac2/http2/frame.go:492 +0xa1
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000222fb8, 0x0, 0x0)
/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.0.0-20200520182314-0ba52f642ac2/http2/transport.go:1794 +0xbe
golang.org/x/net/http2.(*ClientConn).readLoop(0xc00057b200)
/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.0.0-20200520182314-0ba52f642ac2/http2/transport.go:1716 +0xa3
created by golang.org/x/net/http2.(*Transport).newClientConn
/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.0.0-20200520182314-0ba52f642ac2/http2/transport.go:695 +0x62f
goroutine 537 [IO wait, 54 minutes]:
internal/poll.runtime_pollWait(0x7ff9104ca828, 0x72, 0x0)
/usr/local/go/src/runtime/netpoll.go:184 +0x55
internal/poll.(*pollDesc).wait(0xc00066c118, 0x72, 0x0, 0x0, 0x181bab1)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Accept(0xc00066c100, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:384 +0x1f8
net.(*netFD).accept(0xc00066c100, 0xc00021ed88, 0x760ce4, 0xc00016e0a0)
/usr/local/go/src/net/fd_unix.go:238 +0x42
net.(*TCPListener).accept(0xc00049a1c0, 0x5efbff10, 0xc00021ed88, 0x4c5316)
/usr/local/go/src/net/tcpsock_posix.go:139 +0x32
net.(*TCPListener).Accept(0xc00049a1c0, 0xc00021edd8, 0x18, 0xc000506180, 0x760214)
/usr/local/go/src/net/tcpsock.go:261 +0x47
net/http.(*Server).Serve(0xc00016e000, 0x1aa1b80, 0xc00049a1c0, 0x0, 0x0)
/usr/local/go/src/net/http/server.go:2896 +0x280
net/http.(*Server).ListenAndServe(0xc00016e000, 0x50fea9, 0x18f5088)
/usr/local/go/src/net/http/server.go:2825 +0xb7
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc00016e000, 0xc000189300)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:1029 +0x2f
created by k8s.io/minikube/test/integration.startHTTPProxy
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:1028 +0x12f
goroutine 1091 [select]:
k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc0000f9280, 0xc00080bd20, 0xc000306300, 0x0, 0x0)
/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.17.3/pkg/util/wait/wait.go:432 +0x12e
k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc0000f9280, 0xc0007c7d20, 0x0, 0x0)
/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.17.3/pkg/util/wait/wait.go:320 +0x9f
k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0000f9280, 0xc0007c7d20, 0xc0000f9280, 0x77)
/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.17.3/pkg/util/wait/wait.go:345 +0x70
k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x3b9aca00, 0xd18c2e2800, 0xc0007c7d20, 0xc0007c7b70, 0x4)
/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.17.3/pkg/util/wait/wait.go:334 +0x4d
k8s.io/minikube/test/integration.PodWait(0x1aa7b40, 0xc0003fed20, 0xc000189200, 0xc0006122c0, 0x19, 0x181bd27, 0x7, 0x181ffe8, 0xa, 0xd18c2e2800, ...)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:359 +0x415
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.4(0xc000189200)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:128 +0x448
testing.tRunner(0xc000189200, 0xc00084ca80)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 1095 [IO wait]:
internal/poll.runtime_pollWait(0x7ff9104caea8, 0x72, 0xffffffffffffffff)
/usr/local/go/src/runtime/netpoll.go:184 +0x55
internal/poll.(*pollDesc).wait(0xc000727a18, 0x72, 0x900, 0x90e, 0xffffffffffffffff)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc000727a00, 0xc0001f0000, 0x90e, 0x90e, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:169 +0x1cf
net.(*netFD).Read(0xc000727a00, 0xc0001f0000, 0x90e, 0x90e, 0x203000, 0x0, 0x909)
/usr/local/go/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc000010530, 0xc0001f0000, 0x90e, 0x90e, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:184 +0x68
crypto/tls.(*atLeastReader).Read(0xc000616840, 0xc0001f0000, 0x90e, 0x90e, 0xa1, 0x1a6b440, 0xc0002ae920)
/usr/local/go/src/crypto/tls/conn.go:780 +0x60
bytes.(*Buffer).ReadFrom(0xc00037b3d8, 0x1a6b2a0, 0xc000616840, 0x40bf45, 0x164c2a0, 0x17afb00)
/usr/local/go/src/bytes/buffer.go:204 +0xb4
crypto/tls.(*Conn).readFromUntil(0xc00037b180, 0x1a6d6c0, 0xc000010530, 0x5, 0xc000010530, 0x203000)
/usr/local/go/src/crypto/tls/conn.go:802 +0xec
crypto/tls.(*Conn).readRecordOrCCS(0xc00037b180, 0x0, 0x0, 0xc0006167a0)
/usr/local/go/src/crypto/tls/conn.go:609 +0x124
crypto/tls.(*Conn).readRecord(...)
/usr/local/go/src/crypto/tls/conn.go:577
crypto/tls.(*Conn).Read(0xc00037b180, 0xc000b79000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/crypto/tls/conn.go:1255 +0x161
bufio.(*Reader).Read(0xc000ae2f60, 0xc00065e1f8, 0x9, 0x9, 0xc0002aecc0, 0x0, 0x8cb145)
/usr/local/go/src/bufio/bufio.go:226 +0x26a
io.ReadAtLeast(0x1a6b060, 0xc000ae2f60, 0xc00065e1f8, 0x9, 0x9, 0x9, 0xc0000c2050, 0x0, 0x1a6b440)
/usr/local/go/src/io/io.go:310 +0x87
io.ReadFull(...)
/usr/local/go/src/io/io.go:329
golang.org/x/net/http2.readFrameHeader(0xc00065e1f8, 0x9, 0x9, 0x1a6b060, 0xc000ae2f60, 0x0, 0x0, 0xc0006c6660, 0x0)
/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.0.0-20200520182314-0ba52f642ac2/http2/frame.go:237 +0x87
golang.org/x/net/http2.(*Framer).ReadFrame(0xc00065e1c0, 0xc0006c6660, 0x0, 0x0, 0x0)
/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.0.0-20200520182314-0ba52f642ac2/http2/frame.go:492 +0xa1
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0002aefb8, 0x0, 0x0)
/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.0.0-20200520182314-0ba52f642ac2/http2/transport.go:1794 +0xbe
golang.org/x/net/http2.(*ClientConn).readLoop(0xc00057b680)
/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.0.0-20200520182314-0ba52f642ac2/http2/transport.go:1716 +0xa3
created by golang.org/x/net/http2.(*Transport).newClientConn
/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.0.0-20200520182314-0ba52f642ac2/http2/transport.go:695 +0x62f
goroutine 771 [chan receive, 39 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc00046a600)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc00046a600)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00046a600)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:68 +0x8f
testing.tRunner(0xc00046a600, 0xc00011c840)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 641 [chan receive, 39 minutes]:
testing.(*T).Run(0xc00046a300, 0x181928a, 0x5, 0x18f43a0, 0x5efc010f)
/usr/local/go/src/testing/testing.go:961 +0x377
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc000502000)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:41 +0x69
testing.tRunner(0xc000502000, 0x18f43a8)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 813 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc000502e00)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc000502e00)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestFunctional.func3.1(0xc000502e00)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:130 +0x58
testing.tRunner(0xc000502e00, 0xc000349300)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 814 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc000502f00)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc000502f00)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestFunctional.func3.1(0xc000502f00)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:130 +0x58
testing.tRunner(0xc000502f00, 0xc000349340)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 815 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc000503000)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc000503000)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestFunctional.func3.1(0xc000503000)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:130 +0x58
testing.tRunner(0xc000503000, 0xc000349380)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 816 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc000503100)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc000503100)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestFunctional.func3.1(0xc000503100)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:130 +0x58
testing.tRunner(0xc000503100, 0xc0003493c0)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 817 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc000503200)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc000503200)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestFunctional.func3.1(0xc000503200)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:130 +0x58
testing.tRunner(0xc000503200, 0xc000349400)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 818 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc000503300)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc000503300)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestFunctional.func3.1(0xc000503300)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:130 +0x58
testing.tRunner(0xc000503300, 0xc000349440)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 819 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc000503400)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc000503400)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestFunctional.func3.1(0xc000503400)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:130 +0x58
testing.tRunner(0xc000503400, 0xc000349480)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 820 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc000503500)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc000503500)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestFunctional.func3.1(0xc000503500)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:130 +0x58
testing.tRunner(0xc000503500, 0xc0003494c0)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 821 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc000503600)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc000503600)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestFunctional.func3.1(0xc000503600)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:130 +0x58
testing.tRunner(0xc000503600, 0xc000349500)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 822 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc000503800)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc000503800)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestFunctional.func3.1(0xc000503800)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:130 +0x58
testing.tRunner(0xc000503800, 0xc000349540)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 823 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc000503900)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc000503900)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestFunctional.func3.1(0xc000503900)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:130 +0x58
testing.tRunner(0xc000503900, 0xc000349580)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 824 [chan receive, 36 minutes]:
testing.tRunner.func1(0xc000503a00)
/usr/local/go/src/testing/testing.go:885 +0x202
testing.tRunner(0xc000503a00, 0x18f43d0)
/usr/local/go/src/testing/testing.go:913 +0xd3
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 825 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc000503b00)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc000503b00)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000503b00)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:82 +0x4b
testing.tRunner(0xc000503b00, 0xc000349600)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 826 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc000503c00)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc000503c00)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000503c00)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:82 +0x4b
testing.tRunner(0xc000503c00, 0xc000349640)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 827 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc000503d00)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc000503d00)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000503d00)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:82 +0x4b
testing.tRunner(0xc000503d00, 0xc000349680)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 828 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc000503e00)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc000503e00)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000503e00)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:82 +0x4b
testing.tRunner(0xc000503e00, 0xc0003496c0)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 829 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0001c1650)
/usr/local/go/src/testing/testing.go:1008 +0xa7
testing.(*T).Parallel(0xc000503f00)
/usr/local/go/src/testing/testing.go:815 +0x1ed
k8s.io/minikube/test/integration.MaybeParallel(0xc000503f00)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:421 +0x50
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000503f00)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:82 +0x4b
testing.tRunner(0xc000503f00, 0xc000349740)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 1260 [select]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1(0xc00004f740, 0x3b9aca00, 0xd18c2e2800, 0xc00004f6e0)
/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.17.3/pkg/util/wait/wait.go:481 +0x18e
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.17.3/pkg/util/wait/wait.go:464 +0x8c
goroutine 1100 [IO wait]:
internal/poll.runtime_pollWait(0x7ff9104cab68, 0x72, 0xffffffffffffffff)
/usr/local/go/src/runtime/netpoll.go:184 +0x55
internal/poll.(*pollDesc).wait(0xc00061d718, 0x72, 0x900, 0x90e, 0xffffffffffffffff)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc00061d700, 0xc0002c8000, 0x90e, 0x90e, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:169 +0x1cf
net.(*netFD).Read(0xc00061d700, 0xc0002c8000, 0x90e, 0x90e, 0x203000, 0x0, 0x909)
/usr/local/go/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc0000103c0, 0xc0002c8000, 0x90e, 0x90e, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:184 +0x68
crypto/tls.(*atLeastReader).Read(0xc0000d0660, 0xc0002c8000, 0x90e, 0x90e, 0xa1, 0x1a6b440, 0xc0002ad920)
/usr/local/go/src/crypto/tls/conn.go:780 +0x60
bytes.(*Buffer).ReadFrom(0xc00037b758, 0x1a6b2a0, 0xc0000d0660, 0x40bf45, 0x164c2a0, 0x17afb00)
/usr/local/go/src/bytes/buffer.go:204 +0xb4
crypto/tls.(*Conn).readFromUntil(0xc00037b500, 0x1a6d6c0, 0xc0000103c0, 0x5, 0xc0000103c0, 0x203000)
/usr/local/go/src/crypto/tls/conn.go:802 +0xec
crypto/tls.(*Conn).readRecordOrCCS(0xc00037b500, 0x0, 0x0, 0xc0000d05c0)
/usr/local/go/src/crypto/tls/conn.go:609 +0x124
crypto/tls.(*Conn).readRecord(...)
/usr/local/go/src/crypto/tls/conn.go:577
crypto/tls.(*Conn).Read(0xc00037b500, 0xc00084a000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/crypto/tls/conn.go:1255 +0x161
bufio.(*Reader).Read(0xc00003da40, 0xc000566b98, 0x9, 0x9, 0xc0002adcc0, 0x0, 0x8cb145)
/usr/local/go/src/bufio/bufio.go:226 +0x26a
io.ReadAtLeast(0x1a6b060, 0xc00003da40, 0xc000566b98, 0x9, 0x9, 0x9, 0xc0000c2050, 0x0, 0x1a6b440)
/usr/local/go/src/io/io.go:310 +0x87
io.ReadFull(...)
/usr/local/go/src/io/io.go:329
golang.org/x/net/http2.readFrameHeader(0xc000566b98, 0x9, 0x9, 0x1a6b060, 0xc00003da40, 0x0, 0x0, 0xc00040d1d0, 0x0)
/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.0.0-20200520182314-0ba52f642ac2/http2/frame.go:237 +0x87
golang.org/x/net/http2.(*Framer).ReadFrame(0xc000566b60, 0xc00040d1d0, 0x0, 0x0, 0x0)
/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.0.0-20200520182314-0ba52f642ac2/http2/frame.go:492 +0xa1
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0002adfb8, 0x0, 0x0)
/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.0.0-20200520182314-0ba52f642ac2/http2/transport.go:1794 +0xbe
golang.org/x/net/http2.(*ClientConn).readLoop(0xc000230c00)
/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.0.0-20200520182314-0ba52f642ac2/http2/transport.go:1716 +0xa3
created by golang.org/x/net/http2.(*Transport).newClientConn
/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.0.0-20200520182314-0ba52f642ac2/http2/transport.go:695 +0x62f
goroutine 1066 [IO wait]:
internal/poll.runtime_pollWait(0x7ff9104cb118, 0x72, 0xffffffffffffffff)
/usr/local/go/src/runtime/netpoll.go:184 +0x55
internal/poll.(*pollDesc).wait(0xc00047e118, 0x72, 0x3f00, 0x3f81, 0xffffffffffffffff)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc00047e100, 0xc00087e000, 0x3f81, 0x3f81, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:169 +0x1cf
net.(*netFD).Read(0xc00047e100, 0xc00087e000, 0x3f81, 0x3f81, 0x203000, 0x0, 0x3f7c)
/usr/local/go/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc000010010, 0xc00087e000, 0x3f81, 0x3f81, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:184 +0x68
crypto/tls.(*atLeastReader).Read(0xc00075c8e0, 0xc00087e000, 0x3f81, 0x3f81, 0x400, 0x400, 0xc00052d920)
/usr/local/go/src/crypto/tls/conn.go:780 +0x60
bytes.(*Buffer).ReadFrom(0xc00037acd8, 0x1a6b2a0, 0xc00075c8e0, 0x40bf45, 0x164c2a0, 0x17afb00)
/usr/local/go/src/bytes/buffer.go:204 +0xb4
crypto/tls.(*Conn).readFromUntil(0xc00037aa80, 0x1a6d6c0, 0xc000010010, 0x5, 0xc000010010, 0x203000)
/usr/local/go/src/crypto/tls/conn.go:802 +0xec
crypto/tls.(*Conn).readRecordOrCCS(0xc00037aa80, 0x0, 0x0, 0xc00075c8a0)
/usr/local/go/src/crypto/tls/conn.go:609 +0x124
crypto/tls.(*Conn).readRecord(...)
/usr/local/go/src/crypto/tls/conn.go:577
crypto/tls.(*Conn).Read(0xc00037aa80, 0xc00001f000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/crypto/tls/conn.go:1255 +0x161
bufio.(*Reader).Read(0xc00003cde0, 0xc0003d2d58, 0x9, 0x9, 0xc00052dcc0, 0x0, 0x8cb145)
/usr/local/go/src/bufio/bufio.go:226 +0x26a
io.ReadAtLeast(0x1a6b060, 0xc00003cde0, 0xc0003d2d58, 0x9, 0x9, 0x9, 0xc0000c2050, 0x0, 0x1a6b440)
/usr/local/go/src/io/io.go:310 +0x87
io.ReadFull(...)
/usr/local/go/src/io/io.go:329
golang.org/x/net/http2.readFrameHeader(0xc0003d2d58, 0x9, 0x9, 0x1a6b060, 0xc00003cde0, 0x0, 0x0, 0xc00070fdd0, 0x0)
/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.0.0-20200520182314-0ba52f642ac2/http2/frame.go:237 +0x87
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0003d2d20, 0xc00070fdd0, 0x0, 0x0, 0x0)
/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.0.0-20200520182314-0ba52f642ac2/http2/frame.go:492 +0xa1
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00052dfb8, 0x0, 0x0)
/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.0.0-20200520182314-0ba52f642ac2/http2/transport.go:1794 +0xbe
golang.org/x/net/http2.(*ClientConn).readLoop(0xc000230480)
/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.0.0-20200520182314-0ba52f642ac2/http2/transport.go:1716 +0xa3
created by golang.org/x/net/http2.(*Transport).newClientConn
/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.0.0-20200520182314-0ba52f642ac2/http2/transport.go:695 +0x62f
goroutine 1033 [select]:
k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc0009206e0, 0xc0007c7d20, 0xc00004f9e0, 0x0, 0x0)
/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.17.3/pkg/util/wait/wait.go:432 +0x12e
k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc0009206e0, 0xc0008b5d20, 0x0, 0x0)
/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.17.3/pkg/util/wait/wait.go:320 +0x9f
k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0009206e0, 0xc0008b5d20, 0xc0009206e0, 0x79)
/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.17.3/pkg/util/wait/wait.go:345 +0x70
k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x3b9aca00, 0xd18c2e2800, 0xc0008b5d20, 0xc0008b5b70, 0x4)
/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.17.3/pkg/util/wait/wait.go:334 +0x4d
k8s.io/minikube/test/integration.PodWait(0x1aa7b40, 0xc000745320, 0xc000188500, 0xc000649320, 0x1b, 0x181bd27, 0x7, 0x181ffe8, 0xa, 0xd18c2e2800, ...)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:359 +0x415
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.4(0xc000188500)
/var/lib/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:128 +0x448
testing.tRunner(0xc000188500, 0xc00084c090)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
goroutine 1122 [select]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1(0xc00056c1e0, 0x3b9aca00, 0xd18c2e2800, 0xc00056c180)
/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.17.3/pkg/util/wait/wait.go:481 +0x18e
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.17.3/pkg/util/wait/wait.go:464 +0x8c
goroutine 1263 [select]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1(0xc00004faa0, 0x3b9aca00, 0xd18c2e2800, 0xc00004fa40)
/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.17.3/pkg/util/wait/wait.go:481 +0x18e
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.17.3/pkg/util/wait/wait.go:464 +0x8c
goroutine 1282 [select]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1(0xc0003063c0, 0x3b9aca00, 0xd18c2e2800, 0xc000306360)
/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.17.3/pkg/util/wait/wait.go:481 +0x18e
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.17.3/pkg/util/wait/wait.go:464 +0x8c
++ result=2
++ set +x
>> out/e2e-linux-amd64 exited with 2 at Wed Jul 1 04:06:34 UTC 2020
minikube: FAIL
>> Copying /home/jenkins/minikube-integration/linux-amd64-kvm2-master-2503-8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f/testout.txt to gs://minikube-builds/logs/master/8e52b6b/KVM_Linuxout.txt
AccessDeniedException: 403 Insufficient Permission
CommandException: 1 file/object could not be transferred.
Build step 'Execute shell' marked build as failure
[Google Cloud Storage Plugin] Uploading: KVM_Linux.txt
Finished: FAILURE