Started by upstream project "Build_Cross" build number 12926
originally caused by:
Started by timer
Started by user Medya Ghazizadeh
Rebuilds build #10879
Running as SYSTEM
[EnvInject] - Loading node environment variables.
[EnvInject] - Preparing an environment for the build.
[EnvInject] - Keeping Jenkins system variables.
[EnvInject] - Keeping Jenkins build variables.
[EnvInject] - Evaluating the Groovy script content
[EnvInject] - Injecting contributions.
Building remotely on GCP - Debian Agent 1 (debian10) in workspace /home/jenkins/workspace/KVM_Linux_integration
[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[KVM_Linux_integration] $ /bin/bash -xe /tmp/jenkins10965592934645667523.sh
+ set -e
+ gsutil -m cp -r gs://minikube-builds/master/installers .
Copying gs://minikube-builds/master/installers/check_install_golang.sh...
/ [1/1 files][ 2.2 KiB/ 2.2 KiB] 100% Done
Operation completed over 1 objects/2.2 KiB.
+ chmod +x ./installers/check_install_golang.sh
+ gsutil -m cp -r gs://minikube-builds/master/common.sh .
Copying gs://minikube-builds/master/common.sh...
/ [1/1 files][ 13.7 KiB/ 13.7 KiB] 100% Done
Operation completed over 1 objects/13.7 KiB.
+ gsutil cp gs://minikube-builds/master/linux_integration_tests_kvm.sh .
Copying gs://minikube-builds/master/linux_integration_tests_kvm.sh...
/ [1 files][ 1.5 KiB/ 1.5 KiB]
Operation completed over 1 objects/1.5 KiB.
+ sudo gsutil cp gs://minikube-builds/master/docker-machine-driver-kvm2 /usr/local/bin/docker-machine-driver-kvm2
Copying gs://minikube-builds/master/docker-machine-driver-kvm2...
/ [1 files][ 13.9 MiB/ 13.9 MiB]
Operation completed over 1 objects/13.9 MiB.
+ sudo chmod +x /usr/local/bin/docker-machine-driver-kvm2
+ bash linux_integration_tests_kvm.sh
+ (( 2 < 2 ))
+ VERSION_TO_INSTALL=1.13.9
+ INSTALL_PATH=/usr/local
+ check_and_install_golang
+ go version
+ echo 'WARNING: No golang installation found in your environment.'
WARNING: No golang installation found in your environment.
+ install_golang 1.13.9 /usr/local
+ echo 'Installing golang version: 1.13.9 on /usr/local'
Installing golang version: 1.13.9 on /usr/local
+ pushd /tmp
+ sudo curl -qL -O https://storage.googleapis.com/golang/go1.13.9.linux-amd64.tar.gz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 114M 100 114M 0 0 184M 0 --:--:-- --:--:-- --:--:-- 183M
+ sudo tar xfa go1.13.9.linux-amd64.tar.gz
+ sudo rm -rf /usr/local/go
+ sudo mv go /usr/local/
++ whoami
+ sudo chown -R root: /usr/local/go
+ popd
+ return
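
For reference, the xtrace above corresponds to roughly the following helper (a sketch reconstructed from the trace only; the real logic lives in installers/check_install_golang.sh and its exact control flow may differ):

    #!/bin/bash
    # Sketch of the golang bootstrap, reconstructed from the xtrace above.
    # Function and variable names mirror the trace; bodies are assumptions.
    VERSION_TO_INSTALL=1.13.9
    INSTALL_PATH=/usr/local

    check_and_install_golang() {
      # Install only when no usable `go` binary is on PATH.
      if ! go version >/dev/null 2>&1; then
        echo 'WARNING: No golang installation found in your environment.'
        install_golang "$VERSION_TO_INSTALL" "$INSTALL_PATH"
      fi
    }

    install_golang() {
      echo "Installing golang version: $1 on $2"
      pushd /tmp >/dev/null
      # Fetch the upstream tarball, unpack it, replace any old tree,
      # and hand ownership to root, exactly as the trace shows.
      sudo curl -qL -O "https://storage.googleapis.com/golang/go${1}.linux-amd64.tar.gz"
      sudo tar xfa "go${1}.linux-amd64.tar.gz"
      sudo rm -rf "${2}/go"
      sudo mv go "${2}/"
      sudo chown -R root: "${2}/go"
      popd >/dev/null
    }

    check_and_install_golang
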
Total reclaimed space: 0B
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 1 0 973.5MB 973.5MB (100%)
Containers 0 0 0B 0B
Local Volumes 0 0 0B 0B
Build Cache 0 0 0B 0B
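
The two blocks above are consistent with a pre-test Docker cleanup along these lines (an assumption; the actual cleanup commands run inside the test scripts and are not echoed in this log):

    docker system prune -f   # prints "Total reclaimed space: ..."
    docker system df         # prints the TYPE/TOTAL/ACTIVE/SIZE/RECLAIMABLE table
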
>> Starting at Wed Jul 1 02:56:06 UTC 2020
arch: linux-amd64
build: master
driver: kvm2
job: KVM_Linux
test home: /home/jenkins/minikube-integration/linux-amd64-kvm2-master-2503-8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f
sudo:
kernel: #1 SMP Debian 4.19.118-2+deb10u1 (2020-06-07)
uptime: 02:56:06 up 10 min, 0 users, load average: 0.11, 0.04, 0.01
env: ‘kubectl’: No such file or directory
kubectl:
docker: 19.03.12
sudo: podman: command not found
podman:
go: go version go1.13.9 linux/amd64
virsh: 5.0.0
>> Downloading test inputs from master ...
minikube version: v1.12.0-beta.0
commit: 8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f
>> Cleaning up after previous test runs ...
/usr/bin/virsh
>> virsh VM list after clean up (should be empty):
Id Name State
--------------------
sudo: lsof: command not found
Sending build context to Docker daemon 199.3MB
Step 1/4 : FROM ubuntu:18.04
18.04: Pulling from library/ubuntu
d7c3167c320d: Pulling fs layer
131f805ec7fd: Pulling fs layer
322ed380e680: Pulling fs layer
6ac240b13098: Pulling fs layer
6ac240b13098: Waiting
322ed380e680: Verifying Checksum
322ed380e680: Download complete
131f805ec7fd: Verifying Checksum
131f805ec7fd: Download complete
d7c3167c320d: Download complete
6ac240b13098: Download complete
d7c3167c320d: Pull complete
131f805ec7fd: Pull complete
322ed380e680: Pull complete
6ac240b13098: Pull complete
Digest: sha256:86510528ab9cd7b64209cbbe6946e094a6d10c6db21def64a93ebdd20011de1d
Status: Downloaded newer image for ubuntu:18.04
---> 8e4ce0a6ce69
Step 2/4 : RUN apt-get update && apt-get install -y kmod gcc wget xz-utils libc6-dev bc libelf-dev bison flex openssl libssl-dev libidn2-0 sudo libcap2 && rm -rf /var/lib/apt/lists/*
---> Running in e82c2391cc0e
Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic InRelease [242 kB]
Get:3 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [977 kB]
Get:4 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 Packages [9012 B]
Get:5 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [863 kB]
Get:6 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [82.2 kB]
Get:7 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:8 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Get:9 http://archive.ubuntu.com/ubuntu bionic/universe amd64 Packages [11.3 MB]
Get:10 http://archive.ubuntu.com/ubuntu bionic/multiverse amd64 Packages [186 kB]
Get:11 http://archive.ubuntu.com/ubuntu bionic/restricted amd64 Packages [13.5 kB]
Get:12 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages [1344 kB]
Get:13 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 Packages [13.4 kB]
Get:14 http://archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [93.8 kB]
Get:15 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [1399 kB]
Get:16 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [1271 kB]
Get:17 http://archive.ubuntu.com/ubuntu bionic-backports/universe amd64 Packages [8158 B]
Get:18 http://archive.ubuntu.com/ubuntu bionic-backports/main amd64 Packages [8286 B]
Fetched 18.1 MB in 3s (5617 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
libidn2-0 is already the newest version (2.0.4-1.1ubuntu0.2).
The following additional packages will be installed:
binutils binutils-common binutils-x86-64-linux-gnu ca-certificates cpp cpp-7
gcc-7 gcc-7-base libasan4 libatomic1 libbinutils libbison-dev libc-dev-bin
libcc1-0 libcilkrts5 libelf1 libfl-dev libfl2 libgcc-7-dev libgomp1 libisl19
libitm1 libkmod2 liblsan0 libmpc3 libmpfr6 libmpx2 libpsl5 libquadmath0
libreadline7 libsigsegv2 libssl1.1 libtsan0 libubsan0 linux-libc-dev m4
manpages manpages-dev publicsuffix readline-common zlib1g-dev
Suggested packages:
binutils-doc bison-doc cpp-doc gcc-7-locales build-essential flex-doc
gcc-multilib make autoconf automake libtool gdb gcc-doc gcc-7-multilib
gcc-7-doc libgcc1-dbg libgomp1-dbg libitm1-dbg libatomic1-dbg libasan4-dbg
liblsan0-dbg libtsan0-dbg libubsan0-dbg libcilkrts5-dbg libmpx2-dbg
libquadmath0-dbg glibc-doc libssl-doc m4-doc man-browser readline-doc
The following NEW packages will be installed:
bc binutils binutils-common binutils-x86-64-linux-gnu bison ca-certificates
cpp cpp-7 flex gcc gcc-7 gcc-7-base kmod libasan4 libatomic1 libbinutils
libbison-dev libc-dev-bin libc6-dev libcap2 libcc1-0 libcilkrts5 libelf-dev
libelf1 libfl-dev libfl2 libgcc-7-dev libgomp1 libisl19 libitm1 libkmod2
liblsan0 libmpc3 libmpfr6 libmpx2 libpsl5 libquadmath0 libreadline7
libsigsegv2 libssl-dev libssl1.1 libtsan0 libubsan0 linux-libc-dev m4
manpages manpages-dev openssl publicsuffix readline-common sudo wget
xz-utils zlib1g-dev
0 upgraded, 54 newly installed, 0 to remove and 1 not upgraded.
Need to get 38.5 MB of archives.
After this operation, 143 MB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic/main amd64 libsigsegv2 amd64 2.12-1 [14.7 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic/main amd64 m4 amd64 1.4.18-1 [197 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic/main amd64 flex amd64 2.6.4-6 [316 kB]
Get:4 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libssl1.1 amd64 1.1.1-1ubuntu2.1~18.04.6 [1301 kB]
Get:5 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 openssl amd64 1.1.1-1ubuntu2.1~18.04.6 [614 kB]
Get:6 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 ca-certificates all 20190110~18.04.1 [146 kB]
Get:7 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libkmod2 amd64 24-1ubuntu3.4 [40.1 kB]
Get:8 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 kmod amd64 24-1ubuntu3.4 [88.7 kB]
Get:9 http://archive.ubuntu.com/ubuntu bionic/main amd64 libcap2 amd64 1:2.25-1.2 [13.0 kB]
Get:10 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libelf1 amd64 0.170-0.4ubuntu0.1 [44.8 kB]
Get:11 http://archive.ubuntu.com/ubuntu bionic/main amd64 readline-common all 7.0-3 [52.9 kB]
Get:12 http://archive.ubuntu.com/ubuntu bionic/main amd64 libreadline7 amd64 7.0-3 [124 kB]
Get:13 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 sudo amd64 1.8.21p2-3ubuntu1.2 [427 kB]
Get:14 http://archive.ubuntu.com/ubuntu bionic/main amd64 xz-utils amd64 5.2.2-1.3 [83.8 kB]
Get:15 http://archive.ubuntu.com/ubuntu bionic/main amd64 libpsl5 amd64 0.19.1-5build1 [41.8 kB]
Get:16 http://archive.ubuntu.com/ubuntu bionic/main amd64 manpages all 4.15-1 [1234 kB]
Get:17 http://archive.ubuntu.com/ubuntu bionic/main amd64 publicsuffix all 20180223.1310-1 [97.6 kB]
Get:18 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 wget amd64 1.19.4-1ubuntu2.2 [316 kB]
Get:19 http://archive.ubuntu.com/ubuntu bionic/main amd64 bc amd64 1.07.1-2 [86.2 kB]
Get:20 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 binutils-common amd64 2.30-21ubuntu1~18.04.3 [196 kB]
Get:21 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libbinutils amd64 2.30-21ubuntu1~18.04.3 [488 kB]
Get:22 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 binutils-x86-64-linux-gnu amd64 2.30-21ubuntu1~18.04.3 [1839 kB]
Get:23 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 binutils amd64 2.30-21ubuntu1~18.04.3 [3388 B]
Get:24 http://archive.ubuntu.com/ubuntu bionic/main amd64 libbison-dev amd64 2:3.0.4.dfsg-1build1 [339 kB]
Get:25 http://archive.ubuntu.com/ubuntu bionic/main amd64 bison amd64 2:3.0.4.dfsg-1build1 [266 kB]
Get:26 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 gcc-7-base amd64 7.5.0-3ubuntu1~18.04 [18.3 kB]
Get:27 http://archive.ubuntu.com/ubuntu bionic/main amd64 libisl19 amd64 0.19-1 [551 kB]
Get:28 http://archive.ubuntu.com/ubuntu bionic/main amd64 libmpfr6 amd64 4.0.1-1 [243 kB]
Get:29 http://archive.ubuntu.com/ubuntu bionic/main amd64 libmpc3 amd64 1.1.0-1 [40.8 kB]
Get:30 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 cpp-7 amd64 7.5.0-3ubuntu1~18.04 [8591 kB]
Get:31 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 cpp amd64 4:7.4.0-1ubuntu2.3 [27.7 kB]
Get:32 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libcc1-0 amd64 8.4.0-1ubuntu1~18.04 [39.4 kB]
Get:33 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libgomp1 amd64 8.4.0-1ubuntu1~18.04 [76.5 kB]
Get:34 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libitm1 amd64 8.4.0-1ubuntu1~18.04 [27.9 kB]
Get:35 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libatomic1 amd64 8.4.0-1ubuntu1~18.04 [9192 B]
Get:36 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libasan4 amd64 7.5.0-3ubuntu1~18.04 [358 kB]
Get:37 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 liblsan0 amd64 8.4.0-1ubuntu1~18.04 [133 kB]
Get:38 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libtsan0 amd64 8.4.0-1ubuntu1~18.04 [288 kB]
Get:39 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libubsan0 amd64 7.5.0-3ubuntu1~18.04 [126 kB]
Get:40 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libcilkrts5 amd64 7.5.0-3ubuntu1~18.04 [42.5 kB]
Get:41 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libmpx2 amd64 8.4.0-1ubuntu1~18.04 [11.6 kB]
Get:42 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libquadmath0 amd64 8.4.0-1ubuntu1~18.04 [134 kB]
Get:43 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libgcc-7-dev amd64 7.5.0-3ubuntu1~18.04 [2378 kB]
Get:44 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 gcc-7 amd64 7.5.0-3ubuntu1~18.04 [9381 kB]
Get:45 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 gcc amd64 4:7.4.0-1ubuntu2.3 [5184 B]
Get:46 http://archive.ubuntu.com/ubuntu bionic/main amd64 libc-dev-bin amd64 2.27-3ubuntu1 [71.8 kB]
Get:47 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 linux-libc-dev amd64 4.15.0-108.109 [991 kB]
Get:48 http://archive.ubuntu.com/ubuntu bionic/main amd64 libc6-dev amd64 2.27-3ubuntu1 [2587 kB]
Get:49 http://archive.ubuntu.com/ubuntu bionic/main amd64 zlib1g-dev amd64 1:1.2.11.dfsg-0ubuntu2 [176 kB]
Get:50 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libelf-dev amd64 0.170-0.4ubuntu0.1 [57.3 kB]
Get:51 http://archive.ubuntu.com/ubuntu bionic/main amd64 libfl2 amd64 2.6.4-6 [11.4 kB]
Get:52 http://archive.ubuntu.com/ubuntu bionic/main amd64 libfl-dev amd64 2.6.4-6 [6320 B]
Get:53 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libssl-dev amd64 1.1.1-1ubuntu2.1~18.04.6 [1566 kB]
Get:54 http://archive.ubuntu.com/ubuntu bionic/main amd64 manpages-dev all 4.15-1 [2217 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 38.5 MB in 3s (11.5 MB/s)
Selecting previously unselected package libsigsegv2:amd64.
(Reading database ... 4046 files and directories currently installed.)
Preparing to unpack .../00-libsigsegv2_2.12-1_amd64.deb ...
Unpacking libsigsegv2:amd64 (2.12-1) ...
Selecting previously unselected package m4.
Preparing to unpack .../01-m4_1.4.18-1_amd64.deb ...
Unpacking m4 (1.4.18-1) ...
Selecting previously unselected package flex.
Preparing to unpack .../02-flex_2.6.4-6_amd64.deb ...
Unpacking flex (2.6.4-6) ...
Selecting previously unselected package libssl1.1:amd64.
Preparing to unpack .../03-libssl1.1_1.1.1-1ubuntu2.1~18.04.6_amd64.deb ...
Unpacking libssl1.1:amd64 (1.1.1-1ubuntu2.1~18.04.6) ...
Selecting previously unselected package openssl.
Preparing to unpack .../04-openssl_1.1.1-1ubuntu2.1~18.04.6_amd64.deb ...
Unpacking openssl (1.1.1-1ubuntu2.1~18.04.6) ...
Selecting previously unselected package ca-certificates.
Preparing to unpack .../05-ca-certificates_20190110~18.04.1_all.deb ...
Unpacking ca-certificates (20190110~18.04.1) ...
Selecting previously unselected package libkmod2:amd64.
Preparing to unpack .../06-libkmod2_24-1ubuntu3.4_amd64.deb ...
Unpacking libkmod2:amd64 (24-1ubuntu3.4) ...
Selecting previously unselected package kmod.
Preparing to unpack .../07-kmod_24-1ubuntu3.4_amd64.deb ...
Unpacking kmod (24-1ubuntu3.4) ...
Selecting previously unselected package libcap2:amd64.
Preparing to unpack .../08-libcap2_1%3a2.25-1.2_amd64.deb ...
Unpacking libcap2:amd64 (1:2.25-1.2) ...
Selecting previously unselected package libelf1:amd64.
Preparing to unpack .../09-libelf1_0.170-0.4ubuntu0.1_amd64.deb ...
Unpacking libelf1:amd64 (0.170-0.4ubuntu0.1) ...
Selecting previously unselected package readline-common.
Preparing to unpack .../10-readline-common_7.0-3_all.deb ...
Unpacking readline-common (7.0-3) ...
Selecting previously unselected package libreadline7:amd64.
Preparing to unpack .../11-libreadline7_7.0-3_amd64.deb ...
Unpacking libreadline7:amd64 (7.0-3) ...
Selecting previously unselected package sudo.
Preparing to unpack .../12-sudo_1.8.21p2-3ubuntu1.2_amd64.deb ...
Unpacking sudo (1.8.21p2-3ubuntu1.2) ...
Selecting previously unselected package xz-utils.
Preparing to unpack .../13-xz-utils_5.2.2-1.3_amd64.deb ...
Unpacking xz-utils (5.2.2-1.3) ...
Selecting previously unselected package libpsl5:amd64.
Preparing to unpack .../14-libpsl5_0.19.1-5build1_amd64.deb ...
Unpacking libpsl5:amd64 (0.19.1-5build1) ...
Selecting previously unselected package manpages.
Preparing to unpack .../15-manpages_4.15-1_all.deb ...
Unpacking manpages (4.15-1) ...
Selecting previously unselected package publicsuffix.
Preparing to unpack .../16-publicsuffix_20180223.1310-1_all.deb ...
Unpacking publicsuffix (20180223.1310-1) ...
Selecting previously unselected package wget.
Preparing to unpack .../17-wget_1.19.4-1ubuntu2.2_amd64.deb ...
Unpacking wget (1.19.4-1ubuntu2.2) ...
Selecting previously unselected package bc.
Preparing to unpack .../18-bc_1.07.1-2_amd64.deb ...
Unpacking bc (1.07.1-2) ...
Selecting previously unselected package binutils-common:amd64.
Preparing to unpack .../19-binutils-common_2.30-21ubuntu1~18.04.3_amd64.deb ...
Unpacking binutils-common:amd64 (2.30-21ubuntu1~18.04.3) ...
Selecting previously unselected package libbinutils:amd64.
Preparing to unpack .../20-libbinutils_2.30-21ubuntu1~18.04.3_amd64.deb ...
Unpacking libbinutils:amd64 (2.30-21ubuntu1~18.04.3) ...
Selecting previously unselected package binutils-x86-64-linux-gnu.
Preparing to unpack .../21-binutils-x86-64-linux-gnu_2.30-21ubuntu1~18.04.3_amd64.deb ...
Unpacking binutils-x86-64-linux-gnu (2.30-21ubuntu1~18.04.3) ...
Selecting previously unselected package binutils.
Preparing to unpack .../22-binutils_2.30-21ubuntu1~18.04.3_amd64.deb ...
Unpacking binutils (2.30-21ubuntu1~18.04.3) ...
Selecting previously unselected package libbison-dev:amd64.
Preparing to unpack .../23-libbison-dev_2%3a3.0.4.dfsg-1build1_amd64.deb ...
Unpacking libbison-dev:amd64 (2:3.0.4.dfsg-1build1) ...
Selecting previously unselected package bison.
Preparing to unpack .../24-bison_2%3a3.0.4.dfsg-1build1_amd64.deb ...
Unpacking bison (2:3.0.4.dfsg-1build1) ...
Selecting previously unselected package gcc-7-base:amd64.
Preparing to unpack .../25-gcc-7-base_7.5.0-3ubuntu1~18.04_amd64.deb ...
Unpacking gcc-7-base:amd64 (7.5.0-3ubuntu1~18.04) ...
Selecting previously unselected package libisl19:amd64.
Preparing to unpack .../26-libisl19_0.19-1_amd64.deb ...
Unpacking libisl19:amd64 (0.19-1) ...
Selecting previously unselected package libmpfr6:amd64.
Preparing to unpack .../27-libmpfr6_4.0.1-1_amd64.deb ...
Unpacking libmpfr6:amd64 (4.0.1-1) ...
Selecting previously unselected package libmpc3:amd64.
Preparing to unpack .../28-libmpc3_1.1.0-1_amd64.deb ...
Unpacking libmpc3:amd64 (1.1.0-1) ...
Selecting previously unselected package cpp-7.
Preparing to unpack .../29-cpp-7_7.5.0-3ubuntu1~18.04_amd64.deb ...
Unpacking cpp-7 (7.5.0-3ubuntu1~18.04) ...
Selecting previously unselected package cpp.
Preparing to unpack .../30-cpp_4%3a7.4.0-1ubuntu2.3_amd64.deb ...
Unpacking cpp (4:7.4.0-1ubuntu2.3) ...
Selecting previously unselected package libcc1-0:amd64.
Preparing to unpack .../31-libcc1-0_8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking libcc1-0:amd64 (8.4.0-1ubuntu1~18.04) ...
Selecting previously unselected package libgomp1:amd64.
Preparing to unpack .../32-libgomp1_8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking libgomp1:amd64 (8.4.0-1ubuntu1~18.04) ...
Selecting previously unselected package libitm1:amd64.
Preparing to unpack .../33-libitm1_8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking libitm1:amd64 (8.4.0-1ubuntu1~18.04) ...
Selecting previously unselected package libatomic1:amd64.
Preparing to unpack .../34-libatomic1_8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking libatomic1:amd64 (8.4.0-1ubuntu1~18.04) ...
Selecting previously unselected package libasan4:amd64.
Preparing to unpack .../35-libasan4_7.5.0-3ubuntu1~18.04_amd64.deb ...
Unpacking libasan4:amd64 (7.5.0-3ubuntu1~18.04) ...
Selecting previously unselected package liblsan0:amd64.
Preparing to unpack .../36-liblsan0_8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking liblsan0:amd64 (8.4.0-1ubuntu1~18.04) ...
Selecting previously unselected package libtsan0:amd64.
Preparing to unpack .../37-libtsan0_8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking libtsan0:amd64 (8.4.0-1ubuntu1~18.04) ...
Selecting previously unselected package libubsan0:amd64.
Preparing to unpack .../38-libubsan0_7.5.0-3ubuntu1~18.04_amd64.deb ...
Unpacking libubsan0:amd64 (7.5.0-3ubuntu1~18.04) ...
Selecting previously unselected package libcilkrts5:amd64.
Preparing to unpack .../39-libcilkrts5_7.5.0-3ubuntu1~18.04_amd64.deb ...
Unpacking libcilkrts5:amd64 (7.5.0-3ubuntu1~18.04) ...
Selecting previously unselected package libmpx2:amd64.
Preparing to unpack .../40-libmpx2_8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking libmpx2:amd64 (8.4.0-1ubuntu1~18.04) ...
Selecting previously unselected package libquadmath0:amd64.
Preparing to unpack .../41-libquadmath0_8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking libquadmath0:amd64 (8.4.0-1ubuntu1~18.04) ...
Selecting previously unselected package libgcc-7-dev:amd64.
Preparing to unpack .../42-libgcc-7-dev_7.5.0-3ubuntu1~18.04_amd64.deb ...
Unpacking libgcc-7-dev:amd64 (7.5.0-3ubuntu1~18.04) ...
Selecting previously unselected package gcc-7.
Preparing to unpack .../43-gcc-7_7.5.0-3ubuntu1~18.04_amd64.deb ...
Unpacking gcc-7 (7.5.0-3ubuntu1~18.04) ...
Selecting previously unselected package gcc.
Preparing to unpack .../44-gcc_4%3a7.4.0-1ubuntu2.3_amd64.deb ...
Unpacking gcc (4:7.4.0-1ubuntu2.3) ...
Selecting previously unselected package libc-dev-bin.
Preparing to unpack .../45-libc-dev-bin_2.27-3ubuntu1_amd64.deb ...
Unpacking libc-dev-bin (2.27-3ubuntu1) ...
Selecting previously unselected package linux-libc-dev:amd64.
Preparing to unpack .../46-linux-libc-dev_4.15.0-108.109_amd64.deb ...
Unpacking linux-libc-dev:amd64 (4.15.0-108.109) ...
Selecting previously unselected package libc6-dev:amd64.
Preparing to unpack .../47-libc6-dev_2.27-3ubuntu1_amd64.deb ...
Unpacking libc6-dev:amd64 (2.27-3ubuntu1) ...
Selecting previously unselected package zlib1g-dev:amd64.
Preparing to unpack .../48-zlib1g-dev_1%3a1.2.11.dfsg-0ubuntu2_amd64.deb ...
Unpacking zlib1g-dev:amd64 (1:1.2.11.dfsg-0ubuntu2) ...
Selecting previously unselected package libelf-dev:amd64.
Preparing to unpack .../49-libelf-dev_0.170-0.4ubuntu0.1_amd64.deb ...
Unpacking libelf-dev:amd64 (0.170-0.4ubuntu0.1) ...
Selecting previously unselected package libfl2:amd64.
Preparing to unpack .../50-libfl2_2.6.4-6_amd64.deb ...
Unpacking libfl2:amd64 (2.6.4-6) ...
Selecting previously unselected package libfl-dev:amd64.
Preparing to unpack .../51-libfl-dev_2.6.4-6_amd64.deb ...
Unpacking libfl-dev:amd64 (2.6.4-6) ...
Selecting previously unselected package libssl-dev:amd64.
Preparing to unpack .../52-libssl-dev_1.1.1-1ubuntu2.1~18.04.6_amd64.deb ...
Unpacking libssl-dev:amd64 (1.1.1-1ubuntu2.1~18.04.6) ...
Selecting previously unselected package manpages-dev.
Preparing to unpack .../53-manpages-dev_4.15-1_all.deb ...
Unpacking manpages-dev (4.15-1) ...
Setting up libquadmath0:amd64 (8.4.0-1ubuntu1~18.04) ...
Setting up libgomp1:amd64 (8.4.0-1ubuntu1~18.04) ...
Setting up libatomic1:amd64 (8.4.0-1ubuntu1~18.04) ...
Setting up readline-common (7.0-3) ...
Setting up manpages (4.15-1) ...
Setting up libcc1-0:amd64 (8.4.0-1ubuntu1~18.04) ...
Setting up sudo (1.8.21p2-3ubuntu1.2) ...
Setting up libsigsegv2:amd64 (2.12-1) ...
Setting up libreadline7:amd64 (7.0-3) ...
Setting up libpsl5:amd64 (0.19.1-5build1) ...
Setting up libelf1:amd64 (0.170-0.4ubuntu0.1) ...
Setting up libtsan0:amd64 (8.4.0-1ubuntu1~18.04) ...
Setting up libcap2:amd64 (1:2.25-1.2) ...
Setting up linux-libc-dev:amd64 (4.15.0-108.109) ...
Setting up libmpfr6:amd64 (4.0.1-1) ...
Setting up m4 (1.4.18-1) ...
Setting up libkmod2:amd64 (24-1ubuntu3.4) ...
Setting up liblsan0:amd64 (8.4.0-1ubuntu1~18.04) ...
Setting up gcc-7-base:amd64 (7.5.0-3ubuntu1~18.04) ...
Setting up binutils-common:amd64 (2.30-21ubuntu1~18.04.3) ...
Setting up libmpx2:amd64 (8.4.0-1ubuntu1~18.04) ...
Setting up publicsuffix (20180223.1310-1) ...
Setting up libssl1.1:amd64 (1.1.1-1ubuntu2.1~18.04.6) ...
debconf: unable to initialize frontend: Dialog
debconf: (TERM is not set, so the dialog frontend is not usable.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.26.1 /usr/local/share/perl/5.26.1 /usr/lib/x86_64-linux-gnu/perl5/5.26 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl/5.26 /usr/share/perl/5.26 /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
debconf: falling back to frontend: Teletype
Setting up xz-utils (5.2.2-1.3) ...
update-alternatives: using /usr/bin/xz to provide /usr/bin/lzma (lzma) in auto mode
update-alternatives: warning: skip creation of /usr/share/man/man1/lzma.1.gz because associated file /usr/share/man/man1/xz.1.gz (of link group lzma) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/man1/unlzma.1.gz because associated file /usr/share/man/man1/unxz.1.gz (of link group lzma) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/man1/lzcat.1.gz because associated file /usr/share/man/man1/xzcat.1.gz (of link group lzma) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/man1/lzmore.1.gz because associated file /usr/share/man/man1/xzmore.1.gz (of link group lzma) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/man1/lzless.1.gz because associated file /usr/share/man/man1/xzless.1.gz (of link group lzma) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/man1/lzdiff.1.gz because associated file /usr/share/man/man1/xzdiff.1.gz (of link group lzma) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/man1/lzcmp.1.gz because associated file /usr/share/man/man1/xzcmp.1.gz (of link group lzma) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/man1/lzgrep.1.gz because associated file /usr/share/man/man1/xzgrep.1.gz (of link group lzma) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/man1/lzegrep.1.gz because associated file /usr/share/man/man1/xzegrep.1.gz (of link group lzma) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/man1/lzfgrep.1.gz because associated file /usr/share/man/man1/xzfgrep.1.gz (of link group lzma) doesn't exist
Setting up openssl (1.1.1-1ubuntu2.1~18.04.6) ...
Setting up wget (1.19.4-1ubuntu2.2) ...
Setting up libbison-dev:amd64 (2:3.0.4.dfsg-1build1) ...
Setting up libfl2:amd64 (2.6.4-6) ...
Setting up libmpc3:amd64 (1.1.0-1) ...
Setting up libc-dev-bin (2.27-3ubuntu1) ...
Setting up bison (2:3.0.4.dfsg-1build1) ...
update-alternatives: using /usr/bin/bison.yacc to provide /usr/bin/yacc (yacc) in auto mode
update-alternatives: warning: skip creation of /usr/share/man/man1/yacc.1.gz because associated file /usr/share/man/man1/bison.yacc.1.gz (of link group yacc) doesn't exist
Setting up bc (1.07.1-2) ...
Setting up ca-certificates (20190110~18.04.1) ...
debconf: unable to initialize frontend: Dialog
debconf: (TERM is not set, so the dialog frontend is not usable.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.26.1 /usr/local/share/perl/5.26.1 /usr/lib/x86_64-linux-gnu/perl5/5.26 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl/5.26 /usr/share/perl/5.26 /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
debconf: falling back to frontend: Teletype
Updating certificates in /etc/ssl/certs...
127 added, 0 removed; done.
Setting up manpages-dev (4.15-1) ...
Setting up libc6-dev:amd64 (2.27-3ubuntu1) ...
Setting up libitm1:amd64 (8.4.0-1ubuntu1~18.04) ...
Setting up zlib1g-dev:amd64 (1:1.2.11.dfsg-0ubuntu2) ...
Setting up libisl19:amd64 (0.19-1) ...
Setting up kmod (24-1ubuntu3.4) ...
Setting up libasan4:amd64 (7.5.0-3ubuntu1~18.04) ...
Setting up libbinutils:amd64 (2.30-21ubuntu1~18.04.3) ...
Setting up flex (2.6.4-6) ...
Setting up libcilkrts5:amd64 (7.5.0-3ubuntu1~18.04) ...
Setting up libubsan0:amd64 (7.5.0-3ubuntu1~18.04) ...
Setting up libssl-dev:amd64 (1.1.1-1ubuntu2.1~18.04.6) ...
Setting up libelf-dev:amd64 (0.170-0.4ubuntu0.1) ...
Setting up libgcc-7-dev:amd64 (7.5.0-3ubuntu1~18.04) ...
Setting up cpp-7 (7.5.0-3ubuntu1~18.04) ...
Setting up libfl-dev:amd64 (2.6.4-6) ...
Setting up binutils-x86-64-linux-gnu (2.30-21ubuntu1~18.04.3) ...
Setting up cpp (4:7.4.0-1ubuntu2.3) ...
Setting up binutils (2.30-21ubuntu1~18.04.3) ...
Setting up gcc-7 (7.5.0-3ubuntu1~18.04) ...
Setting up gcc (4:7.4.0-1ubuntu2.3) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for ca-certificates (20190110~18.04.1) ...
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
Removing intermediate container e82c2391cc0e
---> 35aaaa54cb39
Step 3/4 : COPY gvisor-addon /gvisor-addon
---> 4f2d248c8802
Step 4/4 : CMD ["/gvisor-addon"]
---> Running in af3ad56e1015
Removing intermediate container af3ad56e1015
---> 93a8c861c09e
Successfully built 93a8c861c09e
Successfully tagged gcr.io/k8s-minikube/gvisor-addon:2
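
The four build steps above correspond to this Dockerfile, reconstructed verbatim from the Step 1/4 through Step 4/4 lines (the file itself is not shown in the log):

    # gvisor-addon image, as reconstructed from the build output above
    FROM ubuntu:18.04
    RUN apt-get update && apt-get install -y kmod gcc wget xz-utils libc6-dev bc libelf-dev bison flex openssl libssl-dev libidn2-0 sudo libcap2 && rm -rf /var/lib/apt/lists/*
    COPY gvisor-addon /gvisor-addon
    CMD ["/gvisor-addon"]
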
>> Starting out/e2e-linux-amd64 at Wed Jul 1 02:56:34 UTC 2020
++ test -f /home/jenkins/minikube-integration/linux-amd64-kvm2-master-2503-8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f/testout.txt
++ touch /home/jenkins/minikube-integration/linux-amd64-kvm2-master-2503-8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f/testout.txt
++ out/e2e-linux-amd64 '-minikube-start-args=--driver=kvm2 ' -test.timeout=70m -test.v -gvisor -binary=out/minikube-linux-amd64
++ tee /home/jenkins/minikube-integration/linux-amd64-kvm2-master-2503-8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f/testout.txt
Found 8 cores, limiting parallelism with --test.parallel=4
=== RUN TestDownloadOnly
=== RUN TestDownloadOnly/crio
=== RUN TestDownloadOnly/crio/v1.13.0
=== RUN TestDownloadOnly/crio/v1.18.3
=== RUN TestDownloadOnly/crio/v1.18.4-rc.0
=== RUN TestDownloadOnly/crio/DeleteAll
=== RUN TestDownloadOnly/crio/DeleteAlwaysSucceeds
=== RUN TestDownloadOnly/docker
=== RUN TestDownloadOnly/docker/v1.13.0
=== RUN TestDownloadOnly/docker/v1.18.3
=== RUN TestDownloadOnly/docker/v1.18.4-rc.0
=== RUN TestDownloadOnly/docker/DeleteAll
=== RUN TestDownloadOnly/docker/DeleteAlwaysSucceeds
=== RUN TestDownloadOnly/containerd
=== RUN TestDownloadOnly/containerd/v1.13.0
=== RUN TestDownloadOnly/containerd/v1.18.3
=== RUN TestDownloadOnly/containerd/v1.18.4-rc.0
=== RUN TestDownloadOnly/containerd/DeleteAll
=== RUN TestDownloadOnly/containerd/DeleteAlwaysSucceeds
--- PASS: TestDownloadOnly (77.72s)
--- PASS: TestDownloadOnly/crio (11.10s)
--- PASS: TestDownloadOnly/crio/v1.13.0 (4.98s)
aaa_download_only_test.go:65: (dbg) Run: out/minikube-linux-amd64 start --download-only -p crio-20200701025634-8084 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=crio --driver=kvm2
aaa_download_only_test.go:65: (dbg) Done: out/minikube-linux-amd64 start --download-only -p crio-20200701025634-8084 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=crio --driver=kvm2 : (4.980478001s)
--- PASS: TestDownloadOnly/crio/v1.18.3 (3.03s)
aaa_download_only_test.go:67: (dbg) Run: out/minikube-linux-amd64 start --download-only -p crio-20200701025634-8084 --force --alsologtostderr --kubernetes-version=v1.18.3 --container-runtime=crio --driver=kvm2
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p crio-20200701025634-8084 --force --alsologtostderr --kubernetes-version=v1.18.3 --container-runtime=crio --driver=kvm2 : (3.025520149s)
--- PASS: TestDownloadOnly/crio/v1.18.4-rc.0 (2.62s)
aaa_download_only_test.go:67: (dbg) Run: out/minikube-linux-amd64 start --download-only -p crio-20200701025634-8084 --force --alsologtostderr --kubernetes-version=v1.18.4-rc.0 --container-runtime=crio --driver=kvm2
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p crio-20200701025634-8084 --force --alsologtostderr --kubernetes-version=v1.18.4-rc.0 --container-runtime=crio --driver=kvm2 : (2.616657247s)
--- PASS: TestDownloadOnly/crio/DeleteAll (0.18s)
aaa_download_only_test.go:133: (dbg) Run: out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/crio/DeleteAlwaysSucceeds (0.15s)
aaa_download_only_test.go:145: (dbg) Run: out/minikube-linux-amd64 delete -p crio-20200701025634-8084
helpers_test.go:170: Cleaning up "crio-20200701025634-8084" profile ...
helpers_test.go:171: (dbg) Run: out/minikube-linux-amd64 delete -p crio-20200701025634-8084
--- PASS: TestDownloadOnly/docker (19.49s)
--- PASS: TestDownloadOnly/docker/v1.13.0 (6.87s)
aaa_download_only_test.go:65: (dbg) Run: out/minikube-linux-amd64 start --download-only -p docker-20200701025646-8084 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=docker --driver=kvm2
aaa_download_only_test.go:65: (dbg) Done: out/minikube-linux-amd64 start --download-only -p docker-20200701025646-8084 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=docker --driver=kvm2 : (6.867983204s)
--- PASS: TestDownloadOnly/docker/v1.18.3 (5.77s)
aaa_download_only_test.go:67: (dbg) Run: out/minikube-linux-amd64 start --download-only -p docker-20200701025646-8084 --force --alsologtostderr --kubernetes-version=v1.18.3 --container-runtime=docker --driver=kvm2
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p docker-20200701025646-8084 --force --alsologtostderr --kubernetes-version=v1.18.3 --container-runtime=docker --driver=kvm2 : (5.770595136s)
--- PASS: TestDownloadOnly/docker/v1.18.4-rc.0 (6.37s)
aaa_download_only_test.go:67: (dbg) Run: out/minikube-linux-amd64 start --download-only -p docker-20200701025646-8084 --force --alsologtostderr --kubernetes-version=v1.18.4-rc.0 --container-runtime=docker --driver=kvm2
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p docker-20200701025646-8084 --force --alsologtostderr --kubernetes-version=v1.18.4-rc.0 --container-runtime=docker --driver=kvm2 : (6.372763555s)
--- PASS: TestDownloadOnly/docker/DeleteAll (0.18s)
aaa_download_only_test.go:133: (dbg) Run: out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/docker/DeleteAlwaysSucceeds (0.15s)
aaa_download_only_test.go:145: (dbg) Run: out/minikube-linux-amd64 delete -p docker-20200701025646-8084
helpers_test.go:170: Cleaning up "docker-20200701025646-8084" profile ...
helpers_test.go:171: (dbg) Run: out/minikube-linux-amd64 delete -p docker-20200701025646-8084
--- PASS: TestDownloadOnly/containerd (47.13s)
--- PASS: TestDownloadOnly/containerd/v1.13.0 (8.55s)
aaa_download_only_test.go:65: (dbg) Run: out/minikube-linux-amd64 start --download-only -p containerd-20200701025705-8084 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=containerd --driver=kvm2
aaa_download_only_test.go:65: (dbg) Done: out/minikube-linux-amd64 start --download-only -p containerd-20200701025705-8084 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=containerd --driver=kvm2 : (8.545680308s)
--- PASS: TestDownloadOnly/containerd/v1.18.3 (10.91s)
aaa_download_only_test.go:67: (dbg) Run: out/minikube-linux-amd64 start --download-only -p containerd-20200701025705-8084 --force --alsologtostderr --kubernetes-version=v1.18.3 --container-runtime=containerd --driver=kvm2
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p containerd-20200701025705-8084 --force --alsologtostderr --kubernetes-version=v1.18.3 --container-runtime=containerd --driver=kvm2 : (10.905585979s)
--- PASS: TestDownloadOnly/containerd/v1.18.4-rc.0 (19.35s)
aaa_download_only_test.go:67: (dbg) Run: out/minikube-linux-amd64 start --download-only -p containerd-20200701025705-8084 --force --alsologtostderr --kubernetes-version=v1.18.4-rc.0 --container-runtime=containerd --driver=kvm2
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p containerd-20200701025705-8084 --force --alsologtostderr --kubernetes-version=v1.18.4-rc.0 --container-runtime=containerd --driver=kvm2 : (19.34490696s)
--- PASS: TestDownloadOnly/containerd/DeleteAll (3.04s)
aaa_download_only_test.go:133: (dbg) Run: out/minikube-linux-amd64 delete --all
aaa_download_only_test.go:133: (dbg) Done: out/minikube-linux-amd64 delete --all: (3.039439188s)
--- PASS: TestDownloadOnly/containerd/DeleteAlwaysSucceeds (2.54s)
aaa_download_only_test.go:145: (dbg) Run: out/minikube-linux-amd64 delete -p containerd-20200701025705-8084
aaa_download_only_test.go:145: (dbg) Done: out/minikube-linux-amd64 delete -p containerd-20200701025705-8084: (2.541553081s)
helpers_test.go:170: Cleaning up "containerd-20200701025705-8084" profile ...
helpers_test.go:171: (dbg) Run: out/minikube-linux-amd64 delete -p containerd-20200701025705-8084
helpers_test.go:171: (dbg) Done: out/minikube-linux-amd64 delete -p containerd-20200701025705-8084: (2.753247521s)
=== RUN TestDownloadOnlyKic
--- SKIP: TestDownloadOnlyKic (0.00s)
aaa_download_only_test.go:156: skipping, only for docker or podman driver
=== RUN TestOffline
=== RUN TestOffline/group
=== RUN TestOffline/group/docker
=== PAUSE TestOffline/group/docker
=== RUN TestOffline/group/crio
=== PAUSE TestOffline/group/crio
=== RUN TestOffline/group/containerd
=== PAUSE TestOffline/group/containerd
=== CONT TestOffline/group/docker
=== CONT TestOffline/group/containerd
=== CONT TestOffline/group/crio
--- PASS: TestOffline (253.61s)
--- PASS: TestOffline/group (0.00s)
--- PASS: TestOffline/group/containerd (230.10s)
aab_offline_test.go:53: (dbg) Run: out/minikube-linux-amd64 start -p offline-containerd-20200701025752-8084 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime containerd --driver=kvm2
aab_offline_test.go:53: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20200701025752-8084 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime containerd --driver=kvm2 : (3m49.139342662s)
helpers_test.go:170: Cleaning up "offline-containerd-20200701025752-8084" profile ...
helpers_test.go:171: (dbg) Run: out/minikube-linux-amd64 delete -p offline-containerd-20200701025752-8084
--- PASS: TestOffline/group/docker (243.19s)
aab_offline_test.go:53: (dbg) Run: out/minikube-linux-amd64 start -p offline-docker-20200701025752-8084 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime docker --driver=kvm2
aab_offline_test.go:53: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-20200701025752-8084 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime docker --driver=kvm2 : (4m2.446347584s)
helpers_test.go:170: Cleaning up "offline-docker-20200701025752-8084" profile ...
helpers_test.go:171: (dbg) Run: out/minikube-linux-amd64 delete -p offline-docker-20200701025752-8084
--- PASS: TestOffline/group/crio (253.61s)
aab_offline_test.go:53: (dbg) Run: out/minikube-linux-amd64 start -p offline-crio-20200701025752-8084 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime crio --driver=kvm2
aab_offline_test.go:53: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-20200701025752-8084 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime crio --driver=kvm2 : (4m12.338034102s)
helpers_test.go:170: Cleaning up "offline-crio-20200701025752-8084" profile ...
helpers_test.go:171: (dbg) Run: out/minikube-linux-amd64 delete -p offline-crio-20200701025752-8084
helpers_test.go:171: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-20200701025752-8084: (1.272372307s)
=== RUN TestAddons
=== RUN TestAddons/parallel
=== RUN TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== RUN TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== RUN TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== RUN TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT TestAddons/parallel/Registry
=== CONT TestAddons/parallel/HelmTiller
=== CONT TestAddons/parallel/Olm
=== CONT TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/MetricsServer
2020/07/01 03:04:36 [DEBUG] GET http://192.168.39.105:5000
--- FAIL: TestAddons (609.83s)
addons_test.go:44: (dbg) Run: out/minikube-linux-amd64 start -p addons-20200701030206-8084 --wait=false --memory=2600 --alsologtostderr --addons=ingress --addons=registry --addons=metrics-server --addons=helm-tiller --addons=olm --driver=kvm2
addons_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p addons-20200701030206-8084 --wait=false --memory=2600 --alsologtostderr --addons=ingress --addons=registry --addons=metrics-server --addons=helm-tiller --addons=olm --driver=kvm2 : (2m19.805768991s)
--- FAIL: TestAddons/parallel (0.00s)
--- SKIP: TestAddons/parallel/Olm (0.00s)
addons_test.go:334: Skipping olm test till this timeout issue is solved https://github.com/operator-framework/operator-lifecycle-manager/issues/1534#issuecomment-632342257
--- FAIL: TestAddons/parallel/Registry (12.87s)
addons_test.go:173: registry stabilized in 15.318845ms
addons_test.go:175: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:332: "registry-vdzjt" [f79eaab8-b0fe-446d-a971-cb33624725a8] Running
addons_test.go:175: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.021224498s
addons_test.go:178: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:332: "registry-proxy-7kmmq" [065fd00f-b3cd-4b35-8896-7021306fecbb] Running
addons_test.go:178: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008063588s
addons_test.go:183: (dbg) Run: kubectl --context addons-20200701030206-8084 delete po -l run=registry-test --now
addons_test.go:183: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 delete po -l run=registry-test --now: exec: "kubectl": executable file not found in $PATH (418ns)
addons_test.go:185: pre-cleanup kubectl --context addons-20200701030206-8084 delete po -l run=registry-test --now failed: exec: "kubectl": executable file not found in $PATH (not a problem)
addons_test.go:188: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:188: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exec: "kubectl": executable file not found in $PATH (116ns)
addons_test.go:190: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-20200701030206-8084 run --rm registry-test --restart=Never --image=busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exec: "kubectl": executable file not found in $PATH
addons_test.go:194: expected curl response be "HTTP/1.1 200", but got **
addons_test.go:202: (dbg) Run: out/minikube-linux-amd64 -p addons-20200701030206-8084 ip
addons_test.go:231: (dbg) Run: out/minikube-linux-amd64 -p addons-20200701030206-8084 addons disable registry --alsologtostderr -v=1
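
Note: the Registry failure above is an environment problem rather than an addon problem. Both registry pods came up healthy within about five seconds, but the test could not exercise the registry because kubectl is not installed on the Jenkins agent (exec: "kubectl": executable file not found in $PATH), matching the env: ‘kubectl’: No such file or directory line in the environment dump near the top of this log.
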
helpers_test.go:215: -----------------------post-mortem--------------------------------
helpers_test.go:232: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-20200701030206-8084 -n addons-20200701030206-8084
helpers_test.go:237: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:238: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 -p addons-20200701030206-8084 logs -n 25
helpers_test.go:240: (dbg) Done: out/minikube-linux-amd64 -p addons-20200701030206-8084 logs -n 25: (1.84531831s)
helpers_test.go:245: TestAddons/parallel/Registry logs:
-- stdout --
* ==> Docker <==
* -- Logs begin at Wed 2020-07-01 03:02:14 UTC, end at Wed 2020-07-01 03:04:38 UTC. --
* Jul 01 03:03:56 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:03:56.880573447Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:03:57 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:03:57.977854661Z" level=info msg="shim reaped" id=a2f179901974b17442fcdc5ec101768baf6d5faae9bbd904ebe16f1e48fe5759
* Jul 01 03:03:57 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:03:57.988208857Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:03:59 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:03:59.042862264Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2453f72f2d3fbbf3211d903f4e35f631dfaa2af789ad1423e4b31f4a2ba3bc0c/shim.sock" debug=false pid=5615
* Jul 01 03:03:59 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:03:59.734848751Z" level=info msg="shim reaped" id=2453f72f2d3fbbf3211d903f4e35f631dfaa2af789ad1423e4b31f4a2ba3bc0c
* Jul 01 03:03:59 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:03:59.745141379Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:04:01 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:01.601972719Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0b30ca3163d6c1a8b981eaef619f6464eff0a43c70ed443a72b3704544816c57/shim.sock" debug=false pid=5721
* Jul 01 03:04:02 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:02.457841311Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/39c3f696531d245b11df91f4c5735d60d1b881e5876c2ba18732619f1d2ea1e5/shim.sock" debug=false pid=5811
* Jul 01 03:04:02 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:02.466947173Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d8512f3c21a09573210467eb25d58e2a6c904e412a403d1d782f87f6c065a40f/shim.sock" debug=false pid=5816
* Jul 01 03:04:04 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:04.757414710Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9d9dffb8b723aeb9f1f601740ed20469534698926a65c547ae1838a89c9cb6d5/shim.sock" debug=false pid=5974
* Jul 01 03:04:06 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:06.038697805Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/47f59199ff8b2542ebb3b8f2df0ec48ce6ba2ab3da623f4dd2e66e8c3d3c34f2/shim.sock" debug=false pid=6057
* Jul 01 03:04:06 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:06.076207506Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/740d3a15da583d6a597ee2c1e43b2c18ba003d2c1b42751671fa3775d5d84d5d/shim.sock" debug=false pid=6073
* Jul 01 03:04:12 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:12.897699882Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/49554f7da428d44645a5ce3bd4f5629c994661cba159835c2bf649bd91a91e87/shim.sock" debug=false pid=6233
* Jul 01 03:04:25 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:25.272317806Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d920f932f040fce34d1156f22795d7e86a77132c51b63cd48afc626354ef6c2a/shim.sock" debug=false pid=6400
* Jul 01 03:04:28 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:28.374226784Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ed93f8777ade8caff27f7b4453aafc2e44589369b308f24e02956d0a482dd602/shim.sock" debug=false pid=6590
* Jul 01 03:04:29 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:29.264160126Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6815cdaec6e0a262bfafc87fa88eab7fcf036190a70f1d5687986c531a42fb9d/shim.sock" debug=false pid=6628
* Jul 01 03:04:29 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:29.953751693Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/636b722f9b872ae2b392ff7f4518d3777f898a782f30fde093c50c90dc789b8f/shim.sock" debug=false pid=6668
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.316757798Z" level=info msg="shim reaped" id=9d9dffb8b723aeb9f1f601740ed20469534698926a65c547ae1838a89c9cb6d5
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.317492282Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.389801388Z" level=info msg="shim reaped" id=49554f7da428d44645a5ce3bd4f5629c994661cba159835c2bf649bd91a91e87
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.408845696Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.935730704Z" level=info msg="shim reaped" id=4f853a2c4fb3a27395c5bbe725cc9e9ff5ac2fb56ce858cb3eae48e6c6f83ccb
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.936306640Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.994717950Z" level=info msg="shim reaped" id=4c6a2b7b2735c289a1fc97f3cc2dac77b43b57c8bd297228be350ad881f72f46
* Jul 01 03:04:38 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:38.010043710Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
* 636b722f9b872 quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 9 seconds ago Running packageserver 0 740d3a15da583
* 6815cdaec6e0a quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 9 seconds ago Running packageserver 0 47f59199ff8b2
* ed93f8777ade8 quay.io/operator-framework/upstream-community-operators@sha256:4bdd1485bffb217bfd06dccd62a899dcce8bc57af971568ba995176c8b1aa464 10 seconds ago Running registry-server 0 d8512f3c21a09
* d920f932f040f quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:251e733bf41cdf726092e079d32eed51791746560fff4d59cf067508ed635287 13 seconds ago Running controller 0 eefa25270d8a6
* 49554f7da428d gcr.io/google_containers/kube-registry-proxy@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da 26 seconds ago Exited registry-proxy 0 4f853a2c4fb3a
* 9d9dffb8b723a registry.hub.docker.com/library/registry@sha256:8be26f81ffea54106bae012c6f349df70f4d5e7e2ec01b143c46e2c03b9e551d 34 seconds ago Exited registry 0 4c6a2b7b2735c
* 39c3f696531d2 quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 36 seconds ago Running olm-operator 0 4a26317d80253
* 0b30ca3163d6c quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 37 seconds ago Running catalog-operator 0 87e032f179b67
* 1a30822b4f9be jettech/kube-webhook-certgen@sha256:c6f018afe5dfce02110b332ea75bb846144e65d4993c7534886d8505a6960357 42 seconds ago Exited patch 0 a2f179901974b
* f34af38da2a24 k8s.gcr.io/metrics-server-amd64@sha256:49a9f12f7067d11f42c803dbe61ed2c1299959ad85cb315b25ff7eef8e6b8892 43 seconds ago Running metrics-server 0 1b8c0d094c10b
* 1bc07123ace4f gcr.io/kubernetes-helm/tiller@sha256:59b6200a822ddb18577ca7120fb644a3297635f47d92db521836b39b74ad19e8 45 seconds ago Running tiller 0 e8c2c1e0e0a62
* d6a261bca5222 67da37a9a360e 47 seconds ago Running coredns 0 d11b454b968e3
* 9e4cfc5738e04 jettech/kube-webhook-certgen@sha256:c6f018afe5dfce02110b332ea75bb846144e65d4993c7534886d8505a6960357 47 seconds ago Exited create 0 de667a00fefb0
* 94232379c1581 4689081edb103 52 seconds ago Running storage-provisioner 0 7896015c69c73
* 40c9a46cf08ab 3439b7546f29b 56 seconds ago Running kube-proxy 0 8df7717a34531
* a8673db5ff2ad 76216c34ed0c7 About a minute ago Running kube-scheduler 0 69d249b151f2d
* 663dada323e98 303ce5db0e90d About a minute ago Running etcd 0 4777c338fb836
* 24d686838dec2 da26705ccb4b5 About a minute ago Running kube-controller-manager 0 ff24f8e852b09
* b7ced5cccc0a4 7e28efa976bd1 About a minute ago Running kube-apiserver 0 1456a98fec87b
*
* ==> coredns [d6a261bca522] <==
* .:53
* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
* CoreDNS-1.6.7
* linux/amd64, go1.13.6, da7f65b
*
* ==> describe nodes <==
* Name: addons-20200701030206-8084
* Roles: master
* Labels: beta.kubernetes.io/arch=amd64
* beta.kubernetes.io/os=linux
* kubernetes.io/arch=amd64
* kubernetes.io/hostname=addons-20200701030206-8084
* kubernetes.io/os=linux
* minikube.k8s.io/commit=8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f
* minikube.k8s.io/name=addons-20200701030206-8084
* minikube.k8s.io/updated_at=2020_07_01T03_03_34_0700
* minikube.k8s.io/version=v1.12.0-beta.0
* node-role.kubernetes.io/master=
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
* node.alpha.kubernetes.io/ttl: 0
* volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp: Wed, 01 Jul 2020 03:03:31 +0000
* Taints: <none>
* Unschedulable: false
* Lease:
* HolderIdentity: addons-20200701030206-8084
* AcquireTime: <unset>
* RenewTime: Wed, 01 Jul 2020 03:04:35 +0000
* Conditions:
* Type Status LastHeartbeatTime LastTransitionTime Reason Message
* ---- ------ ----------------- ------------------ ------ -------
* MemoryPressure False Wed, 01 Jul 2020 03:04:36 +0000 Wed, 01 Jul 2020 03:03:27 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
* DiskPressure False Wed, 01 Jul 2020 03:04:36 +0000 Wed, 01 Jul 2020 03:03:27 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
* PIDPressure False Wed, 01 Jul 2020 03:04:36 +0000 Wed, 01 Jul 2020 03:03:27 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
* Ready True Wed, 01 Jul 2020 03:04:36 +0000 Wed, 01 Jul 2020 03:03:45 +0000 KubeletReady kubelet is posting ready status
* Addresses:
* InternalIP: 192.168.39.105
* Hostname: addons-20200701030206-8084
* Capacity:
* cpu: 2
* ephemeral-storage: 16954224Ki
* hugepages-2Mi: 0
* memory: 2470872Ki
* pods: 110
* Allocatable:
* cpu: 2
* ephemeral-storage: 16954224Ki
* hugepages-2Mi: 0
* memory: 2470872Ki
* pods: 110
* System Info:
* Machine ID: 11d7f8acaa014dd1a88f3c5ba725298e
* System UUID: 11d7f8ac-aa01-4dd1-a88f-3c5ba725298e
* Boot ID: 3a2b8acb-8700-4c04-87f6-71cbb4607c14
* Kernel Version: 4.19.107
* OS Image: Buildroot 2019.02.10
* Operating System: linux
* Architecture: amd64
* Container Runtime Version: docker://19.3.8
* Kubelet Version: v1.18.3
* Kube-Proxy Version: v1.18.3
* Non-terminated Pods: (17 in total)
* Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
* --------- ---- ------------ ---------- --------------- ------------- ---
* kube-system coredns-66bff467f8-hj7n4 100m (5%) 0 (0%) 70Mi (2%) 170Mi (7%) 57s
* kube-system etcd-addons-20200701030206-8084 0 (0%) 0 (0%) 0 (0%) 0 (0%) 62s
* kube-system ingress-nginx-controller-7bb4c67d67-fkjkd 100m (5%) 0 (0%) 90Mi (3%) 0 (0%) 57s
* kube-system kube-apiserver-addons-20200701030206-8084 250m (12%) 0 (0%) 0 (0%) 0 (0%) 62s
* kube-system kube-controller-manager-addons-20200701030206-8084 200m (10%) 0 (0%) 0 (0%) 0 (0%) 62s
* kube-system kube-proxy-8bljr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57s
* kube-system kube-scheduler-addons-20200701030206-8084 100m (5%) 0 (0%) 0 (0%) 0 (0%) 62s
* kube-system metrics-server-7bc6d75975-qxr52 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57s | |
* kube-system registry-proxy-7kmmq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 53s | |
* kube-system registry-vdzjt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57s | |
* kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 62s | |
* kube-system tiller-deploy-78ff886c54-7kcct 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57s | |
* olm catalog-operator-86f777cc59-n2z95 10m (0%) 0 (0%) 80Mi (3%) 0 (0%) 57s | |
* olm olm-operator-5fd48d8cd4-sh5bk 10m (0%) 0 (0%) 160Mi (6%) 0 (0%) 57s | |
* olm operatorhubio-catalog-9h9sw 10m (0%) 100m (5%) 50Mi (2%) 100Mi (4%) 37s | |
* olm packageserver-fc86cd5d4-djgms 10m (0%) 0 (0%) 50Mi (2%) 0 (0%) 33s | |
* olm packageserver-fc86cd5d4-wgfqr 10m (0%) 0 (0%) 50Mi (2%) 0 (0%) 33s | |
* Allocated resources: | |
* (Total limits may be over 100 percent, i.e., overcommitted.) | |
* Resource Requests Limits | |
* -------- -------- ------ | |
* cpu 800m (40%) 100m (5%) | |
* memory 550Mi (22%) 270Mi (11%) | |
* ephemeral-storage 0 (0%) 0 (0%) | |
* hugepages-2Mi 0 (0%) 0 (0%) | |
* Events: | |
* Type Reason Age From Message | |
* ---- ------ ---- ---- ------- | |
* Normal Starting 63s kubelet, addons-20200701030206-8084 Starting kubelet. | |
* Normal NodeHasSufficientMemory 63s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeHasSufficientMemory | |
* Normal NodeHasNoDiskPressure 63s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeHasNoDiskPressure | |
* Normal NodeHasSufficientPID 63s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeHasSufficientPID | |
* Normal NodeAllocatableEnforced 62s kubelet, addons-20200701030206-8084 Updated Node Allocatable limit across pods | |
* Normal Starting 56s kube-proxy, addons-20200701030206-8084 Starting kube-proxy. | |
* Normal NodeReady 53s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeReady | |
* | |
* ==> dmesg <== | |
* "trace_clock=local" | |
* on the kernel command line | |
* [ +0.000071] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2 | |
* [ +1.825039] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument | |
* [ +0.005760] systemd-fstab-generator[1147]: Ignoring "noauto" for root device | |
* [ +0.006598] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling. | |
* [ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.) | |
* [ +1.628469] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack. | |
* [ +0.314006] vboxguest: loading out-of-tree module taints kernel. | |
* [ +0.005336] vboxguest: PCI device not found, probably running on physical hardware. | |
* [ +4.401958] systemd-fstab-generator[1990]: Ignoring "noauto" for root device | |
* [ +0.072204] systemd-fstab-generator[2000]: Ignoring "noauto" for root device | |
* [ +7.410511] systemd-fstab-generator[2190]: Ignoring "noauto" for root device | |
* [Jul 1 03:03] kauditd_printk_skb: 65 callbacks suppressed | |
* [ +0.256868] systemd-fstab-generator[2359]: Ignoring "noauto" for root device | |
* [ +0.295426] systemd-fstab-generator[2430]: Ignoring "noauto" for root device | |
* [ +1.557931] systemd-fstab-generator[2640]: Ignoring "noauto" for root device | |
* [ +3.130682] kauditd_printk_skb: 107 callbacks suppressed | |
* [ +8.920986] systemd-fstab-generator[3723]: Ignoring "noauto" for root device | |
* [ +8.276427] kauditd_printk_skb: 32 callbacks suppressed | |
* [ +8.301918] kauditd_printk_skb: 71 callbacks suppressed | |
* [ +5.560259] kauditd_printk_skb: 29 callbacks suppressed | |
* [Jul 1 03:04] kauditd_printk_skb: 11 callbacks suppressed | |
* [ +17.054203] NFSD: Unable to end grace period: -110 | |
* [ +15.586696] kauditd_printk_skb: 29 callbacks suppressed | |
* | |
* ==> etcd [663dada323e9] <== | |
* raft2020/07/01 03:03:28 INFO: 38dbae10e7efb596 became leader at term 2 | |
* raft2020/07/01 03:03:28 INFO: raft.node: 38dbae10e7efb596 elected leader 38dbae10e7efb596 at term 2 | |
* 2020-07-01 03:03:28.332386 I | etcdserver: setting up the initial cluster version to 3.4 | |
* 2020-07-01 03:03:28.332551 I | etcdserver: published {Name:addons-20200701030206-8084 ClientURLs:[https://192.168.39.105:2379]} to cluster f45b5855e490ef48 | |
* 2020-07-01 03:03:28.332600 I | embed: ready to serve client requests | |
* 2020-07-01 03:03:28.333537 I | embed: serving client requests on 127.0.0.1:2379 | |
* 2020-07-01 03:03:28.334586 I | embed: ready to serve client requests | |
* 2020-07-01 03:03:28.337193 I | embed: serving client requests on 192.168.39.105:2379 | |
* 2020-07-01 03:03:28.338344 N | etcdserver/membership: set the initial cluster version to 3.4 | |
* 2020-07-01 03:03:28.338411 I | etcdserver/api: enabled capabilities for version 3.4 | |
* 2020-07-01 03:04:04.323170 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/clusterserviceversions/olm/packageserver\" " with result "range_response_count:1 size:9305" took too long (105.645771ms) to execute | |
* 2020-07-01 03:04:05.236139 W | etcdserver: read-only range request "key:\"/registry/endpointslices/olm/v1-packages-operators-coreos-com-gfbjh\" " with result "range_response_count:1 size:953" took too long (127.168286ms) to execute | |
* 2020-07-01 03:04:05.805401 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (119.574479ms) to execute | |
* 2020-07-01 03:04:05.808506 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (122.545256ms) to execute | |
* 2020-07-01 03:04:05.820836 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/clusterserviceversions/olm/packageserver\" " with result "range_response_count:1 size:9775" took too long (135.01854ms) to execute | |
* 2020-07-01 03:04:08.142775 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (111.648032ms) to execute | |
* 2020-07-01 03:04:08.143088 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (231.508641ms) to execute | |
* 2020-07-01 03:04:08.143309 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/catalogsources/\" range_end:\"/registry/operators.coreos.com/catalogsources0\" " with result "range_response_count:1 size:2019" took too long (159.192904ms) to execute | |
* 2020-07-01 03:04:08.143552 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (113.041301ms) to execute | |
* 2020-07-01 03:04:16.739873 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59653" took too long (208.500076ms) to execute | |
* 2020-07-01 03:04:16.740802 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (228.630795ms) to execute | |
* 2020-07-01 03:04:23.380208 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (268.088725ms) to execute | |
* 2020-07-01 03:04:29.198339 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59989" took too long (110.334524ms) to execute | |
* 2020-07-01 03:04:29.198868 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59989" took too long (115.991158ms) to execute | |
* 2020-07-01 03:04:29.199825 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59989" took too long (116.97369ms) to execute | |
* | |
* ==> kernel <== | |
* 03:04:38 up 2 min, 0 users, load average: 2.78, 1.08, 0.41 | |
* Linux addons-20200701030206-8084 4.19.107 #1 SMP Thu May 28 15:07:17 PDT 2020 x86_64 GNU/Linux | |
* PRETTY_NAME="Buildroot 2019.02.10" | |
* | |
* ==> kube-apiserver [b7ced5cccc0a] <== | |
* ERROR $root.definitions.com.coreos.operators.v1alpha1.ClusterServiceVersion.properties.spec.properties.install.properties.spec.properties.deployments.items.<array>.properties.spec.properties.template.properties.spec.properties.initContainers.items.<array>.properties.readinessProbe.properties.tcpSocket.properties.port has invalid property: anyOf | |
* ERROR $root.definitions.com.coreos.operators.v1alpha1.ClusterServiceVersion.properties.spec.properties.install.properties.spec.properties.deployments.items.<array>.properties.spec.properties.template.properties.spec.properties.initContainers.items.<array>.properties.startupProbe.properties.httpGet.properties.port has invalid property: anyOf | |
* ERROR $root.definitions.com.coreos.operators.v1alpha1.ClusterServiceVersion.properties.spec.properties.install.properties.spec.properties.deployments.items.<array>.properties.spec.properties.template.properties.spec.properties.initContainers.items.<array>.properties.startupProbe.properties.tcpSocket.properties.port has invalid property: anyOf | |
* I0701 03:03:39.786250 1 client.go:361] parsed scheme: "endpoint" | |
* I0701 03:03:39.786317 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] | |
* I0701 03:03:39.809953 1 controller.go:606] quota admission added evaluator for: clusterserviceversions.operators.coreos.com | |
* I0701 03:03:39.838120 1 client.go:361] parsed scheme: "endpoint" | |
* I0701 03:03:39.838409 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] | |
* I0701 03:03:39.851362 1 controller.go:606] quota admission added evaluator for: catalogsources.operators.coreos.com | |
* I0701 03:03:41.148400 1 controller.go:606] quota admission added evaluator for: replicasets.apps | |
* I0701 03:03:41.407486 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps | |
* W0701 03:03:42.028314 1 handler_proxy.go:102] no RequestInfo found in the context | |
* E0701 03:03:42.028373 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable | |
* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]] | |
* I0701 03:03:42.028384 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. | |
* E0701 03:03:57.879055 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist | |
* I0701 03:03:57.879070 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. | |
* I0701 03:04:01.803924 1 client.go:361] parsed scheme: "endpoint" | |
* I0701 03:04:01.803984 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] | |
* I0701 03:04:01.886189 1 client.go:361] parsed scheme: "endpoint" | |
* I0701 03:04:01.886333 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] | |
* W0701 03:04:06.880277 1 handler_proxy.go:102] no RequestInfo found in the context | |
* E0701 03:04:06.880349 1 controller.go:114] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable | |
* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]] | |
* I0701 03:04:06.880361 1 controller.go:127] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue. | |
* | |
* ==> kube-controller-manager [24d686838dec] <== | |
* I0701 03:03:41.514749 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage | |
* I0701 03:03:41.520691 1 shared_informer.go:230] Caches are synced for garbage collector | |
* E0701 03:03:41.534440 1 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request | |
* E0701 03:03:41.972255 1 memcache.go:111] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request | |
* I0701 03:03:46.000600 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"registry-proxy", UID:"fe3c5a0b-c38c-4314-9a05-53037ff158f0", APIVersion:"apps/v1", ResourceVersion:"508", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: registry-proxy-7kmmq | |
* I0701 03:03:50.871083 1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode. | |
* I0701 03:03:52.669012 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"ingress-nginx-admission-create", UID:"1ae488a6-abe8-4bc0-965b-c398118daf32", APIVersion:"batch/v1", ResourceVersion:"387", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed | |
* I0701 03:03:57.805105 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"ingress-nginx-admission-patch", UID:"f6bb03c0-6ca4-47f0-900a-b1e273dbb951", APIVersion:"batch/v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed | |
* I0701 03:04:05.000820 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"olm", Name:"packageserver", UID:"4f1d1f78-eb9b-4bf3-9753-09b07a182891", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set packageserver-fc86cd5d4 to 2 | |
* I0701 03:04:05.059501 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"olm", Name:"packageserver-fc86cd5d4", UID:"c9874dfe-0029-4742-a3d8-1f87cb754bb4", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-fc86cd5d4-djgms | |
* I0701 03:04:05.170272 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"olm", Name:"packageserver-fc86cd5d4", UID:"c9874dfe-0029-4742-a3d8-1f87cb754bb4", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-fc86cd5d4-wgfqr | |
* E0701 03:04:11.881078 1 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: packages.operators.coreos.com/v1: the server is currently unable to handle the request | |
* I0701 03:04:11.881240 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for catalogsources.operators.coreos.com | |
* I0701 03:04:11.881307 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for operatorgroups.operators.coreos.com | |
* I0701 03:04:11.881325 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for subscriptions.operators.coreos.com | |
* I0701 03:04:11.881345 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for installplans.operators.coreos.com | |
* I0701 03:04:11.881361 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for clusterserviceversions.operators.coreos.com | |
* I0701 03:04:11.881406 1 shared_informer.go:223] Waiting for caches to sync for resource quota | |
* I0701 03:04:11.981723 1 shared_informer.go:230] Caches are synced for resource quota | |
* I0701 03:04:13.123021 1 request.go:621] Throttling request took 1.047575349s, request: GET:https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s | |
* W0701 03:04:13.926487 1 garbagecollector.go:644] failed to discover some groups: map[packages.operators.coreos.com/v1:the server is currently unable to handle the request] | |
* E0701 03:04:14.127948 1 memcache.go:206] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request | |
* E0701 03:04:14.428219 1 memcache.go:111] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request | |
* I0701 03:04:14.429209 1 shared_informer.go:223] Waiting for caches to sync for garbage collector | |
* I0701 03:04:14.429275 1 shared_informer.go:230] Caches are synced for garbage collector | |
* | |
* ==> kube-proxy [40c9a46cf08a] <== | |
* W0701 03:03:42.853505 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy | |
* I0701 03:03:42.861890 1 node.go:136] Successfully retrieved node IP: 192.168.39.105 | |
* I0701 03:03:42.861937 1 server_others.go:186] Using iptables Proxier. | |
* W0701 03:03:42.861945 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined | |
* I0701 03:03:42.861949 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local | |
* I0701 03:03:42.862522 1 server.go:583] Version: v1.18.3 | |
* I0701 03:03:42.863107 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072 | |
* I0701 03:03:42.863131 1 conntrack.go:52] Setting nf_conntrack_max to 131072 | |
* I0701 03:03:42.863538 1 conntrack.go:83] Setting conntrack hashsize to 32768 | |
* I0701 03:03:42.867910 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 | |
* I0701 03:03:42.868306 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 | |
* I0701 03:03:42.871109 1 config.go:315] Starting service config controller | |
* I0701 03:03:42.871148 1 shared_informer.go:223] Waiting for caches to sync for service config | |
* I0701 03:03:42.871165 1 config.go:133] Starting endpoints config controller | |
* I0701 03:03:42.871173 1 shared_informer.go:223] Waiting for caches to sync for endpoints config | |
* I0701 03:03:42.971416 1 shared_informer.go:230] Caches are synced for endpoints config | |
* I0701 03:03:42.971523 1 shared_informer.go:230] Caches are synced for service config | |
* | |
* ==> kube-scheduler [a8673db5ff2a] <== | |
* W0701 03:03:31.650803 1 authentication.go:40] Authentication is disabled | |
* I0701 03:03:31.650814 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 | |
* I0701 03:03:31.652329 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259 | |
* I0701 03:03:31.652574 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file | |
* I0701 03:03:31.652711 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file | |
* I0701 03:03:31.652730 1 tlsconfig.go:240] Starting DynamicServingCertificateController | |
* E0701 03:03:31.657008 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope | |
* E0701 03:03:31.658164 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope | |
* E0701 03:03:31.658324 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" | |
* E0701 03:03:31.658888 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope | |
* E0701 03:03:31.659056 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope | |
* E0701 03:03:31.659357 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope | |
* E0701 03:03:31.659504 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope | |
* E0701 03:03:31.659723 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope | |
* E0701 03:03:31.659789 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope | |
* E0701 03:03:32.465153 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope | |
* E0701 03:03:32.497519 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope | |
* E0701 03:03:32.559891 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" | |
* I0701 03:03:35.752931 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file | |
* E0701 03:03:40.948371 1 factory.go:503] pod: kube-system/ingress-nginx-admission-create-59b72 is already present in the active queue | |
* E0701 03:03:40.967858 1 factory.go:503] pod: kube-system/ingress-nginx-admission-patch-f8zdn is already present in the active queue | |
* E0701 03:03:41.332123 1 factory.go:503] pod: kube-system/tiller-deploy-78ff886c54-7kcct is already present in the active queue | |
* E0701 03:03:41.345197 1 factory.go:503] pod: kube-system/metrics-server-7bc6d75975-qxr52 is already present in the active queue | |
* E0701 03:03:41.367475 1 factory.go:503] pod: olm/olm-operator-5fd48d8cd4-sh5bk is already present in the active queue | |
* E0701 03:03:41.389016 1 factory.go:503] pod: kube-system/ingress-nginx-controller-7bb4c67d67-fkjkd is already present in the active queue | |
* | |
* ==> kubelet <== | |
* -- Logs begin at Wed 2020-07-01 03:02:14 UTC, end at Wed 2020-07-01 03:04:38 UTC. -- | |
* Jul 01 03:04:02 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:02.977707 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/olm-operator-5fd48d8cd4-sh5bk through plugin: invalid network status for | |
* Jul 01 03:04:02 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:02.992739 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for | |
* Jul 01 03:04:05 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:05.046781 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/registry-vdzjt through plugin: invalid network status for | |
* Jul 01 03:04:05 addons-20200701030206-8084 kubelet[3731]: I0701 03:04:05.126552 3731 topology_manager.go:233] [topologymanager] Topology Admit Handler | |
* Jul 01 03:04:05 addons-20200701030206-8084 kubelet[3731]: I0701 03:04:05.287977 3731 topology_manager.go:233] [topologymanager] Topology Admit Handler | |
* Jul 01 03:04:05 addons-20200701030206-8084 kubelet[3731]: I0701 03:04:05.312864 3731 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5ed5b503-410d-47fe-b817-f526f3df1542-apiservice-cert") pod "packageserver-fc86cd5d4-djgms" (UID: "5ed5b503-410d-47fe-b817-f526f3df1542") | |
* Jul 01 03:04:05 addons-20200701030206-8084 kubelet[3731]: I0701 03:04:05.312902 3731 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "olm-operator-serviceaccount-token-p7zp2" (UniqueName: "kubernetes.io/secret/5ed5b503-410d-47fe-b817-f526f3df1542-olm-operator-serviceaccount-token-p7zp2") pod "packageserver-fc86cd5d4-djgms" (UID: "5ed5b503-410d-47fe-b817-f526f3df1542") | |
* Jul 01 03:04:05 addons-20200701030206-8084 kubelet[3731]: I0701 03:04:05.413832 3731 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "olm-operator-serviceaccount-token-p7zp2" (UniqueName: "kubernetes.io/secret/f6a28a72-ed19-4c82-ad04-87a8fc6fdc75-olm-operator-serviceaccount-token-p7zp2") pod "packageserver-fc86cd5d4-wgfqr" (UID: "f6a28a72-ed19-4c82-ad04-87a8fc6fdc75") | |
* Jul 01 03:04:05 addons-20200701030206-8084 kubelet[3731]: I0701 03:04:05.413923 3731 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/f6a28a72-ed19-4c82-ad04-87a8fc6fdc75-apiservice-cert") pod "packageserver-fc86cd5d4-wgfqr" (UID: "f6a28a72-ed19-4c82-ad04-87a8fc6fdc75") | |
* Jul 01 03:04:06 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:06.418476 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/packageserver-fc86cd5d4-djgms through plugin: invalid network status for | |
* Jul 01 03:04:06 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:06.425914 3731 pod_container_deletor.go:77] Container "47f59199ff8b2542ebb3b8f2df0ec48ce6ba2ab3da623f4dd2e66e8c3d3c34f2" not found in pod's containers | |
* Jul 01 03:04:06 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:06.440427 3731 pod_container_deletor.go:77] Container "740d3a15da583d6a597ee2c1e43b2c18ba003d2c1b42751671fa3775d5d84d5d" not found in pod's containers | |
* Jul 01 03:04:06 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:06.443913 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/packageserver-fc86cd5d4-wgfqr through plugin: invalid network status for | |
* Jul 01 03:04:07 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:07.457557 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/packageserver-fc86cd5d4-wgfqr through plugin: invalid network status for | |
* Jul 01 03:04:07 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:07.463213 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/packageserver-fc86cd5d4-djgms through plugin: invalid network status for | |
* Jul 01 03:04:13 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:13.563226 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/registry-proxy-7kmmq through plugin: invalid network status for | |
* Jul 01 03:04:25 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:25.730292 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/ingress-nginx-controller-7bb4c67d67-fkjkd through plugin: invalid network status for | |
* Jul 01 03:04:28 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:28.779848 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for | |
* Jul 01 03:04:29 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:29.811228 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/packageserver-fc86cd5d4-djgms through plugin: invalid network status for | |
* Jul 01 03:04:30 addons-20200701030206-8084 kubelet[3731]: W0701 03:04:30.833048 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/packageserver-fc86cd5d4-wgfqr through plugin: invalid network status for | |
* Jul 01 03:04:38 addons-20200701030206-8084 kubelet[3731]: I0701 03:04:38.202932 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9d9dffb8b723aeb9f1f601740ed20469534698926a65c547ae1838a89c9cb6d5 | |
* Jul 01 03:04:38 addons-20200701030206-8084 kubelet[3731]: I0701 03:04:38.265162 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 49554f7da428d44645a5ce3bd4f5629c994661cba159835c2bf649bd91a91e87 | |
* Jul 01 03:04:38 addons-20200701030206-8084 kubelet[3731]: I0701 03:04:38.336890 3731 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-8hvxk" (UniqueName: "kubernetes.io/secret/065fd00f-b3cd-4b35-8896-7021306fecbb-default-token-8hvxk") pod "065fd00f-b3cd-4b35-8896-7021306fecbb" (UID: "065fd00f-b3cd-4b35-8896-7021306fecbb") | |
* Jul 01 03:04:38 addons-20200701030206-8084 kubelet[3731]: I0701 03:04:38.346186 3731 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/065fd00f-b3cd-4b35-8896-7021306fecbb-default-token-8hvxk" (OuterVolumeSpecName: "default-token-8hvxk") pod "065fd00f-b3cd-4b35-8896-7021306fecbb" (UID: "065fd00f-b3cd-4b35-8896-7021306fecbb"). InnerVolumeSpecName "default-token-8hvxk". PluginName "kubernetes.io/secret", VolumeGidValue "" | |
* Jul 01 03:04:38 addons-20200701030206-8084 kubelet[3731]: I0701 03:04:38.437405 3731 reconciler.go:319] Volume detached for volume "default-token-8hvxk" (UniqueName: "kubernetes.io/secret/065fd00f-b3cd-4b35-8896-7021306fecbb-default-token-8hvxk") on node "addons-20200701030206-8084" DevicePath "" | |
* | |
* ==> storage-provisioner [94232379c158] <== | |
-- /stdout -- | |
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20200701030206-8084 -n addons-20200701030206-8084 | |
helpers_test.go:254: (dbg) Run: kubectl --context addons-20200701030206-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running | |
helpers_test.go:254: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: exec: "kubectl": executable file not found in $PATH (313ns) | |
helpers_test.go:256: kubectl --context addons-20200701030206-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: exec: "kubectl": executable file not found in $PATH | |
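Every post-mortem failure above reduces to the same cause: the test helpers shell out to kubectl, which is not on the Jenkins agent's PATH. A minimal sketch of one possible fix on the Debian agent, assuming it can reach the official release bucket and that v1.18.3 (the cluster version shown in the logs) is the right client version to pin:

  # Sketch, not part of the original job: put a matching kubectl on PATH.
  curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.18.3/bin/linux/amd64/kubectl
  sudo install -m 0755 kubectl /usr/local/bin/kubectl
  kubectl version --client    # sanity check; should report v1.18.3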
--- FAIL: TestAddons/parallel/HelmTiller (139.99s) | |
addons_test.go:293: tiller-deploy stabilized in 18.029826ms | |
addons_test.go:295: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ... | |
helpers_test.go:332: "tiller-deploy-78ff886c54-7kcct" [40b7a3ba-bbb3-4355-8399-0e9570a4d0c8] Running | |
addons_test.go:295: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.019518201s | |
addons_test.go:310: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version | |
addons_test.go:310: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exec: "kubectl": executable file not found in $PATH (336ns) | |
addons_test.go:310: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version | |
addons_test.go:310: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exec: "kubectl": executable file not found in $PATH (501ns) | |
addons_test.go:310: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version | |
addons_test.go:310: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exec: "kubectl": executable file not found in $PATH (424ns) | |
addons_test.go:310: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version | |
addons_test.go:310: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exec: "kubectl": executable file not found in $PATH (443ns) | |
addons_test.go:310: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version | |
addons_test.go:310: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exec: "kubectl": executable file not found in $PATH (849ns) | |
addons_test.go:310: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version | |
addons_test.go:310: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exec: "kubectl": executable file not found in $PATH (442ns) | |
addons_test.go:310: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version | |
addons_test.go:310: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exec: "kubectl": executable file not found in $PATH (475ns) | |
addons_test.go:310: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version | |
addons_test.go:310: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exec: "kubectl": executable file not found in $PATH (436ns) | |
addons_test.go:310: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version | |
addons_test.go:310: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exec: "kubectl": executable file not found in $PATH (433ns) | |
addons_test.go:310: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version | |
addons_test.go:310: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exec: "kubectl": executable file not found in $PATH (460ns) | |
addons_test.go:310: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version | |
addons_test.go:310: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exec: "kubectl": executable file not found in $PATH (1.283µs) | |
addons_test.go:310: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version | |
addons_test.go:310: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exec: "kubectl": executable file not found in $PATH (462ns) | |
addons_test.go:310: (dbg) Run: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version | |
addons_test.go:310: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exec: "kubectl": executable file not found in $PATH (1.302µs) | |
addons_test.go:324: failed checking helm tiller: exec: "kubectl": executable file not found in $PATH | |
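Note the exec timings in the retries above (313ns up to ~1.3µs): the kubectl binary is never actually spawned; the PATH lookup fails before exec, so retrying cannot succeed. A one-line guard that could run before the suite (a sketch, not part of the job):

  # Fail fast if kubectl is missing instead of burning the 6m retry budget.
  command -v kubectl >/dev/null 2>&1 || { echo 'kubectl not found in PATH' >&2; exit 1; }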
addons_test.go:327: (dbg) Run: out/minikube-linux-amd64 -p addons-20200701030206-8084 addons disable helm-tiller --alsologtostderr -v=1 | |
helpers_test.go:215: -----------------------post-mortem-------------------------------- | |
helpers_test.go:232: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-20200701030206-8084 -n addons-20200701030206-8084 | |
helpers_test.go:237: <<< TestAddons/parallel/HelmTiller FAILED: start of post-mortem logs <<< | |
helpers_test.go:238: ======> post-mortem[TestAddons/parallel/HelmTiller]: minikube logs <====== | |
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 -p addons-20200701030206-8084 logs -n 25 | |
helpers_test.go:240: (dbg) Done: out/minikube-linux-amd64 -p addons-20200701030206-8084 logs -n 25: (1.124447778s) | |
helpers_test.go:245: TestAddons/parallel/HelmTiller logs: | |
-- stdout -- | |
* ==> Docker <== | |
* -- Logs begin at Wed 2020-07-01 03:02:14 UTC, end at Wed 2020-07-01 03:06:45 UTC. -- | |
* Jul 01 03:04:29 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:29.264160126Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6815cdaec6e0a262bfafc87fa88eab7fcf036190a70f1d5687986c531a42fb9d/shim.sock" debug=false pid=6628 | |
* Jul 01 03:04:29 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:29.953751693Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/636b722f9b872ae2b392ff7f4518d3777f898a782f30fde093c50c90dc789b8f/shim.sock" debug=false pid=6668 | |
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.316757798Z" level=info msg="shim reaped" id=9d9dffb8b723aeb9f1f601740ed20469534698926a65c547ae1838a89c9cb6d5 | |
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.317492282Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.389801388Z" level=info msg="shim reaped" id=49554f7da428d44645a5ce3bd4f5629c994661cba159835c2bf649bd91a91e87 | |
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.408845696Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.935730704Z" level=info msg="shim reaped" id=4f853a2c4fb3a27395c5bbe725cc9e9ff5ac2fb56ce858cb3eae48e6c6f83ccb | |
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.936306640Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.994717950Z" level=info msg="shim reaped" id=4c6a2b7b2735c289a1fc97f3cc2dac77b43b57c8bd297228be350ad881f72f46 | |
* Jul 01 03:04:38 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:38.010043710Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* Jul 01 03:04:43 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:43.808158458Z" level=info msg="shim reaped" id=ed93f8777ade8caff27f7b4453aafc2e44589369b308f24e02956d0a482dd602 | |
* Jul 01 03:04:43 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:43.817866003Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* Jul 01 03:04:44 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:44.401425335Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ddb8a5980fb5b94077adae6392ab6acf22e22db7bb906787cb0e27ad0b2f15a7/shim.sock" debug=false pid=7303 | |
* Jul 01 03:05:06 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:06.772431532Z" level=info msg="shim reaped" id=ddb8a5980fb5b94077adae6392ab6acf22e22db7bb906787cb0e27ad0b2f15a7 | |
* Jul 01 03:05:06 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:06.782075375Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* Jul 01 03:05:27 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:27.725993923Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/71c43e9c50fe593f8f99accc3700632353f8b367aa99cd5b86a635bbc77b53f8/shim.sock" debug=false pid=7691 | |
* Jul 01 03:05:48 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:48.089305645Z" level=info msg="shim reaped" id=71c43e9c50fe593f8f99accc3700632353f8b367aa99cd5b86a635bbc77b53f8 | |
* Jul 01 03:05:48 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:48.098796476Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* Jul 01 03:06:15 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:15.747261169Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ccc6490d8f7f46427cfbd5deeccaf8d7cf83cc45ecc655e80a5b76b3ef7bba53/shim.sock" debug=false pid=8047 | |
* Jul 01 03:06:43 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:43.805068045Z" level=info msg="shim reaped" id=ccc6490d8f7f46427cfbd5deeccaf8d7cf83cc45ecc655e80a5b76b3ef7bba53 | |
* Jul 01 03:06:43 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:43.815307819Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* Jul 01 03:06:44 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:44.987579019Z" level=info msg="shim reaped" id=1bc07123ace4fe46946f8fdf0fd0dadf60278caa4383a10b25433a954068b080 | |
* Jul 01 03:06:44 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:44.997749719Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* Jul 01 03:06:45 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:45.139852993Z" level=info msg="shim reaped" id=e8c2c1e0e0a62503a8ed73783cc2af78489b9bad9fe471ada17aac4e7bfd938e | |
* Jul 01 03:06:45 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:45.150300631Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* | |
* ==> container status <== | |
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID | |
* ccc6490d8f7f4 65fedb276e53e 30 seconds ago Exited registry-server 3 d8512f3c21a09 | |
* 636b722f9b872 quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 2 minutes ago Running packageserver 0 740d3a15da583 | |
* 6815cdaec6e0a quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 2 minutes ago Running packageserver 0 47f59199ff8b2 | |
* d920f932f040f quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:251e733bf41cdf726092e079d32eed51791746560fff4d59cf067508ed635287 2 minutes ago Running controller 0 eefa25270d8a6 | |
* 39c3f696531d2 quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 2 minutes ago Running olm-operator 0 4a26317d80253 | |
* 0b30ca3163d6c quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 2 minutes ago Running catalog-operator 0 87e032f179b67 | |
* 1a30822b4f9be jettech/kube-webhook-certgen@sha256:c6f018afe5dfce02110b332ea75bb846144e65d4993c7534886d8505a6960357 2 minutes ago Exited patch 0 a2f179901974b | |
* f34af38da2a24 k8s.gcr.io/metrics-server-amd64@sha256:49a9f12f7067d11f42c803dbe61ed2c1299959ad85cb315b25ff7eef8e6b8892 2 minutes ago Running metrics-server 0 1b8c0d094c10b | |
* 1bc07123ace4f gcr.io/kubernetes-helm/tiller@sha256:59b6200a822ddb18577ca7120fb644a3297635f47d92db521836b39b74ad19e8 2 minutes ago Exited tiller 0 e8c2c1e0e0a62 | |
* d6a261bca5222 67da37a9a360e 2 minutes ago Running coredns 0 d11b454b968e3 | |
* 9e4cfc5738e04 jettech/kube-webhook-certgen@sha256:c6f018afe5dfce02110b332ea75bb846144e65d4993c7534886d8505a6960357 2 minutes ago Exited create 0 de667a00fefb0 | |
* 94232379c1581 4689081edb103 2 minutes ago Running storage-provisioner 0 7896015c69c73 | |
* 40c9a46cf08ab 3439b7546f29b 3 minutes ago Running kube-proxy 0 8df7717a34531 | |
* a8673db5ff2ad 76216c34ed0c7 3 minutes ago Running kube-scheduler 0 69d249b151f2d | |
* 663dada323e98 303ce5db0e90d 3 minutes ago Running etcd 0 4777c338fb836 | |
* 24d686838dec2 da26705ccb4b5 3 minutes ago Running kube-controller-manager 0 ff24f8e852b09 | |
* b7ced5cccc0a4 7e28efa976bd1 3 minutes ago Running kube-apiserver 0 1456a98fec87b | |
* | |
* ==> coredns [d6a261bca522] <== | |
* .:53 | |
* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7 | |
* CoreDNS-1.6.7 | |
* linux/amd64, go1.13.6, da7f65b | |
* | |
* ==> describe nodes <== | |
* Name: addons-20200701030206-8084 | |
* Roles: master | |
* Labels: beta.kubernetes.io/arch=amd64 | |
* beta.kubernetes.io/os=linux | |
* kubernetes.io/arch=amd64 | |
* kubernetes.io/hostname=addons-20200701030206-8084 | |
* kubernetes.io/os=linux | |
* minikube.k8s.io/commit=8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f | |
* minikube.k8s.io/name=addons-20200701030206-8084 | |
* minikube.k8s.io/updated_at=2020_07_01T03_03_34_0700 | |
* minikube.k8s.io/version=v1.12.0-beta.0 | |
* node-role.kubernetes.io/master= | |
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock | |
* node.alpha.kubernetes.io/ttl: 0 | |
* volumes.kubernetes.io/controller-managed-attach-detach: true | |
* CreationTimestamp: Wed, 01 Jul 2020 03:03:31 +0000 | |
* Taints: <none> | |
* Unschedulable: false | |
* Lease: | |
* HolderIdentity: addons-20200701030206-8084 | |
* AcquireTime: <unset> | |
* RenewTime: Wed, 01 Jul 2020 03:06:35 +0000 | |
* Conditions: | |
* Type Status LastHeartbeatTime LastTransitionTime Reason Message | |
* ---- ------ ----------------- ------------------ ------ ------- | |
* MemoryPressure False Wed, 01 Jul 2020 03:04:36 +0000 Wed, 01 Jul 2020 03:03:27 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available | |
* DiskPressure False Wed, 01 Jul 2020 03:04:36 +0000 Wed, 01 Jul 2020 03:03:27 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure | |
* PIDPressure False Wed, 01 Jul 2020 03:04:36 +0000 Wed, 01 Jul 2020 03:03:27 +0000 KubeletHasSufficientPID kubelet has sufficient PID available | |
* Ready True Wed, 01 Jul 2020 03:04:36 +0000 Wed, 01 Jul 2020 03:03:45 +0000 KubeletReady kubelet is posting ready status | |
* Addresses: | |
* InternalIP: 192.168.39.105 | |
* Hostname: addons-20200701030206-8084 | |
* Capacity: | |
* cpu: 2 | |
* ephemeral-storage: 16954224Ki | |
* hugepages-2Mi: 0 | |
* memory: 2470872Ki | |
* pods: 110 | |
* Allocatable: | |
* cpu: 2 | |
* ephemeral-storage: 16954224Ki | |
* hugepages-2Mi: 0 | |
* memory: 2470872Ki | |
* pods: 110 | |
* System Info: | |
* Machine ID: 11d7f8acaa014dd1a88f3c5ba725298e | |
* System UUID: 11d7f8ac-aa01-4dd1-a88f-3c5ba725298e | |
* Boot ID: 3a2b8acb-8700-4c04-87f6-71cbb4607c14 | |
* Kernel Version: 4.19.107 | |
* OS Image: Buildroot 2019.02.10 | |
* Operating System: linux | |
* Architecture: amd64 | |
* Container Runtime Version: docker://19.3.8 | |
* Kubelet Version: v1.18.3 | |
* Kube-Proxy Version: v1.18.3 | |
* Non-terminated Pods: (15 in total) | |
* Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE | |
* --------- ---- ------------ ---------- --------------- ------------- --- | |
* kube-system coredns-66bff467f8-hj7n4 100m (5%) 0 (0%) 70Mi (2%) 170Mi (7%) 3m4s | |
* kube-system etcd-addons-20200701030206-8084 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m9s | |
* kube-system ingress-nginx-controller-7bb4c67d67-fkjkd 100m (5%) 0 (0%) 90Mi (3%) 0 (0%) 3m4s | |
* kube-system kube-apiserver-addons-20200701030206-8084 250m (12%) 0 (0%) 0 (0%) 0 (0%) 3m9s | |
* kube-system kube-controller-manager-addons-20200701030206-8084 200m (10%) 0 (0%) 0 (0%) 0 (0%) 3m9s | |
* kube-system kube-proxy-8bljr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m4s | |
* kube-system kube-scheduler-addons-20200701030206-8084 100m (5%) 0 (0%) 0 (0%) 0 (0%) 3m9s | |
* kube-system metrics-server-7bc6d75975-qxr52 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m4s | |
* kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m9s | |
* kube-system tiller-deploy-78ff886c54-7kcct 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m4s | |
* olm catalog-operator-86f777cc59-n2z95 10m (0%) 0 (0%) 80Mi (3%) 0 (0%) 3m4s | |
* olm olm-operator-5fd48d8cd4-sh5bk 10m (0%) 0 (0%) 160Mi (6%) 0 (0%) 3m4s | |
* olm operatorhubio-catalog-9h9sw 10m (0%) 100m (5%) 50Mi (2%) 100Mi (4%) 2m44s | |
* olm packageserver-fc86cd5d4-djgms 10m (0%) 0 (0%) 50Mi (2%) 0 (0%) 2m40s | |
* olm packageserver-fc86cd5d4-wgfqr 10m (0%) 0 (0%) 50Mi (2%) 0 (0%) 2m40s | |
* Allocated resources: | |
* (Total limits may be over 100 percent, i.e., overcommitted.) | |
* Resource Requests Limits | |
* -------- -------- ------ | |
* cpu 800m (40%) 100m (5%) | |
* memory 550Mi (22%) 270Mi (11%) | |
* ephemeral-storage 0 (0%) 0 (0%) | |
* hugepages-2Mi 0 (0%) 0 (0%) | |
* Events: | |
* Type Reason Age From Message | |
* ---- ------ ---- ---- ------- | |
* Normal Starting 3m10s kubelet, addons-20200701030206-8084 Starting kubelet. | |
* Normal NodeHasSufficientMemory 3m10s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeHasSufficientMemory | |
* Normal NodeHasNoDiskPressure 3m10s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeHasNoDiskPressure | |
* Normal NodeHasSufficientPID 3m10s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeHasSufficientPID | |
* Normal NodeAllocatableEnforced 3m9s kubelet, addons-20200701030206-8084 Updated Node Allocatable limit across pods | |
* Normal Starting 3m3s kube-proxy, addons-20200701030206-8084 Starting kube-proxy. | |
* Normal NodeReady 3m kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeReady | |
* | |
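Note on the node description above: the VM has only 2 CPUs and ~2.4Gi of memory, and of the 15 non-terminated pods, the olm/operatorhubio-catalog container is the only workload with a memory limit as small as 100Mi. That limit is what the kernel enforces in the dmesg output that follows.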
* ==> dmesg <== | |
* [ +0.000002] ? finish_task_switch+0x6a/0x270 | |
* [ +0.000001] mem_cgroup_try_charge+0x81/0x170 | |
* [ +0.000002] __add_to_page_cache_locked+0x5f/0x200 | |
* [ +0.000025] add_to_page_cache_lru+0x45/0xe0 | |
* [ +0.000002] generic_file_read_iter+0x77f/0x9b0 | |
* [ +0.000002] ? _cond_resched+0x10/0x40 | |
* [ +0.000004] ? __inode_security_revalidate+0x43/0x60 | |
* [ +0.000001] do_iter_readv_writev+0x16b/0x190 | |
* [ +0.000002] do_iter_read+0xc3/0x170 | |
* [ +0.000005] ovl_read_iter+0xb1/0x100 [overlay] | |
* [ +0.000002] __vfs_read+0x109/0x170 | |
* [ +0.000002] vfs_read+0x84/0x130 | |
* [ +0.000001] ksys_pread64+0x6c/0x80 | |
* [ +0.000003] do_syscall_64+0x49/0x110 | |
* [ +0.000001] entry_SYSCALL_64_after_hwframe+0x44/0xa9 | |
* [ +0.000001] RIP: 0033:0xe1204f | |
* [ +0.000001] Code: 0f 05 48 83 f8 da 75 08 4c 89 c0 48 89 d6 0f 05 c3 48 89 f8 4d 89 c2 48 89 f7 4d 89 c8 48 89 d6 4c 8b 4c 24 08 48 89 ca 0f 05 <c3> e9 e1 ff ff ff 41 54 49 89 f0 55 53 89 d3 85 c9 74 05 b9 80 00 | |
* [ +0.000001] RSP: 002b:00007f369435c888 EFLAGS: 00000246 ORIG_RAX: 0000000000000011 | |
* [ +0.000001] RAX: ffffffffffffffda RBX: 0000000000e20000 RCX: 0000000000e1204f | |
* [ +0.000001] RDX: 0000000000001000 RSI: 0000000006448448 RDI: 0000000000000025 | |
* [ +0.000000] RBP: 0000000000001000 R08: 0000000000000000 R09: 0000000000000000 | |
* [ +0.000001] R10: 0000000000e20000 R11: 0000000000000246 R12: 0000000000001000 | |
* [ +0.000000] R13: 0000000006448448 R14: 0000000006448448 R15: 00000000039a0340 | |
* [ +0.000195] Memory cgroup out of memory: Kill process 8065 (registry-server) score 2044 or sacrifice child | |
* [ +0.000044] Killed process 8065 (registry-server) total-vm:188040kB, anon-rss:96604kB, file-rss:14388kB, shmem-rss:0kB | |
* | |
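The dmesg trace above is a memory-cgroup OOM kill: the kernel terminated registry-server (pid 8065, ~94MiB anon RSS) inside the operatorhubio-catalog pod, consistent with that container's 100Mi limit from the node description. A hedged way to confirm, assuming kubectl were available on this agent (the test output below shows it was not):

  # Hypothetical follow-up; pod name taken from the log above.
  kubectl -n olm get pod operatorhubio-catalog-9h9sw \
    -o jsonpath='{.spec.containers[0].resources.limits.memory}{"\n"}'   # expect 100Mi
  kubectl -n olm get pod operatorhubio-catalog-9h9sw \
    -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'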
* ==> etcd [663dada323e9] <== | |
* raft2020/07/01 03:03:28 INFO: 38dbae10e7efb596 became leader at term 2 | |
* raft2020/07/01 03:03:28 INFO: raft.node: 38dbae10e7efb596 elected leader 38dbae10e7efb596 at term 2 | |
* 2020-07-01 03:03:28.332386 I | etcdserver: setting up the initial cluster version to 3.4 | |
* 2020-07-01 03:03:28.332551 I | etcdserver: published {Name:addons-20200701030206-8084 ClientURLs:[https://192.168.39.105:2379]} to cluster f45b5855e490ef48 | |
* 2020-07-01 03:03:28.332600 I | embed: ready to serve client requests | |
* 2020-07-01 03:03:28.333537 I | embed: serving client requests on 127.0.0.1:2379 | |
* 2020-07-01 03:03:28.334586 I | embed: ready to serve client requests | |
* 2020-07-01 03:03:28.337193 I | embed: serving client requests on 192.168.39.105:2379 | |
* 2020-07-01 03:03:28.338344 N | etcdserver/membership: set the initial cluster version to 3.4 | |
* 2020-07-01 03:03:28.338411 I | etcdserver/api: enabled capabilities for version 3.4 | |
* 2020-07-01 03:04:04.323170 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/clusterserviceversions/olm/packageserver\" " with result "range_response_count:1 size:9305" took too long (105.645771ms) to execute | |
* 2020-07-01 03:04:05.236139 W | etcdserver: read-only range request "key:\"/registry/endpointslices/olm/v1-packages-operators-coreos-com-gfbjh\" " with result "range_response_count:1 size:953" took too long (127.168286ms) to execute | |
* 2020-07-01 03:04:05.805401 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (119.574479ms) to execute | |
* 2020-07-01 03:04:05.808506 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (122.545256ms) to execute | |
* 2020-07-01 03:04:05.820836 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/clusterserviceversions/olm/packageserver\" " with result "range_response_count:1 size:9775" took too long (135.01854ms) to execute | |
* 2020-07-01 03:04:08.142775 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (111.648032ms) to execute | |
* 2020-07-01 03:04:08.143088 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (231.508641ms) to execute | |
* 2020-07-01 03:04:08.143309 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/catalogsources/\" range_end:\"/registry/operators.coreos.com/catalogsources0\" " with result "range_response_count:1 size:2019" took too long (159.192904ms) to execute | |
* 2020-07-01 03:04:08.143552 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (113.041301ms) to execute | |
* 2020-07-01 03:04:16.739873 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59653" took too long (208.500076ms) to execute | |
* 2020-07-01 03:04:16.740802 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (228.630795ms) to execute | |
* 2020-07-01 03:04:23.380208 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (268.088725ms) to execute | |
* 2020-07-01 03:04:29.198339 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59989" took too long (110.334524ms) to execute | |
* 2020-07-01 03:04:29.198868 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59989" took too long (115.991158ms) to execute | |
* 2020-07-01 03:04:29.199825 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59989" took too long (116.97369ms) to execute | |
* | |
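The etcd warnings above are slow read-only range requests (roughly 100-270ms against etcd's default 100ms warning threshold), which is common on a small 2-CPU KVM guest under test load rather than a sign of corruption. A sketch to check backend health from inside the VM, assuming minikube's usual certificate layout (paths may differ):

  ETCDCTL_API=3 etcdctl \
    --endpoints=https://192.168.39.105:2379 \
    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
    --cert=/var/lib/minikube/certs/etcd/server.crt \
    --key=/var/lib/minikube/certs/etcd/server.key \
    endpoint status --write-out=table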
* ==> kernel <== | |
* 03:06:45 up 4 min, 0 users, load average: 0.45, 0.74, 0.36 | |
* Linux addons-20200701030206-8084 4.19.107 #1 SMP Thu May 28 15:07:17 PDT 2020 x86_64 GNU/Linux | |
* PRETTY_NAME="Buildroot 2019.02.10" | |
* | |
* ==> kube-apiserver [b7ced5cccc0a] <== | |
* ERROR $root.definitions.com.coreos.operators.v1alpha1.ClusterServiceVersion.properties.spec.properties.install.properties.spec.properties.deployments.items.<array>.properties.spec.properties.template.properties.spec.properties.initContainers.items.<array>.properties.startupProbe.properties.tcpSocket.properties.port has invalid property: anyOf | |
* I0701 03:03:39.786250 1 client.go:361] parsed scheme: "endpoint" | |
* I0701 03:03:39.786317 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] | |
* I0701 03:03:39.809953 1 controller.go:606] quota admission added evaluator for: clusterserviceversions.operators.coreos.com | |
* I0701 03:03:39.838120 1 client.go:361] parsed scheme: "endpoint" | |
* I0701 03:03:39.838409 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] | |
* I0701 03:03:39.851362 1 controller.go:606] quota admission added evaluator for: catalogsources.operators.coreos.com | |
* I0701 03:03:41.148400 1 controller.go:606] quota admission added evaluator for: replicasets.apps | |
* I0701 03:03:41.407486 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps | |
* W0701 03:03:42.028314 1 handler_proxy.go:102] no RequestInfo found in the context | |
* E0701 03:03:42.028373 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable | |
* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]] | |
* I0701 03:03:42.028384 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. | |
* E0701 03:03:57.879055 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist | |
* I0701 03:03:57.879070 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. | |
* I0701 03:04:01.803924 1 client.go:361] parsed scheme: "endpoint" | |
* I0701 03:04:01.803984 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] | |
* I0701 03:04:01.886189 1 client.go:361] parsed scheme: "endpoint" | |
* I0701 03:04:01.886333 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] | |
* W0701 03:04:06.880277 1 handler_proxy.go:102] no RequestInfo found in the context | |
* E0701 03:04:06.880349 1 controller.go:114] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable | |
* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]] | |
* I0701 03:04:06.880361 1 controller.go:127] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue. | |
* E0701 03:04:57.883519 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist | |
* I0701 03:04:57.883592 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. | |
* | |
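The repeating OpenAPI failures above mean the aggregated APIs (metrics-server's v1beta1.metrics.k8s.io and OLM's v1.packages.operators.coreos.com) were not yet serving when the apiserver tried to fetch their specs, so the aggregation controller backs off and requeues. One way to check whether the APIServices ever became available, assuming kubectl access:

  kubectl get apiservice v1beta1.metrics.k8s.io v1.packages.operators.coreos.com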
* ==> kube-controller-manager [24d686838dec] <== | |
* I0701 03:03:46.000600 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"registry-proxy", UID:"fe3c5a0b-c38c-4314-9a05-53037ff158f0", APIVersion:"apps/v1", ResourceVersion:"508", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: registry-proxy-7kmmq | |
* I0701 03:03:50.871083 1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode. | |
* I0701 03:03:52.669012 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"ingress-nginx-admission-create", UID:"1ae488a6-abe8-4bc0-965b-c398118daf32", APIVersion:"batch/v1", ResourceVersion:"387", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed | |
* I0701 03:03:57.805105 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"ingress-nginx-admission-patch", UID:"f6bb03c0-6ca4-47f0-900a-b1e273dbb951", APIVersion:"batch/v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed | |
* I0701 03:04:05.000820 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"olm", Name:"packageserver", UID:"4f1d1f78-eb9b-4bf3-9753-09b07a182891", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set packageserver-fc86cd5d4 to 2 | |
* I0701 03:04:05.059501 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"olm", Name:"packageserver-fc86cd5d4", UID:"c9874dfe-0029-4742-a3d8-1f87cb754bb4", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-fc86cd5d4-djgms | |
* I0701 03:04:05.170272 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"olm", Name:"packageserver-fc86cd5d4", UID:"c9874dfe-0029-4742-a3d8-1f87cb754bb4", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-fc86cd5d4-wgfqr | |
* E0701 03:04:11.881078 1 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: packages.operators.coreos.com/v1: the server is currently unable to handle the request | |
* I0701 03:04:11.881240 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for catalogsources.operators.coreos.com | |
* I0701 03:04:11.881307 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for operatorgroups.operators.coreos.com | |
* I0701 03:04:11.881325 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for subscriptions.operators.coreos.com | |
* I0701 03:04:11.881345 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for installplans.operators.coreos.com | |
* I0701 03:04:11.881361 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for clusterserviceversions.operators.coreos.com | |
* I0701 03:04:11.881406 1 shared_informer.go:223] Waiting for caches to sync for resource quota | |
* I0701 03:04:11.981723 1 shared_informer.go:230] Caches are synced for resource quota | |
* I0701 03:04:13.123021 1 request.go:621] Throttling request took 1.047575349s, request: GET:https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s | |
* W0701 03:04:13.926487 1 garbagecollector.go:644] failed to discover some groups: map[packages.operators.coreos.com/v1:the server is currently unable to handle the request] | |
* E0701 03:04:14.127948 1 memcache.go:206] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request | |
* E0701 03:04:14.428219 1 memcache.go:111] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request | |
* I0701 03:04:14.429209 1 shared_informer.go:223] Waiting for caches to sync for garbage collector | |
* I0701 03:04:14.429275 1 shared_informer.go:230] Caches are synced for garbage collector | |
* E0701 03:04:39.175318 1 clusterroleaggregation_controller.go:181] olm-operators-edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "olm-operators-edit": the object has been modified; please apply your changes to the latest version and try again | |
* E0701 03:04:39.185205 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again | |
* E0701 03:04:39.186128 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again | |
* E0701 03:04:39.204080 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again | |
* | |
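The clusterroleaggregation_controller errors above ("the object has been modified") are optimistic-concurrency conflicts: the aggregation controller and OLM were updating the admin/edit ClusterRoles at the same time, the stale write was rejected, and the controller retries. They are expected noise during OLM installation, not failures.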
* ==> kube-proxy [40c9a46cf08a] <== | |
* W0701 03:03:42.853505 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy | |
* I0701 03:03:42.861890 1 node.go:136] Successfully retrieved node IP: 192.168.39.105 | |
* I0701 03:03:42.861937 1 server_others.go:186] Using iptables Proxier. | |
* W0701 03:03:42.861945 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined | |
* I0701 03:03:42.861949 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local | |
* I0701 03:03:42.862522 1 server.go:583] Version: v1.18.3 | |
* I0701 03:03:42.863107 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072 | |
* I0701 03:03:42.863131 1 conntrack.go:52] Setting nf_conntrack_max to 131072 | |
* I0701 03:03:42.863538 1 conntrack.go:83] Setting conntrack hashsize to 32768 | |
* I0701 03:03:42.867910 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 | |
* I0701 03:03:42.868306 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 | |
* I0701 03:03:42.871109 1 config.go:315] Starting service config controller | |
* I0701 03:03:42.871148 1 shared_informer.go:223] Waiting for caches to sync for service config | |
* I0701 03:03:42.871165 1 config.go:133] Starting endpoints config controller | |
* I0701 03:03:42.871173 1 shared_informer.go:223] Waiting for caches to sync for endpoints config | |
* I0701 03:03:42.971416 1 shared_informer.go:230] Caches are synced for endpoints config | |
* I0701 03:03:42.971523 1 shared_informer.go:230] Caches are synced for service config | |
* | |
* ==> kube-scheduler [a8673db5ff2a] <== | |
* W0701 03:03:31.650803 1 authentication.go:40] Authentication is disabled | |
* I0701 03:03:31.650814 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 | |
* I0701 03:03:31.652329 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259 | |
* I0701 03:03:31.652574 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file | |
* I0701 03:03:31.652711 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file | |
* I0701 03:03:31.652730 1 tlsconfig.go:240] Starting DynamicServingCertificateController | |
* E0701 03:03:31.657008 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope | |
* E0701 03:03:31.658164 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope | |
* E0701 03:03:31.658324 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" | |
* E0701 03:03:31.658888 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope | |
* E0701 03:03:31.659056 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope | |
* E0701 03:03:31.659357 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope | |
* E0701 03:03:31.659504 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope | |
* E0701 03:03:31.659723 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope | |
* E0701 03:03:31.659789 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope | |
* E0701 03:03:32.465153 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope | |
* E0701 03:03:32.497519 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope | |
* E0701 03:03:32.559891 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" | |
* I0701 03:03:35.752931 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file | |
* E0701 03:03:40.948371 1 factory.go:503] pod: kube-system/ingress-nginx-admission-create-59b72 is already present in the active queue | |
* E0701 03:03:40.967858 1 factory.go:503] pod: kube-system/ingress-nginx-admission-patch-f8zdn is already present in the active queue | |
* E0701 03:03:41.332123 1 factory.go:503] pod: kube-system/tiller-deploy-78ff886c54-7kcct is already present in the active queue | |
* E0701 03:03:41.345197 1 factory.go:503] pod: kube-system/metrics-server-7bc6d75975-qxr52 is already present in the active queue | |
* E0701 03:03:41.367475 1 factory.go:503] pod: olm/olm-operator-5fd48d8cd4-sh5bk is already present in the active queue | |
* E0701 03:03:41.389016 1 factory.go:503] pod: kube-system/ingress-nginx-controller-7bb4c67d67-fkjkd is already present in the active queue | |
* | |
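The "forbidden" list errors at kube-scheduler startup are a boot-time race: the scheduler begins listing resources before its RBAC bindings have propagated, and the errors stop once its caches sync (03:03:35 above). The "already present in the active queue" messages are likewise benign scheduling retries.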
* ==> kubelet <== | |
* -- Logs begin at Wed 2020-07-01 03:02:14 UTC, end at Wed 2020-07-01 03:06:45 UTC. -- | |
* Jul 01 03:05:27 addons-20200701030206-8084 kubelet[3731]: W0701 03:05:27.859750 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for | |
* Jul 01 03:05:29 addons-20200701030206-8084 kubelet[3731]: W0701 03:05:29.166691 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for | |
* Jul 01 03:05:48 addons-20200701030206-8084 kubelet[3731]: W0701 03:05:48.447519 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for | |
* Jul 01 03:05:48 addons-20200701030206-8084 kubelet[3731]: I0701 03:05:48.453068 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 71c43e9c50fe593f8f99accc3700632353f8b367aa99cd5b86a635bbc77b53f8 | |
* Jul 01 03:05:48 addons-20200701030206-8084 kubelet[3731]: E0701 03:05:48.453371 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 20s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)" | |
* Jul 01 03:05:48 addons-20200701030206-8084 kubelet[3731]: I0701 03:05:48.453959 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ddb8a5980fb5b94077adae6392ab6acf22e22db7bb906787cb0e27ad0b2f15a7 | |
* Jul 01 03:05:49 addons-20200701030206-8084 kubelet[3731]: W0701 03:05:49.464107 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for | |
* Jul 01 03:05:52 addons-20200701030206-8084 kubelet[3731]: I0701 03:05:52.774538 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 71c43e9c50fe593f8f99accc3700632353f8b367aa99cd5b86a635bbc77b53f8 | |
* Jul 01 03:05:52 addons-20200701030206-8084 kubelet[3731]: E0701 03:05:52.775578 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 20s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)" | |
* Jul 01 03:06:03 addons-20200701030206-8084 kubelet[3731]: I0701 03:06:03.630022 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 71c43e9c50fe593f8f99accc3700632353f8b367aa99cd5b86a635bbc77b53f8 | |
* Jul 01 03:06:03 addons-20200701030206-8084 kubelet[3731]: E0701 03:06:03.630323 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 20s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)" | |
* Jul 01 03:06:15 addons-20200701030206-8084 kubelet[3731]: I0701 03:06:15.630056 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 71c43e9c50fe593f8f99accc3700632353f8b367aa99cd5b86a635bbc77b53f8 | |
* Jul 01 03:06:15 addons-20200701030206-8084 kubelet[3731]: W0701 03:06:15.768744 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for | |
* Jul 01 03:06:17 addons-20200701030206-8084 kubelet[3731]: W0701 03:06:17.009313 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for | |
* Jul 01 03:06:44 addons-20200701030206-8084 kubelet[3731]: W0701 03:06:44.338520 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for | |
* Jul 01 03:06:44 addons-20200701030206-8084 kubelet[3731]: I0701 03:06:44.347309 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 71c43e9c50fe593f8f99accc3700632353f8b367aa99cd5b86a635bbc77b53f8 | |
* Jul 01 03:06:44 addons-20200701030206-8084 kubelet[3731]: I0701 03:06:44.347710 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ccc6490d8f7f46427cfbd5deeccaf8d7cf83cc45ecc655e80a5b76b3ef7bba53 | |
* Jul 01 03:06:44 addons-20200701030206-8084 kubelet[3731]: E0701 03:06:44.350850 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 40s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)" | |
* Jul 01 03:06:45 addons-20200701030206-8084 kubelet[3731]: W0701 03:06:45.360047 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for | |
* Jul 01 03:06:45 addons-20200701030206-8084 kubelet[3731]: I0701 03:06:45.371917 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 1bc07123ace4fe46946f8fdf0fd0dadf60278caa4383a10b25433a954068b080 | |
* Jul 01 03:06:45 addons-20200701030206-8084 kubelet[3731]: I0701 03:06:45.411378 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 1bc07123ace4fe46946f8fdf0fd0dadf60278caa4383a10b25433a954068b080 | |
* Jul 01 03:06:45 addons-20200701030206-8084 kubelet[3731]: E0701 03:06:45.414996 3731 remote_runtime.go:295] ContainerStatus "1bc07123ace4fe46946f8fdf0fd0dadf60278caa4383a10b25433a954068b080" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 1bc07123ace4fe46946f8fdf0fd0dadf60278caa4383a10b25433a954068b080 | |
* Jul 01 03:06:45 addons-20200701030206-8084 kubelet[3731]: I0701 03:06:45.419789 3731 reconciler.go:196] operationExecutor.UnmountVolume started for volume "tiller-token-rw5b2" (UniqueName: "kubernetes.io/secret/40b7a3ba-bbb3-4355-8399-0e9570a4d0c8-tiller-token-rw5b2") pod "40b7a3ba-bbb3-4355-8399-0e9570a4d0c8" (UID: "40b7a3ba-bbb3-4355-8399-0e9570a4d0c8") | |
* Jul 01 03:06:45 addons-20200701030206-8084 kubelet[3731]: I0701 03:06:45.431558 3731 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40b7a3ba-bbb3-4355-8399-0e9570a4d0c8-tiller-token-rw5b2" (OuterVolumeSpecName: "tiller-token-rw5b2") pod "40b7a3ba-bbb3-4355-8399-0e9570a4d0c8" (UID: "40b7a3ba-bbb3-4355-8399-0e9570a4d0c8"). InnerVolumeSpecName "tiller-token-rw5b2". PluginName "kubernetes.io/secret", VolumeGidValue "" | |
* Jul 01 03:06:45 addons-20200701030206-8084 kubelet[3731]: I0701 03:06:45.520128 3731 reconciler.go:319] Volume detached for volume "tiller-token-rw5b2" (UniqueName: "kubernetes.io/secret/40b7a3ba-bbb3-4355-8399-0e9570a4d0c8-tiller-token-rw5b2") on node "addons-20200701030206-8084" DevicePath "" | |
* | |
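The kubelet section shows the same registry-server container cycling through CrashLoopBackOff with a growing back-off (20s, then 40s), matching the OOM kills in dmesg. Assuming kubectl were available, the crashed container's last run could be inspected with:

  kubectl -n olm logs operatorhubio-catalog-9h9sw -c registry-server --previous
  kubectl -n olm describe pod operatorhubio-catalog-9h9sw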
* ==> storage-provisioner [94232379c158] <== | |
-- /stdout -- | |
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20200701030206-8084 -n addons-20200701030206-8084 | |
helpers_test.go:254: (dbg) Run: kubectl --context addons-20200701030206-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running | |
helpers_test.go:254: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: exec: "kubectl": executable file not found in $PATH (547ns) | |
helpers_test.go:256: kubectl --context addons-20200701030206-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: exec: "kubectl": executable file not found in $PATH | |
--- FAIL: TestAddons/parallel/Ingress (343.33s) | |
addons_test.go:100: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "kube-system" ... | |
helpers_test.go:332: "ingress-nginx-admission-create-59b72" [6375af40-e914-4b59-8cd2-35cb294ac5a4] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted | |
addons_test.go:100: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 13.892629ms | |
addons_test.go:105: (dbg) Run: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml | |
addons_test.go:105: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml: exec: "kubectl": executable file not found in $PATH (405ns) | |
addons_test.go:105: (dbg) Run: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml | |
addons_test.go:105: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml: exec: "kubectl": executable file not found in $PATH (442ns) | |
addons_test.go:105: (dbg) Run: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml | |
addons_test.go:105: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml: exec: "kubectl": executable file not found in $PATH (454ns) | |
addons_test.go:105: (dbg) Run: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml | |
addons_test.go:105: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml: exec: "kubectl": executable file not found in $PATH (836ns) | |
addons_test.go:105: (dbg) Run: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml | |
addons_test.go:105: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml: exec: "kubectl": executable file not found in $PATH (523ns) | |
addons_test.go:105: (dbg) Run: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml | |
addons_test.go:105: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml: exec: "kubectl": executable file not found in $PATH (740ns) | |
addons_test.go:105: (dbg) Run: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml | |
addons_test.go:105: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml: exec: "kubectl": executable file not found in $PATH (433ns) | |
addons_test.go:105: (dbg) Run: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml | |
addons_test.go:105: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml: exec: "kubectl": executable file not found in $PATH (470ns) | |
addons_test.go:105: (dbg) Run: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml | |
addons_test.go:105: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml: exec: "kubectl": executable file not found in $PATH (450ns) | |
addons_test.go:105: (dbg) Run: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml | |
addons_test.go:105: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml: exec: "kubectl": executable file not found in $PATH (506ns) | |
addons_test.go:105: (dbg) Run: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml | |
addons_test.go:105: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-ing.yaml: exec: "kubectl": executable file not found in $PATH (898ns) | |
addons_test.go:116: failed to create ingress: exec: "kubectl": executable file not found in $PATH | |
addons_test.go:119: (dbg) Run: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-pod-svc.yaml | |
addons_test.go:119: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-pod-svc.yaml: exec: "kubectl": executable file not found in $PATH (76ns) | |
addons_test.go:121: failed to kubectl replace nginx-pod-svc. args "kubectl --context addons-20200701030206-8084 replace --force -f testdata/nginx-pod-svc.yaml". exec: "kubectl": executable file not found in $PATH | |
addons_test.go:124: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ... | |
addons_test.go:124: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 4m0s: timed out waiting for the condition **** | |
addons_test.go:124: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20200701030206-8084 -n addons-20200701030206-8084 | |
addons_test.go:124: TestAddons/parallel/Ingress: showing logs for failed pods as of 2020-07-01 03:10:07.970072088 +0000 UTC m=+813.071360688 | |
addons_test.go:125: failed waiting for nginx pod: run=nginx within 4m0s: timed out waiting for the condition | |
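Every kubectl invocation above fails in well under a microsecond with exec: "kubectl": executable file not found in $PATH, so the Ingress test never reached the cluster; this is a missing binary on the Jenkins agent, not an ingress regression. A minimal agent-side fix sketch (version chosen to match the cluster's v1.18.3; adjust as needed):

  command -v kubectl >/dev/null || {
    curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.18.3/bin/linux/amd64/kubectl
    chmod +x kubectl && sudo mv kubectl /usr/local/bin/
  }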
helpers_test.go:215: -----------------------post-mortem-------------------------------- | |
helpers_test.go:232: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-20200701030206-8084 -n addons-20200701030206-8084 | |
helpers_test.go:237: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<< | |
helpers_test.go:238: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <====== | |
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 -p addons-20200701030206-8084 logs -n 25 | |
helpers_test.go:245: TestAddons/parallel/Ingress logs: | |
-- stdout -- | |
* ==> Docker <== | |
* -- Logs begin at Wed 2020-07-01 03:02:14 UTC, end at Wed 2020-07-01 03:10:08 UTC. -- | |
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.936306640Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* Jul 01 03:04:37 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:37.994717950Z" level=info msg="shim reaped" id=4c6a2b7b2735c289a1fc97f3cc2dac77b43b57c8bd297228be350ad881f72f46 | |
* Jul 01 03:04:38 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:38.010043710Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* Jul 01 03:04:43 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:43.808158458Z" level=info msg="shim reaped" id=ed93f8777ade8caff27f7b4453aafc2e44589369b308f24e02956d0a482dd602 | |
* Jul 01 03:04:43 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:43.817866003Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* Jul 01 03:04:44 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:04:44.401425335Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ddb8a5980fb5b94077adae6392ab6acf22e22db7bb906787cb0e27ad0b2f15a7/shim.sock" debug=false pid=7303 | |
* Jul 01 03:05:06 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:06.772431532Z" level=info msg="shim reaped" id=ddb8a5980fb5b94077adae6392ab6acf22e22db7bb906787cb0e27ad0b2f15a7 | |
* Jul 01 03:05:06 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:06.782075375Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* Jul 01 03:05:27 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:27.725993923Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/71c43e9c50fe593f8f99accc3700632353f8b367aa99cd5b86a635bbc77b53f8/shim.sock" debug=false pid=7691 | |
* Jul 01 03:05:48 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:48.089305645Z" level=info msg="shim reaped" id=71c43e9c50fe593f8f99accc3700632353f8b367aa99cd5b86a635bbc77b53f8 | |
* Jul 01 03:05:48 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:48.098796476Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* Jul 01 03:06:15 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:15.747261169Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ccc6490d8f7f46427cfbd5deeccaf8d7cf83cc45ecc655e80a5b76b3ef7bba53/shim.sock" debug=false pid=8047 | |
* Jul 01 03:06:43 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:43.805068045Z" level=info msg="shim reaped" id=ccc6490d8f7f46427cfbd5deeccaf8d7cf83cc45ecc655e80a5b76b3ef7bba53 | |
* Jul 01 03:06:43 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:43.815307819Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* Jul 01 03:06:44 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:44.987579019Z" level=info msg="shim reaped" id=1bc07123ace4fe46946f8fdf0fd0dadf60278caa4383a10b25433a954068b080 | |
* Jul 01 03:06:44 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:44.997749719Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* Jul 01 03:06:45 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:45.139852993Z" level=info msg="shim reaped" id=e8c2c1e0e0a62503a8ed73783cc2af78489b9bad9fe471ada17aac4e7bfd938e | |
* Jul 01 03:06:45 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:45.150300631Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* Jul 01 03:07:32 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:07:32.714468798Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/208781ffec9d6ee31318647e8ce976b29384e7b638b2006baff53d73bdd97b9b/shim.sock" debug=false pid=8814 | |
* Jul 01 03:08:15 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:08:15.734798119Z" level=error msg="stream copy error: reading from a closed fifo" | |
* Jul 01 03:08:15 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:08:15.734807838Z" level=error msg="stream copy error: reading from a closed fifo" | |
* Jul 01 03:08:15 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:08:15.738961780Z" level=error msg="Error running exec 2f6e2b249139d96c0e8499b70c146bae118aca7838bb26b2fbf9815155067bbb in container: OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused \"read init-p: connection reset by peer\": unknown" | |
* Jul 01 03:08:15 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:08:15.789835802Z" level=info msg="shim reaped" id=208781ffec9d6ee31318647e8ce976b29384e7b638b2006baff53d73bdd97b9b | |
* Jul 01 03:08:15 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:08:15.800056647Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* Jul 01 03:09:40 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:09:40.710837642Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6197a0fa774c158d7e830ac228320439540a8ddaae85f40b359301345a689b5c/shim.sock" debug=false pid=9653 | |
* | |
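In the dockerd log above, the registry-server shims keep getting reaped, and the one exec error ("read init-p: connection reset by peer") is what an exec (for example, a readiness probe) looks like when it races a container that is being killed. To confirm from inside the VM that the exit was an OOM kill (container ID from the status table below):

  docker inspect 208781ffec9d6 --format '{{.State.OOMKilled}} exit={{.State.ExitCode}}'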
* ==> container status <== | |
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID | |
* 6197a0fa774c1 65fedb276e53e 28 seconds ago Running registry-server 5 d8512f3c21a09 | |
* 208781ffec9d6 65fedb276e53e 2 minutes ago Exited registry-server 4 d8512f3c21a09 | |
* 636b722f9b872 quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 5 minutes ago Running packageserver 0 740d3a15da583 | |
* 6815cdaec6e0a quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 5 minutes ago Running packageserver 0 47f59199ff8b2 | |
* d920f932f040f quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:251e733bf41cdf726092e079d32eed51791746560fff4d59cf067508ed635287 5 minutes ago Running controller 0 eefa25270d8a6 | |
* 39c3f696531d2 quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 6 minutes ago Running olm-operator 0 4a26317d80253 | |
* 0b30ca3163d6c quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 6 minutes ago Running catalog-operator 0 87e032f179b67 | |
* 1a30822b4f9be jettech/kube-webhook-certgen@sha256:c6f018afe5dfce02110b332ea75bb846144e65d4993c7534886d8505a6960357 6 minutes ago Exited patch 0 a2f179901974b | |
* f34af38da2a24 k8s.gcr.io/metrics-server-amd64@sha256:49a9f12f7067d11f42c803dbe61ed2c1299959ad85cb315b25ff7eef8e6b8892 6 minutes ago Running metrics-server 0 1b8c0d094c10b | |
* d6a261bca5222 67da37a9a360e 6 minutes ago Running coredns 0 d11b454b968e3 | |
* 9e4cfc5738e04 jettech/kube-webhook-certgen@sha256:c6f018afe5dfce02110b332ea75bb846144e65d4993c7534886d8505a6960357 6 minutes ago Exited create 0 de667a00fefb0 | |
* 94232379c1581 4689081edb103 6 minutes ago Running storage-provisioner 0 7896015c69c73 | |
* 40c9a46cf08ab 3439b7546f29b 6 minutes ago Running kube-proxy 0 8df7717a34531 | |
* a8673db5ff2ad 76216c34ed0c7 6 minutes ago Running kube-scheduler 0 69d249b151f2d | |
* 663dada323e98 303ce5db0e90d 6 minutes ago Running etcd 0 4777c338fb836 | |
* 24d686838dec2 da26705ccb4b5 6 minutes ago Running kube-controller-manager 0 ff24f8e852b09 | |
* b7ced5cccc0a4 7e28efa976bd1 6 minutes ago Running kube-apiserver 0 1456a98fec87b | |
* | |
* ==> coredns [d6a261bca522] <== | |
* .:53 | |
* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7 | |
* CoreDNS-1.6.7 | |
* linux/amd64, go1.13.6, da7f65b | |
* | |
* ==> describe nodes <== | |
* Name: addons-20200701030206-8084 | |
* Roles: master | |
* Labels: beta.kubernetes.io/arch=amd64 | |
* beta.kubernetes.io/os=linux | |
* kubernetes.io/arch=amd64 | |
* kubernetes.io/hostname=addons-20200701030206-8084 | |
* kubernetes.io/os=linux | |
* minikube.k8s.io/commit=8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f | |
* minikube.k8s.io/name=addons-20200701030206-8084 | |
* minikube.k8s.io/updated_at=2020_07_01T03_03_34_0700 | |
* minikube.k8s.io/version=v1.12.0-beta.0 | |
* node-role.kubernetes.io/master= | |
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock | |
* node.alpha.kubernetes.io/ttl: 0 | |
* volumes.kubernetes.io/controller-managed-attach-detach: true | |
* CreationTimestamp: Wed, 01 Jul 2020 03:03:31 +0000 | |
* Taints: <none> | |
* Unschedulable: false | |
* Lease: | |
* HolderIdentity: addons-20200701030206-8084 | |
* AcquireTime: <unset> | |
* RenewTime: Wed, 01 Jul 2020 03:10:05 +0000 | |
* Conditions: | |
* Type Status LastHeartbeatTime LastTransitionTime Reason Message | |
* ---- ------ ----------------- ------------------ ------ ------- | |
* MemoryPressure False Wed, 01 Jul 2020 03:09:37 +0000 Wed, 01 Jul 2020 03:03:27 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available | |
* DiskPressure False Wed, 01 Jul 2020 03:09:37 +0000 Wed, 01 Jul 2020 03:03:27 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure | |
* PIDPressure False Wed, 01 Jul 2020 03:09:37 +0000 Wed, 01 Jul 2020 03:03:27 +0000 KubeletHasSufficientPID kubelet has sufficient PID available | |
* Ready True Wed, 01 Jul 2020 03:09:37 +0000 Wed, 01 Jul 2020 03:03:45 +0000 KubeletReady kubelet is posting ready status | |
* Addresses: | |
* InternalIP: 192.168.39.105 | |
* Hostname: addons-20200701030206-8084 | |
* Capacity: | |
* cpu: 2 | |
* ephemeral-storage: 16954224Ki | |
* hugepages-2Mi: 0 | |
* memory: 2470872Ki | |
* pods: 110 | |
* Allocatable: | |
* cpu: 2 | |
* ephemeral-storage: 16954224Ki | |
* hugepages-2Mi: 0 | |
* memory: 2470872Ki | |
* pods: 110 | |
* System Info: | |
* Machine ID: 11d7f8acaa014dd1a88f3c5ba725298e | |
* System UUID: 11d7f8ac-aa01-4dd1-a88f-3c5ba725298e | |
* Boot ID: 3a2b8acb-8700-4c04-87f6-71cbb4607c14 | |
* Kernel Version: 4.19.107 | |
* OS Image: Buildroot 2019.02.10 | |
* Operating System: linux | |
* Architecture: amd64 | |
* Container Runtime Version: docker://19.3.8 | |
* Kubelet Version: v1.18.3 | |
* Kube-Proxy Version: v1.18.3 | |
* Non-terminated Pods: (14 in total) | |
* Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE | |
* --------- ---- ------------ ---------- --------------- ------------- --- | |
* kube-system coredns-66bff467f8-hj7n4 100m (5%) 0 (0%) 70Mi (2%) 170Mi (7%) 6m27s | |
* kube-system etcd-addons-20200701030206-8084 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m32s | |
* kube-system ingress-nginx-controller-7bb4c67d67-fkjkd 100m (5%) 0 (0%) 90Mi (3%) 0 (0%) 6m27s | |
* kube-system kube-apiserver-addons-20200701030206-8084 250m (12%) 0 (0%) 0 (0%) 0 (0%) 6m32s | |
* kube-system kube-controller-manager-addons-20200701030206-8084 200m (10%) 0 (0%) 0 (0%) 0 (0%) 6m32s | |
* kube-system kube-proxy-8bljr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m27s | |
* kube-system kube-scheduler-addons-20200701030206-8084 100m (5%) 0 (0%) 0 (0%) 0 (0%) 6m32s | |
* kube-system metrics-server-7bc6d75975-qxr52 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m27s | |
* kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m32s | |
* olm catalog-operator-86f777cc59-n2z95 10m (0%) 0 (0%) 80Mi (3%) 0 (0%) 6m27s | |
* olm olm-operator-5fd48d8cd4-sh5bk 10m (0%) 0 (0%) 160Mi (6%) 0 (0%) 6m27s | |
* olm operatorhubio-catalog-9h9sw 10m (0%) 100m (5%) 50Mi (2%) 100Mi (4%) 6m7s | |
* olm packageserver-fc86cd5d4-djgms 10m (0%) 0 (0%) 50Mi (2%) 0 (0%) 6m3s | |
* olm packageserver-fc86cd5d4-wgfqr 10m (0%) 0 (0%) 50Mi (2%) 0 (0%) 6m3s | |
* Allocated resources: | |
* (Total limits may be over 100 percent, i.e., overcommitted.) | |
* Resource Requests Limits | |
* -------- -------- ------ | |
* cpu 800m (40%) 100m (5%) | |
* memory 550Mi (22%) 270Mi (11%) | |
* ephemeral-storage 0 (0%) 0 (0%) | |
* hugepages-2Mi 0 (0%) 0 (0%) | |
* Events: | |
* Type Reason Age From Message | |
* ---- ------ ---- ---- ------- | |
* Normal Starting 6m33s kubelet, addons-20200701030206-8084 Starting kubelet. | |
* Normal NodeHasSufficientMemory 6m33s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeHasSufficientMemory | |
* Normal NodeHasNoDiskPressure 6m33s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeHasNoDiskPressure | |
* Normal NodeHasSufficientPID 6m33s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeHasSufficientPID | |
* Normal NodeAllocatableEnforced 6m32s kubelet, addons-20200701030206-8084 Updated Node Allocatable limit across pods | |
* Normal Starting 6m26s kube-proxy, addons-20200701030206-8084 Starting kube-proxy. | |
* Normal NodeReady 6m23s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeReady | |
* | |
* ==> dmesg <== | |
* [ +0.000001] Call Trace: | |
* [ +0.000006] dump_stack+0x66/0x8b | |
* [ +0.000003] dump_header+0x66/0x28e | |
* [ +0.000002] oom_kill_process+0x251/0x270 | |
* [ +0.000001] out_of_memory+0x10b/0x4a0 | |
* [ +0.000003] mem_cgroup_out_of_memory+0xb0/0xd0 | |
* [ +0.000002] try_charge+0x688/0x770 | |
* [ +0.000002] ? __alloc_pages_nodemask+0x11f/0x2a0 | |
* [ +0.000000] mem_cgroup_try_charge+0x81/0x170 | |
* [ +0.000002] mem_cgroup_try_charge_delay+0x17/0x40 | |
* [ +0.000001] __handle_mm_fault+0x7be/0xe50 | |
* [ +0.000002] handle_mm_fault+0xd7/0x230 | |
* [ +0.000003] __do_page_fault+0x23e/0x4c0 | |
* [ +0.000003] ? async_page_fault+0x8/0x30 | |
* [ +0.000001] async_page_fault+0x1e/0x30 | |
* [ +0.000001] RIP: 0033:0xd65cee | |
* [ +0.000001] Code: 31 d2 48 c7 83 80 01 00 00 00 00 00 00 66 44 89 b3 64 01 00 00 89 ab 68 01 00 00 85 ff 79 08 eb 31 0f 1f 00 48 89 f0 83 e9 01 <48> 89 10 4a 8d 34 00 48 89 c2 83 f9 ff 75 eb 48 8d 47 01 49 0f af | |
* [ +0.000001] RSP: 002b:00007f2f5d558d40 EFLAGS: 00010202 | |
* [ +0.000001] RAX: 00000000037c1048 RBX: 0000000002eebce8 RCX: 0000000000000052 | |
* [ +0.000000] RDX: 00000000037c0b98 RSI: 00000000037c1048 RDI: 0000000000000063 | |
* [ +0.000001] RBP: 0000000000000064 R08: 00000000000004b0 R09: 0000000003dfe470 | |
* [ +0.000000] R10: 0000000000000000 R11: 000000000197c600 R12: 0000000000000000 | |
* [ +0.000001] R13: 00000000037bc548 R14: 00000000000004b0 R15: 0000000000000101 | |
* [ +0.000093] Memory cgroup out of memory: Kill process 8832 (registry-server) score 2051 or sacrifice child | |
* [ +0.000038] Killed process 8832 (registry-server) total-vm:237320kB, anon-rss:97324kB, file-rss:14452kB, shmem-rss:0kB | |
* | |
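A second OOM kill captured in this later post-mortem (pid 8832, score 2051, ~95MiB anon RSS) confirms the catalog container hits its 100Mi limit on every start rather than suffering a one-off spike. The durable fix is a larger limit wherever the pod spec is generated; since OLM's catalog-operator creates this pod itself, the patch below is only an illustration of the JSON-patch shape against a hypothetical Deployment, not the actual OLM knob:

  # Purely illustrative resource patch; OLM manages the catalog pod directly.
  kubectl -n olm patch deployment some-workload --type=json -p='[
    {"op":"replace","path":"/spec/template/spec/containers/0/resources/limits/memory","value":"200Mi"}
  ]'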
* ==> etcd [663dada323e9] <== | |
* 2020-07-01 03:03:28.332386 I | etcdserver: setting up the initial cluster version to 3.4 | |
* 2020-07-01 03:03:28.332551 I | etcdserver: published {Name:addons-20200701030206-8084 ClientURLs:[https://192.168.39.105:2379]} to cluster f45b5855e490ef48 | |
* 2020-07-01 03:03:28.332600 I | embed: ready to serve client requests | |
* 2020-07-01 03:03:28.333537 I | embed: serving client requests on 127.0.0.1:2379 | |
* 2020-07-01 03:03:28.334586 I | embed: ready to serve client requests | |
* 2020-07-01 03:03:28.337193 I | embed: serving client requests on 192.168.39.105:2379 | |
* 2020-07-01 03:03:28.338344 N | etcdserver/membership: set the initial cluster version to 3.4 | |
* 2020-07-01 03:03:28.338411 I | etcdserver/api: enabled capabilities for version 3.4 | |
* 2020-07-01 03:04:04.323170 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/clusterserviceversions/olm/packageserver\" " with result "range_response_count:1 size:9305" took too long (105.645771ms) to execute | |
* 2020-07-01 03:04:05.236139 W | etcdserver: read-only range request "key:\"/registry/endpointslices/olm/v1-packages-operators-coreos-com-gfbjh\" " with result "range_response_count:1 size:953" took too long (127.168286ms) to execute | |
* 2020-07-01 03:04:05.805401 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (119.574479ms) to execute | |
* 2020-07-01 03:04:05.808506 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (122.545256ms) to execute | |
* 2020-07-01 03:04:05.820836 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/clusterserviceversions/olm/packageserver\" " with result "range_response_count:1 size:9775" took too long (135.01854ms) to execute | |
* 2020-07-01 03:04:08.142775 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (111.648032ms) to execute | |
* 2020-07-01 03:04:08.143088 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (231.508641ms) to execute | |
* 2020-07-01 03:04:08.143309 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/catalogsources/\" range_end:\"/registry/operators.coreos.com/catalogsources0\" " with result "range_response_count:1 size:2019" took too long (159.192904ms) to execute | |
* 2020-07-01 03:04:08.143552 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (113.041301ms) to execute | |
* 2020-07-01 03:04:16.739873 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59653" took too long (208.500076ms) to execute | |
* 2020-07-01 03:04:16.740802 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (228.630795ms) to execute | |
* 2020-07-01 03:04:23.380208 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (268.088725ms) to execute | |
* 2020-07-01 03:04:29.198339 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59989" took too long (110.334524ms) to execute | |
* 2020-07-01 03:04:29.198868 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59989" took too long (115.991158ms) to execute | |
* 2020-07-01 03:04:29.199825 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59989" took too long (116.97369ms) to execute | |
* 2020-07-01 03:09:41.650732 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/catalogsources/olm/operatorhubio-catalog\" " with result "range_response_count:1 size:2026" took too long (196.156523ms) to execute | |
* 2020-07-01 03:09:41.651243 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (203.221616ms) to execute | |
* | |
* ==> kernel <== | |
* 03:10:08 up 7 min, 0 users, load average: 1.13, 1.07, 0.58 | |
* Linux addons-20200701030206-8084 4.19.107 #1 SMP Thu May 28 15:07:17 PDT 2020 x86_64 GNU/Linux | |
* PRETTY_NAME="Buildroot 2019.02.10" | |
* | |
* ==> kube-apiserver [b7ced5cccc0a] <== | |
* I0701 03:03:39.851362 1 controller.go:606] quota admission added evaluator for: catalogsources.operators.coreos.com | |
* I0701 03:03:41.148400 1 controller.go:606] quota admission added evaluator for: replicasets.apps | |
* I0701 03:03:41.407486 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps | |
* W0701 03:03:42.028314 1 handler_proxy.go:102] no RequestInfo found in the context | |
* E0701 03:03:42.028373 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable | |
* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]] | |
* I0701 03:03:42.028384 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. | |
* E0701 03:03:57.879055 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist | |
* I0701 03:03:57.879070 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. | |
* I0701 03:04:01.803924 1 client.go:361] parsed scheme: "endpoint" | |
* I0701 03:04:01.803984 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] | |
* I0701 03:04:01.886189 1 client.go:361] parsed scheme: "endpoint" | |
* I0701 03:04:01.886333 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] | |
* W0701 03:04:06.880277 1 handler_proxy.go:102] no RequestInfo found in the context | |
* E0701 03:04:06.880349 1 controller.go:114] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable | |
* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]] | |
* I0701 03:04:06.880361 1 controller.go:127] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue. | |
* E0701 03:04:57.883519 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist | |
* I0701 03:04:57.883592 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. | |
* E0701 03:06:57.886892 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist | |
* I0701 03:06:57.887194 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. | |
* E0701 03:08:32.648453 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist | |
* I0701 03:08:32.648470 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. | |
* E0701 03:09:32.651434 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist | |
* I0701 03:09:32.651549 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. | |
* | |
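The kube-apiserver section above keeps failing to load the OpenAPI spec for v1beta1.metrics.k8s.io: the APIService is registered, but the backing metrics-server endpoints answer 503 (or no spec exists yet), so the aggregated API never becomes usable during this run. A minimal sketch of probing that group directly with client-go (assuming k8s.io/client-go is on the module path and ~/.kube/config points at this cluster):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"

        "k8s.io/client-go/discovery"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        home, _ := os.UserHomeDir()
        cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
        if err != nil {
            panic(err)
        }
        dc, err := discovery.NewDiscoveryClientForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Ask the apiserver whether the aggregated group from the errors
        // above is actually discoverable right now.
        rl, err := dc.ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1")
        if err != nil {
            // Mirrors the 503s in the log: registered but not yet served.
            fmt.Println("metrics API not served:", err)
            return
        }
        fmt.Println("metrics API serves", len(rl.APIResources), "resources")
    }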
* ==> kube-controller-manager [24d686838dec] <== | |
* I0701 03:03:46.000600 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"registry-proxy", UID:"fe3c5a0b-c38c-4314-9a05-53037ff158f0", APIVersion:"apps/v1", ResourceVersion:"508", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: registry-proxy-7kmmq | |
* I0701 03:03:50.871083 1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode. | |
* I0701 03:03:52.669012 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"ingress-nginx-admission-create", UID:"1ae488a6-abe8-4bc0-965b-c398118daf32", APIVersion:"batch/v1", ResourceVersion:"387", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed | |
* I0701 03:03:57.805105 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"ingress-nginx-admission-patch", UID:"f6bb03c0-6ca4-47f0-900a-b1e273dbb951", APIVersion:"batch/v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed | |
* I0701 03:04:05.000820 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"olm", Name:"packageserver", UID:"4f1d1f78-eb9b-4bf3-9753-09b07a182891", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set packageserver-fc86cd5d4 to 2 | |
* I0701 03:04:05.059501 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"olm", Name:"packageserver-fc86cd5d4", UID:"c9874dfe-0029-4742-a3d8-1f87cb754bb4", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-fc86cd5d4-djgms | |
* I0701 03:04:05.170272 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"olm", Name:"packageserver-fc86cd5d4", UID:"c9874dfe-0029-4742-a3d8-1f87cb754bb4", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-fc86cd5d4-wgfqr | |
* E0701 03:04:11.881078 1 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: packages.operators.coreos.com/v1: the server is currently unable to handle the request | |
* I0701 03:04:11.881240 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for catalogsources.operators.coreos.com | |
* I0701 03:04:11.881307 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for operatorgroups.operators.coreos.com | |
* I0701 03:04:11.881325 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for subscriptions.operators.coreos.com | |
* I0701 03:04:11.881345 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for installplans.operators.coreos.com | |
* I0701 03:04:11.881361 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for clusterserviceversions.operators.coreos.com | |
* I0701 03:04:11.881406 1 shared_informer.go:223] Waiting for caches to sync for resource quota | |
* I0701 03:04:11.981723 1 shared_informer.go:230] Caches are synced for resource quota | |
* I0701 03:04:13.123021 1 request.go:621] Throttling request took 1.047575349s, request: GET:https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s | |
* W0701 03:04:13.926487 1 garbagecollector.go:644] failed to discover some groups: map[packages.operators.coreos.com/v1:the server is currently unable to handle the request] | |
* E0701 03:04:14.127948 1 memcache.go:206] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request | |
* E0701 03:04:14.428219 1 memcache.go:111] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request | |
* I0701 03:04:14.429209 1 shared_informer.go:223] Waiting for caches to sync for garbage collector | |
* I0701 03:04:14.429275 1 shared_informer.go:230] Caches are synced for garbage collector | |
* E0701 03:04:39.175318 1 clusterroleaggregation_controller.go:181] olm-operators-edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "olm-operators-edit": the object has been modified; please apply your changes to the latest version and try again | |
* E0701 03:04:39.185205 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again | |
* E0701 03:04:39.186128 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again | |
* E0701 03:04:39.204080 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again | |
* | |
* ==> kube-proxy [40c9a46cf08a] <== | |
* W0701 03:03:42.853505 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy | |
* I0701 03:03:42.861890 1 node.go:136] Successfully retrieved node IP: 192.168.39.105 | |
* I0701 03:03:42.861937 1 server_others.go:186] Using iptables Proxier. | |
* W0701 03:03:42.861945 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined | |
* I0701 03:03:42.861949 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local | |
* I0701 03:03:42.862522 1 server.go:583] Version: v1.18.3 | |
* I0701 03:03:42.863107 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072 | |
* I0701 03:03:42.863131 1 conntrack.go:52] Setting nf_conntrack_max to 131072 | |
* I0701 03:03:42.863538 1 conntrack.go:83] Setting conntrack hashsize to 32768 | |
* I0701 03:03:42.867910 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 | |
* I0701 03:03:42.868306 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 | |
* I0701 03:03:42.871109 1 config.go:315] Starting service config controller | |
* I0701 03:03:42.871148 1 shared_informer.go:223] Waiting for caches to sync for service config | |
* I0701 03:03:42.871165 1 config.go:133] Starting endpoints config controller | |
* I0701 03:03:42.871173 1 shared_informer.go:223] Waiting for caches to sync for endpoints config | |
* I0701 03:03:42.971416 1 shared_informer.go:230] Caches are synced for endpoints config | |
* I0701 03:03:42.971523 1 shared_informer.go:230] Caches are synced for service config | |
* | |
* ==> kube-scheduler [a8673db5ff2a] <== | |
* W0701 03:03:31.650803 1 authentication.go:40] Authentication is disabled | |
* I0701 03:03:31.650814 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 | |
* I0701 03:03:31.652329 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259 | |
* I0701 03:03:31.652574 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file | |
* I0701 03:03:31.652711 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file | |
* I0701 03:03:31.652730 1 tlsconfig.go:240] Starting DynamicServingCertificateController | |
* E0701 03:03:31.657008 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope | |
* E0701 03:03:31.658164 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope | |
* E0701 03:03:31.658324 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" | |
* E0701 03:03:31.658888 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope | |
* E0701 03:03:31.659056 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope | |
* E0701 03:03:31.659357 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope | |
* E0701 03:03:31.659504 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope | |
* E0701 03:03:31.659723 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope | |
* E0701 03:03:31.659789 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope | |
* E0701 03:03:32.465153 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope | |
* E0701 03:03:32.497519 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope | |
* E0701 03:03:32.559891 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" | |
* I0701 03:03:35.752931 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file | |
* E0701 03:03:40.948371 1 factory.go:503] pod: kube-system/ingress-nginx-admission-create-59b72 is already present in the active queue | |
* E0701 03:03:40.967858 1 factory.go:503] pod: kube-system/ingress-nginx-admission-patch-f8zdn is already present in the active queue | |
* E0701 03:03:41.332123 1 factory.go:503] pod: kube-system/tiller-deploy-78ff886c54-7kcct is already present in the active queue | |
* E0701 03:03:41.345197 1 factory.go:503] pod: kube-system/metrics-server-7bc6d75975-qxr52 is already present in the active queue | |
* E0701 03:03:41.367475 1 factory.go:503] pod: olm/olm-operator-5fd48d8cd4-sh5bk is already present in the active queue | |
* E0701 03:03:41.389016 1 factory.go:503] pod: kube-system/ingress-nginx-controller-7bb4c67d67-fkjkd is already present in the active queue | |
* | |
* ==> kubelet <== | |
* -- Logs begin at Wed 2020-07-01 03:02:14 UTC, end at Wed 2020-07-01 03:10:09 UTC. -- | |
* Jul 01 03:07:04 addons-20200701030206-8084 kubelet[3731]: E0701 03:07:04.630129 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 40s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)" | |
* Jul 01 03:07:19 addons-20200701030206-8084 kubelet[3731]: I0701 03:07:19.630051 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ccc6490d8f7f46427cfbd5deeccaf8d7cf83cc45ecc655e80a5b76b3ef7bba53 | |
* Jul 01 03:07:19 addons-20200701030206-8084 kubelet[3731]: E0701 03:07:19.631116 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 40s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)" | |
* Jul 01 03:07:32 addons-20200701030206-8084 kubelet[3731]: I0701 03:07:32.629729 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ccc6490d8f7f46427cfbd5deeccaf8d7cf83cc45ecc655e80a5b76b3ef7bba53 | |
* Jul 01 03:07:32 addons-20200701030206-8084 kubelet[3731]: W0701 03:07:32.870040 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for | |
* Jul 01 03:07:34 addons-20200701030206-8084 kubelet[3731]: W0701 03:07:34.009338 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for | |
* Jul 01 03:08:16 addons-20200701030206-8084 kubelet[3731]: W0701 03:08:16.469339 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for | |
* Jul 01 03:08:16 addons-20200701030206-8084 kubelet[3731]: I0701 03:08:16.474042 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ccc6490d8f7f46427cfbd5deeccaf8d7cf83cc45ecc655e80a5b76b3ef7bba53 | |
* Jul 01 03:08:16 addons-20200701030206-8084 kubelet[3731]: I0701 03:08:16.474339 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 208781ffec9d6ee31318647e8ce976b29384e7b638b2006baff53d73bdd97b9b | |
* Jul 01 03:08:16 addons-20200701030206-8084 kubelet[3731]: E0701 03:08:16.474701 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 1m20s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)" | |
* Jul 01 03:08:17 addons-20200701030206-8084 kubelet[3731]: W0701 03:08:17.484224 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for | |
* Jul 01 03:08:22 addons-20200701030206-8084 kubelet[3731]: I0701 03:08:22.774423 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 208781ffec9d6ee31318647e8ce976b29384e7b638b2006baff53d73bdd97b9b | |
* Jul 01 03:08:22 addons-20200701030206-8084 kubelet[3731]: E0701 03:08:22.774897 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 1m20s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)" | |
* Jul 01 03:08:36 addons-20200701030206-8084 kubelet[3731]: I0701 03:08:36.629790 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 208781ffec9d6ee31318647e8ce976b29384e7b638b2006baff53d73bdd97b9b | |
* Jul 01 03:08:36 addons-20200701030206-8084 kubelet[3731]: E0701 03:08:36.630137 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 1m20s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)" | |
* Jul 01 03:08:49 addons-20200701030206-8084 kubelet[3731]: I0701 03:08:49.630489 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 208781ffec9d6ee31318647e8ce976b29384e7b638b2006baff53d73bdd97b9b | |
* Jul 01 03:08:49 addons-20200701030206-8084 kubelet[3731]: E0701 03:08:49.630882 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 1m20s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)" | |
* Jul 01 03:09:01 addons-20200701030206-8084 kubelet[3731]: I0701 03:09:01.629892 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 208781ffec9d6ee31318647e8ce976b29384e7b638b2006baff53d73bdd97b9b | |
* Jul 01 03:09:01 addons-20200701030206-8084 kubelet[3731]: E0701 03:09:01.630798 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 1m20s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)" | |
* Jul 01 03:09:12 addons-20200701030206-8084 kubelet[3731]: I0701 03:09:12.630322 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 208781ffec9d6ee31318647e8ce976b29384e7b638b2006baff53d73bdd97b9b | |
* Jul 01 03:09:12 addons-20200701030206-8084 kubelet[3731]: E0701 03:09:12.631358 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 1m20s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)" | |
* Jul 01 03:09:27 addons-20200701030206-8084 kubelet[3731]: I0701 03:09:27.629743 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 208781ffec9d6ee31318647e8ce976b29384e7b638b2006baff53d73bdd97b9b | |
* Jul 01 03:09:27 addons-20200701030206-8084 kubelet[3731]: E0701 03:09:27.630542 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 1m20s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)" | |
* Jul 01 03:09:40 addons-20200701030206-8084 kubelet[3731]: I0701 03:09:40.630138 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 208781ffec9d6ee31318647e8ce976b29384e7b638b2006baff53d73bdd97b9b | |
* Jul 01 03:09:41 addons-20200701030206-8084 kubelet[3731]: W0701 03:09:41.359091 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for | |
* | |
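The kubelet section above shows registry-server stuck in CrashLoopBackOff, with the restart delay growing from 40s to 1m20s between attempts. That is kubelet's exponential crash-loop back-off at work; as a sketch (the 10s base, doubling factor, and 5m cap are assumed from upstream kubelet defaults, not taken from this log):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed kubelet defaults: back-off starts at 10s, doubles per
        // failed restart, and is capped at 5m. The "back-off 40s" and
        // "back-off 1m20s" messages above fit steps 3 and 4.
        backoff := 10 * time.Second
        const max = 5 * time.Minute
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("restart %d: back-off %v\n", attempt, backoff)
            backoff *= 2
            if backoff > max {
                backoff = max
            }
        }
    }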
* ==> storage-provisioner [94232379c158] <== | |
-- /stdout -- | |
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20200701030206-8084 -n addons-20200701030206-8084 | |
helpers_test.go:254: (dbg) Run: kubectl --context addons-20200701030206-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running | |
helpers_test.go:254: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: exec: "kubectl": executable file not found in $PATH (287ns) | |
helpers_test.go:256: kubectl --context addons-20200701030206-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: exec: "kubectl": executable file not found in $PATH | |
--- FAIL: TestAddons/parallel/MetricsServer (454.94s) | |
addons_test.go:249: metrics-server stabilized in 19.013125ms | |
addons_test.go:251: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ... | |
helpers_test.go:332: "metrics-server-7bc6d75975-qxr52" [4315b491-aec4-47d9-af19-ba67e84066dc] Running | |
addons_test.go:251: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.018120755s | |
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system | |
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (360ns) | |
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system | |
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (493ns) | |
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system | |
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (459ns) | |
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system | |
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (628ns) | |
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system | |
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (474ns) | |
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system | |
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (524ns) | |
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system | |
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (728ns) | |
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system | |
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (473ns) | |
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system | |
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (487ns) | |
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system | |
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (420ns) | |
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system | |
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (573ns) | |
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system | |
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (428ns) | |
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system | |
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (498ns) | |
addons_test.go:257: (dbg) Run: kubectl --context addons-20200701030206-8084 top pods -n kube-system | |
addons_test.go:257: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 top pods -n kube-system: exec: "kubectl": executable file not found in $PATH (425ns) | |
addons_test.go:272: failed checking metric server: exec: "kubectl": executable file not found in $PATH | |
addons_test.go:275: (dbg) Run: out/minikube-linux-amd64 -p addons-20200701030206-8084 addons disable metrics-server --alsologtostderr -v=1 | |
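Every `kubectl top pods` retry above fails in under a microsecond for the same reason the earlier field-selector query did: the Jenkins agent has no kubectl binary on $PATH, so the test's exec call never starts a process. The exact error string is produced by Go's os/exec package, as this minimal reproduction shows:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // exec.LookPath consults $PATH the same way os/exec does before
        // running a command; with the binary absent it returns an
        // *exec.Error matching the failures above.
        path, err := exec.LookPath("kubectl")
        if err != nil {
            fmt.Println(err) // exec: "kubectl": executable file not found in $PATH
            return
        }
        fmt.Println("kubectl found at", path)
    }

Installing kubectl on the agent (or prepending its directory to PATH before the run) would let these assertions exercise the metrics API instead of failing at exec time.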
helpers_test.go:215: -----------------------post-mortem-------------------------------- | |
helpers_test.go:232: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-20200701030206-8084 -n addons-20200701030206-8084 | |
helpers_test.go:237: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<< | |
helpers_test.go:238: ======> post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <====== | |
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 -p addons-20200701030206-8084 logs -n 25 | |
helpers_test.go:240: (dbg) Done: out/minikube-linux-amd64 -p addons-20200701030206-8084 logs -n 25: (1.27287553s) | |
helpers_test.go:245: TestAddons/parallel/MetricsServer logs: | |
-- stdout -- | |
* ==> Docker <== | |
* -- Logs begin at Wed 2020-07-01 03:02:14 UTC, end at Wed 2020-07-01 03:12:00 UTC. -- | |
* Jul 01 03:05:06 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:06.772431532Z" level=info msg="shim reaped" id=ddb8a5980fb5b94077adae6392ab6acf22e22db7bb906787cb0e27ad0b2f15a7 | |
* Jul 01 03:05:06 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:06.782075375Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* Jul 01 03:05:27 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:27.725993923Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/71c43e9c50fe593f8f99accc3700632353f8b367aa99cd5b86a635bbc77b53f8/shim.sock" debug=false pid=7691 | |
* Jul 01 03:05:48 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:48.089305645Z" level=info msg="shim reaped" id=71c43e9c50fe593f8f99accc3700632353f8b367aa99cd5b86a635bbc77b53f8 | |
* Jul 01 03:05:48 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:05:48.098796476Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* Jul 01 03:06:15 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:15.747261169Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ccc6490d8f7f46427cfbd5deeccaf8d7cf83cc45ecc655e80a5b76b3ef7bba53/shim.sock" debug=false pid=8047 | |
* Jul 01 03:06:43 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:43.805068045Z" level=info msg="shim reaped" id=ccc6490d8f7f46427cfbd5deeccaf8d7cf83cc45ecc655e80a5b76b3ef7bba53 | |
* Jul 01 03:06:43 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:43.815307819Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* Jul 01 03:06:44 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:44.987579019Z" level=info msg="shim reaped" id=1bc07123ace4fe46946f8fdf0fd0dadf60278caa4383a10b25433a954068b080 | |
* Jul 01 03:06:44 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:44.997749719Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* Jul 01 03:06:45 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:45.139852993Z" level=info msg="shim reaped" id=e8c2c1e0e0a62503a8ed73783cc2af78489b9bad9fe471ada17aac4e7bfd938e | |
* Jul 01 03:06:45 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:06:45.150300631Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* Jul 01 03:07:32 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:07:32.714468798Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/208781ffec9d6ee31318647e8ce976b29384e7b638b2006baff53d73bdd97b9b/shim.sock" debug=false pid=8814 | |
* Jul 01 03:08:15 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:08:15.734798119Z" level=error msg="stream copy error: reading from a closed fifo" | |
* Jul 01 03:08:15 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:08:15.734807838Z" level=error msg="stream copy error: reading from a closed fifo" | |
* Jul 01 03:08:15 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:08:15.738961780Z" level=error msg="Error running exec 2f6e2b249139d96c0e8499b70c146bae118aca7838bb26b2fbf9815155067bbb in container: OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused \"read init-p: connection reset by peer\": unknown" | |
* Jul 01 03:08:15 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:08:15.789835802Z" level=info msg="shim reaped" id=208781ffec9d6ee31318647e8ce976b29384e7b638b2006baff53d73bdd97b9b | |
* Jul 01 03:08:15 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:08:15.800056647Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* Jul 01 03:09:40 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:09:40.710837642Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6197a0fa774c158d7e830ac228320439540a8ddaae85f40b359301345a689b5c/shim.sock" debug=false pid=9653 | |
* Jul 01 03:10:11 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:10:11.268589252Z" level=info msg="shim reaped" id=6197a0fa774c158d7e830ac228320439540a8ddaae85f40b359301345a689b5c | |
* Jul 01 03:10:11 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:10:11.283756825Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* Jul 01 03:11:59 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:11:59.793295024Z" level=info msg="shim reaped" id=f34af38da2a244711c04394f746a798ee2b720389b1e7950ef7a900071a733b6 | |
* Jul 01 03:11:59 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:11:59.802564544Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* Jul 01 03:12:00 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:12:00.029583706Z" level=info msg="shim reaped" id=1b8c0d094c10b4700bd35471254c00cd98bd77efcab123265e16549fc824452e | |
* Jul 01 03:12:00 addons-20200701030206-8084 dockerd[2202]: time="2020-07-01T03:12:00.039659383Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
* | |
* ==> container status <== | |
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID | |
* 6197a0fa774c1 65fedb276e53e 2 minutes ago Exited registry-server 5 d8512f3c21a09 | |
* 6815cdaec6e0a quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 7 minutes ago Running packageserver 0 47f59199ff8b2 | |
* 636b722f9b872 quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 7 minutes ago Running packageserver 0 740d3a15da583 | |
* d920f932f040f quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:251e733bf41cdf726092e079d32eed51791746560fff4d59cf067508ed635287 7 minutes ago Running controller 0 eefa25270d8a6 | |
* 39c3f696531d2 quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 7 minutes ago Running olm-operator 0 4a26317d80253 | |
* 0b30ca3163d6c quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373 7 minutes ago Running catalog-operator 0 87e032f179b67 | |
* 1a30822b4f9be jettech/kube-webhook-certgen@sha256:c6f018afe5dfce02110b332ea75bb846144e65d4993c7534886d8505a6960357 8 minutes ago Exited patch 0 a2f179901974b | |
* 9e4cfc5738e04 jettech/kube-webhook-certgen@sha256:c6f018afe5dfce02110b332ea75bb846144e65d4993c7534886d8505a6960357 8 minutes ago Exited create 0 de667a00fefb0 | |
* d6a261bca5222 67da37a9a360e 8 minutes ago Running coredns 0 d11b454b968e3 | |
* 94232379c1581 4689081edb103 8 minutes ago Running storage-provisioner 0 7896015c69c73 | |
* 40c9a46cf08ab 3439b7546f29b 8 minutes ago Running kube-proxy 0 8df7717a34531 | |
* a8673db5ff2ad 76216c34ed0c7 8 minutes ago Running kube-scheduler 0 69d249b151f2d | |
* 663dada323e98 303ce5db0e90d 8 minutes ago Running etcd 0 4777c338fb836 | |
* 24d686838dec2 da26705ccb4b5 8 minutes ago Running kube-controller-manager 0 ff24f8e852b09 | |
* b7ced5cccc0a4 7e28efa976bd1 8 minutes ago Running kube-apiserver 0 1456a98fec87b | |
* | |
* ==> coredns [d6a261bca522] <== | |
* .:53 | |
* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7 | |
* CoreDNS-1.6.7 | |
* linux/amd64, go1.13.6, da7f65b | |
* | |
* ==> describe nodes <== | |
* Name: addons-20200701030206-8084 | |
* Roles: master | |
* Labels: beta.kubernetes.io/arch=amd64 | |
* beta.kubernetes.io/os=linux | |
* kubernetes.io/arch=amd64 | |
* kubernetes.io/hostname=addons-20200701030206-8084 | |
* kubernetes.io/os=linux | |
* minikube.k8s.io/commit=8e52b6b82c25ed8422b28c3f8146a0b50f3ca74f | |
* minikube.k8s.io/name=addons-20200701030206-8084 | |
* minikube.k8s.io/updated_at=2020_07_01T03_03_34_0700 | |
* minikube.k8s.io/version=v1.12.0-beta.0 | |
* node-role.kubernetes.io/master= | |
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock | |
* node.alpha.kubernetes.io/ttl: 0 | |
* volumes.kubernetes.io/controller-managed-attach-detach: true | |
* CreationTimestamp: Wed, 01 Jul 2020 03:03:31 +0000 | |
* Taints: <none> | |
* Unschedulable: false | |
* Lease: | |
* HolderIdentity: addons-20200701030206-8084 | |
* AcquireTime: <unset> | |
* RenewTime: Wed, 01 Jul 2020 03:11:56 +0000 | |
* Conditions: | |
* Type Status LastHeartbeatTime LastTransitionTime Reason Message | |
* ---- ------ ----------------- ------------------ ------ ------- | |
* MemoryPressure False Wed, 01 Jul 2020 03:09:37 +0000 Wed, 01 Jul 2020 03:03:27 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available | |
* DiskPressure False Wed, 01 Jul 2020 03:09:37 +0000 Wed, 01 Jul 2020 03:03:27 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure | |
* PIDPressure False Wed, 01 Jul 2020 03:09:37 +0000 Wed, 01 Jul 2020 03:03:27 +0000 KubeletHasSufficientPID kubelet has sufficient PID available | |
* Ready True Wed, 01 Jul 2020 03:09:37 +0000 Wed, 01 Jul 2020 03:03:45 +0000 KubeletReady kubelet is posting ready status | |
* Addresses: | |
* InternalIP: 192.168.39.105 | |
* Hostname: addons-20200701030206-8084 | |
* Capacity: | |
* cpu: 2 | |
* ephemeral-storage: 16954224Ki | |
* hugepages-2Mi: 0 | |
* memory: 2470872Ki | |
* pods: 110 | |
* Allocatable: | |
* cpu: 2 | |
* ephemeral-storage: 16954224Ki | |
* hugepages-2Mi: 0 | |
* memory: 2470872Ki | |
* pods: 110 | |
* System Info: | |
* Machine ID: 11d7f8acaa014dd1a88f3c5ba725298e | |
* System UUID: 11d7f8ac-aa01-4dd1-a88f-3c5ba725298e | |
* Boot ID: 3a2b8acb-8700-4c04-87f6-71cbb4607c14 | |
* Kernel Version: 4.19.107 | |
* OS Image: Buildroot 2019.02.10 | |
* Operating System: linux | |
* Architecture: amd64 | |
* Container Runtime Version: docker://19.3.8 | |
* Kubelet Version: v1.18.3 | |
* Kube-Proxy Version: v1.18.3 | |
* Non-terminated Pods: (14 in total) | |
* Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE | |
* --------- ---- ------------ ---------- --------------- ------------- --- | |
* kube-system coredns-66bff467f8-hj7n4 100m (5%) 0 (0%) 70Mi (2%) 170Mi (7%) 8m19s | |
* kube-system etcd-addons-20200701030206-8084 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m24s | |
* kube-system ingress-nginx-controller-7bb4c67d67-fkjkd 100m (5%) 0 (0%) 90Mi (3%) 0 (0%) 8m19s | |
* kube-system kube-apiserver-addons-20200701030206-8084 250m (12%) 0 (0%) 0 (0%) 0 (0%) 8m24s | |
* kube-system kube-controller-manager-addons-20200701030206-8084 200m (10%) 0 (0%) 0 (0%) 0 (0%) 8m24s | |
* kube-system kube-proxy-8bljr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m19s | |
* kube-system kube-scheduler-addons-20200701030206-8084 100m (5%) 0 (0%) 0 (0%) 0 (0%) 8m24s | |
* kube-system metrics-server-7bc6d75975-qxr52 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m19s | |
* kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m24s | |
* olm catalog-operator-86f777cc59-n2z95 10m (0%) 0 (0%) 80Mi (3%) 0 (0%) 8m19s | |
* olm olm-operator-5fd48d8cd4-sh5bk 10m (0%) 0 (0%) 160Mi (6%) 0 (0%) 8m19s | |
* olm operatorhubio-catalog-9h9sw 10m (0%) 100m (5%) 50Mi (2%) 100Mi (4%) 7m59s | |
* olm packageserver-fc86cd5d4-djgms 10m (0%) 0 (0%) 50Mi (2%) 0 (0%) 7m55s | |
* olm packageserver-fc86cd5d4-wgfqr 10m (0%) 0 (0%) 50Mi (2%) 0 (0%) 7m55s | |
* Allocated resources: | |
* (Total limits may be over 100 percent, i.e., overcommitted.) | |
* Resource Requests Limits | |
* -------- -------- ------ | |
* cpu 800m (40%) 100m (5%) | |
* memory 550Mi (22%) 270Mi (11%) | |
* ephemeral-storage 0 (0%) 0 (0%) | |
* hugepages-2Mi 0 (0%) 0 (0%) | |
* Events: | |
* Type Reason Age From Message | |
* ---- ------ ---- ---- ------- | |
* Normal Starting 8m25s kubelet, addons-20200701030206-8084 Starting kubelet. | |
* Normal NodeHasSufficientMemory 8m25s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeHasSufficientMemory | |
* Normal NodeHasNoDiskPressure 8m25s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeHasNoDiskPressure | |
* Normal NodeHasSufficientPID 8m25s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeHasSufficientPID | |
* Normal NodeAllocatableEnforced 8m24s kubelet, addons-20200701030206-8084 Updated Node Allocatable limit across pods | |
* Normal Starting 8m18s kube-proxy, addons-20200701030206-8084 Starting kube-proxy. | |
* Normal NodeReady 8m15s kubelet, addons-20200701030206-8084 Node addons-20200701030206-8084 status is now: NodeReady | |
* | |
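The percentages in the Allocated resources table above are simply the summed pod requests and limits divided by the node's allocatable capacity (2 CPUs, 2470872Ki memory). A quick check of the arithmetic:

    package main

    import "fmt"

    func main() {
        // Figures copied from the node description above.
        const allocatableMilliCPU = 2000 // 2 CPUs
        const allocatableMemKi = 2470872 // allocatable memory
        cpuReqMilli := 800               // summed CPU requests (800m)
        memReqKi := 550 * 1024           // summed memory requests (550Mi)
        fmt.Printf("cpu: %dm (%d%%)\n", cpuReqMilli, cpuReqMilli*100/allocatableMilliCPU)
        fmt.Printf("memory: 550Mi (%d%%)\n", memReqKi*100/allocatableMemKi)
        // Prints 40% and 22%, matching the table.
    }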
* ==> dmesg <== | |
* [ +0.000000] Call Trace: | |
* [ +0.000005] dump_stack+0x66/0x8b | |
* [ +0.000004] dump_header+0x66/0x28e | |
* [ +0.000001] oom_kill_process+0x251/0x270 | |
* [ +0.000002] out_of_memory+0x10b/0x4a0 | |
* [ +0.000003] mem_cgroup_out_of_memory+0xb0/0xd0 | |
* [ +0.000002] try_charge+0x728/0x770 | |
* [ +0.000001] ? __alloc_pages_nodemask+0x11f/0x2a0 | |
* [ +0.000001] mem_cgroup_try_charge+0x81/0x170 | |
* [ +0.000001] mem_cgroup_try_charge_delay+0x17/0x40 | |
* [ +0.000002] __handle_mm_fault+0x7be/0xe50 | |
* [ +0.000002] handle_mm_fault+0xd7/0x230 | |
* [ +0.000003] __do_page_fault+0x23e/0x4c0 | |
* [ +0.000003] ? async_page_fault+0x8/0x30 | |
* [ +0.000001] async_page_fault+0x1e/0x30 | |
* [ +0.000002] RIP: 0033:0xe0a8f5 | |
* [ +0.000001] Code: c3 48 8b 47 08 48 89 fa 48 83 e0 fe 48 8d 48 f0 48 39 f1 76 29 48 89 c1 49 89 f0 48 8d 3c 37 48 29 f1 49 83 c8 01 48 83 c9 01 <4c> 89 07 48 89 4f 08 48 89 0c 02 4c 89 42 08 e9 d9 fc ff ff c3 41 | |
* [ +0.000025] RSP: 002b:00007fa815df7b78 EFLAGS: 00010202 | |
* [ +0.000001] RAX: 0000000000016ac0 RBX: 0000000006ca0530 RCX: 0000000000001601 | |
* [ +0.000001] RDX: 0000000006ca0530 RSI: 00000000000154c0 RDI: 0000000006cb59f0 | |
* [ +0.000000] RBP: 0000000006ca1000 R08: 00000000000154c1 R09: 00000000011ce91e | |
* [ +0.000001] R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000000 | |
* [ +0.000000] R13: fc00000000000000 R14: 0000000006c7fb80 R15: 000000000197cb40 | |
* [ +0.000123] Memory cgroup out of memory: Kill process 9672 (registry-server) score 2054 or sacrifice child | |
* [ +0.000041] Killed process 9672 (registry-server) total-vm:191280kB, anon-rss:97532kB, file-rss:14516kB, shmem-rss:0kB | |
* | |
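The dmesg tail above is the root cause behind the registry-server crash loop: the memory cgroup OOM-killer fired inside the pod's own limit, not because the node ran out of memory. The operatorhubio-catalog pod declares a 100Mi memory limit in the node description earlier, and the Killed line's resident-set figures land just past it:

    package main

    import "fmt"

    func main() {
        // Figures copied from the "Killed process 9672" dmesg line and the
        // operatorhubio-catalog row of the node description above.
        const (
            anonRSSKB = 97532 // anon-rss
            fileRSSKB = 14516 // file-rss
            limitMiB  = 100   // pod memory limit (100Mi)
        )
        usedMiB := float64(anonRSSKB+fileRSSKB) / 1024
        fmt.Printf("resident set ~%.0fMi vs limit %dMi\n", usedMiB, limitMiB)
        // ~109Mi > 100Mi, hence the cgroup OOM kill and the kubelet
        // CrashLoopBackOff messages for registry-server.
    }

Raising the catalog source's memory limit would be the obvious first fix to try here.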
* ==> etcd [663dada323e9] <== | |
* 2020-07-01 03:03:28.332386 I | etcdserver: setting up the initial cluster version to 3.4 | |
* 2020-07-01 03:03:28.332551 I | etcdserver: published {Name:addons-20200701030206-8084 ClientURLs:[https://192.168.39.105:2379]} to cluster f45b5855e490ef48 | |
* 2020-07-01 03:03:28.332600 I | embed: ready to serve client requests | |
* 2020-07-01 03:03:28.333537 I | embed: serving client requests on 127.0.0.1:2379 | |
* 2020-07-01 03:03:28.334586 I | embed: ready to serve client requests | |
* 2020-07-01 03:03:28.337193 I | embed: serving client requests on 192.168.39.105:2379 | |
* 2020-07-01 03:03:28.338344 N | etcdserver/membership: set the initial cluster version to 3.4 | |
* 2020-07-01 03:03:28.338411 I | etcdserver/api: enabled capabilities for version 3.4 | |
* 2020-07-01 03:04:04.323170 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/clusterserviceversions/olm/packageserver\" " with result "range_response_count:1 size:9305" took too long (105.645771ms) to execute | |
* 2020-07-01 03:04:05.236139 W | etcdserver: read-only range request "key:\"/registry/endpointslices/olm/v1-packages-operators-coreos-com-gfbjh\" " with result "range_response_count:1 size:953" took too long (127.168286ms) to execute | |
* 2020-07-01 03:04:05.805401 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (119.574479ms) to execute | |
* 2020-07-01 03:04:05.808506 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (122.545256ms) to execute | |
* 2020-07-01 03:04:05.820836 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/clusterserviceversions/olm/packageserver\" " with result "range_response_count:1 size:9775" took too long (135.01854ms) to execute | |
* 2020-07-01 03:04:08.142775 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (111.648032ms) to execute | |
* 2020-07-01 03:04:08.143088 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (231.508641ms) to execute | |
* 2020-07-01 03:04:08.143309 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/catalogsources/\" range_end:\"/registry/operators.coreos.com/catalogsources0\" " with result "range_response_count:1 size:2019" took too long (159.192904ms) to execute | |
* 2020-07-01 03:04:08.143552 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59530" took too long (113.041301ms) to execute | |
* 2020-07-01 03:04:16.739873 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59653" took too long (208.500076ms) to execute | |
* 2020-07-01 03:04:16.740802 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (228.630795ms) to execute | |
* 2020-07-01 03:04:23.380208 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (268.088725ms) to execute | |
* 2020-07-01 03:04:29.198339 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59989" took too long (110.334524ms) to execute | |
* 2020-07-01 03:04:29.198868 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59989" took too long (115.991158ms) to execute | |
* 2020-07-01 03:04:29.199825 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:14 size:59989" took too long (116.97369ms) to execute | |
* 2020-07-01 03:09:41.650732 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/catalogsources/olm/operatorhubio-catalog\" " with result "range_response_count:1 size:2026" took too long (196.156523ms) to execute | |
* 2020-07-01 03:09:41.651243 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1314" took too long (203.221616ms) to execute | |
* | |
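etcd emits these warnings whenever a read-only range request overruns its slow-request threshold (100ms by default in etcd 3.4, an assumption from upstream defaults rather than this log). Everything above sits in the 100-270ms band, which points at a slow or contended disk on the 2-CPU test VM rather than an outage. A small sketch for extracting the latencies from lines like these:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Matches the duration inside etcd's "took too long (...)" warnings.
        re := regexp.MustCompile(`took too long \(([0-9.]+ms)\)`)
        line := `read-only range request ... took too long (268.088725ms) to execute`
        if m := re.FindStringSubmatch(line); m != nil {
            fmt.Println("slow request:", m[1])
        }
    }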
* ==> kernel <== | |
* 03:12:00 up 9 min, 0 users, load average: 0.90, 0.89, 0.56 | |
* Linux addons-20200701030206-8084 4.19.107 #1 SMP Thu May 28 15:07:17 PDT 2020 x86_64 GNU/Linux | |
* PRETTY_NAME="Buildroot 2019.02.10" | |
* | |
* ==> kube-apiserver [b7ced5cccc0a] <== | |
* I0701 03:03:41.407486 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps | |
* W0701 03:03:42.028314 1 handler_proxy.go:102] no RequestInfo found in the context | |
* E0701 03:03:42.028373 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable | |
* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]] | |
* I0701 03:03:42.028384 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. | |
* E0701 03:03:57.879055 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist | |
* I0701 03:03:57.879070 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. | |
* I0701 03:04:01.803924 1 client.go:361] parsed scheme: "endpoint" | |
* I0701 03:04:01.803984 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] | |
* I0701 03:04:01.886189 1 client.go:361] parsed scheme: "endpoint" | |
* I0701 03:04:01.886333 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] | |
* W0701 03:04:06.880277 1 handler_proxy.go:102] no RequestInfo found in the context | |
* E0701 03:04:06.880349 1 controller.go:114] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable | |
* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]] | |
* I0701 03:04:06.880361 1 controller.go:127] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue. | |
* E0701 03:04:57.883519 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist | |
* I0701 03:04:57.883592 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. | |
* E0701 03:06:57.886892 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist | |
* I0701 03:06:57.887194 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. | |
* E0701 03:08:32.648453 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist | |
* I0701 03:08:32.648470 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. | |
* E0701 03:09:32.651434 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist | |
* I0701 03:09:32.651549 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. | |
* E0701 03:11:32.655413 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist | |
* I0701 03:11:32.655497 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. | |
* | |
* ==> kube-controller-manager [24d686838dec] <== | |
* I0701 03:03:46.000600 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"registry-proxy", UID:"fe3c5a0b-c38c-4314-9a05-53037ff158f0", APIVersion:"apps/v1", ResourceVersion:"508", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: registry-proxy-7kmmq | |
* I0701 03:03:50.871083 1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode. | |
* I0701 03:03:52.669012 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"ingress-nginx-admission-create", UID:"1ae488a6-abe8-4bc0-965b-c398118daf32", APIVersion:"batch/v1", ResourceVersion:"387", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed | |
* I0701 03:03:57.805105 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"ingress-nginx-admission-patch", UID:"f6bb03c0-6ca4-47f0-900a-b1e273dbb951", APIVersion:"batch/v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
* I0701 03:04:05.000820 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"olm", Name:"packageserver", UID:"4f1d1f78-eb9b-4bf3-9753-09b07a182891", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set packageserver-fc86cd5d4 to 2
* I0701 03:04:05.059501 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"olm", Name:"packageserver-fc86cd5d4", UID:"c9874dfe-0029-4742-a3d8-1f87cb754bb4", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-fc86cd5d4-djgms
* I0701 03:04:05.170272 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"olm", Name:"packageserver-fc86cd5d4", UID:"c9874dfe-0029-4742-a3d8-1f87cb754bb4", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-fc86cd5d4-wgfqr
* E0701 03:04:11.881078 1 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: packages.operators.coreos.com/v1: the server is currently unable to handle the request
* I0701 03:04:11.881240 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for catalogsources.operators.coreos.com
* I0701 03:04:11.881307 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for operatorgroups.operators.coreos.com
* I0701 03:04:11.881325 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for subscriptions.operators.coreos.com
* I0701 03:04:11.881345 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for installplans.operators.coreos.com
* I0701 03:04:11.881361 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for clusterserviceversions.operators.coreos.com
* I0701 03:04:11.881406 1 shared_informer.go:223] Waiting for caches to sync for resource quota
* I0701 03:04:11.981723 1 shared_informer.go:230] Caches are synced for resource quota
* I0701 03:04:13.123021 1 request.go:621] Throttling request took 1.047575349s, request: GET:https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
* W0701 03:04:13.926487 1 garbagecollector.go:644] failed to discover some groups: map[packages.operators.coreos.com/v1:the server is currently unable to handle the request]
* E0701 03:04:14.127948 1 memcache.go:206] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
* E0701 03:04:14.428219 1 memcache.go:111] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
* I0701 03:04:14.429209 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
* I0701 03:04:14.429275 1 shared_informer.go:230] Caches are synced for garbage collector
* E0701 03:04:39.175318 1 clusterroleaggregation_controller.go:181] olm-operators-edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "olm-operators-edit": the object has been modified; please apply your changes to the latest version and try again
* E0701 03:04:39.185205 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
* E0701 03:04:39.186128 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
* E0701 03:04:39.204080 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
*
* ==> kube-proxy [40c9a46cf08a] <==
* W0701 03:03:42.853505 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
* I0701 03:03:42.861890 1 node.go:136] Successfully retrieved node IP: 192.168.39.105
* I0701 03:03:42.861937 1 server_others.go:186] Using iptables Proxier.
* W0701 03:03:42.861945 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
* I0701 03:03:42.861949 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
* I0701 03:03:42.862522 1 server.go:583] Version: v1.18.3
* I0701 03:03:42.863107 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
* I0701 03:03:42.863131 1 conntrack.go:52] Setting nf_conntrack_max to 131072
* I0701 03:03:42.863538 1 conntrack.go:83] Setting conntrack hashsize to 32768
* I0701 03:03:42.867910 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
* I0701 03:03:42.868306 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
* I0701 03:03:42.871109 1 config.go:315] Starting service config controller
* I0701 03:03:42.871148 1 shared_informer.go:223] Waiting for caches to sync for service config
* I0701 03:03:42.871165 1 config.go:133] Starting endpoints config controller
* I0701 03:03:42.871173 1 shared_informer.go:223] Waiting for caches to sync for endpoints config
* I0701 03:03:42.971416 1 shared_informer.go:230] Caches are synced for endpoints config
* I0701 03:03:42.971523 1 shared_informer.go:230] Caches are synced for service config
*
* ==> kube-scheduler [a8673db5ff2a] <==
* W0701 03:03:31.650803 1 authentication.go:40] Authentication is disabled
* I0701 03:03:31.650814 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
* I0701 03:03:31.652329 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
* I0701 03:03:31.652574 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0701 03:03:31.652711 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0701 03:03:31.652730 1 tlsconfig.go:240] Starting DynamicServingCertificateController
* E0701 03:03:31.657008 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0701 03:03:31.658164 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0701 03:03:31.658324 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E0701 03:03:31.658888 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0701 03:03:31.659056 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0701 03:03:31.659357 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E0701 03:03:31.659504 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0701 03:03:31.659723 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E0701 03:03:31.659789 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0701 03:03:32.465153 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0701 03:03:32.497519 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0701 03:03:32.559891 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* I0701 03:03:35.752931 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* E0701 03:03:40.948371 1 factory.go:503] pod: kube-system/ingress-nginx-admission-create-59b72 is already present in the active queue
* E0701 03:03:40.967858 1 factory.go:503] pod: kube-system/ingress-nginx-admission-patch-f8zdn is already present in the active queue
* E0701 03:03:41.332123 1 factory.go:503] pod: kube-system/tiller-deploy-78ff886c54-7kcct is already present in the active queue
* E0701 03:03:41.345197 1 factory.go:503] pod: kube-system/metrics-server-7bc6d75975-qxr52 is already present in the active queue
* E0701 03:03:41.367475 1 factory.go:503] pod: olm/olm-operator-5fd48d8cd4-sh5bk is already present in the active queue
* E0701 03:03:41.389016 1 factory.go:503] pod: kube-system/ingress-nginx-controller-7bb4c67d67-fkjkd is already present in the active queue
*
* ==> kubelet <==
* -- Logs begin at Wed 2020-07-01 03:02:14 UTC, end at Wed 2020-07-01 03:12:00 UTC. --
* Jul 01 03:09:41 addons-20200701030206-8084 kubelet[3731]: W0701 03:09:41.359091 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for
* Jul 01 03:10:11 addons-20200701030206-8084 kubelet[3731]: W0701 03:10:11.710194 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for
* Jul 01 03:10:11 addons-20200701030206-8084 kubelet[3731]: I0701 03:10:11.716171 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 208781ffec9d6ee31318647e8ce976b29384e7b638b2006baff53d73bdd97b9b
* Jul 01 03:10:11 addons-20200701030206-8084 kubelet[3731]: I0701 03:10:11.716706 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6197a0fa774c158d7e830ac228320439540a8ddaae85f40b359301345a689b5c
* Jul 01 03:10:11 addons-20200701030206-8084 kubelet[3731]: E0701 03:10:11.724093 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 2m40s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:10:12 addons-20200701030206-8084 kubelet[3731]: W0701 03:10:12.727026 3731 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for olm/operatorhubio-catalog-9h9sw through plugin: invalid network status for
* Jul 01 03:10:12 addons-20200701030206-8084 kubelet[3731]: I0701 03:10:12.774389 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6197a0fa774c158d7e830ac228320439540a8ddaae85f40b359301345a689b5c
* Jul 01 03:10:12 addons-20200701030206-8084 kubelet[3731]: E0701 03:10:12.774757 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 2m40s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:10:24 addons-20200701030206-8084 kubelet[3731]: I0701 03:10:24.629843 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6197a0fa774c158d7e830ac228320439540a8ddaae85f40b359301345a689b5c
* Jul 01 03:10:24 addons-20200701030206-8084 kubelet[3731]: E0701 03:10:24.631034 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 2m40s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:10:37 addons-20200701030206-8084 kubelet[3731]: I0701 03:10:37.631509 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6197a0fa774c158d7e830ac228320439540a8ddaae85f40b359301345a689b5c
* Jul 01 03:10:37 addons-20200701030206-8084 kubelet[3731]: E0701 03:10:37.631889 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 2m40s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:10:51 addons-20200701030206-8084 kubelet[3731]: I0701 03:10:51.630062 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6197a0fa774c158d7e830ac228320439540a8ddaae85f40b359301345a689b5c
* Jul 01 03:10:51 addons-20200701030206-8084 kubelet[3731]: E0701 03:10:51.630968 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 2m40s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:11:04 addons-20200701030206-8084 kubelet[3731]: I0701 03:11:04.629759 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6197a0fa774c158d7e830ac228320439540a8ddaae85f40b359301345a689b5c
* Jul 01 03:11:04 addons-20200701030206-8084 kubelet[3731]: E0701 03:11:04.630117 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 2m40s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:11:16 addons-20200701030206-8084 kubelet[3731]: I0701 03:11:16.629761 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6197a0fa774c158d7e830ac228320439540a8ddaae85f40b359301345a689b5c
* Jul 01 03:11:16 addons-20200701030206-8084 kubelet[3731]: E0701 03:11:16.630187 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 2m40s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:11:30 addons-20200701030206-8084 kubelet[3731]: I0701 03:11:30.630363 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6197a0fa774c158d7e830ac228320439540a8ddaae85f40b359301345a689b5c
* Jul 01 03:11:30 addons-20200701030206-8084 kubelet[3731]: E0701 03:11:30.630749 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 2m40s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:11:43 addons-20200701030206-8084 kubelet[3731]: I0701 03:11:43.629779 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6197a0fa774c158d7e830ac228320439540a8ddaae85f40b359301345a689b5c
* Jul 01 03:11:43 addons-20200701030206-8084 kubelet[3731]: E0701 03:11:43.630702 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 2m40s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:11:58 addons-20200701030206-8084 kubelet[3731]: I0701 03:11:58.629807 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6197a0fa774c158d7e830ac228320439540a8ddaae85f40b359301345a689b5c
* Jul 01 03:11:58 addons-20200701030206-8084 kubelet[3731]: E0701 03:11:58.630146 3731 pod_workers.go:191] Error syncing pod 67b75841-1d08-45d6-849d-d4df7d2309ac ("operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"), skipping: failed to "StartContainer" for "registry-server" with CrashLoopBackOff: "back-off 2m40s restarting failed container=registry-server pod=operatorhubio-catalog-9h9sw_olm(67b75841-1d08-45d6-849d-d4df7d2309ac)"
* Jul 01 03:11:59 addons-20200701030206-8084 kubelet[3731]: I0701 03:11:59.995699 3731 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: f34af38da2a244711c04394f746a798ee2b720389b1e7950ef7a900071a733b6
*
* ==> storage-provisioner [94232379c158] <==
-- /stdout --
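Note on the dump above: the clusterroleaggregation_controller.go "Operation cannot be fulfilled ... the object has been modified" errors are routine optimistic-concurrency conflicts that the controller retries on its own, and the kubelet's "back-off 2m40s" on registry-server is CrashLoopBackOff's exponential delay (10s doubling per restart, capped at 5m; 10+20+40+80 leads to a 160s = 2m40s wait). For reference only, a minimal client-go sketch of the standard retry-on-conflict pattern; the function and label names are illustrative, not minikube's or the controller's actual code:

package clusterroleretry

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// labelClusterRole re-reads the object on every attempt so each update
// carries the latest resourceVersion, then lets client-go retry on
// 409 Conflict errors like the ones logged above.
func labelClusterRole(cs *kubernetes.Clientset, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		cr, err := cs.RbacV1().ClusterRoles().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if cr.Labels == nil {
			cr.Labels = map[string]string{}
		}
		cr.Labels["example.com/aggregated"] = "true" // hypothetical label
		_, err = cs.RbacV1().ClusterRoles().Update(context.TODO(), cr, metav1.UpdateOptions{})
		return err
	})
}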
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20200701030206-8084 -n addons-20200701030206-8084
helpers_test.go:254: (dbg) Run: kubectl --context addons-20200701030206-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:254: (dbg) Non-zero exit: kubectl --context addons-20200701030206-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: exec: "kubectl": executable file not found in $PATH (281ns)
helpers_test.go:256: kubectl --context addons-20200701030206-8084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: exec: "kubectl": executable file not found in $PATH
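Note: the non-zero exit above comes from the test helper shelling out to kubectl for a post-mortem pod listing on an agent that has no kubectl on $PATH; the run continues regardless. A minimal sketch of the kind of preflight guard that would surface this early (hypothetical helper, not the actual helpers_test.go code):

package main

import (
	"fmt"
	"os/exec"
)

// requireKubectl resolves kubectl on $PATH, mirroring the failure mode
// logged above ("executable file not found in $PATH").
func requireKubectl() (string, error) {
	path, err := exec.LookPath("kubectl")
	if err != nil {
		return "", fmt.Errorf("kubectl not found in $PATH: %w", err)
	}
	return path, nil
}

func main() {
	if path, err := requireKubectl(); err != nil {
		fmt.Println("skipping pod listing:", err)
	} else {
		fmt.Println("using kubectl at", path)
	}
}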
addons_test.go:71: (dbg) Run: out/minikube-linux-amd64 stop -p addons-20200701030206-8084
addons_test.go:71: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20200701030206-8084: (14.093054537s)
addons_test.go:75: (dbg) Run: out/minikube-linux-amd64 addons enable dashboard -p addons-20200701030206-8084
addons_test.go:79: (dbg) Run: out/minikube-linux-amd64 addons disable dashboard -p addons-20200701030206-8084
helpers_test.go:170: Cleaning up "addons-20200701030206-8084" profile ...
helpers_test.go:171: (dbg) Run: out/minikube-linux-amd64 delete -p addons-20200701030206-8084
=== RUN TestCertOptions
=== PAUSE TestCertOptions
=== RUN TestDockerFlags
=== PAUSE TestDockerFlags
=== RUN TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== RUN TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== RUN TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== RUN TestHyperKitDriverInstallOrUpdate
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)
driver_install_or_update_test.go:102: Skip if not darwin.
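Note: the SKIP above is a plain GOOS gate, since the HyperKit driver only exists on macOS. A sketch of such a check (illustrative; the real gate lives at driver_install_or_update_test.go:102):

package driver_test

import (
	"runtime"
	"testing"
)

// TestHyperKitGate shows the platform gate implied by the SKIP line above.
func TestHyperKitGate(t *testing.T) {
	if runtime.GOOS != "darwin" {
		t.Skip("Skip if not darwin.")
	}
}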
=== RUN TestErrorSpam
=== PAUSE TestErrorSpam
=== RUN TestFunctional
=== RUN TestFunctional/serial
=== RUN TestFunctional/serial/CopySyncFile
=== RUN TestFunctional/serial/StartWithProxy
=== RUN TestFunctional/serial/SoftStart
=== RUN TestFunctional/serial/KubeContext
=== RUN TestFunctional/serial/KubectlGetPods
=== RUN TestFunctional/serial/CacheCmd
=== RUN TestFunctional/serial/CacheCmd/cache
=== RUN TestFunctional/serial/CacheCmd/cache/add
=== RUN TestFunctional/serial/CacheCmd/cache/delete_busybox:1.28.4-glibc
=== RUN TestFunctional/serial/CacheCmd/cache/list
=== RUN TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
=== RUN TestFunctional/serial/CacheCmd/cache/cache_reload
=== RUN TestFunctional/serial/CacheCmd/cache/delete
=== RUN TestFunctional/serial/MinikubeKubectlCmd
=== PAUSE TestFunctional
=== RUN TestGvisorAddon
=== PAUSE TestGvisorAddon
=== RUN TestMultiNode
=== RUN TestMultiNode/serial
=== RUN TestMultiNode/serial/FreshStart2Nodes
=== RUN TestMultiNode/serial/AddNode
=== RUN TestMultiNode/serial/StopNode
=== RUN TestMultiNode/serial/StartAfterStop
=== RUN TestMultiNode/serial/DeleteNode
=== RUN TestMultiNode/serial/StopMultiNode
=== RUN TestMultiNode/serial/RestartMultiNode
--- FAIL: TestMultiNode (396.72s)
--- FAIL: TestMultiNode/serial (393.41s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (181.13s)
multinode_test.go:65: (dbg) Run: out/minikube-linux-amd64 start -p multinode-20200701031411-8084 --wait=true --memory=2200 --nodes=2 --driver=kvm2
multinode_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20200701031411-8084 --wait=true --memory=2200 --nodes=2 --driver=kvm2 : (3m0.758212273s)
multinode_test.go:71: (dbg) Run: out/minikube-linux-amd64 -p multinode-20200701031411-8084 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (88.72s)
multinode_test.go:89: (dbg) Run: out/minikube-linux-amd64 node add -p multinode-20200701031411-8084 -v 3 --alsologtostderr
multinode_test.go:89: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20200701031411-8084 -v 3 --alsologtostderr: (1m28.212112207s)
multinode_test.go:95: (dbg) Run: out/minikube-linux-amd64 -p multinode-20200701031411-8084 statu |