Created
November 3, 2019 14:07
This file has been truncated.
wrapper.sh] [INFO] Wrapping Test Command: `bash -c gsutil cp -P gs://bentheelder-kind-ci-builds/latest/kind-linux-amd64 "${PATH%%:*}/kind" && gsutil cat gs://bentheelder-kind-ci-builds/latest/e2e-k8s.sh | sh`
wrapper.sh] [INFO] Running in: gcr.io/k8s-testimages/krte:v20191020-6567e5c-master
wrapper.sh] [INFO] See: https://github.com/kubernetes/test-infra/blob/master/images/krte/wrapper.sh
================================================================================
wrapper.sh] [SETUP] Performing pre-test setup ...
wrapper.sh] [SETUP] Bazel remote cache is enabled, generating .bazelrcs ...
create_bazel_cache_rcs.sh: Configuring '/root/.bazelrc' and '/etc/bazel.bazelrc' with
# ------------------------------------------------------------------------------
startup --host_jvm_args=-Dbazel.DigestFunction=sha256
build --experimental_remote_spawn_cache
build --remote_local_fallback
build --remote_http_cache=http://bazel-cache.default.svc.cluster.local.:8080/kubernetes/kubernetes,7f7656b63c121afcda83188b05b5fd13
# ------------------------------------------------------------------------------
wrapper.sh] [SETUP] Done setting up .bazelrcs
wrapper.sh] [SETUP] Docker in Docker enabled, initializing ...
Starting Docker: docker.
wrapper.sh] [SETUP] Waiting for Docker to be ready, sleeping for 1 seconds ...
wrapper.sh] [SETUP] Done setting up Docker in Docker.
wrapper.sh] [SETUP] Setting SOURCE_DATE_EPOCH for build reproducibility ...
wrapper.sh] [SETUP] exported SOURCE_DATE_EPOCH=1572763301
================================================================================
wrapper.sh] [TEST] Running Test Command: `bash -c gsutil cp -P gs://bentheelder-kind-ci-builds/latest/kind-linux-amd64 "${PATH%%:*}/kind" && gsutil cat gs://bentheelder-kind-ci-builds/latest/e2e-k8s.sh | sh` ...
Copying gs://bentheelder-kind-ci-builds/latest/kind-linux-amd64...
/ [0 files][    0.0 B/  9.4 MiB]
/ [1 files][  9.4 MiB/  9.4 MiB]
Operation completed over 1 objects/9.4 MiB.
+ main
+ mktemp -d
+ TMP_DIR=/tmp/tmp.0uZ1no4LVH
+ trap cleanup EXIT
+ export ARTIFACTS=/logs/artifacts
+ mkdir -p /logs/artifacts
+ KUBECONFIG=/root/.kube/kind-test-config
+ export KUBECONFIG
+ echo exported KUBECONFIG=/root/.kube/kind-test-config
exported KUBECONFIG=/root/.kube/kind-test-config
+ kind version
kind v0.6.0-alpha+0ff0546bc81543 go1.13.3 linux/amd64
+ BUILD_TYPE=bazel
+ [ bazel = bazel ]
+ build_with_bazel
+ [ true = true ]
+ create_bazel_cache_rcs.sh
create_bazel_cache_rcs.sh: Configuring '/root/.bazelrc' and '/etc/bazel.bazelrc' with
# ------------------------------------------------------------------------------
startup --host_jvm_args=-Dbazel.DigestFunction=sha256
build --experimental_remote_spawn_cache
build --remote_local_fallback
build --remote_http_cache=http://bazel-cache.default.svc.cluster.local.:8080/kubernetes/kubernetes,7f7656b63c121afcda83188b05b5fd13
# ------------------------------------------------------------------------------
+ kind build node-image --type=bazel
Starting to build Kubernetes
$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
WARNING: Option 'experimental_remote_spawn_cache' is deprecated
WARNING: Option 'experimental_remote_spawn_cache' is deprecated
WARNING: Option 'experimental_remote_spawn_cache' is deprecated
WARNING: Option 'experimental_remote_spawn_cache' is deprecated
INFO: Invocation ID: ca63bdfb-2099-49a8-a989-f344b0b3a22d
Loading:
Loading: 0 packages loaded
Loading: 0 packages loaded
Loading: 0 packages loaded
Loading: 0 packages loaded
Loading: 0 packages loaded
Loading: 3 packages loaded
    currently loading: build
Loading: 3 packages loaded
    currently loading: build
Analyzing: 4 targets (4 packages loaded, 0 targets configured)
Analyzing: 4 targets (15 packages loaded, 31 targets configured)
Analyzing: 4 targets (16 packages loaded, 31 targets configured)
Analyzing: 4 targets (16 packages loaded, 31 targets configured)
Analyzing: 4 targets (936 packages loaded, 8838 targets configured)
Analyzing: 4 targets (2184 packages loaded, 16012 targets configured)
INFO: Analysed 4 targets (2184 packages loaded, 17236 targets configured).
Building: checking cached actions
INFO: Found 4 targets...
[0 / 20] [-----] Expanding template external/bazel_tools/tools/build_defs/hash/sha256 [for host]
[61 / 1,547] GoStdlib external/io_bazel_rules_go/linux_amd64_pure_stripped/stdlib%/pkg; 3s remote-cache ... (8 actions, 7 running)
[1,007 / 2,680] GoCompilePkg staging/src/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/linux_amd64_pure_stripped/go_default_library%/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions.a; 0s remote-cache ... (7 actions running)
[1,910 / 2,701] GoCompilePkg cmd/kubeadm/app/cmd/phases/join/linux_amd64_pure_stripped/go_default_library%/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join.a; 0s remote-cache ... (7 actions, 6 running)
[2,484 / 3,070] GoLink cmd/kubectl/linux_amd64_pure_stripped/kubectl; 7s linux-sandbox ... (8 actions, 7 running)
[2,921 / 3,139] GoLink cmd/kube-apiserver/linux_amd64_pure_stripped/kube-apiserver; 13s linux-sandbox ... (8 actions, 7 running)
[3,115 / 3,144] GoLink cmd/kube-apiserver/linux_amd64_pure_stripped/kube-apiserver; 23s linux-sandbox ... (4 actions, 3 running)
[3,124 / 3,144] GoLink cmd/kubelet/kubelet; 19s linux-sandbox ... (3 actions, 2 running)
[3,134 / 3,144] ImageLayer build/kube-apiserver-internal-layer.tar; 4s linux-sandbox ... (2 actions, 1 running)
[3,143 / 3,144] Executing genrule //build:gen_kube-apiserver.tar; 1s linux-sandbox
INFO: Elapsed time: 132.036s, Critical Path: 76.59s
INFO: 3095 processes: 3037 remote cache hit, 58 linux-sandbox.
INFO: Build completed successfully, 3144 total actions
INFO: Build completed successfully, 3144 total actions
Finished building Kubernetes
Building node image in: /tmp/kind-node-image098734066
Starting image build ...
Building in kind-build-4073b6e5-ecf9-4c78-a22c-7afa63e4592b
fixed: k8s.gcr.io/kube-apiserver-amd64 -> k8s.gcr.io/kube-apiserver
fixed: k8s.gcr.io/kube-controller-manager-amd64 -> k8s.gcr.io/kube-controller-manager
fixed: k8s.gcr.io/kube-proxy-amd64 -> k8s.gcr.io/kube-proxy
fixed: k8s.gcr.io/kube-scheduler-amd64 -> k8s.gcr.io/kube-scheduler
Detected built images: k8s.gcr.io/kube-apiserver:v1.18.0-alpha.0.178_0c66e64b140011, k8s.gcr.io/kube-controller-manager:v1.18.0-alpha.0.178_0c66e64b140011, k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.178_0c66e64b140011, k8s.gcr.io/kube-scheduler:v1.18.0-alpha.0.178_0c66e64b140011
Pulling: kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555
Pulling: k8s.gcr.io/pause:3.1
Pulling: k8s.gcr.io/etcd:3.4.3-0
Pulling: k8s.gcr.io/coredns:1.6.2
fixed: k8s.gcr.io/kube-proxy-amd64 -> k8s.gcr.io/kube-proxy
fixed: k8s.gcr.io/kube-proxy-amd64 -> k8s.gcr.io/kube-proxy
fixed: k8s.gcr.io/kube-scheduler-amd64 -> k8s.gcr.io/kube-scheduler
fixed: k8s.gcr.io/kube-scheduler-amd64 -> k8s.gcr.io/kube-scheduler
fixed: k8s.gcr.io/kube-controller-manager-amd64 -> k8s.gcr.io/kube-controller-manager
fixed: k8s.gcr.io/kube-controller-manager-amd64 -> k8s.gcr.io/kube-controller-manager
fixed: k8s.gcr.io/kube-apiserver-amd64 -> k8s.gcr.io/kube-apiserver
fixed: k8s.gcr.io/kube-apiserver-amd64 -> k8s.gcr.io/kube-apiserver
sha256:a7e030da540ce82163d6b8fd1a997193f89ac0620573fd52a257f577f6b14ba2
Image build completed.
+ bazel build //cmd/kubectl //test/e2e:e2e.test //vendor/github.com/onsi/ginkgo/ginkgo
$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
Starting local Bazel server and connecting to it...
WARNING: Option 'experimental_remote_spawn_cache' is deprecated
WARNING: Option 'experimental_remote_spawn_cache' is deprecated
WARNING: Option 'experimental_remote_spawn_cache' is deprecated
WARNING: Option 'experimental_remote_spawn_cache' is deprecated
INFO: Invocation ID: 4302c021-5dab-4f55-ba83-1ab76d1bb27a
Loading:
Loading: 0 packages loaded
Analyzing: 3 targets (3 packages loaded)
Analyzing: 3 targets (3 packages loaded, 0 targets configured)
Analyzing: 3 targets (12 packages loaded, 19 targets configured)
Analyzing: 3 targets (172 packages loaded, 3807 targets configured)
Analyzing: 3 targets (495 packages loaded, 7877 targets configured)
Analyzing: 3 targets (653 packages loaded, 8573 targets configured)
Analyzing: 3 targets (878 packages loaded, 9627 targets configured)
Analyzing: 3 targets (1132 packages loaded, 11836 targets configured)
Analyzing: 3 targets (1460 packages loaded, 13305 targets configured)
Analyzing: 3 targets (1850 packages loaded, 15902 targets configured)
Analyzing: 3 targets (1857 packages loaded, 16775 targets configured)
Analyzing: 3 targets (1858 packages loaded, 16895 targets configured)
INFO: Analysed 3 targets (1858 packages loaded, 16899 targets configured).
INFO: Found 3 targets...
[1 / 16] [-----] BazelWorkspaceStatusAction stable-status.txt
[7 / 1,145] checking cached actions
[487 / 2,208] GoCompilePkg vendor/github.com/onsi/gomega/matchers/linux_amd64_stripped/go_default_library%/k8s.io/kubernetes/vendor/github.com/onsi/gomega/matchers.a; 0s remote-cache
[1,324 / 2,208] GoCompilePkg staging/src/k8s.io/apiextensions-apiserver/pkg/generated/openapi/linux_amd64_stripped/go_default_library%/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/generated/openapi.a; 0s remote-cache ... (7 actions running)
[2,092 / 2,208] GoCompilePkg test/utils/linux_amd64_stripped/go_default_library%/k8s.io/kubernetes/test/utils.a; 0s remote-cache ... (3 actions running)
[2,206 / 2,208] GoLink test/e2e/linux_amd64_stripped/_go_default_test-cgo; 1s linux-sandbox
[2,206 / 2,208] GoLink test/e2e/linux_amd64_stripped/_go_default_test-cgo; 11s linux-sandbox
[2,207 / 2,208] [-----] Executing genrule //test/e2e:gen_e2e.test
INFO: Elapsed time: 63.810s, Critical Path: 35.27s
INFO: 528 processes: 524 remote cache hit, 4 linux-sandbox.
INFO: Build completed successfully, 533 total actions
INFO: Build completed successfully, 533 total actions
+ mkdir -p _output/bin/
+ cp bazel-bin/test/e2e/e2e.test _output/bin/
+ find /home/prow/go/src/k8s.io/kubernetes/bazel-bin/ -name kubectl -type f
+ dirname /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl
+ PATH=/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped:/home/prow/go/bin:/google-cloud-sdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+ export PATH
+ [ -n ]
+ create_cluster
+ cat
+ NUM_NODES=2
+ KIND_CREATE_ATTEMPTED=true
+ kind create cluster --image=kindest/node:latest --retain --wait=1m -v=3 --config=/logs/artifacts/kind-config.yaml
DEBUG: exec/local.go:116] Running: "docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster=kind --format '{{.Names}}'"
Creating cluster "kind" ...
 • Ensuring node image (kindest/node:latest) 🖼  ...
DEBUG: exec/local.go:116] Running: "docker inspect --type=image kindest/node:latest"
DEBUG: docker/images.go:58] Image: kindest/node:latest present locally
 ✓ Ensuring node image (kindest/node:latest) 🖼
 • Preparing nodes 📦  ...
DEBUG: exec/local.go:116] Running: "docker info --format ''"'"'{{json .SecurityOptions}}'"'"''"
DEBUG: exec/local.go:116] Running: "docker run --hostname kind-worker2 --name kind-worker2 --label io.k8s.sigs.kind.role=worker --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run --volume /var --volume /lib/modules:/lib/modules:ro --detach --tty --label io.k8s.sigs.kind.cluster=kind kindest/node:latest"
DEBUG: exec/local.go:116] Running: "docker run --hostname kind-worker --name kind-worker --label io.k8s.sigs.kind.role=worker --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run --volume /var --volume /lib/modules:/lib/modules:ro --detach --tty --label io.k8s.sigs.kind.cluster=kind kindest/node:latest"
DEBUG: exec/local.go:116] Running: "docker run --hostname kind-control-plane --name kind-control-plane --label io.k8s.sigs.kind.role=control-plane --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run --volume /var --volume /lib/modules:/lib/modules:ro --detach --tty --label io.k8s.sigs.kind.cluster=kind --publish=127.0.0.1:33605:6443/TCP kindest/node:latest"
 ✓ Preparing nodes 📦
DEBUG: exec/local.go:116] Running: "docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster=kind --format '{{.Names}}'"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker"
 • Creating kubeadm config 📜  ...
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker"
DEBUG: exec/local.go:116] Running: "docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' kind-control-plane"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker"
DEBUG: exec/local.go:116] Running: "docker exec --privileged kind-worker cat /kind/version"
DEBUG: exec/local.go:116] Running: "docker exec --privileged kind-control-plane cat /kind/version"
DEBUG: exec/local.go:116] Running: "docker exec --privileged kind-worker2 cat /kind/version"
DEBUG: exec/local.go:116] Running: "docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' kind-worker2"
DEBUG: exec/local.go:116] Running: "docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' kind-worker"
DEBUG: exec/local.go:116] Running: "docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' kind-control-plane"
DEBUG: kubeadm/config.go:445] Configuration Input data: {kind v1.18.0-alpha.0.178+0c66e64b140011 172.17.0.3:6443 6443 127.0.0.1 false 172.17.0.2 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}
DEBUG: kubeadm/config.go:445] Configuration Input data: {kind v1.18.0-alpha.0.178+0c66e64b140011 172.17.0.3:6443 6443 127.0.0.1 false 172.17.0.4 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}
DEBUG: config/config.go:209] Using kubeadm config:
apiServer:
  certSANs:
  - localhost
  - 127.0.0.1
apiVersion: kubeadm.k8s.io/v1beta2
clusterName: kind
controlPlaneEndpoint: 172.17.0.3:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.18.0-alpha.0.178+0c66e64b140011
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler:
  extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.2
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.2
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.3:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.2
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
DEBUG: exec/local.go:116] Running: "docker exec --privileged kind-worker2 mkdir -p /kind"
DEBUG: config/config.go:209] Using kubeadm config:
apiServer:
  certSANs:
  - localhost
  - 127.0.0.1
apiVersion: kubeadm.k8s.io/v1beta2
clusterName: kind
controlPlaneEndpoint: 172.17.0.3:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.18.0-alpha.0.178+0c66e64b140011
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler:
  extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.4
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.4
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.3:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.4
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
DEBUG: exec/local.go:116] Running: "docker exec --privileged kind-worker mkdir -p /kind"
DEBUG: kubeadm/config.go:445] Configuration Input data: {kind v1.18.0-alpha.0.178+0c66e64b140011 172.17.0.3:6443 6443 127.0.0.1 true 172.17.0.3 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}
DEBUG: config/config.go:209] Using kubeadm config:
apiServer:
  certSANs:
  - localhost
  - 127.0.0.1
apiVersion: kubeadm.k8s.io/v1beta2
clusterName: kind
controlPlaneEndpoint: 172.17.0.3:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.18.0-alpha.0.178+0c66e64b140011
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler:
  extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.3
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.3
---
apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 172.17.0.3
    bindPort: 6443
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.3:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.3
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
DEBUG: exec/local.go:116] Running: "docker exec --privileged kind-control-plane mkdir -p /kind"
DEBUG: exec/local.go:116] Running: "docker exec --privileged -i kind-worker2 cp /dev/stdin /kind/kubeadm.conf"
DEBUG: exec/local.go:116] Running: "docker exec --privileged -i kind-worker cp /dev/stdin /kind/kubeadm.conf"
DEBUG: exec/local.go:116] Running: "docker exec --privileged -i kind-control-plane cp /dev/stdin /kind/kubeadm.conf"
 ✓ Creating kubeadm config 📜
 • Starting control-plane 🕹️ ...
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker"
DEBUG: exec/local.go:116] Running: "docker exec --privileged kind-control-plane kubeadm init --ignore-preflight-errors=all --config=/kind/kubeadm.conf --skip-token-print --v=6"
DEBUG: kubeadminit/init.go:74] I1103 06:54:42.437440     138 initconfiguration.go:207] loading configuration from "/kind/kubeadm.conf"
[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta2, Kind=JoinConfiguration
I1103 06:54:42.447314     138 feature_gate.go:216] feature gates: &{map[]}
[init] Using Kubernetes version: v1.18.0-alpha.0.178+0c66e64b140011
[preflight] Running pre-flight checks
I1103 06:54:42.447992     138 checks.go:577] validating Kubernetes and kubeadm version
I1103 06:54:42.448219     138 checks.go:166] validating if the firewall is enabled and active
I1103 06:54:42.461829     138 checks.go:201] validating availability of port 6443
I1103 06:54:42.462183     138 checks.go:201] validating availability of port 10251
I1103 06:54:42.462226     138 checks.go:201] validating availability of port 10252
I1103 06:54:42.462274     138 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I1103 06:54:42.462322     138 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I1103 06:54:42.462340     138 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I1103 06:54:42.462353     138 checks.go:286] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I1103 06:54:42.462369     138 checks.go:432] validating if the connectivity type is via proxy or direct
I1103 06:54:42.462426     138 checks.go:471] validating http connectivity to first IP address in the CIDR
I1103 06:54:42.462447     138 checks.go:471] validating http connectivity to first IP address in the CIDR
I1103 06:54:42.462459     138 checks.go:102] validating the container runtime
I1103 06:54:42.486634     138 checks.go:376] validating the presence of executable crictl
I1103 06:54:42.486709     138 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
I1103 06:54:42.486895     138 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
I1103 06:54:42.486995     138 checks.go:649] validating whether swap is enabled or not
I1103 06:54:42.487034     138 checks.go:376] validating the presence of executable ip
I1103 06:54:42.487127     138 checks.go:376] validating the presence of executable iptables
I1103 06:54:42.487279     138 checks.go:376] validating the presence of executable mount
I1103 06:54:42.487322     138 checks.go:376] validating the presence of executable nsenter
I1103 06:54:42.487375     138 checks.go:376] validating the presence of executable ebtables
I1103 06:54:42.487411     138 checks.go:376] validating the presence of executable ethtool
I1103 06:54:42.487439     138 checks.go:376] validating the presence of executable socat
I1103 06:54:42.487500     138 checks.go:376] validating the presence of executable tc
I1103 06:54:42.487524     138 checks.go:376] validating the presence of executable touch
I1103 06:54:42.487563     138 checks.go:520] running all checks
I1103 06:54:42.493529     138 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I1103 06:54:42.493881     138 checks.go:618] validating kubelet version
I1103 06:54:42.600742     138 checks.go:128] validating if the service is enabled and active
I1103 06:54:42.625103     138 checks.go:201] validating availability of port 10250
I1103 06:54:42.625585     138 checks.go:201] validating availability of port 2379
I1103 06:54:42.625755     138 checks.go:201] validating availability of port 2380
I1103 06:54:42.625934     138 checks.go:249] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1103 06:54:42.642072     138 checks.go:838] image exists: k8s.gcr.io/kube-apiserver:v1.18.0-alpha.0.178_0c66e64b140011
I1103 06:54:42.655767     138 checks.go:838] image exists: k8s.gcr.io/kube-controller-manager:v1.18.0-alpha.0.178_0c66e64b140011
I1103 06:54:42.669250     138 checks.go:838] image exists: k8s.gcr.io/kube-scheduler:v1.18.0-alpha.0.178_0c66e64b140011
I1103 06:54:42.683247     138 checks.go:838] image exists: k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.178_0c66e64b140011
I1103 06:54:42.704181     138 checks.go:838] image exists: k8s.gcr.io/pause:3.1
I1103 06:54:42.717674     138 checks.go:838] image exists: k8s.gcr.io/etcd:3.4.3-0
I1103 06:54:42.729600     138 checks.go:838] image exists: k8s.gcr.io/coredns:1.6.2
I1103 06:54:42.729659     138 kubelet.go:61] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1103 06:54:42.772283     138 kubelet.go:79] Starting the kubelet
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I1103 06:54:42.890977     138 certs.go:104] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kind-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.17.0.3 172.17.0.3 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
I1103 06:54:43.549128     138 certs.go:104] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
I1103 06:54:43.944482     138 certs.go:104] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I1103 06:54:45.487973     138 certs.go:70] creating a new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1103 06:54:45.669426     138 kubeconfig.go:79] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I1103 06:54:45.861455     138 kubeconfig.go:79] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I1103 06:54:46.332341     138 kubeconfig.go:79] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1103 06:54:46.816266     138 kubeconfig.go:79] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I1103 06:54:47.474436     138 manifests.go:90] [control-plane] getting StaticPodSpecs
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I1103 06:54:47.485277     138 manifests.go:115] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I1103 06:54:47.485562     138 manifests.go:90] [control-plane] getting StaticPodSpecs
W1103 06:54:47.485828     138 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1103 06:54:47.487648     138 manifests.go:115] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I1103 06:54:47.487763     138 manifests.go:90] [control-plane] getting StaticPodSpecs
W1103 06:54:47.487902     138 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1103 06:54:47.488992     138 manifests.go:115] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1103 06:54:47.490104     138 local.go:69] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I1103 06:54:47.490127     138 waitcontrolplane.go:80] [wait-control-plane] Waiting for the API server to be healthy
I1103 06:54:47.491386     138 loader.go:375] Config loaded from file: /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1103 06:54:47.492863     138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1103 06:54:47.993705     138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1103 06:54:48.493773     138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1103 06:54:48.994619     138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1103 06:54:49.494222     138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1103 06:54:49.994282     138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1103 06:54:50.494015     138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1103 06:54:50.993973     138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1103 06:54:51.493732     138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1103 06:54:51.993846     138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1103 06:54:52.494183     138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1103 06:54:57.368027     138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s 500 Internal Server Error in 4374 milliseconds
I1103 06:54:57.495956     138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds
I1103 06:54:57.996715     138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s 500 Internal Server Error in 3 milliseconds
I1103 06:54:58.498211 138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s 500 Internal Server Error in 4 milliseconds | |
[apiclient] All control plane components are healthy after 11.503138 seconds | |
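The [wait-control-plane] phase above polls GET /healthz roughly every 500ms (with a 4m0s budget) until the API server answers 200; the 500s just before success are the server's own health checks settling. A minimal stand-in for that retry loop, with a placeholder probe in place of the real HTTPS request:

```shell
# Minimal sketch of the wait-control-plane retry loop above.
# The probe here is a placeholder that starts succeeding on its third
# attempt; against a real cluster it would be something like:
#   curl -skf "https://172.17.0.3:6443/healthz?timeout=32s"
attempts=0
probe() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

status="timed out waiting for /healthz"
tries=0
while [ "$tries" -lt 40 ]; do   # kubeadm's budget is 4m0s at ~500ms per poll
  if probe; then
    status="healthy after $attempts attempts"
    break
  fi
  tries=$((tries + 1))
  # a real loop would sleep here: sleep 0.5
done
echo "$status"
```

The loop gives up after a fixed budget rather than retrying forever, which is why kubeadm prints "This can take up to 4m0s" before it starts polling.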
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1103 06:54:58.995448 138 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s 200 OK in 2 milliseconds
I1103 06:54:58.995555 138 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
I1103 06:54:59.002852 138 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 4 milliseconds
I1103 06:54:59.009583 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 5 milliseconds
I1103 06:54:59.015371 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 4 milliseconds
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
I1103 06:54:59.016484 138 uploadconfig.go:122] [upload-config] Uploading the kubelet component config to a ConfigMap
I1103 06:54:59.023519 138 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 5 milliseconds
I1103 06:54:59.027330 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 3 milliseconds
I1103 06:54:59.031695 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 3 milliseconds
I1103 06:54:59.031902 138 uploadconfig.go:127] [upload-config] Preserving the CRISocket information for the control-plane node
I1103 06:54:59.031928 138 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/run/containerd/containerd.sock" to the Node API object "kind-control-plane" as an annotation
I1103 06:54:59.535391 138 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-control-plane 200 OK in 3 milliseconds
I1103 06:54:59.543352 138 round_trippers.go:443] PATCH https://172.17.0.3:6443/api/v1/nodes/kind-control-plane 200 OK in 5 milliseconds
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kind-control-plane as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kind-control-plane as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
I1103 06:55:00.047499 138 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-control-plane 200 OK in 3 milliseconds
I1103 06:55:00.052782 138 round_trippers.go:443] PATCH https://172.17.0.3:6443/api/v1/nodes/kind-control-plane 200 OK in 4 milliseconds
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1103 06:55:00.056855 138 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-abcdef 404 Not Found in 2 milliseconds
I1103 06:55:00.063542 138 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/secrets 201 Created in 5 milliseconds
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1103 06:55:00.074851 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 10 milliseconds
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1103 06:55:00.078725 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 2 milliseconds
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1103 06:55:00.082191 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 3 milliseconds
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1103 06:55:00.082366 138 clusterinfo.go:45] [bootstrap-token] loading admin kubeconfig
I1103 06:55:00.083079 138 loader.go:375] Config loaded from file: /etc/kubernetes/admin.conf
I1103 06:55:00.083275 138 clusterinfo.go:53] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
I1103 06:55:00.083849 138 clusterinfo.go:65] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
I1103 06:55:00.087183 138 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps 201 Created in 2 milliseconds
I1103 06:55:00.087481 138 clusterinfo.go:79] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I1103 06:55:00.091263 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles 201 Created in 3 milliseconds
I1103 06:55:00.095529 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings 201 Created in 3 milliseconds
I1103 06:55:00.098081 138 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kube-dns 404 Not Found in 2 milliseconds
I1103 06:55:00.101013 138 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/coredns 404 Not Found in 2 milliseconds
I1103 06:55:00.104440 138 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 2 milliseconds
I1103 06:55:00.111298 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/clusterroles 201 Created in 5 milliseconds
I1103 06:55:00.117340 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 5 milliseconds
I1103 06:55:00.129526 138 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/serviceaccounts 201 Created in 3 milliseconds
I1103 06:55:00.160752 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/apps/v1/namespaces/kube-system/deployments 201 Created in 16 milliseconds
I1103 06:55:00.178331 138 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/services 201 Created in 15 milliseconds
[addons] Applied essential addon: CoreDNS
I1103 06:55:00.248951 138 request.go:573] Throttling request took 69.910732ms, request: POST:https://172.17.0.3:6443/api/v1/namespaces/kube-system/serviceaccounts
I1103 06:55:00.253577 138 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/serviceaccounts 201 Created in 4 milliseconds
I1103 06:55:00.444260 138 request.go:573] Throttling request took 188.357153ms, request: POST:https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps
I1103 06:55:00.449354 138 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 4 milliseconds
I1103 06:55:00.467168 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/apps/v1/namespaces/kube-system/daemonsets 201 Created in 11 milliseconds
I1103 06:55:00.470814 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 3 milliseconds
I1103 06:55:00.474684 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 3 milliseconds
I1103 06:55:00.481900 138 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 6 milliseconds
I1103 06:55:00.483053 138 loader.go:375] Config loaded from file: /etc/kubernetes/admin.conf
I1103 06:55:00.483977 138 loader.go:375] Config loaded from file: /etc/kubernetes/admin.conf
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
  kubeadm join 172.17.0.3:6443 --token <value withheld> \
    --discovery-token-ca-cert-hash sha256:fa13192441ccaa921333b63599081f417d2326651c1f39a45a302f072024ac70 \
    --control-plane
Then you can join any number of worker nodes by running the following on each as root:
  kubeadm join 172.17.0.3:6443 --token <value withheld> \
    --discovery-token-ca-cert-hash sha256:fa13192441ccaa921333b63599081f417d2326651c1f39a45a302f072024ac70
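The --discovery-token-ca-cert-hash printed above is a SHA-256 pin of the cluster CA's DER-encoded public key, which joining nodes use to authenticate the API server before trusting it. On a control-plane node the input would be /etc/kubernetes/pki/ca.crt; the sketch below generates a throwaway self-signed CA so it is self-contained (the temp paths and CA are illustrative, not from this cluster):

```shell
# Recompute a kubeadm-style discovery hash: SHA-256 over the CA cert's
# DER-encoded SubjectPublicKeyInfo, printed as "sha256:<hex>".
workdir="$(mktemp -d)"

# Throwaway CA standing in for /etc/kubernetes/pki/ca.crt (assumption:
# RSA key, as kubeadm uses by default).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
  -keyout "$workdir/ca.key" -out "$workdir/ca.crt" -days 1 2>/dev/null

hash="sha256:$(openssl x509 -pubkey -noout -in "$workdir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //')"

echo "$hash"
rm -r "$workdir"
```

Run against the real ca.crt, this pipeline reproduces the sha256:fa1319... value in the join command, which is how an operator can regenerate the flag without rerunning kubeadm init.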
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker"
 ✓ Starting control-plane 🕹️
 • Installing CNI 🔌 ...
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker"
DEBUG: exec/local.go:116] Running: "docker exec --privileged kind-control-plane cat /kind/manifests/default-cni.yaml"
DEBUG: exec/local.go:116] Running: "docker exec --privileged -i kind-control-plane kubectl create --kubeconfig=/etc/kubernetes/admin.conf -f -"
 ✓ Installing CNI 🔌
 • Installing StorageClass 💾 ...
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker"
DEBUG: exec/local.go:116] Running: "docker exec --privileged -i kind-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f -"
 ✓ Installing StorageClass 💾
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane"
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker"
 • Joining worker nodes 🚜 ...
DEBUG: exec/local.go:116] Running: "docker exec --privileged kind-worker2 kubeadm join --config /kind/kubeadm.conf --ignore-preflight-errors=all --v=6"
DEBUG: exec/local.go:116] Running: "docker exec --privileged kind-worker kubeadm join --config /kind/kubeadm.conf --ignore-preflight-errors=all --v=6"
DEBUG: kubeadmjoin/join.go:133] W1103 06:55:04.316958 352 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
I1103 06:55:04.317041 352 join.go:371] [preflight] found NodeName empty; using OS hostname as NodeName
I1103 06:55:04.317060 352 joinconfiguration.go:75] loading configuration from "/kind/kubeadm.conf"
I1103 06:55:04.319574 352 preflight.go:90] [preflight] Running general checks
I1103 06:55:04.319682 352 checks.go:249] validating the existence and emptiness of directory /etc/kubernetes/manifests
[preflight] Running pre-flight checks
I1103 06:55:04.319780 352 checks.go:286] validating the existence of file /etc/kubernetes/kubelet.conf
I1103 06:55:04.319823 352 checks.go:286] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I1103 06:55:04.319837 352 checks.go:102] validating the container runtime
I1103 06:55:04.335633 352 checks.go:376] validating the presence of executable crictl
I1103 06:55:04.335696 352 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
I1103 06:55:04.335784 352 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
I1103 06:55:04.336038 352 checks.go:649] validating whether swap is enabled or not
I1103 06:55:04.336202 352 checks.go:376] validating the presence of executable ip
I1103 06:55:04.336336 352 checks.go:376] validating the presence of executable iptables
I1103 06:55:04.336438 352 checks.go:376] validating the presence of executable mount
I1103 06:55:04.336527 352 checks.go:376] validating the presence of executable nsenter
I1103 06:55:04.337129 352 checks.go:376] validating the presence of executable ebtables
I1103 06:55:04.337197 352 checks.go:376] validating the presence of executable ethtool
I1103 06:55:04.337217 352 checks.go:376] validating the presence of executable socat
I1103 06:55:04.337251 352 checks.go:376] validating the presence of executable tc
I1103 06:55:04.337273 352 checks.go:376] validating the presence of executable touch
I1103 06:55:04.337311 352 checks.go:520] running all checks
I1103 06:55:04.345846 352 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I1103 06:55:04.346193 352 checks.go:618] validating kubelet version
I1103 06:55:04.445195 352 checks.go:128] validating if the service is enabled and active
I1103 06:55:04.462996 352 checks.go:201] validating availability of port 10250
I1103 06:55:04.463397 352 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt
I1103 06:55:04.463499 352 checks.go:432] validating if the connectivity type is via proxy or direct
I1103 06:55:04.463542 352 join.go:441] [preflight] Discovering cluster-info
I1103 06:55:04.463640 352 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443"
I1103 06:55:04.464557 352 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443"
I1103 06:55:04.473464 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 8 milliseconds
I1103 06:55:04.474431 352 token.go:202] [discovery] Failed to connect to API Server "172.17.0.3:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token
I1103 06:55:09.474681 352 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443"
I1103 06:55:09.475650 352 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443"
I1103 06:55:09.478144 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 2 milliseconds
I1103 06:55:09.479085 352 token.go:202] [discovery] Failed to connect to API Server "172.17.0.3:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token
I1103 06:55:14.480646 352 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443"
I1103 06:55:14.481773 352 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443"
I1103 06:55:14.484289 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 2 milliseconds
I1103 06:55:14.484528 352 token.go:202] [discovery] Failed to connect to API Server "172.17.0.3:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token
I1103 06:55:19.485085 352 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443"
I1103 06:55:19.485928 352 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443"
I1103 06:55:19.489633 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 3 milliseconds
I1103 06:55:19.491696 352 token.go:109] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "172.17.0.3:6443"
I1103 06:55:19.491726 352 token.go:205] [discovery] Successfully established connection with API Server "172.17.0.3:6443"
I1103 06:55:19.491760 352 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
I1103 06:55:19.491776 352 join.go:455] [preflight] Fetching init configuration
I1103 06:55:19.491784 352 join.go:493] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
I1103 06:55:19.503739 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 10 milliseconds
I1103 06:55:19.509708 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy 200 OK in 2 milliseconds
I1103 06:55:19.513289 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.18 200 OK in 2 milliseconds
I1103 06:55:19.516342 352 interface.go:389] Looking for default routes with IPv4 addresses
I1103 06:55:19.516366 352 interface.go:394] Default route transits interface "eth0"
I1103 06:55:19.516708 352 interface.go:201] Interface eth0 is up
I1103 06:55:19.517107 352 interface.go:249] Interface "eth0" has 1 addresses :[172.17.0.2/16].
I1103 06:55:19.517142 352 interface.go:216] Checking addr 172.17.0.2/16.
I1103 06:55:19.517153 352 interface.go:223] IP found 172.17.0.2
I1103 06:55:19.517300 352 interface.go:255] Found valid IPv4 address 172.17.0.2 for interface "eth0".
I1103 06:55:19.517320 352 interface.go:400] Found active IP 172.17.0.2
I1103 06:55:19.517731 352 preflight.go:101] [preflight] Running configuration dependant checks
I1103 06:55:19.517774 352 controlplaneprepare.go:211] [download-certs] Skipping certs download
I1103 06:55:19.519454 352 kubelet.go:107] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I1103 06:55:19.521043 352 kubelet.go:115] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt
I1103 06:55:19.521443 352 loader.go:375] Config loaded from file: /etc/kubernetes/bootstrap-kubelet.conf
I1103 06:55:19.522088 352 kubelet.go:133] [kubelet-start] Stopping the kubelet
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
I1103 06:55:19.542316 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.18 200 OK in 3 milliseconds
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1103 06:55:19.557291 352 kubelet.go:150] [kubelet-start] Starting the kubelet
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I1103 06:55:20.708928 352 loader.go:375] Config loaded from file: /etc/kubernetes/kubelet.conf
I1103 06:55:20.721361 352 loader.go:375] Config loaded from file: /etc/kubernetes/kubelet.conf
I1103 06:55:20.722924 352 kubelet.go:168] [kubelet-start] preserving the crisocket information for the node
I1103 06:55:20.722955 352 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/run/containerd/containerd.sock" to the Node API object "kind-worker2" as an annotation
I1103 06:55:21.233007 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 9 milliseconds
I1103 06:55:21.726448 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds
I1103 06:55:22.226928 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds
I1103 06:55:22.726100 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds
I1103 06:55:23.226671 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds
I1103 06:55:23.726013 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds
I1103 06:55:24.226802 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds
I1103 06:55:24.726906 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds
I1103 06:55:25.226942 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds
I1103 06:55:25.727734 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 4 milliseconds
I1103 06:55:26.227361 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds
I1103 06:55:26.726955 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds
I1103 06:55:27.226678 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds
I1103 06:55:27.726100 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds
I1103 06:55:28.226715 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds
I1103 06:55:28.726539 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds
I1103 06:55:29.226721 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds
I1103 06:55:29.727526 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 4 milliseconds
I1103 06:55:30.226393 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds
I1103 06:55:30.726674 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds
I1103 06:55:31.226217 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 2 milliseconds
I1103 06:55:31.727372 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds
I1103 06:55:32.226922 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds
I1103 06:55:32.727912 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 404 Not Found in 3 milliseconds
I1103 06:55:33.234981 352 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker2 200 OK in 8 milliseconds
I1103 06:55:33.251898 352 round_trippers.go:443] PATCH https://172.17.0.3:6443/api/v1/nodes/kind-worker2 200 OK in 11 milliseconds
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
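The klog-style prefixes above (I1103 HH:MM:SS.micros) timestamp every step, so phase durations can be read straight out of the log: kind-worker2's join runs from 06:55:04.316958 to the final node PATCH at 06:55:33.251898, most of it spent in the 5-second token-retry backoff and the patchnode 404 loop. A small awk helper for that arithmetic (a sketch, assuming both timestamps fall on the same day):

```shell
# Diff two klog timestamps (HH:MM:SS.ffffff) and print elapsed seconds.
elapsed() {
  # convert each to seconds since midnight and subtract
  awk -v a="$1" -v b="$2" '
    function secs(t,  p) { split(t, p, ":"); return p[1]*3600 + p[2]*60 + p[3] }
    BEGIN { printf "%.1f\n", secs(b) - secs(a) }'
}

# Timestamps copied from the kind-worker2 join log above:
elapsed 06:55:04.316958 06:55:33.251898   # → 28.9 (total join time)
```

The same helper works for any pair of lines in this log, e.g. measuring the wait-control-plane phase on the control-plane node.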
DEBUG: kubeadmjoin/join.go:133] W1103 06:55:04.304914 353 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
I1103 06:55:04.305005 353 join.go:371] [preflight] found NodeName empty; using OS hostname as NodeName
I1103 06:55:04.305024 353 joinconfiguration.go:75] loading configuration from "/kind/kubeadm.conf"
[preflight] Running pre-flight checks
I1103 06:55:04.307200 353 preflight.go:90] [preflight] Running general checks
I1103 06:55:04.307309 353 checks.go:249] validating the existence and emptiness of directory /etc/kubernetes/manifests
I1103 06:55:04.307429 353 checks.go:286] validating the existence of file /etc/kubernetes/kubelet.conf
I1103 06:55:04.307450 353 checks.go:286] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I1103 06:55:04.307463 353 checks.go:102] validating the container runtime
I1103 06:55:04.327111 353 checks.go:376] validating the presence of executable crictl
I1103 06:55:04.327317 353 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
I1103 06:55:04.327432 353 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
I1103 06:55:04.327506 353 checks.go:649] validating whether swap is enabled or not
I1103 06:55:04.327621 353 checks.go:376] validating the presence of executable ip
I1103 06:55:04.327724 353 checks.go:376] validating the presence of executable iptables
I1103 06:55:04.327857 353 checks.go:376] validating the presence of executable mount
I1103 06:55:04.327884 353 checks.go:376] validating the presence of executable nsenter
I1103 06:55:04.327927 353 checks.go:376] validating the presence of executable ebtables
I1103 06:55:04.327976 353 checks.go:376] validating the presence of executable ethtool
I1103 06:55:04.328009 353 checks.go:376] validating the presence of executable socat
I1103 06:55:04.328046 353 checks.go:376] validating the presence of executable tc
I1103 06:55:04.328072 353 checks.go:376] validating the presence of executable touch
I1103 06:55:04.328119 353 checks.go:520] running all checks
I1103 06:55:04.336370 353 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I1103 06:55:04.336654 353 checks.go:618] validating kubelet version
I1103 06:55:04.444767 353 checks.go:128] validating if the service is enabled and active
I1103 06:55:04.459597 353 checks.go:201] validating availability of port 10250
I1103 06:55:04.460126 353 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt
I1103 06:55:04.460168 353 checks.go:432] validating if the connectivity type is via proxy or direct
I1103 06:55:04.460235 353 join.go:441] [preflight] Discovering cluster-info
I1103 06:55:04.460361 353 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443"
I1103 06:55:04.461222 353 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443"
I1103 06:55:04.470433 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 9 milliseconds
I1103 06:55:04.472256 353 token.go:202] [discovery] Failed to connect to API Server "172.17.0.3:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token
I1103 06:55:09.473166 353 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443"
I1103 06:55:09.473698 353 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443"
I1103 06:55:09.478145 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 4 milliseconds
I1103 06:55:09.478854 353 token.go:202] [discovery] Failed to connect to API Server "172.17.0.3:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token
I1103 06:55:14.479246 353 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443"
I1103 06:55:14.480052 353 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443"
I1103 06:55:14.483768 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 3 milliseconds
I1103 06:55:14.484187 353 token.go:202] [discovery] Failed to connect to API Server "172.17.0.3:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token
I1103 06:55:19.484529 353 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443"
I1103 06:55:19.485407 353 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443"
I1103 06:55:19.489481 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 3 milliseconds
I1103 06:55:19.491065 353 token.go:109] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "172.17.0.3:6443"
I1103 06:55:19.491134 353 token.go:205] [discovery] Successfully established connection with API Server "172.17.0.3:6443"
I1103 06:55:19.491183 353 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
I1103 06:55:19.491237 353 join.go:455] [preflight] Fetching init configuration
I1103 06:55:19.491291 353 join.go:493] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
I1103 06:55:19.501485 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 9 milliseconds
I1103 06:55:19.505939 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy 200 OK in 2 milliseconds
I1103 06:55:19.509476 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.18 200 OK in 2 milliseconds | |
I1103 06:55:19.512104 353 interface.go:389] Looking for default routes with IPv4 addresses | |
I1103 06:55:19.512125 353 interface.go:394] Default route transits interface "eth0" | |
I1103 06:55:19.512253 353 interface.go:201] Interface eth0 is up | |
I1103 06:55:19.512312 353 interface.go:249] Interface "eth0" has 1 addresses :[172.17.0.4/16]. | |
I1103 06:55:19.512333 353 interface.go:216] Checking addr 172.17.0.4/16. | |
I1103 06:55:19.512343 353 interface.go:223] IP found 172.17.0.4 | |
I1103 06:55:19.512353 353 interface.go:255] Found valid IPv4 address 172.17.0.4 for interface "eth0". | |
I1103 06:55:19.512361 353 interface.go:400] Found active IP 172.17.0.4 | |
I1103 06:55:19.512443 353 preflight.go:101] [preflight] Running configuration dependant checks | |
I1103 06:55:19.512459 353 controlplaneprepare.go:211] [download-certs] Skipping certs download | |
I1103 06:55:19.512472 353 kubelet.go:107] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf | |
I1103 06:55:19.517347 353 kubelet.go:115] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt | |
I1103 06:55:19.517938 353 loader.go:375] Config loaded from file: /etc/kubernetes/bootstrap-kubelet.conf | |
I1103 06:55:19.518546 353 kubelet.go:133] [kubelet-start] Stopping the kubelet | |
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace | |
I1103 06:55:19.535398 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.18 200 OK in 3 milliseconds | |
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" | |
I1103 06:55:19.550702 353 kubelet.go:150] [kubelet-start] Starting the kubelet | |
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" | |
[kubelet-start] Activating the kubelet service | |
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... | |
I1103 06:55:20.685305 353 loader.go:375] Config loaded from file: /etc/kubernetes/kubelet.conf | |
I1103 06:55:20.702567 353 loader.go:375] Config loaded from file: /etc/kubernetes/kubelet.conf | |
I1103 06:55:20.704749 353 kubelet.go:168] [kubelet-start] preserving the crisocket information for the node | |
I1103 06:55:20.704832 353 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/run/containerd/containerd.sock" to the Node API object "kind-worker" as an annotation | |
I1103 06:55:21.222512 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 17 milliseconds | |
I1103 06:55:21.707461 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds | |
I1103 06:55:22.208952 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:22.708677 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:23.209261 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 4 milliseconds | |
I1103 06:55:23.709885 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 4 milliseconds | |
I1103 06:55:24.209557 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 4 milliseconds | |
I1103 06:55:24.709016 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:25.209471 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 4 milliseconds | |
I1103 06:55:25.709602 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 4 milliseconds | |
I1103 06:55:26.208431 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:26.708281 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:27.209094 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:27.708227 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:28.208625 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:28.710235 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 5 milliseconds | |
I1103 06:55:29.208632 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:29.708624 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:30.208306 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:30.708657 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:31.208287 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:31.708459 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:32.208096 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 2 milliseconds | |
I1103 06:55:32.708928 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:33.208940 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 404 Not Found in 3 milliseconds | |
I1103 06:55:33.709704 353 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/nodes/kind-worker 200 OK in 4 milliseconds | |
I1103 06:55:33.724501 353 round_trippers.go:443] PATCH https://172.17.0.3:6443/api/v1/nodes/kind-worker 200 OK in 9 milliseconds | |
This node has joined the cluster: | |
* Certificate signing request was sent to apiserver and a response was received. | |
* The Kubelet was informed of the new secure connection details. | |
Run 'kubectl get nodes' on the control-plane to see this node join the cluster. | |
✓ Joining worker nodes 🚜 | |
• Waiting ≤ 1m0s for control-plane = Ready ⏳ ... | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker" | |
DEBUG: exec/local.go:116] Running: "docker exec --privileged kind-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes --selector=node-role.kubernetes.io/master '-o=jsonpath='"'"'{.items..status.conditions[-1:].status}'"'"''" | |
DEBUG: exec/local.go:116] (the readiness check above repeated verbatim 55 more times while polling) | |
✓ Waiting ≤ 1m0s for control-plane = Ready ⏳ | |
• Ready after 24s 💚 | |
DEBUG: exec/local.go:116] Running: "docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster=kind --format '{{.Names}}'" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker" | |
DEBUG: exec/local.go:116] Running: "docker exec --privileged kind-control-plane cat /etc/kubernetes/admin.conf" | |
DEBUG: exec/local.go:116] Running: "docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster=kind --format '{{.Names}}'" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker2" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-control-plane" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ index .Config.Labels "io.k8s.sigs.kind.role"}}' kind-worker" | |
DEBUG: exec/local.go:116] Running: "docker inspect --format '{{ with (index (index .NetworkSettings.Ports "6443/tcp") 0) }}{{ printf "%s %s" .HostIp .HostPort }}{{ end }}' kind-control-plane" | |
Set kubectl context to "kind-kind" | |
You can now use your cluster with: | |
kubectl cluster-info --context kind-kind | |
+ run_tests | |
+ [ ipv4 = ipv6 ] | |
+ SKIP=\[Slow\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|PodSecurityPolicy|LoadBalancer|load.balancer|In-tree.Volumes.\[Driver:.nfs\]|PersistentVolumes.NFS|Dynamic.PV|Network.should.set.TCP.CLOSE_WAIT.timeout|Simple.pod.should.support.exec.through.an.HTTP.proxy|subPath.should.support.existing|ReplicationController.light.Should.scale.from.1.pod.to.2.pods|should.provide.basic.identity|\[NodeFeature:PodReadinessGate\] | |
+ FOCUS=. | |
+ [ true = true ] | |
+ export GINKGO_PARALLEL=y | |
+ [ -z \[Slow\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|PodSecurityPolicy|LoadBalancer|load.balancer|In-tree.Volumes.\[Driver:.nfs\]|PersistentVolumes.NFS|Dynamic.PV|Network.should.set.TCP.CLOSE_WAIT.timeout|Simple.pod.should.support.exec.through.an.HTTP.proxy|subPath.should.support.existing|ReplicationController.light.Should.scale.from.1.pod.to.2.pods|should.provide.basic.identity|\[NodeFeature:PodReadinessGate\] ] | |
+ SKIP=\[Serial\]|\[Slow\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|PodSecurityPolicy|LoadBalancer|load.balancer|In-tree.Volumes.\[Driver:.nfs\]|PersistentVolumes.NFS|Dynamic.PV|Network.should.set.TCP.CLOSE_WAIT.timeout|Simple.pod.should.support.exec.through.an.HTTP.proxy|subPath.should.support.existing|ReplicationController.light.Should.scale.from.1.pod.to.2.pods|should.provide.basic.identity|\[NodeFeature:PodReadinessGate\] | |
+ export KUBERNETES_CONFORMANCE_TEST=y | |
+ export KUBE_CONTAINER_RUNTIME=remote | |
+ export KUBE_CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock | |
+ export KUBE_CONTAINER_RUNTIME_NAME=containerd | |
+ export GINKGO_TOLERATE_FLAKES=y | |
+ ./hack/ginkgo-e2e.sh --provider=skeleton --num-nodes=2 --ginkgo.focus=. --ginkgo.skip=\[Serial\]|\[Slow\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|PodSecurityPolicy|LoadBalancer|load.balancer|In-tree.Volumes.\[Driver:.nfs\]|PersistentVolumes.NFS|Dynamic.PV|Network.should.set.TCP.CLOSE_WAIT.timeout|Simple.pod.should.support.exec.through.an.HTTP.proxy|subPath.should.support.existing|ReplicationController.light.Should.scale.from.1.pod.to.2.pods|should.provide.basic.identity|\[NodeFeature:PodReadinessGate\] --report-dir=/logs/artifacts --disable-log-dump=true | |
Conformance test: not doing test setup. | |
Running Suite: Kubernetes e2e suite | |
=================================== | |
Random Seed: 1572764161 - Will randomize all specs | |
Will run 4979 specs | |
Running in parallel across 25 nodes | |
Nov 3 06:56:10.100: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
Nov 3 06:56:10.103: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable | |
Nov 3 06:56:10.184: INFO: Condition Ready of node kind-worker2 is false instead of true. Reason: KubeletNotReady, message: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized | |
Nov 3 06:56:10.184: INFO: Condition Ready of node kind-worker is false instead of true. Reason: KubeletNotReady, message: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized | |
Nov 3 06:56:10.184: INFO: Unschedulable nodes: | |
Nov 3 06:56:10.184: INFO: -> kind-worker2 Ready=false Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master | |
Nov 3 06:56:10.184: INFO: -> kind-worker Ready=false Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master | |
Nov 3 06:56:10.184: INFO: ================================ | |
Nov 3 06:56:40.188: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready | |
Nov 3 06:56:40.246: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) | |
Nov 3 06:56:40.246: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. | |
Nov 3 06:56:40.246: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start | |
Nov 3 06:56:40.264: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) | |
Nov 3 06:56:40.264: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) | |
Nov 3 06:56:40.264: INFO: e2e test version: v0.0.0-master+$Format:%h$ | |
Nov 3 06:56:40.266: INFO: kube-apiserver version: v1.18.0-alpha.0.178+0c66e64b140011 | |
Nov 3 06:56:40.267: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
Nov 3 06:56:40.273: INFO: Cluster IP family: ipv4 | |
SSSS | |
------------------------------ | |
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:56:40.282: INFO: Driver vsphere doesn't support ext3 -- skipping | |
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:56:40.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] | |
[sig-storage] In-tree Volumes | |
test/e2e/storage/utils/framework.go:23 | |
[Driver: vsphere] | |
test/e2e/storage/in_tree_volumes.go:70 | |
[Testpattern: Pre-provisioned PV (ext3)] volumes | |
test/e2e/storage/testsuites/base.go:98 | |
should allow exec of files on the volume [BeforeEach] | |
test/e2e/storage/testsuites/volumes.go:191 | |
Driver vsphere doesn't support ext3 -- skipping | |
test/e2e/storage/testsuites/base.go:157 | |
------------------------------ | |
S | |
------------------------------ | |
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath | |
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:56:40.285: INFO: Driver local doesn't support InlineVolume -- skipping | |
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:56:40.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] | |
[sig-storage] In-tree Volumes | |
test/e2e/storage/utils/framework.go:23 | |
[Driver: local][LocalVolumeType: tmpfs] | |
test/e2e/storage/in_tree_volumes.go:70 | |
[Testpattern: Inline-volume (default fs)] subPath | |
test/e2e/storage/testsuites/base.go:98 | |
should support readOnly directory specified in the volumeMount [BeforeEach] | |
test/e2e/storage/testsuites/subpath.go:359 | |
Driver local doesn't support InlineVolume -- skipping | |
test/e2e/storage/testsuites/base.go:152 | |
------------------------------ | |
SS | |
------------------------------ | |
Nov 3 06:56:40.312: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.386: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.310: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.389: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.303: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.389: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.304: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.389: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.303: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.390: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.319: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.392: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.315: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.390: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.320: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.391: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.303: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.391: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.303: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.394: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.309: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.395: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.308: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.395: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.307: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.395: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.305: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.395: INFO: Cluster IP family: ipv4
SS
------------------------------
Nov 3 06:56:40.308: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.396: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.308: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.402: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.317: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.400: INFO: Cluster IP family: ipv4
S
------------------------------
Nov 3 06:56:40.307: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.397: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.303: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.397: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
Nov 3 06:56:40.316: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.405: INFO: Cluster IP family: ipv4
S
------------------------------
Nov 3 06:56:40.308: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.409: INFO: Cluster IP family: ipv4
Nov 3 06:56:40.314: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.416: INFO: Cluster IP family: ipv4
SS
------------------------------
Nov 3 06:56:40.316: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.412: INFO: Cluster IP family: ipv4
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.426: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir-link]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Driver local doesn't support ext3 -- skipping
test/e2e/storage/testsuites/base.go:157
------------------------------
Nov 3 06:56:40.316: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:40.430: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.448: INFO: Only supported for providers [gce gke] (not skeleton)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: gcepd]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/testsuites/base.go:98
should store data [BeforeEach]
test/e2e/storage/testsuites/volumes.go:150
Only supported for providers [gce gke] (not skeleton)
test/e2e/storage/drivers/in_tree.go:1194
------------------------------
SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.452: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
test/e2e/framework/framework.go:148
Nov 3 06:56:40.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.004 seconds]
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
[Driver: csi-hostpath]
test/e2e/storage/csi_volumes.go:56
[Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
test/e2e/storage/testsuites/base.go:98
should not mount / map unused volumes in a pod [BeforeEach]
test/e2e/storage/testsuites/volumemode.go:334
Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.452: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
[Driver: csi-hostpath]
test/e2e/storage/csi_volumes.go:56
[Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/testsuites/base.go:98
should store data [BeforeEach]
test/e2e/storage/testsuites/volumes.go:150
Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.454: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.005 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir-bindmounted]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/testsuites/base.go:98
should store data [BeforeEach]
test/e2e/storage/testsuites/volumes.go:150
Driver local doesn't support ext4 -- skipping
test/e2e/storage/testsuites/base.go:157
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.448: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir-bindmounted]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Driver local doesn't support ext4 -- skipping
test/e2e/storage/testsuites/base.go:157
------------------------------
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.457: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:148
Nov 3 06:56:40.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: hostPath]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should support non-existent path [BeforeEach]
test/e2e/storage/testsuites/subpath.go:189
Driver hostPath doesn't support PreprovisionedPV -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.456: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: hostPathSymlink]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.446: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.004 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: hostPath]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Driver hostPath doesn't support PreprovisionedPV -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.464: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:148
Nov 3 06:56:40.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: block]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should support file as subpath [LinuxOnly] [BeforeEach]
test/e2e/storage/testsuites/subpath.go:225
Driver local doesn't support InlineVolume -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.464: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:148
Nov 3 06:56:40.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: vsphere]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should support readOnly directory specified in the volumeMount [BeforeEach]
test/e2e/storage/testsuites/subpath.go:359
Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
test/e2e/storage/testsuites/base.go:685
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.463: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:148
Nov 3 06:56:40.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.004 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: hostPathSymlink]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should support file as subpath [LinuxOnly] [BeforeEach]
test/e2e/storage/testsuites/subpath.go:225
Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.473: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:148
Nov 3 06:56:40.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: gluster]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should be able to unmount after the subpath directory is deleted [BeforeEach]
test/e2e/storage/testsuites/subpath.go:437
Only supported for node OS distro [gci ubuntu custom] (not debian)
test/e2e/storage/drivers/in_tree.go:258
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.475: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: blockfs]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/testsuites/base.go:98
should store data [BeforeEach]
test/e2e/storage/testsuites/volumes.go:150
Driver local doesn't support ext4 -- skipping
test/e2e/storage/testsuites/base.go:157
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.476: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:148
Nov 3 06:56:40.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: gcepd]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should be able to unmount after the subpath directory is deleted [BeforeEach]
test/e2e/storage/testsuites/subpath.go:437
Driver supports dynamic provisioning, skipping InlineVolume pattern
test/e2e/storage/testsuites/base.go:685
------------------------------
SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.483: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:148
Nov 3 06:56:40.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir-link-bindmounted]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should support readOnly file specified in the volumeMount [LinuxOnly] [BeforeEach]
test/e2e/storage/testsuites/subpath.go:374
Driver local doesn't support InlineVolume -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.486: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: blockfs]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (ntfs)][sig-windows] volumes
test/e2e/storage/testsuites/base.go:98
should store data [BeforeEach]
test/e2e/storage/testsuites/volumes.go:150
Driver local doesn't support InlineVolume -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.487: INFO: Driver csi-hostpath-v0 doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
[Driver: csi-hostpath-v0]
test/e2e/storage/csi_volumes.go:56
[Testpattern: Inline-volume (ext3)] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Driver csi-hostpath-v0 doesn't support InlineVolume -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.487: INFO: Driver cinder doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: cinder]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/testsuites/base.go:98
should store data [BeforeEach]
test/e2e/storage/testsuites/volumes.go:150
Driver cinder doesn't support ext4 -- skipping
test/e2e/storage/testsuites/base.go:157
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.489: INFO: Driver csi-hostpath-v0 doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
[Driver: csi-hostpath-v0]
test/e2e/storage/csi_volumes.go:56
[Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/testsuites/base.go:98
should store data [BeforeEach]
test/e2e/storage/testsuites/volumes.go:150
Driver csi-hostpath-v0 doesn't support PreprovisionedPV -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.491: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:148
Nov 3 06:56:40.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
[Driver: csi-hostpath-v0]
test/e2e/storage/csi_volumes.go:56
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should support readOnly directory specified in the volumeMount [BeforeEach]
test/e2e/storage/testsuites/subpath.go:359
Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
test/e2e/storage/testsuites/base.go:685
------------------------------
SSSSSSS
------------------------------
[BeforeEach] [sig-windows] DNS
test/e2e/windows/framework.go:28
Nov 3 06:56:40.496: INFO: Only supported for node OS distro [windows] (not debian)
[AfterEach] [sig-windows] DNS
test/e2e/framework/framework.go:148
Nov 3 06:56:40.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-windows] DNS
test/e2e/windows/framework.go:27
should support configurable pod DNS servers [BeforeEach]
test/e2e/windows/dns.go:42
Only supported for node OS distro [windows] (not debian)
test/e2e/windows/framework.go:30
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.498: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: emptydir]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/testsuites/base.go:98
should store data [BeforeEach]
test/e2e/storage/testsuites/volumes.go:150
Driver emptydir doesn't support PreprovisionedPV -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.511: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: blockfs]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Driver local doesn't support ext4 -- skipping
test/e2e/storage/testsuites/base.go:157
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:40.508: INFO: Driver gluster doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:40.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: gluster]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Driver gluster doesn't support ext4 -- skipping
test/e2e/storage/testsuites/base.go:157
------------------------------
S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:40.395: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename resourcequota
Nov 3 06:56:40.611: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Nov 3 06:56:40.630: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-8261
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
test/e2e/framework/framework.go:688
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
test/e2e/framework/framework.go:148
Nov 3 06:56:40.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8261" for this suite.
•SSSSSSS
------------------------------
[BeforeEach] [sig-storage] Volume Placement | |
test/e2e/framework/framework.go:147 | |
[1mSTEP[0m: Creating a kubernetes client | |
Nov 3 06:56:40.507: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
[1mSTEP[0m: Building a namespace api object, basename volume-placement | |
Nov 3 06:56:42.603: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. | |
Nov 3 06:56:42.724: INFO: Found ClusterRoles; assuming RBAC is enabled. | |
[1mSTEP[0m: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-placement-9859 | |
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-storage] Volume Placement | |
test/e2e/storage/vsphere/vsphere_volume_placement.go:52 | |
Nov 3 06:56:42.850: INFO: Only supported for providers [vsphere] (not skeleton) | |
[AfterEach] [sig-storage] Volume Placement | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:56:42.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
[1mSTEP[0m: Destroying namespace "volume-placement-9859" for this suite. | |
[AfterEach] [sig-storage] Volume Placement | |
test/e2e/storage/vsphere/vsphere_volume_placement.go:70 | |
[36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [2.411 seconds][0m | |
[sig-storage] Volume Placement | |
[90mtest/e2e/storage/utils/framework.go:23[0m | |
[36m[1mtest back to back pod creation and deletion with different volume sources on the same worker node [BeforeEach][0m | |
[90mtest/e2e/storage/vsphere/vsphere_volume_placement.go:276[0m | |
[36mOnly supported for providers [vsphere] (not skeleton)[0m | |
test/e2e/storage/vsphere/vsphere_volume_placement.go:53 | |
[90m------------------------------[0m | |
S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:40.402: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
Nov 3 06:56:40.577: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Nov 3 06:56:40.598: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-4492
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/empty_dir.go:45
[It] nonexistent volume subPath should have the correct mode and owner using FSGroup
test/e2e/common/empty_dir.go:58
STEP: Creating a pod to test emptydir subpath on tmpfs
Nov 3 06:56:40.772: INFO: Waiting up to 5m0s for pod "pod-9208840f-63c4-43bc-af9c-1271ef1e87f8" in namespace "emptydir-4492" to be "success or failure"
Nov 3 06:56:40.791: INFO: Pod "pod-9208840f-63c4-43bc-af9c-1271ef1e87f8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.627572ms
Nov 3 06:56:42.839: INFO: Pod "pod-9208840f-63c4-43bc-af9c-1271ef1e87f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066567371s
Nov 3 06:56:44.845: INFO: Pod "pod-9208840f-63c4-43bc-af9c-1271ef1e87f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072062332s
Nov 3 06:56:46.890: INFO: Pod "pod-9208840f-63c4-43bc-af9c-1271ef1e87f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117658859s
Nov 3 06:56:48.912: INFO: Pod "pod-9208840f-63c4-43bc-af9c-1271ef1e87f8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.139959904s
Nov 3 06:56:50.923: INFO: Pod "pod-9208840f-63c4-43bc-af9c-1271ef1e87f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.150555575s
STEP: Saw pod success
Nov 3 06:56:50.923: INFO: Pod "pod-9208840f-63c4-43bc-af9c-1271ef1e87f8" satisfied condition "success or failure"
Nov 3 06:56:50.927: INFO: Trying to get logs from node kind-worker2 pod pod-9208840f-63c4-43bc-af9c-1271ef1e87f8 container test-container: <nil>
STEP: delete the pod
Nov 3 06:56:51.200: INFO: Waiting for pod pod-9208840f-63c4-43bc-af9c-1271ef1e87f8 to disappear
Nov 3 06:56:51.204: INFO: Pod pod-9208840f-63c4-43bc-af9c-1271ef1e87f8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:51.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4492" for this suite.
• [SLOW TEST:10.816 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:40
when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/empty_dir.go:43
nonexistent volume subPath should have the correct mode and owner using FSGroup
test/e2e/common/empty_dir.go:58
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:51.222: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:148
Nov 3 06:56:51.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
[Driver: csi-hostpath]
test/e2e/storage/csi_volumes.go:56
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should be able to unmount after the subpath directory is deleted [BeforeEach]
test/e2e/storage/testsuites/subpath.go:437
Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
test/e2e/storage/testsuites/base.go:685
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:40.458: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
Nov 3 06:56:40.677: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Nov 3 06:56:40.745: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-8834
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/empty_dir.go:45
[It] files with FSGroup ownership should support (root,0644,tmpfs)
test/e2e/common/empty_dir.go:62
STEP: Creating a pod to test emptydir 0644 on tmpfs
Nov 3 06:56:40.917: INFO: Waiting up to 5m0s for pod "pod-732c67a7-a2f7-450b-af90-fc5f6c72e12e" in namespace "emptydir-8834" to be "success or failure"
Nov 3 06:56:40.943: INFO: Pod "pod-732c67a7-a2f7-450b-af90-fc5f6c72e12e": Phase="Pending", Reason="", readiness=false. Elapsed: 26.096838ms
Nov 3 06:56:43.249: INFO: Pod "pod-732c67a7-a2f7-450b-af90-fc5f6c72e12e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.331974528s
Nov 3 06:56:45.269: INFO: Pod "pod-732c67a7-a2f7-450b-af90-fc5f6c72e12e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.352546294s
Nov 3 06:56:47.582: INFO: Pod "pod-732c67a7-a2f7-450b-af90-fc5f6c72e12e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.664816887s
Nov 3 06:56:49.587: INFO: Pod "pod-732c67a7-a2f7-450b-af90-fc5f6c72e12e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.670013683s
Nov 3 06:56:51.590: INFO: Pod "pod-732c67a7-a2f7-450b-af90-fc5f6c72e12e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.673383668s
STEP: Saw pod success
Nov 3 06:56:51.590: INFO: Pod "pod-732c67a7-a2f7-450b-af90-fc5f6c72e12e" satisfied condition "success or failure"
Nov 3 06:56:51.596: INFO: Trying to get logs from node kind-worker2 pod pod-732c67a7-a2f7-450b-af90-fc5f6c72e12e container test-container: <nil>
STEP: delete the pod
Nov 3 06:56:51.631: INFO: Waiting for pod pod-732c67a7-a2f7-450b-af90-fc5f6c72e12e to disappear
Nov 3 06:56:51.634: INFO: Pod pod-732c67a7-a2f7-450b-af90-fc5f6c72e12e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:51.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8834" for this suite.
• [SLOW TEST:11.186 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:40
when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/empty_dir.go:43
files with FSGroup ownership should support (root,0644,tmpfs)
test/e2e/common/empty_dir.go:62
------------------------------
SSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:40.960: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-584
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:40
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/projected_downwardapi.go:90
STEP: Creating a pod to test downward API volume plugin
Nov 3 06:56:43.266: INFO: Waiting up to 5m0s for pod "metadata-volume-88205ea3-563d-478d-93dd-33c1769ae219" in namespace "projected-584" to be "success or failure"
Nov 3 06:56:43.298: INFO: Pod "metadata-volume-88205ea3-563d-478d-93dd-33c1769ae219": Phase="Pending", Reason="", readiness=false. Elapsed: 31.790449ms
Nov 3 06:56:45.368: INFO: Pod "metadata-volume-88205ea3-563d-478d-93dd-33c1769ae219": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101721691s
Nov 3 06:56:47.582: INFO: Pod "metadata-volume-88205ea3-563d-478d-93dd-33c1769ae219": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315861952s
Nov 3 06:56:49.587: INFO: Pod "metadata-volume-88205ea3-563d-478d-93dd-33c1769ae219": Phase="Pending", Reason="", readiness=false. Elapsed: 6.320769327s
Nov 3 06:56:51.592: INFO: Pod "metadata-volume-88205ea3-563d-478d-93dd-33c1769ae219": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.325773467s
STEP: Saw pod success
Nov 3 06:56:51.592: INFO: Pod "metadata-volume-88205ea3-563d-478d-93dd-33c1769ae219" satisfied condition "success or failure"
Nov 3 06:56:51.596: INFO: Trying to get logs from node kind-worker2 pod metadata-volume-88205ea3-563d-478d-93dd-33c1769ae219 container client-container: <nil>
STEP: delete the pod
Nov 3 06:56:51.631: INFO: Waiting for pod metadata-volume-88205ea3-563d-478d-93dd-33c1769ae219 to disappear
Nov 3 06:56:51.634: INFO: Pod metadata-volume-88205ea3-563d-478d-93dd-33c1769ae219 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
test/e2e/framework/framework.go:148
Nov 3 06:56:51.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-584" for this suite.
• [SLOW TEST:10.690 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:34
should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/projected_downwardapi.go:90
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:51.657: INFO: Only supported for providers [azure] (not skeleton)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:51.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: azure]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/testsuites/base.go:98
should store data [BeforeEach]
test/e2e/storage/testsuites/volumes.go:150
Only supported for providers [azure] (not skeleton)
test/e2e/storage/drivers/in_tree.go:1449
------------------------------
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:51.693: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:51.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: block]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
test/e2e/storage/testsuites/base.go:98
should store data [BeforeEach]
test/e2e/storage/testsuites/volumes.go:150
Driver local doesn't support ntfs -- skipping
test/e2e/storage/testsuites/base.go:157
------------------------------
SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:40.291: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename gc
Nov 3 06:56:40.421: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Nov 3 06:56:40.533: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-9197
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support cascading deletion of custom resources
test/e2e/apimachinery/garbage_collector.go:869
Nov 3 06:56:40.678: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 3 06:56:41.536: INFO: created owner resource "owner4wcl2"
Nov 3 06:56:41.557: INFO: created dependent resource "dependentrvxzj"
[AfterEach] [sig-api-machinery] Garbage collector
test/e2e/framework/framework.go:148
Nov 3 06:56:52.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9197" for this suite.
• [SLOW TEST:11.809 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
should support cascading deletion of custom resources
test/e2e/apimachinery/garbage_collector.go:869
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:51.704: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename svcaccounts
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-3966
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
test/e2e/framework/framework.go:688
STEP: getting the auto-created API token
Nov 3 06:56:52.456: INFO: created pod pod-service-account-defaultsa
Nov 3 06:56:52.456: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Nov 3 06:56:52.463: INFO: created pod pod-service-account-mountsa
Nov 3 06:56:52.463: INFO: pod pod-service-account-mountsa service account token volume mount: true
Nov 3 06:56:52.474: INFO: created pod pod-service-account-nomountsa
Nov 3 06:56:52.474: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Nov 3 06:56:52.504: INFO: created pod pod-service-account-defaultsa-mountspec
Nov 3 06:56:52.504: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Nov 3 06:56:52.518: INFO: created pod pod-service-account-mountsa-mountspec
Nov 3 06:56:52.518: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Nov 3 06:56:52.531: INFO: created pod pod-service-account-nomountsa-mountspec
Nov 3 06:56:52.531: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Nov 3 06:56:52.551: INFO: created pod pod-service-account-defaultsa-nomountspec
Nov 3 06:56:52.551: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Nov 3 06:56:52.593: INFO: created pod pod-service-account-mountsa-nomountspec
Nov 3 06:56:52.593: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Nov 3 06:56:52.617: INFO: created pod pod-service-account-nomountsa-nomountspec
Nov 3 06:56:52.617: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
test/e2e/framework/framework.go:148
Nov 3 06:56:52.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3966" for this suite.
•
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:52.742: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:52.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
[Driver: csi-hostpath]
test/e2e/storage/csi_volumes.go:56
[Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/testsuites/base.go:98
should store data [BeforeEach]
test/e2e/storage/testsuites/volumes.go:150
Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
test/e2e/storage/testsuites/base.go:152
------------------------------
[BeforeEach] [sig-windows] Windows volume mounts
test/e2e/windows/framework.go:28
Nov 3 06:56:52.745: INFO: Only supported for node OS distro [windows] (not debian)
[AfterEach] [sig-windows] Windows volume mounts
test/e2e/framework/framework.go:148
Nov 3 06:56:52.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
[sig-windows] Windows volume mounts
test/e2e/windows/framework.go:27
check volume mount permissions [BeforeEach]
test/e2e/windows/volumes.go:62
container should have readOnly permissions on emptyDir
test/e2e/windows/volumes.go:64
Only supported for node OS distro [windows] (not debian)
test/e2e/windows/framework.go:30
------------------------------
SSS
------------------------------
[BeforeEach] [sig-storage] Secrets
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:40.453: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename secrets
Nov 3 06:56:40.654: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Nov 3 06:56:40.698: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-7025
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:688
STEP: Creating secret with name secret-test-07c122a4-c615-4468-8240-b02d1b4a84a9
STEP: Creating a pod to test consume secrets
Nov 3 06:56:40.934: INFO: Waiting up to 5m0s for pod "pod-secrets-9a17c54c-c946-48dd-801d-db80c105ed14" in namespace "secrets-7025" to be "success or failure"
Nov 3 06:56:40.973: INFO: Pod "pod-secrets-9a17c54c-c946-48dd-801d-db80c105ed14": Phase="Pending", Reason="", readiness=false. Elapsed: 38.485149ms
Nov 3 06:56:43.240: INFO: Pod "pod-secrets-9a17c54c-c946-48dd-801d-db80c105ed14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.305760134s
Nov 3 06:56:45.270: INFO: Pod "pod-secrets-9a17c54c-c946-48dd-801d-db80c105ed14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335869855s
Nov 3 06:56:47.556: INFO: Pod "pod-secrets-9a17c54c-c946-48dd-801d-db80c105ed14": Phase="Pending", Reason="", readiness=false. Elapsed: 6.621851772s
Nov 3 06:56:49.560: INFO: Pod "pod-secrets-9a17c54c-c946-48dd-801d-db80c105ed14": Phase="Pending", Reason="", readiness=false. Elapsed: 8.626340278s
Nov 3 06:56:51.565: INFO: Pod "pod-secrets-9a17c54c-c946-48dd-801d-db80c105ed14": Phase="Pending", Reason="", readiness=false. Elapsed: 10.630405697s
Nov 3 06:56:53.590: INFO: Pod "pod-secrets-9a17c54c-c946-48dd-801d-db80c105ed14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.656001781s
STEP: Saw pod success
Nov 3 06:56:53.590: INFO: Pod "pod-secrets-9a17c54c-c946-48dd-801d-db80c105ed14" satisfied condition "success or failure"
Nov 3 06:56:53.619: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-9a17c54c-c946-48dd-801d-db80c105ed14 container secret-volume-test: <nil>
STEP: delete the pod
Nov 3 06:56:53.790: INFO: Waiting for pod pod-secrets-9a17c54c-c946-48dd-801d-db80c105ed14 to disappear
Nov 3 06:56:53.821: INFO: Pod pod-secrets-9a17c54c-c946-48dd-801d-db80c105ed14 no longer exists
[AfterEach] [sig-storage] Secrets
test/e2e/framework/framework.go:148
Nov 3 06:56:53.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7025" for this suite.
• [SLOW TEST:13.463 seconds]
[sig-storage] Secrets
test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:688
------------------------------
SSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:40.496: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
Nov 3 06:56:41.622: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Nov 3 06:56:41.693: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-1334
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:688
STEP: Creating a pod to test emptydir 0777 on node default medium
Nov 3 06:56:41.933: INFO: Waiting up to 5m0s for pod "pod-57880e8f-988b-41d9-acd1-1e93dda8679c" in namespace "emptydir-1334" to be "success or failure"
Nov 3 06:56:41.958: INFO: Pod "pod-57880e8f-988b-41d9-acd1-1e93dda8679c": Phase="Pending", Reason="", readiness=false. Elapsed: 24.830321ms
Nov 3 06:56:43.964: INFO: Pod "pod-57880e8f-988b-41d9-acd1-1e93dda8679c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031633337s
Nov 3 06:56:45.979: INFO: Pod "pod-57880e8f-988b-41d9-acd1-1e93dda8679c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04599743s
Nov 3 06:56:48.234: INFO: Pod "pod-57880e8f-988b-41d9-acd1-1e93dda8679c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.301239804s
Nov 3 06:56:50.240: INFO: Pod "pod-57880e8f-988b-41d9-acd1-1e93dda8679c": Phase="Running", Reason="", readiness=true. Elapsed: 8.306953068s
Nov 3 06:56:52.246: INFO: Pod "pod-57880e8f-988b-41d9-acd1-1e93dda8679c": Phase="Running", Reason="", readiness=true. Elapsed: 10.313579641s
Nov 3 06:56:54.252: INFO: Pod "pod-57880e8f-988b-41d9-acd1-1e93dda8679c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.319042007s
STEP: Saw pod success
Nov 3 06:56:54.252: INFO: Pod "pod-57880e8f-988b-41d9-acd1-1e93dda8679c" satisfied condition "success or failure"
Nov 3 06:56:54.255: INFO: Trying to get logs from node kind-worker2 pod pod-57880e8f-988b-41d9-acd1-1e93dda8679c container test-container: <nil>
STEP: delete the pod
Nov 3 06:56:54.302: INFO: Waiting for pod pod-57880e8f-988b-41d9-acd1-1e93dda8679c to disappear
Nov 3 06:56:54.317: INFO: Pod pod-57880e8f-988b-41d9-acd1-1e93dda8679c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:54.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1334" for this suite.
• [SLOW TEST:13.853 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:40
should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:688
------------------------------
SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:56:54.363: INFO: Driver hostPathSymlink doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:56:54.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: hostPathSymlink]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (ext3)] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Driver hostPathSymlink doesn't support ext3 -- skipping
test/e2e/storage/testsuites/base.go:157
------------------------------
SSS
------------------------------
[BeforeEach] [k8s.io] Security Context
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:40.461: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename security-context-test
Nov 3 06:56:40.752: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Nov 3 06:56:40.816: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-3748
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
test/e2e/common/security_context.go:39
[It] should not run with an explicit root user ID [LinuxOnly]
test/e2e/common/security_context.go:132
[AfterEach] [k8s.io] Security Context
test/e2e/framework/framework.go:148
Nov 3 06:56:54.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3748" for this suite.
• [SLOW TEST:14.549 seconds]
[k8s.io] Security Context
test/e2e/framework/framework.go:683
When creating a container with runAsNonRoot
test/e2e/common/security_context.go:97
should not run with an explicit root user ID [LinuxOnly]
test/e2e/common/security_context.go:132
------------------------------
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:40.479: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename persistent-local-volumes-test
Nov 3 06:56:42.485: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Nov 3 06:56:42.581: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-3239
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/storage/persistent_volumes-local.go:153
[BeforeEach] [Volume type: dir-link-bindmounted]
test/e2e/storage/persistent_volumes-local.go:189
STEP: Initializing test volumes
Nov 3 06:56:53.277: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-122bbfce-84e6-4e44-a32a-e2ea29b163bd-backend && mount --bind /tmp/local-volume-test-122bbfce-84e6-4e44-a32a-e2ea29b163bd-backend /tmp/local-volume-test-122bbfce-84e6-4e44-a32a-e2ea29b163bd-backend && ln -s /tmp/local-volume-test-122bbfce-84e6-4e44-a32a-e2ea29b163bd-backend /tmp/local-volume-test-122bbfce-84e6-4e44-a32a-e2ea29b163bd] Namespace:persistent-local-volumes-test-3239 PodName:hostexec-kind-worker-ftnbr ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Nov 3 06:56:53.277: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Creating local PVCs and PVs
Nov 3 06:56:53.801: INFO: Creating a PV followed by a PVC
Nov 3 06:56:53.906: INFO: Waiting for PV local-pvbr9zr to bind to PVC pvc-tjqdc
Nov 3 06:56:53.906: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-tjqdc] to have phase Bound
Nov 3 06:56:53.926: INFO: PersistentVolumeClaim pvc-tjqdc found but phase is Pending instead of Bound.
Nov 3 06:56:55.930: INFO: PersistentVolumeClaim pvc-tjqdc found and phase=Bound (2.023506165s)
Nov 3 06:56:55.930: INFO: Waiting up to 3m0s for PersistentVolume local-pvbr9zr to have phase Bound
Nov 3 06:56:55.933: INFO: PersistentVolume local-pvbr9zr found and phase=Bound (3.815805ms)
[BeforeEach] Set fsGroup for local volume
test/e2e/storage/persistent_volumes-local.go:255
[It] should set different fsGroup for second pod if first pod is deleted
test/e2e/storage/persistent_volumes-local.go:280
Nov 3 06:56:55.940: INFO: Disabled temporarily, reopen after #73168 is fixed
[AfterEach] [Volume type: dir-link-bindmounted]
test/e2e/storage/persistent_volumes-local.go:198
STEP: Cleaning up PVC and PV
Nov 3 06:56:55.941: INFO: Deleting PersistentVolumeClaim "pvc-tjqdc"
Nov 3 06:56:55.955: INFO: Deleting PersistentVolume "local-pvbr9zr"
STEP: Removing the test directory
Nov 3 06:56:55.971: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-122bbfce-84e6-4e44-a32a-e2ea29b163bd && umount /tmp/local-volume-test-122bbfce-84e6-4e44-a32a-e2ea29b163bd-backend && rm -r /tmp/local-volume-test-122bbfce-84e6-4e44-a32a-e2ea29b163bd-backend] Namespace:persistent-local-volumes-test-3239 PodName:hostexec-kind-worker-ftnbr ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Nov 3 06:56:55.971: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/framework.go:148
Nov 3 06:56:56.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-3239" for this suite.
S [SKIPPING] [15.847 seconds]
[sig-storage] PersistentVolumes-local
test/e2e/storage/utils/framework.go:23
[Volume type: dir-link-bindmounted]
test/e2e/storage/persistent_volumes-local.go:186
Set fsGroup for local volume
test/e2e/storage/persistent_volumes-local.go:254
should set different fsGroup for second pod if first pod is deleted [It]
test/e2e/storage/persistent_volumes-local.go:280
Disabled temporarily, reopen after #73168 is fixed
test/e2e/storage/persistent_volumes-local.go:281
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/testsuites/base.go:99 | |
Nov 3 06:56:56.328: INFO: Only supported for providers [azure] (not skeleton) | |
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes | |
test/e2e/framework/framework.go:148 | |
Nov 3 06:56:56.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready | |
[36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m | |
[sig-storage] In-tree Volumes | |
[90mtest/e2e/storage/utils/framework.go:23[0m | |
[Driver: azure] | |
[90mtest/e2e/storage/in_tree_volumes.go:70[0m | |
[Testpattern: Pre-provisioned PV (ext4)] volumes | |
[90mtest/e2e/storage/testsuites/base.go:98[0m | |
[36m[1mshould allow exec of files on the volume [BeforeEach][0m | |
[90mtest/e2e/storage/testsuites/volumes.go:191[0m | |
[36mOnly supported for providers [azure] (not skeleton)[0m | |
test/e2e/storage/drivers/in_tree.go:1449 | |
[90m------------------------------[0m | |
[36mS[0m[36mS[0m[36mS[0m[36mS[0m | |
------------------------------
[BeforeEach] [sig-cli] Kubectl alpha client
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:56.336: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-96
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl alpha client
test/e2e/kubectl/kubectl.go:208
[BeforeEach] Kubectl run CronJob
test/e2e/kubectl/kubectl.go:217
[It] should create a CronJob
test/e2e/kubectl/kubectl.go:226
Nov 3 06:56:56.547: INFO: Could not find batch/v2alpha1, Resource=cronjobs resource, skipping test: &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"Status", APIVersion:"v1"}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"the server could not find the requested resource", Reason:"NotFound", Details:(*v1.StatusDetails)(0xc002845560), Code:404}}
[AfterEach] Kubectl run CronJob
test/e2e/kubectl/kubectl.go:222
Nov 3 06:56:56.548: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:33605 --kubeconfig=/root/.kube/kind-test-config delete cronjobs e2e-test-echo-cronjob-alpha --namespace=kubectl-96'
Nov 3 06:56:56.902: INFO: rc: 1
Nov 3 06:56:56.902: FAIL: Unexpected error:
<exec.CodeExitError>: {
Err: {
s: "error running &{/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl [kubectl --server=https://127.0.0.1:33605 --kubeconfig=/root/.kube/kind-test-config delete cronjobs e2e-test-echo-cronjob-alpha --namespace=kubectl-96] [] <nil> Error from server (NotFound): cronjobs.batch \"e2e-test-echo-cronjob-alpha\" not found\n [] <nil> 0xc0028b0210 exit status 1 <nil> <nil> true [0xc001cdfa00 0xc001cdfa18 0xc001cdfa30] [0xc001cdfa00 0xc001cdfa18 0xc001cdfa30] [0xc001cdfa10 0xc001cdfa28] [0x1109c80 0x1109c80] 0xc0028458c0 <nil>}:\nCommand stdout:\n\nstderr:\nError from server (NotFound): cronjobs.batch \"e2e-test-echo-cronjob-alpha\" not found\n\nerror:\nexit status 1",
},
Code: 1,
}
error running &{/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl [kubectl --server=https://127.0.0.1:33605 --kubeconfig=/root/.kube/kind-test-config delete cronjobs e2e-test-echo-cronjob-alpha --namespace=kubectl-96] [] <nil> Error from server (NotFound): cronjobs.batch "e2e-test-echo-cronjob-alpha" not found
[] <nil> 0xc0028b0210 exit status 1 <nil> <nil> true [0xc001cdfa00 0xc001cdfa18 0xc001cdfa30] [0xc001cdfa00 0xc001cdfa18 0xc001cdfa30] [0xc001cdfa10 0xc001cdfa28] [0x1109c80 0x1109c80] 0xc0028458c0 <nil>}:
Command stdout:
stderr:
Error from server (NotFound): cronjobs.batch "e2e-test-echo-cronjob-alpha" not found
error:
exit status 1
occurred
[AfterEach] [sig-cli] Kubectl alpha client
test/e2e/framework/framework.go:148
Nov 3 06:56:56.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-96" for this suite.
S [SKIPPING] [0.596 seconds]
[sig-cli] Kubectl alpha client
test/e2e/kubectl/framework.go:23
Kubectl run CronJob
test/e2e/kubectl/kubectl.go:213
should create a CronJob [It]
test/e2e/kubectl/kubectl.go:226
Could not find batch/v2alpha1, Resource=cronjobs resource, skipping test: &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"Status", APIVersion:"v1"}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"the server could not find the requested resource", Reason:"NotFound", Details:(*v1.StatusDetails)(0xc002845560), Code:404}}
test/e2e/kubectl/kubectl.go:227
------------------------------
SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:42.921: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-2540
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/storage/persistent_volumes-local.go:153
[BeforeEach] [Volume type: dir-link]
test/e2e/storage/persistent_volumes-local.go:189
STEP: Initializing test volumes
Nov 3 06:56:55.768: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-025ec729-47b7-419d-b07a-726ffdb641d3-backend && ln -s /tmp/local-volume-test-025ec729-47b7-419d-b07a-726ffdb641d3-backend /tmp/local-volume-test-025ec729-47b7-419d-b07a-726ffdb641d3] Namespace:persistent-local-volumes-test-2540 PodName:hostexec-kind-worker-wrt76 ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Nov 3 06:56:55.768: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Creating local PVCs and PVs
Nov 3 06:56:56.015: INFO: Creating a PV followed by a PVC
Nov 3 06:56:56.045: INFO: Waiting for PV local-pvpd8qk to bind to PVC pvc-gfcqn
Nov 3 06:56:56.046: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-gfcqn] to have phase Bound
Nov 3 06:56:56.069: INFO: PersistentVolumeClaim pvc-gfcqn found but phase is Pending instead of Bound.
Nov 3 06:56:58.073: INFO: PersistentVolumeClaim pvc-gfcqn found but phase is Pending instead of Bound.
Nov 3 06:57:00.078: INFO: PersistentVolumeClaim pvc-gfcqn found but phase is Pending instead of Bound.
Nov 3 06:57:02.086: INFO: PersistentVolumeClaim pvc-gfcqn found and phase=Bound (6.040929945s)
Nov 3 06:57:02.087: INFO: Waiting up to 3m0s for PersistentVolume local-pvpd8qk to have phase Bound
Nov 3 06:57:02.090: INFO: PersistentVolume local-pvpd8qk found and phase=Bound (3.304692ms)
[BeforeEach] Set fsGroup for local volume
test/e2e/storage/persistent_volumes-local.go:255
[It] should set different fsGroup for second pod if first pod is deleted
test/e2e/storage/persistent_volumes-local.go:280
Nov 3 06:57:02.102: INFO: Disabled temporarily, reopen after #73168 is fixed
[AfterEach] [Volume type: dir-link]
test/e2e/storage/persistent_volumes-local.go:198
STEP: Cleaning up PVC and PV
Nov 3 06:57:02.103: INFO: Deleting PersistentVolumeClaim "pvc-gfcqn"
Nov 3 06:57:02.123: INFO: Deleting PersistentVolume "local-pvpd8qk"
STEP: Removing the test directory
Nov 3 06:57:02.136: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-025ec729-47b7-419d-b07a-726ffdb641d3 && rm -r /tmp/local-volume-test-025ec729-47b7-419d-b07a-726ffdb641d3-backend] Namespace:persistent-local-volumes-test-2540 PodName:hostexec-kind-worker-wrt76 ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Nov 3 06:57:02.136: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/framework.go:148
Nov 3 06:57:02.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-2540" for this suite.
S [SKIPPING] [19.467 seconds]
[sig-storage] PersistentVolumes-local
test/e2e/storage/utils/framework.go:23
[Volume type: dir-link]
test/e2e/storage/persistent_volumes-local.go:186
Set fsGroup for local volume
test/e2e/storage/persistent_volumes-local.go:254
should set different fsGroup for second pod if first pod is deleted [It]
test/e2e/storage/persistent_volumes-local.go:280
Disabled temporarily, reopen after #73168 is fixed
test/e2e/storage/persistent_volumes-local.go:281
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:57:02.394: INFO: Driver azure doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:57:02.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: azure]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (ext3)] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Driver azure doesn't support ext3 -- skipping
test/e2e/storage/testsuites/base.go:157
------------------------------
SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/testsuites/base.go:99
Nov 3 06:57:02.404: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:148
Nov 3 06:57:02.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: cinder]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should support non-existent path [BeforeEach]
test/e2e/storage/testsuites/subpath.go:189
Driver supports dynamic provisioning, skipping InlineVolume pattern
test/e2e/storage/testsuites/base.go:685
------------------------------
SSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:51.660: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-9640
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/storage/persistent_volumes-local.go:153
[BeforeEach] [Volume type: tmpfs]
test/e2e/storage/persistent_volumes-local.go:189
STEP: Initializing test volumes
STEP: Creating tmpfs mount point on node "kind-worker" at path "/tmp/local-volume-test-919996c0-1e5b-4112-bd40-3744fbff3aef"
Nov 3 06:56:59.926: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-919996c0-1e5b-4112-bd40-3744fbff3aef" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-919996c0-1e5b-4112-bd40-3744fbff3aef" "/tmp/local-volume-test-919996c0-1e5b-4112-bd40-3744fbff3aef"] Namespace:persistent-local-volumes-test-9640 PodName:hostexec-kind-worker-dmz5t ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Nov 3 06:56:59.927: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Creating local PVCs and PVs
Nov 3 06:57:00.106: INFO: Creating a PV followed by a PVC
Nov 3 06:57:00.126: INFO: Waiting for PV local-pvzjpt2 to bind to PVC pvc-vxjdc
Nov 3 06:57:00.126: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-vxjdc] to have phase Bound
Nov 3 06:57:00.145: INFO: PersistentVolumeClaim pvc-vxjdc found but phase is Pending instead of Bound.
Nov 3 06:57:02.158: INFO: PersistentVolumeClaim pvc-vxjdc found and phase=Bound (2.032078507s)
Nov 3 06:57:02.158: INFO: Waiting up to 3m0s for PersistentVolume local-pvzjpt2 to have phase Bound
Nov 3 06:57:02.169: INFO: PersistentVolume local-pvzjpt2 found and phase=Bound (10.8475ms)
[BeforeEach] Set fsGroup for local volume
test/e2e/storage/persistent_volumes-local.go:255
[It] should set different fsGroup for second pod if first pod is deleted
test/e2e/storage/persistent_volumes-local.go:280
Nov 3 06:57:02.185: INFO: Disabled temporarily, reopen after #73168 is fixed
[AfterEach] [Volume type: tmpfs]
test/e2e/storage/persistent_volumes-local.go:198
STEP: Cleaning up PVC and PV
Nov 3 06:57:02.186: INFO: Deleting PersistentVolumeClaim "pvc-vxjdc"
Nov 3 06:57:02.212: INFO: Deleting PersistentVolume "local-pvzjpt2"
STEP: Unmount tmpfs mount point on node "kind-worker" at path "/tmp/local-volume-test-919996c0-1e5b-4112-bd40-3744fbff3aef"
Nov 3 06:57:02.238: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-919996c0-1e5b-4112-bd40-3744fbff3aef"] Namespace:persistent-local-volumes-test-9640 PodName:hostexec-kind-worker-dmz5t ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Nov 3 06:57:02.238: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Removing the test directory
Nov 3 06:57:02.486: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-919996c0-1e5b-4112-bd40-3744fbff3aef] Namespace:persistent-local-volumes-test-9640 PodName:hostexec-kind-worker-dmz5t ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Nov 3 06:57:02.486: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/framework.go:148
Nov 3 06:57:02.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-9640" for this suite.
S [SKIPPING] [11.042 seconds]
[sig-storage] PersistentVolumes-local
test/e2e/storage/utils/framework.go:23
[Volume type: tmpfs]
test/e2e/storage/persistent_volumes-local.go:186
Set fsGroup for local volume
test/e2e/storage/persistent_volumes-local.go:254
should set different fsGroup for second pod if first pod is deleted [It]
test/e2e/storage/persistent_volumes-local.go:280
Disabled temporarily, reopen after #73168 is fixed
test/e2e/storage/persistent_volumes-local.go:281
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/testsuites/base.go:99
Nov 3 06:57:02.705: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:148
Nov 3 06:57:02.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: aws]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should support file as subpath [LinuxOnly] [BeforeEach]
test/e2e/storage/testsuites/subpath.go:225
Driver supports dynamic provisioning, skipping InlineVolume pattern
test/e2e/storage/testsuites/base.go:685
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:57:02.710: INFO: Driver hostPath doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:57:02.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: hostPath]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Inline-volume (ext3)] volumes
test/e2e/storage/testsuites/base.go:98
should store data [BeforeEach]
test/e2e/storage/testsuites/volumes.go:150
Driver hostPath doesn't support ext3 -- skipping
test/e2e/storage/testsuites/base.go:157
------------------------------
S
------------------------------
[BeforeEach] [sig-storage] Downward API volume
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:40.498: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
Nov 3 06:56:41.638: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Nov 3 06:56:41.655: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-2755
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
test/e2e/framework/framework.go:688
STEP: Creating a pod to test downward API volume plugin
Nov 3 06:56:41.887: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a" in namespace "downward-api-2755" to be "success or failure"
Nov 3 06:56:41.925: INFO: Pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 38.187873ms
Nov 3 06:56:43.934: INFO: Pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047378418s
Nov 3 06:56:45.939: INFO: Pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051867026s
Nov 3 06:56:48.234: INFO: Pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.346924228s
Nov 3 06:56:50.239: INFO: Pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.35201566s
Nov 3 06:56:52.246: INFO: Pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.359204056s
Nov 3 06:56:54.251: INFO: Pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.364465546s
Nov 3 06:56:56.265: INFO: Pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.378567859s
Nov 3 06:56:58.269: INFO: Pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.382305572s
Nov 3 06:57:00.277: INFO: Pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.390268677s
Nov 3 06:57:02.303: INFO: Pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.416660282s
Nov 3 06:57:04.309: INFO: Pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.421717977s
STEP: Saw pod success
Nov 3 06:57:04.309: INFO: Pod "downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a" satisfied condition "success or failure"
Nov 3 06:57:04.311: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a container client-container: <nil>
STEP: delete the pod
Nov 3 06:57:04.821: INFO: Waiting for pod downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a to disappear
Nov 3 06:57:04.824: INFO: Pod downwardapi-volume-f0891c63-2747-44c8-a22c-9f7605603b7a no longer exists
[AfterEach] [sig-storage] Downward API volume
test/e2e/framework/framework.go:148
Nov 3 06:57:04.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2755" for this suite.
• [SLOW TEST:24.335 seconds]
[sig-storage] Downward API volume
test/e2e/common/downwardapi_volume.go:35
should provide container's cpu limit [NodeConformance] [Conformance]
test/e2e/framework/framework.go:688
------------------------------
SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/testsuites/base.go:99
Nov 3 06:57:04.852: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/framework/framework.go:148
Nov 3 06:57:04.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir-bindmounted]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/testsuites/base.go:98
should allow exec of files on the volume [BeforeEach]
test/e2e/storage/testsuites/volumes.go:191
Driver local doesn't support ext3 -- skipping
test/e2e/storage/testsuites/base.go:157
------------------------------
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:99
Nov 3 06:57:04.859: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:148
Nov 3 06:57:04.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: azure]
test/e2e/storage/in_tree_volumes.go:70
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:98
should support file as subpath [LinuxOnly] [BeforeEach]
test/e2e/storage/testsuites/subpath.go:225
Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
test/e2e/storage/testsuites/base.go:685
------------------------------
[BeforeEach] [Testpattern: inline ephemeral CSI volume] ephemeral
test/e2e/storage/testsuites/base.go:99
Nov 3 06:57:04.864: INFO: csi-hostpath-v0 has no volume attributes defined, doesn't support ephemeral inline volumes
[AfterEach] [Testpattern: inline ephemeral CSI volume] ephemeral
test/e2e/framework/framework.go:148
Nov 3 06:57:04.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
[Driver: csi-hostpath-v0]
test/e2e/storage/csi_volumes.go:56
[Testpattern: inline ephemeral CSI volume] ephemeral
test/e2e/storage/testsuites/base.go:98
should support two pods which share the same volume [BeforeEach]
test/e2e/storage/testsuites/ephemeral.go:140
csi-hostpath-v0 has no volume attributes defined, doesn't support ephemeral inline volumes
test/e2e/storage/drivers/csi.go:136
------------------------------
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
test/e2e/common/sysctl.go:34
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:40.519: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename sysctl
Nov 3 06:56:42.795: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Nov 3 06:56:42.866: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sysctl-2435
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
test/e2e/common/sysctl.go:63
[It] should support unsafe sysctls which are actually whitelisted
test/e2e/common/sysctl.go:110
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
test/e2e/framework/framework.go:148
Nov 3 06:57:05.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-2435" for this suite.
• [SLOW TEST:24.858 seconds]
[k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
test/e2e/framework/framework.go:683
should support unsafe sysctls which are actually whitelisted
test/e2e/common/sysctl.go:110
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
test/e2e/framework/framework.go:147 | |
[1mSTEP[0m: Creating a kubernetes client | |
Nov 3 06:56:40.482: INFO: >>> kubeConfig: /root/.kube/kind-test-config | |
[1mSTEP[0m: Building a namespace api object, basename webhook | |
Nov 3 06:56:42.605: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. | |
Nov 3 06:56:42.677: INFO: Found ClusterRoles; assuming RBAC is enabled. | |
[1mSTEP[0m: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-8732 | |
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace | |
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] | |
test/e2e/apimachinery/webhook.go:87 | |
[1mSTEP[0m: Setting up server cert | |
[1mSTEP[0m: Create role binding to let webhook read extension-apiserver-authentication | |
[1mSTEP[0m: Deploying the webhook pod | |
[1mSTEP[0m: Wait for the deployment to be ready | |
Nov 3 06:56:43.588: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 3 06:56:45.648: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 3 06:56:48.229: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 3 06:56:49.652: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 3 06:56:51.653: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361003, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 3 06:56:54.776: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  test/e2e/framework/framework.go:688
STEP: Registering the webhook via the AdmissionRegistration API
Nov 3 06:56:54.822: INFO: Waiting for webhook configuration to be ready...
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:148
Nov 3 06:57:05.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8732" for this suite.
STEP: Destroying namespace "webhook-8732-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:25.136 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  test/e2e/framework/framework.go:688
------------------------------
SS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:51.248: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-2475
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
Nov 3 06:56:52.162: INFO: role binding webhook-auth-reader already exists
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov 3 06:56:52.186: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 3 06:56:54.199: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 3 06:56:56.210: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 3 06:56:58.244: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 3 06:57:00.206: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 3 06:57:02.213: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361012, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 3 06:57:05.242: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  test/e2e/framework/framework.go:688
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:148
Nov 3 06:57:05.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2475" for this suite.
STEP: Destroying namespace "webhook-2475-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:14.630 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  test/e2e/framework/framework.go:688
------------------------------
SS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:57:05.626: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-3847
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  test/e2e/framework/framework.go:688
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:148
Nov 3 06:57:05.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3847" for this suite.
•S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:52.103: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-6882
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
Nov 3 06:56:53.301: INFO: role binding webhook-auth-reader already exists
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov 3 06:56:53.483: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 3 06:56:55.500: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 3 06:56:57.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 3 06:56:59.503: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 3 06:57:01.528: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63708361013, loc:(*time.Location)(0x83e1840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 3 06:57:04.531: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  test/e2e/framework/framework.go:688
Nov 3 06:57:04.538: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9779-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:148
Nov 3 06:57:06.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6882" for this suite.
STEP: Destroying namespace "webhook-6882-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:14.151 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  test/e2e/framework/framework.go:688
------------------------------
SSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:57:06.271: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-5652
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:688
STEP: Creating configMap that has name configmap-test-emptyKey-d04fa7fe-1503-462c-a779-9cd7d80db7e7
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:148
Nov 3 06:57:06.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5652" for this suite.
•SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:99
Nov 3 06:57:06.560: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:148
Nov 3 06:57:06.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:98
      should support readOnly file specified in the volumeMount [LinuxOnly] [BeforeEach]
      test/e2e/storage/testsuites/subpath.go:374

      Driver local doesn't support InlineVolume -- skipping
      test/e2e/storage/testsuites/base.go:152
------------------------------
SSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:54.372: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename containers
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-6399
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:688
STEP: Creating a pod to test override all
Nov 3 06:56:54.549: INFO: Waiting up to 5m0s for pod "client-containers-e3d7f8b2-8162-4ec5-991d-7473bf00afcf" in namespace "containers-6399" to be "success or failure"
Nov 3 06:56:54.574: INFO: Pod "client-containers-e3d7f8b2-8162-4ec5-991d-7473bf00afcf": Phase="Pending", Reason="", readiness=false. Elapsed: 24.818809ms
Nov 3 06:56:56.583: INFO: Pod "client-containers-e3d7f8b2-8162-4ec5-991d-7473bf00afcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034281714s
Nov 3 06:56:58.588: INFO: Pod "client-containers-e3d7f8b2-8162-4ec5-991d-7473bf00afcf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03874951s
Nov 3 06:57:00.591: INFO: Pod "client-containers-e3d7f8b2-8162-4ec5-991d-7473bf00afcf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042615832s
Nov 3 06:57:02.596: INFO: Pod "client-containers-e3d7f8b2-8162-4ec5-991d-7473bf00afcf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046723138s
Nov 3 06:57:04.600: INFO: Pod "client-containers-e3d7f8b2-8162-4ec5-991d-7473bf00afcf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.050996206s
Nov 3 06:57:06.611: INFO: Pod "client-containers-e3d7f8b2-8162-4ec5-991d-7473bf00afcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.061882432s
STEP: Saw pod success
Nov 3 06:57:06.611: INFO: Pod "client-containers-e3d7f8b2-8162-4ec5-991d-7473bf00afcf" satisfied condition "success or failure"
Nov 3 06:57:06.617: INFO: Trying to get logs from node kind-worker2 pod client-containers-e3d7f8b2-8162-4ec5-991d-7473bf00afcf container test-container: <nil>
STEP: delete the pod
Nov 3 06:57:06.667: INFO: Waiting for pod client-containers-e3d7f8b2-8162-4ec5-991d-7473bf00afcf to disappear
Nov 3 06:57:06.678: INFO: Pod client-containers-e3d7f8b2-8162-4ec5-991d-7473bf00afcf no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:148
Nov 3 06:57:06.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6399" for this suite.
• [SLOW TEST:12.355 seconds]
[k8s.io] Docker Containers
test/e2e/framework/framework.go:683
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:688
------------------------------
SS
------------------------------
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:57:06.749: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename zone-support
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in zone-support-5395
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support
  test/e2e/storage/vsphere/vsphere_zone_support.go:101
Nov 3 06:57:07.033: INFO: Only supported for providers [vsphere] (not skeleton)
[AfterEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:148
Nov 3 06:57:07.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "zone-support-5395" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.402 seconds]
[sig-storage] Zone Support
test/e2e/storage/utils/framework.go:23
  Verify a pod is created and attached to a dynamically created PV, based on multiple zones specified in the storage class. (No shared datastores exist among both zones) [BeforeEach]
  test/e2e/storage/vsphere/vsphere_zone_support.go:282

  Only supported for providers [vsphere] (not skeleton)
  test/e2e/storage/vsphere/vsphere_zone_support.go:102
------------------------------
SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:99
Nov 3 06:57:07.159: INFO: Distro debian doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/framework/framework.go:148
Nov 3 06:57:07.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
    test/e2e/storage/testsuites/base.go:98
      should allow exec of files on the volume [BeforeEach]
      test/e2e/storage/testsuites/volumes.go:191

      Distro debian doesn't support ntfs -- skipping
      test/e2e/storage/testsuites/base.go:163
------------------------------
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:99
Nov 3 06:57:07.165: INFO: Driver azure doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/framework/framework.go:148
Nov 3 06:57:07.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
    test/e2e/storage/testsuites/base.go:98
      should allow exec of files on the volume [BeforeEach]
      test/e2e/storage/testsuites/volumes.go:191

      Driver azure doesn't support ntfs -- skipping
      test/e2e/storage/testsuites/base.go:157
------------------------------
S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:56.937: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename resourcequota
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-3610
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  test/e2e/framework/framework.go:688
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:148
Nov 3 06:57:10.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3610" for this suite.
• [SLOW TEST:13.718 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  test/e2e/framework/framework.go:688
------------------------------
SSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:147
STEP: Creating a kubernetes client
Nov 3 06:56:53.930: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-915
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:688
STEP: Creating secret with name secret-test-240ea91a-1459-4ee4-84d8-25e8094b5a18
STEP: Creating a pod to test consume secrets
Nov 3 06:56:54.167: INFO: Waiting up to 5m0s for pod "pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f" in namespace "secrets-915" to be "success or failure"
Nov 3 06:56:54.174: INFO: Pod "pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.343212ms
Nov 3 06:56:56.196: INFO: Pod "pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029071933s
Nov 3 06:56:58.243: INFO: Pod "pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075794817s
Nov 3 06:57:00.251: INFO: Pod "pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083931922s
Nov 3 06:57:02.263: INFO: Pod "pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095160433s
Nov 3 06:57:04.267: INFO: Pod "pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.099672949s
Nov 3 06:57:06.289: INFO: Pod "pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.121433664s
Nov 3 06:57:08.365: INFO: Pod "pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.197980395s
Nov 3 06:57:10.377: INFO: Pod "pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.209511934s
STEP: Saw pod success
Nov 3 06:57:10.377: INFO: Pod "pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f" satisfied condition "success or failure"
Nov 3 06:57:10.428: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f container secret-volume-test: <nil>
STEP: delete the pod
Nov 3 06:57:10.576: INFO: Waiting for pod pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f to disappear
Nov 3 06:57:10.600: INFO: Pod pod-secrets-7b4da097-273a-48d1-b231-c09bce11088f no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:148
Nov 3 06:57:10.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-915" for this suite.
• [SLOW TEST:16.733 seconds]
[sig-storage] Secrets
test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:688
------------------------------