@mloskot (last active April 27, 2023)
master-nowsl: initial run with mage and custom updated vagrant boxes
$ mage | tee mage.log
Running dependency: Fetch
Running dependency: startup
2023/04/27 06:56:05 main.go:71: [swdt] Setting environment variable VAGRANT_VARIABLES=variables.local.yaml
2023/04/27 06:56:05 fetch.go:35: [swdt] Downloading manifest https://storage.googleapis.com/k8s-release-dev/ci/latest-1.27.txt
Running dependency: Run
2023/04/27 06:56:06 fetch.go:41: [swdt] Downloading binaries of Kubernetes v1.27.1-13+6494bc61297cd0
Running dependency: checkVagrant
2023/04/27 06:56:06 fetch.go:50: [swdt] Downloading sync\linux\bin\kubeadm from https://storage.googleapis.com/k8s-release-dev/ci/v1.27.1-13+6494bc61297cd0/bin/linux/amd64/kubeadm
2023/04/27 06:56:06 fetch.go:55: [swdt] File exists. Skipping.
2023/04/27 06:56:06 fetch.go:50: [swdt] Downloading sync\linux\bin\kubectl from https://storage.googleapis.com/k8s-release-dev/ci/v1.27.1-13+6494bc61297cd0/bin/linux/amd64/kubectl
2023/04/27 06:56:06 fetch.go:55: [swdt] File exists. Skipping.
2023/04/27 06:56:06 fetch.go:50: [swdt] Downloading sync\linux\bin\kubelet from https://storage.googleapis.com/k8s-release-dev/ci/v1.27.1-13+6494bc61297cd0/bin/linux/amd64/kubelet
2023/04/27 06:56:06 fetch.go:55: [swdt] File exists. Skipping.
2023/04/27 06:56:06 fetch.go:66: [swdt] Downloading sync\windows\bin\kubeadm.exe from https://storage.googleapis.com/k8s-release-dev/ci/v1.27.1-13+6494bc61297cd0/bin/windows/amd64/kubeadm.exe
2023/04/27 06:56:06 fetch.go:71: [swdt] File exists. Skipping.
2023/04/27 06:56:06 fetch.go:66: [swdt] Downloading sync\windows\bin\kubelet.exe from https://storage.googleapis.com/k8s-release-dev/ci/v1.27.1-13+6494bc61297cd0/bin/windows/amd64/kubelet.exe
2023/04/27 06:56:06 fetch.go:71: [swdt] File exists. Skipping.
2023/04/27 06:56:06 fetch.go:66: [swdt] Downloading sync\windows\bin\kube-proxy.exe from https://storage.googleapis.com/k8s-release-dev/ci/v1.27.1-13+6494bc61297cd0/bin/windows/amd64/kube-proxy.exe
2023/04/27 06:56:06 fetch.go:71: [swdt] File exists. Skipping.
2023/04/27 06:56:06 main.go:106: [swdt] Target Fetch finished in 0.01 minutes
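
The Fetch target above resolves the CI version marker and then pulls each per-platform binary. A rough manual equivalent, built only from the URLs visible in this log (a sketch, not the actual fetch.go logic, which also skips files that already exist):

    VER=$(curl -sL https://storage.googleapis.com/k8s-release-dev/ci/latest-1.27.txt)   # e.g. v1.27.1-13+6494bc61297cd0
    for bin in kubeadm kubectl kubelet; do
      curl -sfLo "sync/linux/bin/${bin}" "https://storage.googleapis.com/k8s-release-dev/ci/${VER}/bin/linux/amd64/${bin}"
    done
    for bin in kubeadm.exe kubelet.exe kube-proxy.exe; do
      curl -sfLo "sync/windows/bin/${bin}" "https://storage.googleapis.com/k8s-release-dev/ci/${VER}/bin/windows/amd64/${bin}"
    done
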
2023/04/27 06:56:06 cmd.go:142: [swdt] exec: vagrant "--version"
2023/04/27 06:56:06 main.go:98: [swdt] Using Vagrant 2.3.4
2023/04/27 06:56:06 run.go:25: [swdt] Creating .lock directory
2023/04/27 06:56:06 run.go:40: [swdt] Creating sync\shared\kubejoin.ps1 mock file to keep Vagrantfile happy
2023/04/27 06:56:06 run.go:43: [swdt] Creating control plane Linux node
2023/04/27 06:56:06 cmd.go:142: [swdt] exec: vagrant "up" "controlplane"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
[Vagrantfile] Provisioning winw1 node with Calico: 3.25.1; containerd: 1.7.0
Bringing machine 'controlplane' up with 'virtualbox' provider...
==> controlplane: Importing base box 'mloskot/sig-windows-dev-tools-ubuntu-2204'...
==> controlplane: Matching MAC address for NAT networking...
==> controlplane: Checking if box 'mloskot/sig-windows-dev-tools-ubuntu-2204' version '1.0' is up to date...
==> controlplane: Setting the name of the VM: sig-windows-dev-tools_controlplane_1682578603687_83372
==> controlplane: Clearing any previously set network interfaces...
==> controlplane: Preparing network interfaces based on configuration...
controlplane: Adapter 1: nat
controlplane: Adapter 2: hostonly
==> controlplane: Forwarding ports...
controlplane: 22 (guest) => 2222 (host) (adapter 1)
==> controlplane: Running 'pre-boot' VM customizations...
==> controlplane: Booting VM...
==> controlplane: Waiting for machine to boot. This may take a few minutes...
controlplane: SSH address: 127.0.0.1:2222
controlplane: SSH username: vagrant
controlplane: SSH auth method: private key
controlplane:
controlplane: Vagrant insecure key detected. Vagrant will automatically replace
controlplane: this with a newly generated keypair for better security.
controlplane:
controlplane: Inserting generated public key within guest...
==> controlplane: Machine booted and ready!
[controlplane] GuestAdditions 7.0.8 running --- OK.
==> controlplane: Checking for guest additions in VM...
==> controlplane: Setting hostname...
==> controlplane: Configuring and enabling network interfaces...
==> controlplane: Mounting shared folders...
controlplane: /var/sync/linux => D:/_kubernetes/sig-windows-dev-tools/sync/linux
controlplane: /var/sync/forked => D:/_kubernetes/sig-windows-dev-tools/forked
controlplane: /var/sync/shared => D:/_kubernetes/sig-windows-dev-tools/sync/shared
==> controlplane: Running provisioner: shell...
controlplane: Running: C:/Users/mateuszl/AppData/Local/Temp/vagrant-shell20230427-23372-r939ct.sh
controlplane: ARGS: 1.27 10.20.30.10 100.244.0.0/16
controlplane: Using 1.27 as the Kubernetes version
controlplane: Setting up internet connectivity to /etc/resolv.conf
controlplane: nameserver 8.8.8.8
controlplane: nameserver 1.1.1.1
controlplane: now curling to add keys...
controlplane: Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
controlplane: OK
controlplane: deb https://apt.kubernetes.io/ kubernetes-xenial main
controlplane: SWDT: Running apt get update -y
controlplane: Hit:1 https://mirrors.edge.kernel.org/ubuntu jammy InRelease
controlplane: Get:3 https://mirrors.edge.kernel.org/ubuntu jammy-updates InRelease [119 kB]
controlplane: Get:2 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8,993 B]
controlplane: Get:4 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages [65.7 kB]
controlplane: Get:5 https://mirrors.edge.kernel.org/ubuntu jammy-backports InRelease [108 kB]
controlplane: Get:6 https://mirrors.edge.kernel.org/ubuntu jammy-security InRelease [110 kB]
controlplane: Get:7 https://mirrors.edge.kernel.org/ubuntu jammy-updates/main amd64 Packages [1,064 kB]
controlplane: Get:8 https://mirrors.edge.kernel.org/ubuntu jammy-updates/main Translation-en [220 kB]
controlplane: Get:9 https://mirrors.edge.kernel.org/ubuntu jammy-updates/main amd64 c-n-f Metadata [14.1 kB]
controlplane: Get:10 https://mirrors.edge.kernel.org/ubuntu jammy-updates/universe amd64 Packages [910 kB]
controlplane: Get:11 https://mirrors.edge.kernel.org/ubuntu jammy-updates/universe Translation-en [182 kB]
controlplane: Get:12 https://mirrors.edge.kernel.org/ubuntu jammy-updates/universe amd64 c-n-f Metadata [18.6 kB]
controlplane: Get:13 https://mirrors.edge.kernel.org/ubuntu jammy-security/main amd64 Packages [795 kB]
controlplane: Get:14 https://mirrors.edge.kernel.org/ubuntu jammy-security/main Translation-en [155 kB]
controlplane: Get:15 https://mirrors.edge.kernel.org/ubuntu jammy-security/main amd64 c-n-f Metadata [9,024 B]
controlplane: Get:16 https://mirrors.edge.kernel.org/ubuntu jammy-security/restricted amd64 Packages [830 kB]
controlplane: Get:17 https://mirrors.edge.kernel.org/ubuntu jammy-security/restricted Translation-en [131 kB]
controlplane: Get:18 https://mirrors.edge.kernel.org/ubuntu jammy-security/universe amd64 Packages [726 kB]
controlplane: Get:19 https://mirrors.edge.kernel.org/ubuntu jammy-security/universe Translation-en [120 kB]
controlplane: Fetched 5,586 kB in 10s (574 kB/s)
controlplane: Reading package lists...
controlplane: W: https://apt.kubernetes.io/dists/kubernetes-xenial/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
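
The deprecation warning above appears because the provisioning script adds the repository key with apt-key. The keyring-based equivalent would be roughly as follows (a sketch, not what this run does):

    sudo mkdir -p /etc/apt/keyrings
    curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes.gpg
    echo 'deb [signed-by=/etc/apt/keyrings/kubernetes.gpg] https://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list
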
controlplane: overlay
controlplane: br_netfilter
controlplane: SWDT: Running modprobes
controlplane: net.bridge.bridge-nf-call-iptables = 1
controlplane: net.ipv4.ip_forward = 1
controlplane: net.bridge.bridge-nf-call-ip6tables = 1
controlplane: * Applying /etc/sysctl.d/10-console-messages.conf ...
controlplane: kernel.printk = 4 4 1 7
controlplane: * Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
controlplane: net.ipv6.conf.all.use_tempaddr = 2
controlplane: net.ipv6.conf.default.use_tempaddr = 2
controlplane: * Applying /etc/sysctl.d/10-kernel-hardening.conf ...
controlplane: kernel.kptr_restrict = 1
controlplane: * Applying /etc/sysctl.d/10-magic-sysrq.conf ...
controlplane: kernel.sysrq = 176
controlplane: * Applying /etc/sysctl.d/10-network-security.conf ...
controlplane: net.ipv4.conf.default.rp_filter = 2
controlplane: net.ipv4.conf.all.rp_filter = 2
controlplane: * Applying /etc/sysctl.d/10-ptrace.conf ...
controlplane: kernel.yama.ptrace_scope = 1
controlplane: * Applying /etc/sysctl.d/10-zeropage.conf ...
controlplane: vm.mmap_min_addr = 65536
controlplane: * Applying /usr/lib/sysctl.d/50-default.conf ...
controlplane: kernel.core_uses_pid = 1
controlplane: net.ipv4.conf.default.rp_filter = 2
controlplane: net.ipv4.conf.default.accept_source_route = 0
controlplane: sysctl: setting key "net.ipv4.conf.all.accept_source_route": Invalid argument
controlplane: net.ipv4.conf.default.promote_secondaries = 1
controlplane: sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
controlplane: net.ipv4.ping_group_range = 0 2147483647
controlplane: net.core.default_qdisc = fq_codel
controlplane: fs.protected_hardlinks = 1
controlplane: fs.protected_symlinks = 1
controlplane: fs.protected_regular = 1
controlplane: fs.protected_fifos = 1
controlplane: * Applying /usr/lib/sysctl.d/50-pid-max.conf ...
controlplane: kernel.pid_max = 4194304
controlplane: * Applying /etc/sysctl.d/99-kubernetes-cri.conf ...
controlplane: net.bridge.bridge-nf-call-iptables = 1
controlplane: net.ipv4.ip_forward = 1
controlplane: net.bridge.bridge-nf-call-ip6tables = 1
controlplane: * Applying /usr/lib/sysctl.d/99-protect-links.conf ...
controlplane: fs.protected_fifos = 1
controlplane: fs.protected_hardlinks = 1
controlplane: fs.protected_regular = 2
controlplane: fs.protected_symlinks = 1
controlplane: * Applying /etc/sysctl.d/99-sysctl.conf ...
controlplane: net.ipv6.conf.all.disable_ipv6 = 1
controlplane: * Applying /etc/sysctl.conf ...
controlplane: net.ipv6.conf.all.disable_ipv6 = 1
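
Of the settings flushed above, only the CRI-related ones matter for Kubernetes; judging by the values in this log, /etc/sysctl.d/99-kubernetes-cri.conf contains exactly:

    net.bridge.bridge-nf-call-iptables  = 1
    net.ipv4.ip_forward                 = 1
    net.bridge.bridge-nf-call-ip6tables = 1
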
controlplane: SWDT installing kubelet, kubeadm, kubectl; will overwrite them later as needed...
controlplane: Reading package lists...
controlplane: Building dependency tree...
controlplane: Reading state information...
controlplane: The following additional packages will be installed:
controlplane: conntrack cri-tools ebtables kubernetes-cni socat
controlplane: The following NEW packages will be installed:
controlplane: conntrack cri-tools ebtables kubeadm kubectl kubelet kubernetes-cni socat
controlplane: 0 upgraded, 8 newly installed, 0 to remove and 38 not upgraded.
controlplane: Need to get 85.9 MB of archives.
controlplane: After this operation, 328 MB of additional disk space will be used.
controlplane: Get:2 https://mirrors.edge.kernel.org/ubuntu jammy/main amd64 conntrack amd64 1:1.4.6-2build2 [33.5 kB]
controlplane: Get:7 https://mirrors.edge.kernel.org/ubuntu jammy/main amd64 ebtables amd64 2.0.11-4build2 [84.9 kB]
controlplane: Get:8 https://mirrors.edge.kernel.org/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB]
controlplane: Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 cri-tools amd64 1.26.0-00 [18.9 MB]
controlplane: Get:3 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 1.2.0-00 [27.6 MB]
controlplane: Get:4 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.27.1-00 [18.7 MB]
controlplane: Get:5 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.27.1-00 [10.2 MB]
controlplane: Get:6 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.27.1-00 [9,928 kB]
controlplane: dpkg-preconfigure: unable to re-open stdin: No such file or directory
controlplane: Fetched 85.9 MB in 37s (2,298 kB/s)
controlplane: Selecting previously unselected package conntrack.
controlplane: (Reading database ... 80676 files and directories currently installed.)
controlplane: Preparing to unpack .../0-conntrack_1%3a1.4.6-2build2_amd64.deb ...
controlplane: Unpacking conntrack (1:1.4.6-2build2) ...
controlplane: Selecting previously unselected package cri-tools.
controlplane: Preparing to unpack .../1-cri-tools_1.26.0-00_amd64.deb ...
controlplane: Unpacking cri-tools (1.26.0-00) ...
controlplane: Selecting previously unselected package ebtables.
controlplane: Preparing to unpack .../2-ebtables_2.0.11-4build2_amd64.deb ...
controlplane: Unpacking ebtables (2.0.11-4build2) ...
controlplane: Selecting previously unselected package kubernetes-cni.
controlplane: Preparing to unpack .../3-kubernetes-cni_1.2.0-00_amd64.deb ...
controlplane: Unpacking kubernetes-cni (1.2.0-00) ...
controlplane: Selecting previously unselected package socat.
controlplane: Preparing to unpack .../4-socat_1.7.4.1-3ubuntu4_amd64.deb ...
controlplane: Unpacking socat (1.7.4.1-3ubuntu4) ...
controlplane: Selecting previously unselected package kubelet.
controlplane: Preparing to unpack .../5-kubelet_1.27.1-00_amd64.deb ...
controlplane: Unpacking kubelet (1.27.1-00) ...
controlplane: Selecting previously unselected package kubectl.
controlplane: Preparing to unpack .../6-kubectl_1.27.1-00_amd64.deb ...
controlplane: Unpacking kubectl (1.27.1-00) ...
controlplane: Selecting previously unselected package kubeadm.
controlplane: Preparing to unpack .../7-kubeadm_1.27.1-00_amd64.deb ...
controlplane: Unpacking kubeadm (1.27.1-00) ...
controlplane: Setting up conntrack (1:1.4.6-2build2) ...
controlplane: Setting up kubectl (1.27.1-00) ...
controlplane: Setting up ebtables (2.0.11-4build2) ...
controlplane: Setting up socat (1.7.4.1-3ubuntu4) ...
controlplane: Setting up cri-tools (1.26.0-00) ...
controlplane: Setting up kubernetes-cni (1.2.0-00) ...
controlplane: Setting up kubelet (1.27.1-00) ...
controlplane: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
controlplane: Setting up kubeadm (1.27.1-00) ...
controlplane: Processing triggers for man-db (2.10.2-1) ...
controlplane:
controlplane: Running kernel seems to be up-to-date.
controlplane:
controlplane: No services need to be restarted.
controlplane:
controlplane: No containers need to be restarted.
controlplane:
controlplane: No user sessions are running outdated binaries.
controlplane:
controlplane: No VM guests are running outdated hypervisor (qemu) binaries on this host.
controlplane: kubelet set on hold.
controlplane: kubeadm set on hold.
controlplane: kubectl set on hold.
controlplane: Configuring Containerd
controlplane: Reading package lists...
controlplane: Building dependency tree...
controlplane: Reading state information...
controlplane: lsb-release is already the newest version (11.1.0ubuntu4).
controlplane: ca-certificates is already the newest version (20211016ubuntu0.22.04.1).
controlplane: ca-certificates set to manually installed.
controlplane: gnupg is already the newest version (2.2.27-3ubuntu2.1).
controlplane: 0 upgraded, 0 newly installed, 0 to remove and 38 not upgraded.
controlplane: Hit:2 https://mirrors.edge.kernel.org/ubuntu jammy InRelease
controlplane: Hit:3 https://mirrors.edge.kernel.org/ubuntu jammy-updates InRelease
controlplane: Hit:4 https://mirrors.edge.kernel.org/ubuntu jammy-backports InRelease
controlplane: Hit:5 https://mirrors.edge.kernel.org/ubuntu jammy-security InRelease
controlplane: Get:6 https://download.docker.com/linux/ubuntu jammy InRelease [48.9 kB]
controlplane: Get:1 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8,993 B]
controlplane: Get:7 https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages [16.7 kB]
controlplane: Fetched 74.6 kB in 1s (55.9 kB/s)
controlplane: Reading package lists...
controlplane: W: https://apt.kubernetes.io/dists/kubernetes-xenial/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
controlplane: Reading package lists...
controlplane: Building dependency tree...
controlplane: Reading state information...
controlplane: The following NEW packages will be installed:
controlplane: containerd.io
controlplane: 0 upgraded, 1 newly installed, 0 to remove and 38 not upgraded.
controlplane: Need to get 28.3 MB of archives.
controlplane: After this operation, 116 MB of additional disk space will be used.
controlplane: Get:1 https://download.docker.com/linux/ubuntu jammy/stable amd64 containerd.io amd64 1.6.20-1 [28.3 MB]
controlplane: dpkg-preconfigure: unable to re-open stdin: No such file or directory
controlplane: Fetched 28.3 MB in 11s (2,478 kB/s)
controlplane: Selecting previously unselected package containerd.io.
controlplane: (Reading database ...
(Reading database ... 5%
(Reading database ... 10%
(Reading database ... 15%
(Reading database ... 20%
(Reading database ... 25%
(Reading database ... 30%
(Reading database ... 35%
(Reading database ... 40%
(Reading database ... 45%
(Reading database ... 50%
(Reading database ... 55%
(Reading database ... 60%
(Reading database ... 65%
(Reading database ... 70%
(Reading database ... 75%
(Reading database ... 80%
(Reading database ... 85%
(Reading database ... 90%
(Reading database ... 95%
(Reading database ... 100%
(Reading database ... 80770 files and directories currently installed.)
controlplane: Preparing to unpack .../containerd.io_1.6.20-1_amd64.deb ...
controlplane: Unpacking containerd.io (1.6.20-1) ...
controlplane: Setting up containerd.io (1.6.20-1) ...
controlplane: Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /lib/systemd/system/containerd.service.
controlplane: Processing triggers for man-db (2.10.2-1) ...
controlplane:
controlplane: Running kernel seems to be up-to-date.
controlplane:
controlplane: No services need to be restarted.
controlplane:
controlplane: No containers need to be restarted.
controlplane:
controlplane: No user sessions are running outdated binaries.
controlplane:
controlplane: No VM guests are running outdated hypervisor (qemu) binaries on this host.
controlplane: copying /var/sync/linux/bin/kubeadm to node path..
controlplane: copying /var/sync/linux/bin/kubectl to node path..
controlplane: copying /var/sync/linux/bin/kubelet to node path..
controlplane: disabled_plugins = []
controlplane: imports = []
controlplane: oom_score = 0
controlplane: plugin_dir = ""
controlplane: required_plugins = []
controlplane: root = "/var/lib/containerd"
controlplane: state = "/run/containerd"
controlplane: temp = ""
controlplane: version = 2
controlplane:
controlplane: [cgroup]
controlplane: path = ""
controlplane:
controlplane: [debug]
controlplane: address = ""
controlplane: format = ""
controlplane: gid = 0
controlplane: level = ""
controlplane: uid = 0
controlplane:
controlplane: [grpc]
controlplane: address = "/run/containerd/containerd.sock"
controlplane: gid = 0
controlplane: max_recv_message_size = 16777216
controlplane: max_send_message_size = 16777216
controlplane: tcp_address = ""
controlplane: tcp_tls_ca = ""
controlplane: tcp_tls_cert = ""
controlplane: tcp_tls_key = ""
controlplane: uid = 0
controlplane:
controlplane: [metrics]
controlplane: address = ""
controlplane: grpc_histogram = false
controlplane:
controlplane: [plugins]
controlplane:
controlplane: [plugins."io.containerd.gc.v1.scheduler"]
controlplane: deletion_threshold = 0
controlplane: mutation_threshold = 100
controlplane: pause_threshold = 0.02
controlplane: schedule_delay = "0s"
controlplane: startup_delay = "100ms"
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri"]
controlplane: device_ownership_from_security_context = false
controlplane: disable_apparmor = false
controlplane: disable_cgroup = false
controlplane: disable_hugetlb_controller = true
controlplane: disable_proc_mount = false
controlplane: disable_tcp_service = true
controlplane: enable_selinux = false
controlplane: enable_tls_streaming = false
controlplane: enable_unprivileged_icmp = false
controlplane: enable_unprivileged_ports = false
controlplane: ignore_image_defined_volumes = false
controlplane: max_concurrent_downloads = 3
controlplane: max_container_log_line_size = 16384
controlplane: netns_mounts_under_state_dir = false
controlplane: restrict_oom_score_adj = false
controlplane: sandbox_image = "registry.k8s.io/pause:3.6"
controlplane: selinux_category_range = 1024
controlplane: stats_collect_period = 10
controlplane: stream_idle_timeout = "4h0m0s"
controlplane: stream_server_address = "127.0.0.1"
controlplane: stream_server_port = "0"
controlplane: systemd_cgroup = false
controlplane: tolerate_missing_hugetlb_controller = true
controlplane: unset_seccomp_profile = ""
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".cni]
controlplane: bin_dir = "/opt/cni/bin"
controlplane: conf_dir = "/etc/cni/net.d"
controlplane: conf_template = ""
controlplane: ip_pref = ""
controlplane: max_conf_num = 1
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".containerd]
controlplane: default_runtime_name = "runc"
controlplane: disable_snapshot_annotations = true
controlplane: discard_unpacked_layers = false
controlplane: ignore_rdt_not_enabled_errors = false
controlplane: no_pivot = false
controlplane: snapshotter = "overlayfs"
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
controlplane: base_runtime_spec = ""
controlplane: cni_conf_dir = ""
controlplane: cni_max_conf_num = 0
controlplane: container_annotations = []
controlplane: pod_annotations = []
controlplane: privileged_without_host_devices = false
controlplane: runtime_engine = ""
controlplane: runtime_path = ""
controlplane: runtime_root = ""
controlplane: runtime_type = ""
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
controlplane: base_runtime_spec = ""
controlplane: cni_conf_dir = ""
controlplane: cni_max_conf_num = 0
controlplane: container_annotations = []
controlplane: pod_annotations = []
controlplane: privileged_without_host_devices = false
controlplane: runtime_engine = ""
controlplane: runtime_path = ""
controlplane: runtime_root = ""
controlplane: runtime_type = "io.containerd.runc.v2"
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
controlplane: BinaryName = ""
controlplane: CriuImagePath = ""
controlplane: CriuPath = ""
controlplane: CriuWorkPath = ""
controlplane: IoGid = 0
controlplane: IoUid = 0
controlplane: NoNewKeyring = false
controlplane: NoPivotRoot = false
controlplane: Root = ""
controlplane: ShimCgroup = ""
controlplane: SystemdCgroup = false
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
controlplane: base_runtime_spec = ""
controlplane: cni_conf_dir = ""
controlplane: cni_max_conf_num = 0
controlplane: container_annotations = []
controlplane: pod_annotations = []
controlplane: privileged_without_host_devices = false
controlplane: runtime_engine = ""
controlplane: runtime_path = ""
controlplane: runtime_root = ""
controlplane: runtime_type = ""
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".image_decryption]
controlplane: key_model = "node"
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".registry]
controlplane: config_path = ""
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".registry.auths]
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".registry.configs]
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".registry.headers]
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
controlplane: tls_cert_file = ""
controlplane: tls_key_file = ""
controlplane:
controlplane: [plugins."io.containerd.internal.v1.opt"]
controlplane: path = "/opt/containerd"
controlplane:
controlplane: [plugins."io.containerd.internal.v1.restart"]
controlplane: interval = "10s"
controlplane:
controlplane: [plugins."io.containerd.internal.v1.tracing"]
controlplane: sampling_ratio = 1.0
controlplane: service_name = "containerd"
controlplane:
controlplane: [plugins."io.containerd.metadata.v1.bolt"]
controlplane: content_sharing_policy = "shared"
controlplane:
controlplane: [plugins."io.containerd.monitor.v1.cgroups"]
controlplane: no_prometheus = false
controlplane:
controlplane: [plugins."io.containerd.runtime.v1.linux"]
controlplane: no_shim = false
controlplane: runtime = "runc"
controlplane: runtime_root = ""
controlplane: shim = "containerd-shim"
controlplane: shim_debug = false
controlplane:
controlplane: [plugins."io.containerd.runtime.v2.task"]
controlplane: platforms = ["linux/amd64"]
controlplane: sched_core = false
controlplane:
controlplane: [plugins."io.containerd.service.v1.diff-service"]
controlplane: default = ["walking"]
controlplane:
controlplane: [plugins."io.containerd.service.v1.tasks-service"]
controlplane: rdt_config_file = ""
controlplane:
controlplane: [plugins."io.containerd.snapshotter.v1.aufs"]
controlplane: root_path = ""
controlplane:
controlplane: [plugins."io.containerd.snapshotter.v1.btrfs"]
controlplane: root_path = ""
controlplane:
controlplane: [plugins."io.containerd.snapshotter.v1.devmapper"]
controlplane: async_remove = false
controlplane: base_image_size = ""
controlplane: discard_blocks = false
controlplane: fs_options = ""
controlplane: fs_type = ""
controlplane: pool_name = ""
controlplane: root_path = ""
controlplane:
controlplane: [plugins."io.containerd.snapshotter.v1.native"]
controlplane: root_path = ""
controlplane:
controlplane: [plugins."io.containerd.snapshotter.v1.overlayfs"]
controlplane: root_path = ""
controlplane: upperdir_label = false
controlplane:
controlplane: [plugins."io.containerd.snapshotter.v1.zfs"]
controlplane: root_path = ""
controlplane:
controlplane: [plugins."io.containerd.tracing.processor.v1.otlp"]
controlplane: endpoint = ""
controlplane: insecure = false
controlplane: protocol = ""
controlplane:
controlplane: [proxy_plugins]
controlplane:
controlplane: [stream_processors]
controlplane:
controlplane: [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
controlplane: accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
controlplane: args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
controlplane: env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
controlplane: path = "ctd-decoder"
controlplane: returns = "application/vnd.oci.image.layer.v1.tar"
controlplane:
controlplane: [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
controlplane: accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
controlplane: args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
controlplane: env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
controlplane: path = "ctd-decoder"
controlplane: returns = "application/vnd.oci.image.layer.v1.tar+gzip"
controlplane:
controlplane: [timeouts]
controlplane: "io.containerd.timeout.bolt.open" = "0s"
controlplane: "io.containerd.timeout.shim.cleanup" = "5s"
controlplane: "io.containerd.timeout.shim.load" = "5s"
controlplane: "io.containerd.timeout.shim.shutdown" = "3s"
controlplane: "io.containerd.timeout.task.state" = "2s"
controlplane:
controlplane: [ttrpc]
controlplane: address = ""
controlplane: gid = 0
controlplane: uid = 0
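
The TOML dump above is containerd's stock configuration, which the script appears to generate with "containerd config default" before printing it. Worth noting: SystemdCgroup = false here, while kubeadm defaults the kubelet's cgroupDriver to "systemd" (see the kubelet.go line just below). When the two disagree on a long-lived cluster, the usual fix is (a sketch, not something this run performs):

    sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
    sudo systemctl restart containerd
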
controlplane: I0427 06:59:00.561826 3287 initconfiguration.go:255] loading configuration from "/var/sync/shared/kubeadm.yaml"
controlplane: I0427 06:59:00.566074 3287 initconfiguration.go:117] detected and using CRI socket: unix:///var/run/containerd/containerd.sock
controlplane: I0427 06:59:00.566398 3287 kubelet.go:196] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
controlplane: I0427 06:59:00.572248 3287 version.go:187] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.27.txt
controlplane: I0427 06:59:02.123023 3287 common.go:128] WARNING: tolerating control plane version v1.27.1 as a pre-release version
controlplane: [init] Using Kubernetes version: v1.27.1
controlplane: [preflight] Running pre-flight checks
controlplane: I0427 06:59:02.124403 3287 checks.go:563] validating Kubernetes and kubeadm version
controlplane: I0427 06:59:02.124450 3287 checks.go:168] validating if the firewall is enabled and active
controlplane: I0427 06:59:02.141594 3287 checks.go:203] validating availability of port 6443
controlplane: I0427 06:59:02.141832 3287 checks.go:203] validating availability of port 10259
controlplane: I0427 06:59:02.141849 3287 checks.go:203] validating availability of port 10257
controlplane: I0427 06:59:02.141865 3287 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
controlplane: I0427 06:59:02.141883 3287 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
controlplane: I0427 06:59:02.141889 3287 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
controlplane: I0427 06:59:02.141894 3287 checks.go:280] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
controlplane: I0427 06:59:02.141901 3287 checks.go:430] validating if the connectivity type is via proxy or direct
controlplane: I0427 06:59:02.141919 3287 checks.go:469] validating http connectivity to first IP address in the CIDR
controlplane: I0427 06:59:02.141940 3287 checks.go:469] validating http connectivity to first IP address in the CIDR
controlplane: I0427 06:59:02.141949 3287 checks.go:104] validating the container runtime
controlplane: I0427 06:59:02.187295 3287 checks.go:639] validating whether swap is enabled or not
controlplane: I0427 06:59:02.187341 3287 checks.go:370] validating the presence of executable crictl
controlplane: I0427 06:59:02.187362 3287 checks.go:370] validating the presence of executable conntrack
controlplane: I0427 06:59:02.187381 3287 checks.go:370] validating the presence of executable ip
controlplane: I0427 06:59:02.187395 3287 checks.go:370] validating the presence of executable iptables
controlplane: I0427 06:59:02.187451 3287 checks.go:370] validating the presence of executable mount
controlplane: I0427 06:59:02.187466 3287 checks.go:370] validating the presence of executable nsenter
controlplane: I0427 06:59:02.187489 3287 checks.go:370] validating the presence of executable ebtables
controlplane: I0427 06:59:02.187502 3287 checks.go:370] validating the presence of executable ethtool
controlplane: I0427 06:59:02.187512 3287 checks.go:370] validating the presence of executable socat
controlplane: I0427 06:59:02.187524 3287 checks.go:370] validating the presence of executable tc
controlplane: I0427 06:59:02.187542 3287 checks.go:370] validating the presence of executable touch
controlplane: I0427 06:59:02.187554 3287 checks.go:516] running all checks
controlplane: I0427 06:59:02.213490 3287 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost
controlplane: I0427 06:59:02.213952 3287 checks.go:605] validating kubelet version
controlplane: I0427 06:59:02.296494 3287 checks.go:130] validating if the "kubelet" service is enabled and active
controlplane: [preflight] Pulling images required for setting up a Kubernetes cluster
controlplane: [preflight] This might take a minute or two, depending on the speed of your internet connection
controlplane: [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
controlplane: I0427 06:59:02.310001 3287 checks.go:203] validating availability of port 10250
controlplane: I0427 06:59:02.310101 3287 checks.go:329] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
controlplane: I0427 06:59:02.310130 3287 checks.go:329] validating the contents of file /proc/sys/net/ipv4/ip_forward
controlplane: I0427 06:59:02.310142 3287 checks.go:203] validating availability of port 2379
controlplane: I0427 06:59:02.310156 3287 checks.go:203] validating availability of port 2380
controlplane: I0427 06:59:02.310167 3287 checks.go:243] validating the existence and emptiness of directory /var/lib/etcd
controlplane: W0427 06:59:02.310262 3287 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.1, falling back to the nearest etcd version (3.5.7-0)
controlplane: I0427 06:59:02.310273 3287 checks.go:828] using image pull policy: IfNotPresent
controlplane: I0427 06:59:02.350471 3287 checks.go:854] pulling: registry.k8s.io/kube-apiserver:v1.27.1
controlplane: I0427 06:59:22.490875 3287 checks.go:854] pulling: registry.k8s.io/kube-controller-manager:v1.27.1
controlplane: I0427 06:59:39.567063 3287 checks.go:854] pulling: registry.k8s.io/kube-scheduler:v1.27.1
controlplane: I0427 06:59:49.929335 3287 checks.go:854] pulling: registry.k8s.io/kube-proxy:v1.27.1
controlplane: W0427 07:00:03.854194 3287 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended to use "registry.k8s.io/pause:3.9" as the CRI sandbox image.
controlplane: I0427 07:00:03.890309 3287 checks.go:854] pulling: registry.k8s.io/pause:3.9
controlplane: I0427 07:00:06.691329 3287 checks.go:854] pulling: registry.k8s.io/etcd:3.5.7-0
controlplane: I0427 07:00:55.073700 3287 checks.go:854] pulling: registry.k8s.io/coredns/coredns:v1.10.1
controlplane: [certs] Using certificateDir folder "/etc/kubernetes/pki"
controlplane: I0427 07:01:05.129665 3287 certs.go:112] creating a new certificate authority for ca
controlplane: [certs] Generating "ca" certificate and key
controlplane: I0427 07:01:05.182828 3287 certs.go:519] validating certificate period for ca certificate
controlplane: [certs] Generating "apiserver" certificate and key
controlplane: [certs] apiserver serving cert is signed for DNS names [controlplane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.20.30.10]
controlplane: [certs] Generating "apiserver-kubelet-client" certificate and key
controlplane: I0427 07:01:05.460114 3287 certs.go:112] creating a new certificate authority for front-proxy-ca
controlplane: [certs] Generating "front-proxy-ca" certificate and key
controlplane: I0427 07:01:05.965267 3287 certs.go:519] validating certificate period for front-proxy-ca certificate
controlplane: [certs] Generating "front-proxy-client" certificate and key
controlplane: I0427 07:01:06.273437 3287 certs.go:112] creating a new certificate authority for etcd-ca
controlplane: [certs] Generating "etcd/ca" certificate and key
controlplane: I0427 07:01:06.411808 3287 certs.go:519] validating certificate period for etcd/ca certificate
controlplane: [certs] Generating "etcd/server" certificate and key
controlplane: [certs] etcd/server serving cert is signed for DNS names [controlplane localhost] and IPs [10.20.30.10 127.0.0.1 ::1]
controlplane: [certs] Generating "etcd/peer" certificate and key
controlplane: [certs] etcd/peer serving cert is signed for DNS names [controlplane localhost] and IPs [10.20.30.10 127.0.0.1 ::1]
controlplane: [certs] Generating "etcd/healthcheck-client" certificate and key
controlplane: [certs] Generating "apiserver-etcd-client" certificate and key
controlplane: I0427 07:01:07.585180 3287 certs.go:78] creating new public/private key files for signing service account users
controlplane: [certs] Generating "sa" key and public key
controlplane: [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
controlplane: I0427 07:01:07.671651 3287 kubeconfig.go:103] creating kubeconfig file for admin.conf
controlplane: [kubeconfig] Writing "admin.conf" kubeconfig file
controlplane: I0427 07:01:08.236544 3287 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
controlplane: [kubeconfig] Writing "kubelet.conf" kubeconfig file
controlplane: I0427 07:01:08.411863 3287 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
controlplane: [kubeconfig] Writing "controller-manager.conf" kubeconfig file
controlplane: I0427 07:01:08.586315 3287 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
controlplane: [kubeconfig] Writing "scheduler.conf" kubeconfig file
controlplane: I0427 07:01:08.678296 3287 kubelet.go:67] Stopping the kubelet
controlplane: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
controlplane: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
controlplane: [kubelet-start] Starting the kubelet
controlplane: [control-plane] Using manifest folder "/etc/kubernetes/manifests"
controlplane: [control-plane] Creating static Pod manifest for "kube-apiserver"
controlplane: [control-plane] Creating static Pod manifest for "kube-controller-manager"
controlplane: [control-plane] Creating static Pod manifest for "kube-scheduler"
controlplane: [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
controlplane: I0427 07:01:09.002160 3287 manifests.go:99] [control-plane] getting StaticPodSpecs
controlplane: I0427 07:01:09.002368 3287 certs.go:519] validating certificate period for CA certificate
controlplane: I0427 07:01:09.002419 3287 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
controlplane: I0427 07:01:09.002424 3287 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
controlplane: I0427 07:01:09.002428 3287 manifests.go:125] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
controlplane: I0427 07:01:09.002431 3287 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
controlplane: I0427 07:01:09.002434 3287 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
controlplane: I0427 07:01:09.002438 3287 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
controlplane: I0427 07:01:09.004381 3287 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
controlplane: I0427 07:01:09.004394 3287 manifests.go:99] [control-plane] getting StaticPodSpecs
controlplane: I0427 07:01:09.004543 3287 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
controlplane: I0427 07:01:09.004548 3287 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
controlplane: I0427 07:01:09.004552 3287 manifests.go:125] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
controlplane: I0427 07:01:09.004555 3287 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
controlplane: I0427 07:01:09.004558 3287 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
controlplane: I0427 07:01:09.004562 3287 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
controlplane: I0427 07:01:09.004565 3287 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
controlplane: I0427 07:01:09.004568 3287 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
controlplane: I0427 07:01:09.005135 3287 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
controlplane: I0427 07:01:09.005144 3287 manifests.go:99] [control-plane] getting StaticPodSpecs
controlplane: I0427 07:01:09.005284 3287 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
controlplane: I0427 07:01:09.005595 3287 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
controlplane: W0427 07:01:09.005696 3287 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.1, falling back to the nearest etcd version (3.5.7-0)
controlplane: I0427 07:01:09.023299 3287 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
controlplane: I0427 07:01:09.023444 3287 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
controlplane: I0427 07:01:09.023912 3287 loader.go:373] Config loaded from file: /etc/kubernetes/admin.conf
controlplane: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
controlplane: I0427 07:01:09.025327 3287 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s in 0 milliseconds
controlplane: I0427 07:01:09.526474 3287 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s in 0 milliseconds
controlplane: I0427 07:01:10.026455 3287 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s in 0 milliseconds
controlplane: I0427 07:01:10.526637 3287 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s in 0 milliseconds
controlplane: I0427 07:01:11.026550 3287 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s in 0 milliseconds
controlplane: I0427 07:01:11.526865 3287 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s in 0 milliseconds
controlplane: I0427 07:01:12.027864 3287 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s in 1 milliseconds
controlplane: I0427 07:01:12.527222 3287 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s in 0 milliseconds
controlplane: I0427 07:01:13.026212 3287 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s in 0 milliseconds
controlplane: I0427 07:01:13.526838 3287 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s in 0 milliseconds
controlplane: I0427 07:01:17.541314 3287 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s 500 Internal Server Error in 3509 milliseconds
controlplane: I0427 07:01:18.027277 3287 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s 500 Internal Server Error in 1 milliseconds
controlplane: I0427 07:01:18.528978 3287 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s 500 Internal Server Error in 2 milliseconds
controlplane: I0427 07:01:19.027026 3287 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s 500 Internal Server Error in 1 milliseconds
controlplane: I0427 07:01:19.531897 3287 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s 200 OK in 3 milliseconds
controlplane: [apiclient] All control plane components are healthy after 10.507731 seconds
controlplane: [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
controlplane: I0427 07:01:19.532917 3287 uploadconfig.go:112] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
controlplane: I0427 07:01:19.538221 3287 round_trippers.go:553] POST https://10.20.30.10:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 4 milliseconds
controlplane: I0427 07:01:19.544186 3287 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 4 milliseconds
controlplane: I0427 07:01:19.554183 3287 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 9 milliseconds
controlplane: I0427 07:01:19.554667 3287 uploadconfig.go:126] [upload-config] Uploading the kubelet component config to a ConfigMap
controlplane: [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
controlplane: I0427 07:01:19.564993 3287 round_trippers.go:553] POST https://10.20.30.10:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 9 milliseconds
controlplane: I0427 07:01:19.569858 3287 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 3 milliseconds
controlplane: I0427 07:01:19.575910 3287 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 5 milliseconds
controlplane: I0427 07:01:19.575986 3287 uploadconfig.go:131] [upload-config] Preserving the CRISocket information for the control-plane node
controlplane: I0427 07:01:19.575993 3287 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///var/run/containerd/containerd.sock" to the Node API object "controlplane" as an annotation
controlplane: I0427 07:01:20.086965 3287 round_trippers.go:553] GET https://10.20.30.10:6443/api/v1/nodes/controlplane?timeout=10s 200 OK in 9 milliseconds
controlplane: I0427 07:01:20.096462 3287 round_trippers.go:553] PATCH https://10.20.30.10:6443/api/v1/nodes/controlplane?timeout=10s 200 OK in 6 milliseconds
controlplane: [upload-certs] Skipping phase. Please see --upload-certs
controlplane: [mark-control-plane] Marking the node controlplane as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
controlplane: [mark-control-plane] Marking the node controlplane as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
controlplane: I0427 07:01:20.600271 3287 round_trippers.go:553] GET https://10.20.30.10:6443/api/v1/nodes/controlplane?timeout=10s 200 OK in 2 milliseconds
controlplane: I0427 07:01:20.612999 3287 round_trippers.go:553] PATCH https://10.20.30.10:6443/api/v1/nodes/controlplane?timeout=10s 200 OK in 11 milliseconds
controlplane: [bootstrap-token] Using token: y4ens8.0evn8sunh1m3njlq
controlplane: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
controlplane: I0427 07:01:20.617643 3287 round_trippers.go:553] GET https://10.20.30.10:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-y4ens8?timeout=10s 404 Not Found in 2 milliseconds
controlplane: I0427 07:01:20.624875 3287 round_trippers.go:553] POST https://10.20.30.10:6443/api/v1/namespaces/kube-system/secrets?timeout=10s 201 Created in 5 milliseconds
controlplane: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
controlplane: I0427 07:01:20.629634 3287 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?timeout=10s 201 Created in 4 milliseconds
controlplane: I0427 07:01:20.634585 3287 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 4 milliseconds
controlplane: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
controlplane: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
controlplane: I0427 07:01:20.639941 3287 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 4 milliseconds
controlplane: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
controlplane: I0427 07:01:20.650215 3287 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 10 milliseconds
controlplane: [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
controlplane: I0427 07:01:20.655611 3287 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 5 milliseconds
controlplane: I0427 07:01:20.655683 3287 clusterinfo.go:47] [bootstrap-token] loading admin kubeconfig
controlplane: I0427 07:01:20.656110 3287 loader.go:373] Config loaded from file: /etc/kubernetes/admin.conf
controlplane: I0427 07:01:20.656175 3287 clusterinfo.go:58] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
controlplane: I0427 07:01:20.656416 3287 clusterinfo.go:70] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
controlplane: I0427 07:01:20.662065 3287 round_trippers.go:553] POST https://10.20.30.10:6443/api/v1/namespaces/kube-public/configmaps?timeout=10s 201 Created in 5 milliseconds
controlplane: I0427 07:01:20.663434 3287 clusterinfo.go:84] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
controlplane: I0427 07:01:20.669439 3287 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles?timeout=10s 201 Created in 5 milliseconds
controlplane: I0427 07:01:20.676424 3287 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings?timeout=10s 201 Created in 6 milliseconds
controlplane: [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
controlplane: I0427 07:01:20.677162 3287 kubeletfinalize.go:90] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
controlplane: I0427 07:01:20.677540 3287 loader.go:373] Config loaded from file: /etc/kubernetes/kubelet.conf
controlplane: I0427 07:01:20.678086 3287 kubeletfinalize.go:134] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
controlplane: I0427 07:01:21.110483 3287 round_trippers.go:553] GET https://10.20.30.10:6443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns 200 OK in 5 milliseconds
controlplane: I0427 07:01:21.116550 3287 round_trippers.go:553] GET https://10.20.30.10:6443/api/v1/namespaces/kube-system/configmaps/coredns?timeout=10s 404 Not Found in 4 milliseconds
controlplane: I0427 07:01:21.123058 3287 round_trippers.go:553] POST https://10.20.30.10:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 6 milliseconds
controlplane: I0427 07:01:21.134720 3287 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?timeout=10s 201 Created in 11 milliseconds
controlplane: I0427 07:01:21.140417 3287 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 5 milliseconds
controlplane: I0427 07:01:21.154451 3287 round_trippers.go:553] POST https://10.20.30.10:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s 201 Created in 11 milliseconds
controlplane: I0427 07:01:21.189635 3287 round_trippers.go:553] POST https://10.20.30.10:6443/apis/apps/v1/namespaces/kube-system/deployments?timeout=10s 201 Created in 32 milliseconds
controlplane: I0427 07:01:21.213972 3287 round_trippers.go:553] POST https://10.20.30.10:6443/api/v1/namespaces/kube-system/services?timeout=10s 201 Created in 22 milliseconds
controlplane: [addons] Applied essential addon: CoreDNS
controlplane: I0427 07:01:21.226951 3287 round_trippers.go:553] POST https://10.20.30.10:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 11 milliseconds
controlplane: I0427 07:01:21.246997 3287 round_trippers.go:553] POST https://10.20.30.10:6443/apis/apps/v1/namespaces/kube-system/daemonsets?timeout=10s 201 Created in 19 milliseconds
controlplane: I0427 07:01:21.260439 3287 round_trippers.go:553] POST https://10.20.30.10:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s 201 Created in 11 milliseconds
controlplane: I0427 07:01:21.271868 3287 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 9 milliseconds
controlplane: I0427 07:01:21.279611 3287 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 5 milliseconds
controlplane: I0427 07:01:21.288732 3287 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 8 milliseconds
controlplane: [addons] Applied essential addon: kube-proxy
controlplane: I0427 07:01:21.291806 3287 loader.go:373] Config loaded from file: /etc/kubernetes/admin.conf
controlplane: I0427 07:01:21.292904 3287 loader.go:373] Config loaded from file: /etc/kubernetes/admin.conf
controlplane:
controlplane: Your Kubernetes control-plane has initialized successfully!
controlplane:
controlplane: To start using your cluster, you need to run the following as a regular user:
controlplane:
controlplane: mkdir -p $HOME/.kube
controlplane: sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
controlplane: sudo chown $(id -u):$(id -g) $HOME/.kube/config
controlplane:
controlplane: Alternatively, if you are the root user, you can run:
controlplane:
controlplane: export KUBECONFIG=/etc/kubernetes/admin.conf
controlplane:
controlplane: You should now deploy a pod network to the cluster.
controlplane: Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
controlplane: https://kubernetes.io/docs/concepts/cluster-administration/addons/
controlplane:
controlplane: Then you can join any number of worker nodes by running the following on each as root:
controlplane:
controlplane: kubeadm join 10.20.30.10:6443 --token y4ens8.0evn8sunh1m3njlq \
controlplane: --discovery-token-ca-cert-hash sha256:383006b8615b7279c437a953e7f9aa1be259af49f7af8d99b10fe3ad4f5f8558
controlplane: serviceaccount/kube-proxy-windows created
controlplane: clusterrolebinding.rbac.authorization.k8s.io/node:kube-proxy created
controlplane: clusterrolebinding.rbac.authorization.k8s.io/node:god2 created
controlplane: clusterrolebinding.rbac.authorization.k8s.io/node:god3 created
controlplane: clusterrolebinding.rbac.authorization.k8s.io/node:god4 created
controlplane: Testing controlplane nodes!
controlplane: NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
controlplane: kube-system   etcd-controlplane                      0/1     Running   0          2s
controlplane: kube-system   kube-apiserver-controlplane            0/1     Running   0          1s
controlplane: kube-system   kube-controller-manager-controlplane   0/1     Running   0          1s
controlplane: kube-system   kube-scheduler-controlplane            0/1     Running   0          1s
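
The static pods above show 0/1 Ready only because they were listed a second or two after creation; the control-plane pods go Ready on their own, while CoreDNS stays Pending until the Calico CNI from the next provisioner is up. A quick way to watch them settle once admin.conf is in place (a sketch, not part of the provisioning script):

    export KUBECONFIG=/etc/kubernetes/admin.conf
    kubectl get pods -A --watch
    # or block until the API server reports Ready:
    kubectl wait -n kube-system --for=condition=Ready pod -l component=kube-apiserver --timeout=300s
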
==> controlplane: Running provisioner: shell...
controlplane: Running: C:/Users/mateuszl/AppData/Local/Temp/vagrant-shell20230427-23372-oks0cu.sh
controlplane: running calico installer now with pod_cidr 100.244.0.0/16
controlplane: node/controlplane untainted
controlplane: error: taint "node-role.kubernetes.io/master" not found
controlplane: namespace/calico-system created
controlplane: namespace/tigera-operator created
controlplane: customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
controlplane: customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
controlplane: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
controlplane: customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
controlplane: serviceaccount/tigera-operator created
controlplane: clusterrole.rbac.authorization.k8s.io/tigera-operator created
controlplane: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
controlplane: deployment.apps/tigera-operator created
controlplane: --2023-04-27 07:01:28-- https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/custom-resources.yaml
controlplane: Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.109.133, 185.199.111.133, ...
controlplane: Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
controlplane: HTTP request sent, awaiting response... 200 OK
controlplane: Length: 827 [text/plain]
controlplane: Saving to: 'trigera-custom-resource.yaml'
controlplane:
controlplane: 0K 100% 54.6M=0s
controlplane:
controlplane: 2023-04-27 07:01:29 (54.6 MB/s) - 'trigera-custom-resource.yaml' saved [827/827]
controlplane:
controlplane: installation.operator.tigera.io/default created
controlplane: apiserver.operator.tigera.io/default created
controlplane: installation.operator.tigera.io/default patched
controlplane: waiting 20s for calico pods...
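The "patched" line above points the default Installation at this cluster's pod CIDR (100.244.0.0/16, per the installer line earlier). A minimal sketch of such a patch, assuming the tigera-operator Installation API; the provisioning script's actual patch may differ:

$ kubectl patch installation default --type=merge \
    -p '{"spec":{"calicoNetwork":{"ipPools":[{"cidr":"100.244.0.0/16","encapsulation":"VXLAN"}]}}}'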
controlplane: --2023-04-27 07:01:50-- https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/calico-windows-vxlan.yaml
controlplane: Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.108.133, 185.199.110.133, ...
controlplane: Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.
controlplane: HTTP request sent, awaiting response... 200 OK
controlplane: Length: 4157 (4.1K) [text/plain]
controlplane: Saving to: 'calico-windows.yaml'
controlplane:
controlplane: 0K .... 100% 35.5M=0s
controlplane:
controlplane: 2023-04-27 07:01:51 (35.5 MB/s) - 'calico-windows.yaml' saved [4157/4157]
controlplane:
controlplane: configmap/calico-windows-config created
controlplane: daemonset.apps/calico-node-windows created
controlplane: % Total % Received % Xferd Average Speed Time Time Time Current
controlplane: Dload Upload Total Spent Left Speed
[curl progress meter trimmed: 60.8M downloaded in 40s at ~1550k/s average]
100 60.8M 100 60.8M 0 0 1550k 0 0:00:40 0:00:40 --:--:-- 2262k
controlplane: Successfully set StrictAffinity to: true
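StrictAffinity is a Calico IPAM setting that Windows nodes require: without it, Linux nodes may "borrow" IPs from another node's address block, which the Windows dataplane cannot route. The provisioner sets it automatically here; the documented manual equivalent is:

$ calicoctl ipam configure --strictaffinity=true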
controlplane: NAME READY STATUS RESTARTS AGE
controlplane: calico-kube-controllers-789dc4c76b-2hsm9 0/1 Pending 0 39s
controlplane: calico-node-2j767 0/1 Init:0/2 0 39s
controlplane: calico-typha-7b96c56c5d-xtxlv 1/1 Running 0 40s
controlplane: csi-node-driver-brbj8 0/2 ContainerCreating 0 39s
2023/04/27 07:02:34 run.go:153: [swdt] Setting SSH private key permissions for .vagrant\machines\controlplane\virtualbox\private_key
2023/04/27 07:02:34 cmd.go:142: [swdt] exec: icacls ".vagrant\\machines\\controlplane\\virtualbox\\private_key" "/c" "/t" "/Inheritance:d"
processed file: .vagrant\machines\controlplane\virtualbox\private_key
Successfully processed 1 files; Failed processing 0 files
2023/04/27 07:02:34 cmd.go:142: [swdt] exec: icacls ".vagrant\\machines\\controlplane\\virtualbox\\private_key" "/c" "/t" "/Inheritance:d"
processed file: .vagrant\machines\controlplane\virtualbox\private_key
Successfully processed 1 files; Failed processing 0 files
2023/04/27 07:02:34 cmd.go:142: [swdt] exec: icacls ".vagrant\\machines\\controlplane\\virtualbox\\private_key" "/c" "/t" "/Grant" "MateuszL:F"
processed file: .vagrant\machines\controlplane\virtualbox\private_key
Successfully processed 1 files; Failed processing 0 files
2023/04/27 07:02:34 cmd.go:142: [swdt] exec: icacls ".vagrant\\machines\\controlplane\\virtualbox\\private_key" "/c" "/t" "/Grant:r" "MateuszL:F"
processed file: .vagrant\machines\controlplane\virtualbox\private_key
Successfully processed 1 files; Failed processing 0 files
2023/04/27 07:02:34 cmd.go:142: [swdt] exec: icacls ".vagrant\\machines\\controlplane\\virtualbox\\private_key" "/c" "/t" "/Remove:g" "Administrator" "Authenticated Users" "BUILTIN\\Administrators" "BUILTIN" "Everyone" "System" "Users"
processed file: .vagrant\machines\controlplane\virtualbox\private_key
Successfully processed 1 files; Failed processing 0 files
2023/04/27 07:02:34 cmd.go:142: [swdt] exec: icacls ".vagrant\\machines\\controlplane\\virtualbox\\private_key"
.vagrant\machines\controlplane\virtualbox\private_key CADCORP\MateuszL:(F)
Successfully processed 1 files; Failed processing 0 files
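The icacls sequence above is the Windows counterpart of tightening key-file permissions so the SSH client will accept the private key: inheritance is disabled, the current user gets full control, and every other principal is removed. On a Unix host the same effect is a single command (shown only for comparison):

$ chmod 600 .vagrant/machines/controlplane/virtualbox/private_key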
2023/04/27 07:02:34 cmd.go:142: [swdt] exec: vagrant "status"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
[Vagrantfile] Provisioning winw1 node with Calico: 3.25.1; containerd: 1.7.0
Current machine states:
controlplane running (virtualbox)
winw1 not created (virtualbox)
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
2023/04/27 07:02:48 run.go:57: [swdt] Creating worker Windows node
2023/04/27 07:02:48 run.go:58: [swdt] ##########################################################
2023/04/27 07:02:48 run.go:59: [swdt] Retry vagrant up if the first time the Windows node failed
2023/04/27 07:02:48 run.go:60: [swdt] ##########################################################
2023/04/27 07:02:48 run.go:64: [swdt] vagrant status winw1 - attempt 1 of 10
2023/04/27 07:02:48 cmd.go:142: [swdt] exec: vagrant "status" "winw1"
2023/04/27 07:02:55 run.go:70: [swdt] winw1 not created (virtualbox)
2023/04/27 07:02:55 run.go:74: [swdt] vagrant up winw1 - attempt 1 of 10
2023/04/27 07:02:55 cmd.go:142: [swdt] exec: vagrant "up" "winw1"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
[Vagrantfile] Provisioning winw1 node with Calico: 3.25.1; containerd: 1.7.0
Bringing machine 'winw1' up with 'virtualbox' provider...
==> winw1: Importing base box 'mloskot/sig-windows-dev-tools-windows-2019'...
Progress: 10%
==> winw1: Matching MAC address for NAT networking...
==> winw1: Checking if box 'mloskot/sig-windows-dev-tools-windows-2019' version '1.0' is up to date...
==> winw1: Setting the name of the VM: sig-windows-dev-tools_winw1_1682579038638_68336
==> winw1: Fixed port collision for 22 => 2222. Now on port 2200.
==> winw1: Clearing any previously set network interfaces...
==> winw1: Preparing network interfaces based on configuration...
winw1: Adapter 1: nat
winw1: Adapter 2: hostonly
==> winw1: Forwarding ports...
winw1: 5985 (guest) => 55985 (host) (adapter 1)
winw1: 5986 (guest) => 55986 (host) (adapter 1)
winw1: 22 (guest) => 2200 (host) (adapter 1)
==> winw1: Running 'pre-boot' VM customizations...
==> winw1: Booting VM...
==> winw1: Waiting for machine to boot. This may take a few minutes...
winw1: WinRM address: 127.0.0.1:55985
winw1: WinRM username: vagrant
winw1: WinRM execution_time_limit: PT2H
winw1: WinRM transport: negotiate
==> winw1: Machine booted and ready!
[winw1] GuestAdditions 7.0.8 running --- OK.
==> winw1: Checking for guest additions in VM...
==> winw1: Configuring and enabling network interfaces...
==> winw1: Mounting shared folders...
winw1: C:/forked => D:/_kubernetes/sig-windows-dev-tools/forked
winw1: C:/sync/shared => D:/_kubernetes/sig-windows-dev-tools/sync/shared
winw1: C:/sync/windows => D:/_kubernetes/sig-windows-dev-tools/sync/windows
==> winw1: Running provisioner: shell...
winw1: Running: sync/windows/0-containerd.ps1 as C:\tmp\vagrant-shell.ps1
winw1: Stopping ContainerD & Kubelet
winw1: Downloading Calico using ContainerD - [calico: 3.25] [containerd: 1.7.0]
winw1: Installing 7Zip
winw1: VERBOSE: Using the provider 'PowerShellGet' for searching packages.
winw1: VERBOSE: Using the provider 'NuGet' for searching packages.
winw1: VERBOSE: Total package yield:'0' for the specified package '7Zip4PowerShell'.
winw1: VERBOSE: The -Repository parameter was not specified. PowerShellGet will use all of the registered repositories.
winw1: VERBOSE: Getting the provider object for the PackageManagement Provider 'NuGet'.
winw1: VERBOSE: The specified Location is 'https://www.powershellgallery.com/api/v2' and PackageManagementProvider is 'NuGet'.
winw1: VERBOSE: Searching repository 'https://www.powershellgallery.com/api/v2/FindPackagesById()?id='7Zip4PowerShell'' for
winw1: ''.
winw1: VERBOSE: Total package yield:'1' for the specified package '7Zip4PowerShell'.
winw1: VERBOSE: Performing the operation "Install Package" on target "Package '7Zip4Powershell' version '2.3.0' from
winw1: 'PSGallery'.".
winw1: VERBOSE: The installation scope is specified to be 'CurrentUser'.
winw1: VERBOSE: The specified module will be installed in 'C:\Users\vagrant\Documents\WindowsPowerShell\Modules'.
winw1: VERBOSE: The specified Location is 'NuGet' and PackageManagementProvider is 'NuGet'.
winw1: VERBOSE: Downloading module '7Zip4Powershell' with version '2.3.0' from the repository
winw1: 'https://www.powershellgallery.com/api/v2'.
winw1: VERBOSE: Searching repository 'https://www.powershellgallery.com/api/v2/FindPackagesById()?id='7Zip4Powershell'' for
winw1: ''.
winw1: VERBOSE: InstallPackage' - name='7Zip4Powershell',
winw1: version='2.3.0',destination='C:\Users\vagrant\AppData\Local\Temp\1926023981'
winw1: VERBOSE: DownloadPackage' - name='7Zip4Powershell',
winw1: version='2.3.0',destination='C:\Users\vagrant\AppData\Local\Temp\1926023981\7Zip4Powershell\7Zip4Powershell.nupkg',
winw1: uri='https://www.powershellgallery.com/api/v2/package/7Zip4Powershell/2.3.0'
winw1: VERBOSE: Downloading 'https://www.powershellgallery.com/api/v2/package/7Zip4Powershell/2.3.0'.
winw1: VERBOSE: Completed downloading 'https://www.powershellgallery.com/api/v2/package/7Zip4Powershell/2.3.0'.
winw1: VERBOSE: Completed downloading '7Zip4Powershell'.
winw1: VERBOSE: Hash for package '7Zip4Powershell' does not match hash provided from the server.
winw1: VERBOSE: InstallPackageLocal' - name='7Zip4Powershell',
winw1: version='2.3.0',destination='C:\Users\vagrant\AppData\Local\Temp\1926023981'
winw1: VERBOSE: Catalog file '7Zip4Powershell.cat' is not found in the contents of the module '7Zip4Powershell' being
winw1: installed.
winw1: VERBOSE: Module '7Zip4Powershell' was installed successfully to path
winw1: 'C:\Users\vagrant\Documents\WindowsPowerShell\Modules\7Zip4Powershell\2.3.0'.
winw1:
winw1: Name Version Source Summary
winw1: ---- ------- ------ -------
winw1: 7Zip4Powershell 2.3.0 PSGallery Powershell module for creating and extracting 7-Zip...
winw1: Getting ContainerD binaries
winw1: Downloading https://github.com/containerd/containerd/releases/download/v1.7.0/containerd-1.7.0-windows-amd64.tar.gz to C:\Program Files\containerd\containerd.tar.gz
winw1: x containerd.exe
winw1: x ctr.exe
winw1: x containerd-stress.exe
winw1: x containerd-shim-runhcs-v1.exe
winw1: Registering ContainerD as a service
winw1: time="2023-04-27T00:11:53.783833900-07:00" level=fatal msg="The specified service already exists."
winw1: Starting ContainerD service
winw1: Done - please remember to add '--cri-socket "npipe:////./pipe/containerd-containerd"' to your kubeadm join command
winw1:
winw1:
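The installer's reminder matters for the join step that follows: on Windows, kubelet must reach containerd over a named pipe, so the join command printed by the control plane needs the extra --cri-socket flag. Assembled purely from values already in this log, the full Windows join would be:

$ kubeadm join 10.20.30.10:6443 --token y4ens8.0evn8sunh1m3njlq --discovery-token-ca-cert-hash sha256:383006b8615b7279c437a953e7f9aa1be259af49f7af8d99b10fe3ad4f5f8558 --cri-socket "npipe:////./pipe/containerd-containerd"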
==> winw1: Running provisioner: shell...
winw1: Running: sync/windows/forked.ps1 as C:\tmp\vagrant-shell.ps1
winw1:
winw1:
winw1: Directory: C:\
winw1:
winw1:
winw1: Mode LastWriteTime Length Name
winw1: ---- ------------- ------ ----
winw1: d----- 1/21/2022 3:44 AM k
winw1:
winw1:
==> winw1: Running provisioner: shell...
winw1: Running: sync/shared/kubejoin.ps1 as C:\tmp\vagrant-shell.ps1
winw1: [preflight] Running pre-flight checks
winw1: [preflight] Reading configuration from the cluster...
winw1: [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
winw1: [kubelet-start] Writing kubelet configuration to file "\\var\\lib\\kubelet\\config.yaml"
winw1: [kubelet-start] Writing kubelet environment file with flags to file "\\var\\lib\\kubelet\\kubeadm-flags.env"
winw1: [kubelet-start] Starting the kubelet
winw1: W0427 00:14:55.747182 2352 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "npipe" to the "criSocket" with value "unix:///var/run/unknown.sock". Please update your configuration!
winw1: W0427 00:14:55.763675 2352 utils.go:69] The recommended value for "authentication.x509.clientCAFile" in "KubeletConfiguration" is: \etc\kubernetes\pki\ca.crt; the provided value is: /etc/kubernetes/pki/ca.crt
winw1: [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
winw1:
winw1: This node has joined the cluster:
winw1: * Certificate signing request was sent to apiserver and a response was received.
winw1: * The Kubelet was informed of the new secure connection details.
winw1:
winw1: Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
2023/04/27 07:15:18 run.go:64: [swdt] vagrant status winw1 - attempt 2 of 10
2023/04/27 07:15:18 cmd.go:142: [swdt] exec: vagrant "status" "winw1"
2023/04/27 07:15:26 run.go:70: [swdt] winw1 running (virtualbox)
2023/04/27 07:15:26 run.go:153: [swdt] Setting SSH private key permissions for .vagrant\machines\controlplane\virtualbox\private_key
2023/04/27 07:15:26 cmd.go:142: [swdt] exec: icacls ".vagrant\\machines\\controlplane\\virtualbox\\private_key" "/c" "/t" "/Inheritance:d"
processed file: .vagrant\machines\controlplane\virtualbox\private_key
Successfully processed 1 files; Failed processing 0 files
2023/04/27 07:15:26 cmd.go:142: [swdt] exec: icacls ".vagrant\\machines\\controlplane\\virtualbox\\private_key" "/c" "/t" "/Inheritance:d"
processed file: .vagrant\machines\controlplane\virtualbox\private_key
Successfully processed 1 files; Failed processing 0 files
2023/04/27 07:15:26 cmd.go:142: [swdt] exec: icacls ".vagrant\\machines\\controlplane\\virtualbox\\private_key" "/c" "/t" "/Grant" "MateuszL:F"
processed file: .vagrant\machines\controlplane\virtualbox\private_key
Successfully processed 1 files; Failed processing 0 files
2023/04/27 07:15:26 cmd.go:142: [swdt] exec: icacls ".vagrant\\machines\\controlplane\\virtualbox\\private_key" "/c" "/t" "/Grant:r" "MateuszL:F"
processed file: .vagrant\machines\controlplane\virtualbox\private_key
Successfully processed 1 files; Failed processing 0 files
2023/04/27 07:15:26 cmd.go:142: [swdt] exec: icacls ".vagrant\\machines\\controlplane\\virtualbox\\private_key" "/c" "/t" "/Remove:g" "Administrator" "Authenticated Users" "BUILTIN\\Administrators" "BUILTIN" "Everyone" "System" "Users"
processed file: .vagrant\machines\controlplane\virtualbox\private_key
Successfully processed 1 files; Failed processing 0 files
2023/04/27 07:15:26 cmd.go:142: [swdt] exec: icacls ".vagrant\\machines\\controlplane\\virtualbox\\private_key"
.vagrant\machines\controlplane\virtualbox\private_key CADCORP\MateuszL:(F)
Successfully processed 1 files; Failed processing 0 files
2023/04/27 07:15:26 run.go:84: [swdt] kubectl get nodes | grep winw1 - attempt 1 of 10
2023/04/27 07:15:26 cmd.go:142: [swdt] exec: vagrant "ssh" "controlplane" "-c" "kubectl get nodes"
Connection to 127.0.0.1 closed.
2023/04/27 07:15:38 run.go:89: [swdt] [Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
[Vagrantfile] Provisioning winw1 node with Calico: 3.25.1; containerd: 1.7.0
NAME STATUS ROLES AGE VERSION
controlplane Ready control-plane 14m v1.27.1-10+0e4269487ffcc8
win-8vnbvnmjau2 NotReady <none> 21s v1.27.1-10+0e4269487ffcc8
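NotReady is expected for a freshly joined node: it clears once the CNI comes up, and on Windows that means waiting for the calico-node-windows DaemonSet pod to get through its init containers (the progression is visible in the status runs below). A convenient way to watch it:

$ kubectl get pods -n calico-system -o wide -w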
2023/04/27 07:15:38 run.go:102: [swdt] Creating .lock\joined indicator file for Vagrantfile
2023/04/27 07:15:38 run.go:106: [swdt] Creating .lock\cni indicator file for Vagrantfile
2023/04/27 07:15:38 run.go:109: [swdt] Cluster created
2023/04/27 07:15:38 cmd.go:142: [swdt] exec: vagrant "status"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
Current machine states:
controlplane running (virtualbox)
winw1 running (virtualbox)
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
2023/04/27 07:15:47 cmd.go:142: [swdt] exec: vagrant "ssh" "controlplane" "-c" "kubectl get nodes"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
NAME STATUS ROLES AGE VERSION
controlplane Ready control-plane 14m v1.27.1-10+0e4269487ffcc8
win-8vnbvnmjau2 NotReady <none> 41s v1.27.1-10+0e4269487ffcc8
Connection to 127.0.0.1 closed.
Running dependency: main.Test.Smoke
2023/04/27 07:15:58 main.go:106: [swdt] Target Run finished in 19.88 minutes
2023/04/27 07:15:58 test.go:21: [swdt] kubectl apply -f /var/sync/linux/smoke-test.yaml
2023/04/27 07:15:58 cmd.go:142: [swdt] exec: vagrant "ssh" "controlplane" "-c" "kubectl apply -f /var/sync/linux/smoke-test.yaml"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
deployment.apps/nginx-deployment created
service/nginx created
deployment.apps/whoami-windows created
service/whoami-windows created
pod/netshoot created
Connection to 127.0.0.1 closed.
2023/04/27 07:16:11 test.go:27: [swdt] kubectl scale deployment whoami-windows --replicas 0
2023/04/27 07:16:11 cmd.go:142: [swdt] exec: vagrant "ssh" "controlplane" "-c" "kubectl scale deployment whoami-windows --replicas 0"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
deployment.apps/whoami-windows scaled
Connection to 127.0.0.1 closed.
2023/04/27 07:16:22 test.go:33: [swdt] kubectl scale deployment whoami-windows --replicas 3
2023/04/27 07:16:22 cmd.go:142: [swdt] exec: vagrant "ssh" "controlplane" "-c" "kubectl scale deployment whoami-windows --replicas 3"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
deployment.apps/whoami-windows scaled
Connection to 127.0.0.1 closed.
2023/04/27 07:16:33 cmd.go:142: [swdt] exec: vagrant "ssh" "controlplane" "-c" "kubectl wait --for=condition=Ready=true pod -l 'app=whoami-windows' --timeout=600s"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
pod/whoami-windows-9d46bfd7-8ml98 condition met
pod/whoami-windows-9d46bfd7-hdfqm condition met
pod/whoami-windows-9d46bfd7-tknkd condition met
Connection to 127.0.0.1 closed.
2023/04/27 07:23:43 test.go:43: [swdt] kubectl exec -it netshoot -- curl http://whoami-windows:80/
2023/04/27 07:23:43 cmd.go:142: [swdt] exec: vagrant "ssh" "controlplane" "-c" "kubectl exec -it netshoot -- curl http://whoami-windows:80/"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
curl: (28) Failed to connect to whoami-windows port 80 after 131010 ms: Couldn't connect to server
command terminated with exit code 28
Connection to 127.0.0.1 closed.
Error: running "vagrant ssh controlplane -c kubectl exec -it netshoot -- curl http://whoami-windows:80/" failed with exit code 28
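The smoke test failed even though the kubectl wait just reported the whoami-windows pods Ready: the service path to a newly joined Windows node (kube-proxy rules plus the Calico VXLAN dataplane) can lag behind pod readiness, and the later status runs show the same pods serving normally. A reasonable manual follow-up, reusing commands already in this workflow:

$ vagrant ssh controlplane -c "kubectl get endpoints whoami-windows"
$ vagrant ssh controlplane -c "kubectl exec netshoot -- curl -s http://whoami-windows:80/"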
$ mage status
Running target: Status
Running dependency: startup
2023/04/27 07:19:33 main.go:71: [swdt] Setting environment variable VAGRANT_VARIABLES=variables.local.yaml
Running dependency: checkVagrant
2023/04/27 07:19:33 cmd.go:142: [swdt] exec: vagrant "--version"
2023/04/27 07:19:33 main.go:98: [swdt] Using Vagrant 2.3.4
2023/04/27 07:19:33 status.go:19: [swdt] vagrant status
2023/04/27 07:19:33 cmd.go:142: [swdt] exec: vagrant "status"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
Current machine states:
controlplane running (virtualbox)
winw1 running (virtualbox)
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
2023/04/27 07:19:45 status.go:25: [swdt] kubectl get nodes
2023/04/27 07:19:45 cmd.go:142: [swdt] exec: vagrant "ssh" "controlplane" "-c" "kubectl get nodes"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
NAME STATUS ROLES AGE VERSION
controlplane Ready control-plane 18m v1.27.1-10+0e4269487ffcc8
win-8vnbvnmjau2 NotReady <none> 4m43s v1.27.1-10+0e4269487ffcc8
Connection to 127.0.0.1 closed.
2023/04/27 07:20:00 status.go:31: [swdt] kubectl get pods
2023/04/27 07:20:00 cmd.go:142: [swdt] exec: vagrant "ssh" "controlplane" "-c" "kubectl get --all-namespaces pods --output=wide"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-apiserver calico-apiserver-66fd9ff49b-cn8lb 1/1 Running 0 8m3s 100.244.49.70 controlplane <none> <none>
calico-apiserver calico-apiserver-66fd9ff49b-rhhg8 1/1 Running 0 8m3s 100.244.49.69 controlplane <none> <none>
calico-system calico-kube-controllers-789dc4c76b-2hsm9 1/1 Running 0 18m 100.244.49.66 controlplane <none> <none>
calico-system calico-node-2j767 1/1 Running 0 18m 10.20.30.10 controlplane <none> <none>
calico-system calico-node-windows-zlrj9 0/2 Init:0/1 0 4m54s 10.0.2.15 win-8vnbvnmjau2 <none> <none>
calico-system calico-typha-7b96c56c5d-xtxlv 1/1 Running 0 18m 10.20.30.10 controlplane <none> <none>
calico-system csi-node-driver-brbj8 2/2 Running 0 18m 100.244.49.65 controlplane <none> <none>
default netshoot 0/1 ContainerCreating 0 4m <none> controlplane <none> <none>
default nginx-deployment-7f97bd64fb-sgc49 1/1 Running 0 4m 100.244.49.71 controlplane <none> <none>
default whoami-windows-9d46bfd7-8ml98 0/1 Pending 0 3m38s <none> <none> <none> <none>
default whoami-windows-9d46bfd7-hdfqm 0/1 Pending 0 3m38s <none> <none> <none> <none>
default whoami-windows-9d46bfd7-tknkd 0/1 Pending 0 3m38s <none> <none> <none> <none>
kube-system coredns-5d78c9869d-jspqk 1/1 Running 0 18m 100.244.49.67 controlplane <none> <none>
kube-system coredns-5d78c9869d-khj6n 1/1 Running 0 18m 100.244.49.68 controlplane <none> <none>
kube-system etcd-controlplane 1/1 Running 0 18m 10.20.30.10 controlplane <none> <none>
kube-system kube-apiserver-controlplane 1/1 Running 0 18m 10.20.30.10 controlplane <none> <none>
kube-system kube-controller-manager-controlplane 1/1 Running 0 18m 10.20.30.10 controlplane <none> <none>
kube-system kube-proxy-tfswb 1/1 Running 0 18m 10.20.30.10 controlplane <none> <none>
kube-system kube-scheduler-controlplane 1/1 Running 0 18m 10.20.30.10 controlplane <none> <none>
tigera-operator tigera-operator-549d4f9bdb-skgl6 1/1 Running 0 18m 10.20.30.10 controlplane <none> <none>
Connection to 127.0.0.1 closed.
2023/04/27 07:20:11 main.go:106: [swdt] Target Status finished in 0.63 minutes
$ mage status
Running target: Status
Running dependency: startup
2023/04/27 07:20:54 main.go:71: [swdt] Setting environment variable VAGRANT_VARIABLES=variables.local.yaml
Running dependency: checkVagrant
2023/04/27 07:20:54 cmd.go:142: [swdt] exec: vagrant "--version"
2023/04/27 07:20:54 main.go:98: [swdt] Using Vagrant 2.3.4
2023/04/27 07:20:54 status.go:19: [swdt] vagrant status
2023/04/27 07:20:54 cmd.go:142: [swdt] exec: vagrant "status"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
Current machine states:
controlplane running (virtualbox)
winw1 running (virtualbox)
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
2023/04/27 07:21:03 status.go:25: [swdt] kubectl get nodes
2023/04/27 07:21:03 cmd.go:142: [swdt] exec: vagrant "ssh" "controlplane" "-c" "kubectl get nodes"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
NAME STATUS ROLES AGE VERSION
controlplane Ready control-plane 19m v1.27.1-10+0e4269487ffcc8
win-8vnbvnmjau2 Ready <none> 5m59s v1.27.1-10+0e4269487ffcc8
Connection to 127.0.0.1 closed.
2023/04/27 07:21:15 status.go:31: [swdt] kubectl get pods
2023/04/27 07:21:15 cmd.go:142: [swdt] exec: vagrant "ssh" "controlplane" "-c" "kubectl get --all-namespaces pods --output=wide"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-apiserver calico-apiserver-66fd9ff49b-cn8lb 1/1 Running 0 9m18s 100.244.49.70 controlplane <none> <none>
calico-apiserver calico-apiserver-66fd9ff49b-rhhg8 1/1 Running 0 9m18s 100.244.49.69 controlplane <none> <none>
calico-system calico-kube-controllers-789dc4c76b-2hsm9 1/1 Running 0 19m 100.244.49.66 controlplane <none> <none>
calico-system calico-node-2j767 1/1 Running 0 19m 10.20.30.10 controlplane <none> <none>
calico-system calico-node-windows-zlrj9 1/2 Running 0 6m9s 10.0.2.15 win-8vnbvnmjau2 <none> <none>
calico-system calico-typha-7b96c56c5d-xtxlv 1/1 Running 0 19m 10.20.30.10 controlplane <none> <none>
calico-system csi-node-driver-brbj8 2/2 Running 0 19m 100.244.49.65 controlplane <none> <none>
default netshoot 1/1 Running 0 5m15s 100.244.49.72 controlplane <none> <none>
default nginx-deployment-7f97bd64fb-sgc49 1/1 Running 0 5m15s 100.244.49.71 controlplane <none> <none>
default whoami-windows-9d46bfd7-8ml98 0/1 ContainerCreating 0 4m53s <none> win-8vnbvnmjau2 <none> <none>
default whoami-windows-9d46bfd7-hdfqm 0/1 ContainerCreating 0 4m53s <none> win-8vnbvnmjau2 <none> <none>
default whoami-windows-9d46bfd7-tknkd 0/1 ContainerCreating 0 4m53s <none> win-8vnbvnmjau2 <none> <none>
kube-system coredns-5d78c9869d-jspqk 1/1 Running 0 19m 100.244.49.67 controlplane <none> <none>
kube-system coredns-5d78c9869d-khj6n 1/1 Running 0 19m 100.244.49.68 controlplane <none> <none>
kube-system etcd-controlplane 1/1 Running 0 20m 10.20.30.10 controlplane <none> <none>
kube-system kube-apiserver-controlplane 1/1 Running 0 20m 10.20.30.10 controlplane <none> <none>
kube-system kube-controller-manager-controlplane 1/1 Running 0 20m 10.20.30.10 controlplane <none> <none>
kube-system kube-proxy-tfswb 1/1 Running 0 19m 10.20.30.10 controlplane <none> <none>
kube-system kube-scheduler-controlplane 1/1 Running 0 20m 10.20.30.10 controlplane <none> <none>
tigera-operator tigera-operator-549d4f9bdb-skgl6 1/1 Running 0 19m 10.20.30.10 controlplane <none> <none>
Connection to 127.0.0.1 closed.
2023/04/27 07:21:26 main.go:106: [swdt] Target Status finished in 0.54 minutes
$ mage status
Running target: Status
Running dependency: startup
2023/04/27 07:25:19 main.go:71: [swdt] Setting environment variable VAGRANT_VARIABLES=variables.local.yaml
Running dependency: checkVagrant
2023/04/27 07:25:19 cmd.go:142: [swdt] exec: vagrant "--version"
2023/04/27 07:25:19 main.go:98: [swdt] Using Vagrant 2.3.4
2023/04/27 07:25:19 status.go:19: [swdt] vagrant status
2023/04/27 07:25:19 cmd.go:142: [swdt] exec: vagrant "status"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
Current machine states:
controlplane running (virtualbox)
winw1 running (virtualbox)
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
2023/04/27 07:25:29 status.go:25: [swdt] kubectl get nodes
2023/04/27 07:25:29 cmd.go:142: [swdt] exec: vagrant "ssh" "controlplane" "-c" "kubectl get nodes"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
NAME STATUS ROLES AGE VERSION
controlplane Ready control-plane 24m v1.27.1-10+0e4269487ffcc8
win-8vnbvnmjau2 Ready <none> 10m v1.27.1-10+0e4269487ffcc8
Connection to 127.0.0.1 closed.
2023/04/27 07:25:45 status.go:31: [swdt] kubectl get pods
2023/04/27 07:25:45 cmd.go:142: [swdt] exec: vagrant "ssh" "controlplane" "-c" "kubectl get --all-namespaces pods --output=wide"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-apiserver calico-apiserver-66fd9ff49b-cn8lb 1/1 Running 0 13m 100.244.49.70 controlplane <none> <none>
calico-apiserver calico-apiserver-66fd9ff49b-rhhg8 1/1 Running 0 13m 100.244.49.69 controlplane <none> <none>
calico-system calico-kube-controllers-789dc4c76b-2hsm9 1/1 Running 0 24m 100.244.49.66 controlplane <none> <none>
calico-system calico-node-2j767 1/1 Running 0 24m 10.20.30.10 controlplane <none> <none>
calico-system calico-node-windows-zlrj9 2/2 Running 1 (3m53s ago) 10m 10.20.30.11 win-8vnbvnmjau2 <none> <none>
calico-system calico-typha-7b96c56c5d-xtxlv 1/1 Running 0 24m 10.20.30.10 controlplane <none> <none>
calico-system csi-node-driver-brbj8 2/2 Running 0 24m 100.244.49.65 controlplane <none> <none>
default netshoot 1/1 Running 0 9m53s 100.244.49.72 controlplane <none> <none>
default nginx-deployment-7f97bd64fb-sgc49 1/1 Running 0 9m53s 100.244.49.71 controlplane <none> <none>
default whoami-windows-9d46bfd7-8ml98 1/1 Running 0 9m31s 100.244.46.132 win-8vnbvnmjau2 <none> <none>
default whoami-windows-9d46bfd7-hdfqm 1/1 Running 0 9m31s 100.244.46.131 win-8vnbvnmjau2 <none> <none>
default whoami-windows-9d46bfd7-tknkd 1/1 Running 0 9m31s 100.244.46.133 win-8vnbvnmjau2 <none> <none>
kube-system coredns-5d78c9869d-jspqk 1/1 Running 0 24m 100.244.49.67 controlplane <none> <none>
kube-system coredns-5d78c9869d-khj6n 1/1 Running 0 24m 100.244.49.68 controlplane <none> <none>
kube-system etcd-controlplane 1/1 Running 0 24m 10.20.30.10 controlplane <none> <none>
kube-system kube-apiserver-controlplane 1/1 Running 0 24m 10.20.30.10 controlplane <none> <none>
kube-system kube-controller-manager-controlplane 1/1 Running 0 24m 10.20.30.10 controlplane <none> <none>
kube-system kube-proxy-tfswb 1/1 Running 0 24m 10.20.30.10 controlplane <none> <none>
kube-system kube-scheduler-controlplane 1/1 Running 0 24m 10.20.30.10 controlplane <none> <none>
tigera-operator tigera-operator-549d4f9bdb-skgl6 1/1 Running 0 24m 10.20.30.10 controlplane <none> <none>
Connection to 127.0.0.1 closed.
2023/04/27 07:26:04 main.go:106: [swdt] Target Status finished in 0.75 minutes
$ mage status
Running target: Status
Running dependency: startup
2023/04/27 07:26:44 main.go:71: [swdt] Setting environment variable VAGRANT_VARIABLES=variables.local.yaml
Running dependency: checkVagrant
2023/04/27 07:26:45 cmd.go:142: [swdt] exec: vagrant "--version"
2023/04/27 07:26:45 main.go:98: [swdt] Using Vagrant 2.3.4
2023/04/27 07:26:45 status.go:19: [swdt] vagrant status
2023/04/27 07:26:45 cmd.go:142: [swdt] exec: vagrant "status"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
Current machine states:
controlplane running (virtualbox)
winw1 running (virtualbox)
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
2023/04/27 07:26:54 status.go:25: [swdt] kubectl get nodes
2023/04/27 07:26:54 cmd.go:142: [swdt] exec: vagrant "ssh" "controlplane" "-c" "kubectl get nodes"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
NAME STATUS ROLES AGE VERSION
controlplane Ready control-plane 25m v1.27.1-10+0e4269487ffcc8
win-8vnbvnmjau2 Ready <none> 11m v1.27.1-10+0e4269487ffcc8
Connection to 127.0.0.1 closed.
2023/04/27 07:27:05 status.go:31: [swdt] kubectl get pods
2023/04/27 07:27:05 cmd.go:142: [swdt] exec: vagrant "ssh" "controlplane" "-c" "kubectl get --all-namespaces pods --output=wide"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-apiserver calico-apiserver-66fd9ff49b-cn8lb 1/1 Running 0 15m 100.244.49.70 controlplane <none> <none>
calico-apiserver calico-apiserver-66fd9ff49b-rhhg8 1/1 Running 0 15m 100.244.49.69 controlplane <none> <none>
calico-system calico-kube-controllers-789dc4c76b-2hsm9 1/1 Running 0 25m 100.244.49.66 controlplane <none> <none>
calico-system calico-node-2j767 1/1 Running 0 25m 10.20.30.10 controlplane <none> <none>
calico-system calico-node-windows-zlrj9 2/2 Running 1 (5m5s ago) 11m 10.20.30.11 win-8vnbvnmjau2 <none> <none>
calico-system calico-typha-7b96c56c5d-xtxlv 1/1 Running 0 25m 10.20.30.10 controlplane <none> <none>
calico-system csi-node-driver-brbj8 2/2 Running 0 25m 100.244.49.65 controlplane <none> <none>
default netshoot 1/1 Running 0 11m 100.244.49.72 controlplane <none> <none>
default nginx-deployment-7f97bd64fb-sgc49 1/1 Running 0 11m 100.244.49.71 controlplane <none> <none>
default whoami-windows-9d46bfd7-8ml98 1/1 Running 0 10m 100.244.46.132 win-8vnbvnmjau2 <none> <none>
default whoami-windows-9d46bfd7-hdfqm 1/1 Running 0 10m 100.244.46.131 win-8vnbvnmjau2 <none> <none>
default whoami-windows-9d46bfd7-tknkd 1/1 Running 0 10m 100.244.46.133 win-8vnbvnmjau2 <none> <none>
kube-system coredns-5d78c9869d-jspqk 1/1 Running 0 25m 100.244.49.67 controlplane <none> <none>
kube-system coredns-5d78c9869d-khj6n 1/1 Running 0 25m 100.244.49.68 controlplane <none> <none>
kube-system etcd-controlplane 1/1 Running 0 25m 10.20.30.10 controlplane <none> <none>
kube-system kube-apiserver-controlplane 1/1 Running 0 25m 10.20.30.10 controlplane <none> <none>
kube-system kube-controller-manager-controlplane 1/1 Running 0 25m 10.20.30.10 controlplane <none> <none>
kube-system kube-proxy-tfswb 1/1 Running 0 25m 10.20.30.10 controlplane <none> <none>
kube-system kube-scheduler-controlplane 1/1 Running 0 25m 10.20.30.10 controlplane <none> <none>
tigera-operator tigera-operator-549d4f9bdb-skgl6 1/1 Running 0 25m 10.20.30.10 controlplane <none> <none>
Connection to 127.0.0.1 closed.
2023/04/27 07:27:16 main.go:106: [swdt] Target Status finished in 0.52 minutes
$ mage status
Running target: Status
Running dependency: startup
2023/04/27 07:52:48 main.go:71: [swdt] Setting environment variable VAGRANT_VARIABLES=variables.local.yaml
Running dependency: checkVagrant
2023/04/27 07:52:48 cmd.go:142: [swdt] exec: vagrant "--version"
2023/04/27 07:52:48 main.go:98: [swdt] Using Vagrant 2.3.4
2023/04/27 07:52:48 status.go:19: [swdt] vagrant status
2023/04/27 07:52:48 cmd.go:142: [swdt] exec: vagrant "status"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
Current machine states:
controlplane running (virtualbox)
winw1 running (virtualbox)
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
2023/04/27 07:52:58 status.go:25: [swdt] kubectl get nodes
2023/04/27 07:52:58 cmd.go:142: [swdt] exec: vagrant "ssh" "controlplane" "-c" "kubectl get nodes"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
NAME STATUS ROLES AGE VERSION
controlplane Ready control-plane 51m v1.27.1-10+0e4269487ffcc8
win-8vnbvnmjau2 Ready <none> 37m v1.27.1-10+0e4269487ffcc8
Connection to 127.0.0.1 closed.
2023/04/27 07:53:10 status.go:31: [swdt] kubectl get pods
2023/04/27 07:53:10 cmd.go:142: [swdt] exec: vagrant "ssh" "controlplane" "-c" "kubectl get --all-namespaces pods --output=wide"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-apiserver calico-apiserver-66fd9ff49b-cn8lb 0/1 Running 1 (9s ago) 41m 100.244.49.70 controlplane <none> <none>
calico-apiserver calico-apiserver-66fd9ff49b-rhhg8 0/1 Running 1 (9s ago) 41m 100.244.49.69 controlplane <none> <none>
calico-system calico-kube-controllers-789dc4c76b-2hsm9 0/1 Running 0 51m 100.244.49.66 controlplane <none> <none>
calico-system calico-node-2j767 1/1 Running 0 51m 10.20.30.10 controlplane <none> <none>
calico-system calico-node-windows-zlrj9 2/2 Running 1 (31m ago) 38m 10.20.30.11 win-8vnbvnmjau2 <none> <none>
calico-system calico-typha-7b96c56c5d-xtxlv 1/1 Running 0 51m 10.20.30.10 controlplane <none> <none>
calico-system csi-node-driver-brbj8 2/2 Running 0 51m 100.244.49.65 controlplane <none> <none>
default netshoot 1/1 Running 0 37m 100.244.49.72 controlplane <none> <none>
default nginx-deployment-7f97bd64fb-sgc49 1/1 Running 0 37m 100.244.49.71 controlplane <none> <none>
default whoami-windows-9d46bfd7-8ml98 1/1 Running 0 36m 100.244.46.132 win-8vnbvnmjau2 <none> <none>
default whoami-windows-9d46bfd7-hdfqm 1/1 Running 0 36m 100.244.46.131 win-8vnbvnmjau2 <none> <none>
default whoami-windows-9d46bfd7-tknkd 1/1 Running 0 36m 100.244.46.133 win-8vnbvnmjau2 <none> <none>
kube-system coredns-5d78c9869d-jspqk 1/1 Running 0 51m 100.244.49.67 controlplane <none> <none>
kube-system coredns-5d78c9869d-khj6n 1/1 Running 0 51m 100.244.49.68 controlplane <none> <none>
kube-system etcd-controlplane 1/1 Running 0 51m 10.20.30.10 controlplane <none> <none>
kube-system kube-apiserver-controlplane 1/1 Running 0 51m 10.20.30.10 controlplane <none> <none>
kube-system kube-controller-manager-controlplane 0/1 Running 1 (42s ago) 51m 10.20.30.10 controlplane <none> <none>
kube-system kube-proxy-tfswb 1/1 Running 0 51m 10.20.30.10 controlplane <none> <none>
kube-system kube-scheduler-controlplane 0/1 Running 1 (41s ago) 51m 10.20.30.10 controlplane <none> <none>
tigera-operator tigera-operator-549d4f9bdb-skgl6 1/1 Running 1 (12s ago) 51m 10.20.30.10 controlplane <none> <none>
Connection to 127.0.0.1 closed.
2023/04/27 07:53:21 main.go:106: [swdt] Target Status finished in 0.55 minutes
$ mage status
Running target: Status
Running dependency: startup
2023/04/27 07:56:34 main.go:71: [swdt] Setting environment variable VAGRANT_VARIABLES=variables.local.yaml
Running dependency: checkVagrant
2023/04/27 07:56:34 cmd.go:142: [swdt] exec: vagrant "--version"
2023/04/27 07:56:35 main.go:98: [swdt] Using Vagrant 2.3.4
2023/04/27 07:56:35 status.go:19: [swdt] vagrant status
2023/04/27 07:56:35 cmd.go:142: [swdt] exec: vagrant "status"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
Current machine states:
controlplane running (virtualbox)
winw1 running (virtualbox)
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
2023/04/27 07:56:44 status.go:25: [swdt] kubectl get nodes
2023/04/27 07:56:44 cmd.go:142: [swdt] exec: vagrant "ssh" "controlplane" "-c" "kubectl get nodes"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
NAME STATUS ROLES AGE VERSION
controlplane NotReady control-plane 61m v1.27.1-10+0e4269487ffcc8
win-8vnbvnmjau2 Ready <none> 47m v1.27.1-10+0e4269487ffcc8
Connection to 127.0.0.1 closed.
2023/04/27 08:02:43 status.go:31: [swdt] kubectl get pods
2023/04/27 08:02:43 cmd.go:142: [swdt] exec: vagrant "ssh" "controlplane" "-c" "kubectl get --all-namespaces pods --output=wide"
[Vagrantfile] Loading settings from variables.local.yaml
[Vagrantfile] Using Kubernetes version: 1.27
[Vagrantfile] Using Kubernetes CNI: calico
[Vagrantfile] Provisioning controlplane with Calico: 3.25.1
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-apiserver calico-apiserver-66fd9ff49b-cn8lb 0/1 Running 2 (9s ago) 50m 100.244.49.70 controlplane <none> <none>
calico-apiserver calico-apiserver-66fd9ff49b-rhhg8 1/1 Running 1 (9m43s ago) 50m 100.244.49.69 controlplane <none> <none>
calico-system calico-kube-controllers-789dc4c76b-2hsm9 1/1 Running 0 61m 100.244.49.66 controlplane <none> <none>
calico-system calico-node-2j767 1/1 Running 1 (8s ago) 61m 10.20.30.10 controlplane <none> <none>
calico-system calico-node-windows-zlrj9 2/2 Running 1 (40m ago) 47m 10.20.30.11 win-8vnbvnmjau2 <none> <none>
calico-system calico-typha-7b96c56c5d-xtxlv 1/1 Running 0 61m 10.20.30.10 controlplane <none> <none>
calico-system csi-node-driver-brbj8 2/2 Running 0 61m 100.244.49.65 controlplane <none> <none>
default netshoot 1/1 Running 0 46m 100.244.49.72 controlplane <none> <none>
default nginx-deployment-7f97bd64fb-sgc49 1/1 Running 0 46m 100.244.49.71 controlplane <none> <none>
default whoami-windows-9d46bfd7-8ml98 1/1 Running 0 46m 100.244.46.132 win-8vnbvnmjau2 <none> <none>
default whoami-windows-9d46bfd7-hdfqm 1/1 Running 0 46m 100.244.46.131 win-8vnbvnmjau2 <none> <none>
default whoami-windows-9d46bfd7-tknkd 1/1 Running 0 46m 100.244.46.133 win-8vnbvnmjau2 <none> <none>
kube-system coredns-5d78c9869d-jspqk 1/1 Running 0 61m 100.244.49.67 controlplane <none> <none>
kube-system coredns-5d78c9869d-khj6n 1/1 Running 0 61m 100.244.49.68 controlplane <none> <none>
kube-system etcd-controlplane 1/1 Running 0 61m 10.20.30.10 controlplane <none> <none>
kube-system kube-apiserver-controlplane 1/1 Running 0 61m 10.20.30.10 controlplane <none> <none>
kube-system kube-controller-manager-controlplane 1/1 Running 2 (7m53s ago) 61m 10.20.30.10 controlplane <none> <none>
kube-system kube-proxy-tfswb 1/1 Running 0 61m 10.20.30.10 controlplane <none> <none>
kube-system kube-scheduler-controlplane 0/1 CrashLoopBackOff 2 (12s ago) 61m 10.20.30.10 controlplane <none> <none>
tigera-operator tigera-operator-549d4f9bdb-skgl6 1/1 Running 2 (7m53s ago) 61m 10.20.30.10 controlplane <none> <none>
Connection to 127.0.0.1 closed.
2023/04/27 08:02:54 main.go:106: [swdt] Target Status finished in 6.33 minutes
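At this point kube-scheduler is in CrashLoopBackOff and the control-plane node briefly went NotReady. The first stops for diagnosis would be the crashed container's previous logs and the node conditions, both standard kubectl:

$ kubectl -n kube-system logs kube-scheduler-controlplane --previous
$ kubectl describe node controlplane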
$ vagrant ssh controlplane
[Vagrantfile] Loading default settings from variables.yaml
Last login: Thu Apr 27 08:02:53 2023 from 10.0.2.2
vagrant@controlplane:~$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
controlplane Ready control-plane 127m v1.27.1-10+0e4269487ffcc8 10.20.30.10 <none> Ubuntu 22.04.2 LTS 5.15.0-69-generic containerd://1.6.20
win-8vnbvnmjau2 Ready <none> 113m v1.27.1-10+0e4269487ffcc8 10.20.30.11 <none> Windows Server 2019 Standard Evaluation 10.0.17763.2452 containerd://1.7.0
vagrant@controlplane:~$ kubectl get nodes -o wide
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
vagrant@controlplane:~$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
controlplane NotReady control-plane 134m v1.27.1-10+0e4269487ffcc8 10.20.30.10 <none> Ubuntu 22.04.2 LTS 5.15.0-69-generic containerd://1.6.20
win-8vnbvnmjau2 Ready <none> 120m v1.27.1-10+0e4269487ffcc8 10.20.30.11 <none> Windows Server 2019 Standard Evaluation 10.0.17763.2452 containerd://1.7.0
vagrant@controlplane:~$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
controlplane Ready control-plane 134m v1.27.1-10+0e4269487ffcc8 10.20.30.10 <none> Ubuntu 22.04.2 LTS 5.15.0-69-generic containerd://1.6.20
win-8vnbvnmjau2 Ready <none> 120m v1.27.1-10+0e4269487ffcc8 10.20.30.11 <none> Windows Server 2019 Standard Evaluation 10.0.17763.2452 containerd://1.7.0
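The control plane flapped NotReady and recovered on its own, with several static pods restarting around the same time; that pattern usually points at resource pressure on the VirtualBox VM rather than a configuration problem. Worth checking from inside the guest (and, if memory is tight, raising the VM allocation in the local variables file):

$ vagrant ssh controlplane -c "free -m; uptime"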