@mloskot
Last active April 29, 2023 13:12
Test run of https://github.com/kubernetes-sigs/sig-windows-dev-tools/pull/266 with the current default settings.yaml in the master-windows-native branch.
$▶ mage | tee.exe x.log
Running dependency: main.Config.Settings
Running dependency: Fetch
[swdt-mage] --- Begin of configuration from settings.yaml ------------
calico:
  calico_version: 3.25.0
kubernetes:
  kubernetes_build_from_source: false
  kubernetes_version: "1.26"
network:
  cni: calico
  linux_node_ip: 10.20.30.10
  windows_node_ip: 10.20.30.11
  pod_cidr: 100.244.0.0/16
vagrant:
  vagrant_linux_box: mloskot/sig-windows-dev-tools-ubuntu-2204
  vagrant_linux_box_version: "1.0"
  vagrant_linux_cpus: 2
  vagrant_linux_ram: 4096
  vagrant_windows_box: mloskot/sig-windows-dev-tools-windows-2019
  vagrant_windows_box_version: "1.0"
  vagrant_windows_cpus: 2
  vagrant_windows_ram: 6048
  vagrant_windows_max_provision_attempts: 10
  vagrant_vbguest_auto_update: false
[swdt-mage] --- End of configuration from settings.yaml ------------
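The keys in the dump above come from the repository's settings.yaml and can presumably be adjusted before the run. As a hypothetical tweak (assuming the flag does what its name suggests), building Kubernetes from source instead of fetching CI binaries would look like:

```yaml
kubernetes:
  kubernetes_build_from_source: true
  kubernetes_version: "1.26"
```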
[swdt-mage] Target Settings finished in 0.00 minutes
[swdt-mage] Downloading manifest https://storage.googleapis.com/k8s-release-dev/ci/latest-1.26.txt
[swdt-mage] Downloading binaries of Kubernetes v1.26.4-15+1f16a36a2abdae
[swdt-mage] Downloading sync\linux\bin\kubeadm from https://storage.googleapis.com/k8s-release-dev/ci/v1.26.4-15+1f16a36a2abdae/bin/linux/amd64/kubeadm
[swdt-mage] Downloading sync\linux\bin\kubectl from https://storage.googleapis.com/k8s-release-dev/ci/v1.26.4-15+1f16a36a2abdae/bin/linux/amd64/kubectl
[swdt-mage] Downloading sync\linux\bin\kubelet from https://storage.googleapis.com/k8s-release-dev/ci/v1.26.4-15+1f16a36a2abdae/bin/linux/amd64/kubelet
[swdt-mage] Downloading sync\windows\bin\kubeadm.exe from https://storage.googleapis.com/k8s-release-dev/ci/v1.26.4-15+1f16a36a2abdae/bin/windows/amd64/kubeadm.exe
[swdt-mage] Downloading sync\windows\bin\kubelet.exe from https://storage.googleapis.com/k8s-release-dev/ci/v1.26.4-15+1f16a36a2abdae/bin/windows/amd64/kubelet.exe
[swdt-mage] Downloading sync\windows\bin\kube-proxy.exe from https://storage.googleapis.com/k8s-release-dev/ci/v1.26.4-15+1f16a36a2abdae/bin/windows/amd64/kube-proxy.exe
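The download URLs above follow a fixed pattern: the exact version is read from the latest-1.26.txt manifest, then spliced into per-OS, per-binary paths. A minimal sketch of that mapping (pure string construction, no network; the base URL and binary names are taken from the log above):

```python
# Sketch: build the per-platform binary URLs that the fetch step downloads,
# given a version string as returned by the latest-1.26.txt manifest.
BASE = "https://storage.googleapis.com/k8s-release-dev/ci"

def binary_urls(version: str) -> dict:
    """Map (os, binary) pairs to their amd64 download URLs."""
    wanted = {
        "linux": ["kubeadm", "kubectl", "kubelet"],
        "windows": ["kubeadm.exe", "kubelet.exe", "kube-proxy.exe"],
    }
    return {
        (osname, binary): f"{BASE}/{version}/bin/{osname}/amd64/{binary}"
        for osname, binaries in wanted.items()
        for binary in binaries
    }

urls = binary_urls("v1.26.4-15+1f16a36a2abdae")
print(urls[("windows", "kube-proxy.exe")])
# → https://storage.googleapis.com/k8s-release-dev/ci/v1.26.4-15+1f16a36a2abdae/bin/windows/amd64/kube-proxy.exe
```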
Running dependency: Run
Running dependency: checkClusterNotExist
Running dependency: main.Config.Vagrant
[swdt-mage] Target Fetch finished in 3.60 minutes
[swdt-mage] exec: vagrant "--version"
[swdt-mage] Using Vagrant 2.3.4
[swdt-mage] exec: vagrant "validate"
[Vagrantfile] Loading default settings from settings.yaml
Vagrantfile validated successfully.
[swdt-mage] Target Config finished in 3.73 minutes
[swdt-mage] Creating .lock directory
[swdt-mage] Creating control plane Linux node
[swdt-mage] exec: vagrant "up" "controlplane"
[Vagrantfile] Loading default settings from settings.yaml
Bringing machine 'controlplane' up with 'virtualbox' provider...
==> controlplane: Importing base box 'mloskot/sig-windows-dev-tools-ubuntu-2204'...
Progress: 20%
Progress: 40%
Progress: 90%
==> controlplane: Matching MAC address for NAT networking...
==> controlplane: Checking if box 'mloskot/sig-windows-dev-tools-ubuntu-2204' version '1.0' is up to date...
==> controlplane: Setting the name of the VM: sig-windows-dev-tools-2_controlplane_1682769392683_22883
==> controlplane: Clearing any previously set network interfaces...
==> controlplane: Preparing network interfaces based on configuration...
controlplane: Adapter 1: nat
controlplane: Adapter 2: hostonly
==> controlplane: Forwarding ports...
controlplane: 22 (guest) => 2222 (host) (adapter 1)
==> controlplane: Running 'pre-boot' VM customizations...
==> controlplane: Booting VM...
==> controlplane: Waiting for machine to boot. This may take a few minutes...
controlplane: SSH address: 127.0.0.1:2222
controlplane: SSH username: vagrant
controlplane: SSH auth method: private key
controlplane:
controlplane: Vagrant insecure key detected. Vagrant will automatically replace
controlplane: this with a newly generated keypair for better security.
controlplane:
controlplane: Inserting generated public key within guest...
==> controlplane: Machine booted and ready!
==> controlplane: Checking for guest additions in VM...
==> controlplane: Setting hostname...
==> controlplane: Configuring and enabling network interfaces...
==> controlplane: Mounting shared folders...
controlplane: /var/sync/linux => D:/_kubernetes/sig-windows-dev-tools-2/sync/linux
controlplane: /var/sync/forked => D:/_kubernetes/sig-windows-dev-tools-2/forked
controlplane: /var/sync/shared => D:/_kubernetes/sig-windows-dev-tools-2/sync/shared
==> controlplane: Running provisioner: shell...
controlplane: Running: C:/Users/mateuszl/AppData/Local/Temp/vagrant-shell20230429-26060-3tuzhp.sh
controlplane: ARGS: 1.26 10.20.30.10 100.244.0.0/16
controlplane: Using 1.26 as the Kubernetes version
controlplane: Setting up internet connectivity to /etc/resolv.conf
controlplane: nameserver 8.8.8.8
controlplane: nameserver 1.1.1.1
controlplane: now curling to add keys...
controlplane: Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
controlplane: OK
controlplane: deb https://apt.kubernetes.io/ kubernetes-xenial main
controlplane: SWDT: Running apt get update -y
controlplane: Hit:1 https://mirrors.edge.kernel.org/ubuntu jammy InRelease
controlplane: Get:3 https://mirrors.edge.kernel.org/ubuntu jammy-updates InRelease [119 kB]
controlplane: Get:2 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8,993 B]
controlplane: Get:4 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages [65.7 kB]
controlplane: Get:5 https://mirrors.edge.kernel.org/ubuntu jammy-backports InRelease [108 kB]
controlplane: Get:6 https://mirrors.edge.kernel.org/ubuntu jammy-security InRelease [110 kB]
controlplane: Get:7 https://mirrors.edge.kernel.org/ubuntu jammy-updates/main amd64 Packages [1,069 kB]
controlplane: Get:8 https://mirrors.edge.kernel.org/ubuntu jammy-updates/main Translation-en [221 kB]
controlplane: Get:9 https://mirrors.edge.kernel.org/ubuntu jammy-updates/main amd64 c-n-f Metadata [14.3 kB]
controlplane: Get:10 https://mirrors.edge.kernel.org/ubuntu jammy-updates/universe amd64 Packages [912 kB]
controlplane: Get:11 https://mirrors.edge.kernel.org/ubuntu jammy-updates/universe Translation-en [183 kB]
controlplane: Get:12 https://mirrors.edge.kernel.org/ubuntu jammy-updates/universe amd64 c-n-f Metadata [18.6 kB]
controlplane: Get:13 https://mirrors.edge.kernel.org/ubuntu jammy-security/main amd64 Packages [800 kB]
controlplane: Get:14 https://mirrors.edge.kernel.org/ubuntu jammy-security/main Translation-en [156 kB]
controlplane: Get:15 https://mirrors.edge.kernel.org/ubuntu jammy-security/main amd64 c-n-f Metadata [9,144 B]
controlplane: Get:16 https://mirrors.edge.kernel.org/ubuntu jammy-security/restricted amd64 Packages [830 kB]
controlplane: Get:17 https://mirrors.edge.kernel.org/ubuntu jammy-security/restricted Translation-en [131 kB]
controlplane: Get:18 https://mirrors.edge.kernel.org/ubuntu jammy-security/universe amd64 Packages [729 kB]
controlplane: Get:19 https://mirrors.edge.kernel.org/ubuntu jammy-security/universe Translation-en [121 kB]
controlplane: Get:20 https://mirrors.edge.kernel.org/ubuntu jammy-security/universe amd64 c-n-f Metadata [14.2 kB]
controlplane: Fetched 5,619 kB in 14s (396 kB/s)
controlplane: Reading package lists...
controlplane: W: https://apt.kubernetes.io/dists/kubernetes-xenial/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
controlplane: overlay
controlplane: br_netfilter
controlplane: SWDT: Running modprobes
controlplane: net.bridge.bridge-nf-call-iptables = 1
controlplane: net.ipv4.ip_forward = 1
controlplane: net.bridge.bridge-nf-call-ip6tables = 1
controlplane: * Applying /etc/sysctl.d/10-console-messages.conf ...
controlplane: kernel.printk = 4 4 1 7
controlplane: * Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
controlplane: net.ipv6.conf.all.use_tempaddr = 2
controlplane: net.ipv6.conf.default.use_tempaddr = 2
controlplane: * Applying /etc/sysctl.d/10-kernel-hardening.conf ...
controlplane: kernel.kptr_restrict = 1
controlplane: * Applying /etc/sysctl.d/10-magic-sysrq.conf ...
controlplane: kernel.sysrq = 176
controlplane: * Applying /etc/sysctl.d/10-network-security.conf ...
controlplane: net.ipv4.conf.default.rp_filter = 2
controlplane: net.ipv4.conf.all.rp_filter = 2
controlplane: * Applying /etc/sysctl.d/10-ptrace.conf ...
controlplane: kernel.yama.ptrace_scope = 1
controlplane: * Applying /etc/sysctl.d/10-zeropage.conf ...
controlplane: vm.mmap_min_addr = 65536
controlplane: * Applying /usr/lib/sysctl.d/50-default.conf ...
controlplane: kernel.core_uses_pid = 1
controlplane: net.ipv4.conf.default.rp_filter = 2
controlplane: net.ipv4.conf.default.accept_source_route = 0
controlplane: sysctl: setting key "net.ipv4.conf.all.accept_source_route": Invalid argument
controlplane: net.ipv4.conf.default.promote_secondaries = 1
controlplane: sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
controlplane: net.ipv4.ping_group_range = 0 2147483647
controlplane: net.core.default_qdisc = fq_codel
controlplane: fs.protected_hardlinks = 1
controlplane: fs.protected_symlinks = 1
controlplane: fs.protected_regular = 1
controlplane: fs.protected_fifos = 1
controlplane: * Applying /usr/lib/sysctl.d/50-pid-max.conf ...
controlplane: kernel.pid_max = 4194304
controlplane: * Applying /etc/sysctl.d/99-kubernetes-cri.conf ...
controlplane: net.bridge.bridge-nf-call-iptables = 1
controlplane: net.ipv4.ip_forward = 1
controlplane: net.bridge.bridge-nf-call-ip6tables = 1
controlplane: * Applying /usr/lib/sysctl.d/99-protect-links.conf ...
controlplane: fs.protected_fifos = 1
controlplane: fs.protected_hardlinks = 1
controlplane: fs.protected_regular = 2
controlplane: fs.protected_symlinks = 1
controlplane: * Applying /etc/sysctl.d/99-sysctl.conf ...
controlplane: net.ipv6.conf.all.disable_ipv6 = 1
controlplane: * Applying /etc/sysctl.conf ...
controlplane: net.ipv6.conf.all.disable_ipv6 = 1
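The three values applied from /etc/sysctl.d/99-kubernetes-cri.conf above (also echoed at the start of the modprobe step) correspond to the file the provisioner writes; assuming it matches the standard Kubernetes container-runtime prerequisites, its contents would be:

```
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
```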
controlplane: SWDT installing kubelet, kubeadm, kubectl will overwrite them later as needed...
controlplane: Reading package lists...
controlplane: Building dependency tree...
controlplane: Reading state information...
controlplane: The following additional packages will be installed:
controlplane: conntrack cri-tools ebtables kubernetes-cni socat
controlplane: The following NEW packages will be installed:
controlplane: conntrack cri-tools ebtables kubeadm kubectl kubelet kubernetes-cni socat
controlplane: 0 upgraded, 8 newly installed, 0 to remove and 41 not upgraded.
controlplane: Need to get 85.9 MB of archives.
controlplane: After this operation, 328 MB of additional disk space will be used.
controlplane: Get:3 https://mirrors.edge.kernel.org/ubuntu jammy/main amd64 conntrack amd64 1:1.4.6-2build2 [33.5 kB]
controlplane: Get:7 https://mirrors.edge.kernel.org/ubuntu jammy/main amd64 ebtables amd64 2.0.11-4build2 [84.9 kB]
controlplane: Get:8 https://mirrors.edge.kernel.org/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB]
controlplane: Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 cri-tools amd64 1.26.0-00 [18.9 MB]
controlplane: Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 1.2.0-00 [27.6 MB]
controlplane: Get:4 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.27.1-00 [18.7 MB]
controlplane: Get:5 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.27.1-00 [10.2 MB]
controlplane: Get:6 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.27.1-00 [9,928 kB]
controlplane: dpkg-preconfigure: unable to re-open stdin: No such file or directory
controlplane: Fetched 85.9 MB in 37s (2,302 kB/s)
controlplane: Selecting previously unselected package conntrack.
controlplane: (Reading database ... 80676 files and directories currently installed.)
controlplane: Preparing to unpack .../0-conntrack_1%3a1.4.6-2build2_amd64.deb ...
controlplane: Unpacking conntrack (1:1.4.6-2build2) ...
controlplane: Selecting previously unselected package cri-tools.
controlplane: Preparing to unpack .../1-cri-tools_1.26.0-00_amd64.deb ...
controlplane: Unpacking cri-tools (1.26.0-00) ...
controlplane: Selecting previously unselected package ebtables.
controlplane: Preparing to unpack .../2-ebtables_2.0.11-4build2_amd64.deb ...
controlplane: Unpacking ebtables (2.0.11-4build2) ...
controlplane: Selecting previously unselected package kubernetes-cni.
controlplane: Preparing to unpack .../3-kubernetes-cni_1.2.0-00_amd64.deb ...
controlplane: Unpacking kubernetes-cni (1.2.0-00) ...
controlplane: Selecting previously unselected package socat.
controlplane: Preparing to unpack .../4-socat_1.7.4.1-3ubuntu4_amd64.deb ...
controlplane: Unpacking socat (1.7.4.1-3ubuntu4) ...
controlplane: Selecting previously unselected package kubelet.
controlplane: Preparing to unpack .../5-kubelet_1.27.1-00_amd64.deb ...
controlplane: Unpacking kubelet (1.27.1-00) ...
controlplane: Selecting previously unselected package kubectl.
controlplane: Preparing to unpack .../6-kubectl_1.27.1-00_amd64.deb ...
controlplane: Unpacking kubectl (1.27.1-00) ...
controlplane: Selecting previously unselected package kubeadm.
controlplane: Preparing to unpack .../7-kubeadm_1.27.1-00_amd64.deb ...
controlplane: Unpacking kubeadm (1.27.1-00) ...
controlplane: Setting up conntrack (1:1.4.6-2build2) ...
controlplane: Setting up kubectl (1.27.1-00) ...
controlplane: Setting up ebtables (2.0.11-4build2) ...
controlplane: Setting up socat (1.7.4.1-3ubuntu4) ...
controlplane: Setting up cri-tools (1.26.0-00) ...
controlplane: Setting up kubernetes-cni (1.2.0-00) ...
controlplane: Setting up kubelet (1.27.1-00) ...
controlplane: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
controlplane: Setting up kubeadm (1.27.1-00) ...
controlplane: Processing triggers for man-db (2.10.2-1) ...
controlplane:
controlplane: Running kernel seems to be up-to-date.
controlplane:
controlplane: No services need to be restarted.
controlplane:
controlplane: No containers need to be restarted.
controlplane:
controlplane: No user sessions are running outdated binaries.
controlplane:
controlplane: No VM guests are running outdated hypervisor (qemu) binaries on this host.
controlplane: kubelet set on hold.
controlplane: kubeadm set on hold.
controlplane: kubectl set on hold.
controlplane: Configuring Containerd
controlplane: Reading package lists...
controlplane: Building dependency tree...
controlplane: Reading state information...
controlplane: lsb-release is already the newest version (11.1.0ubuntu4).
controlplane: ca-certificates is already the newest version (20211016ubuntu0.22.04.1).
controlplane: ca-certificates set to manually installed.
controlplane: gnupg is already the newest version (2.2.27-3ubuntu2.1).
controlplane: 0 upgraded, 0 newly installed, 0 to remove and 41 not upgraded.
controlplane: Hit:1 https://mirrors.edge.kernel.org/ubuntu jammy InRelease
controlplane: Hit:3 https://mirrors.edge.kernel.org/ubuntu jammy-updates InRelease
controlplane: Hit:4 https://mirrors.edge.kernel.org/ubuntu jammy-backports InRelease
controlplane: Hit:5 https://mirrors.edge.kernel.org/ubuntu jammy-security InRelease
controlplane: Get:6 https://download.docker.com/linux/ubuntu jammy InRelease [48.9 kB]
controlplane: Get:7 https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages [16.7 kB]
controlplane: Get:2 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8,993 B]
controlplane: Fetched 74.6 kB in 2s (46.7 kB/s)
controlplane: Reading package lists...
controlplane: W: https://apt.kubernetes.io/dists/kubernetes-xenial/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
controlplane: Reading package lists...
controlplane: Building dependency tree...
controlplane: Reading state information...
controlplane: The following NEW packages will be installed:
controlplane: containerd.io
controlplane: 0 upgraded, 1 newly installed, 0 to remove and 41 not upgraded.
controlplane: Need to get 28.3 MB of archives.
controlplane: After this operation, 116 MB of additional disk space will be used.
controlplane: Get:1 https://download.docker.com/linux/ubuntu jammy/stable amd64 containerd.io amd64 1.6.20-1 [28.3 MB]
controlplane: dpkg-preconfigure: unable to re-open stdin: No such file or directory
controlplane: Fetched 28.3 MB in 12s (2,444 kB/s)
controlplane: Selecting previously unselected package containerd.io.
controlplane: (Reading database ... 80770 files and directories currently installed.)
controlplane: Preparing to unpack .../containerd.io_1.6.20-1_amd64.deb ...
controlplane: Unpacking containerd.io (1.6.20-1) ...
controlplane: Setting up containerd.io (1.6.20-1) ...
controlplane: Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /lib/systemd/system/containerd.service.
controlplane: Processing triggers for man-db (2.10.2-1) ...
controlplane:
controlplane: Running kernel seems to be up-to-date.
controlplane:
controlplane: No services need to be restarted.
controlplane:
controlplane: No containers need to be restarted.
controlplane:
controlplane: No user sessions are running outdated binaries.
controlplane:
controlplane: No VM guests are running outdated hypervisor (qemu) binaries on this host.
controlplane: copying /var/sync/linux/bin/kubeadm to node path..
controlplane: copying /var/sync/linux/bin/kubectl to node path..
controlplane: copying /var/sync/linux/bin/kubelet to node path..
controlplane: disabled_plugins = []
controlplane: imports = []
controlplane: oom_score = 0
controlplane: plugin_dir = ""
controlplane: required_plugins = []
controlplane: root = "/var/lib/containerd"
controlplane: state = "/run/containerd"
controlplane: temp = ""
controlplane: version = 2
controlplane:
controlplane: [cgroup]
controlplane: path = ""
controlplane:
controlplane: [debug]
controlplane: address = ""
controlplane: format = ""
controlplane: gid = 0
controlplane: level = ""
controlplane: uid = 0
controlplane:
controlplane: [grpc]
controlplane: address = "/run/containerd/containerd.sock"
controlplane: gid = 0
controlplane: max_recv_message_size = 16777216
controlplane: max_send_message_size = 16777216
controlplane: tcp_address = ""
controlplane: tcp_tls_ca = ""
controlplane: tcp_tls_cert = ""
controlplane: tcp_tls_key = ""
controlplane: uid = 0
controlplane:
controlplane: [metrics]
controlplane: address = ""
controlplane: grpc_histogram = false
controlplane:
controlplane: [plugins]
controlplane:
controlplane: [plugins."io.containerd.gc.v1.scheduler"]
controlplane: deletion_threshold = 0
controlplane: mutation_threshold = 100
controlplane: pause_threshold = 0.02
controlplane: schedule_delay = "0s"
controlplane: startup_delay = "100ms"
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri"]
controlplane: device_ownership_from_security_context = false
controlplane: disable_apparmor = false
controlplane: disable_cgroup = false
controlplane: disable_hugetlb_controller = true
controlplane: disable_proc_mount = false
controlplane: disable_tcp_service = true
controlplane: enable_selinux = false
controlplane: enable_tls_streaming = false
controlplane: enable_unprivileged_icmp = false
controlplane: enable_unprivileged_ports = false
controlplane: ignore_image_defined_volumes = false
controlplane: max_concurrent_downloads = 3
controlplane: max_container_log_line_size = 16384
controlplane: netns_mounts_under_state_dir = false
controlplane: restrict_oom_score_adj = false
controlplane: sandbox_image = "registry.k8s.io/pause:3.6"
controlplane: selinux_category_range = 1024
controlplane: stats_collect_period = 10
controlplane: stream_idle_timeout = "4h0m0s"
controlplane: stream_server_address = "127.0.0.1"
controlplane: stream_server_port = "0"
controlplane: systemd_cgroup = false
controlplane: tolerate_missing_hugetlb_controller = true
controlplane: unset_seccomp_profile = ""
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".cni]
controlplane: bin_dir = "/opt/cni/bin"
controlplane: conf_dir = "/etc/cni/net.d"
controlplane: conf_template = ""
controlplane: ip_pref = ""
controlplane: max_conf_num = 1
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".containerd]
controlplane: default_runtime_name = "runc"
controlplane: disable_snapshot_annotations = true
controlplane: discard_unpacked_layers = false
controlplane: ignore_rdt_not_enabled_errors = false
controlplane: no_pivot = false
controlplane: snapshotter = "overlayfs"
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
controlplane: base_runtime_spec = ""
controlplane: cni_conf_dir = ""
controlplane: cni_max_conf_num = 0
controlplane: container_annotations = []
controlplane: pod_annotations = []
controlplane: privileged_without_host_devices = false
controlplane: runtime_engine = ""
controlplane: runtime_path = ""
controlplane: runtime_root = ""
controlplane: runtime_type = ""
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
controlplane: base_runtime_spec = ""
controlplane: cni_conf_dir = ""
controlplane: cni_max_conf_num = 0
controlplane: container_annotations = []
controlplane: pod_annotations = []
controlplane: privileged_without_host_devices = false
controlplane: runtime_engine = ""
controlplane: runtime_path = ""
controlplane: runtime_root = ""
controlplane: runtime_type = "io.containerd.runc.v2"
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
controlplane: BinaryName = ""
controlplane: CriuImagePath = ""
controlplane: CriuPath = ""
controlplane: CriuWorkPath = ""
controlplane: IoGid = 0
controlplane: IoUid = 0
controlplane: NoNewKeyring = false
controlplane: NoPivotRoot = false
controlplane: Root = ""
controlplane: ShimCgroup = ""
controlplane: SystemdCgroup = false
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
controlplane: base_runtime_spec = ""
controlplane: cni_conf_dir = ""
controlplane: cni_max_conf_num = 0
controlplane: container_annotations = []
controlplane: pod_annotations = []
controlplane: privileged_without_host_devices = false
controlplane: runtime_engine = ""
controlplane: runtime_path = ""
controlplane: runtime_root = ""
controlplane: runtime_type = ""
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".image_decryption]
controlplane: key_model = "node"
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".registry]
controlplane: config_path = ""
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".registry.auths]
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".registry.configs]
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".registry.headers]
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
controlplane:
controlplane: [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
controlplane: tls_cert_file = ""
controlplane: tls_key_file = ""
controlplane:
controlplane: [plugins."io.containerd.internal.v1.opt"]
controlplane: path = "/opt/containerd"
controlplane:
controlplane: [plugins."io.containerd.internal.v1.restart"]
controlplane: interval = "10s"
controlplane:
controlplane: [plugins."io.containerd.internal.v1.tracing"]
controlplane: sampling_ratio = 1.0
controlplane: service_name = "containerd"
controlplane:
controlplane: [plugins."io.containerd.metadata.v1.bolt"]
controlplane: content_sharing_policy = "shared"
controlplane:
controlplane: [plugins."io.containerd.monitor.v1.cgroups"]
controlplane: no_prometheus = false
controlplane:
controlplane: [plugins."io.containerd.runtime.v1.linux"]
controlplane: no_shim = false
controlplane: runtime = "runc"
controlplane: runtime_root = ""
controlplane: shim = "containerd-shim"
controlplane: shim_debug = false
controlplane:
controlplane: [plugins."io.containerd.runtime.v2.task"]
controlplane: platforms = ["linux/amd64"]
controlplane: sched_core = false
controlplane:
controlplane: [plugins."io.containerd.service.v1.diff-service"]
controlplane: default = ["walking"]
controlplane:
controlplane: [plugins."io.containerd.service.v1.tasks-service"]
controlplane: rdt_config_file = ""
controlplane:
controlplane: [plugins."io.containerd.snapshotter.v1.aufs"]
controlplane: root_path = ""
controlplane:
controlplane: [plugins."io.containerd.snapshotter.v1.btrfs"]
controlplane: root_path = ""
controlplane:
controlplane: [plugins."io.containerd.snapshotter.v1.devmapper"]
controlplane: async_remove = false
controlplane: base_image_size = ""
controlplane: discard_blocks = false
controlplane: fs_options = ""
controlplane: fs_type = ""
controlplane: pool_name = ""
controlplane: root_path = ""
controlplane:
controlplane: [plugins."io.containerd.snapshotter.v1.native"]
controlplane: root_path = ""
controlplane:
controlplane: [plugins."io.containerd.snapshotter.v1.overlayfs"]
controlplane: root_path = ""
controlplane: upperdir_label = false
controlplane:
controlplane: [plugins."io.containerd.snapshotter.v1.zfs"]
controlplane: root_path = ""
controlplane:
controlplane: [plugins."io.containerd.tracing.processor.v1.otlp"]
controlplane: endpoint = ""
controlplane: insecure = false
controlplane: protocol = ""
controlplane:
controlplane: [proxy_plugins]
controlplane:
controlplane: [stream_processors]
controlplane:
controlplane: [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
controlplane: accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
controlplane: args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
controlplane: env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
controlplane: path = "ctd-decoder"
controlplane: returns = "application/vnd.oci.image.layer.v1.tar"
controlplane:
controlplane: [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
controlplane: accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
controlplane: args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
controlplane: env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
controlplane: path = "ctd-decoder"
controlplane: returns = "application/vnd.oci.image.layer.v1.tar+gzip"
controlplane:
controlplane: [timeouts]
controlplane: "io.containerd.timeout.bolt.open" = "0s"
controlplane: "io.containerd.timeout.shim.cleanup" = "5s"
controlplane: "io.containerd.timeout.shim.load" = "5s"
controlplane: "io.containerd.timeout.shim.shutdown" = "3s"
controlplane: "io.containerd.timeout.task.state" = "2s"
controlplane:
controlplane: [ttrpc]
controlplane: address = ""
controlplane: gid = 0
controlplane: uid = 0
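Worth noting in the generated containerd config above: SystemdCgroup = false under the runc options, while kubeadm below reports setting the kubelet's cgroupDriver to "systemd". The Kubernetes documentation recommends aligning the two; the commonly applied override (not performed by this run) is the following change in /etc/containerd/config.toml:

```toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```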
controlplane: I0429 11:58:49.117140 3241 initconfiguration.go:254] loading configuration from "/var/sync/shared/kubeadm.yaml"
controlplane: I0429 11:58:49.120033 3241 initconfiguration.go:116] detected and using CRI socket: unix:///var/run/containerd/containerd.sock
controlplane: I0429 11:58:49.120055 3241 kubelet.go:196] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
controlplane: I0429 11:58:49.126564 3241 version.go:187] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.26.txt
controlplane: [init] Using Kubernetes version: v1.26.4
controlplane: I0429 11:58:50.515932 3241 common.go:128] WARNING: tolerating control plane version v1.26.4 as a pre-release version
controlplane: [preflight] Running pre-flight checks
controlplane: I0429 11:58:50.516881 3241 checks.go:568] validating Kubernetes and kubeadm version
controlplane: I0429 11:58:50.516927 3241 checks.go:168] validating if the firewall is enabled and active
controlplane: I0429 11:58:50.534633 3241 checks.go:203] validating availability of port 6443
controlplane: I0429 11:58:50.535000 3241 checks.go:203] validating availability of port 10259
controlplane: I0429 11:58:50.535016 3241 checks.go:203] validating availability of port 10257
controlplane: I0429 11:58:50.535031 3241 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
controlplane: I0429 11:58:50.535049 3241 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
controlplane: I0429 11:58:50.535054 3241 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
controlplane: I0429 11:58:50.535060 3241 checks.go:280] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
controlplane: I0429 11:58:50.535075 3241 checks.go:430] validating if the connectivity type is via proxy or direct
controlplane: I0429 11:58:50.535090 3241 checks.go:469] validating http connectivity to first IP address in the CIDR
controlplane: I0429 11:58:50.535103 3241 checks.go:469] validating http connectivity to first IP address in the CIDR
controlplane: I0429 11:58:50.535111 3241 checks.go:104] validating the container runtime
controlplane: I0429 11:58:50.594671 3241 checks.go:329] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
controlplane: I0429 11:58:50.594758 3241 checks.go:329] validating the contents of file /proc/sys/net/ipv4/ip_forward
controlplane: I0429 11:58:50.594772 3241 checks.go:644] validating whether swap is enabled or not
controlplane: I0429 11:58:50.594795 3241 checks.go:370] validating the presence of executable crictl
controlplane: I0429 11:58:50.594812 3241 checks.go:370] validating the presence of executable conntrack
controlplane: I0429 11:58:50.594828 3241 checks.go:370] validating the presence of executable ip
controlplane: I0429 11:58:50.594837 3241 checks.go:370] validating the presence of executable iptables
controlplane: I0429 11:58:50.594881 3241 checks.go:370] validating the presence of executable mount
controlplane: I0429 11:58:50.594890 3241 checks.go:370] validating the presence of executable nsenter
controlplane: I0429 11:58:50.594912 3241 checks.go:370] validating the presence of executable ebtables
controlplane: I0429 11:58:50.594922 3241 checks.go:370] validating the presence of executable ethtool
controlplane: I0429 11:58:50.594930 3241 checks.go:370] validating the presence of executable socat
controlplane: I0429 11:58:50.594942 3241 checks.go:370] validating the presence of executable tc
controlplane: I0429 11:58:50.594955 3241 checks.go:370] validating the presence of executable touch
controlplane: I0429 11:58:50.594964 3241 checks.go:516] running all checks
controlplane: I0429 11:58:50.615716 3241 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost
controlplane: I0429 11:58:50.615794 3241 checks.go:610] validating kubelet version
controlplane: I0429 11:58:50.683852 3241 checks.go:130] validating if the "kubelet" service is enabled and active
controlplane: [preflight] Pulling images required for setting up a Kubernetes cluster
controlplane: [preflight] This might take a minute or two, depending on the speed of your internet connection
controlplane: [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
controlplane: I0429 11:58:50.705020 3241 checks.go:203] validating availability of port 10250
controlplane: I0429 11:58:50.705145 3241 checks.go:203] validating availability of port 2379
controlplane: I0429 11:58:50.705160 3241 checks.go:203] validating availability of port 2380
controlplane: I0429 11:58:50.705174 3241 checks.go:243] validating the existence and emptiness of directory /var/lib/etcd
controlplane: I0429 11:58:50.705241 3241 checks.go:832] using image pull policy: IfNotPresent
controlplane: I0429 11:58:50.744149 3241 checks.go:849] pulling: registry.k8s.io/kube-apiserver:v1.26.4
controlplane: I0429 11:59:11.406425 3241 checks.go:849] pulling: registry.k8s.io/kube-controller-manager:v1.26.4
controlplane: I0429 11:59:29.102486 3241 checks.go:849] pulling: registry.k8s.io/kube-scheduler:v1.26.4
controlplane: I0429 11:59:39.238459 3241 checks.go:849] pulling: registry.k8s.io/kube-proxy:v1.26.4
controlplane: I0429 11:59:52.399000 3241 checks.go:849] pulling: registry.k8s.io/pause:3.9
controlplane: I0429 11:59:55.354484 3241 checks.go:849] pulling: registry.k8s.io/etcd:3.5.6-0
controlplane: I0429 12:00:45.143568 3241 checks.go:849] pulling: registry.k8s.io/coredns/coredns:v1.9.3
controlplane: [certs] Using certificateDir folder "/etc/kubernetes/pki"
controlplane: I0429 12:00:55.038379 3241 certs.go:112] creating a new certificate authority for ca
controlplane: [certs] Generating "ca" certificate and key
controlplane: I0429 12:00:55.314240 3241 certs.go:519] validating certificate period for ca certificate
controlplane: [certs] Generating "apiserver" certificate and key
controlplane: [certs] apiserver serving cert is signed for DNS names [controlplane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.20.30.10]
controlplane: [certs] Generating "apiserver-kubelet-client" certificate and key
controlplane: I0429 12:00:55.728826 3241 certs.go:112] creating a new certificate authority for front-proxy-ca
controlplane: [certs] Generating "front-proxy-ca" certificate and key
controlplane: I0429 12:00:55.962446 3241 certs.go:519] validating certificate period for front-proxy-ca certificate
controlplane: [certs] Generating "front-proxy-client" certificate and key
controlplane: I0429 12:00:56.168521 3241 certs.go:112] creating a new certificate authority for etcd-ca
controlplane: [certs] Generating "etcd/ca" certificate and key
controlplane: I0429 12:00:56.285469 3241 certs.go:519] validating certificate period for etcd/ca certificate
controlplane: [certs] Generating "etcd/server" certificate and key
controlplane: [certs] etcd/server serving cert is signed for DNS names [controlplane localhost] and IPs [10.20.30.10 127.0.0.1 ::1]
controlplane: [certs] Generating "etcd/peer" certificate and key
controlplane: [certs] etcd/peer serving cert is signed for DNS names [controlplane localhost] and IPs [10.20.30.10 127.0.0.1 ::1]
controlplane: [certs] Generating "etcd/healthcheck-client" certificate and key
controlplane: [certs] Generating "apiserver-etcd-client" certificate and key
controlplane: I0429 12:00:57.052467 3241 certs.go:78] creating new public/private key files for signing service account users
controlplane: [certs] Generating "sa" key and public key
controlplane: [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
controlplane: I0429 12:00:57.325072 3241 kubeconfig.go:103] creating kubeconfig file for admin.conf
controlplane: [kubeconfig] Writing "admin.conf" kubeconfig file
controlplane: I0429 12:00:57.501959 3241 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
controlplane: [kubeconfig] Writing "kubelet.conf" kubeconfig file
controlplane: I0429 12:00:57.696045 3241 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
controlplane: [kubeconfig] Writing "controller-manager.conf" kubeconfig file
controlplane: I0429 12:00:57.846710 3241 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
controlplane: [kubeconfig] Writing "scheduler.conf" kubeconfig file
controlplane: I0429 12:00:57.994549 3241 kubelet.go:67] Stopping the kubelet
controlplane: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
controlplane: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
controlplane: [kubelet-start] Starting the kubelet
controlplane: [control-plane] Using manifest folder "/etc/kubernetes/manifests"
controlplane: [control-plane] Creating static Pod manifest for "kube-apiserver"
controlplane: [control-plane] Creating static Pod manifest for "kube-controller-manager"
controlplane: [control-plane] Creating static Pod manifest for "kube-scheduler"
controlplane: [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
controlplane: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
controlplane: I0429 12:00:58.320463 3241 manifests.go:99] [control-plane] getting StaticPodSpecs
controlplane: I0429 12:00:58.320668 3241 certs.go:519] validating certificate period for CA certificate
controlplane: I0429 12:00:58.320720 3241 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
controlplane: I0429 12:00:58.320725 3241 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
controlplane: I0429 12:00:58.320729 3241 manifests.go:125] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
controlplane: I0429 12:00:58.320732 3241 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
controlplane: I0429 12:00:58.320735 3241 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
controlplane: I0429 12:00:58.320738 3241 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
controlplane: I0429 12:00:58.322767 3241 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
controlplane: I0429 12:00:58.322779 3241 manifests.go:99] [control-plane] getting StaticPodSpecs
controlplane: I0429 12:00:58.322934 3241 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
controlplane: I0429 12:00:58.322940 3241 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
controlplane: I0429 12:00:58.322943 3241 manifests.go:125] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
controlplane: I0429 12:00:58.322947 3241 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
controlplane: I0429 12:00:58.322950 3241 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
controlplane: I0429 12:00:58.322953 3241 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
controlplane: I0429 12:00:58.322956 3241 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
controlplane: I0429 12:00:58.322960 3241 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
controlplane: I0429 12:00:58.323513 3241 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
controlplane: I0429 12:00:58.323521 3241 manifests.go:99] [control-plane] getting StaticPodSpecs
controlplane: I0429 12:00:58.323655 3241 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
controlplane: I0429 12:00:58.323959 3241 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
controlplane: I0429 12:00:58.324621 3241 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
controlplane: I0429 12:00:58.324631 3241 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
controlplane: I0429 12:00:58.325045 3241 loader.go:373] Config loaded from file: /etc/kubernetes/admin.conf
controlplane: I0429 12:00:58.325776 3241 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s in 0 milliseconds
controlplane: I0429 12:00:58.826640 3241 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s in 0 milliseconds
controlplane: I0429 12:00:59.334065 3241 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s in 0 milliseconds
controlplane: I0429 12:00:59.861084 3241 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s in 0 milliseconds
controlplane: I0429 12:01:00.328222 3241 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s in 1 milliseconds
controlplane: I0429 12:01:00.827021 3241 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s in 0 milliseconds
controlplane: I0429 12:01:01.326722 3241 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s in 0 milliseconds
controlplane: I0429 12:01:01.829725 3241 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s in 1 milliseconds
controlplane: I0429 12:01:02.327515 3241 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s in 0 milliseconds
controlplane: I0429 12:01:02.826643 3241 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s in 0 milliseconds
controlplane: I0429 12:01:03.327076 3241 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s in 1 milliseconds
controlplane: I0429 12:01:07.160253 3241 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s 500 Internal Server Error in 3313 milliseconds
controlplane: I0429 12:01:07.346279 3241 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s 500 Internal Server Error in 19 milliseconds
controlplane: I0429 12:01:07.828228 3241 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s 500 Internal Server Error in 1 milliseconds
controlplane: I0429 12:01:08.331509 3241 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s 500 Internal Server Error in 3 milliseconds
controlplane: I0429 12:01:08.830095 3241 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s 500 Internal Server Error in 3 milliseconds
controlplane: I0429 12:01:09.327448 3241 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s 500 Internal Server Error in 0 milliseconds
controlplane: I0429 12:01:09.827365 3241 round_trippers.go:553] GET https://10.20.30.10:6443/healthz?timeout=10s 200 OK in 1 milliseconds
controlplane: [apiclient] All control plane components are healthy after 11.502115 seconds
controlplane: I0429 12:01:09.827668 3241 uploadconfig.go:111] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
controlplane: [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
controlplane: I0429 12:01:09.834029 3241 round_trippers.go:553] POST https://10.20.30.10:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 5 milliseconds
controlplane: I0429 12:01:09.839813 3241 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 5 milliseconds
controlplane: I0429 12:01:09.846776 3241 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 6 milliseconds
controlplane: [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
controlplane: I0429 12:01:09.848643 3241 uploadconfig.go:125] [upload-config] Uploading the kubelet component config to a ConfigMap
controlplane: I0429 12:01:09.857090 3241 round_trippers.go:553] POST https://10.20.30.10:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 6 milliseconds
controlplane: I0429 12:01:09.863740 3241 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 6 milliseconds
controlplane: I0429 12:01:09.869666 3241 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 4 milliseconds
controlplane: I0429 12:01:09.870554 3241 uploadconfig.go:130] [upload-config] Preserving the CRISocket information for the control-plane node
controlplane: I0429 12:01:09.870722 3241 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///var/run/containerd/containerd.sock" to the Node API object "controlplane" as an annotation
controlplane: I0429 12:01:10.378493 3241 round_trippers.go:553] GET https://10.20.30.10:6443/api/v1/nodes/controlplane?timeout=10s 200 OK in 4 milliseconds
controlplane: I0429 12:01:10.392508 3241 round_trippers.go:553] PATCH https://10.20.30.10:6443/api/v1/nodes/controlplane?timeout=10s 200 OK in 12 milliseconds
controlplane: [upload-certs] Skipping phase. Please see --upload-certs
controlplane: [mark-control-plane] Marking the node controlplane as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
controlplane: [mark-control-plane] Marking the node controlplane as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
controlplane: I0429 12:01:10.898574 3241 round_trippers.go:553] GET https://10.20.30.10:6443/api/v1/nodes/controlplane?timeout=10s 200 OK in 2 milliseconds
controlplane: I0429 12:01:10.915372 3241 round_trippers.go:553] PATCH https://10.20.30.10:6443/api/v1/nodes/controlplane?timeout=10s 200 OK in 15 milliseconds
controlplane: [bootstrap-token] Using token: o77p5r.bc9670e3247x4278
controlplane: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
controlplane: I0429 12:01:10.922941 3241 round_trippers.go:553] GET https://10.20.30.10:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-o77p5r?timeout=10s 404 Not Found in 4 milliseconds
controlplane: I0429 12:01:10.933194 3241 round_trippers.go:553] POST https://10.20.30.10:6443/api/v1/namespaces/kube-system/secrets?timeout=10s 201 Created in 8 milliseconds
controlplane: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
controlplane: I0429 12:01:10.938496 3241 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?timeout=10s 201 Created in 4 milliseconds
controlplane: I0429 12:01:10.945614 3241 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 6 milliseconds
controlplane: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
controlplane: I0429 12:01:10.954004 3241 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 7 milliseconds
controlplane: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
controlplane: I0429 12:01:10.959231 3241 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 4 milliseconds
controlplane: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
controlplane: I0429 12:01:10.966921 3241 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 5 milliseconds
controlplane: [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
controlplane: I0429 12:01:10.968301 3241 clusterinfo.go:47] [bootstrap-token] loading admin kubeconfig
controlplane: I0429 12:01:10.969093 3241 loader.go:373] Config loaded from file: /etc/kubernetes/admin.conf
controlplane: I0429 12:01:10.969210 3241 clusterinfo.go:58] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
controlplane: I0429 12:01:10.972354 3241 clusterinfo.go:70] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
controlplane: I0429 12:01:10.988793 3241 round_trippers.go:553] POST https://10.20.30.10:6443/api/v1/namespaces/kube-public/configmaps?timeout=10s 201 Created in 16 milliseconds
controlplane: I0429 12:01:10.988956 3241 clusterinfo.go:84] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
controlplane: I0429 12:01:10.994508 3241 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles?timeout=10s 201 Created in 5 milliseconds
controlplane: I0429 12:01:10.998919 3241 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings?timeout=10s 201 Created in 4 milliseconds
controlplane: I0429 12:01:10.999172 3241 kubeletfinalize.go:90] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
controlplane: [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
controlplane: I0429 12:01:10.999652 3241 loader.go:373] Config loaded from file: /etc/kubernetes/kubelet.conf
controlplane: I0429 12:01:11.000198 3241 kubeletfinalize.go:134] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
controlplane: I0429 12:01:11.439247 3241 round_trippers.go:553] GET https://10.20.30.10:6443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns 200 OK in 7 milliseconds
controlplane: I0429 12:01:11.446620 3241 round_trippers.go:553] GET https://10.20.30.10:6443/api/v1/namespaces/kube-system/configmaps/coredns?timeout=10s 404 Not Found in 4 milliseconds
controlplane: I0429 12:01:11.451257 3241 round_trippers.go:553] POST https://10.20.30.10:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 4 milliseconds
controlplane: I0429 12:01:11.457049 3241 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?timeout=10s 201 Created in 4 milliseconds
controlplane: I0429 12:01:11.464126 3241 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 5 milliseconds
controlplane: I0429 12:01:11.469609 3241 round_trippers.go:553] POST https://10.20.30.10:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s 201 Created in 4 milliseconds
controlplane: I0429 12:01:11.483307 3241 round_trippers.go:553] POST https://10.20.30.10:6443/apis/apps/v1/namespaces/kube-system/deployments?timeout=10s 201 Created in 11 milliseconds
controlplane: I0429 12:01:11.525571 3241 round_trippers.go:553] POST https://10.20.30.10:6443/api/v1/namespaces/kube-system/services?timeout=10s 201 Created in 40 milliseconds
controlplane: [addons] Applied essential addon: CoreDNS
controlplane: I0429 12:01:11.534550 3241 round_trippers.go:553] POST https://10.20.30.10:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 7 milliseconds
controlplane: I0429 12:01:11.544129 3241 round_trippers.go:553] POST https://10.20.30.10:6443/apis/apps/v1/namespaces/kube-system/daemonsets?timeout=10s 201 Created in 8 milliseconds
controlplane: I0429 12:01:11.548814 3241 round_trippers.go:553] POST https://10.20.30.10:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s 201 Created in 4 milliseconds
controlplane: I0429 12:01:11.553683 3241 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 3 milliseconds
controlplane: I0429 12:01:11.558456 3241 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 4 milliseconds
controlplane: I0429 12:01:11.564134 3241 round_trippers.go:553] POST https://10.20.30.10:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 4 milliseconds
controlplane: [addons] Applied essential addon: kube-proxy
controlplane:
controlplane: Your Kubernetes control-plane has initialized successfully!
controlplane:
controlplane: To start using your cluster, you need to run the following as a regular user:
controlplane:
controlplane: mkdir -p $HOME/.kube
controlplane: sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
controlplane: sudo chown $(id -u):$(id -g) $HOME/.kube/config
controlplane:
controlplane: Alternatively, if you are the root user, you can run:
controlplane:
controlplane: export KUBECONFIG=/etc/kubernetes/admin.conf
controlplane:
controlplane: You should now deploy a pod network to the cluster.
controlplane: Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
controlplane: https://kubernetes.io/docs/concepts/cluster-administration/addons/
controlplane:
controlplane: Then you can join any number of worker nodes by running the following on each as root:
controlplane:
controlplane: kubeadm join 10.20.30.10:6443 --token o77p5r.bc9670e3247x4278 \
controlplane: --discovery-token-ca-cert-hash sha256:dfebe12f37fafe629b292fe163e1e989ceb1de07e5afe5f00cf1106132702b92
controlplane: I0429 12:01:11.565183 3241 loader.go:373] Config loaded from file: /etc/kubernetes/admin.conf
controlplane: I0429 12:01:11.565792 3241 loader.go:373] Config loaded from file: /etc/kubernetes/admin.conf
controlplane: serviceaccount/kube-proxy-windows created
controlplane: clusterrolebinding.rbac.authorization.k8s.io/node:kube-proxy created
controlplane: clusterrolebinding.rbac.authorization.k8s.io/node:god2 created
controlplane: clusterrolebinding.rbac.authorization.k8s.io/node:god3 created
controlplane: clusterrolebinding.rbac.authorization.k8s.io/node:god4 created
controlplane: Testing controlplane nodes!
controlplane: NAMESPACE NAME READY STATUS RESTARTS AGE
controlplane: kube-system etcd-controlplane 0/1 Pending 0 3s
controlplane: kube-system kube-apiserver-controlplane 0/1 Pending 0 1s
controlplane: kube-system kube-controller-manager-controlplane 0/1 Running 0 5s
controlplane: kube-system kube-scheduler-controlplane 0/1 Pending 0 1s
==> controlplane: Running provisioner: shell...
controlplane: Running: C:/Users/mateuszl/AppData/Local/Temp/vagrant-shell20230429-26060-q5772q.sh
controlplane: running calico installer now with pod_cidr 100.244.0.0/16
controlplane: node/controlplane untainted
controlplane: error: taint "node-role.kubernetes.io/master" not found
controlplane: namespace/calico-system created
controlplane: namespace/tigera-operator created
controlplane: customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
controlplane: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
controlplane: customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
controlplane: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
controlplane: customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
controlplane: serviceaccount/tigera-operator created
controlplane: clusterrole.rbac.authorization.k8s.io/tigera-operator created
controlplane: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
controlplane: deployment.apps/tigera-operator created
controlplane: --2023-04-29 12:01:20-- https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/custom-resources.yaml
controlplane: Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.111.133, 185.199.109.133, ...
controlplane: Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
controlplane: HTTP request sent, awaiting response... 200 OK
controlplane: Length: 827 [text/plain]
controlplane: Saving to: 'trigera-custom-resource.yaml'
controlplane:
controlplane: 0K 100% 50.9M=0s
controlplane:
controlplane: 2023-04-29 12:01:20 (50.9 MB/s) - 'trigera-custom-resource.yaml' saved [827/827]
controlplane:
controlplane: installation.operator.tigera.io/default created
controlplane: apiserver.operator.tigera.io/default created
controlplane: installation.operator.tigera.io/default patched
controlplane: waiting 20s for calico pods...
controlplane: --2023-04-29 12:01:42-- https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico-windows-vxlan.yaml
controlplane: Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.108.133, 185.199.109.133, ...
controlplane: Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.
controlplane: HTTP request sent, awaiting response... 200 OK
controlplane: Length: 4157 (4.1K) [text/plain]
controlplane: Saving to: 'calico-windows.yaml'
controlplane:
controlplane: 0K .... 100% 3.19M=0.001s
controlplane:
controlplane: 2023-04-29 12:01:43 (3.19 MB/s) - 'calico-windows.yaml' saved [4157/4157]
controlplane:
controlplane: configmap/calico-windows-config created
controlplane: daemonset.apps/calico-node-windows created
controlplane: % Total % Received % Xferd Average Speed Time Time Time Current
controlplane: Dload Upload Total Spent Left Speed
100 56.8M 100 56.8M 0 0 1438k 0 0:00:40 0:00:40 --:--:-- 1494k
controlplane: Successfully set StrictAffinity to: true
controlplane: NAME READY STATUS RESTARTS AGE
controlplane: calico-kube-controllers-6b7b9c649d-k95hg 0/1 Pending 0 44s
controlplane: calico-node-h485b 0/1 Init:0/2 0 44s
controlplane: calico-typha-b58dcf67d-6ddrs 1/1 Running 0 44s
[swdt-mage] Setting SSH private key permissions for .vagrant\machines\controlplane\virtualbox\private_key
[swdt-mage] exec: icacls ".vagrant\\machines\\controlplane\\virtualbox\\private_key" "/c" "/t" "/Inheritance:d"
processed file: .vagrant\machines\controlplane\virtualbox\private_key
Successfully processed 1 files; Failed processing 0 files
[swdt-mage] exec: icacls ".vagrant\\machines\\controlplane\\virtualbox\\private_key" "/c" "/t" "/Inheritance:d"
processed file: .vagrant\machines\controlplane\virtualbox\private_key
Successfully processed 1 files; Failed processing 0 files
[swdt-mage] exec: icacls ".vagrant\\machines\\controlplane\\virtualbox\\private_key" "/c" "/t" "/Grant" "MateuszL:F"
processed file: .vagrant\machines\controlplane\virtualbox\private_key
Successfully processed 1 files; Failed processing 0 files
[swdt-mage] exec: icacls ".vagrant\\machines\\controlplane\\virtualbox\\private_key" "/c" "/t" "/Grant:r" "MateuszL:F"
processed file: .vagrant\machines\controlplane\virtualbox\private_key
Successfully processed 1 files; Failed processing 0 files
[swdt-mage] exec: icacls ".vagrant\\machines\\controlplane\\virtualbox\\private_key" "/c" "/t" "/Remove:g" "Administrator" "Authenticated Users" "BUILTIN\\Administrators" "BUILTIN" "Everyone" "System" "Users"
processed file: .vagrant\machines\controlplane\virtualbox\private_key
Successfully processed 1 files; Failed processing 0 files
[swdt-mage] exec: icacls ".vagrant\\machines\\controlplane\\virtualbox\\private_key"
.vagrant\machines\controlplane\virtualbox\private_key CADCORP\MateuszL:(F)
Successfully processed 1 files; Failed processing 0 files
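The icacls sequence above disables ACL inheritance, grants the current user full control, and strips all other principals so that SSH accepts the private key. On a Unix host the equivalent tightening is a single chmod; a minimal sketch, using a throwaway temp file as a stand-in for the real key:

```shell
# Restrict a key file to its owner (0600), the Unix analogue of the
# /Inheritance:d + /Grant:r + /Remove:g icacls sequence above.
KEY=$(mktemp)              # throwaway stand-in for the private key
chmod 600 "$KEY"
MODE=$(stat -c '%a' "$KEY")
echo "mode: $MODE"
rm -f "$KEY"
```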
[swdt-mage] exec: vagrant "status"
[Vagrantfile] Loading default settings from settings.yaml
Current machine states:
controlplane running (virtualbox)
winw1 not created (virtualbox)
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
[swdt-mage] Creating worker Windows node
[swdt-mage] ##########################################################
[swdt-mage] Retry vagrant up if the first time the Windows node failed
[swdt-mage] ##########################################################
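The retry behaviour announced above is a bounded loop around `vagrant status`/`vagrant up`. The pattern can be sketched as follows, with a stand-in command in place of vagrant and the attempt cap mirroring `vagrant_windows_max_provision_attempts`:

```shell
# Bounded retry loop, as the mage target does for 'vagrant up winw1'.
# 'cmd' is a stand-in that succeeds on the 3rd attempt.
MAX=10
attempt=1
cmd() { [ "$attempt" -ge 3 ]; }
while [ "$attempt" -le "$MAX" ]; do
  if cmd; then
    echo "succeeded on attempt $attempt"
    break
  fi
  attempt=$((attempt + 1))
done
```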
[swdt-mage] vagrant status winw1 - attempt 1 of 10
[swdt-mage] exec: vagrant "status" "winw1"
[swdt-mage] winw1 not created (virtualbox)
[swdt-mage] vagrant up winw1 - attempt 1 of 10
[swdt-mage] exec: vagrant "up" "winw1"
[Vagrantfile] Loading default settings from settings.yaml
Bringing machine 'winw1' up with 'virtualbox' provider...
==> winw1: Importing base box 'mloskot/sig-windows-dev-tools-windows-2019'...
Progress: 10%
Progress: 90%
==> winw1: Matching MAC address for NAT networking...
==> winw1: Checking if box 'mloskot/sig-windows-dev-tools-windows-2019' version '1.0' is up to date...
==> winw1: Setting the name of the VM: sig-windows-dev-tools-2_winw1_1682769828465_96319
==> winw1: Fixed port collision for 22 => 2222. Now on port 2200.
==> winw1: Clearing any previously set network interfaces...
==> winw1: Preparing network interfaces based on configuration...
winw1: Adapter 1: nat
winw1: Adapter 2: hostonly
==> winw1: Forwarding ports...
winw1: 5985 (guest) => 55985 (host) (adapter 1)
winw1: 5986 (guest) => 55986 (host) (adapter 1)
winw1: 22 (guest) => 2200 (host) (adapter 1)
==> winw1: Running 'pre-boot' VM customizations...
==> winw1: Booting VM...
==> winw1: Waiting for machine to boot. This may take a few minutes...
winw1: WinRM address: 127.0.0.1:55985
winw1: WinRM username: vagrant
winw1: WinRM execution_time_limit: PT2H
winw1: WinRM transport: negotiate
==> winw1: Machine booted and ready!
==> winw1: Checking for guest additions in VM...
==> winw1: Configuring and enabling network interfaces...
==> winw1: Mounting shared folders...
winw1: C:/forked => D:/_kubernetes/sig-windows-dev-tools-2/forked
winw1: C:/sync/shared => D:/_kubernetes/sig-windows-dev-tools-2/sync/shared
winw1: C:/sync/windows => D:/_kubernetes/sig-windows-dev-tools-2/sync/windows
==> winw1: Running provisioner: shell...
winw1: Running: sync/windows/0-containerd.ps1 as C:\tmp\vagrant-shell.ps1
winw1: Stopping ContainerD & Kubelet
winw1: Downloading Calico using ContainerD - [calico: 3.25] [containerd: 1.6.15]
winw1: Installing 7Zip
winw1: VERBOSE: Using the provider 'PowerShellGet' for searching packages.
winw1: VERBOSE: Using the provider 'NuGet' for searching packages.
winw1: VERBOSE: Total package yield:'0' for the specified package '7Zip4PowerShell'.
winw1: VERBOSE: The -Repository parameter was not specified. PowerShellGet will use all of the registered repositories.
winw1: VERBOSE: Getting the provider object for the PackageManagement Provider 'NuGet'.
winw1: VERBOSE: The specified Location is 'https://www.powershellgallery.com/api/v2' and PackageManagementProvider is 'NuGet'.
winw1: VERBOSE: Searching repository 'https://www.powershellgallery.com/api/v2/FindPackagesById()?id='7Zip4PowerShell'' for
winw1: ''.
winw1: VERBOSE: Total package yield:'1' for the specified package '7Zip4PowerShell'.
winw1: VERBOSE: Performing the operation "Install Package" on target "Package '7Zip4Powershell' version '2.3.0' from
winw1: 'PSGallery'.".
winw1: VERBOSE: The installation scope is specified to be 'CurrentUser'.
winw1: VERBOSE: The specified module will be installed in 'C:\Users\vagrant\Documents\WindowsPowerShell\Modules'.
winw1: VERBOSE: The specified Location is 'NuGet' and PackageManagementProvider is 'NuGet'.
winw1: VERBOSE: Downloading module '7Zip4Powershell' with version '2.3.0' from the repository
winw1: 'https://www.powershellgallery.com/api/v2'.
winw1: VERBOSE: Searching repository 'https://www.powershellgallery.com/api/v2/FindPackagesById()?id='7Zip4Powershell'' for
winw1: ''.
winw1: VERBOSE: InstallPackage' - name='7Zip4Powershell',
winw1: version='2.3.0',destination='C:\Users\vagrant\AppData\Local\Temp\868697821'
winw1: VERBOSE: DownloadPackage' - name='7Zip4Powershell',
winw1: version='2.3.0',destination='C:\Users\vagrant\AppData\Local\Temp\868697821\7Zip4Powershell\7Zip4Powershell.nupkg',
winw1: uri='https://www.powershellgallery.com/api/v2/package/7Zip4Powershell/2.3.0'
winw1: VERBOSE: Downloading 'https://www.powershellgallery.com/api/v2/package/7Zip4Powershell/2.3.0'.
winw1: VERBOSE: Completed downloading 'https://www.powershellgallery.com/api/v2/package/7Zip4Powershell/2.3.0'.
winw1: VERBOSE: Completed downloading '7Zip4Powershell'.
winw1: VERBOSE: Hash for package '7Zip4Powershell' does not match hash provided from the server.
winw1: VERBOSE: InstallPackageLocal' - name='7Zip4Powershell',
winw1: version='2.3.0',destination='C:\Users\vagrant\AppData\Local\Temp\868697821'
winw1: VERBOSE: Catalog file '7Zip4Powershell.cat' is not found in the contents of the module '7Zip4Powershell' being
winw1: installed.
winw1: VERBOSE: Module '7Zip4Powershell' was installed successfully to path
winw1: 'C:\Users\vagrant\Documents\WindowsPowerShell\Modules\7Zip4Powershell\2.3.0'.
winw1:
winw1: Name Version Source Summary
winw1: ---- ------- ------ -------
winw1: 7Zip4Powershell 2.3.0 PSGallery Powershell module for creating and extracting 7-Zip...
winw1: Getting ContainerD binaries
winw1: Downloading https://github.com/containerd/containerd/releases/download/v1.6.15/containerd-1.6.15-windows-amd64.tar.gz to C:\Program Files\containerd\containerd.tar.gz
winw1: x containerd-shim-runhcs-v1.exe
winw1: x ctr.exe
winw1: x containerd-stress.exe
winw1: x containerd.exe
winw1: Registering ContainerD as a service
winw1: Starting ContainerD service
winw1: time="2023-04-29T05:08:55.296269000-07:00" level=fatal msg="The specified service already exists."
winw1: Done - please remember to add '--cri-socket "npipe:////./pipe/containerd-containerd"' to your kubeadm join command
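The reminder above matters because on Windows the kubelet reaches containerd over a named pipe rather than a Unix socket. A sketch of composing the resulting join invocation; the API server address comes from `linux_node_ip` in settings.yaml, the default port 6443 is assumed, and the token and CA cert hash are placeholders, not values from this run:

```shell
# Compose the kubeadm join command with the Windows containerd named pipe.
CRI_SOCKET='npipe:////./pipe/containerd-containerd'
JOIN_CMD="kubeadm join 10.20.30.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --cri-socket \"$CRI_SOCKET\""
echo "$JOIN_CMD"
```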
winw1:
winw1:
==> winw1: Running provisioner: shell...
winw1: Running: sync/windows/forked.ps1 as C:\tmp\vagrant-shell.ps1
winw1:
winw1:
winw1: Directory: C:\
winw1:
winw1:
winw1: Mode LastWriteTime Length Name
winw1: ---- ------------- ------ ----
winw1: d----- 1/21/2022 3:44 AM k
winw1:
winw1:
==> winw1: Running provisioner: shell...
winw1: Running: sync/shared/kubejoin.ps1 as C:\tmp\vagrant-shell.ps1
winw1: [preflight] Running pre-flight checks
winw1: [preflight] Reading configuration from the cluster...
winw1: [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
winw1: W0429 05:12:04.692561 3932 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "npipe" to the "criSocket" with value "unix:///var/run/unknown.sock". Please update your configuration!
winw1: W0429 05:12:04.717016 3932 utils.go:69] The recommended value for "authentication.x509.clientCAFile" in "KubeletConfiguration" is: \etc\kubernetes\pki\ca.crt; the provided value is: /etc/kubernetes/pki/ca.crt
winw1: [kubelet-start] Writing kubelet configuration to file "\\var\\lib\\kubelet\\config.yaml"
winw1: [kubelet-start] Writing kubelet environment file with flags to file "\\var\\lib\\kubelet\\kubeadm-flags.env"
winw1: [kubelet-start] Starting the kubelet
winw1: [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
winw1:
winw1: This node has joined the cluster:
winw1: * Certificate signing request was sent to apiserver and a response was received.
winw1: * The Kubelet was informed of the new secure connection details.
winw1:
winw1: Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[swdt-mage] vagrant status winw1 - attempt 2 of 10
[swdt-mage] exec: vagrant "status" "winw1"
[swdt-mage] winw1 running (virtualbox)
[swdt-mage] Setting SSH private key permissions for .vagrant\machines\controlplane\virtualbox\private_key
[swdt-mage] exec: icacls ".vagrant\\machines\\controlplane\\virtualbox\\private_key" "/c" "/t" "/Inheritance:d"
processed file: .vagrant\machines\controlplane\virtualbox\private_key
Successfully processed 1 files; Failed processing 0 files
[swdt-mage] exec: icacls ".vagrant\\machines\\controlplane\\virtualbox\\private_key" "/c" "/t" "/Inheritance:d"
processed file: .vagrant\machines\controlplane\virtualbox\private_key
Successfully processed 1 files; Failed processing 0 files
[swdt-mage] exec: icacls ".vagrant\\machines\\controlplane\\virtualbox\\private_key" "/c" "/t" "/Grant" "MateuszL:F"
processed file: .vagrant\machines\controlplane\virtualbox\private_key
Successfully processed 1 files; Failed processing 0 files
[swdt-mage] exec: icacls ".vagrant\\machines\\controlplane\\virtualbox\\private_key" "/c" "/t" "/Grant:r" "MateuszL:F"
processed file: .vagrant\machines\controlplane\virtualbox\private_key
Successfully processed 1 files; Failed processing 0 files
[swdt-mage] exec: icacls ".vagrant\\machines\\controlplane\\virtualbox\\private_key" "/c" "/t" "/Remove:g" "Administrator" "Authenticated Users" "BUILTIN\\Administrators" "BUILTIN" "Everyone" "System" "Users"
processed file: .vagrant\machines\controlplane\virtualbox\private_key
Successfully processed 1 files; Failed processing 0 files
[swdt-mage] exec: icacls ".vagrant\\machines\\controlplane\\virtualbox\\private_key"
.vagrant\machines\controlplane\virtualbox\private_key CADCORP\MateuszL:(F)
Successfully processed 1 files; Failed processing 0 files
[swdt-mage] kubectl get nodes | grep winw1 - attempt 1 of 10
[swdt-mage] exec: vagrant "ssh" "controlplane" "-c" "kubectl get nodes"
Connection to 127.0.0.1 closed.
[swdt-mage] [Vagrantfile] Loading default settings from settings.yaml
NAME STATUS ROLES AGE VERSION
controlplane Ready control-plane 12m v1.26.4-15+1f16a36a2abdae
win-8vnbvnmjau2 NotReady <none> 55s v1.26.4-15+1f16a36a2abdae
[swdt-mage] vagrant provision winw1 - attempt 1 of 10
[swdt-mage] exec: vagrant "provision" "winw1"
[Vagrantfile] Loading default settings from settings.yaml
==> winw1: Running provisioner: shell...
winw1: Running: sync/windows/0-containerd.ps1 as C:\tmp\vagrant-shell.ps1
winw1: Stopping ContainerD & Kubelet
winw1: Downloading Calico using ContainerD - [calico: 3.25] [containerd: 1.6.15]
winw1: Installing 7Zip
winw1: Getting ContainerD binaries
winw1: Downloading https://github.com/containerd/containerd/releases/download/v1.6.15/containerd-1.6.15-windows-amd64.tar.gz to C:\Program Files\containerd\containerd.tar.gz
winw1: x containerd-shim-runhcs-v1.exe
winw1: x ctr.exe
winw1: x containerd-stress.exe
winw1: x containerd.exe: Can't unlink already-existing object
winw1: tar.exe: Error exit delayed from previous errors.
winw1: Registering ContainerD as a service
winw1: Starting ContainerD service
winw1: Done - please remember to add '--cri-socket "npipe:////./pipe/containerd-containerd"' to your kubeadm join command
winw1: time="2023-04-29T05:14:45.463598600-07:00" level=fatal msg="The specified service already exists."
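The fatal here is a leftover from the first run: the containerd service is still registered, so re-registering it fails. A guard that checks for the service first would keep the provisioning step idempotent; a sketch of the decision logic, where `decide_register` is a hypothetical helper and "exists"/"absent" stand in for the result of an `sc.exe query` on the node:

```shell
# Register containerd as a service only when it is not already present,
# avoiding the "The specified service already exists" fatal seen above.
decide_register() {
  if [ "$1" = "exists" ]; then
    echo "skip-register"   # service already there: do nothing
  else
    echo "register"        # first run: perform the registration
  fi
}
FIRST_RUN=$(decide_register absent)    # no service yet
SECOND_RUN=$(decide_register exists)   # re-provision
echo "first run: $FIRST_RUN, re-provision: $SECOND_RUN"
```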
==> winw1: Running provisioner: shell...
winw1: Running: sync/windows/forked.ps1 as C:\tmp\vagrant-shell.ps1
winw1:
winw1:
winw1: Directory: C:\
winw1:
winw1:
winw1: Mode LastWriteTime Length Name
winw1: ---- ------------- ------ ----
winw1: d----- 1/21/2022 3:44 AM k
winw1:
winw1:
==> winw1: Running provisioner: shell...
winw1: Running: sync/shared/kubejoin.ps1 as C:\tmp\vagrant-shell.ps1
winw1: [preflight] Running pre-flight checks
winw1: error execution phase preflight: [preflight] Some fatal errors occurred:
winw1: [ERROR FileAvailable-\etc\kubernetes\kubelet.conf]: \etc\kubernetes\kubelet.conf already exists
winw1: [ERROR FileAvailable-C:-etc-kubernetes-pki-ca.crt]: C:/etc/kubernetes/pki/ca.crt already exists
winw1: [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
winw1: To see the stack trace of this error execute with --v=5 or higher
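This preflight failure is a leftover from the successful first join: kubelet.conf and the CA certificate already exist, so re-running kubejoin.ps1 bails out. Clearing the stale artifacts (or passing `--ignore-preflight-errors`) would let the join re-run; a hedged sketch of the cleanup, with illustrative Unix-style paths standing in for the Windows paths flagged in the errors:

```shell
# Remove stale join artifacts so a repeated 'kubeadm join' passes preflight.
# The paths are stand-ins for \etc\kubernetes\kubelet.conf and
# C:/etc/kubernetes/pki/ca.crt on the Windows node.
REMOVED=0
SKIPPED=0
for f in /tmp/demo-kubelet.conf /tmp/demo-ca.crt; do
  if [ -f "$f" ]; then
    rm -f "$f"
    REMOVED=$((REMOVED + 1))
  else
    SKIPPED=$((SKIPPED + 1))
  fi
done
echo "removed: $REMOVED, already absent: $SKIPPED"
```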
[swdt-mage] kubectl get nodes | grep winw1 - attempt 2 of 10
[swdt-mage] exec: vagrant "ssh" "controlplane" "-c" "kubectl get nodes"
Connection to 127.0.0.1 closed.
[swdt-mage] [Vagrantfile] Loading default settings from settings.yaml
NAME STATUS ROLES AGE VERSION
controlplane NotReady control-plane 16m v1.26.4-15+1f16a36a2abdae
win-8vnbvnmjau2 NotReady <none> 4m56s v1.26.4-15+1f16a36a2abdae
[swdt-mage] vagrant provision winw1 - attempt 2 of 10
[swdt-mage] exec: vagrant "provision" "winw1"
[Vagrantfile] Loading default settings from settings.yaml
==> winw1: Running provisioner: shell...
winw1: Running: sync/windows/0-containerd.ps1 as C:\tmp\vagrant-shell.ps1
winw1: Stopping ContainerD & Kubelet
winw1: Downloading Calico using ContainerD - [calico: 3.25] [containerd: 1.6.15]
winw1: Installing 7Zip
winw1: Getting ContainerD binaries
winw1: Downloading https://github.com/containerd/containerd/releases/download/v1.6.15/containerd-1.6.15-windows-amd64.tar.gz to C:\Program Files\containerd\containerd.tar.gz
winw1: x containerd-shim-runhcs-v1.exe
winw1: x ctr.exe
winw1: x containerd-stress.exe
winw1: x containerd.exe: Can't unlink already-existing object
winw1: tar.exe: Error exit delayed from previous errors.
winw1: Registering ContainerD as a service
winw1: Starting ContainerD service
winw1: Done - please remember to add '--cri-socket "npipe:////./pipe/containerd-containerd"' to your kubeadm join command
winw1: time="2023-04-29T05:20:12.726578500-07:00" level=fatal msg="The specified service already exists."
==> winw1: Running provisioner: shell...
winw1: Running: sync/windows/forked.ps1 as C:\tmp\vagrant-shell.ps1
winw1:
winw1:
winw1: Directory: C:\
winw1:
winw1:
winw1: Mode LastWriteTime Length Name
winw1: ---- ------------- ------ ----
winw1: d----- 1/21/2022 3:44 AM k
winw1:
winw1:
==> winw1: Running provisioner: shell...
winw1: Running: sync/shared/kubejoin.ps1 as C:\tmp\vagrant-shell.ps1
winw1: [preflight] Running pre-flight checks
winw1: error execution phase preflight: [preflight] Some fatal errors occurred:
winw1: [ERROR FileAvailable-\etc\kubernetes\kubelet.conf]: \etc\kubernetes\kubelet.conf already exists
winw1: [ERROR FileAvailable-C:-etc-kubernetes-pki-ca.crt]: C:/etc/kubernetes/pki/ca.crt already exists
winw1: [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
winw1: To see the stack trace of this error execute with --v=5 or higher
[swdt-mage] kubectl get nodes | grep winw1 - attempt 3 of 10
[swdt-mage] exec: vagrant "ssh" "controlplane" "-c" "kubectl get nodes"
Connection to 127.0.0.1 closed.
[swdt-mage] [Vagrantfile] Loading default settings from settings.yaml
NAME STATUS ROLES AGE VERSION
controlplane Ready control-plane 22m v1.26.4-15+1f16a36a2abdae
win-8vnbvnmjau2 NotReady <none> 11m v1.26.4-15+1f16a36a2abdae
[swdt-mage] vagrant provision winw1 - attempt 3 of 10
[swdt-mage] exec: vagrant "provision" "winw1"
[Vagrantfile] Loading default settings from settings.yaml
==> winw1: Running provisioner: shell...
winw1: Running: sync/windows/0-containerd.ps1 as C:\tmp\vagrant-shell.ps1
winw1: Stopping ContainerD & Kubelet
winw1: Downloading Calico using ContainerD - [calico: 3.25] [containerd: 1.6.15]
winw1: Installing 7Zip
winw1: Getting ContainerD binaries
winw1: Downloading https://github.com/containerd/containerd/releases/download/v1.6.15/containerd-1.6.15-windows-amd64.tar.gz to C:\Program Files\containerd\containerd.tar.gz
winw1: x containerd-shim-runhcs-v1.exe
winw1: x ctr.exe
winw1: x containerd-stress.exe
winw1: x containerd.exe
winw1: Registering ContainerD as a service
winw1: time="2023-04-29T05:25:37.049864900-07:00" level=fatal msg="The specified service already exists."
winw1: Starting ContainerD service
winw1: Done - please remember to add '--cri-socket "npipe:////./pipe/containerd-containerd"' to your kubeadm join command
==> winw1: Running provisioner: shell...
winw1: Running: sync/windows/forked.ps1 as C:\tmp\vagrant-shell.ps1
winw1:
winw1:
winw1: Directory: C:\
winw1:
winw1:
winw1: Mode LastWriteTime Length Name
winw1: ---- ------------- ------ ----
winw1: d----- 1/21/2022 3:44 AM k
winw1:
winw1:
==> winw1: Running provisioner: shell...
winw1: Running: sync/shared/kubejoin.ps1 as C:\tmp\vagrant-shell.ps1
winw1: [preflight] Running pre-flight checks
winw1: error execution phase preflight: [preflight] Some fatal errors occurred:
winw1: [ERROR FileAvailable-\etc\kubernetes\kubelet.conf]: \etc\kubernetes\kubelet.conf already exists
winw1: [ERROR FileAvailable-C:-etc-kubernetes-pki-ca.crt]: C:/etc/kubernetes/pki/ca.crt already exists
winw1: [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
winw1: To see the stack trace of this error execute with --v=5 or higher
[swdt-mage] kubectl get nodes | grep winw1 - attempt 4 of 10
[swdt-mage] exec: vagrant "ssh" "controlplane" "-c" "kubectl get nodes"
Error: running "vagrant ssh controlplane -c kubectl get nodes" failed with exit code 255
# Vagrant and Kubernetes default configuration
#
# The settings.yaml is the default configuration file loaded by Mage and Vagrant.
#
# Defaults can be overridden with a modified copy of this file, passed via either
# 1. the environment variable SWDT_SETTINGS_FILE=my.yaml, or
# 2. a user-specific file named settings.local.yaml
## Vagrant machines
# Assume VirtualBox and the VirtualBox Guest Additions in the Vagrant box are
# up to date, for faster machine creation. If Vagrant fails due to a version
# mismatch, enable the auto update.
vagrant_vbguest_auto_update: false
# Linux control plane
vagrant_linux_box: "mloskot/sig-windows-dev-tools-ubuntu-2204"
vagrant_linux_box_version: "1.0"
vagrant_linux_cpus: 2
vagrant_linux_ram: 4096
# Windows worker node
vagrant_windows_box: "mloskot/sig-windows-dev-tools-windows-2019"
vagrant_windows_box_version: "1.0"
vagrant_windows_cpus: 2
vagrant_windows_ram: 6048
vagrant_windows_max_provision_attempts: 10 # Try vagrant up/provision this number of times before giving up
## Kubernetes
# Download pre-built Kubernetes binaries (false) or build them locally from source (true).
kubernetes_build_from_source: false
# Pick major and minor; the patch level will be the latest released.
kubernetes_version: "1.26"
## Container Runtime (containerd only supported)
containerd_version: "1.6.15"
## Calico
calico_version: "3.25.0"
## Networking
# Supported values: "calico" or "antrea"
cni: "calico"
# IP of the Kubernetes API server deployed on the Linux control plane node,
# also registered with kubelet as the control plane node IP.
# Appears in
# - InitConfiguration.localAPIEndpoint.advertiseAddress
# - InitConfiguration.localAPIEndpoint.nodeRegistration.node-ip
linux_node_ip: "10.20.30.10"
# IP of the Windows node.
# Appears in /var/lib/kubelet/kubeadm-flags.env for kubelet on Windows node.
windows_node_ip: "10.20.30.11"
# IP range of the subnet used by pods.
# Appears in ClusterConfiguration.podSubnet
pod_cidr: "100.244.0.0/16"
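Pointing the tooling at a custom copy of this file is just an environment variable, per option 1 in the header comments; a minimal sketch:

```shell
# Override the defaults with a local copy, as described in the header
# comments (option 1: the SWDT_SETTINGS_FILE environment variable).
export SWDT_SETTINGS_FILE=settings.local.yaml
echo "Using settings from: ${SWDT_SETTINGS_FILE}"
```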