TASK [kubernetes/master : Backup old certs and keys] *******************************************
task path: /home/centos/kubespray/roles/kubernetes/master/tasks/kubeadm-certificate.yml:2
Thursday 14 March 2019 06:05:10 +0000 (0:00:01.655) 0:06:23.774 ********
TASK [kubernetes/master : Remove old certs and keys] *******************************************
task path: /home/centos/kubespray/roles/kubernetes/master/tasks/kubeadm-certificate.yml:16
Thursday 14 March 2019 06:05:10 +0000 (0:00:00.245) 0:06:24.019 ********
TASK [kubernetes/master : Generate new certs and keys] *****************************************
task path: /home/centos/kubespray/roles/kubernetes/master/tasks/kubeadm-certificate.yml:28
Thursday 14 March 2019 06:05:10 +0000 (0:00:00.235) 0:06:24.255 ********
TASK [kubernetes/master : Generate new certs and keys] *****************************************
task path: /home/centos/kubespray/roles/kubernetes/master/tasks/kubeadm-certificate.yml:36
Thursday 14 March 2019 06:05:10 +0000 (0:00:00.164) 0:06:24.419 ********
TASK [kubernetes/master : kubeadm | Initialize first master] ***********************************
task path: /home/centos/kubespray/roles/kubernetes/master/tasks/kubeadm-setup.yml:104
Thursday 14 March 2019 06:05:10 +0000 (0:00:00.162) 0:06:24.581 ********
fatal: [ip-172-16-0-157.cn-northwest-1.compute.internal]: FAILED! => {
    "changed": true,
    "cmd": [
        "timeout",
        "-k",
        "600s",
        "600s",
        "/usr/local/bin/kubeadm",
        "init",
        "--config=/etc/kubernetes/kubeadm-config.yaml",
        "--ignore-preflight-errors=all"
    ],
    "delta": "0:01:58.518400",
    "end": "2019-03-14 06:07:09.767123",
    "failed_when_result": true,
    "invocation": {
        "module_args": {
            "_raw_params": "timeout -k 600s 600s /usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=all",
            "_uses_shell": false,
            "argv": null,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "stdin": null,
            "warn": true
        }
    },
    "msg": "non-zero return code",
    "rc": 1,
    "start": "2019-03-14 06:05:11.248723",
    "stderr": "error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster",
    "stderr_lines": [
        "error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster"
    ],
"stdout": "[init] Using Kubernetes version: v1.13.3\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Activating the kubelet service\n[certs] Using certificateDir folder \"/etc/kubernetes/ssl\"\n[certs] Generating \"ca\" certificate and key\n[certs] Generating \"apiserver\" certificate and key\n[certs] apiserver serving cert is signed for DNS names [ip-172-16-0-157.cn-northwest-1.compute.internal kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.staging.k8s.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.staging.k8s.local localhost ip-172-16-0-157.cn-northwest-1.compute.internal ip-172-16-0-231.cn-northwest-1.compute.internal] and IPs [10.233.0.1 172.16.0.157 172.16.0.157 10.233.0.1 127.0.0.1 172.16.0.157 172.16.0.231]\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] External etcd mode: Skipping etcd/ca certificate authority generation\n[certs] External etcd mode: Skipping etcd/server certificate authority generation\n[certs] External etcd mode: Skipping etcd/healthcheck-client certificate authority generation\n[certs] External etcd mode: Skipping etcd/peer certificate authority generation\n[certs] External etcd mode: Skipping apiserver-etcd-client certificate authority generation\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[controlplane] Adding extra host path mount \"cloud-config\" to \"kube-apiserver\"\n[controlplane] Adding extra host path mount \"etc-pki-tls\" to \"kube-apiserver\"\n[controlplane] Adding extra host path mount \"etc-pki-ca-trust\" to \"kube-apiserver\"\n[controlplane] Adding extra host path mount \"cloud-config\" to \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[controlplane] Adding extra host path mount \"cloud-config\" to \"kube-apiserver\"\n[controlplane] Adding extra host path mount \"etc-pki-tls\" to \"kube-apiserver\"\n[controlplane] Adding extra host path mount \"etc-pki-ca-trust\" to \"kube-apiserver\"\n[controlplane] Adding extra host path mount \"cloud-config\" to \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[controlplane] Adding extra host path mount \"cloud-config\" to \"kube-apiserver\"\n[controlplane] Adding extra host path mount \"etc-pki-tls\" to \"kube-apiserver\"\n[controlplane] Adding extra host path mount \"etc-pki-ca-trust\" to \"kube-apiserver\"\n[controlplane] Adding extra host path mount \"cloud-config\" to 
\"kube-controller-manager\"\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 5m0s\n[kubelet-check] Initial timeout of 40s passed.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.\n\nUnfortunately, an error has occurred:\n\ttimed out waiting for the condition\n\nThis error is likely caused by:\n\t- The kubelet is not running\n\t- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)\n\nIf you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:\n\t- 'systemctl status kubelet'\n\t- 'journalctl -xeu kubelet'\n\nAdditionally, a control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.\nHere is one example how you may list all Kubernetes containers running in docker:\n\t- 'docker ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'docker logs CONTAINERID'",
"stdout_lines": [
"[init] Using Kubernetes version: v1.13.3",
"[preflight] Running pre-flight checks",
"[preflight] Pulling images required for setting up a Kubernetes cluster",
"[preflight] This might take a minute or two, depending on the speed of your internet connection",
"[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'",
"[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
"[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
"[kubelet-start] Activating the kubelet service",
"[certs] Using certificateDir folder \"/etc/kubernetes/ssl\"",
"[certs] Generating \"ca\" certificate and key",
"[certs] Generating \"apiserver\" certificate and key",
"[certs] apiserver serving cert is signed for DNS names [ip-172-16-0-157.cn-northwest-1.compute.internal kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.staging.k8s.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.staging.k8s.local localhost ip-172-16-0-157.cn-northwest-1.compute.internal ip-172-16-0-231.cn-northwest-1.compute.internal] and IPs [10.233.0.1 172.16.0.157 172.16.0.157 10.233.0.1 127.0.0.1 172.16.0.157 172.16.0.231]",
"[certs] Generating \"apiserver-kubelet-client\" certificate and key",
"[certs] Generating \"front-proxy-ca\" certificate and key",
"[certs] Generating \"front-proxy-client\" certificate and key",
"[certs] External etcd mode: Skipping etcd/ca certificate authority generation",
"[certs] External etcd mode: Skipping etcd/server certificate authority generation",
"[certs] External etcd mode: Skipping etcd/healthcheck-client certificate authority generation",
"[certs] External etcd mode: Skipping etcd/peer certificate authority generation",
"[certs] External etcd mode: Skipping apiserver-etcd-client certificate authority generation",
"[certs] Generating \"sa\" key and public key",
"[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"",
"[kubeconfig] Writing \"admin.conf\" kubeconfig file",
"[kubeconfig] Writing \"kubelet.conf\" kubeconfig file",
"[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file",
"[kubeconfig] Writing \"scheduler.conf\" kubeconfig file",
"[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"",
"[control-plane] Creating static Pod manifest for \"kube-apiserver\"",
"[controlplane] Adding extra host path mount \"cloud-config\" to \"kube-apiserver\"",
"[controlplane] Adding extra host path mount \"etc-pki-tls\" to \"kube-apiserver\"",
"[controlplane] Adding extra host path mount \"etc-pki-ca-trust\" to \"kube-apiserver\"",
"[controlplane] Adding extra host path mount \"cloud-config\" to \"kube-controller-manager\"",
"[control-plane] Creating static Pod manifest for \"kube-controller-manager\"",
"[controlplane] Adding extra host path mount \"cloud-config\" to \"kube-apiserver\"",
"[controlplane] Adding extra host path mount \"etc-pki-tls\" to \"kube-apiserver\"",
"[controlplane] Adding extra host path mount \"etc-pki-ca-trust\" to \"kube-apiserver\"",
"[controlplane] Adding extra host path mount \"cloud-config\" to \"kube-controller-manager\"",
"[control-plane] Creating static Pod manifest for \"kube-scheduler\"",
"[controlplane] Adding extra host path mount \"cloud-config\" to \"kube-apiserver\"",
"[controlplane] Adding extra host path mount \"etc-pki-tls\" to \"kube-apiserver\"",
"[controlplane] Adding extra host path mount \"etc-pki-ca-trust\" to \"kube-apiserver\"",
"[controlplane] Adding extra host path mount \"cloud-config\" to \"kube-controller-manager\"",
"[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 5m0s",
"[kubelet-check] Initial timeout of 40s passed.",
"[kubelet-check] It seems like the kubelet isn't running or healthy.",
"[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.",
"[kubelet-check] It seems like the kubelet isn't running or healthy.",
"[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.",
"[kubelet-check] It seems like the kubelet isn't running or healthy.",
"[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.",
"[kubelet-check] It seems like the kubelet isn't running or healthy.",
"[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.",
"[kubelet-check] It seems like the kubelet isn't running or healthy.",
"[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.",
"",
"Unfortunately, an error has occurred:",
"\ttimed out waiting for the condition",
"",
"This error is likely caused by:",
"\t- The kubelet is not running",
"\t- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)",
"",
"If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:",
"\t- 'systemctl status kubelet'",
"\t- 'journalctl -xeu kubelet'",
"",
"Additionally, a control plane component may have crashed or exited when started by the container runtime.",
"To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.",
"Here is one example how you may list all Kubernetes containers running in docker:",
"\t- 'docker ps -a | grep kube | grep -v pause'",
"\tOnce you have found the failing container, you can inspect its logs with:",
"\t- 'docker logs CONTAINERID'"
]
}
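
The failure above is the kubelet never answering its health check on 127.0.0.1:10248, so kubeadm times out in the wait-control-plane phase. A minimal triage sketch for the failed master, built from the commands kubeadm itself suggests in the output (the cgroup-driver comparison at the end is an extra, assumed-common cause of a kubelet that dies at startup, not something this log confirms):

    # Is the kubelet running, and why did it stop?
    systemctl status kubelet
    journalctl -xeu kubelet

    # Reproduce the health check kubeadm was polling
    curl -sSL http://localhost:10248/healthz

    # Did a control-plane container crash after starting?
    docker ps -a | grep kube | grep -v pause
    docker logs CONTAINERID    # substitute the ID of the failing container

    # Assumption: a Docker vs. kubelet cgroup-driver mismatch is a common
    # reason the kubelet exits immediately; compare the two settings
    docker info 2>/dev/null | grep -i 'cgroup driver'
    grep -i cgroup /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml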
NO MORE HOSTS LEFT *****************************************************************************
to retry, use: --limit @/home/centos/kubespray/cluster.retry
PLAY RECAP *************************************************************************************
ip-172-16-0-157.cn-northwest-1.compute.internal : ok=318 changed=52 unreachable=0 failed=1
ip-172-16-0-231.cn-northwest-1.compute.internal : ok=291 changed=49 unreachable=0 failed=0
ip-172-16-0-32.cn-northwest-1.compute.internal : ok=275 changed=46 unreachable=0 failed=0
localhost : ok=1 changed=0 unreachable=0 failed=0
Thursday 14 March 2019 06:07:09 +0000 (0:01:58.811) 0:08:23.393 ********
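
Once the kubelet problem on ip-172-16-0-157 is fixed, the retry hint above maps to an invocation like the following (a sketch: cluster.yml is Kubespray's standard playbook, but the inventory path is a placeholder, so substitute your own):

    cd /home/centos/kubespray
    ansible-playbook -i inventory/mycluster/hosts.ini -b cluster.yml \
        --limit @/home/centos/kubespray/cluster.retry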