kubectl logs - 405 Method Not Allowed
So, this happened. Shirley I must be missing something!
I was unable to see k8s logs with "kubectl logs". I was running a single-node cluster on Ubuntu 20.04.3.
I initially started out with 1.27, and then upgraded to 1.28.
Somehow, across at least four "shutdown -r now" reboots of the server, the inability to read logs with kubectl persisted.
This was my error. I'm using coredns here as an easy obfuscation method.
I asked for help on the Kubernetes Slack channel, and @liptan has been helping me out so far.
kubectl logs -n kube-system coredns-5d78c9869d-6cdfh
Error from server (MethodNotAllowed): the server does not allow this method on the requested resource ( pods/log coredns-5d78c9869d-6cdfh)
Ok, let's check my clusterrole:
kubectl get clusterroles/cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2023-02-09T08:38:40Z"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
  resourceVersion: "73"
  uid: 7500e0d2-9c91-433a-85d6-d31af6d0c374
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
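A faster cross-check than eyeballing the ClusterRole YAML is to ask the apiserver directly with `kubectl auth can-i`, naming the `pods/log` subresource (a sketch; run it against the same kubeconfig):

```shell
# Ask the apiserver whether the current user may read the pods/log
# subresource (RBAC evaluates subresources separately from plain "pods").
kubectl auth can-i get pods/log -n kube-system
```

Either answer points away from RBAC here: an authorization failure comes back as 403 Forbidden, whereas 405 MethodNotAllowed means some handler rejected the HTTP method itself.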
That clusterrole looks good. Let's check my clusterrolebinding:
kubectl get clusterrolebinding/cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2023-02-09T08:38:40Z"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
  resourceVersion: "136"
  uid: e9270e44-d6e3-47bc-a413-4ff8cc02cce4
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters
Yeah, that looks good too, so who the fuck am I?
kubectl auth whoami
ATTRIBUTE   VALUE
Username    kubernetes-admin
Groups      [system:masters system:authenticated]
Phew! Some essence of sanity: it seems I am who I thought I should be.
Let's check some logs. I increased kubectl's verbosity here (this is the client-side view of the request):
GET https://<endpoint>/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-6cdfh/log?container=coredns 405 Method Not Allowed in 3 milliseconds
I1012 21:31:06.844738 703505 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"the server does not allow this method on the requested resource ( pods/log coredns-5d78c9869d-6cdfh)","reason":"MethodNotAllowed","details":{"name":"coredns-5d78c9869d-6cdfh","kind":"pods/log"},"code":405}
I1012 21:31:06.844895 703505 helpers.go:246] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "the server does not allow this method on the requested resource ( pods/log coredns-5d78c9869d-6cdfh)",
  "reason": "MethodNotAllowed",
  "details": {
    "name": "coredns-5d78c9869d-6cdfh",
    "kind": "pods/log"
  },
  "code": 405
}]
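(A dump like this is what you get by raising kubectl's verbosity, e.g. `kubectl logs ... -v=8`, which prints each request and response body from client-go.) The body is a machine-readable Status object, so once it's saved to a file the interesting fields can be pulled out with plain grep. A small sketch against the exact body above:

```shell
# The 405 response body, as logged by client-go above.
BODY='{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"the server does not allow this method on the requested resource ( pods/log coredns-5d78c9869d-6cdfh)","reason":"MethodNotAllowed","details":{"name":"coredns-5d78c9869d-6cdfh","kind":"pods/log"},"code":405}'

# Extract the machine-readable reason and HTTP code.
printf '%s\n' "$BODY" | grep -o '"reason":"[^"]*"'
printf '%s\n' "$BODY" | grep -o '"code":[0-9]*'
```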
Ok, so what does the increased verbosity on the apiserver show me now?
I1012 11:09:25.754979 1 apf_controller.go:989] startRequest(RequestDigest{RequestInfo: &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-6cdfh", Verb:"get", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"kube-system", Resource:"pods", Subresource:"", Name:"coredns-5d78c9869d-6cdfh", Parts:[]string{"pods", "coredns-5d78c9869d-6cdfh"}}, User: &user.DefaultInfo{Name:"kubernetes-admin", UID:"", Groups:[]string{"system:masters", "system:authenticated"}, Extra:map[string][]string(nil)}})
I1012 11:09:25.755047 1 apf_controller.go:1037] startRequest(RequestDigest{RequestInfo: &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-6cdfh", Verb:"get", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"kube-system", Resource:"pods", Subresource:"", Name:"coredns-5d78c9869d-6cdfh", Parts:[]string{"pods", "coredns-5d78c9869d-6cdfh"}}, User: &user.DefaultInfo{Name:"kubernetes-admin", UID:"", Groups:[]string{"system:masters", "system:authenticated"}, Extra:map[string][]string(nil)}}) => fsName="exempt", distMethod=(*v1beta3.FlowDistinguisherMethod)(nil), plName="exempt", numQueues=0
I1012 11:09:25.755110 1 queueset.go:742] QS(exempt) at t=2023-10-12 11:09:25.755078148 R=0.00000000ss: immediate dispatch of request "exempt" &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-6cdfh", Verb:"get", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"kube-system", Resource:"pods", Subresource:"", Name:"coredns-5d78c9869d-6cdfh", Parts:[]string{"pods", "coredns-5d78c9869d-6cdfh"}} &user.DefaultInfo{Name:"kubernetes-admin", UID:"", Groups:[]string{"system:masters", "system:authenticated"}, Extra:map[string][]string(nil)}, qs will have 1 executing
I1012 11:09:25.755148 1 apf_filter.go:174] Handle(RequestDigest{RequestInfo: &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-6cdfh", Verb:"get", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"kube-system", Resource:"pods", Subresource:"", Name:"coredns-5d78c9869d-6cdfh", Parts:[]string{"pods", "coredns-5d78c9869d-6cdfh"}}, User: &user.DefaultInfo{Name:"kubernetes-admin", UID:"", Groups:[]string{"system:masters", "system:authenticated"}, Extra:map[string][]string(nil)}}) => fsName="exempt", distMethod=(*v1beta3.FlowDistinguisherMethod)(nil), plName="exempt", isExempt=true, queued=false
I1012 11:09:25.755181 1 queueset.go:433] QS(exempt): Dispatching request &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-6cdfh", Verb:"get", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"kube-system", Resource:"pods", Subresource:"", Name:"coredns-5d78c9869d-6cdfh", Parts:[]string{"pods", "coredns-5d78c9869d-6cdfh"}} &user.DefaultInfo{Name:"kubernetes-admin", UID:"", Groups:[]string{"system:masters", "system:authenticated"}, Extra:map[string][]string(nil)} from its queue
I1012 11:09:25.755232 1 handler.go:153] kube-aggregator: GET "/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-6cdfh" satisfied by nonGoRestful
I1012 11:09:25.755248 1 pathrecorder.go:248] kube-aggregator: "/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-6cdfh" satisfied by prefix /api/
I1012 11:09:25.755264 1 handler.go:143] kube-apiserver: GET "/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-6cdfh" satisfied by gorestful with webservice /api/v1
I1012 11:09:25.757137 1 queueset.go:967] QS(exempt) at t=2023-10-12 11:09:25.757116951 R=0.00000000ss: request &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-6cdfh", Verb:"get", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"kube-system", Resource:"pods", Subresource:"", Name:"coredns-5d78c9869d-6cdfh", Parts:[]string{"pods", "coredns-5d78c9869d-6cdfh"}} &user.DefaultInfo{Name:"kubernetes-admin", UID:"", Groups:[]string{"system:masters", "system:authenticated"}, Extra:map[string][]string(nil)} finished all use of 1 seats, qs will have 0 requests occupying 0 seats
I1012 11:09:25.757169 1 apf_filter.go:178] Handle(RequestDigest{RequestInfo: &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-6cdfh", Verb:"get", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"kube-system", Resource:"pods", Subresource:"", Name:"coredns-5d78c9869d-6cdfh", Parts:[]string{"pods", "coredns-5d78c9869d-6cdfh"}}, User: &user.DefaultInfo{Name:"kubernetes-admin", UID:"", Groups:[]string{"system:masters", "system:authenticated"}, Extra:map[string][]string(nil)}}) => fsName="exempt", distMethod=(*v1beta3.FlowDistinguisherMethod)(nil), plName="exempt", isExempt=true, queued=false, Finish() => panicking=false idle=true
I1012 11:09:25.757225 1 httplog.go:132] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-6cdfh" latency="3.114466ms" userAgent="kubectl/v1.28.2 (linux/amd64) kubernetes/89a4ea3" audit-ID="e100bea6-9775-4e7d-9fa6-15f9b7189707" srcIP="10.0.0.1:43030" apf_pl="exempt" apf_fs="exempt" apf_iseats=1 apf_fseats=0 apf_additionalLatency="0s" apf_execution_time="1.905063ms" resp=200
I1012 11:09:25.760010 1 priority-and-fairness.go:100] Serving RequestInfo=&request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-6cdfh/log", Verb:"get", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"kube-system", Resource:"pods", Subresource:"log", Name:"coredns-5d78c9869d-6cdfh", Parts:[]string{"pods", "coredns-5d78c9869d-6cdfh", "log"}}, user.Info=&user.DefaultInfo{Name:"kubernetes-admin", UID:"", Groups:[]string{"system:masters", "system:authenticated"}, Extra:map[string][]string(nil)} as longrunning
I1012 11:09:25.760058 1 handler.go:153] kube-aggregator: GET "/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-6cdfh/log" satisfied by nonGoRestful
I1012 11:09:25.760071 1 pathrecorder.go:248] kube-aggregator: "/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-6cdfh/log" satisfied by prefix /api/
I1012 11:09:25.760083 1 handler.go:143] kube-apiserver: GET "/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-6cdfh/log" satisfied by gorestful with webservice /api/v1
I1012 11:09:25.761468 1 round_trippers.go:466] curl -v -XGET 'https://10.0.0.1:10250/containerLogs/kube-system/coredns-5d78c9869d-6cdfh/coredns'
I1012 11:09:25.763737 1 round_trippers.go:553] GET https://10.0.0.1:10250/containerLogs/kube-system/coredns-5d78c9869d-6cdfh/coredns 405 Method Not Allowed in 2 milliseconds
I1012 11:09:25.763928 1 httplog.go:132] "HTTP" verb="CONNECT" URI="/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-6cdfh/log?container=coredns" latency="4.363387ms" userAgent="kubectl/v1.28.2 (linux/amd64) kubernetes/89a4ea3" audit-ID="081cd59c-db7c-4153-92a7-89a328ad5f7b" srcIP="10.0.0.1:43030" resp=405
I1012 11:09:41.169569 1 apf_controller.go:989] startRequest(RequestDigest{RequestInfo: &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-6cdfh", Verb:"get", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"kube-system", Resource:"pods", Subresource:"", Name:"coredns-5d78c9869d-6cdfh", Parts:[]string{"pods", "coredns-5d78c9869d-6cdfh"}}, User: &user.DefaultInfo{Name:"kubernetes-admin", UID:"", Groups:[]string{"system:masters", "system:authenticated"}, Extra:map[string][]string(nil)}})
I1012 11:09:41.169613 1 apf_controller.go:1037] startRequest(RequestDigest{RequestInfo: &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-6cdfh", Verb:"get", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"kube-system", Resource:"pods", Subresource:"", Name:"coredns-5d78c9869d-6cdfh", Parts:[]string{"pods", "coredns-5d78c9869d-6cdfh"}}, User: &user.DefaultInfo{Name:"kubernetes-admin", UID:"", Groups:[]string{"system:masters", "system:authenticated"}, Extra:map[string][]string(nil)}
Noice, I can see more about the error now. I pasted the above verbose output in the Kubernetes Slack channel,
and @liptan informed me that kubelet listens on 10250! Thanks @liptan! Now I can dig into more stuff, mainly kubelet.
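That explains the curl line in the apiserver log above: `kubectl logs` is really the apiserver proxying to the kubelet's containerLogs endpoint on port 10250. A sketch of how that URL is assembled (values taken from the logs above; the IP is the made-up one):

```shell
# The apiserver fetches pod logs from the kubelet at:
#   https://<node>:10250/containerLogs/<namespace>/<pod>/<container>
NODE_IP=10.0.0.1            # made-up node IP from the logs above
NS=kube-system
POD=coredns-5d78c9869d-6cdfh
CONTAINER=coredns
echo "https://${NODE_IP}:10250/containerLogs/${NS}/${POD}/${CONTAINER}"
```

So a 405 on this request is the kubelet (or whatever is answering on 10250) refusing the GET, not the apiserver itself.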
How is kubelet configured? I'd better add some verbosity to it. Let's try verbosity level 4 and restart kubelet.
sudo cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.8 --v=4"
PC Load Letter, WTF? https://www.youtube.com/watch?v=5QQdNbvSGok
Why is it working all of a sudden??? I didn't understand then, aaand I still don't.
My tmux session #1
history | grep -E '20231012\.22:[1-5]'
9767 20231012.22:14:40+1100 cp /tmp/ttt ~/kubectl-logs-coredns-apiserver-logs.txt
9768 20231012.22:15:37+1100 sudo crictl logs -f 15d66415344ff 2>&1 | grep -i coredns > /tmp/tttt
9769 20231012.22:46:31+1100 journalctl -fu kubelet
9770 20231012.22:49:54+1100 journalctl -fu kubelet | grep -Ev 'keycloak'
9771 20231012.22:50:02+1100 journalctl -fu kubelet | grep -Ev 'keycloak|RemoveContainer'
9772 20231012.22:50:44+1100 systemctl status kubelet
9773 20231012.22:50:52+1100 #Environment="KUBELET_LOG_LEVEL=2"
9774 20231012.22:51:04+1100 vim /lib/systemd/system/kubelet.service
9775 20231012.22:51:24+1100 vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
9776 20231012.22:51:28+1100 sudo vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
9777 20231012.22:52:02+1100 sudo systemctl daemon-reload
9778 20231012.22:52:09+1100 sudo systemctl restart kubelet
9779 20231012.22:52:15+1100 kubectl get pod
9780 20231012.22:52:19+1100 journalctl -fu kubelet | grep -Ev 'keycloak|RemoveContainer'
9781 20231012.22:52:33+1100 sudo vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
9782 20231012.22:52:41+1100 sudo systemctl daemon-reload
9783 20231012.22:52:46+1100 sudo systemctl restart kubelet
9784 20231012.22:53:28+1100 fg
9785 20231012.22:53:34+1100 sudo systemctl daemon-reload
9786 20231012.22:53:37+1100 sudo systemctl restart kubelet
9787 20231012.22:54:16+1100 kubectl get pod
9788 20231012.22:54:22+1100 ps -ef | grep -i apiser
9789 20231012.22:54:31+1100 journalctl -fu kubelet
9790 20231012.22:54:42+1100 kubectl get pod
9791 20231012.22:55:08+1100 journalctl -fu kubelet
9792 20231012.22:55:49+1100 journalctl -fu kubelet | grep -i coredns
9793 20231012.22:56:30+1100 fg
9794 20231012.22:56:48+1100 sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
9795 20231012.22:58:10+1100 sudo ls -l /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
9796 20231012.22:58:54+1100 jobs -l
9797 20231012.22:59:18+1100 history|tail -50
My tmux session #2
9671 20231012.22:23:34+1100 mkdir kubectl-logs-405
9672 20231012.22:23:52+1100 mv kubectl-logs-coredns-apiserver-logs.txt kubectl-logs-405/
9673 20231012.22:25:00+1100 cp /tmp/tttt kubectl-logs-405/kubectl-get-pod_coredns-apiserver-logs.txt
9674 20231012.22:25:02+1100 cd kubectl-logs-405/
9675 20231012.22:25:03+1100 ls -tlr
9676 20231012.22:25:18+1100 mv kubectl-logs-coredns-apiserver-logs.txt kubectl-logs-coredns_apiserver-logs.txt
9677 20231012.22:25:33+1100 mv kubectl-get-pod_coredns-apiserver-logs.txt kubectl-get-pod-coredns_apiserver-logs.txt
9678 20231012.22:27:17+1100 ls -ltr
9679 20231012.22:27:22+1100 grep 10250 *
9680 20231012.22:28:47+1100 sudo ls -l /etc/kubernetes/manifests/
9681 20231012.22:29:05+1100 systemctl status kubelet
9682 20231012.22:29:22+1100 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
9683 20231012.22:29:30+1100 ls -l -/var/lib/kubelet/kubeadm-flags.env
9684 20231012.22:29:32+1100 ls -l /var/lib/kubelet/kubeadm-flags.env
9685 20231012.22:29:35+1100 sudo ls -l /var/lib/kubelet/kubeadm-flags.env
9686 20231012.22:29:39+1100 sudo cat /var/lib/kubelet/kubeadm-flags.env
9687 20231012.22:29:56+1100 sudo ls -l /var/lib/kubelet/config.yaml
9688 20231012.22:29:59+1100 sudo cat /var/lib/kubelet/config.yaml
9689 20231012.22:30:40+1100 sudo ls -l /etc/kubernetes/manifests/
9690 20231012.22:30:47+1100 sudo cat /etc/kubernetes/manifests/kube-controller-manager.yaml
9691 20231012.22:31:04+1100 sudo ls -l /etc/kubernetes/
9692 20231012.22:31:12+1100 sudo cat /etc/kubernetes/kubelet.conf
9693 20231012.22:31:18+1100 sudo ls -l /etc/kubernetes/
9694 20231012.22:31:21+1100 sudo ls -l /etc/kubernetes/tmp
9695 20231012.22:31:36+1100 sudo ls -l /etc/kubernetes/tmp/kubeadm-kubelet-config1546982341
9696 20231012.22:31:40+1100 sudo ls -l /etc/kubernetes/tmp/kubeadm-kubelet-config1546982341/config.yaml
9697 20231012.22:31:43+1100 sudo cat /etc/kubernetes/tmp/kubeadm-kubelet-config1546982341/config.yaml
9698 20231012.22:32:13+1100 sudo diff /etc/kubernetes/tmp/kubeadm-kubelet-config1546982341/config.yaml /etc/kubernetes/tmp/kubeadm-kubelet-config1156805868
9699 20231012.22:32:53+1100 ls-l /etc/kubernetes/tmp/kubeadm-backup-manifests-2023-10-10-12-26-23
9700 20231012.22:32:56+1100 sudo ls -l /etc/kubernetes/tmp/kubeadm-backup-manifests-2023-10-10-12-26-23
9701 20231012.22:33:40+1100 sudo ls -l /etc/kubernetes/tmp/kubeadm-backup-manifests-2023-10-10-12-26-23 /etc/kubernetes/tmp/kubeadm-backup-manifests-2023-07-12-13-11-01 /etc/kubernetes/tmp/kubeadm-backup-manifests-2023-07-12-12-55-30
9702 20231012.22:35:49+1100 diff /etc/kubernetes/tmp/kubeadm-backup-manifests-2023-07-12-13-11-01/kube-scheduler.yaml /etc/kubernetes/tmp/kubeadm-backup-manifests-2023-10-10-12-26-23/kube-scheduler.yaml
9703 20231012.22:35:52+1100 sudo diff /etc/kubernetes/tmp/kubeadm-backup-manifests-2023-07-12-13-11-01/kube-scheduler.yaml /etc/kubernetes/tmp/kubeadm-backup-manifests-2023-10-10-12-26-23/kube-scheduler.yaml
9704 20231012.22:36:08+1100 sudo diff /etc/kubernetes/tmp/kubeadm-backup-manifests-2023-07-12-13-11-01/kube-controller-manager.yaml /etc/kubernetes/tmp/kubeadm-backup-manifests-2023-10-10-12-26-23/kube-controller-manager.yaml
9705 20231012.22:36:20+1100 sudo diff /etc/kubernetes/tmp/kubeadm-backup-manifests-2023-07-12-13-11-01/kube-apiserver.yaml /etc/kubernetes/tmp/kubeadm-backup-manifests-2023-10-10-12-26-23/kube-apiserver.yaml
9706 20231012.22:36:44+1100 sudo diff /etc/kubernetes/tmp/kubeadm-backup-manifests-2023-07-12-13-11-01/ /etc/kubernetes/tmp/kubeadm-backup-manifests-2023-10-10-12-26-23/etcd.yaml
9707 20231012.22:36:58+1100 sudo ls -l find
9708 20231012.22:37:03+1100 sudo find /etc/kubernetes/
9709 20231012.22:37:29+1100 sudo find /etc/kubernetes/ -exec ls -l {} \;
9710 20231012.22:37:56+1100 sudo ls -l kubelet.conf
9711 20231012.22:38:03+1100 sudo ls -l/etc/kubernetes/kubelet.conf
9712 20231012.22:38:05+1100 sudo ls -l /etc/kubernetes/kubelet.conf
9713 20231012.22:38:09+1100 sudo cat /etc/kubernetes/kubelet.conf
9714 20231012.22:39:34+1100 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
9715 20231012.22:39:41+1100 ls -l /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
9716 20231012.22:39:54+1100 ls -l /var/lib/kubelet/config.yaml
9717 20231012.22:39:56+1100 sudo ls -l /var/lib/kubelet/config.yaml
9718 20231012.22:40:01+1100 sudo cat /var/lib/kubelet/config.yaml
9719 20231012.22:43:45+1100 kubectl get all,ds
9720 20231012.22:43:51+1100 kubectl get all,ds,ss
9721 20231012.22:43:54+1100 kubectl get all,ds,statefulset
9722 20231012.22:44:05+1100 ps -ef | grep -i kubelet
9723 20231012.22:44:14+1100 sudo cat /etc/kubernetes/bootstrap-kubelet.conf
9724 20231012.22:44:19+1100 sudo cat /etc/kubernetes/kubelet.conf
9725 20231012.22:44:27+1100 sudo cat /var/lib/kubelet/config.yaml
9726 20231012.22:45:25+1100 sudo vim /var/lib/kubelet/config.yaml
9727 20231012.22:45:36+1100 ps -ef | grep -i kubelet
9728 20231012.22:45:41+1100 kubectl get pod
9729 20231012.22:57:00+1100 sudo cat /var/lib/kubelet/config.yaml
9730 20231012.22:57:16+1100 sudo ls -l /var/lib/kubelet/config.yaml
9731 20231012.22:57:31+1100 sudo ls -l /var/lib/kubelet/
9732 20231012.22:57:50+1100 sudo ls -l /var/lib/kubelet/kubeadm-flags.env
9733 20231012.22:57:54+1100 sudo cat /var/lib/kubelet/kubeadm-flags.env
9734 20231012.22:59:33+1100 history | tail -50
I am pretty sure my config hasn't changed enough to warrant a fucking 405, yo!
sudo cat /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 192.168.192.10
clusterDomain: cluster.local
containerRuntimeEndpoint: ""
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 10
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
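Nothing in that config jumps out. `authorization.mode: Webhook` just means the kubelet asks the apiserver whether a caller may hit its endpoints, and a denial there would show up as 401/403, not 405. One more local sanity check worth keeping around (the address and port come from healthzBindAddress/healthzPort above; run it on the node itself):

```shell
# Probe the kubelet's local health endpoint from the node;
# a healthy kubelet typically answers "ok".
curl -s http://127.0.0.1:10248/healthz
```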
Whatevs dude, don't let this happen again, or I'll microsoft the shit out of you and reboot you!
kubectl logs kube-proxy-vbwl5
I1010 01:28:44.061203 1 server_others.go:69] "Using iptables proxy"
I1010 01:28:44.071013 1 node.go:141] Successfully retrieved node IP: 10.0.0.1 #### this IP is made up
I1010 01:28:44.074330 1 conntrack.go:52] "Setting nf_conntrack_max" nfConntrackMax=524288
I1010 01:28:44.092527 1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1010 01:28:44.093590 1 server_others.go:152] "Using iptables Proxier"
I1010 01:28:44.093622 1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
I1010 01:28:44.093627 1 server_others.go:438] "Defaulting to no-op detect-local"
I1010 01:28:44.093758 1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I1010 01:28:44.093963 1 server.go:846] "Version info" version="v1.28.2"
I1010 01:28:44.093971 1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1010 01:28:44.094881 1 config.go:188] "Starting service config controller"
I1010 01:28:44.094952 1 config.go:97] "Starting endpoint slice config controller"
I1010 01:28:44.095024 1 config.go:315] "Starting node config controller"
I1010 01:28:44.095193 1 shared_informer.go:311] Waiting for caches to sync for service config
I1010 01:28:44.095196 1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I1010 01:28:44.095194 1 shared_informer.go:311] Waiting for caches to sync for node config
I1010 01:28:44.195279 1 shared_informer.go:318] Caches are synced for endpoint slice config
I1010 01:28:44.195312 1 shared_informer.go:318] Caches are synced for node config
I1010 01:28:44.195337 1 shared_informer.go:318] Caches are synced for service config
E1012 10:58:21.685541 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: unknown (get endpointslices.discovery.k8s.io)
E1012 10:58:21.685541 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: unknown (get services)
Fucking Heisenbugs!! https://en.wikipedia.org/wiki/Heisenbug (NB: could be my own ignorance too)