@pnasrat
Created December 6, 2023 20:07
Diff of minikube logs with my PR
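For reference, a sketch of how a diff like the one below can be reproduced. The exact capture steps are not recorded in this gist, and the path to the PR-built binary (./out/minikube) is an assumption based on where minikube's make normally places its output; only minikube logs and diff -u are taken as given.

# Capture logs from the released minikube, then from a binary built from the
# PR branch, and compare the two (filenames match the diff header below).
minikube logs > logs-old.txt        # stock v1.32.0 output
./out/minikube logs > logs.txt      # PR build; path is an assumption
diff -u logs-old.txt logs.txt

The substantive change visible in the diff is that the PR drops the leading "* " prefix from each section header in minikube logs (e.g. "* ==> Audit <==" becomes "==> Audit <=="); the remaining hunks are just timestamps, uptimes, and pod ages drifting between the two capture runs, taken roughly 13 seconds apart.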
--- logs-old.txt 2023-12-06 15:03:50
+++ logs.txt 2023-12-06 15:04:03
@@ -1,14 +1,14 @@
-*
-* ==> Audit <==
-* |---------|------|----------|---------|---------|---------------------|---------------------|
+
+==> Audit <==
+|---------|------|----------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------|----------|---------|---------|---------------------|---------------------|
| start | | minikube | pnasrat | v1.32.0 | 06 Dec 23 15:00 EST | 06 Dec 23 15:00 EST |
|---------|------|----------|---------|---------|---------------------|---------------------|
-*
-* ==> Last Start <==
-* Log file created at: 2023/12/06 15:00:05
+
+==> Last Start <==
+Log file created at: 2023/12/06 15:00:05
Running on machine: qamar
Binary: Built with gc go1.21.4 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
@@ -828,9 +828,9 @@
I1206 15:00:50.085995 67405 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
I1206 15:00:50.090942 67405 out.go:177] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
-*
-* ==> Docker <==
-* Dec 06 20:00:39 minikube dockerd[575]: time="2023-12-06T20:00:39.765663858Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
+
+==> Docker <==
+Dec 06 20:00:39 minikube dockerd[575]: time="2023-12-06T20:00:39.765663858Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Dec 06 20:00:39 minikube dockerd[575]: time="2023-12-06T20:00:39.811237079Z" level=info msg="Loading containers: done."
Dec 06 20:00:39 minikube dockerd[575]: time="2023-12-06T20:00:39.818754858Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
Dec 06 20:00:39 minikube dockerd[575]: time="2023-12-06T20:00:39.818800234Z" level=info msg="Daemon has completed initialization"
@@ -891,29 +891,29 @@
Dec 06 20:01:09 minikube cri-dockerd[1218]: time="2023-12-06T20:01:09Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
Dec 06 20:01:32 minikube dockerd[1006]: time="2023-12-06T20:01:32.601517944Z" level=info msg="ignoring event" container=dd8901281b19e77d22be4f9e859d9dc2264bd6a03646b30351b1e659c93b03e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
-*
-* ==> container status <==
-* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
+
+==> container status <==
+CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
dc3ee77905540 ba04bb24b9575 2 minutes ago Running storage-provisioner 1 91bb1cd5f6a47 storage-provisioner
-ec371770d31f9 97e04611ad434 2 minutes ago Running coredns 0 b375fb6587981 coredns-5dd5756b68-pffc9
-03b4560b6ce8c 3ca3ca488cf13 2 minutes ago Running kube-proxy 0 9d2d18f2cfcf8 kube-proxy-99k9q
-dd8901281b19e ba04bb24b9575 2 minutes ago Exited storage-provisioner 0 91bb1cd5f6a47 storage-provisioner
+ec371770d31f9 97e04611ad434 3 minutes ago Running coredns 0 b375fb6587981 coredns-5dd5756b68-pffc9
+03b4560b6ce8c 3ca3ca488cf13 3 minutes ago Running kube-proxy 0 9d2d18f2cfcf8 kube-proxy-99k9q
+dd8901281b19e ba04bb24b9575 3 minutes ago Exited storage-provisioner 0 91bb1cd5f6a47 storage-provisioner
b96b7d6584be4 9cdd6470f48c8 3 minutes ago Running etcd 0 79da869a9712e etcd-minikube
d786900740360 05c284c929889 3 minutes ago Running kube-scheduler 0 cd9ba2c903d9b kube-scheduler-minikube
2479b43b84c98 04b4c447bb9d4 3 minutes ago Running kube-apiserver 0 3d2377c6f25a0 kube-apiserver-minikube
f2098ce88e58d 9961cbceaf234 3 minutes ago Running kube-controller-manager 0 0166db9685573 kube-controller-manager-minikube
-*
-* ==> coredns [ec371770d31f] <==
-* .:53
+
+==> coredns [ec371770d31f] <==
+.:53
[INFO] plugin/reload: Running configuration SHA512 = 1c9e0efee4c9b699bca4ef9b4c192ffc26395d9a25913952a3dc04b0d2e2fb4c7f8e8dd505711884f084772ef94d0223fc624c00aad421444a2557788ab88255
CoreDNS-1.10.1
linux/arm64, go1.20, 055b2c3
[INFO] 127.0.0.1:41126 - 31686 "HINFO IN 7222622979289249835.1588028161236032201. udp 57 false 512" NOERROR qr,rd,ra 57 0.001157588s
-*
-* ==> describe nodes <==
-* Name: minikube
+
+==> describe nodes <==
+Name: minikube
Roles: control-plane
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
@@ -936,7 +936,7 @@
Lease:
HolderIdentity: minikube
AcquireTime: <unset>
- RenewTime: Wed, 06 Dec 2023 20:03:41 +0000
+ RenewTime: Wed, 06 Dec 2023 20:04:01 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
@@ -981,13 +981,13 @@
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
- kube-system coredns-5dd5756b68-pffc9 100m (2%!)(MISSING) 0 (0%!)(MISSING) 70Mi (0%!)(MISSING) 170Mi (2%!)(MISSING) 2m47s
- kube-system etcd-minikube 100m (2%!)(MISSING) 0 (0%!)(MISSING) 100Mi (1%!)(MISSING) 0 (0%!)(MISSING) 3m
- kube-system kube-apiserver-minikube 250m (6%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 3m
- kube-system kube-controller-manager-minikube 200m (5%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 3m
- kube-system kube-proxy-99k9q 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 2m47s
- kube-system kube-scheduler-minikube 100m (2%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 3m
- kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 3m
+ kube-system coredns-5dd5756b68-pffc9 100m (2%!)(MISSING) 0 (0%!)(MISSING) 70Mi (0%!)(MISSING) 170Mi (2%!)(MISSING) 3m
+ kube-system etcd-minikube 100m (2%!)(MISSING) 0 (0%!)(MISSING) 100Mi (1%!)(MISSING) 0 (0%!)(MISSING) 3m13s
+ kube-system kube-apiserver-minikube 250m (6%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 3m13s
+ kube-system kube-controller-manager-minikube 200m (5%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 3m13s
+ kube-system kube-proxy-99k9q 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 3m
+ kube-system kube-scheduler-minikube 100m (2%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 3m13s
+ kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 3m13s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
@@ -1002,17 +1002,16 @@
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
- Normal Starting 2m47s kube-proxy
- Normal Starting 3m1s kubelet Starting kubelet.
- Normal NodeAllocatableEnforced 3m1s kubelet Updated Node Allocatable limit across pods
- Normal NodeHasSufficientMemory 3m kubelet Node minikube status is now: NodeHasSufficientMemory
- Normal NodeHasNoDiskPressure 3m kubelet Node minikube status is now: NodeHasNoDiskPressure
- Normal NodeHasSufficientPID 3m kubelet Node minikube status is now: NodeHasSufficientPID
- Normal RegisteredNode 2m47s node-controller Node minikube event: Registered Node minikube in Controller
+ Normal Starting 3m kube-proxy
+ Normal Starting 3m14s kubelet Starting kubelet.
+ Normal NodeAllocatableEnforced 3m14s kubelet Updated Node Allocatable limit across pods
+ Normal NodeHasSufficientMemory 3m13s kubelet Node minikube status is now: NodeHasSufficientMemory
+ Normal NodeHasNoDiskPressure 3m13s kubelet Node minikube status is now: NodeHasNoDiskPressure
+ Normal NodeHasSufficientPID 3m13s kubelet Node minikube status is now: NodeHasSufficientPID
+ Normal RegisteredNode 3m node-controller Node minikube event: Registered Node minikube in Controller
-*
-* ==> dmesg <==
-* [ +0.000001] evict_inodes inode 00000000147df173, i_count = 1, was skipped!
+
+==> dmesg <==
[ +0.011264] kauditd_printk_skb: 86 callbacks suppressed
[ +0.012620] evict_inodes inode 00000000149f8d3f, i_count = 1, was skipped!
[ +0.000004] evict_inodes inode 00000000287365b1, i_count = 1, was skipped!
@@ -1072,10 +1071,11 @@
[ +17.125304] kauditd_printk_skb: 2 callbacks suppressed
[Dec 6 20:03] kauditd_printk_skb: 20 callbacks suppressed
[ +41.878455] kauditd_printk_skb: 20 callbacks suppressed
+[Dec 6 20:04] kauditd_printk_skb: 20 callbacks suppressed
-*
-* ==> etcd [b96b7d6584be] <==
-* {"level":"warn","ts":"2023-12-06T20:00:45.171585Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
+
+==> etcd [b96b7d6584be] <==
+{"level":"warn","ts":"2023-12-06T20:00:45.171585Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
{"level":"info","ts":"2023-12-06T20:00:45.171639Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.49.2:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.49.2:2380","--initial-cluster=minikube=https://192.168.49.2:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.49.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.49.2:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
{"level":"warn","ts":"2023-12-06T20:00:45.171679Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
{"level":"info","ts":"2023-12-06T20:00:45.171685Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.49.2:2380"]}
@@ -1123,15 +1123,15 @@
{"level":"info","ts":"2023-12-06T20:00:45.982852Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-12-06T20:00:45.982892Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
-*
-* ==> kernel <==
-* 20:03:49 up 2 days, 15:46, 0 users, load average: 0.06, 0.16, 0.10
+
+==> kernel <==
+ 20:04:02 up 2 days, 15:47, 0 users, load average: 0.12, 0.17, 0.10
Linux minikube 6.5.0-10-generic #10-Ubuntu SMP PREEMPT_DYNAMIC Fri Oct 13 18:28:22 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.3 LTS"
-*
-* ==> kube-apiserver [2479b43b84c9] <==
-* I1206 20:00:46.467496 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
+
+==> kube-apiserver [2479b43b84c9] <==
+I1206 20:00:46.467496 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1206 20:00:46.467628 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1206 20:00:46.467663 1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
I1206 20:00:46.467694 1 controller.go:80] Starting OpenAPI V3 AggregationController
@@ -1192,9 +1192,9 @@
I1206 20:01:02.143594 1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
I1206 20:01:02.145592 1 controller.go:624] quota admission added evaluator for: replicasets.apps
-*
-* ==> kube-controller-manager [f2098ce88e58] <==
-* I1206 20:01:02.064640 1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"minikube\" does not exist"
+
+==> kube-controller-manager [f2098ce88e58] <==
+I1206 20:01:02.064640 1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"minikube\" does not exist"
I1206 20:01:02.065443 1 shared_informer.go:311] Waiting for caches to sync for garbage collector
I1206 20:01:02.089219 1 shared_informer.go:318] Caches are synced for TTL
I1206 20:01:02.089233 1 shared_informer.go:318] Caches are synced for disruption
@@ -1255,9 +1255,9 @@
I1206 20:01:03.031041 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="3.26097ms"
I1206 20:01:03.031114 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.458µs"
-*
-* ==> kube-proxy [03b4560b6ce8] <==
-* I1206 20:01:02.608598 1 server_others.go:69] "Using iptables proxy"
+
+==> kube-proxy [03b4560b6ce8] <==
+I1206 20:01:02.608598 1 server_others.go:69] "Using iptables proxy"
I1206 20:01:02.613808 1 node.go:141] Successfully retrieved node IP: 192.168.49.2
I1206 20:01:02.622506 1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1206 20:01:02.623426 1 server_others.go:152] "Using iptables Proxier"
@@ -1276,9 +1276,9 @@
I1206 20:01:02.724307 1 shared_informer.go:318] Caches are synced for service config
I1206 20:01:02.724420 1 shared_informer.go:318] Caches are synced for node config
-*
-* ==> kube-scheduler [d78690074036] <==
-* I1206 20:00:45.468802 1 serving.go:348] Generated self-signed cert in-memory
+
+==> kube-scheduler [d78690074036] <==
+I1206 20:00:45.468802 1 serving.go:348] Generated self-signed cert in-memory
W1206 20:00:46.488429 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1206 20:00:46.488523 1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1206 20:00:46.488550 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
@@ -1331,9 +1331,9 @@
E1206 20:00:47.574439 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
I1206 20:00:48.097078 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
-*
-* ==> kubelet <==
-* Dec 06 20:00:48 minikube kubelet[2307]: I1206 20:00:48.968070 2307 status_manager.go:217] "Starting to sync pod status with apiserver"
+
+==> kubelet <==
+Dec 06 20:00:48 minikube kubelet[2307]: I1206 20:00:48.968070 2307 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 06 20:00:48 minikube kubelet[2307]: I1206 20:00:48.968078 2307 kubelet.go:2303] "Starting kubelet main sync loop"
Dec 06 20:00:48 minikube kubelet[2307]: E1206 20:00:48.968148 2307 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 06 20:00:48 minikube kubelet[2307]: I1206 20:00:48.980800 2307 cpu_manager.go:214] "Starting CPU manager" policy="none"
@@ -1394,9 +1394,9 @@
Dec 06 20:01:09 minikube kubelet[2307]: I1206 20:01:09.404584 2307 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Dec 06 20:01:33 minikube kubelet[2307]: I1206 20:01:33.110340 2307 scope.go:117] "RemoveContainer" containerID="dd8901281b19e77d22be4f9e859d9dc2264bd6a03646b30351b1e659c93b03e3"
-*
-* ==> storage-provisioner [dc3ee7790554] <==
-* I1206 20:01:33.171845 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
+
+==> storage-provisioner [dc3ee7790554] <==
+I1206 20:01:33.171845 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1206 20:01:33.175564 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1206 20:01:33.175633 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1206 20:01:33.179081 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
@@ -1404,8 +1404,8 @@
I1206 20:01:33.179145 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_9131e04b-af9d-4e40-9e24-632c637845d1!
I1206 20:01:33.279944 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_9131e04b-af9d-4e40-9e24-632c637845d1!
-*
-* ==> storage-provisioner [dd8901281b19] <==
-* I1206 20:01:02.593798 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
+
+==> storage-provisioner [dd8901281b19] <==
+I1206 20:01:02.593798 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F1206 20:01:32.595270 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout