
@onpaws
Last active December 15, 2020 20:40
root@datadog-pbnx2:/# agent status
Getting the status from the agent.
===============
Agent (v7.24.0)
===============
Status date: 2020-12-15 20:37:18.796087 UTC
Agent start: 2020-12-15 20:00:16.024187 UTC
Pid: 1
Go Version: go1.14.7
Python Version: 3.8.5
Build arch: amd64
Agent flavor: agent
Check Runners: 4
Log Level: INFO
Paths
=====
Config File: /etc/datadog-agent/datadog.yaml
conf.d: /etc/datadog-agent/conf.d
checks.d: /etc/datadog-agent/checks.d
Clocks
======
NTP offset: 1.408ms
System UTC time: 2020-12-15 20:37:18.796087 UTC
Host Info
=========
bootTime: 2020-12-15 15:33:32.000000 UTC
kernelArch: x86_64
kernelVersion: 5.4.0-1032-azure
os: linux
platform: debian
platformFamily: debian
platformVersion: bullseye/sid
procs: 210
uptime: 4h26m48s
Hostnames
=========
host_aliases: [183ae1f5-871f-49a1-b6c3-a58c29587d19 aks-agentpool-38016112-vmss000000-scout]
hostname: aks-agentpool-38016112-vmss000000-scout
socket-fqdn: datadog-pbnx2
socket-hostname: datadog-pbnx2
host tags:
kube_cluster_name:scout
cluster_name:scout
kube_node_role:agent
hostname provider: container
unused hostname providers:
aws: not retrieving hostname from AWS: the host is not an ECS instance and other providers already retrieve non-default hostnames
configuration/environment: hostname is empty
gce: unable to retrieve hostname from GCE: status code 404 trying to GET http://169.254.169.254/computeMetadata/v1/instance/hostname
Metadata
========
cloud_provider: Azure
hostname_source: container
=========
Collector
=========
Running Checks
==============
coredns (1.6.0)
---------------
Instance ID: coredns:562d9c34bc2089f [OK]
Configuration Source: file:/etc/datadog-agent/conf.d/coredns.d/auto_conf.yaml
Total Runs: 146
Metric Samples: Last Run: 204, Total: 29,784
Events: Last Run: 0, Total: 0
Service Checks: Last Run: 1, Total: 146
Average Execution Time : 26ms
Last Execution Date : 2020-12-15 20:37:15.000000 UTC
Last Successful Execution Date : 2020-12-15 20:37:15.000000 UTC
cpu
---
Instance ID: cpu [OK]
Configuration Source: file:/etc/datadog-agent/conf.d/cpu.d/conf.yaml.default
Total Runs: 148
Metric Samples: Last Run: 7, Total: 1,030
Events: Last Run: 0, Total: 0
Service Checks: Last Run: 0, Total: 0
Average Execution Time : 0s
Last Execution Date : 2020-12-15 20:37:13.000000 UTC
Last Successful Execution Date : 2020-12-15 20:37:13.000000 UTC
disk (4.0.0)
------------
Instance ID: disk:e5dffb8bef24336f [OK]
Configuration Source: file:/etc/datadog-agent/conf.d/disk.d/conf.yaml.default
Total Runs: 147
Metric Samples: Last Run: 650, Total: 95,574
Events: Last Run: 0, Total: 0
Service Checks: Last Run: 0, Total: 0
Average Execution Time : 77ms
Last Execution Date : 2020-12-15 20:37:05.000000 UTC
Last Successful Execution Date : 2020-12-15 20:37:05.000000 UTC
docker
------
Instance ID: docker [ERROR]
Configuration Source: file:/etc/datadog-agent/conf.d/docker.d/conf.yaml.default
Total Runs: 148
Metric Samples: Last Run: 0, Total: 0
Events: Last Run: 0, Total: 0
Service Checks: Last Run: 0, Total: 0
Average Execution Time : 0s
Last Execution Date : 2020-12-15 20:37:12.000000 UTC
Last Successful Execution Date : Never
Error: temporary failure in dockerutil, will retry later: try delay not elapsed yet
No traceback
Warning: Error initialising check: temporary failure in dockerutil, will retry later: try delay not elapsed yet
file_handle
-----------
Instance ID: file_handle [OK]
Configuration Source: file:/etc/datadog-agent/conf.d/file_handle.d/conf.yaml.default
Total Runs: 147
Metric Samples: Last Run: 5, Total: 735
Events: Last Run: 0, Total: 0
Service Checks: Last Run: 0, Total: 0
Average Execution Time : 0s
Last Execution Date : 2020-12-15 20:37:04.000000 UTC
Last Successful Execution Date : 2020-12-15 20:37:04.000000 UTC
io
--
Instance ID: io [OK]
Configuration Source: file:/etc/datadog-agent/conf.d/io.d/conf.yaml.default
Total Runs: 148
Metric Samples: Last Run: 130, Total: 19,150
Events: Last Run: 0, Total: 0
Service Checks: Last Run: 0, Total: 0
Average Execution Time : 0s
Last Execution Date : 2020-12-15 20:37:11.000000 UTC
Last Successful Execution Date : 2020-12-15 20:37:11.000000 UTC
kubelet (5.0.0)
---------------
Instance ID: kubelet:d884b5186b651429 [OK]
Configuration Source: file:/etc/datadog-agent/conf.d/kubelet.d/conf.yaml.default
Total Runs: 147
Metric Samples: Last Run: 427, Total: 62,767
Events: Last Run: 0, Total: 0
Service Checks: Last Run: 4, Total: 588
Average Execution Time : 367ms
Last Execution Date : 2020-12-15 20:37:03.000000 UTC
Last Successful Execution Date : 2020-12-15 20:37:03.000000 UTC
load
----
Instance ID: load [OK]
Configuration Source: file:/etc/datadog-agent/conf.d/load.d/conf.yaml.default
Total Runs: 148
Metric Samples: Last Run: 6, Total: 888
Events: Last Run: 0, Total: 0
Service Checks: Last Run: 0, Total: 0
Average Execution Time : 0s
Last Execution Date : 2020-12-15 20:37:10.000000 UTC
Last Successful Execution Date : 2020-12-15 20:37:10.000000 UTC
memory
------
Instance ID: memory [OK]
Configuration Source: file:/etc/datadog-agent/conf.d/memory.d/conf.yaml.default
Total Runs: 148
Metric Samples: Last Run: 18, Total: 2,664
Events: Last Run: 0, Total: 0
Service Checks: Last Run: 0, Total: 0
Average Execution Time : 0s
Last Execution Date : 2020-12-15 20:37:17.000000 UTC
Last Successful Execution Date : 2020-12-15 20:37:17.000000 UTC
network (1.19.0)
----------------
Instance ID: network:5c571333f400457d [OK]
Configuration Source: file:/etc/datadog-agent/conf.d/network.d/conf.yaml.default
Total Runs: 148
Metric Samples: Last Run: 103, Total: 15,244
Events: Last Run: 0, Total: 0
Service Checks: Last Run: 0, Total: 0
Average Execution Time : 3ms
Last Execution Date : 2020-12-15 20:37:09.000000 UTC
Last Successful Execution Date : 2020-12-15 20:37:09.000000 UTC
ntp
---
Instance ID: ntp:d884b5186b651429 [OK]
Configuration Source: file:/etc/datadog-agent/conf.d/ntp.d/conf.yaml.default
Total Runs: 3
Metric Samples: Last Run: 1, Total: 3
Events: Last Run: 0, Total: 0
Service Checks: Last Run: 1, Total: 3
Average Execution Time : 54ms
Last Execution Date : 2020-12-15 20:30:21.000000 UTC
Last Successful Execution Date : 2020-12-15 20:30:21.000000 UTC
uptime
------
Instance ID: uptime [OK]
Configuration Source: file:/etc/datadog-agent/conf.d/uptime.d/conf.yaml.default
Total Runs: 148
Metric Samples: Last Run: 1, Total: 148
Events: Last Run: 0, Total: 0
Service Checks: Last Run: 0, Total: 0
Average Execution Time : 0s
Last Execution Date : 2020-12-15 20:37:16.000000 UTC
Last Successful Execution Date : 2020-12-15 20:37:16.000000 UTC
========
JMXFetch
========
Information
==================
Initialized checks
==================
no checks
Failed checks
=============
no checks
=========
Forwarder
=========
Transactions
============
Deployments: 0
Dropped: 0
DroppedOnInput: 0
Nodes: 0
Pods: 0
ReplicaSets: 0
Requeued: 0
Retried: 0
RetryQueueSize: 0
Services: 0
Transaction Successes
=====================
Total number: 313
Successes By Endpoint:
check_run_v1: 148
intake: 17
series_v1: 148
API Keys status
===============
API key ending with 490c3: API Key valid
==========
Endpoints
==========
https://app.datadoghq.eu - API Key ending with:
- 490c3
==========
Logs Agent
==========
Sending compressed logs in HTTPS to agent-http-intake.logs.datadoghq.eu on port 443
BytesSent: 3.270528e+06
EncodedBytesSent: 535060
LogsProcessed: 4062
LogsSent: 4060
datadog/datadog-pbnx2/init-config
---------------------------------
Type: file
Path: /var/log/pods/datadog_datadog-pbnx2_2356b3aa-70b0-4230-9a42-022653bb8642/init-config/*.log
Status: Pending
1 files tailed out of 1 files matching
BytesRead: 0
datadog/datadog-pbnx2/trace-agent
---------------------------------
Type: file
Path: /var/log/pods/datadog_datadog-pbnx2_2356b3aa-70b0-4230-9a42-022653bb8642/trace-agent/*.log
Status: Pending
1 files tailed out of 1 files matching
BytesRead: 0
cattle-prometheus/prometheus-operator-monitoring-operator-f9759c4bf-6npnk/prometheus-operator
---------------------------------------------------------------------------------------------
Type: file
Path: /var/log/pods/cattle-prometheus_prometheus-operator-monitoring-operator-f9759c4bf-6npnk_ce3c2b15-5409-43b5-9be9-e6acacc02ec4/prometheus-operator/*.log
Status: Pending
1 files tailed out of 1 files matching
BytesRead: 0
dev/grp-postgres-0/postgres
---------------------------
Type: file
Path: /var/log/pods/dev_grp-postgres-0_19461b25-1146-4391-b019-6bc5fdcbff68/postgres/*.log
Status: Pending
1 files tailed out of 1 files matching
BytesRead: 0
cert-manager/cert-manager-webhook-64dc9fff44-74kfs/cert-manager
---------------------------------------------------------------
Type: file
Path: /var/log/pods/cert-manager_cert-manager-webhook-64dc9fff44-74kfs_df1f6336-c9b6-4054-be32-7850324b5e64/cert-manager/*.log
Status: Pending
1 files tailed out of 1 files matching
BytesRead: 0
logging/logspout-papertrail-5sx4l/logspout
------------------------------------------
Type: file
Path: /var/log/pods/logging_logspout-papertrail-5sx4l_2762a715-077f-4b85-9691-bb72ed3d0dda/logspout/*.log
Status: OK
1 files tailed out of 1 files matching
Inputs: /var/log/pods/logging_logspout-papertrail-5sx4l_2762a715-077f-4b85-9691-bb72ed3d0dda/logspout/63.log
BytesRead: 0
container_collect_all
---------------------
Type: docker
Status: Pending
BytesRead: 0
datadog/datadog-pbnx2/init-volume
---------------------------------
Type: file
Path: /var/log/pods/datadog_datadog-pbnx2_2356b3aa-70b0-4230-9a42-022653bb8642/init-volume/*.log
Status: Pending
1 files tailed out of 1 files matching
BytesRead: 0
datadog/datadog-pbnx2/agent
---------------------------
Type: file
Path: /var/log/pods/datadog_datadog-pbnx2_2356b3aa-70b0-4230-9a42-022653bb8642/agent/*.log
Status: Pending
1 files tailed out of 1 files matching
BytesRead: 0
dev/minio-deployment-746fff599b-sdlsn/minio
-------------------------------------------
Type: file
Path: /var/log/pods/dev_minio-deployment-746fff599b-sdlsn_290e93b7-5583-4027-879c-2ac56bbc5aae/minio/*.log
Status: Pending
1 files tailed out of 1 files matching
BytesRead: 0
kube-system/kube-proxy-z5qnl/kube-proxy
---------------------------------------
Type: file
Path: /var/log/pods/kube-system_kube-proxy-z5qnl_9510e73e-ed6d-4b38-92a3-695d86d9ff14/kube-proxy/*.log
Status: OK
1 files tailed out of 1 files matching
Inputs: /var/log/pods/kube-system_kube-proxy-z5qnl_9510e73e-ed6d-4b38-92a3-695d86d9ff14/kube-proxy/0.log
BytesRead: 15688
ingress/nginx-ingress-controller-7f44b8f77c-4q9p4/nginx-ingress-controller
--------------------------------------------------------------------------
Type: file
Path: /var/log/pods/ingress_nginx-ingress-controller-7f44b8f77c-4q9p4_7a420be9-05e8-4165-916c-aa2a8b3fba1a/nginx-ingress-controller/*.log
Status: Pending
1 files tailed out of 1 files matching
BytesRead: 0
kubedb/kubedb-operator-7588ff78ff-vvv92/operator
------------------------------------------------
Type: file
Path: /var/log/pods/kubedb_kubedb-operator-7588ff78ff-vvv92_7fc1087c-8392-4569-b575-d14c7a97a02d/operator/*.log
Status: Pending
1 files tailed out of 1 files matching
BytesRead: 0
kube-system/coredns-748cdb7bf4-ddnrc/coredns
--------------------------------------------
Type: file
Path: /var/log/pods/kube-system_coredns-748cdb7bf4-ddnrc_6f5ccbfe-0329-4e4a-8d96-14097a3937d5/coredns/*.log
Status: Pending
1 files tailed out of 1 files matching
BytesRead: 0
datadog/datadog-pbnx2/process-agent
-----------------------------------
Type: file
Path: /var/log/pods/datadog_datadog-pbnx2_2356b3aa-70b0-4230-9a42-022653bb8642/process-agent/*.log
Status: Pending
1 files tailed out of 1 files matching
BytesRead: 0
ingress/nginx-ingress-default-backend-64cb9d95fb-l2fwk/nginx-ingress-default-backend
------------------------------------------------------------------------------------
Type: file
Path: /var/log/pods/ingress_nginx-ingress-default-backend-64cb9d95fb-l2fwk_b5e14f57-ff80-4fe3-b215-8883da3b451d/nginx-ingress-default-backend/*.log
Status: Pending
1 files tailed out of 1 files matching
BytesRead: 0
=========
APM Agent
=========
Status: Running
Pid: 1
Uptime: 2222 seconds
Mem alloc: 9,375,464 bytes
Hostname: datadog-pbnx2
Receiver: 0.0.0.0:8126
Endpoints:
https://trace.agent.datadoghq.eu
Receiver (previous minute)
==========================
No traces received in the previous minute.
Default priority sampling rate: 100.0%
Writer (previous minute)
========================
Traces: 0 payloads, 0 traces, 0 events, 0 bytes
Stats: 0 payloads, 0 stats buckets, 0 bytes
==========
Aggregator
==========
Checks Metric Sample: 231,804
Dogstatsd Metric Sample: 15,009
Event: 1
Events Flushed: 1
Number Of Flushes: 148
Series Flushed: 148,453
Service Check: 2,521
Service Checks Flushed: 2,664
=========
DogStatsD
=========
Event Packets: 0
Event Parse Errors: 0
Metric Packets: 15,008
Metric Parse Errors: 0
Service Check Packets: 0
Service Check Parse Errors: 0
Udp Bytes: 1,110,865
Udp Packet Reading Errors: 0
Udp Packets: 1,489
Uds Bytes: 0
Uds Origin Detection Errors: 0
Uds Packet Reading Errors: 0
Uds Packets: 0
=====================
Datadog Cluster Agent
=====================
- Datadog Cluster Agent endpoint detected: https://10.0.34.90:5005
Successfully connected to the Datadog Cluster Agent.
- Running: 1.9.1+commit.2270e4d
$ kubectl -n datadog logs datadog-z4xfj agent -f
2020-12-15 20:15:24 UTC | CORE | ERROR | (pkg/autodiscovery/config_poller.go:123 in collect) | Unable to collect configurations from provider docker: temporary failure in dockerutil, will retry later: try delay not elapsed yet
2020-12-15 20:15:25 UTC | CORE | ERROR | (pkg/autodiscovery/config_poller.go:123 in collect) | Unable to collect configurations from provider docker: temporary failure in dockerutil, will retry later: try delay not elapsed yet
2020-12-15 20:15:26 UTC | CORE | ERROR | (pkg/autodiscovery/config_poller.go:123 in collect) | Unable to collect configurations from provider docker: temporary failure in dockerutil, will retry later: try delay not elapsed yet
2020-12-15 20:15:27 UTC | CORE | ERROR | (pkg/autodiscovery/config_poller.go:123 in collect) | Unable to collect configurations from provider docker: temporary failure in dockerutil, will retry later: try delay not elapsed yet
2020-12-15 20:15:28 UTC | CORE | ERROR | (pkg/autodiscovery/config_poller.go:123 in collect) | Unable to collect configurations from provider docker: temporary failure in dockerutil, will retry later: try delay not elapsed yet
2020-12-15 20:15:29 UTC | CORE | ERROR | (pkg/autodiscovery/config_poller.go:123 in collect) | Unable to collect configurations from provider docker: temporary failure in dockerutil, will retry later: try delay not elapsed yet
2020-12-15 20:15:30 UTC | CORE | ERROR | (pkg/autodiscovery/config_poller.go:123 in collect) | Unable to collect configurations from provider docker: temporary failure in dockerutil, will retry later: try delay not elapsed yet
2020-12-15 20:15:31 UTC | CORE | ERROR | (pkg/autodiscovery/config_poller.go:123 in collect) | Unable to collect configurations from provider docker: temporary failure in dockerutil, will retry later: try delay not elapsed yet
2020-12-15 20:15:32 UTC | CORE | ERROR | (pkg/autodiscovery/config_poller.go:123 in collect) | Unable to collect configurations from provider docker: temporary failure in dockerutil, will retry later: try delay not elapsed yet
2020-12-15 20:15:33 UTC | CORE | ERROR | (pkg/autodiscovery/config_poller.go:123 in collect) | Unable to collect configurations from provider docker: temporary failure in dockerutil, will retry later: try delay not elapsed yet
2020-12-15 20:15:34 UTC | CORE | ERROR | (pkg/autodiscovery/config_poller.go:123 in collect) | Unable to collect configurations from provider docker: temporary failure in dockerutil, will retry later: try delay not elapsed yet
$ kubectl get nodes -o wide
NAME                                STATUS   ROLES   AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
aks-agentpool-38016112-vmss000000   Ready    agent   146d   v1.19.3   10.240.0.4    <none>        Ubuntu 18.04.5 LTS   5.4.0-1032-azure   containerd://1.4.1+azure
aks-agentpool-38016112-vmss000001   Ready    agent   136d   v1.19.3   10.240.0.5    <none>        Ubuntu 18.04.5 LTS   5.4.0-1032-azure   containerd://1.4.1+azure
aks-agentpool-38016112-vmss000002   Ready    agent   132d   v1.19.3   10.240.0.6    <none>        Ubuntu 18.04.5 LTS   5.4.0-1032-azure   containerd://1.4.1+azure
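The failing docker check and the repeated `Unable to collect configurations from provider docker` errors above are consistent with the `kubectl get nodes` output: every node reports `containerd://1.4.1+azure` as its container runtime, so there is no Docker socket for the agent to find. A minimal sketch of Helm values pointing the agent at the containerd socket instead is below; it assumes the standard `datadog/datadog` chart layout circa agent v7 (the `criSocketPath` key exists in that chart, but verify key names against your chart version, and `<DATADOG_API_KEY>` is a placeholder):

```yaml
# values.yaml (sketch) -- configure the Datadog agent for a containerd runtime
# instead of Docker. Key names assumed from the datadog/datadog Helm chart.
datadog:
  apiKey: <DATADOG_API_KEY>   # placeholder, not a real key
  # Point container monitoring at the CRI socket exposed on AKS nodes.
  criSocketPath: /var/run/containerd/containerd.sock
```

Applied with something like `helm upgrade --install datadog datadog/datadog -f values.yaml`, this should stop the docker autodiscovery provider from retrying against a socket that does not exist.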