////////alertmanager-demo-prometheus-operator-alertmanager-0
Name: alertmanager-demo-prometheus-operator-alertmanager-0
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000001/10.240.0.5
Start Time: Fri, 16 Apr 2021 11:49:25 +0800
Labels: alertmanager=demo-prometheus-operator-alertmanager
app=alertmanager
controller-revision-hash=alertmanager-demo-prometheus-operator-alertmanager-d6f7f9bbf
statefulset.kubernetes.io/pod-name=alertmanager-demo-prometheus-operator-alertmanager-0
Annotations: <none>
Status: Running
IP: 10.244.1.180
IPs:
IP: 10.244.1.180
Controlled By: StatefulSet/alertmanager-demo-prometheus-operator-alertmanager
Containers:
alertmanager:
Container ID: docker://ee79b5e330ec91b6fc998b3f36513cdd5c7dd293ad90598739fac1f988b4a464
Image: quay.io/prometheus/alertmanager:v0.21.0
Image ID: docker-pullable://quay.io/prometheus/alertmanager@sha256:24a5204b418e8fa0214cfb628486749003b039c279c56b5bddb5b10cd100d926
Ports: 9093/TCP, 9094/TCP, 9094/UDP
Host Ports: 0/TCP, 0/TCP, 0/UDP
Args:
--config.file=/etc/alertmanager/config/alertmanager.yaml
--cluster.listen-address=[$(POD_IP)]:9094
--storage.path=/alertmanager
--data.retention=120h
--web.listen-address=:9093
--web.external-url=http://demo-prometheus-operator-alertmanager.default:9093
--web.route-prefix=/
--cluster.peer=alertmanager-demo-prometheus-operator-alertmanager-0.alertmanager-operated.default.svc:9094
State: Running
Started: Sat, 17 Apr 2021 15:22:13 +0800
Last State: Terminated
Reason: Error
Message: level=info ts=2021-04-16T03:49:33.402Z caller=main.go:216 msg="Starting Alertmanager" version="(version=0.21.0, branch=HEAD, revision=4c6c03ebfe21009c546e4d1e9b92c371d67c021d)"
level=info ts=2021-04-16T03:49:33.403Z caller=main.go:217 build_context="(go=go1.14.4, user=root@dee35927357f, date=20200617-08:54:02)"
level=warn ts=2021-04-16T03:49:38.816Z caller=cluster.go:228 component=cluster msg="failed to join cluster" err="1 error occurred:\n\t* Failed to resolve alertmanager-demo-prometheus-operator-alertmanager-0.alertmanager-operated.default.svc:9094: lookup alertmanager-demo-prometheus-operator-alertmanager-0.alertmanager-operated.default.svc on 10.0.0.10:53: no such host\n\n"
level=info ts=2021-04-16T03:49:38.816Z caller=cluster.go:230 component=cluster msg="will retry joining cluster every 10s"
level=warn ts=2021-04-16T03:49:38.816Z caller=main.go:307 msg="unable to join gossip mesh" err="1 error occurred:\n\t* Failed to resolve alertmanager-demo-prometheus-operator-alertmanager-0.alertmanager-operated.default.svc:9094: lookup alertmanager-demo-prometheus-operator-alertmanager-0.alertmanager-operated.default.svc on 10.0.0.10:53: no such host\n\n"
level=info ts=2021-04-16T03:49:38.816Z caller=cluster.go:623 component=cluster msg="Waiting for gossip to settle..." interval=2s
level=info ts=2021-04-16T03:49:38.852Z caller=coordinator.go:119 component=configuration msg="Loading configuration file" file=/etc/alertmanager/config/alertmanager.yaml
level=info ts=2021-04-16T03:49:38.855Z caller=coordinator.go:131 component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/config/alertmanager.yaml
level=info ts=2021-04-16T03:49:38.858Z caller=main.go:485 msg=Listening address=:9093
level=info ts=2021-04-16T03:49:40.820Z caller=cluster.go:648 component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000093878s
level=info ts=2021-04-16T03:49:48.821Z caller=cluster.go:640 component=cluster msg="gossip settled; proceeding" elapsed=10.001436005s
Exit Code: 255
Started: Fri, 16 Apr 2021 11:49:27 +0800
Finished: Sat, 17 Apr 2021 15:21:58 +0800
Ready: True
Restart Count: 1
Requests:
memory: 200Mi
Liveness: http-get http://:web/-/healthy delay=0s timeout=3s period=10s #success=1 #failure=10
Readiness: http-get http://:web/-/ready delay=3s timeout=3s period=5s #success=1 #failure=10
Environment:
POD_IP: (v1:status.podIP)
Mounts:
/alertmanager from alertmanager-demo-prometheus-operator-alertmanager-db (rw)
/etc/alertmanager/config from config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from demo-prometheus-operator-alertmanager-token-q4vp5 (ro)
config-reloader:
Container ID: docker://c87070c7ce24df5275ceb62045c7b9a22cb255d751e8655e0a420f0f66707b52
Image: docker.io/jimmidyson/configmap-reload:v0.3.0
Image ID: docker-pullable://jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2
Port: <none>
Host Port: <none>
Args:
-webhook-url=http://127.0.0.1:9093/-/reload
-volume-dir=/etc/alertmanager/config
State: Running
Started: Sat, 17 Apr 2021 15:22:16 +0800
Last State: Terminated
Reason: Error
Message: 2021/04/16 03:49:29 Watching directory: "/etc/alertmanager/config"
Exit Code: 255
Started: Fri, 16 Apr 2021 11:49:28 +0800
Finished: Sat, 17 Apr 2021 15:21:58 +0800
Ready: True
Restart Count: 1
Limits:
cpu: 100m
memory: 25Mi
Requests:
cpu: 100m
memory: 25Mi
Environment: <none>
Mounts:
/etc/alertmanager/config from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from demo-prometheus-operator-alertmanager-token-q4vp5 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
config-volume:
Type: Secret (a volume populated by a Secret)
SecretName: alertmanager-demo-prometheus-operator-alertmanager
Optional: false
alertmanager-demo-prometheus-operator-alertmanager-db:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
demo-prometheus-operator-alertmanager-token-q4vp5:
Type: Secret (a volume populated by a Secret)
SecretName: demo-prometheus-operator-alertmanager-token-q4vp5
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
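
Note on the Last State above: Alertmanager exited with code 255 at roughly the same minute as almost every other pod in this dump (Finished around Sat, 17 Apr 2021 15:21-15:22 +0800), which usually points at a node or container-runtime restart rather than an application crash. The earlier warning logs ("failed to join cluster ... no such host") mean the pod could not resolve its own cluster peer through the alertmanager-operated headless service. A quick way to check the service and DNS is sketched below; the dnsutils image and the throwaway pod name are illustrative assumptions, not part of this gist:

# Confirm the headless service exists and lists this pod as an endpoint
kubectl get svc alertmanager-operated -n default
kubectl get endpoints alertmanager-operated -n default

# Repeat the lookup the container performs at startup, from a throwaway pod
# (any image that ships nslookup works; dnsutils is just a common choice)
kubectl run dns-test --rm -it --restart=Never \
  --image=gcr.io/kubernetes-e2e-test-images/dnsutils:1.3 -- \
  nslookup alertmanager-demo-prometheus-operator-alertmanager-0.alertmanager-operated.default.svc

If the endpoint list is empty or the lookup fails, the gossip "will retry joining cluster every 10s" loop in the log above is expected until DNS converges.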
////////demo-grafana-67dd6c996b-8pjjn
Name: demo-grafana-67dd6c996b-8pjjn
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000000/10.240.0.4
Start Time: Tue, 13 Apr 2021 19:29:50 +0800
Labels: app.kubernetes.io/instance=demo
app.kubernetes.io/name=grafana
pod-template-hash=67dd6c996b
Annotations: checksum/config: 4803c27e51ead4ab23ed5f4f459fede28a18858e860b6ab4dec3968a910565f9
checksum/dashboards-json-config: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
checksum/sc-dashboard-provider-config: 8792d93fa4af85c7d446184965cf35b7cd186a8266ed48a0ae567642296ba70c
checksum/secret: 5bcd31e6132df5bb35b60a31785e713074d4d38a1f2bf7eef21f65edb6cdd043
Status: Running
IP: 10.244.2.171
IPs:
IP: 10.244.2.171
Controlled By: ReplicaSet/demo-grafana-67dd6c996b
Init Containers:
grafana-sc-datasources:
Container ID: docker://00fad7b31425d3e9827e47f7b8c448842e24edc5c5eab3ecfbc555e6f1633bee
Image: kiwigrid/k8s-sidecar:0.1.151
Image ID: docker-pullable://kiwigrid/k8s-sidecar@sha256:7b98eecdf6d117b053622e9f317c632a4b2b97636e8b2e96b311a5fd5c68d211
Port: <none>
Host Port: <none>
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sat, 17 Apr 2021 15:22:19 +0800
Finished: Sat, 17 Apr 2021 15:22:29 +0800
Ready: True
Restart Count: 3
Environment:
METHOD: LIST
LABEL: grafana_datasource
FOLDER: /etc/grafana/provisioning/datasources
RESOURCE: both
Mounts:
/etc/grafana/provisioning/datasources from sc-datasources-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from demo-grafana-token-n7df9 (ro)
Containers:
grafana-sc-dashboard:
Container ID: docker://d7a575dde9892c4a8357e8de9d6d2461fea14f55eb33634225fd7c46d5a702a7
Image: kiwigrid/k8s-sidecar:0.1.151
Image ID: docker-pullable://kiwigrid/k8s-sidecar@sha256:7b98eecdf6d117b053622e9f317c632a4b2b97636e8b2e96b311a5fd5c68d211
Port: <none>
Host Port: <none>
State: Running
Started: Sat, 17 Apr 2021 15:22:36 +0800
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 16 Apr 2021 11:49:40 +0800
Finished: Sat, 17 Apr 2021 15:22:02 +0800
Ready: True
Restart Count: 3
Environment:
METHOD:
LABEL: grafana_dashboard
FOLDER: /tmp/dashboards
RESOURCE: both
Mounts:
/tmp/dashboards from sc-dashboard-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from demo-grafana-token-n7df9 (ro)
grafana:
Container ID: docker://381a304a275a49960dab46f5797ccd877b230b25c0e64692449a8808ea8df1ff
Image: grafana/grafana:7.0.3
Image ID: docker-pullable://grafana/grafana@sha256:d72946c8e5d57a9a121bcc3ae8e4a8ccab96960d81031d18a4c31ad1f7aea03e
Ports: 80/TCP, 3000/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Sat, 17 Apr 2021 15:22:37 +0800
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 16 Apr 2021 11:49:41 +0800
Finished: Sat, 17 Apr 2021 15:22:02 +0800
Ready: True
Restart Count: 3
Liveness: http-get http://:3000/api/health delay=60s timeout=30s period=10s #success=1 #failure=10
Readiness: http-get http://:3000/api/health delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
GF_SECURITY_ADMIN_USER: <set to the key 'admin-user' in secret 'demo-grafana'> Optional: false
GF_SECURITY_ADMIN_PASSWORD: <set to the key 'admin-password' in secret 'demo-grafana'> Optional: false
Mounts:
/etc/grafana/grafana.ini from config (rw,path="grafana.ini")
/etc/grafana/provisioning/dashboards/sc-dashboardproviders.yaml from sc-dashboard-provider (rw,path="provider.yaml")
/etc/grafana/provisioning/datasources from sc-datasources-volume (rw)
/tmp/dashboards from sc-dashboard-volume (rw)
/var/lib/grafana from storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from demo-grafana-token-n7df9 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: demo-grafana
Optional: false
storage:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
sc-dashboard-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
sc-dashboard-provider:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: demo-grafana-config-dashboards
Optional: false
sc-datasources-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
demo-grafana-token-n7df9:
Type: Secret (a volume populated by a Secret)
SecretName: demo-grafana-token-n7df9
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////demo-kube-state-metrics-6f676dfddc-nqxwn
Name: demo-kube-state-metrics-6f676dfddc-nqxwn
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000001/10.240.0.5
Start Time: Fri, 16 Apr 2021 11:49:33 +0800
Labels: app.kubernetes.io/instance=demo
app.kubernetes.io/name=kube-state-metrics
pod-template-hash=6f676dfddc
Annotations: <none>
Status: Running
IP: 10.244.1.179
IPs:
IP: 10.244.1.179
Controlled By: ReplicaSet/demo-kube-state-metrics-6f676dfddc
Containers:
kube-state-metrics:
Container ID: docker://9d8991d9682d55e69d1902178c4fc11c12fe2a4fa0fd6cd6c77661f341203375
Image: quay.io/coreos/kube-state-metrics:v1.9.7
Image ID: docker-pullable://quay.io/coreos/kube-state-metrics@sha256:2f82f0da199c60a7699c43c63a295c44e673242de0b7ee1b17c2d5a23bec34cb
Port: 8080/TCP
Host Port: 0/TCP
Args:
--collectors=certificatesigningrequests
--collectors=configmaps
--collectors=cronjobs
--collectors=daemonsets
--collectors=deployments
--collectors=endpoints
--collectors=horizontalpodautoscalers
--collectors=ingresses
--collectors=jobs
--collectors=limitranges
--collectors=mutatingwebhookconfigurations
--collectors=namespaces
--collectors=networkpolicies
--collectors=nodes
--collectors=persistentvolumeclaims
--collectors=persistentvolumes
--collectors=poddisruptionbudgets
--collectors=pods
--collectors=replicasets
--collectors=replicationcontrollers
--collectors=resourcequotas
--collectors=secrets
--collectors=services
--collectors=statefulsets
--collectors=storageclasses
--collectors=validatingwebhookconfigurations
--collectors=volumeattachments
State: Running
Started: Sat, 17 Apr 2021 15:22:41 +0800
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Sat, 17 Apr 2021 15:22:10 +0800
Finished: Sat, 17 Apr 2021 15:22:40 +0800
Ready: True
Restart Count: 2
Liveness: http-get http://:8080/healthz delay=5s timeout=5s period=10s #success=1 #failure=3
Readiness: http-get http://:8080/ delay=5s timeout=5s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from demo-kube-state-metrics-token-dxdln (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
demo-kube-state-metrics-token-dxdln:
Type: Secret (a volume populated by a Secret)
SecretName: demo-kube-state-metrics-token-dxdln
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////demo-prometheus-node-exporter-9p8bx
Name: demo-prometheus-node-exporter-9p8bx
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000004/10.240.0.15
Start Time: Tue, 13 Apr 2021 19:29:50 +0800
Labels: app=prometheus-node-exporter
chart=prometheus-node-exporter-1.10.0
controller-revision-hash=6d858f4cd4
heritage=Helm
jobLabel=node-exporter
pod-template-generation=1
release=demo
Annotations: <none>
Status: Running
IP: 10.240.0.15
IPs:
IP: 10.240.0.15
Controlled By: DaemonSet/demo-prometheus-node-exporter
Containers:
node-exporter:
Container ID: docker://3071bd786c4b72d3ba1aacead82ab108891bdb2869a911ca0ebadf7a635ce19e
Image: quay.io/prometheus/node-exporter:v1.0.0
Image ID: docker-pullable://quay.io/prometheus/node-exporter@sha256:8a3a33cad0bd33650ba7287a7ec94327d8e47ddf7845c569c80b5c4b20d49d36
Port: 9100/TCP
Host Port: 9100/TCP
Args:
--path.procfs=/host/proc
--path.sysfs=/host/sys
--web.listen-address=$(HOST_IP):9100
--collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
--collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
State: Running
Started: Sat, 17 Apr 2021 15:22:16 +0800
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 16 Apr 2021 11:49:20 +0800
Finished: Sat, 17 Apr 2021 15:22:06 +0800
Ready: True
Restart Count: 3
Liveness: http-get http://:9100/ delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:9100/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
HOST_IP: 0.0.0.0
Mounts:
/host/proc from proc (ro)
/host/sys from sys (ro)
/var/run/secrets/kubernetes.io/serviceaccount from demo-prometheus-node-exporter-token-kqd9h (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
proc:
Type: HostPath (bare host directory volume)
Path: /proc
HostPathType:
sys:
Type: HostPath (bare host directory volume)
Path: /sys
HostPathType:
demo-prometheus-node-exporter-token-kqd9h:
Type: Secret (a volume populated by a Secret)
SecretName: demo-prometheus-node-exporter-token-kqd9h
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: :NoSchedule op=Exists
node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/network-unavailable:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events: <none>
////////demo-prometheus-node-exporter-c5vbh
Name: demo-prometheus-node-exporter-c5vbh
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000000/10.240.0.4
Start Time: Tue, 13 Apr 2021 19:29:50 +0800
Labels: app=prometheus-node-exporter
chart=prometheus-node-exporter-1.10.0
controller-revision-hash=6d858f4cd4
heritage=Helm
jobLabel=node-exporter
pod-template-generation=1
release=demo
Annotations: <none>
Status: Running
IP: 10.240.0.4
IPs:
IP: 10.240.0.4
Controlled By: DaemonSet/demo-prometheus-node-exporter
Containers:
node-exporter:
Container ID: docker://fe2a29d7ab23b691fc06d423dd6dddef359cabfe0927832410b210f6fec959c3
Image: quay.io/prometheus/node-exporter:v1.0.0
Image ID: docker-pullable://quay.io/prometheus/node-exporter@sha256:8a3a33cad0bd33650ba7287a7ec94327d8e47ddf7845c569c80b5c4b20d49d36
Port: 9100/TCP
Host Port: 9100/TCP
Args:
--path.procfs=/host/proc
--path.sysfs=/host/sys
--web.listen-address=$(HOST_IP):9100
--collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
--collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
State: Running
Started: Sat, 17 Apr 2021 15:22:17 +0800
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 16 Apr 2021 11:49:21 +0800
Finished: Sat, 17 Apr 2021 15:22:02 +0800
Ready: True
Restart Count: 3
Liveness: http-get http://:9100/ delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:9100/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
HOST_IP: 0.0.0.0
Mounts:
/host/proc from proc (ro)
/host/sys from sys (ro)
/var/run/secrets/kubernetes.io/serviceaccount from demo-prometheus-node-exporter-token-kqd9h (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
proc:
Type: HostPath (bare host directory volume)
Path: /proc
HostPathType:
sys:
Type: HostPath (bare host directory volume)
Path: /sys
HostPathType:
demo-prometheus-node-exporter-token-kqd9h:
Type: Secret (a volume populated by a Secret)
SecretName: demo-prometheus-node-exporter-token-kqd9h
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: :NoSchedule op=Exists
node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/network-unavailable:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events: <none>
////////demo-prometheus-node-exporter-nvc92
Name: demo-prometheus-node-exporter-nvc92
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000010/10.240.0.7
Start Time: Tue, 13 Apr 2021 19:29:50 +0800
Labels: app=prometheus-node-exporter
chart=prometheus-node-exporter-1.10.0
controller-revision-hash=6d858f4cd4
heritage=Helm
jobLabel=node-exporter
pod-template-generation=1
release=demo
Annotations: <none>
Status: Running
IP: 10.240.0.7
IPs:
IP: 10.240.0.7
Controlled By: DaemonSet/demo-prometheus-node-exporter
Containers:
node-exporter:
Container ID: docker://60404eb57125fe6062bc56f01e0f3279042dfeec21b71102d05916200b09dfeb
Image: quay.io/prometheus/node-exporter:v1.0.0
Image ID: docker-pullable://quay.io/prometheus/node-exporter@sha256:8a3a33cad0bd33650ba7287a7ec94327d8e47ddf7845c569c80b5c4b20d49d36
Port: 9100/TCP
Host Port: 9100/TCP
Args:
--path.procfs=/host/proc
--path.sysfs=/host/sys
--web.listen-address=$(HOST_IP):9100
--collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
--collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
State: Running
Started: Sat, 17 Apr 2021 15:22:13 +0800
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 16 Apr 2021 11:49:30 +0800
Finished: Sat, 17 Apr 2021 15:21:58 +0800
Ready: True
Restart Count: 3
Liveness: http-get http://:9100/ delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:9100/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
HOST_IP: 0.0.0.0
Mounts:
/host/proc from proc (ro)
/host/sys from sys (ro)
/var/run/secrets/kubernetes.io/serviceaccount from demo-prometheus-node-exporter-token-kqd9h (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
proc:
Type: HostPath (bare host directory volume)
Path: /proc
HostPathType:
sys:
Type: HostPath (bare host directory volume)
Path: /sys
HostPathType:
demo-prometheus-node-exporter-token-kqd9h:
Type: Secret (a volume populated by a Secret)
SecretName: demo-prometheus-node-exporter-token-kqd9h
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: :NoSchedule op=Exists
node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/network-unavailable:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events: <none>
////////demo-prometheus-node-exporter-pndng
Name: demo-prometheus-node-exporter-pndng
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/10.240.0.6
Start Time: Tue, 13 Apr 2021 19:29:50 +0800
Labels: app=prometheus-node-exporter
chart=prometheus-node-exporter-1.10.0
controller-revision-hash=6d858f4cd4
heritage=Helm
jobLabel=node-exporter
pod-template-generation=1
release=demo
Annotations: <none>
Status: Running
IP: 10.240.0.6
IPs:
IP: 10.240.0.6
Controlled By: DaemonSet/demo-prometheus-node-exporter
Containers:
node-exporter:
Container ID: docker://6a5aec72a36ebc1452a3d6ed7411621fc98de0d9612898307a71f84baf638c57
Image: quay.io/prometheus/node-exporter:v1.0.0
Image ID: docker-pullable://quay.io/prometheus/node-exporter@sha256:8a3a33cad0bd33650ba7287a7ec94327d8e47ddf7845c569c80b5c4b20d49d36
Port: 9100/TCP
Host Port: 9100/TCP
Args:
--path.procfs=/host/proc
--path.sysfs=/host/sys
--web.listen-address=$(HOST_IP):9100
--collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
--collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
State: Running
Started: Sat, 17 Apr 2021 15:22:00 +0800
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 16 Apr 2021 11:49:30 +0800
Finished: Sat, 17 Apr 2021 15:21:53 +0800
Ready: True
Restart Count: 3
Liveness: http-get http://:9100/ delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:9100/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
HOST_IP: 0.0.0.0
Mounts:
/host/proc from proc (ro)
/host/sys from sys (ro)
/var/run/secrets/kubernetes.io/serviceaccount from demo-prometheus-node-exporter-token-kqd9h (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
proc:
Type: HostPath (bare host directory volume)
Path: /proc
HostPathType:
sys:
Type: HostPath (bare host directory volume)
Path: /sys
HostPathType:
demo-prometheus-node-exporter-token-kqd9h:
Type: Secret (a volume populated by a Secret)
SecretName: demo-prometheus-node-exporter-token-kqd9h
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: :NoSchedule op=Exists
node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/network-unavailable:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events: <none>
////////demo-prometheus-node-exporter-s5bfk
Name: demo-prometheus-node-exporter-s5bfk
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000011/10.240.0.8
Start Time: Tue, 13 Apr 2021 19:29:50 +0800
Labels: app=prometheus-node-exporter
chart=prometheus-node-exporter-1.10.0
controller-revision-hash=6d858f4cd4
heritage=Helm
jobLabel=node-exporter
pod-template-generation=1
release=demo
Annotations: <none>
Status: Running
IP: 10.240.0.8
IPs:
IP: 10.240.0.8
Controlled By: DaemonSet/demo-prometheus-node-exporter
Containers:
node-exporter:
Container ID: docker://7888dfd31f4e282dc3d430de830b5c066e7b6969647115c90d6a89274f28bb7d
Image: quay.io/prometheus/node-exporter:v1.0.0
Image ID: docker-pullable://quay.io/prometheus/node-exporter@sha256:8a3a33cad0bd33650ba7287a7ec94327d8e47ddf7845c569c80b5c4b20d49d36
Port: 9100/TCP
Host Port: 9100/TCP
Args:
--path.procfs=/host/proc
--path.sysfs=/host/sys
--web.listen-address=$(HOST_IP):9100
--collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
--collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
State: Running
Started: Sat, 17 Apr 2021 15:22:15 +0800
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 16 Apr 2021 11:49:24 +0800
Finished: Sat, 17 Apr 2021 15:21:55 +0800
Ready: True
Restart Count: 3
Liveness: http-get http://:9100/ delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:9100/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
HOST_IP: 0.0.0.0
Mounts:
/host/proc from proc (ro)
/host/sys from sys (ro)
/var/run/secrets/kubernetes.io/serviceaccount from demo-prometheus-node-exporter-token-kqd9h (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
proc:
Type: HostPath (bare host directory volume)
Path: /proc
HostPathType:
sys:
Type: HostPath (bare host directory volume)
Path: /sys
HostPathType:
demo-prometheus-node-exporter-token-kqd9h:
Type: Secret (a volume populated by a Secret)
SecretName: demo-prometheus-node-exporter-token-kqd9h
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: :NoSchedule op=Exists
node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/network-unavailable:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events: <none>
////////demo-prometheus-node-exporter-v2bps
Name: demo-prometheus-node-exporter-v2bps
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000001/10.240.0.5
Start Time: Tue, 13 Apr 2021 19:29:50 +0800
Labels: app=prometheus-node-exporter
chart=prometheus-node-exporter-1.10.0
controller-revision-hash=6d858f4cd4
heritage=Helm
jobLabel=node-exporter
pod-template-generation=1
release=demo
Annotations: <none>
Status: Running
IP: 10.240.0.5
IPs:
IP: 10.240.0.5
Controlled By: DaemonSet/demo-prometheus-node-exporter
Containers:
node-exporter:
Container ID: docker://df0fd78597c51d9adf08c91aff93f018144eec9ef96b91dc4db481e5d6d53c48
Image: quay.io/prometheus/node-exporter:v1.0.0
Image ID: docker-pullable://quay.io/prometheus/node-exporter@sha256:8a3a33cad0bd33650ba7287a7ec94327d8e47ddf7845c569c80b5c4b20d49d36
Port: 9100/TCP
Host Port: 9100/TCP
Args:
--path.procfs=/host/proc
--path.sysfs=/host/sys
--web.listen-address=$(HOST_IP):9100
--collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
--collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
State: Running
Started: Sat, 17 Apr 2021 15:22:09 +0800
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 16 Apr 2021 11:49:24 +0800
Finished: Sat, 17 Apr 2021 15:21:58 +0800
Ready: True
Restart Count: 3
Liveness: http-get http://:9100/ delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:9100/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
HOST_IP: 0.0.0.0
Mounts:
/host/proc from proc (ro)
/host/sys from sys (ro)
/var/run/secrets/kubernetes.io/serviceaccount from demo-prometheus-node-exporter-token-kqd9h (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
proc:
Type: HostPath (bare host directory volume)
Path: /proc
HostPathType:
sys:
Type: HostPath (bare host directory volume)
Path: /sys
HostPathType:
demo-prometheus-node-exporter-token-kqd9h:
Type: Secret (a volume populated by a Secret)
SecretName: demo-prometheus-node-exporter-token-kqd9h
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: :NoSchedule op=Exists
node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/network-unavailable:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events: <none>
////////demo-prometheus-operator-operator-79c5f74b5f-w6svn
Name: demo-prometheus-operator-operator-79c5f74b5f-w6svn
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000004/10.240.0.15
Start Time: Tue, 13 Apr 2021 19:29:50 +0800
Labels: app=prometheus-operator-operator
chart=prometheus-operator-9.3.2
heritage=Helm
pod-template-hash=79c5f74b5f
release=demo
Annotations: <none>
Status: Running
IP: 10.244.8.245
IPs:
IP: 10.244.8.245
Controlled By: ReplicaSet/demo-prometheus-operator-operator-79c5f74b5f
Containers:
prometheus-operator:
Container ID: docker://402d8b7b843f1301ff6de47fe0eb04384d726a05f86bf9ad5f4416f4c2947d2e
Image: quay.io/coreos/prometheus-operator:v0.38.1
Image ID: docker-pullable://quay.io/coreos/prometheus-operator@sha256:62b8cf466e9b238a9fcf0bcba74562c8833e7451042321e323a46de3f1dbe1bc
Port: 8080/TCP
Host Port: 0/TCP
Args:
--manage-crds=true
--kubelet-service=kube-system/demo-prometheus-operator-kubelet
--logtostderr=true
--localhost=127.0.0.1
--prometheus-config-reloader=quay.io/coreos/prometheus-config-reloader:v0.38.1
--config-reloader-image=docker.io/jimmidyson/configmap-reload:v0.3.0
--config-reloader-cpu=100m
--config-reloader-memory=25Mi
State: Running
Started: Sat, 17 Apr 2021 15:22:20 +0800
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 16 Apr 2021 11:49:20 +0800
Finished: Sat, 17 Apr 2021 15:22:06 +0800
Ready: True
Restart Count: 4
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from demo-prometheus-operator-operator-token-d8ps4 (ro)
tls-proxy:
Container ID: docker://5e660bc0e15d2107a36c2bed2c92cf9b9a570edca2c11da2aaf2d3b644459490
Image: squareup/ghostunnel:v1.5.2
Image ID: docker-pullable://squareup/ghostunnel@sha256:70f4cf270425dee074f49626ec63fc96e6712e9c0eedf127e7254e8132d25063
Port: 8443/TCP
Host Port: 0/TCP
Args:
server
--listen=:8443
--target=127.0.0.1:8080
--key=cert/key
--cert=cert/cert
--disable-authentication
State: Running
Started: Sat, 17 Apr 2021 15:22:21 +0800
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 16 Apr 2021 11:49:23 +0800
Finished: Sat, 17 Apr 2021 15:22:06 +0800
Ready: True
Restart Count: 3
Environment: <none>
Mounts:
/cert from tls-proxy-secret (ro)
/var/run/secrets/kubernetes.io/serviceaccount from demo-prometheus-operator-operator-token-d8ps4 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
tls-proxy-secret:
Type: Secret (a volume populated by a Secret)
SecretName: demo-prometheus-operator-admission
Optional: false
demo-prometheus-operator-operator-token-d8ps4:
Type: Secret (a volume populated by a Secret)
SecretName: demo-prometheus-operator-operator-token-d8ps4
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-broker-75bf8774cc-m74fb
Name: druid-broker-75bf8774cc-m74fb
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000000/10.240.0.4
Start Time: Mon, 12 Apr 2021 18:48:16 +0800
Labels: app=druid
component=broker
pod-template-hash=75bf8774cc
release=druid
Annotations: <none>
Status: Running
IP: 10.244.2.170
IPs:
IP: 10.244.2.170
Controlled By: ReplicaSet/druid-broker-75bf8774cc
Containers:
druid:
Container ID: docker://09db818c08dbe7816f4ca0071955e6e06efc03aa974529e77511bf5c87e43f0e
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8082/TCP
Host Port: 0/TCP
Args:
broker
State: Running
Started: Sat, 17 Apr 2021 15:22:18 +0800
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 16 Apr 2021 11:49:21 +0800
Finished: Sat, 17 Apr 2021 15:22:02 +0800
Ready: True
Restart Count: 9
Liveness: http-get http://:8082/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8082/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_MAXDIRECTMEMORYSIZE: 400m
DRUID_XMS: 512m
DRUID_XMX: 512m
druid_processing_buffer_sizeBytes: 50000000
druid_processing_numMergeBuffers: 1
druid_processing_numThreads: 3
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-broker-75bf8774cc-ncsmx
Name: druid-broker-75bf8774cc-ncsmx
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000010/10.240.0.7
Start Time: Tue, 13 Apr 2021 09:50:26 +0800
Labels: app=druid
component=broker
pod-template-hash=75bf8774cc
release=druid
Annotations: <none>
Status: Running
IP: 10.244.3.171
IPs:
IP: 10.244.3.171
Controlled By: ReplicaSet/druid-broker-75bf8774cc
Containers:
druid:
Container ID: docker://5420d1e93ee2b0cbdbae021407b10d0fb2a82f8ddd5431a77f4338aa14d15d8e
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8082/TCP
Host Port: 0/TCP
Args:
broker
State: Running
Started: Sat, 17 Apr 2021 15:22:11 +0800
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 16 Apr 2021 11:49:28 +0800
Finished: Sat, 17 Apr 2021 15:21:58 +0800
Ready: True
Restart Count: 6
Liveness: http-get http://:8082/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8082/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_MAXDIRECTMEMORYSIZE: 400m
DRUID_XMS: 512m
DRUID_XMX: 512m
druid_processing_buffer_sizeBytes: 50000000
druid_processing_numMergeBuffers: 1
druid_processing_numThreads: 3
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-coordinator-85599f58f6-69692
Name: druid-coordinator-85599f58f6-69692
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000010/10.240.0.7
Start Time: Tue, 13 Apr 2021 09:50:26 +0800
Labels: app=druid
component=coordinator
pod-template-hash=85599f58f6
release=druid
Annotations: <none>
Status: Running
IP: 10.244.3.172
IPs:
IP: 10.244.3.172
Controlled By: ReplicaSet/druid-coordinator-85599f58f6
Containers:
druid:
Container ID: docker://5cc0cef4b5ee18c021299946d573086a18b96b40bc93d63b659e7fbc0a56cb32
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8081/TCP
Host Port: 0/TCP
Args:
coordinator
State: Running
Started: Sat, 17 Apr 2021 18:37:43 +0800
Last State: Terminated
Reason: Error
Exit Code: 143
Started: Sat, 17 Apr 2021 18:36:13 +0800
Finished: Sat, 17 Apr 2021 18:37:40 +0800
Ready: True
Restart Count: 13
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_XMS: 256m
DRUID_XMX: 256m
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 9m34s (x64 over 3h36m) kubelet Liveness probe failed: Get http://10.244.3.172:8081/status/health: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Warning Unhealthy 94s (x97 over 3h36m) kubelet Readiness probe failed: Get http://10.244.3.172:8081/status/health: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
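
The events above show the coordinator's /status/health endpoint timing out (the probe timeout is 1s) rather than refusing connections, so the process is alive but slow to answer; with DRUID_XMX set to 256m this may be GC pressure as much as a genuine hang. A minimal sketch for narrowing it down, using only standard kubectl (whether wget ships in the apache/druid image is an assumption):

# Logs from the previously killed container, then the current one
kubectl logs druid-coordinator-85599f58f6-69692 --previous
kubectl logs druid-coordinator-85599f58f6-69692 --tail=200

# Hit the health endpoint from inside the pod against the same 1s budget
kubectl exec druid-coordinator-85599f58f6-69692 -- \
  wget -qO- --timeout=1 http://localhost:8081/status/health

If the in-pod request also stalls, raising the probe timeoutSeconds or the coordinator heap is the usual next step; if it returns instantly, look at node-level pressure instead.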
////////druid-historical-0
Name: druid-historical-0
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000001/10.240.0.5
Start Time: Fri, 16 Apr 2021 11:49:33 +0800
Labels: app=druid
component=historical
controller-revision-hash=druid-historical-bd86bd886
release=druid
statefulset.kubernetes.io/pod-name=druid-historical-0
Annotations: <none>
Status: Running
IP: 10.244.1.185
IPs:
IP: 10.244.1.185
Controlled By: StatefulSet/druid-historical
Containers:
druid:
Container ID: docker://3f4f35e5cf6cc10f4d5973238962e5e3b135bb50e63f0a588d31a47afae5f199
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8083/TCP
Host Port: 0/TCP
Args:
historical
State: Running
Started: Sat, 17 Apr 2021 17:01:11 +0800
Last State: Terminated
Reason: Error
Exit Code: 3
Started: Sat, 17 Apr 2021 15:24:08 +0800
Finished: Sat, 17 Apr 2021 17:01:10 +0800
Ready: True
Restart Count: 6
Liveness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_MAXDIRECTMEMORYSIZE: 5g
DRUID_XMS: 512m
DRUID_XMX: 512m
druid_historical_cache_populateCache: true
druid_historical_cache_useCache: true
druid_processing_buffer_sizeBytes: 200000000
druid_processing_numMergeBuffers: 1
druid_processing_numThreads: 3
Mounts:
/opt/druid/var/druid/ from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-historical-0
ReadOnly: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-historical-1
Name: druid-historical-1
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/10.240.0.6
Start Time: Fri, 16 Apr 2021 11:53:00 +0800
Labels: app=druid
component=historical
controller-revision-hash=druid-historical-bd86bd886
release=druid
statefulset.kubernetes.io/pod-name=druid-historical-1
Annotations: <none>
Status: Running
IP: 10.244.0.171
IPs:
IP: 10.244.0.171
Controlled By: StatefulSet/druid-historical
Containers:
druid:
Container ID: docker://09c4935a708b67e970b849eb6b737622166c1182a497aaeff6d6755ab442c895
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8083/TCP
Host Port: 0/TCP
Args:
historical
State: Running
Started: Sat, 17 Apr 2021 17:01:02 +0800
Last State: Terminated
Reason: Error
Exit Code: 3
Started: Sat, 17 Apr 2021 15:23:54 +0800
Finished: Sat, 17 Apr 2021 17:01:01 +0800
Ready: True
Restart Count: 3
Liveness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_MAXDIRECTMEMORYSIZE: 5g
DRUID_XMS: 512m
DRUID_XMX: 512m
druid_historical_cache_populateCache: true
druid_historical_cache_useCache: true
druid_processing_buffer_sizeBytes: 200000000
druid_processing_numMergeBuffers: 1
druid_processing_numThreads: 3
Mounts:
/opt/druid/var/druid/ from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-historical-1
ReadOnly: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-historical-10
Name: druid-historical-10
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/10.240.0.6
Start Time: Fri, 16 Apr 2021 11:58:21 +0800
Labels: app=druid
component=historical
controller-revision-hash=druid-historical-bd86bd886
release=druid
statefulset.kubernetes.io/pod-name=druid-historical-10
Annotations: <none>
Status: Running
IP: 10.244.0.172
IPs:
IP: 10.244.0.172
Controlled By: StatefulSet/druid-historical
Containers:
druid:
Container ID: docker://905fae710ba62537c3c31e6f89cfa015bc846643d2218035c42e287225ae7b67
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8083/TCP
Host Port: 0/TCP
Args:
historical
State: Running
Started: Sat, 17 Apr 2021 17:01:01 +0800
Ready: True
Restart Count: 3
Liveness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_MAXDIRECTMEMORYSIZE: 5g
DRUID_XMS: 512m
DRUID_XMX: 512m
druid_historical_cache_populateCache: true
druid_historical_cache_useCache: true
druid_processing_buffer_sizeBytes: 200000000
druid_processing_numMergeBuffers: 1
druid_processing_numThreads: 3
Mounts:
/opt/druid/var/druid/ from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-historical-10
ReadOnly: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-historical-11
Name: druid-historical-11
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000010/10.240.0.7
Start Time: Tue, 13 Apr 2021 19:57:45 +0800
Labels: app=druid
component=historical
controller-revision-hash=druid-historical-bd86bd886
release=druid
statefulset.kubernetes.io/pod-name=druid-historical-11
Annotations: <none>
Status: Running
IP: 10.244.3.178
IPs:
IP: 10.244.3.178
Controlled By: StatefulSet/druid-historical
Containers:
druid:
Container ID: docker://33773b9e9718b40ed4a02fce9a3114e7331a1ea82c0eca9a28256790b887db8e
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8083/TCP
Host Port: 0/TCP
Args:
historical
State: Running
Started: Sat, 17 Apr 2021 17:01:23 +0800
Last State: Terminated
Reason: Error
Exit Code: 3
Started: Sat, 17 Apr 2021 15:24:05 +0800
Finished: Sat, 17 Apr 2021 17:01:20 +0800
Ready: True
Restart Count: 8
Liveness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_MAXDIRECTMEMORYSIZE: 5g
DRUID_XMS: 512m
DRUID_XMX: 512m
druid_historical_cache_populateCache: true
druid_historical_cache_useCache: true
druid_processing_buffer_sizeBytes: 200000000
druid_processing_numMergeBuffers: 1
druid_processing_numThreads: 3
Mounts:
/opt/druid/var/druid/ from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-historical-11
ReadOnly: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-historical-12
Name: druid-historical-12
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000011/10.240.0.8
Start Time: Tue, 13 Apr 2021 19:55:56 +0800
Labels: app=druid
component=historical
controller-revision-hash=druid-historical-bd86bd886
release=druid
statefulset.kubernetes.io/pod-name=druid-historical-12
Annotations: <none>
Status: Running
IP: 10.244.4.173
IPs:
IP: 10.244.4.173
Controlled By: StatefulSet/druid-historical
Containers:
druid:
Container ID: docker://2b6d27a9fcc22d6887f652ab480e35c57fc3f40de6125162372b5921b47a6701
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8083/TCP
Host Port: 0/TCP
Args:
historical
State: Running
Started: Sat, 17 Apr 2021 17:01:17 +0800
Last State: Terminated
Reason: Error
Exit Code: 3
Started: Sat, 17 Apr 2021 15:25:33 +0800
Finished: Sat, 17 Apr 2021 17:01:16 +0800
Ready: True
Restart Count: 10
Liveness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_MAXDIRECTMEMORYSIZE: 5g
DRUID_XMS: 512m
DRUID_XMX: 512m
druid_historical_cache_populateCache: true
druid_historical_cache_useCache: true
druid_processing_buffer_sizeBytes: 200000000
druid_processing_numMergeBuffers: 1
druid_processing_numThreads: 3
Mounts:
/opt/druid/var/druid/ from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-historical-12
ReadOnly: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-historical-13
Name: druid-historical-13
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/10.240.0.6
Start Time: Fri, 16 Apr 2021 11:59:47 +0800
Labels: app=druid
component=historical
controller-revision-hash=druid-historical-bd86bd886
release=druid
statefulset.kubernetes.io/pod-name=druid-historical-13
Annotations: <none>
Status: Running
IP: 10.244.0.170
IPs:
IP: 10.244.0.170
Controlled By: StatefulSet/druid-historical
Containers:
druid:
Container ID: docker://d787405655c02e6bd1173f8262a329abe434b91865fe3d85d55f351f6b0038a4
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8083/TCP
Host Port: 0/TCP
Args:
historical
State: Running
Started: Sat, 17 Apr 2021 17:00:59 +0800
Last State: Terminated
Reason: Error
Exit Code: 3
Started: Sat, 17 Apr 2021 15:23:48 +0800
Finished: Sat, 17 Apr 2021 17:00:59 +0800
Ready: True
Restart Count: 3
Liveness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_MAXDIRECTMEMORYSIZE: 5g
DRUID_XMS: 512m
DRUID_XMX: 512m
druid_historical_cache_populateCache: true
druid_historical_cache_useCache: true
druid_processing_buffer_sizeBytes: 200000000
druid_processing_numMergeBuffers: 1
druid_processing_numThreads: 3
Mounts:
/opt/druid/var/druid/ from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-historical-13
ReadOnly: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-historical-14
Name: druid-historical-14
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000000/10.240.0.4
Start Time: Tue, 13 Apr 2021 19:52:34 +0800
Labels: app=druid
component=historical
controller-revision-hash=druid-historical-bd86bd886
release=druid
statefulset.kubernetes.io/pod-name=druid-historical-14
Annotations: <none>
Status: Running
IP: 10.244.2.178
IPs:
IP: 10.244.2.178
Controlled By: StatefulSet/druid-historical
Containers:
druid:
Container ID: docker://1c0537939301008232fa62d580163239ab7e181378cce7dfbd79c13c302d18c2
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8083/TCP
Host Port: 0/TCP
Args:
historical
State: Running
Started: Sat, 17 Apr 2021 17:01:09 +0800
Last State: Terminated
Reason: Error
Exit Code: 3
Started: Sat, 17 Apr 2021 15:24:11 +0800
Finished: Sat, 17 Apr 2021 17:01:07 +0800
Ready: True
Restart Count: 9
Liveness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_MAXDIRECTMEMORYSIZE: 5g
DRUID_XMS: 512m
DRUID_XMX: 512m
druid_historical_cache_populateCache: true
druid_historical_cache_useCache: true
druid_processing_buffer_sizeBytes: 200000000
druid_processing_numMergeBuffers: 1
druid_processing_numThreads: 3
Mounts:
/opt/druid/var/druid/ from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-historical-14
ReadOnly: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-historical-15
Name: druid-historical-15
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000011/10.240.0.8
Start Time: Tue, 13 Apr 2021 19:51:16 +0800
Labels: app=druid
component=historical
controller-revision-hash=druid-historical-bd86bd886
release=druid
statefulset.kubernetes.io/pod-name=druid-historical-15
Annotations: <none>
Status: Running
IP: 10.244.4.174
IPs:
IP: 10.244.4.174
Controlled By: StatefulSet/druid-historical
Containers:
druid:
Container ID: docker://962c9619d5742366c4dee5b53c24667b48c8ff11f24d10a8281263f2d2fc8190
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8083/TCP
Host Port: 0/TCP
Args:
historical
State: Running
Started: Sat, 17 Apr 2021 17:01:16 +0800
Last State: Terminated
Reason: Error
Exit Code: 3
Started: Sat, 17 Apr 2021 15:25:33 +0800
Finished: Sat, 17 Apr 2021 17:01:14 +0800
Ready: True
Restart Count: 10
Liveness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_MAXDIRECTMEMORYSIZE: 5g
DRUID_XMS: 512m
DRUID_XMX: 512m
druid_historical_cache_populateCache: true
druid_historical_cache_useCache: true
druid_processing_buffer_sizeBytes: 200000000
druid_processing_numMergeBuffers: 1
druid_processing_numThreads: 3
Mounts:
/opt/druid/var/druid/ from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-historical-15
ReadOnly: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-historical-16
Name: druid-historical-16
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000004/10.240.0.15
Start Time: Tue, 13 Apr 2021 19:49:59 +0800
Labels: app=druid
component=historical
controller-revision-hash=druid-historical-bd86bd886
release=druid
statefulset.kubernetes.io/pod-name=druid-historical-16
Annotations: <none>
Status: Running
IP: 10.244.8.251
IPs:
IP: 10.244.8.251
Controlled By: StatefulSet/druid-historical
Containers:
druid:
Container ID: docker://d2fb8682aac4ce5abd57a7c82825e5b90ad02e9742eb2b3f697b48575f330fca
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8083/TCP
Host Port: 0/TCP
Args:
historical
State: Running
Started: Sat, 17 Apr 2021 17:01:09 +0800
Last State: Terminated
Reason: Error
Exit Code: 3
Started: Sat, 17 Apr 2021 15:24:12 +0800
Finished: Sat, 17 Apr 2021 17:01:08 +0800
Ready: True
Restart Count: 9
Liveness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_MAXDIRECTMEMORYSIZE: 5g
DRUID_XMS: 512m
DRUID_XMX: 512m
druid_historical_cache_populateCache: true
druid_historical_cache_useCache: true
druid_processing_buffer_sizeBytes: 200000000
druid_processing_numMergeBuffers: 1
druid_processing_numThreads: 3
Mounts:
/opt/druid/var/druid/ from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-historical-16
ReadOnly: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-historical-17
Name: druid-historical-17
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000010/10.240.0.7
Start Time: Tue, 13 Apr 2021 19:48:45 +0800
Labels: app=druid
component=historical
controller-revision-hash=druid-historical-bd86bd886
release=druid
statefulset.kubernetes.io/pod-name=druid-historical-17
Annotations: <none>
Status: Running
IP: 10.244.3.179
IPs:
IP: 10.244.3.179
Controlled By: StatefulSet/druid-historical
Containers:
druid:
Container ID: docker://1fec1fa621bdcd3aceb9cabce68c841b04e00f5bee179f0c651ac657a22e6f39
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8083/TCP
Host Port: 0/TCP
Args:
historical
State: Running
Started: Sat, 17 Apr 2021 17:01:06 +0800
Last State: Terminated
Reason: Error
Exit Code: 3
Started: Sat, 17 Apr 2021 15:24:00 +0800
Finished: Sat, 17 Apr 2021 17:01:05 +0800
Ready: True
Restart Count: 8
Liveness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_MAXDIRECTMEMORYSIZE: 5g
DRUID_XMS: 512m
DRUID_XMX: 512m
druid_historical_cache_populateCache: true
druid_historical_cache_useCache: true
druid_processing_buffer_sizeBytes: 200000000
druid_processing_numMergeBuffers: 1
druid_processing_numThreads: 3
Mounts:
/opt/druid/var/druid/ from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-historical-17
ReadOnly: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-historical-18
Name: druid-historical-18
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000011/10.240.0.8
Start Time: Tue, 13 Apr 2021 19:46:24 +0800
Labels: app=druid
component=historical
controller-revision-hash=druid-historical-bd86bd886
release=druid
statefulset.kubernetes.io/pod-name=druid-historical-18
Annotations: <none>
Status: Running
IP: 10.244.4.171
IPs:
IP: 10.244.4.171
Controlled By: StatefulSet/druid-historical
Containers:
druid:
Container ID: docker://b2671747cb3c00c3360db0b9344ca0e797b7f40b29023854f24c594cad39c323
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8083/TCP
Host Port: 0/TCP
Args:
historical
State: Running
Started: Sat, 17 Apr 2021 17:01:12 +0800
Last State: Terminated
Reason: Error
Exit Code: 3
Started: Sat, 17 Apr 2021 15:25:35 +0800
Finished: Sat, 17 Apr 2021 17:01:10 +0800
Ready: True
Restart Count: 10
Liveness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_MAXDIRECTMEMORYSIZE: 5g
DRUID_XMS: 512m
DRUID_XMX: 512m
druid_historical_cache_populateCache: true
druid_historical_cache_useCache: true
druid_processing_buffer_sizeBytes: 200000000
druid_processing_numMergeBuffers: 1
druid_processing_numThreads: 3
Mounts:
/opt/druid/var/druid/ from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-historical-18
ReadOnly: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-historical-19
Name: druid-historical-19
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000004/10.240.0.15
Start Time: Tue, 13 Apr 2021 19:44:12 +0800
Labels: app=druid
component=historical
controller-revision-hash=druid-historical-bd86bd886
release=druid
statefulset.kubernetes.io/pod-name=druid-historical-19
Annotations: <none>
Status: Running
IP: 10.244.8.250
IPs:
IP: 10.244.8.250
Controlled By: StatefulSet/druid-historical
Containers:
druid:
Container ID: docker://5f8140b3bc4926da97b506020e120f389ebbed627f67de7896e04693e83666c5
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8083/TCP
Host Port: 0/TCP
Args:
historical
State: Running
Started: Sat, 17 Apr 2021 17:01:16 +0800
Last State: Terminated
Reason: Error
Exit Code: 3
Started: Sat, 17 Apr 2021 15:25:33 +0800
Finished: Sat, 17 Apr 2021 17:01:15 +0800
Ready: True
Restart Count: 10
Liveness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_MAXDIRECTMEMORYSIZE: 5g
DRUID_XMS: 512m
DRUID_XMX: 512m
druid_historical_cache_populateCache: true
druid_historical_cache_useCache: true
druid_processing_buffer_sizeBytes: 200000000
druid_processing_numMergeBuffers: 1
druid_processing_numThreads: 3
Mounts:
/opt/druid/var/druid/ from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-historical-19
ReadOnly: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
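//////// note: the environment above pins down what each historical is entitled to: a 512m heap plus up to 5g of direct memory, while the pod itself declares nothing to Kubernetes (QoS Class: BestEffort). Druid's sizing rule for processing direct memory is sizeBytes x (numThreads + numMergeBuffers + 1), so the configured buffers only need about 1 GB of that 5g cap:

# Processing direct memory actually needed by the settings above:
# druid_processing_buffer_sizeBytes * (numThreads + numMergeBuffers + 1)
echo $(( 200000000 * (3 + 1 + 1) ))   # 1000000000 bytes, ~1 GiB of the 5g cap

The remaining headroom is still real memory the scheduler cannot account for, because no container here requests any; that mismatch is what surfaces later in this dump as node MemoryPressure evictions.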
////////druid-historical-2
Name: druid-historical-2
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000000/10.240.0.4
Start Time: Tue, 13 Apr 2021 20:12:38 +0800
Labels: app=druid
component=historical
controller-revision-hash=druid-historical-bd86bd886
release=druid
statefulset.kubernetes.io/pod-name=druid-historical-2
Annotations: <none>
Status: Running
IP: 10.244.2.177
IPs:
IP: 10.244.2.177
Controlled By: StatefulSet/druid-historical
Containers:
druid:
Container ID: docker://6ffbf9a1efe7f05ee7ac17e06300a0264120716cab511d994cd483de1d2b7344
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8083/TCP
Host Port: 0/TCP
Args:
historical
State: Running
Started: Sat, 17 Apr 2021 17:01:04 +0800
Last State: Terminated
Reason: Error
Exit Code: 3
Started: Sat, 17 Apr 2021 15:24:05 +0800
Finished: Sat, 17 Apr 2021 17:01:04 +0800
Ready: True
Restart Count: 9
Liveness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_MAXDIRECTMEMORYSIZE: 5g
DRUID_XMS: 512m
DRUID_XMX: 512m
druid_historical_cache_populateCache: true
druid_historical_cache_useCache: true
druid_processing_buffer_sizeBytes: 200000000
druid_processing_numMergeBuffers: 1
druid_processing_numThreads: 3
Mounts:
/opt/druid/var/druid/ from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-historical-2
ReadOnly: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
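//////// note: one more detail shared by every historical: the probes allow timeout=1s with #failure=3 on a 10s period, so three slow responses inside roughly 30s restart the container, which is easy to trip on a JVM that pauses under GC or heavy segment loading. A hedged sketch for loosening the liveness timeout, assuming the StatefulSet is named druid-historical as the pod names and Controlled By lines suggest:

# Raise the liveness probe timeout from 1s to 5s on the first container:
kubectl patch statefulset druid-historical --type=json -p='[
  {"op": "replace", "path": "/spec/template/spec/containers/0/livenessProbe/timeoutSeconds", "value": 5}
]'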
////////druid-historical-3
Name: druid-historical-3
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000001/10.240.0.5
Start Time: Fri, 16 Apr 2021 11:55:14 +0800
Labels: app=druid
component=historical
controller-revision-hash=druid-historical-bd86bd886
release=druid
statefulset.kubernetes.io/pod-name=druid-historical-3
Annotations: <none>
Status: Running
IP: 10.244.1.184
IPs:
IP: 10.244.1.184
Controlled By: StatefulSet/druid-historical
Containers:
druid:
Container ID: docker://68821933d463938abf1756b631970028cecaccbb873f68a3ffaf712bbf9c0477
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8083/TCP
Host Port: 0/TCP
Args:
historical
State: Running
Started: Sat, 17 Apr 2021 17:01:08 +0800
Last State: Terminated
Reason: Error
Exit Code: 3
Started: Sat, 17 Apr 2021 15:24:09 +0800
Finished: Sat, 17 Apr 2021 17:01:06 +0800
Ready: True
Restart Count: 3
Liveness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_MAXDIRECTMEMORYSIZE: 5g
DRUID_XMS: 512m
DRUID_XMX: 512m
druid_historical_cache_populateCache: true
druid_historical_cache_useCache: true
druid_processing_buffer_sizeBytes: 200000000
druid_processing_numMergeBuffers: 1
druid_processing_numThreads: 3
Mounts:
/opt/druid/var/druid/ from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-historical-3
ReadOnly: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-historical-4
Name: druid-historical-4
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000004/10.240.0.15
Start Time: Tue, 13 Apr 2021 20:09:19 +0800
Labels: app=druid
component=historical
controller-revision-hash=druid-historical-bd86bd886
release=druid
statefulset.kubernetes.io/pod-name=druid-historical-4
Annotations: <none>
Status: Running
IP: 10.244.8.249
IPs:
IP: 10.244.8.249
Controlled By: StatefulSet/druid-historical
Containers:
druid:
Container ID: docker://595368b0b4347a7b6ea2d11fad48ce4888212d1168d0e6241783f58ad7000ebf
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8083/TCP
Host Port: 0/TCP
Args:
historical
State: Running
Started: Sat, 17 Apr 2021 17:01:11 +0800
Last State: Terminated
Reason: Error
Exit Code: 3
Started: Sat, 17 Apr 2021 15:25:32 +0800
Finished: Sat, 17 Apr 2021 17:01:10 +0800
Ready: True
Restart Count: 10
Liveness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_MAXDIRECTMEMORYSIZE: 5g
DRUID_XMS: 512m
DRUID_XMX: 512m
druid_historical_cache_populateCache: true
druid_historical_cache_useCache: true
druid_processing_buffer_sizeBytes: 200000000
druid_processing_numMergeBuffers: 1
druid_processing_numThreads: 3
Mounts:
/opt/druid/var/druid/ from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-historical-4
ReadOnly: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-historical-5
Name: druid-historical-5
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000010/10.240.0.7
Start Time: Tue, 13 Apr 2021 20:08:01 +0800
Labels: app=druid
component=historical
controller-revision-hash=druid-historical-bd86bd886
release=druid
statefulset.kubernetes.io/pod-name=druid-historical-5
Annotations: <none>
Status: Running
IP: 10.244.3.177
IPs:
IP: 10.244.3.177
Controlled By: StatefulSet/druid-historical
Containers:
druid:
Container ID: docker://e4fcb7b1df325a8d412ac061f89dfa9ada090111885f8c50e04c372d74d53f80
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8083/TCP
Host Port: 0/TCP
Args:
historical
State: Running
Started: Sat, 17 Apr 2021 17:01:09 +0800
Last State: Terminated
Reason: Error
Exit Code: 3
Started: Sat, 17 Apr 2021 15:24:05 +0800
Finished: Sat, 17 Apr 2021 17:01:07 +0800
Ready: True
Restart Count: 8
Liveness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_MAXDIRECTMEMORYSIZE: 5g
DRUID_XMS: 512m
DRUID_XMX: 512m
druid_historical_cache_populateCache: true
druid_historical_cache_useCache: true
druid_processing_buffer_sizeBytes: 200000000
druid_processing_numMergeBuffers: 1
druid_processing_numThreads: 3
Mounts:
/opt/druid/var/druid/ from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-historical-5
ReadOnly: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-historical-6
Name: druid-historical-6
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000011/10.240.0.8
Start Time: Tue, 13 Apr 2021 20:06:06 +0800
Labels: app=druid
component=historical
controller-revision-hash=druid-historical-bd86bd886
release=druid
statefulset.kubernetes.io/pod-name=druid-historical-6
Annotations: <none>
Status: Running
IP: 10.244.4.172
IPs:
IP: 10.244.4.172
Controlled By: StatefulSet/druid-historical
Containers:
druid:
Container ID: docker://af216f1593aabf0eaa25844525c0296f5230ccb3d6ca8360e1ea88be32df359d
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8083/TCP
Host Port: 0/TCP
Args:
historical
State: Running
Started: Sat, 17 Apr 2021 17:01:19 +0800
Last State: Terminated
Reason: Error
Exit Code: 3
Started: Sat, 17 Apr 2021 15:25:29 +0800
Finished: Sat, 17 Apr 2021 17:01:18 +0800
Ready: True
Restart Count: 10
Liveness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_MAXDIRECTMEMORYSIZE: 5g
DRUID_XMS: 512m
DRUID_XMX: 512m
druid_historical_cache_populateCache: true
druid_historical_cache_useCache: true
druid_processing_buffer_sizeBytes: 200000000
druid_processing_numMergeBuffers: 1
druid_processing_numThreads: 3
Mounts:
/opt/druid/var/druid/ from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-historical-6
ReadOnly: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-historical-7
Name: druid-historical-7
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000004/10.240.0.15
Start Time: Tue, 13 Apr 2021 20:03:59 +0800
Labels: app=druid
component=historical
controller-revision-hash=druid-historical-bd86bd886
release=druid
statefulset.kubernetes.io/pod-name=druid-historical-7
Annotations: <none>
Status: Running
IP: 10.244.8.248
IPs:
IP: 10.244.8.248
Controlled By: StatefulSet/druid-historical
Containers:
druid:
Container ID: docker://b489703ca6b7a7cded1f28b27156a268de47c395529749cc68c1a5d0ba5f2f87
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8083/TCP
Host Port: 0/TCP
Args:
historical
State: Running
Started: Sat, 17 Apr 2021 17:01:17 +0800
Ready: True
Restart Count: 11
Liveness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_MAXDIRECTMEMORYSIZE: 5g
DRUID_XMS: 512m
DRUID_XMX: 512m
druid_historical_cache_populateCache: true
druid_historical_cache_useCache: true
druid_processing_buffer_sizeBytes: 200000000
druid_processing_numMergeBuffers: 1
druid_processing_numThreads: 3
Mounts:
/opt/druid/var/druid/ from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-historical-7
ReadOnly: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-historical-8
Name: druid-historical-8
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000000/10.240.0.4
Start Time: Tue, 13 Apr 2021 20:02:35 +0800
Labels: app=druid
component=historical
controller-revision-hash=druid-historical-bd86bd886
release=druid
statefulset.kubernetes.io/pod-name=druid-historical-8
Annotations: <none>
Status: Running
IP: 10.244.2.176
IPs:
IP: 10.244.2.176
Controlled By: StatefulSet/druid-historical
Containers:
druid:
Container ID: docker://e094e2b3cc53849cfe92ea915ecf6b616e8bb3c71f1038a3e4d690d445983fd6
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8083/TCP
Host Port: 0/TCP
Args:
historical
State: Running
Started: Sat, 17 Apr 2021 17:01:06 +0800
Last State: Terminated
Reason: Error
Exit Code: 3
Started: Sat, 17 Apr 2021 15:24:06 +0800
Finished: Sat, 17 Apr 2021 17:01:05 +0800
Ready: True
Restart Count: 9
Liveness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_MAXDIRECTMEMORYSIZE: 5g
DRUID_XMS: 512m
DRUID_XMX: 512m
druid_historical_cache_populateCache: true
druid_historical_cache_useCache: true
druid_processing_buffer_sizeBytes: 200000000
druid_processing_numMergeBuffers: 1
druid_processing_numThreads: 3
Mounts:
/opt/druid/var/druid/ from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-historical-8
ReadOnly: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-historical-9
Name: druid-historical-9
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000001/10.240.0.5
Start Time: Fri, 16 Apr 2021 11:56:50 +0800
Labels: app=druid
component=historical
controller-revision-hash=druid-historical-bd86bd886
release=druid
statefulset.kubernetes.io/pod-name=druid-historical-9
Annotations: <none>
Status: Running
IP: 10.244.1.186
IPs:
IP: 10.244.1.186
Controlled By: StatefulSet/druid-historical
Containers:
druid:
Container ID: docker://4d8dc7a01f1bb4707241b1128346d6bca009ddf05d942b4f9864ce8fe84e3000
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8083/TCP
Host Port: 0/TCP
Args:
historical
State: Running
Started: Sat, 17 Apr 2021 17:01:08 +0800
Last State: Terminated
Reason: Error
Exit Code: 3
Started: Sat, 17 Apr 2021 15:24:14 +0800
Finished: Sat, 17 Apr 2021 17:01:07 +0800
Ready: True
Restart Count: 3
Liveness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8083/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_MAXDIRECTMEMORYSIZE: 5g
DRUID_XMS: 512m
DRUID_XMX: 512m
druid_historical_cache_populateCache: true
druid_historical_cache_useCache: true
druid_processing_buffer_sizeBytes: 200000000
druid_processing_numMergeBuffers: 1
druid_processing_numThreads: 3
Mounts:
/opt/druid/var/druid/ from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-historical-9
ReadOnly: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-middle-manager-0
Name: druid-middle-manager-0
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000004/10.240.0.15
Start Time: Tue, 13 Apr 2021 09:51:40 +0800
Labels: app=druid
component=middle-manager
controller-revision-hash=druid-middle-manager-695966d695
release=druid
statefulset.kubernetes.io/pod-name=druid-middle-manager-0
Annotations: <none>
Status: Running
IP: 10.244.8.247
IPs:
IP: 10.244.8.247
Controlled By: StatefulSet/druid-middle-manager
Containers:
druid:
Container ID: docker://27c09b4cb8f2117c0a6f5baf10edc3cb0227a70f461558000038f03d9f059165
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8091/TCP
Host Port: 0/TCP
Args:
middleManager
State: Running
Started: Sat, 17 Apr 2021 15:22:26 +0800
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 16 Apr 2021 11:49:26 +0800
Finished: Sat, 17 Apr 2021 15:22:06 +0800
Ready: True
Restart Count: 5
Liveness: http-get http://:8091/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8091/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_XMS: 128m
DRUID_XMX: 128m
druid_indexer_fork_property_druid_processing_buffer_sizeBytes: 25000000
druid_indexer_runner_javaOptsArray: ["-server", "-Xms1g", "-Xmx1g", "-XX:MaxDirectMemorySize=2g", "-Duser.timezone=UTC", "-Dfile.encoding=UTF-8", "-XX:+ExitOnOutOfMemoryError", "-XX:+UseG1GC", "-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager"]
Mounts:
/opt/druid/var/druid/ from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-middle-manager-0
ReadOnly: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
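//////// note: the middle managers look small on paper (DRUID_XMS/DRUID_XMX of 128m), but that only covers the supervisor JVM: per druid_indexer_runner_javaOptsArray, every forked indexing task gets its own 1g heap plus 2g of direct memory, roughly 3g per concurrently running task, and again nothing is declared to Kubernetes (BestEffort). The task concurrency that multiplies this lives in the shared druid ConfigMap; assuming it follows Druid's env-var naming (the exact key is a guess), it can be checked with:

# Worker capacity = max concurrent tasks per middle manager; each one costs ~3g here.
kubectl get configmap druid -o yaml | grep -i worker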
////////druid-middle-manager-1
Name: druid-middle-manager-1
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000011/10.240.0.8
Start Time: Tue, 13 Apr 2021 09:54:58 +0800
Labels: app=druid
component=middle-manager
controller-revision-hash=druid-middle-manager-695966d695
release=druid
statefulset.kubernetes.io/pod-name=druid-middle-manager-1
Annotations: <none>
Status: Running
IP: 10.244.4.170
IPs:
IP: 10.244.4.170
Controlled By: StatefulSet/druid-middle-manager
Containers:
druid:
Container ID: docker://5fac279b3bd6c97683d9b92c66c1b1eafe27a5787dcf726fefea0e31c94e5708
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8091/TCP
Host Port: 0/TCP
Args:
middleManager
State: Running
Started: Sat, 17 Apr 2021 15:22:27 +0800
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 16 Apr 2021 11:49:37 +0800
Finished: Sat, 17 Apr 2021 15:21:55 +0800
Ready: True
Restart Count: 6
Liveness: http-get http://:8091/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8091/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_XMS: 128m
DRUID_XMX: 128m
druid_indexer_fork_property_druid_processing_buffer_sizeBytes: 25000000
druid_indexer_runner_javaOptsArray: ["-server", "-Xms1g", "-Xmx1g", "-XX:MaxDirectMemorySize=2g", "-Duser.timezone=UTC", "-Dfile.encoding=UTF-8", "-XX:+ExitOnOutOfMemoryError", "-XX:+UseG1GC", "-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager"]
Mounts:
/opt/druid/var/druid/ from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-middle-manager-1
ReadOnly: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-middle-manager-2
Name: druid-middle-manager-2
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000010/10.240.0.7
Start Time: Tue, 13 Apr 2021 20:08:34 +0800
Labels: app=druid
component=middle-manager
controller-revision-hash=druid-middle-manager-695966d695
release=druid
statefulset.kubernetes.io/pod-name=druid-middle-manager-2
Annotations: <none>
Status: Running
IP: 10.244.3.175
IPs:
IP: 10.244.3.175
Controlled By: StatefulSet/druid-middle-manager
Containers:
druid:
Container ID: docker://31582690b2ee484d5e47bd08f4d78708aa5ce0940dd96a8297de80e0af70b6c3
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8091/TCP
Host Port: 0/TCP
Args:
middleManager
State: Running
Started: Sat, 17 Apr 2021 15:22:17 +0800
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 16 Apr 2021 11:49:34 +0800
Finished: Sat, 17 Apr 2021 15:21:58 +0800
Ready: True
Restart Count: 3
Liveness: http-get http://:8091/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8091/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_XMS: 128m
DRUID_XMX: 128m
druid_indexer_fork_property_druid_processing_buffer_sizeBytes: 25000000
druid_indexer_runner_javaOptsArray: ["-server", "-Xms1g", "-Xmx1g", "-XX:MaxDirectMemorySize=2g", "-Duser.timezone=UTC", "-Dfile.encoding=UTF-8", "-XX:+ExitOnOutOfMemoryError", "-XX:+UseG1GC", "-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager"]
Mounts:
/opt/druid/var/druid/ from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-middle-manager-2
ReadOnly: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-middle-manager-3
Name: druid-middle-manager-3
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000001/10.240.0.5
Start Time: Fri, 16 Apr 2021 11:50:43 +0800
Labels: app=druid
component=middle-manager
controller-revision-hash=druid-middle-manager-695966d695
release=druid
statefulset.kubernetes.io/pod-name=druid-middle-manager-3
Annotations: <none>
Status: Running
IP: 10.244.1.183
IPs:
IP: 10.244.1.183
Controlled By: StatefulSet/druid-middle-manager
Containers:
druid:
Container ID: docker://0037aa72bfc4d0ab24d817bc4f48e8c6462af9953064ac41727a3f1ae20c9956
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8091/TCP
Host Port: 0/TCP
Args:
middleManager
State: Running
Started: Sat, 17 Apr 2021 15:22:21 +0800
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 16 Apr 2021 11:51:19 +0800
Finished: Sat, 17 Apr 2021 15:21:58 +0800
Ready: True
Restart Count: 1
Liveness: http-get http://:8091/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8091/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_XMS: 128m
DRUID_XMX: 128m
druid_indexer_fork_property_druid_processing_buffer_sizeBytes: 25000000
druid_indexer_runner_javaOptsArray: ["-server", "-Xms1g", "-Xmx1g", "-XX:MaxDirectMemorySize=2g", "-Duser.timezone=UTC", "-Dfile.encoding=UTF-8", "-XX:+ExitOnOutOfMemoryError", "-XX:+UseG1GC", "-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager"]
Mounts:
/opt/druid/var/druid/ from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-middle-manager-3
ReadOnly: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-middle-manager-4
Name: druid-middle-manager-4
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000000/10.240.0.4
Start Time: Tue, 13 Apr 2021 20:11:46 +0800
Labels: app=druid
component=middle-manager
controller-revision-hash=druid-middle-manager-695966d695
release=druid
statefulset.kubernetes.io/pod-name=druid-middle-manager-4
Annotations: <none>
Status: Running
IP: 10.244.2.175
IPs:
IP: 10.244.2.175
Controlled By: StatefulSet/druid-middle-manager
Containers:
druid:
Container ID: docker://ed9c897d7186360b52887fb0f1db135e27325b44a20cd931b880a58377a6238a
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8091/TCP
Host Port: 0/TCP
Args:
middleManager
State: Running
Started: Sat, 17 Apr 2021 15:22:26 +0800
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 16 Apr 2021 11:49:26 +0800
Finished: Sat, 17 Apr 2021 15:22:02 +0800
Ready: True
Restart Count: 3
Liveness: http-get http://:8091/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8091/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_XMS: 128m
DRUID_XMX: 128m
druid_indexer_fork_property_druid_processing_buffer_sizeBytes: 25000000
druid_indexer_runner_javaOptsArray: ["-server", "-Xms1g", "-Xmx1g", "-XX:MaxDirectMemorySize=2g", "-Duser.timezone=UTC", "-Dfile.encoding=UTF-8", "-XX:+ExitOnOutOfMemoryError", "-XX:+UseG1GC", "-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager"]
Mounts:
/opt/druid/var/druid/ from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-middle-manager-4
ReadOnly: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-29pqm
Name: druid-overlord-6f5f5fdf5b-29pqm
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Sat, 17 Apr 2021 18:39:07 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
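//////// note: from here on the dump is a long tail of overlord replicas that all ended the same way: Status Failed, Reason Evicted, under node MemoryPressure. Like everything else in this release they run with no requests or limits (QoS Class: BestEffort), which puts them first in the kubelet's eviction order. A minimal sketch of moving them to Burstable QoS in place, assuming the owning Deployment is named druid-overlord (consistent with the druid-overlord-6f5f5fdf5b ReplicaSet) and with placeholder sizes loosely matched to the usage the eviction messages report below:

# Add requests/limits so the kubelet stops treating the overlord as BestEffort:
kubectl set resources deployment druid-overlord --requests=memory=2Gi,cpu=250m --limits=memory=8Gi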
////////druid-overlord-6f5f5fdf5b-2cn5h
Name: druid-overlord-6f5f5fdf5b-2cn5h
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000011/
Start Time: Wed, 14 Apr 2021 17:30:52 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-4b7tc
Name: druid-overlord-6f5f5fdf5b-4b7tc
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000011/
Start Time: Wed, 14 Apr 2021 17:30:49 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-5wbdv
Name: druid-overlord-6f5f5fdf5b-5wbdv
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Fri, 16 Apr 2021 14:02:28 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-6252v
Name: druid-overlord-6f5f5fdf5b-6252v
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Fri, 16 Apr 2021 14:02:31 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-62kz5
Name: druid-overlord-6f5f5fdf5b-62kz5
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000011/
Start Time: Wed, 14 Apr 2021 17:30:53 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-6b55h
Name: druid-overlord-6f5f5fdf5b-6b55h
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Fri, 16 Apr 2021 14:02:32 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-6njb7
Name: druid-overlord-6f5f5fdf5b-6njb7
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Fri, 16 Apr 2021 14:02:29 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-6t4ml
Name: druid-overlord-6f5f5fdf5b-6t4ml
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000000/
Start Time: Fri, 16 Apr 2021 14:02:33 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: The node was low on resource: memory. Container druid was using 6789756Ki, which exceeds its request of 0.
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
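//////// note: this eviction message is the most explicit one in the dump: 6789756Ki is about 6.5 GiB of real use against a declared request of 0, which is exactly the BestEffort problem noted above. The evicted pods also linger as Failed objects until someone deletes them, which is why the same ReplicaSet shows up dozens of times here; they can be listed and cleared with a field selector:

# List, then remove, everything the kubelet has already evicted in this namespace:
kubectl get pods -n default --field-selector=status.phase=Failed
kubectl delete pods -n default --field-selector=status.phase=Failed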
////////druid-overlord-6f5f5fdf5b-78gmt
Name: druid-overlord-6f5f5fdf5b-78gmt
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000011/
Start Time: Wed, 14 Apr 2021 17:30:49 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-7fbjz
Name: druid-overlord-6f5f5fdf5b-7fbjz
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Fri, 16 Apr 2021 14:02:26 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-7nntx
Name: druid-overlord-6f5f5fdf5b-7nntx
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Sat, 17 Apr 2021 18:39:07 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-7tdjf
Name: druid-overlord-6f5f5fdf5b-7tdjf
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Sat, 17 Apr 2021 18:39:07 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-8bztx
Name: druid-overlord-6f5f5fdf5b-8bztx
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Sat, 17 Apr 2021 18:39:08 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-8gq6t
Name: druid-overlord-6f5f5fdf5b-8gq6t
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Fri, 16 Apr 2021 14:02:26 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-8hpcq
Name: druid-overlord-6f5f5fdf5b-8hpcq
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000011/
Start Time: Wed, 14 Apr 2021 17:30:50 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-8pgfz
Name: druid-overlord-6f5f5fdf5b-8pgfz
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000011/
Start Time: Wed, 14 Apr 2021 17:30:50 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-9ckrj
Name: druid-overlord-6f5f5fdf5b-9ckrj
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000011/
Start Time: Wed, 14 Apr 2021 17:30:51 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-9h7lw
Name: druid-overlord-6f5f5fdf5b-9h7lw
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Fri, 16 Apr 2021 11:52:03 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: The node was low on resource: memory. Container druid was using 7711336Ki, which exceeds its request of 0.
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
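NOTE: every druid-overlord replica in this dump runs with QoS Class BestEffort (no CPU or memory request), so under MemoryPressure the kubelet evicts it first; the eviction message above even spells it out ("was using 7711336Ki, which exceeds its request of 0"). A minimal sketch of the fix, assuming the Helm chart behind this "druid" release exposes a standard overlord.resources block (check the chart's values.yaml first; <chart-ref> is a placeholder):

  # give the overlord a memory request/limit so it becomes Burstable instead of BestEffort
  helm upgrade druid <chart-ref> --reuse-values \
    --set overlord.resources.requests.memory=4Gi \
    --set overlord.resources.limits.memory=8Gi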
////////druid-overlord-6f5f5fdf5b-bmxv7
Name: druid-overlord-6f5f5fdf5b-bmxv7
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Sat, 17 Apr 2021 18:39:05 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-clv78
Name: druid-overlord-6f5f5fdf5b-clv78
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Fri, 16 Apr 2021 14:02:31 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-fb8gq
Name: druid-overlord-6f5f5fdf5b-fb8gq
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000011/
Start Time: Wed, 14 Apr 2021 17:30:49 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-fdkvm
Name: druid-overlord-6f5f5fdf5b-fdkvm
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Fri, 16 Apr 2021 14:02:26 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-gpkss
Name: druid-overlord-6f5f5fdf5b-gpkss
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Fri, 16 Apr 2021 14:02:28 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-h69tv
Name: druid-overlord-6f5f5fdf5b-h69tv
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000011/
Start Time: Wed, 14 Apr 2021 17:30:49 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-hjlsw
Name: druid-overlord-6f5f5fdf5b-hjlsw
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Fri, 16 Apr 2021 14:02:26 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-hv8vp
Name: druid-overlord-6f5f5fdf5b-hv8vp
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Fri, 16 Apr 2021 14:02:30 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-j9nxt
Name: druid-overlord-6f5f5fdf5b-j9nxt
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Fri, 16 Apr 2021 14:02:28 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-jbx6c
Name: druid-overlord-6f5f5fdf5b-jbx6c
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Sat, 17 Apr 2021 18:39:05 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-jjz8r
Name: druid-overlord-6f5f5fdf5b-jjz8r
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Sat, 17 Apr 2021 18:39:05 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-jpw4k
Name: druid-overlord-6f5f5fdf5b-jpw4k
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Fri, 16 Apr 2021 14:02:31 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-kxcq9
Name: druid-overlord-6f5f5fdf5b-kxcq9
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Fri, 16 Apr 2021 14:02:30 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-mhpg7
Name: druid-overlord-6f5f5fdf5b-mhpg7
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000011/
Start Time: Wed, 14 Apr 2021 17:30:49 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-mlkn5
Name: druid-overlord-6f5f5fdf5b-mlkn5
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Sat, 17 Apr 2021 18:39:06 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-nh68w
Name: druid-overlord-6f5f5fdf5b-nh68w
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Fri, 16 Apr 2021 14:02:27 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-p622q
Name: druid-overlord-6f5f5fdf5b-p622q
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Sat, 17 Apr 2021 18:39:04 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-p85vc
Name: druid-overlord-6f5f5fdf5b-p85vc
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Sat, 17 Apr 2021 18:39:06 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-pbn29
Name: druid-overlord-6f5f5fdf5b-pbn29
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Sat, 17 Apr 2021 18:39:04 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-pps79
Name: druid-overlord-6f5f5fdf5b-pps79
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Sat, 17 Apr 2021 18:39:04 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-pvztt
Name: druid-overlord-6f5f5fdf5b-pvztt
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Fri, 16 Apr 2021 14:02:27 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-qf5tv
Name: druid-overlord-6f5f5fdf5b-qf5tv
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000011/
Start Time: Wed, 14 Apr 2021 17:30:49 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-rbdcq
Name: druid-overlord-6f5f5fdf5b-rbdcq
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Sat, 17 Apr 2021 18:39:04 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-rd4ft
Name: druid-overlord-6f5f5fdf5b-rd4ft
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Fri, 16 Apr 2021 14:02:32 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-rsmzv
Name: druid-overlord-6f5f5fdf5b-rsmzv
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000011/
Start Time: Wed, 14 Apr 2021 17:30:49 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-sbhsm
Name: druid-overlord-6f5f5fdf5b-sbhsm
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000000/10.240.0.4
Start Time: Sat, 17 Apr 2021 18:39:08 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Running
IP: 10.244.2.179
IPs:
IP: 10.244.2.179
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Container ID: docker://17aedf1e61c1482b4da7ea22ab312cd58691681e8ce266bf60ae24adc7befb24
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
State: Running
Started: Sat, 17 Apr 2021 18:39:11 +0800
Ready: True
Restart Count: 0
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
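NOTE: only -sbhsm above is actually Running; the dozens of Evicted replicas have already been replaced by the ReplicaSet but linger in phase Failed until deleted. One way to clear them out, using the default namespace shown throughout:

  # Evicted pods sit in status.phase=Failed; delete them in one pass
  kubectl delete pod --field-selector=status.phase=Failed -n default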
////////druid-overlord-6f5f5fdf5b-shx8r
Name: druid-overlord-6f5f5fdf5b-shx8r
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000011/
Start Time: Wed, 14 Apr 2021 17:30:53 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-t492b
Name: druid-overlord-6f5f5fdf5b-t492b
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000011/
Start Time: Wed, 14 Apr 2021 14:02:39 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: The node was low on resource: memory. Container druid was using 8982980Ki, which exceeds its request of 0.
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-tbdxd
Name: druid-overlord-6f5f5fdf5b-tbdxd
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Fri, 16 Apr 2021 14:02:26 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-tbltp
Name: druid-overlord-6f5f5fdf5b-tbltp
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Sat, 17 Apr 2021 18:39:04 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-vklhx
Name: druid-overlord-6f5f5fdf5b-vklhx
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000011/
Start Time: Wed, 14 Apr 2021 17:30:49 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-vrnqk
Name: druid-overlord-6f5f5fdf5b-vrnqk
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000011/
Start Time: Wed, 14 Apr 2021 17:30:52 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-w4r4c
Name: druid-overlord-6f5f5fdf5b-w4r4c
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Fri, 16 Apr 2021 14:02:26 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-w9lfs
Name: druid-overlord-6f5f5fdf5b-w9lfs
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Sat, 17 Apr 2021 15:29:31 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: The node was low on resource: memory. Container druid was using 8595028Ki, which exceeds its request of 0.
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-wslvb
Name: druid-overlord-6f5f5fdf5b-wslvb
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000011/
Start Time: Wed, 14 Apr 2021 17:30:51 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-zgrfx
Name: druid-overlord-6f5f5fdf5b-zgrfx
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000011/
Start Time: Wed, 14 Apr 2021 17:30:53 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-zr76f
Name: druid-overlord-6f5f5fdf5b-zr76f
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/
Start Time: Fri, 16 Apr 2021 14:02:26 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-overlord-6f5f5fdf5b-zs8tf
Name: druid-overlord-6f5f5fdf5b-zs8tf
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000000/
Start Time: Wed, 14 Apr 2021 17:30:54 +0800
Labels: app=druid
component=overlord
pod-template-hash=6f5f5fdf5b
release=druid
Annotations: <none>
Status: Failed
Reason: Evicted
Message: The node was low on resource: memory. Container druid was using 9509364Ki, which exceeds its request of 0.
IP:
IPs: <none>
Controlled By: ReplicaSet/druid-overlord-6f5f5fdf5b
Containers:
druid:
Image: apache/druid:0.20.2
Port: 8081/TCP
Host Port: 0/TCP
Args:
overlord
Liveness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-postgresql-0
Name: druid-postgresql-0
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000010/10.240.0.7
Start Time: Tue, 13 Apr 2021 09:50:18 +0800
Labels: app=postgresql
chart=postgresql-8.6.4
controller-revision-hash=druid-postgresql-fbc98478c
heritage=Helm
release=druid
role=master
statefulset.kubernetes.io/pod-name=druid-postgresql-0
Annotations: <none>
Status: Running
IP: 10.244.3.176
IPs:
IP: 10.244.3.176
Controlled By: StatefulSet/druid-postgresql
Containers:
druid-postgresql:
Container ID: docker://e699fd020614e475fefef3b2ae25efbee6947e44e5c190cd4bb5ff05f0d5c6b3
Image: docker.io/bitnami/postgresql:11.7.0-debian-10-r9
Image ID: docker-pullable://bitnami/postgresql@sha256:f18ba80a1de4c8b93ff4bffa38f783c1e267c1e4a649e2b1296352a53fd12f1f
Port: 5432/TCP
Host Port: 0/TCP
State: Running
Started: Sat, 17 Apr 2021 15:22:26 +0800
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 16 Apr 2021 11:49:47 +0800
Finished: Sat, 17 Apr 2021 15:21:58 +0800
Ready: True
Restart Count: 5
Requests:
cpu: 250m
memory: 256Mi
Liveness: exec [/bin/sh -c exec pg_isready -U "druid" -d "druid" -h 127.0.0.1 -p 5432] delay=30s timeout=5s period=10s #success=1 #failure=6
Readiness: exec [/bin/sh -c -e exec pg_isready -U "druid" -d "druid" -h 127.0.0.1 -p 5432
[ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ]
] delay=5s timeout=5s period=10s #success=1 #failure=6
Environment:
BITNAMI_DEBUG: false
POSTGRESQL_PORT_NUMBER: 5432
POSTGRESQL_VOLUME_DIR: /bitnami/postgresql
PGDATA: /bitnami/postgresql/data
POSTGRES_USER: druid
POSTGRES_PASSWORD: <set to the key 'postgresql-password' in secret 'druid-postgresql'> Optional: false
POSTGRES_DB: druid
POSTGRESQL_ENABLE_LDAP: no
Mounts:
/bitnami/postgresql from data (rw)
/dev/shm from dshm (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-postgresql-0
ReadOnly: false
dshm:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit: 1Gi
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
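NOTE: Exit Code 255 with Finished timestamps clustered at Sat, 17 Apr 2021 15:21:5x -- the same pattern shows up on druid-router and the druid-zookeeper pods below -- points at node-level restarts rather than an application crash. Two ways to confirm, assuming the node names printed above:

  # node events (reboots, NotReady transitions) around the restart window
  kubectl get events -A --field-selector involvedObject.kind=Node --sort-by=.lastTimestamp
  # current conditions on the node this pod runs on, e.g. MemoryPressure
  kubectl describe node aks-agentpool-62526533-vmss000010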
////////druid-router-57cb4475f6-4cjzj
Name: druid-router-57cb4475f6-4cjzj
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000010/10.240.0.7
Start Time: Tue, 13 Apr 2021 09:50:26 +0800
Labels: app=druid
component=router
pod-template-hash=57cb4475f6
release=druid
Annotations: <none>
Status: Running
IP: 10.244.3.173
IPs:
IP: 10.244.3.173
Controlled By: ReplicaSet/druid-router-57cb4475f6
Containers:
druid:
Container ID: docker://9e75eabae6f5f5f3deff99d11af69c82400014d0ded768ca94810330ef2844af
Image: apache/druid:0.20.2
Image ID: docker-pullable://apache/druid@sha256:5896dc7de59f933293b9def3960526db8eb0e3a1dc89dcd0ad8c28f49bd4aa31
Port: 8888/TCP
Host Port: 0/TCP
Args:
router
State: Running
Started: Sat, 17 Apr 2021 15:22:11 +0800
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 16 Apr 2021 11:49:28 +0800
Finished: Sat, 17 Apr 2021 15:21:58 +0800
Ready: True
Restart Count: 6
Liveness: http-get http://:8888/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8888/status/health delay=60s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
druid ConfigMap Optional: false
Environment:
DRUID_MAXDIRECTMEMORYSIZE: 128m
DRUID_XMS: 128m
DRUID_XMX: 128m
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
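NOTE: the router is the only Druid component here with JVM caps; the apache/druid image's entrypoint maps DRUID_XMS/DRUID_XMX/DRUID_MAXDIRECTMEMORYSIZE onto -Xms/-Xmx/-XX:MaxDirectMemorySize. The overlord carries no such caps, which matches the 7-9 GiB usage in its eviction messages. A sketch of capping it the same way; the sizes are illustrative, and it assumes the overlord is backed by a Deployment named druid-overlord (its ReplicaSet name suggests so). Explicit pod env overrides the shared "druid" ConfigMap pulled in via envFrom:

  # cap the overlord JVM so it cannot balloon past the node's memory
  kubectl set env deployment/druid-overlord -n default \
    DRUID_XMS=2g DRUID_XMX=2g DRUID_MAXDIRECTMEMORYSIZE=1g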
////////druid-zookeeper-0
Name: druid-zookeeper-0
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000001/10.240.0.5
Start Time: Fri, 16 Apr 2021 11:49:26 +0800
Labels: app=zookeeper
component=server
controller-revision-hash=druid-zookeeper-57c9ddf477
release=druid
statefulset.kubernetes.io/pod-name=druid-zookeeper-0
Annotations: <none>
Status: Running
IP: 10.244.1.182
IPs:
IP: 10.244.1.182
Controlled By: StatefulSet/druid-zookeeper
Containers:
zookeeper:
Container ID: docker://c8c8e88905c435d03a9361047138df8f8e27befa62833852ff38730efbaf863d
Image: zookeeper:3.5.5
Image ID: docker-pullable://zookeeper@sha256:b7a76ec06f68fd9c801b72dfd283701bc7d8a8b0609277a0d570e8e6768e4ad9
Ports: 2181/TCP, 3888/TCP, 2888/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Command:
/bin/bash
-xec
/config-scripts/run
State: Running
Started: Sat, 17 Apr 2021 15:22:21 +0800
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 16 Apr 2021 11:49:52 +0800
Finished: Sat, 17 Apr 2021 15:21:58 +0800
Ready: True
Restart Count: 1
Liveness: exec [sh /config-scripts/ok] delay=20s timeout=5s period=30s #success=1 #failure=2
Readiness: exec [sh /config-scripts/ready] delay=20s timeout=5s period=30s #success=1 #failure=2
Environment:
ZK_REPLICAS: 3
JMXAUTH: false
JMXDISABLE: false
JMXPORT: 1099
JMXSSL: false
ZK_HEAP_SIZE: 512M
ZK_SYNC_LIMIT: 10
ZK_TICK_TIME: 2000
ZOO_AUTOPURGE_PURGEINTERVAL: 0
ZOO_AUTOPURGE_SNAPRETAINCOUNT: 3
ZOO_INIT_LIMIT: 5
ZOO_MAX_CLIENT_CNXNS: 60
ZOO_PORT: 2181
ZOO_STANDALONE_ENABLED: false
ZOO_TICK_TIME: 2000
Mounts:
/config-scripts from config (rw)
/data from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-zookeeper-0
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: druid-zookeeper
Optional: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
////////druid-zookeeper-1
Name: druid-zookeeper-1
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000002/10.240.0.6
Start Time: Fri, 16 Apr 2021 11:50:44 +0800
Labels: app=zookeeper
component=server
controller-revision-hash=druid-zookeeper-57c9ddf477
release=druid
statefulset.kubernetes.io/pod-name=druid-zookeeper-1
Annotations: <none>
Status: Running
IP: 10.244.0.169
IPs:
IP: 10.244.0.169
Controlled By: StatefulSet/druid-zookeeper
Containers:
zookeeper:
Container ID: docker://a7fc55f6ea1748f369bc50d30a51538407f15c94844a8acf5f2b27d83a472b36
Image: zookeeper:3.5.5
Image ID: docker-pullable://zookeeper@sha256:b7a76ec06f68fd9c801b72dfd283701bc7d8a8b0609277a0d570e8e6768e4ad9
Ports: 2181/TCP, 3888/TCP, 2888/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Command:
/bin/bash
-xec
/config-scripts/run
State: Running
Started: Sat, 17 Apr 2021 15:22:10 +0800
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 16 Apr 2021 11:51:18 +0800
Finished: Sat, 17 Apr 2021 15:21:53 +0800
Ready: True
Restart Count: 1
Liveness: exec [sh /config-scripts/ok] delay=20s timeout=5s period=30s #success=1 #failure=2
Readiness: exec [sh /config-scripts/ready] delay=20s timeout=5s period=30s #success=1 #failure=2
Environment:
ZK_REPLICAS: 3
JMXAUTH: false
JMXDISABLE: false
JMXPORT: 1099
JMXSSL: false
ZK_HEAP_SIZE: 512M
ZK_SYNC_LIMIT: 10
ZK_TICK_TIME: 2000
ZOO_AUTOPURGE_PURGEINTERVAL: 0
ZOO_AUTOPURGE_SNAPRETAINCOUNT: 3
ZOO_INIT_LIMIT: 5
ZOO_MAX_CLIENT_CNXNS: 60
ZOO_PORT: 2181
ZOO_STANDALONE_ENABLED: false
ZOO_TICK_TIME: 2000
Mounts:
/config-scripts from config (rw)
/data from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-zookeeper-1
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: druid-zookeeper
Optional: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
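//// note: the environment block doubles as the ensemble config: with ZOO_TICK_TIME=2000 (ms),
//// ZOO_INIT_LIMIT=5 gives a follower 5 x 2s = 10s to connect and sync with the leader at
//// startup, and ZK_SYNC_LIMIT=10 allows 10 x 2s = 20s of lag before it is dropped.
//// ZK_REPLICAS=3 keeps quorum (2 of 3) through one member failure. A quick health probe
//// (a sketch: assumes nc exists inside the zookeeper:3.5.5 image; the srvr four-letter
//// command is whitelisted by default in 3.5.x):
////
////     $ kubectl exec druid-zookeeper-0 -n default -- sh -c 'echo srvr | nc localhost 2181'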
////////druid-zookeeper-2
Name: druid-zookeeper-2
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000000/10.240.0.4
Start Time: Mon, 12 Apr 2021 01:37:13 +0800
Labels: app=zookeeper
component=server
controller-revision-hash=druid-zookeeper-57c9ddf477
release=druid
statefulset.kubernetes.io/pod-name=druid-zookeeper-2
Annotations: <none>
Status: Running
IP: 10.244.2.174
IPs:
IP: 10.244.2.174
Controlled By: StatefulSet/druid-zookeeper
Containers:
zookeeper:
Container ID: docker://491dd775ac250bf28fc053f079fb324058d9512227c2768b068e694fb2e2ded6
Image: zookeeper:3.5.5
Image ID: docker-pullable://zookeeper@sha256:b7a76ec06f68fd9c801b72dfd283701bc7d8a8b0609277a0d570e8e6768e4ad9
Ports: 2181/TCP, 3888/TCP, 2888/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Command:
/bin/bash
-xec
/config-scripts/run
State: Running
Started: Sat, 17 Apr 2021 15:22:25 +0800
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 16 Apr 2021 11:49:35 +0800
Finished: Sat, 17 Apr 2021 15:22:02 +0800
Ready: True
Restart Count: 11
Liveness: exec [sh /config-scripts/ok] delay=20s timeout=5s period=30s #success=1 #failure=2
Readiness: exec [sh /config-scripts/ready] delay=20s timeout=5s period=30s #success=1 #failure=2
Environment:
ZK_REPLICAS: 3
JMXAUTH: false
JMXDISABLE: false
JMXPORT: 1099
JMXSSL: false
ZK_HEAP_SIZE: 512M
ZK_SYNC_LIMIT: 10
ZK_TICK_TIME: 2000
ZOO_AUTOPURGE_PURGEINTERVAL: 0
ZOO_AUTOPURGE_SNAPRETAINCOUNT: 3
ZOO_INIT_LIMIT: 5
ZOO_MAX_CLIENT_CNXNS: 60
ZOO_PORT: 2181
ZOO_STANDALONE_ENABLED: false
ZOO_TICK_TIME: 2000
Mounts:
/config-scripts from config (rw)
/data from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hzw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-druid-zookeeper-2
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: druid-zookeeper
Optional: false
default-token-hzw78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hzw78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
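//// note: druid-zookeeper-2 is the outlier: Restart Count: 11 versus 1 for its peers, and a
//// Start Time of Mon, 12 Apr, four days before the others. The previous container's logs
//// are the first place to look, e.g.:
////
////     $ kubectl logs druid-zookeeper-2 --previous -n default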
////////prometheus-demo-prometheus-operator-prometheus-0
Name: prometheus-demo-prometheus-operator-prometheus-0
Namespace: default
Priority: 0
Node: aks-agentpool-62526533-vmss000001/10.240.0.5
Start Time: Fri, 16 Apr 2021 11:49:33 +0800
Labels: app=prometheus
controller-revision-hash=prometheus-demo-prometheus-operator-prometheus-86c884f8db
prometheus=demo-prometheus-operator-prometheus
statefulset.kubernetes.io/pod-name=prometheus-demo-prometheus-operator-prometheus-0
Annotations: <none>
Status: Running
IP: 10.244.1.181
IPs:
IP: 10.244.1.181
Controlled By: StatefulSet/prometheus-demo-prometheus-operator-prometheus
Containers:
prometheus:
Container ID: docker://d75b3c63e6c0d121f29730c264ae6f171ed97b4dc739cf97b853e46dad946116
Image: quay.io/prometheus/prometheus:v2.18.2
Image ID: docker-pullable://quay.io/prometheus/prometheus@sha256:4d3303d1eb424e345cf48969bb7575d4d58472ad783ac41ea07fba92686f7ef5
Port: 9090/TCP
Host Port: 0/TCP
Args:
--web.console.templates=/etc/prometheus/consoles
--web.console.libraries=/etc/prometheus/console_libraries
--config.file=/etc/prometheus/config_out/prometheus.env.yaml
--storage.tsdb.path=/prometheus
--storage.tsdb.retention.time=10d
--web.enable-lifecycle
--storage.tsdb.no-lockfile
--web.external-url=http://demo-prometheus-operator-prometheus.default:9090
--web.route-prefix=/
State: Running
Started: Sat, 17 Apr 2021 15:22:13 +0800
Last State: Terminated
Reason: Error
Message: .38592ms
level=info ts=2021-04-16T07:00:03.757Z caller=compact.go:495 component=tsdb msg="write block" mint=1618545600000 maxt=1618552800000 ulid=01F3CQND2D5GVCP6VQRDX4AQ79 duration=3.040198235s
level=info ts=2021-04-16T07:00:03.892Z caller=head.go:662 component=tsdb msg="Head GC completed" duration=55.909999ms
level=info ts=2021-04-16T07:00:07.150Z caller=head.go:734 component=tsdb msg="WAL checkpoint complete" first=0 last=1 duration=3.257995447s
level=info ts=2021-04-16T09:00:03.874Z caller=compact.go:495 component=tsdb msg="write block" mint=1618552800000 maxt=1618560000000 ulid=01F3CYH4AD1G8M1F2KS09SXQD0 duration=3.157810896s
level=info ts=2021-04-16T09:00:04.000Z caller=head.go:662 component=tsdb msg="Head GC completed" duration=50.990957ms
level=info ts=2021-04-16T09:00:05.725Z caller=head.go:734 component=tsdb msg="WAL checkpoint complete" first=2 last=3 duration=1.725309452s
level=info ts=2021-04-16T11:00:03.685Z caller=compact.go:495 component=tsdb msg="write block" mint=1618560000000 maxt=1618567200000 ulid=01F3D5CVJDJ7HR59328FE0WNG1 duration=2.968381383s
level=info ts=2021-04-16T11:00:03.830Z caller=head.go:662 component=tsdb msg="Head GC completed" duration=52.719697ms
level=info ts=2021-04-16T11:00:05.346Z caller=head.go:734 component=tsdb msg="WAL checkpoint complete" first=4 last=5 duration=1.516221766s
level=info ts=2021-04-16T11:00:09.052Z caller=compact.go:441 component=tsdb msg="compact blocks" count=2 mint=1618544993487 maxt=1618552800000 ulid=01F3D5D0695XM343YKM40D6R9N sources="[01F3CQ2W4TY0HQS0268WJ7VTBP 01F3CQND2D5GVCP6VQRDX4AQ79]" duration=3.602868536s
level=info ts=2021-04-16T13:00:03.710Z caller=compact.go:495 component=tsdb msg="write block" mint=1618567200000 maxt=1618574400000 ulid=01F3DC8JTCQSP9QV45Y678B5WQ duration=2.994094767s
level=info ts=2021-04-16T13:00:03.833Z caller=head.go:662 component=tsdb msg="Head GC completed" duration=37.707815ms
level=info ts=2021-04-16T13:00:05.268Z caller=head.go:734 component=tsdb msg="WAL checkpoint complete" first=6 last=7 duration=1.435687908s
Exit Code: 255
Started: Fri, 16 Apr 2021 11:50:06 +0800
Finished: Sat, 17 Apr 2021 15:21:58 +0800
Ready: True
Restart Count: 2
Liveness: http-get http://:web/-/healthy delay=0s timeout=3s period=5s #success=1 #failure=6
Readiness: http-get http://:web/-/ready delay=0s timeout=3s period=5s #success=1 #failure=120
Environment: <none>
Mounts:
/etc/prometheus/certs from tls-assets (ro)
/etc/prometheus/config_out from config-out (ro)
/etc/prometheus/rules/prometheus-demo-prometheus-operator-prometheus-rulefiles-0 from prometheus-demo-prometheus-operator-prometheus-rulefiles-0 (rw)
/prometheus from prometheus-demo-prometheus-operator-prometheus-db (rw)
/var/run/secrets/kubernetes.io/serviceaccount from demo-prometheus-operator-prometheus-token-dzfll (ro)
prometheus-config-reloader:
Container ID: docker://c85fac09d9d3e6b5ba6a43d09294616521688c4b54b5405c90b89f94aa2a663b
Image: quay.io/coreos/prometheus-config-reloader:v0.38.1
Image ID: docker-pullable://quay.io/coreos/prometheus-config-reloader@sha256:d1cce64093d4a850d28726ec3e48403124808f76567b5bd7b26e4416300996a7
Port: <none>
Host Port: <none>
Command:
/bin/prometheus-config-reloader
Args:
--log-format=logfmt
--reload-url=http://127.0.0.1:9090/-/reload
--config-file=/etc/prometheus/config/prometheus.yaml.gz
--config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml
State: Running
Started: Sat, 17 Apr 2021 15:22:15 +0800
Last State: Terminated
Reason: Error
Message: ts=2021-04-16T03:50:03.096151187Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.38.1'."
level=error ts=2021-04-16T03:50:03.097321994Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://127.0.0.1:9090/-/reload: dial tcp 127.0.0.1:9090: connect: connection refused"
level=info ts=2021-04-16T03:50:08.173917802Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=
level=info ts=2021-04-16T03:50:08.173985203Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=
Exit Code: 255
Started: Fri, 16 Apr 2021 11:50:03 +0800
Finished: Sat, 17 Apr 2021 15:21:58 +0800
Ready: True
Restart Count: 1
Limits:
cpu: 100m
memory: 25Mi
Requests:
cpu: 100m
memory: 25Mi
Environment:
POD_NAME: prometheus-demo-prometheus-operator-prometheus-0 (v1:metadata.name)
Mounts:
/etc/prometheus/config from config (rw)
/etc/prometheus/config_out from config-out (rw)
/var/run/secrets/kubernetes.io/serviceaccount from demo-prometheus-operator-prometheus-token-dzfll (ro)
rules-configmap-reloader:
Container ID: docker://591e1da5dc1891e7e1aa7ff0322789229a2021c76fedae0101a3dffd1b2cab13
Image: docker.io/jimmidyson/configmap-reload:v0.3.0
Image ID: docker-pullable://jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2
Port: <none>
Host Port: <none>
Args:
--webhook-url=http://127.0.0.1:9090/-/reload
--volume-dir=/etc/prometheus/rules/prometheus-demo-prometheus-operator-prometheus-rulefiles-0
State: Running
Started: Sat, 17 Apr 2021 15:22:18 +0800
Last State: Terminated
Reason: Error
Message: 2021/04/16 03:50:03 Watching directory: "/etc/prometheus/rules/prometheus-demo-prometheus-operator-prometheus-rulefiles-0"
Exit Code: 255
Started: Fri, 16 Apr 2021 11:50:03 +0800
Finished: Sat, 17 Apr 2021 15:21:58 +0800
Ready: True
Restart Count: 1
Limits:
cpu: 100m
memory: 25Mi
Requests:
cpu: 100m
memory: 25Mi
Environment: <none>
Mounts:
/etc/prometheus/rules/prometheus-demo-prometheus-operator-prometheus-rulefiles-0 from prometheus-demo-prometheus-operator-prometheus-rulefiles-0 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from demo-prometheus-operator-prometheus-token-dzfll (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
config:
Type: Secret (a volume populated by a Secret)
SecretName: prometheus-demo-prometheus-operator-prometheus
Optional: false
tls-assets:
Type: Secret (a volume populated by a Secret)
SecretName: prometheus-demo-prometheus-operator-prometheus-tls-assets
Optional: false
config-out:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
prometheus-demo-prometheus-operator-prometheus-rulefiles-0:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: prometheus-demo-prometheus-operator-prometheus-rulefiles-0
Optional: false
prometheus-demo-prometheus-operator-prometheus-db:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
demo-prometheus-operator-prometheus-token-dzfll:
Type: Secret (a volume populated by a Secret)
SecretName: demo-prometheus-operator-prometheus-token-dzfll
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
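//// note: prometheus-demo-prometheus-operator-prometheus-db is an EmptyDir, so the TSDB
//// (10d retention per --storage.tsdb.retention.time) lives and dies with the pod; wiring a
//// PVC through the Prometheus custom resource's storage/volumeClaimTemplate field (not
//// shown in this dump) would make it durable. To inspect the running server locally:
////
////     $ kubectl port-forward prometheus-demo-prometheus-operator-prometheus-0 9090:9090 -n default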
NAME STATUS AGE
namespace/default Active 24d
namespace/kube-node-lease Active 24d
namespace/kube-public Active 24d
namespace/kube-system Active 24d
namespace/kubernetes-dashboard Active 11d
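//// note: the tables from here down read like concatenated kubectl get listings. A plausible
//// way to regenerate them (the grouping below is a guess; nodes and pv are cluster-scoped
//// and take no namespace flag):
////
////     $ kubectl get namespaces
////     $ kubectl get replicasets,secrets,daemonsets,statefulsets,configmaps,pvc,services,deployments,pods --all-namespaces
////     $ kubectl get nodes,pv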
NAMESPACE NAME DESIRED CURRENT READY AGE
default replicaset.apps/demo-grafana-67dd6c996b 1 1 1 4d1h
default replicaset.apps/demo-kube-state-metrics-6f676dfddc 1 1 1 4d1h
default replicaset.apps/demo-prometheus-operator-operator-79c5f74b5f 1 1 1 4d1h
default replicaset.apps/druid-broker-75bf8774cc 2 2 2 5d2h
default replicaset.apps/druid-broker-7f6c469b5 0 0 0 5d19h
default replicaset.apps/druid-coordinator-85599f58f6 1 1 1 5d19h
default replicaset.apps/druid-overlord-6f5f5fdf5b 1 1 1 5d2h
default replicaset.apps/druid-router-57cb4475f6 1 1 1 5d19h
kube-system replicaset.apps/coredns-748cdb7bf4 2 2 2 24d
kube-system replicaset.apps/coredns-autoscaler-868b684fd4 1 1 1 24d
kube-system replicaset.apps/metrics-server-58fdc875d5 1 1 1 24d
kube-system replicaset.apps/omsagent-rs-58f4cc75f9 0 0 0 24d
kube-system replicaset.apps/omsagent-rs-68f87fc555 1 1 1 3d11h
kube-system replicaset.apps/tunnelfront-5c44b74745 1 1 1 24d
kube-system replicaset.apps/tunnelfront-65686785cf 0 0 0 14d
kubernetes-dashboard replicaset.apps/dashboard-metrics-scraper-78f5d9f487 1 1 1 11d
kubernetes-dashboard replicaset.apps/kubernetes-dashboard-577bd97bc 1 1 1 11d
NAMESPACE NAME TYPE DATA AGE
default secret/alertmanager-demo-prometheus-operator-alertmanager Opaque 1 4d1h
default secret/default-token-hzw78 kubernetes.io/service-account-token 3 24d
default secret/demo-grafana Opaque 3 4d1h
default secret/demo-grafana-test-token-mrksg kubernetes.io/service-account-token 3 4d1h
default secret/demo-grafana-token-n7df9 kubernetes.io/service-account-token 3 4d1h
default secret/demo-kube-state-metrics-token-dxdln kubernetes.io/service-account-token 3 4d1h
default secret/demo-prometheus-node-exporter-token-kqd9h kubernetes.io/service-account-token 3 4d1h
default secret/demo-prometheus-operator-admission Opaque 3 4d1h
default secret/demo-prometheus-operator-alertmanager-token-q4vp5 kubernetes.io/service-account-token 3 4d1h
default secret/demo-prometheus-operator-operator-token-d8ps4 kubernetes.io/service-account-token 3 4d1h
default secret/demo-prometheus-operator-prometheus-token-dzfll kubernetes.io/service-account-token 3 4d1h
default secret/druid-postgresql Opaque 1 5d19h
default secret/prometheus-demo-prometheus-operator-prometheus Opaque 1 4d1h
default secret/prometheus-demo-prometheus-operator-prometheus-tls-assets Opaque 0 4d1h
default secret/sh.helm.release.v1.demo.v1 helm.sh/release.v1 1 4d1h
default secret/sh.helm.release.v1.druid.v1 helm.sh/release.v1 1 5d19h
default secret/sh.helm.release.v1.druid.v2 helm.sh/release.v1 1 5d2h
default secret/sh.helm.release.v1.druid.v3 helm.sh/release.v1 1 4d23h
default secret/sh.helm.release.v1.druid.v4 helm.sh/release.v1 1 4d9h
default secret/sh.helm.release.v1.druid.v5 helm.sh/release.v1 1 4d6h
default secret/sh.helm.release.v1.druid.v6 helm.sh/release.v1 1 4d4h
default secret/sh.helm.release.v1.druid.v7 helm.sh/release.v1 1 4d2h
default secret/sh.helm.release.v1.druid.v8 helm.sh/release.v1 1 4d1h
default secret/sh.helm.release.v1.druid.v9 helm.sh/release.v1 1 4d
default secret/sh.helm.release.v1.webfrontend.v1 helm.sh/release.v1 1 23d
kube-node-lease secret/default-token-lfr4z kubernetes.io/service-account-token 3 24d
kube-public secret/default-token-bjfnw kubernetes.io/service-account-token 3 24d
kube-system secret/attachdetach-controller-token-4q9ch kubernetes.io/service-account-token 3 24d
kube-system secret/azure-cloud-provider-token-jgb25 kubernetes.io/service-account-token 3 24d
kube-system secret/bootstrap-signer-token-8flp6 kubernetes.io/service-account-token 3 24d
kube-system secret/certificate-controller-token-65mdk kubernetes.io/service-account-token 3 24d
kube-system secret/clusterrole-aggregation-controller-token-hczbb kubernetes.io/service-account-token 3 24d
kube-system secret/coredns-autoscaler-token-z4pgc kubernetes.io/service-account-token 3 24d
kube-system secret/coredns-token-mblbn kubernetes.io/service-account-token 3 24d
kube-system secret/cronjob-controller-token-l44tm kubernetes.io/service-account-token 3 24d
kube-system secret/daemon-set-controller-token-4f8qv kubernetes.io/service-account-token 3 24d
kube-system secret/default-token-dg45c kubernetes.io/service-account-token 3 24d
kube-system secret/deployment-controller-token-lxmmx kubernetes.io/service-account-token 3 24d
kube-system secret/disruption-controller-token-xd94d kubernetes.io/service-account-token 3 24d
kube-system secret/endpoint-controller-token-fddwv kubernetes.io/service-account-token 3 24d
kube-system secret/endpointslice-controller-token-fm2fv kubernetes.io/service-account-token 3 24d
kube-system secret/expand-controller-token-h4s4x kubernetes.io/service-account-token 3 24d
kube-system secret/generic-garbage-collector-token-7vff5 kubernetes.io/service-account-token 3 24d
kube-system secret/horizontal-pod-autoscaler-token-r74pf kubernetes.io/service-account-token 3 24d
kube-system secret/job-controller-token-njskw kubernetes.io/service-account-token 3 24d
kube-system secret/kube-proxy-token-twb8g kubernetes.io/service-account-token 3 24d
kube-system secret/metrics-server-token-gwtt5 kubernetes.io/service-account-token 3 24d
kube-system secret/namespace-controller-token-24qjn kubernetes.io/service-account-token 3 24d
kube-system secret/node-controller-token-jtb4h kubernetes.io/service-account-token 3 24d
kube-system secret/omsagent-secret Opaque 2 24d
kube-system secret/omsagent-token-dwf4f kubernetes.io/service-account-token 3 24d
kube-system secret/persistent-volume-binder-token-2fm8w kubernetes.io/service-account-token 3 24d
kube-system secret/pod-garbage-collector-token-lkgkc kubernetes.io/service-account-token 3 24d
kube-system secret/pv-protection-controller-token-whrnw kubernetes.io/service-account-token 3 24d
kube-system secret/pvc-protection-controller-token-plxwh kubernetes.io/service-account-token 3 24d
kube-system secret/replicaset-controller-token-dxc24 kubernetes.io/service-account-token 3 24d
kube-system secret/replication-controller-token-jvv7t kubernetes.io/service-account-token 3 24d
kube-system secret/resourcequota-controller-token-g55sh kubernetes.io/service-account-token 3 24d
kube-system secret/route-controller-token-rgpfp kubernetes.io/service-account-token 3 24d
kube-system secret/service-account-controller-token-c5szb kubernetes.io/service-account-token 3 24d
kube-system secret/service-controller-token-6hwtt kubernetes.io/service-account-token 3 24d
kube-system secret/statefulset-controller-token-bs4ql kubernetes.io/service-account-token 3 24d
kube-system secret/token-cleaner-token-hfqk7 kubernetes.io/service-account-token 3 24d
kube-system secret/ttl-controller-token-72b5h kubernetes.io/service-account-token 3 24d
kube-system secret/tunnelend Opaque 2 24d
kube-system secret/tunnelfront Opaque 2 24d
kube-system secret/tunnelfront-tls Opaque 2 24d
kube-system secret/tunnelfront-token-d7f47 kubernetes.io/service-account-token 3 24d
kubernetes-dashboard secret/admin-user-token-nkl2l kubernetes.io/service-account-token 3 11d
kubernetes-dashboard secret/default-token-d4xn4 kubernetes.io/service-account-token 3 11d
kubernetes-dashboard secret/kubernetes-dashboard-certs Opaque 0 11d
kubernetes-dashboard secret/kubernetes-dashboard-csrf Opaque 1 11d
kubernetes-dashboard secret/kubernetes-dashboard-key-holder Opaque 2 11d
kubernetes-dashboard secret/kubernetes-dashboard-token-9nrxv kubernetes.io/service-account-token 3 11d
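//// note: the sh.helm.release.v1.druid.v1 through .v9 secrets are Helm 3's release ledger
//// (one Secret per revision), i.e. the druid release has been through nine revisions in
//// roughly five days. The readable view of the same history:
////
////     $ helm history druid -n default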
NAMESPACE NAME STATUS ROLES AGE VERSION
node/aks-agentpool-62526533-vmss000000 Ready agent 24d v1.18.14
node/aks-agentpool-62526533-vmss000001 Ready agent 24d v1.18.14
node/aks-agentpool-62526533-vmss000002 Ready agent 24d v1.18.14
node/aks-agentpool-62526533-vmss000004 Ready agent 11d v1.18.14
node/aks-agentpool-62526533-vmss000010 Ready agent 8d v1.18.14
node/aks-agentpool-62526533-vmss000011 Ready agent 8d v1.18.14
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
default daemonset.apps/demo-prometheus-node-exporter 6 6 6 6 6 <none> 4d1h
kube-system daemonset.apps/kube-proxy 6 6 6 6 6 beta.kubernetes.io/os=linux 24d
kube-system daemonset.apps/omsagent 6 6 6 6 6 <none> 24d
kube-system daemonset.apps/omsagent-win 0 0 0 0 0 <none> 24d
NAMESPACE NAME READY AGE
default statefulset.apps/alertmanager-demo-prometheus-operator-alertmanager 1/1 4d1h
default statefulset.apps/druid-historical 20/20 5d19h
default statefulset.apps/druid-middle-manager 5/5 5d19h
default statefulset.apps/druid-postgresql 1/1 5d19h
default statefulset.apps/druid-zookeeper 3/3 5d19h
default statefulset.apps/prometheus-demo-prometheus-operator-prometheus 1/1 4d1h
NAMESPACE NAME DATA AGE
default configmap/demo-grafana 1 4d1h
default configmap/demo-grafana-config-dashboards 1 4d1h
default configmap/demo-grafana-test 1 4d1h
default configmap/demo-prometheus-operator-apiserver 1 4d1h
default configmap/demo-prometheus-operator-cluster-total 1 4d1h
default configmap/demo-prometheus-operator-controller-manager 1 4d1h
default configmap/demo-prometheus-operator-etcd 1 4d1h
default configmap/demo-prometheus-operator-grafana-datasource 1 4d1h
default configmap/demo-prometheus-operator-k8s-coredns 1 4d1h
default configmap/demo-prometheus-operator-k8s-resources-cluster 1 4d1h
default configmap/demo-prometheus-operator-k8s-resources-namespace 1 4d1h
default configmap/demo-prometheus-operator-k8s-resources-node 1 4d1h
default configmap/demo-prometheus-operator-k8s-resources-pod 1 4d1h
default configmap/demo-prometheus-operator-k8s-resources-workload 1 4d1h
default configmap/demo-prometheus-operator-k8s-resources-workloads-namespace 1 4d1h
default configmap/demo-prometheus-operator-kubelet 1 4d1h
default configmap/demo-prometheus-operator-namespace-by-pod 1 4d1h
default configmap/demo-prometheus-operator-namespace-by-workload 1 4d1h
default configmap/demo-prometheus-operator-node-cluster-rsrc-use 1 4d1h
default configmap/demo-prometheus-operator-node-rsrc-use 1 4d1h
default configmap/demo-prometheus-operator-nodes 1 4d1h
default configmap/demo-prometheus-operator-persistentvolumesusage 1 4d1h
default configmap/demo-prometheus-operator-pod-total 1 4d1h
default configmap/demo-prometheus-operator-prometheus 1 4d1h
default configmap/demo-prometheus-operator-proxy 1 4d1h
default configmap/demo-prometheus-operator-scheduler 1 4d1h
default configmap/demo-prometheus-operator-statefulset 1 4d1h
default configmap/demo-prometheus-operator-workload-total 1 4d1h
default configmap/druid 20 5d19h
default configmap/druid-zookeeper 3 5d19h
default configmap/prometheus-demo-prometheus-operator-prometheus-rulefiles-0 26 4d1h
default configmap/scripts-cm 1 16d
kube-system configmap/container-azm-ms-aks-k8scluster 1 24d
kube-system configmap/coredns 1 24d
kube-system configmap/coredns-autoscaler 1 24d
kube-system configmap/coredns-custom 0 24d
kube-system configmap/extension-apiserver-authentication 6 24d
kube-system configmap/omsagent-rs-config 1 24d
kube-system configmap/tunnelfront-kubecfg 1 24d
kubernetes-dashboard configmap/kubernetes-dashboard-settings 1 11d
NAMESPACE NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-00dd5abf-5dea-4b69-bb48-d482aa5e968c 4Gi RWO Delete Bound default/data-druid-historical-49 managed-premium 4d2h
persistentvolume/pvc-02b00f44-e961-4317-9848-9f2b917f2734 4Gi RWO Delete Bound default/data-druid-historical-10 managed-premium 5d19h
persistentvolume/pvc-0584be1a-1c69-40a0-9dca-233af7f6b4e3 4Gi RWO Delete Bound default/data-druid-historical-7 managed-premium 5d19h
persistentvolume/pvc-0586766c-70b4-4f72-ae3e-8fb5ff9364a6 4Gi RWO Delete Bound default/data-druid-historical-34 managed-premium 4d9h
persistentvolume/pvc-06ca825c-c558-4cab-a7a8-6fdd65d649f2 4Gi RWO Delete Bound default/data-druid-historical-48 managed-premium 4d2h
persistentvolume/pvc-0735a68d-bb82-4181-9c59-cef90893a9ed 4Gi RWO Delete Bound default/data-druid-historical-45 managed-premium 4d2h
persistentvolume/pvc-156cf48f-70ce-4eec-9846-94a0b2adf720 4Gi RWO Delete Bound default/data-druid-historical-30 managed-premium 4d9h
persistentvolume/pvc-157c3d56-620c-43bd-8cf3-2f36b6d66373 4Gi RWO Delete Bound default/data-druid-historical-44 managed-premium 4d2h
persistentvolume/pvc-1980f674-0169-427d-9916-c7132ebf7d75 4Gi RWO Delete Bound default/data-druid-historical-47 managed-premium 4d2h
persistentvolume/pvc-1d4f3491-313b-4255-9a0f-14c4096724ce 4Gi RWO Delete Bound default/data-druid-historical-13 managed-premium 5d18h
persistentvolume/pvc-2ab2c5c9-4609-476c-9a13-0675bf4e6d2f 4Gi RWO Delete Bound default/data-druid-historical-37 managed-premium 4d2h
persistentvolume/pvc-2d59d1ff-00ed-44b5-8a4e-bc824dd30de5 4Gi RWO Delete Bound default/data-druid-middle-manager-4 managed-premium 4d
persistentvolume/pvc-33ccb103-c0f1-4af1-bc46-c8811813c0bd 4Gi RWO Delete Bound default/data-druid-historical-24 managed-premium 4d22h
persistentvolume/pvc-3d242a13-6828-45f7-a100-051b8d884d69 4Gi RWO Delete Bound default/data-druid-middle-manager-3 managed-premium 4d
persistentvolume/pvc-3e670caa-9a9d-49eb-ac07-5cd128298998 5Gi RWO Delete Bound default/data-druid-zookeeper-0 default 5d19h
persistentvolume/pvc-3e9c75aa-7fa9-4695-ac4a-5519eabeb276 4Gi RWO Delete Bound default/data-druid-historical-38 managed-premium 4d2h
persistentvolume/pvc-43338c4a-8f0b-4704-9e07-51b21cec6a1f 5Gi RWO Delete Bound default/data-druid-zookeeper-1 default 5d19h
persistentvolume/pvc-454f99e3-dbce-48da-96e3-cd1efa100f98 4Gi RWO Delete Bound default/data-druid-historical-18 managed-premium 4d23h
persistentvolume/pvc-4d1fabc4-12bf-40de-9122-a40e7bd80c97 4Gi RWO Delete Bound default/data-druid-historical-42 managed-premium 4d2h
persistentvolume/pvc-5041d6fa-4dc5-4f00-9f0e-08d6c14c5d09 8Gi RWO Delete Bound default/data-druid-postgresql-0 default 5d19h
persistentvolume/pvc-5d9f6cb5-431e-4fab-94c7-5210f8c91959 4Gi RWO Delete Bound default/data-druid-historical-17 managed-premium 4d23h
persistentvolume/pvc-5f640567-c9cc-4c5e-97d0-7f35ec636305 4Gi RWO Delete Bound default/data-druid-historical-6 managed-premium 5d19h
persistentvolume/pvc-748c090c-9c75-47a2-ace4-77762cfe7aff 4Gi RWO Delete Bound default/data-druid-historical-28 managed-premium 4d9h
persistentvolume/pvc-7893e39f-68d1-4c4f-aead-9bf4ae4dec8b 4Gi RWO Delete Bound default/data-druid-historical-5 managed-premium 5d19h
persistentvolume/pvc-790a102b-332d-4a84-96c0-032f81b9bf40 4Gi RWO Delete Bound default/data-druid-historical-8 managed-premium 5d19h
persistentvolume/pvc-7922a71d-4f21-4d8b-bd07-3a01088dc254 4Gi RWO Delete Bound default/data-druid-historical-23 managed-premium 4d22h
persistentvolume/pvc-7b44a738-65f4-483c-9c19-d1430ba88c06 4Gi RWO Delete Bound default/data-druid-historical-27 managed-premium 4d22h
persistentvolume/pvc-7c463b7d-e54c-4ad3-957e-ff04aaa1fc5e 4Gi RWO Delete Bound default/data-druid-historical-41 managed-premium 4d2h
persistentvolume/pvc-7e7b0e81-8e9e-4fec-9ee0-db744dc195b3 4Gi RWO Delete Bound default/data-druid-historical-2 managed-premium 5d19h
persistentvolume/pvc-84e4e10a-ecd3-4d8d-a3af-997699dbc668 4Gi RWO Delete Bound default/data-druid-historical-33 managed-premium 4d9h
persistentvolume/pvc-8613cbb2-eeef-4699-b433-c31ae73d17d5 4Gi RWO Delete Bound default/data-druid-middle-manager-1 managed-premium 5d19h
persistentvolume/pvc-868d53cd-169e-457f-b8c2-6ff2962c2e28 4Gi RWO Delete Bound default/data-druid-historical-14 managed-premium 4d23h
persistentvolume/pvc-89617246-2946-4818-aafe-56fcb7dd439d 4Gi RWO Delete Bound default/data-druid-historical-0 managed-premium 5d19h
persistentvolume/pvc-8cddb1b6-904e-45b5-964e-212194acafee 4Gi RWO Delete Bound default/data-druid-historical-16 managed-premium 4d23h
persistentvolume/pvc-8dc46c3b-a171-4b7c-8c76-ab474fb496ff 4Gi RWO Delete Bound default/data-druid-historical-15 managed-premium 4d23h
persistentvolume/pvc-908d8a85-6bec-4360-bd1f-6da438412e96 4Gi RWO Delete Bound default/data-druid-historical-4 managed-premium 5d19h
persistentvolume/pvc-91050816-18a4-45e0-b550-d68e39edce93 4Gi RWO Delete Bound default/data-druid-historical-22 managed-premium 4d22h
persistentvolume/pvc-97175fd3-01ad-49a5-a924-67b42db096a5 5Gi RWO Delete Bound default/data-druid-zookeeper-2 default 5d19h
persistentvolume/pvc-99e490d9-f90e-4de4-abf7-4a7877fef376 4Gi RWO Delete Bound default/data-druid-historical-19 managed-premium 4d23h
persistentvolume/pvc-9bce7eb9-b41c-4cc2-a615-ad6de6ea251a 4Gi RWO Delete Bound default/data-druid-historical-21 managed-premium 4d22h
persistentvolume/pvc-9f4806ef-c4ae-4751-94b8-ee16d80f70be 4Gi RWO Delete Bound default/data-druid-historical-3 managed-premium 5d19h
persistentvolume/pvc-a159525b-caa4-4290-aa11-4673b9113ec2 4Gi RWO Delete Bound default/data-druid-historical-25 managed-premium 4d22h
persistentvolume/pvc-a68c53b0-28f3-431b-8e52-2433c231a86b 4Gi RWO Delete Bound default/data-druid-historical-32 managed-premium 4d9h
persistentvolume/pvc-abc5d43e-8dc7-4eb2-a900-14926fe7dfcd 4Gi RWO Delete Bound default/data-druid-historical-39 managed-premium 4d2h
persistentvolume/pvc-b98eb9c4-bf18-46d0-8119-8327077c493a 4Gi RWO Delete Bound default/data-druid-historical-26 managed-premium 4d22h
persistentvolume/pvc-bdabcee8-845d-4287-a202-b3c464baae94 4Gi RWO Delete Bound default/data-druid-historical-11 managed-premium 5d19h
persistentvolume/pvc-bee66bbe-deda-4860-a4e0-ad40dfdab693 4Gi RWO Delete Bound default/data-druid-historical-43 managed-premium 4d2h
persistentvolume/pvc-d15ed1c2-76a3-47ef-bf0d-0a400cb101e4 4Gi RWO Delete Bound default/data-druid-middle-manager-2 managed-premium 4d
persistentvolume/pvc-d3c67a68-f408-48f0-a737-e2fce3f5628e 4Gi RWO Delete Bound default/data-druid-historical-20 managed-premium 4d23h
persistentvolume/pvc-d5afd790-7cc6-482e-a32c-d982f7f7d058 4Gi RWO Delete Bound default/data-druid-historical-35 managed-premium 4d2h
persistentvolume/pvc-d61e58a6-15e0-4ddc-8538-866d9432a2a4 4Gi RWO Delete Bound default/data-druid-historical-36 managed-premium 4d2h
persistentvolume/pvc-e90fedbb-7ac6-473a-a412-6854d5b3a7d0 4Gi RWO Delete Bound default/data-druid-historical-46 managed-premium 4d2h
persistentvolume/pvc-e9a6d215-bace-4d5c-9cb1-277481194767 4Gi RWO Delete Bound default/data-druid-historical-29 managed-premium 4d9h
persistentvolume/pvc-ead7f7ca-ef40-4418-919f-4d99b8b75c20 4Gi RWO Delete Bound default/data-druid-middle-manager-0 managed-premium 5d19h
persistentvolume/pvc-f1d0a366-e521-4306-a86e-04ec276774b5 4Gi RWO Delete Bound default/data-druid-historical-12 managed-premium 5d19h
persistentvolume/pvc-f6007800-6fd3-4772-87c7-28f8f7124d19 4Gi RWO Delete Bound default/data-druid-historical-31 managed-premium 4d9h
persistentvolume/pvc-f92f2ffb-0e41-4d86-97b8-652c14c3e1bb 4Gi RWO Delete Bound default/data-druid-historical-1 managed-premium 5d19h
persistentvolume/pvc-fbaf85c8-3955-4c0e-aa6d-c0fa26322222 4Gi RWO Delete Bound default/data-druid-historical-9 managed-premium 5d19h
persistentvolume/pvc-feaf0ec9-18d3-438d-93aa-7f055d388234 4Gi RWO Delete Bound default/data-druid-historical-40 managed-premium 4d2h
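//// note: two storage classes are in use: the zookeeper and postgresql volumes sit on the
//// AKS default class, while the druid-historical and druid-middle-manager volumes use
//// managed-premium (premium SSD). All carry a Delete reclaim policy, so the backing disks
//// are removed with their claims. To compare the provisioners behind them:
////
////     $ kubectl get storageclass
////     $ kubectl describe storageclass default managed-premium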
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
default persistentvolumeclaim/data-druid-historical-0 Bound pvc-89617246-2946-4818-aafe-56fcb7dd439d 4Gi RWO managed-premium 5d19h
default persistentvolumeclaim/data-druid-historical-1 Bound pvc-f92f2ffb-0e41-4d86-97b8-652c14c3e1bb 4Gi RWO managed-premium 5d19h
default persistentvolumeclaim/data-druid-historical-10 Bound pvc-02b00f44-e961-4317-9848-9f2b917f2734 4Gi RWO managed-premium 5d19h
default persistentvolumeclaim/data-druid-historical-11 Bound pvc-bdabcee8-845d-4287-a202-b3c464baae94 4Gi RWO managed-premium 5d19h
default persistentvolumeclaim/data-druid-historical-12 Bound pvc-f1d0a366-e521-4306-a86e-04ec276774b5 4Gi RWO managed-premium 5d19h
default persistentvolumeclaim/data-druid-historical-13 Bound pvc-1d4f3491-313b-4255-9a0f-14c4096724ce 4Gi RWO managed-premium 5d18h
default persistentvolumeclaim/data-druid-historical-14 Bound pvc-868d53cd-169e-457f-b8c2-6ff2962c2e28 4Gi RWO managed-premium 4d23h
default persistentvolumeclaim/data-druid-historical-15 Bound pvc-8dc46c3b-a171-4b7c-8c76-ab474fb496ff 4Gi RWO managed-premium 4d23h
default persistentvolumeclaim/data-druid-historical-16 Bound pvc-8cddb1b6-904e-45b5-964e-212194acafee 4Gi RWO managed-premium 4d23h
default persistentvolumeclaim/data-druid-historical-17 Bound pvc-5d9f6cb5-431e-4fab-94c7-5210f8c91959 4Gi RWO managed-premium 4d23h
default persistentvolumeclaim/data-druid-historical-18 Bound pvc-454f99e3-dbce-48da-96e3-cd1efa100f98 4Gi RWO managed-premium 4d23h
default persistentvolumeclaim/data-druid-historical-19 Bound pvc-99e490d9-f90e-4de4-abf7-4a7877fef376 4Gi RWO managed-premium 4d23h
default persistentvolumeclaim/data-druid-historical-2 Bound pvc-7e7b0e81-8e9e-4fec-9ee0-db744dc195b3 4Gi RWO managed-premium 5d19h
default persistentvolumeclaim/data-druid-historical-20 Bound pvc-d3c67a68-f408-48f0-a737-e2fce3f5628e 4Gi RWO managed-premium 4d23h
default persistentvolumeclaim/data-druid-historical-21 Bound pvc-9bce7eb9-b41c-4cc2-a615-ad6de6ea251a 4Gi RWO managed-premium 4d22h
default persistentvolumeclaim/data-druid-historical-22 Bound pvc-91050816-18a4-45e0-b550-d68e39edce93 4Gi RWO managed-premium 4d22h
default persistentvolumeclaim/data-druid-historical-23 Bound pvc-7922a71d-4f21-4d8b-bd07-3a01088dc254 4Gi RWO managed-premium 4d22h
default persistentvolumeclaim/data-druid-historical-24 Bound pvc-33ccb103-c0f1-4af1-bc46-c8811813c0bd 4Gi RWO managed-premium 4d22h
default persistentvolumeclaim/data-druid-historical-25 Bound pvc-a159525b-caa4-4290-aa11-4673b9113ec2 4Gi RWO managed-premium 4d22h
default persistentvolumeclaim/data-druid-historical-26 Bound pvc-b98eb9c4-bf18-46d0-8119-8327077c493a 4Gi RWO managed-premium 4d22h
default persistentvolumeclaim/data-druid-historical-27 Bound pvc-7b44a738-65f4-483c-9c19-d1430ba88c06 4Gi RWO managed-premium 4d22h
default persistentvolumeclaim/data-druid-historical-28 Bound pvc-748c090c-9c75-47a2-ace4-77762cfe7aff 4Gi RWO managed-premium 4d9h
default persistentvolumeclaim/data-druid-historical-29 Bound pvc-e9a6d215-bace-4d5c-9cb1-277481194767 4Gi RWO managed-premium 4d9h
default persistentvolumeclaim/data-druid-historical-3 Bound pvc-9f4806ef-c4ae-4751-94b8-ee16d80f70be 4Gi RWO managed-premium 5d19h
default persistentvolumeclaim/data-druid-historical-30 Bound pvc-156cf48f-70ce-4eec-9846-94a0b2adf720 4Gi RWO managed-premium 4d9h
default persistentvolumeclaim/data-druid-historical-31 Bound pvc-f6007800-6fd3-4772-87c7-28f8f7124d19 4Gi RWO managed-premium 4d9h
default persistentvolumeclaim/data-druid-historical-32 Bound pvc-a68c53b0-28f3-431b-8e52-2433c231a86b 4Gi RWO managed-premium 4d9h
default persistentvolumeclaim/data-druid-historical-33 Bound pvc-84e4e10a-ecd3-4d8d-a3af-997699dbc668 4Gi RWO managed-premium 4d9h
default persistentvolumeclaim/data-druid-historical-34 Bound pvc-0586766c-70b4-4f72-ae3e-8fb5ff9364a6 4Gi RWO managed-premium 4d9h
default persistentvolumeclaim/data-druid-historical-35 Bound pvc-d5afd790-7cc6-482e-a32c-d982f7f7d058 4Gi RWO managed-premium 4d2h
default persistentvolumeclaim/data-druid-historical-36 Bound pvc-d61e58a6-15e0-4ddc-8538-866d9432a2a4 4Gi RWO managed-premium 4d2h
default persistentvolumeclaim/data-druid-historical-37 Bound pvc-2ab2c5c9-4609-476c-9a13-0675bf4e6d2f 4Gi RWO managed-premium 4d2h
default persistentvolumeclaim/data-druid-historical-38 Bound pvc-3e9c75aa-7fa9-4695-ac4a-5519eabeb276 4Gi RWO managed-premium 4d2h
default persistentvolumeclaim/data-druid-historical-39 Bound pvc-abc5d43e-8dc7-4eb2-a900-14926fe7dfcd 4Gi RWO managed-premium 4d2h
default persistentvolumeclaim/data-druid-historical-4 Bound pvc-908d8a85-6bec-4360-bd1f-6da438412e96 4Gi RWO managed-premium 5d19h
default persistentvolumeclaim/data-druid-historical-40 Bound pvc-feaf0ec9-18d3-438d-93aa-7f055d388234 4Gi RWO managed-premium 4d2h
default persistentvolumeclaim/data-druid-historical-41 Bound pvc-7c463b7d-e54c-4ad3-957e-ff04aaa1fc5e 4Gi RWO managed-premium 4d2h
default persistentvolumeclaim/data-druid-historical-42 Bound pvc-4d1fabc4-12bf-40de-9122-a40e7bd80c97 4Gi RWO managed-premium 4d2h
default persistentvolumeclaim/data-druid-historical-43 Bound pvc-bee66bbe-deda-4860-a4e0-ad40dfdab693 4Gi RWO managed-premium 4d2h
default persistentvolumeclaim/data-druid-historical-44 Bound pvc-157c3d56-620c-43bd-8cf3-2f36b6d66373 4Gi RWO managed-premium 4d2h
default persistentvolumeclaim/data-druid-historical-45 Bound pvc-0735a68d-bb82-4181-9c59-cef90893a9ed 4Gi RWO managed-premium 4d2h
default persistentvolumeclaim/data-druid-historical-46 Bound pvc-e90fedbb-7ac6-473a-a412-6854d5b3a7d0 4Gi RWO managed-premium 4d2h
default persistentvolumeclaim/data-druid-historical-47 Bound pvc-1980f674-0169-427d-9916-c7132ebf7d75 4Gi RWO managed-premium 4d2h
default persistentvolumeclaim/data-druid-historical-48 Bound pvc-06ca825c-c558-4cab-a7a8-6fdd65d649f2 4Gi RWO managed-premium 4d2h
default persistentvolumeclaim/data-druid-historical-49 Bound pvc-00dd5abf-5dea-4b69-bb48-d482aa5e968c 4Gi RWO managed-premium 4d2h
default persistentvolumeclaim/data-druid-historical-5 Bound pvc-7893e39f-68d1-4c4f-aead-9bf4ae4dec8b 4Gi RWO managed-premium 5d19h
default persistentvolumeclaim/data-druid-historical-6 Bound pvc-5f640567-c9cc-4c5e-97d0-7f35ec636305 4Gi RWO managed-premium 5d19h
default persistentvolumeclaim/data-druid-historical-7 Bound pvc-0584be1a-1c69-40a0-9dca-233af7f6b4e3 4Gi RWO managed-premium 5d19h
default persistentvolumeclaim/data-druid-historical-8 Bound pvc-790a102b-332d-4a84-96c0-032f81b9bf40 4Gi RWO managed-premium 5d19h
default persistentvolumeclaim/data-druid-historical-9 Bound pvc-fbaf85c8-3955-4c0e-aa6d-c0fa26322222 4Gi RWO managed-premium 5d19h
default persistentvolumeclaim/data-druid-middle-manager-0 Bound pvc-ead7f7ca-ef40-4418-919f-4d99b8b75c20 4Gi RWO managed-premium 5d19h
default persistentvolumeclaim/data-druid-middle-manager-1 Bound pvc-8613cbb2-eeef-4699-b433-c31ae73d17d5 4Gi RWO managed-premium 5d19h
default persistentvolumeclaim/data-druid-middle-manager-2 Bound pvc-d15ed1c2-76a3-47ef-bf0d-0a400cb101e4 4Gi RWO managed-premium 4d
default persistentvolumeclaim/data-druid-middle-manager-3 Bound pvc-3d242a13-6828-45f7-a100-051b8d884d69 4Gi RWO managed-premium 4d
default persistentvolumeclaim/data-druid-middle-manager-4 Bound pvc-2d59d1ff-00ed-44b5-8a4e-bc824dd30de5 4Gi RWO managed-premium 4d
default persistentvolumeclaim/data-druid-postgresql-0 Bound pvc-5041d6fa-4dc5-4f00-9f0e-08d6c14c5d09 8Gi RWO default 5d19h
default persistentvolumeclaim/data-druid-zookeeper-0 Bound pvc-3e670caa-9a9d-49eb-ac07-5cd128298998 5Gi RWO default 5d19h
default persistentvolumeclaim/data-druid-zookeeper-1 Bound pvc-43338c4a-8f0b-4704-9e07-51b21cec6a1f 5Gi RWO default 5d19h
default persistentvolumeclaim/data-druid-zookeeper-2 Bound pvc-97175fd3-01ad-49a5-a924-67b42db096a5 5Gi RWO default 5d19h
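//// note: PVCs data-druid-historical-0 through -49 exist, but statefulset.apps/druid-historical
//// runs only 20 replicas; StatefulSet scale-down intentionally leaves PVCs behind, so the
//// claims for the retired ordinals 20-49 stay Bound (and billed). If that data is
//// disposable, a cleanup sketch (deleting a Bound claim here also deletes its disk, per the
//// Delete reclaim policy above; double-check the index range first):
////
////     $ for i in $(seq 20 49); do kubectl delete pvc data-druid-historical-$i -n default; done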
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 4d1h
default service/demo-grafana ClusterIP 10.0.160.190 <none> 80/TCP 4d1h
default service/demo-kube-state-metrics ClusterIP 10.0.116.2 <none> 8080/TCP 4d1h
default service/demo-prometheus-node-exporter ClusterIP 10.0.184.237 <none> 9100/TCP 4d1h
default service/demo-prometheus-operator-alertmanager ClusterIP 10.0.135.51 <none> 9093/TCP 4d1h
default service/demo-prometheus-operator-operator ClusterIP 10.0.97.172 <none> 8080/TCP,443/TCP 4d1h
default service/demo-prometheus-operator-prometheus ClusterIP 10.0.106.238 <none> 9090/TCP 4d1h
default service/druid-broker ClusterIP 10.0.138.196 <none> 8082/TCP 5d19h
default service/druid-coordinator ClusterIP 10.0.250.252 <none> 8081/TCP 5d19h
default service/druid-historical ClusterIP 10.0.68.184 <none> 8083/TCP 5d19h
default service/druid-middle-manager ClusterIP 10.0.71.192 <none> 8091/TCP 5d19h
default service/druid-overlord ClusterIP 10.0.150.48 <none> 8081/TCP 5d2h
default service/druid-postgresql ClusterIP 10.0.142.116 <none> 5432/TCP 5d19h
default service/druid-postgresql-headless ClusterIP None <none> 5432/TCP 5d19h
default service/druid-router ClusterIP 10.0.114.58 <none> 8888/TCP 5d19h
default service/druid-zookeeper ClusterIP 10.0.105.222 <none> 2181/TCP 5d19h
default service/druid-zookeeper-headless ClusterIP None <none> 2181/TCP,3888/TCP,2888/TCP 5d19h
default service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 24d
default service/prometheus-operated ClusterIP None <none> 9090/TCP 4d1h
kube-system service/demo-prometheus-operator-coredns ClusterIP None <none> 9153/TCP 4d1h
kube-system service/demo-prometheus-operator-kube-controller-manager ClusterIP None <none> 10252/TCP 4d1h
kube-system service/demo-prometheus-operator-kube-etcd ClusterIP None <none> 2379/TCP 4d1h
kube-system service/demo-prometheus-operator-kube-proxy ClusterIP None <none> 10249/TCP 4d1h
kube-system service/demo-prometheus-operator-kube-scheduler ClusterIP None <none> 10251/TCP 4d1h
kube-system service/demo-prometheus-operator-kubelet ClusterIP None <none> 10250/TCP,10255/TCP,4194/TCP 4d1h
kube-system service/healthmodel-replicaset-service ClusterIP 10.0.219.108 <none> 25227/TCP 24d
kube-system service/kube-dns ClusterIP 10.0.0.10 <none> 53/UDP,53/TCP 24d
kube-system service/metrics-server ClusterIP 10.0.125.17 <none> 443/TCP 24d
kubernetes-dashboard service/dashboard-metrics-scraper ClusterIP 10.0.76.185 <none> 8000/TCP 11d
kubernetes-dashboard service/kubernetes-dashboard ClusterIP 10.0.184.161 <none> 443/TCP 11d
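//// note: the services with CLUSTER-IP None (alertmanager-operated, prometheus-operated,
//// druid-postgresql-headless, druid-zookeeper-headless) are headless: they give the
//// StatefulSet pods stable per-pod DNS names instead of a load-balanced VIP. A quick
//// in-cluster check (a sketch: busybox:1.28 is chosen because its nslookup behaves, and the
//// pod-level name assumes druid-zookeeper-headless is that StatefulSet's serviceName):
////
////     $ kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
////         nslookup druid-zookeeper-0.druid-zookeeper-headless.default.svc.cluster.local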
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
default deployment.apps/demo-grafana 1/1 1 1 4d1h
default deployment.apps/demo-kube-state-metrics 1/1 1 1 4d1h
default deployment.apps/demo-prometheus-operator-operator 1/1 1 1 4d1h
default deployment.apps/druid-broker 2/2 2 2 5d19h
default deployment.apps/druid-coordinator 1/1 1 1 5d19h
default deployment.apps/druid-overlord 1/1 1 1 5d2h
default deployment.apps/druid-router 1/1 1 1 5d19h
kube-system deployment.apps/coredns 2/2 2 2 24d
kube-system deployment.apps/coredns-autoscaler 1/1 1 1 24d
kube-system deployment.apps/metrics-server 1/1 1 1 24d
kube-system deployment.apps/omsagent-rs 1/1 1 1 24d
kube-system deployment.apps/tunnelfront 1/1 1 1 24d
kubernetes-dashboard deployment.apps/dashboard-metrics-scraper 1/1 1 1 11d
kubernetes-dashboard deployment.apps/kubernetes-dashboard 1/1 1 1 11d
NAMESPACE NAME READY STATUS RESTARTS AGE
default pod/alertmanager-demo-prometheus-operator-alertmanager-0 2/2 Running 2 33h
default pod/demo-grafana-67dd6c996b-8pjjn 2/2 Running 6 4d1h
default pod/demo-kube-state-metrics-6f676dfddc-nqxwn 1/1 Running 2 3d1h
default pod/demo-prometheus-node-exporter-9p8bx 1/1 Running 3 4d1h
default pod/demo-prometheus-node-exporter-c5vbh 1/1 Running 3 4d1h
default pod/demo-prometheus-node-exporter-nvc92 1/1 Running 3 4d1h
default pod/demo-prometheus-node-exporter-pndng 1/1 Running 3 4d1h
default pod/demo-prometheus-node-exporter-s5bfk 1/1 Running 3 4d1h
default pod/demo-prometheus-node-exporter-v2bps 1/1 Running 3 4d1h
default pod/demo-prometheus-operator-operator-79c5f74b5f-w6svn 2/2 Running 7 4d1h
default pod/druid-broker-75bf8774cc-m74fb 1/1 Running 9 5d2h
default pod/druid-broker-75bf8774cc-ncsmx 1/1 Running 6 4d17h
default pod/druid-coordinator-85599f58f6-69692 1/1 Running 13 4d17h
default pod/druid-historical-0 1/1 Running 6 33h
default pod/druid-historical-1 1/1 Running 3 33h
default pod/druid-historical-10 1/1 Running 3 32h
default pod/druid-historical-11 1/1 Running 8 4d
default pod/druid-historical-12 1/1 Running 10 4d
default pod/druid-historical-13 1/1 Running 3 32h
default pod/druid-historical-14 1/1 Running 9 4d1h
default pod/druid-historical-15 1/1 Running 10 4d1h
default pod/druid-historical-16 1/1 Running 9 4d1h
default pod/druid-historical-17 1/1 Running 8 4d1h
default pod/druid-historical-18 1/1 Running 10 4d1h
default pod/druid-historical-19 1/1 Running 10 4d1h
default pod/druid-historical-2 1/1 Running 9 4d
default pod/druid-historical-3 1/1 Running 3 33h
default pod/druid-historical-4 1/1 Running 10 4d
default pod/druid-historical-5 1/1 Running 8 4d
default pod/druid-historical-6 1/1 Running 10 4d
default pod/druid-historical-7 1/1 Running 11 4d
default pod/druid-historical-8 1/1 Running 9 4d
default pod/druid-historical-9 1/1 Running 3 32h
default pod/druid-middle-manager-0 1/1 Running 5 4d11h
default pod/druid-middle-manager-1 1/1 Running 6 4d11h
default pod/druid-middle-manager-2 1/1 Running 3 4d
default pod/druid-middle-manager-3 1/1 Running 1 33h
default pod/druid-middle-manager-4 1/1 Running 3 4d
default pod/druid-overlord-6f5f5fdf5b-29pqm 0/1 Evicted 0 136m
default pod/druid-overlord-6f5f5fdf5b-2cn5h 0/1 Evicted 0 3d3h
default pod/druid-overlord-6f5f5fdf5b-4b7tc 0/1 Evicted 0 3d3h
default pod/druid-overlord-6f5f5fdf5b-5wbdv 0/1 Evicted 0 30h
default pod/druid-overlord-6f5f5fdf5b-6252v 0/1 Evicted 0 30h
default pod/druid-overlord-6f5f5fdf5b-62kz5 0/1 Evicted 0 3d3h
default pod/druid-overlord-6f5f5fdf5b-6b55h 0/1 Evicted 0 30h
default pod/druid-overlord-6f5f5fdf5b-6njb7 0/1 Evicted 0 30h
default pod/druid-overlord-6f5f5fdf5b-6t4ml 0/1 Evicted 0 30h
default pod/druid-overlord-6f5f5fdf5b-78gmt 0/1 Evicted 0 3d3h
default pod/druid-overlord-6f5f5fdf5b-7fbjz 0/1 Evicted 0 30h
default pod/druid-overlord-6f5f5fdf5b-7nntx 0/1 Evicted 0 136m
default pod/druid-overlord-6f5f5fdf5b-7tdjf 0/1 Evicted 0 136m
default pod/druid-overlord-6f5f5fdf5b-8bztx 0/1 Evicted 0 136m
default pod/druid-overlord-6f5f5fdf5b-8gq6t 0/1 Evicted 0 30h
default pod/druid-overlord-6f5f5fdf5b-8hpcq 0/1 Evicted 0 3d3h
default pod/druid-overlord-6f5f5fdf5b-8pgfz 0/1 Evicted 0 3d3h
default pod/druid-overlord-6f5f5fdf5b-9ckrj 0/1 Evicted 0 3d3h
default pod/druid-overlord-6f5f5fdf5b-9h7lw 0/1 Evicted 0 33h
default pod/druid-overlord-6f5f5fdf5b-bmxv7 0/1 Evicted 0 136m
default pod/druid-overlord-6f5f5fdf5b-clv78 0/1 Evicted 0 30h
default pod/druid-overlord-6f5f5fdf5b-fb8gq 0/1 Evicted 0 3d3h
default pod/druid-overlord-6f5f5fdf5b-fdkvm 0/1 Evicted 0 30h
default pod/druid-overlord-6f5f5fdf5b-gpkss 0/1 Evicted 0 30h
default pod/druid-overlord-6f5f5fdf5b-h69tv 0/1 Evicted 0 3d3h
default pod/druid-overlord-6f5f5fdf5b-hjlsw 0/1 Evicted 0 30h
default pod/druid-overlord-6f5f5fdf5b-hv8vp 0/1 Evicted 0 30h
default pod/druid-overlord-6f5f5fdf5b-j9nxt 0/1 Evicted 0 30h
default pod/druid-overlord-6f5f5fdf5b-jbx6c 0/1 Evicted 0 136m
default pod/druid-overlord-6f5f5fdf5b-jjz8r 0/1 Evicted 0 136m
default pod/druid-overlord-6f5f5fdf5b-jpw4k 0/1 Evicted 0 30h
default pod/druid-overlord-6f5f5fdf5b-kxcq9 0/1 Evicted 0 30h
default pod/druid-overlord-6f5f5fdf5b-mhpg7 0/1 Evicted 0 3d3h
default pod/druid-overlord-6f5f5fdf5b-mlkn5 0/1 Evicted 0 136m
default pod/druid-overlord-6f5f5fdf5b-nh68w 0/1 Evicted 0 30h
default pod/druid-overlord-6f5f5fdf5b-p622q 0/1 Evicted 0 136m
default pod/druid-overlord-6f5f5fdf5b-p85vc 0/1 Evicted 0 136m
default pod/druid-overlord-6f5f5fdf5b-pbn29 0/1 Evicted 0 136m
default pod/druid-overlord-6f5f5fdf5b-pps79 0/1 Evicted 0 136m
default pod/druid-overlord-6f5f5fdf5b-pvztt 0/1 Evicted 0 30h
default pod/druid-overlord-6f5f5fdf5b-qf5tv 0/1 Evicted 0 3d3h
default pod/druid-overlord-6f5f5fdf5b-rbdcq 0/1 Evicted 0 136m
default pod/druid-overlord-6f5f5fdf5b-rd4ft 0/1 Evicted 0 30h
default pod/druid-overlord-6f5f5fdf5b-rsmzv 0/1 Evicted 0 3d3h
default pod/druid-overlord-6f5f5fdf5b-sbhsm 1/1 Running 0 136m
default pod/druid-overlord-6f5f5fdf5b-shx8r 0/1 Evicted 0 3d3h
default pod/druid-overlord-6f5f5fdf5b-t492b 0/1 Evicted 0 3d6h
default pod/druid-overlord-6f5f5fdf5b-tbdxd 0/1 Evicted 0 30h
default pod/druid-overlord-6f5f5fdf5b-tbltp 0/1 Evicted 0 136m
default pod/druid-overlord-6f5f5fdf5b-vklhx 0/1 Evicted 0 3d3h
default pod/druid-overlord-6f5f5fdf5b-vrnqk 0/1 Evicted 0 3d3h
default pod/druid-overlord-6f5f5fdf5b-w4r4c 0/1 Evicted 0 30h
default pod/druid-overlord-6f5f5fdf5b-w9lfs 0/1 Evicted 0 5h25m
default pod/druid-overlord-6f5f5fdf5b-wslvb 0/1 Evicted 0 3d3h
default pod/druid-overlord-6f5f5fdf5b-zgrfx 0/1 Evicted 0 3d3h
default pod/druid-overlord-6f5f5fdf5b-zr76f 0/1 Evicted 0 30h
default pod/druid-overlord-6f5f5fdf5b-zs8tf 0/1 Evicted 0 3d3h
default pod/druid-postgresql-0 1/1 Running 5 4d11h
default pod/druid-router-57cb4475f6-4cjzj 1/1 Running 6 4d17h
default pod/druid-zookeeper-0 1/1 Running 1 33h
default pod/druid-zookeeper-1 1/1 Running 1 33h
default pod/druid-zookeeper-2 1/1 Running 11 5d19h
default pod/prometheus-demo-prometheus-operator-prometheus-0 3/3 Running 4 33h
kube-system pod/coredns-748cdb7bf4-7675j 1/1 Running 1 3d1h
kube-system pod/coredns-748cdb7bf4-zcqb8 1/1 Running 4 4d10h
kube-system pod/coredns-autoscaler-868b684fd4-dfnf6 1/1 Running 4 4d10h
kube-system pod/kube-proxy-5474q 1/1 Running 13 8d
kube-system pod/kube-proxy-g6h5x 1/1 Running 30 24d
kube-system pod/kube-proxy-kfvdv 1/1 Running 21 11d
kube-system pod/kube-proxy-q7ncc 1/1 Running 30 24d
kube-system pod/kube-proxy-rr2cw 1/1 Running 28 24d
kube-system pod/kube-proxy-snqgj 1/1 Running 13 8d
kube-system pod/metrics-server-58fdc875d5-vdp5d 1/1 Running 4 4d10h
kube-system pod/omsagent-5smgt 1/1 Running 2 3d9h
kube-system pod/omsagent-5xflw 1/1 Running 2 3d9h
kube-system pod/omsagent-76fnq 1/1 Running 2 3d9h
kube-system pod/omsagent-d8xcj 1/1 Running 2 3d9h
kube-system pod/omsagent-gv6dh 1/1 Running 3 3d9h
kube-system pod/omsagent-rs-68f87fc555-nvm7c 1/1 Running 2 3d11h
kube-system pod/omsagent-zck7m 1/1 Running 2 3d9h
kube-system pod/tunnelfront-5c44b74745-6l94x 1/1 Running 2 3d11h
kubernetes-dashboard pod/dashboard-metrics-scraper-78f5d9f487-c88c4 1/1 Running 1 3d1h
kubernetes-dashboard pod/kubernetes-dashboard-577bd97bc-t8gtt 1/1 Running 1 3d1h
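//// note: the default namespace has accumulated dozens of Evicted druid-overlord replicas
//// (0/1, 0 restarts); evicted pods are left behind as Failed objects, and only
//// druid-overlord-6f5f5fdf5b-sbhsm is actually Running. The recurring evictions over the
//// last 3d+ suggest node pressure worth investigating, and the leftovers can be cleared in
//// bulk:
////
////     $ kubectl get pods -n default --field-selector=status.phase=Failed -o name \
////         | xargs -r kubectl delete -n default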