@matzew, created October 5, 2018 12:31
Name:               my-cluster-zookeeper-0
Namespace:          myproject
Priority:           0
PriorityClassName:  <none>
Node:               localhost/192.168.122.217
Start Time:         Fri, 05 Oct 2018 14:26:25 +0200
Labels:             controller-revision-hash=my-cluster-zookeeper-9b495c99c
                    statefulset.kubernetes.io/pod-name=my-cluster-zookeeper-0
                    strimzi.io/cluster=my-cluster
                    strimzi.io/kind=Kafka
                    strimzi.io/name=my-cluster-zookeeper
Annotations:        openshift.io/scc=anyuid
                    operator.strimzi.io/statefulset-generation=0
Status:             Running
IP:                 172.17.0.28
Controlled By:      StatefulSet/my-cluster-zookeeper
Containers:
  zookeeper:
    Container ID:   docker://7a0a7a84eb21c4530be89d8a45a4476f4aed96dea7d8404b961ef21d625b3035
    Image:          strimzi/zookeeper:0.7.0
    Image ID:       docker-pullable://docker.io/strimzi/zookeeper@sha256:650dc7caab858bd70f8c82ca0dd0119ee98bd361bcbac1b473fe1ad7695b0ac0
    Port:           9404/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:      CrashLoopBackOff
    Last State:    Terminated
      Reason:      Error
      Exit Code:   1
      Started:     Fri, 05 Oct 2018 14:29:48 +0200
      Finished:    Fri, 05 Oct 2018 14:29:50 +0200
    Ready:          False
    Restart Count:  5
    Liveness:       exec [/opt/kafka/zookeeper_healthcheck.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
    Readiness:      exec [/opt/kafka/zookeeper_healthcheck.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      ZOOKEEPER_NODE_COUNT:       1
      ZOOKEEPER_METRICS_ENABLED:  true
      DYNAMIC_HEAP_FRACTION:      0.75
      DYNAMIC_HEAP_MAX:           2147483648
      ZOOKEEPER_CONFIGURATION:    timeTick=2000
                                  autopurge.purgeInterval=1
                                  syncLimit=2
                                  initLimit=5
    Mounts:
      /opt/kafka/custom-config/ from zookeeper-metrics-and-logging (rw)
      /var/lib/zookeeper from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-h6zlp (ro)
  tls-sidecar:
    Container ID:   docker://7f24635bb974634c22457992042e5181f16d27bb7c34255fccf23cbc0de3479b
    Image:          strimzi/zookeeper-stunnel:0.7.0
    Image ID:       docker-pullable://docker.io/strimzi/zookeeper-stunnel@sha256:4b94c679afd3d1a2db72c5e9858eecd8bd0012efc98307b75b99188bbdf673c4
    Ports:          2888/TCP, 3888/TCP, 2181/TCP
    Host Ports:     0/TCP, 0/TCP, 0/TCP
    State:          Running
      Started:     Fri, 05 Oct 2018 14:26:51 +0200
    Ready:          True
    Restart Count:  0
    Environment:
      ZOOKEEPER_NODE_COUNT:  1
    Mounts:
      /etc/tls-sidecar/certs/ from tls-sidecar-certs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-h6zlp (ro)
Conditions:
  Type             Status
  Initialized      True
  Ready            False
  ContainersReady  False
  PodScheduled     True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-my-cluster-zookeeper-0
    ReadOnly:   false
  zookeeper-metrics-and-logging:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      my-cluster-zookeeper-config
    Optional:  false
  tls-sidecar-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-zookeeper-nodes
    Optional:    false
  default-token-h6zlp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-h6zlp
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     <none>
Events:
  Type     Reason            Age              From               Message
  ----     ------            ----             ----               -------
  Warning  FailedScheduling  4m (x2 over 4m)  default-scheduler  pod has unbound PersistentVolumeClaims
  Normal   Scheduled         4m               default-scheduler  Successfully assigned myproject/my-cluster-zookeeper-0 to localhost
  Normal   Pulling           4m               kubelet, localhost  pulling image "strimzi/zookeeper:0.7.0"
  Normal   Pulled            3m               kubelet, localhost  Successfully pulled image "strimzi/zookeeper:0.7.0"
  Normal   Pulling           3m               kubelet, localhost  pulling image "strimzi/zookeeper-stunnel:0.7.0"
  Normal   Pulled            3m               kubelet, localhost  Successfully pulled image "strimzi/zookeeper-stunnel:0.7.0"
  Normal   Created           3m               kubelet, localhost  Created container
  Normal   Started           3m               kubelet, localhost  Started container
  Normal   Created           2m (x4 over 3m)  kubelet, localhost  Created container
  Normal   Started           2m (x4 over 3m)  kubelet, localhost  Started container
  Normal   Pulled            2m (x3 over 3m)  kubelet, localhost  Container image "strimzi/zookeeper:0.7.0" already present on machine
  Warning  BackOff           2m (x8 over 3m)  kubelet, localhost  Back-off restarting failed container
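The output above points at two things worth chasing: the zookeeper container exits with code 1 within seconds and is in CrashLoopBackOff, and the scheduler initially warned about unbound PersistentVolumeClaims. A first pass at diagnosing this (pod, container, claim, and namespace names all taken from the describe output; this is a suggested next step, not part of the original gist) could be:

```shell
# Logs of the previous (crashed) run of the zookeeper container
kubectl logs my-cluster-zookeeper-0 -c zookeeper --previous -n myproject

# The FailedScheduling warning mentioned an unbound PVC; check whether
# data-my-cluster-zookeeper-0 is now Bound and to which PersistentVolume
kubectl get pvc data-my-cluster-zookeeper-0 -n myproject

# All recent events involving this pod, including any newer than the describe
kubectl get events -n myproject --field-selector involvedObject.name=my-cluster-zookeeper-0
```

With a two-second lifetime between Started and Finished, the crash almost certainly happens during ZooKeeper startup, so the `--previous` log is the most likely place to find the actual error.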