@tshak
Created October 2, 2018 13:16
Voyager stuck pod repro: kube describe po
Name:                      voyager-repro-5c65d99685-f84xr
Namespace:                 default
Node:                      ip-172-20-119-215.us-east-2.compute.internal/172.20.119.215
Start Time:                Tue, 02 Oct 2018 15:07:51 +0200
Labels:                    origin=voyager
                           origin-api-group=voyager.appscode.com
                           origin-name=repro
                           pod-template-hash=1721855241
Annotations:               ingress.appscode.com/last-applied-annotation-keys=
                           kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container haproxy
Status:                    Terminating (lasts 6m)
Termination Grace Period:  30s
IP:
Controlled By:             ReplicaSet/voyager-repro-5c65d99685
Containers:
  haproxy:
    Container ID:
    Image:          appscode/haproxy:1.8.9-7.2.0-alpine
    Image ID:
    Port:           8080/TCP
    Host Port:      0/TCP
    Args:
      --enable-analytics=false
      --burst=1000000
      --cloud-provider=aws
      --ingress-api-version=voyager.appscode.com/v1beta1
      --ingress-name=repro
      --qps=1e+06
      --logtostderr=false
      --alsologtostderr=false
      --v=3
      --stderrthreshold=0
    State:          Terminated
      Exit Code:    0
      Started:      Mon, 01 Jan 0001 00:00:00 +0000
      Finished:     Mon, 01 Jan 0001 00:00:00 +0000
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:  100m
    Environment:
      APPSCODE_ANALYTICS_CLIENT_ID:
    Mounts:
      /etc/ssl/private/haproxy from voyager-certdir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from voyager-repro-token-4bgsz (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  voyager-certdir:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  voyager-repro-token-4bgsz:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  voyager-repro-token-4bgsz
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age              From                                                    Message
  ----     ------                 ----             ----                                                    -------
  Normal   Scheduled              7m               default-scheduler                                       Successfully assigned voyager-repro-5c65d99685-f84xr to ip-172-20-119-215.us-east-2.compute.internal
  Normal   SuccessfulMountVolume  7m               kubelet, ip-172-20-119-215.us-east-2.compute.internal   MountVolume.SetUp succeeded for volume "voyager-certdir"
  Warning  FailedMount            6m (x5 over 7m)  kubelet, ip-172-20-119-215.us-east-2.compute.internal   MountVolume.SetUp failed for volume "voyager-repro-token-4bgsz" : secrets "voyager-repro-token-4bgsz" not found
  Warning  FailedMount            4m               kubelet, ip-172-20-119-215.us-east-2.compute.internal   Unable to mount volumes for pod "voyager-repro-5c65d99685-f84xr_default(2af49e72-c644-11e8-9d29-0664d33a443a)": timeout expired waiting for volumes to attach or mount for pod "default"/"voyager-repro-5c65d99685-f84xr". list of unmounted volumes=[voyager-certdir voyager-repro-token-4bgsz]. list of unattached volumes=[voyager-certdir voyager-repro-token-4bgsz]
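For context: the kubelet repeatedly fails to mount the ServiceAccount token volume because the secret "voyager-repro-token-4bgsz" no longer exists, and the pod stays in Terminating with no IP and a zero-value Terminated state. The commands below are a minimal sketch of how one might inspect or clear a pod stuck like this (pod, secret, and namespace names are taken from the output above; whether a force delete is appropriate depends on your cluster):

  # confirm the ServiceAccount token secret referenced by the pod is really gone
  kubectl -n default get secret voyager-repro-token-4bgsz

  # watch the pod's phase and deletion timestamp while it is stuck in Terminating
  kubectl -n default get pod voyager-repro-5c65d99685-f84xr -o yaml | grep -E 'phase|deletionTimestamp'

  # as a last resort, remove the stuck pod without waiting for the kubelet
  kubectl -n default delete pod voyager-repro-5c65d99685-f84xr --grace-period=0 --force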