kubectl get pods | grep Evicted | awk '{print $1}' | xargs kubectl delete pod
Why doesn't Kubernetes clean up Evicted pods by itself? I only notice it happen sometimes.
kubectl get pod -A | grep Evicted | awk '{print $2 " --namespace=" $1}' | xargs -n 2 kubectl delete pod
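For reference, the awk stage above turns each `kubectl get pod -A` line into a `name --namespace=ns` pair, which `xargs -n 2` then feeds to `kubectl delete pod` two arguments at a time. You can check the transformation on canned output without touching a cluster (the pod and namespace names below are invented):

```shell
# Simulated `kubectl get pod -A` output (NAMESPACE NAME READY STATUS);
# the names are made up for illustration.
printf '%s\n' \
  'default   web-1   0/1   Evicted' \
  'batch     job-2   0/1   Evicted' \
  'default   web-3   1/1   Running' |
  grep Evicted | awk '{print $2 " --namespace=" $1}'
# → web-1 --namespace=default
# → job-2 --namespace=batch
```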
I think the simplest command is
kubectl delete pods -A --field-selector=status.phase=Failed
best solution! 👍
I think the simplest command is
kubectl delete pods -A --field-selector=status.phase=Failed
works for me
kgpa | grep Evicted | awk '{print $2}' | xargs kubectl delete pod --force
kgpa | grep -v Running | awk '{print $2}' | xargs kubectl delete pod --force
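Note that `kgpa` is not a standard kubectl command; it is presumably a shell alias along the lines of the popular kubectl shorthand conventions, something like:

```shell
# Assumed definition of the kgpa shorthand used above
# (not part of kubectl itself; define it in your shell rc file).
alias kgpa='kubectl get pods --all-namespaces'
```

With that alias in place, the two one-liners above are ordinary `kubectl get pods --all-namespaces | grep … | xargs kubectl delete pod --force` pipelines. Be aware that `--force` skips graceful termination.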
Did you know that you can use the --field-selector option for kubectl delete as well?
kubectl delete pod --field-selector="status.phase==Failed"
The original question is about deleting "Evicted" pods, which are a subset of "Failed". Unfortunately, the field selector only exposes status.phase, so there is no way to select the "Evicted" reason specifically.
kubectl get po -A -o json | jq '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | "kubectl delete po \(.metadata.name) -n \(.metadata.namespace)"' | xargs -n 1 bash -c
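A slightly more robust variant of the same idea uses `jq -r` with `@tsv` and a `while read` loop instead of piping quoted strings through `xargs bash -c`, which avoids shell-quoting surprises. The filter can be checked against canned JSON (the pods below are invented stand-ins for `kubectl get po -A -o json` output):

```shell
# Minimal stand-in for `kubectl get po -A -o json` output.
kubectl_json='{"items":[
  {"metadata":{"name":"web-1","namespace":"default"},
   "status":{"reason":"Evicted","phase":"Failed"}},
  {"metadata":{"name":"web-2","namespace":"default"},
   "status":{"phase":"Running"}}]}'

# Extract name/namespace pairs for Evicted pods as tab-separated values,
# then delete each pod; the echo stands in for the real kubectl call here.
echo "$kubectl_json" |
  jq -r '.items[]
         | select(.status.reason == "Evicted")
         | [.metadata.name, .metadata.namespace] | @tsv' |
  while IFS=$'\t' read -r name ns; do
    # In real use: kubectl delete po "$name" -n "$ns"
    echo "kubectl delete po $name -n $ns"
  done
# → kubectl delete po web-1 -n default
```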
Thank you!
I think the simplest command is
kubectl delete pods -A --field-selector=status.phase=Failed
Works for me too, for deleting failed pods in all namespaces
Did you know that you can use the --field-selector option for kubectl delete as well?
kubectl delete pod --field-selector="status.phase==Failed"
Great answer, thanks!
Why doesn't Kubernetes clean up Evicted pods by itself? I only notice it happen sometimes.
My understanding is that there is a cleanup threshold: once the number of failed pods reaches it, garbage collection kicks in. The catch is that the default is 12,500 (twelve thousand five hundred). The purpose of the threshold is to leave failed pods around so the reasons for failure can be reviewed, and in a large system that might almost be a reasonable number.
That threshold can be changed; I'm not sure what a sensible value would look like for a small cluster.
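For the record, the threshold in question is the kube-controller-manager flag `--terminated-pod-gc-threshold` (default 12500). On a kubeadm-provisioned cluster it can be lowered by editing the controller manager's static pod manifest; the path below is the kubeadm default and may differ on other distributions:

```shell
# Edit the kube-controller-manager static pod manifest
# (kubeadm default path; assumption: your cluster uses static pods).
sudo vi /etc/kubernetes/manifests/kube-controller-manager.yaml

# Add the flag under spec.containers[0].command, e.g.:
#     - --terminated-pod-gc-threshold=100
#
# The kubelet restarts the controller manager automatically
# when the manifest file changes.
```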
kubectl get pod -A | grep Evicted | awk '{print $2 " --namespace=" $1}' | xargs -n 2 kubectl delete pod
Works for me
This works for me
kubectl delete pods $(kubectl get pods | grep [pod name] | grep Evicted | awk '{print $1}')