Resources for https://stackoverflow.com/questions/54439356/node-has-no-available-volume-zone-in-aws-eks
The eks-pod-no-avail-volume-zone.yml file is nearly the same as the one we use in prod; the only differences are the hostname and the "prod" / "stage" values, so this file is most likely not the source of the problem.
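For reference, a rough sketch of the relevant part of eks-pod-no-avail-volume-zone.yml, reconstructed from the PV/PVC names that appear in the listing below; the image, service name, and labels are assumptions, not the exact values from the file:

    # Sketch only: resource names come from the PV listing below, the rest is assumed.
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: demo-db-deployment
      namespace: common
    spec:
      serviceName: demo-db          # assumption
      replicas: 1
      selector:
        matchLabels:
          app: demo-db              # assumption
      template:
        metadata:
          labels:
            app: demo-db
        spec:
          containers:
            - name: demo-db
              image: postgres:10    # assumption, the real image is not shown here
              volumeMounts:
                - name: demo-db-storage
                  mountPath: /var/lib/postgresql/data   # assumption
      volumeClaimTemplates:
        - metadata:
            name: demo-db-storage
          spec:
            accessModes: ["ReadWriteOnce"]   # matches the RWO in the PV listing
            storageClassName: gp2
            resources:
              requests:
                storage: 1Gi                 # matches the PV capacity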
The storage is defined, and the AWS console shows that I do have a gp2 volume attached to each worker node.
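One way to double-check that from the CLI:

    # The storage class referenced by the pods; gp2 must exist in the cluster.
    kubectl get storageclass
    # The volumes as seen from the AWS side, filtered to attached ones.
    aws ec2 describe-volumes --filters Name=attachment.status,Values=attached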
One difference between stage (broken) and prod (working): the former runs Kubernetes 1.11, the latter 1.10.
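To confirm which version each cluster runs:

    # The "Server Version" line is what the cluster itself runs.
    kubectl version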
From kubectl describe pv I see that prod has 2 persistent volumes while stage has only one. Moreover, the stage volume shows up in the AWS EC2 console as "available", not as "in-use".
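The commands used for the comparison, run against each cluster's kubeconfig context:

    # PV inventory: two entries in prod, one in stage.
    kubectl get pv
    # Zone labels and the backing EBS volume id of a single PV.
    kubectl describe pv pvc-c95f0952-f160-11e8-9107-02fe87a39e2e
    # AWS-side state of the volumes; in stage this showed "available", not "in-use".
    aws ec2 describe-volumes --filters Name=status,Values=available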
I tried and failed to delete the PV: kubectl delete pv <id> prints persistentvolume "" deleted but hangs after that, and if I kill it and run kubectl get pv, the volume is still listed (note that I had also manually deleted the volume in the AWS EC2 console):
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                                          STORAGECLASS   REASON   AGE
    pvc-c95f0952-f160-11e8-9107-02fe87a39e2e   1Gi        RWO            Retain           Terminating   common/demo-db-storage-demo-db-deployment-0   gp2                     156d
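A PV stuck in Terminating like this is typically held back by a finalizer (kubernetes.io/pv-protection). For the record, a way to inspect it and, since the underlying EBS volume was already deleted manually, to force-remove it:

    # Show the finalizers that keep the PV in Terminating.
    kubectl get pv pvc-c95f0952-f160-11e8-9107-02fe87a39e2e \
      -o jsonpath='{.metadata.finalizers}'
    # Clear the finalizers so the pending delete can complete; only safe
    # here because the underlying EBS volume is already gone.
    kubectl patch pv pvc-c95f0952-f160-11e8-9107-02fe87a39e2e \
      -p '{"metadata":{"finalizers":null}}'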
Deleting the StatefulSet that created it did not help either.
I finally succeeded by first deleting the persistent volume claim (PVC) for it; I had mistakenly expected that deleting the StatefulSet that used the PVC would delete it too.
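The sequence that finally worked, with the StatefulSet and PVC names derived from the claim common/demo-db-storage-demo-db-deployment-0 shown above:

    # Delete the stateful set first so nothing re-binds the claim.
    kubectl delete statefulset demo-db-deployment -n common
    # Deleting the PVC is the step that actually releases the PV;
    # deleting the StatefulSet alone leaves both PVC and PV in place.
    kubectl delete pvc demo-db-storage-demo-db-deployment-0 -n common
    # With the claim gone, the PV delete can complete.
    kubectl delete pv pvc-c95f0952-f160-11e8-9107-02fe87a39e2e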