I encountered a mysterious issue when attempting to move an app from Heroku to Kubernetes. My celery worker, started with this command:
worker: celery worker --app=<my app> --without-gossip --events -Ofair
would appear to "kind of" start: the warning from celery about running as root would print, and then after a few seconds
the pod would enter a CrashLoopBackOff state without printing any other logs.
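When a pod is crash-looping with nothing in its logs, the usual first steps are to ask Kubernetes what it knows about the pod and to pull the logs from the previous, crashed container. A rough sketch of that, where the pod name `celery-worker-abc123` is a stand-in for whatever name your deployment generates:

```
# Show recent events and the container's last state for the crash-looping pod
kubectl describe pod celery-worker-abc123

# Fetch logs from the previous (crashed) run of the container,
# in case the current one hasn't produced any output yet
kubectl logs celery-worker-abc123 --previous
```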
The same thing would happen when I would kubectl exec
into the pod and try to start my worker manually, roughly as sketched below. Doing it this
way I was able to see a little bit more logging, because of an issue with how Docker captures logs; there is more detail here: Celery, docker, and the missing startup banner
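For reference, the manual run looked roughly like this (pod name hypothetical, and it assumes bash is available in the image):

```
# Open a shell inside the (briefly) running pod
kubectl exec -it celery-worker-abc123 -- /bin/bash

# Then, inside the container, start the worker by hand
celery worker --app=<my app> --without-gossip --events -Ofair
```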
This way I was able to see celery's startup banner saying it had connected to redis, and then the simple message "Killed".
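A bare "Killed" on stdout usually means the process received SIGKILL, most commonly from the kernel's OOM killer. One way to check whether that is also what was terminating the container on its normal crashes (pod name again hypothetical) is to ask Kubernetes why the last container instance exited:

```
# If the container was OOM-killed, the reason shows up in its last state
kubectl get pod celery-worker-abc123 \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
# Prints "OOMKilled" when the pod exceeded its memory limit
```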
It turns out what was happening is that Kubernetes was killing my pod because it was starting up a bunch of new processes. After reading