I encountered a mysterious issue when attempting to move an app from Heroku to k8s. My Celery worker, started with this command:
worker: celery worker --app=<my app> --without-gossip --events -Ofair
would appear to "kind of" start: Celery's warning about running as root would print, and then after a few seconds
the pod would enter the CrashLoopBackOff state without printing any other logs.
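For reference, this is roughly what the failure looked like from the outside; the pod name below is a placeholder:

# the pod restarts repeatedly and reports CrashLoopBackOff
kubectl get pods

# the only log output is Celery's running-as-root warning
kubectl logs my-worker-pod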
The same thing would happen when I would kubectl exec
into the pod and try to start my worker manually. Doing it this
way I was able to see a little more logging, because of an issue with how Docker captures logs; more details here: Celery, docker, and the missing startup banner
This time I was able to see Celery's startup banner saying it had connected to Redis, followed by the simple message "Killed".
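If you want to reproduce that kind of debugging session, it looked roughly like this; the pod name and app module are placeholders:

kubectl exec -it my-worker-pod -- /bin/bash
# inside the container, start the worker by hand so its output goes straight to the terminal
celery worker --app=myapp --without-gossip --events -Ofair
# the startup banner prints, then the process dies with: Killed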
It turns out what was happening is that Kubernetes was killing my pod because the worker was starting up a bunch of new processes: by default, Celery uses the prefork pool and forks one child process per CPU core, which was likely pushing the pod past its memory limit. After reading
more about how Celery workers work at a lower level (https://www.distributedpython.com/2018/10/26/celery-execution-pool/) I found the solution:
running the Celery worker single-threaded using the --pool solo
option.
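Concretely, the only change to the worker command is the pool option (the app module name here is a placeholder):

# run everything in a single process and thread; nothing gets forked,
# so the pod stays within its resource limits
celery worker --app=myapp --without-gossip --events -Ofair --pool solo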
You can still effectively support concurrency without crashing the pod by using one of the simulated threading options, such as gevent. I have a container running with --pool gevent and it is much more performant for my use case than --pool solo. This requires that your worker container has the gevent package installed (pip install gevent).
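As a sketch, the gevent variant looks something like this; the concurrency value and app module name are illustrative, not the exact values from my deployment:

# install the gevent package into the worker image
pip install gevent

# run the worker with the gevent pool; --concurrency controls how many
# greenlets process tasks concurrently within the single worker process
celery worker --app=myapp --without-gossip --events -Ofair --pool gevent --concurrency 100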