Christopher Jarvis (chris-w-jarvis)

chris-w-jarvis / shell_to_exec_form.md
Last active January 15, 2021 17:55
convert a shell command to exec form for docker CMD

echo "<COMMAND TO TRANSLATE WITH $'s ESCAPED>" | sed 's/ /","/g' this will replace each inner space with ","

chris-w-jarvis / kubernetes_celery_worker.md
Created January 15, 2021 16:17
kubernetes is killing my celery worker pod with no logging

I encountered a mysterious issue when attempting to move an app from Heroku to k8s. My celery worker, started with the command `worker: celery worker --app=<my app> --without-gossip --events -Ofair`, would appear to "kind of" start: the warning from celery about running as root would print, and then after a few seconds the pod would enter the CrashLoopBackOff state and print no other logs.

The same thing would happen when I would `kubectl exec` into the pod and try to start the worker manually. Doing it this way I was able to see a little more logging, because of an issue with how Docker captures logs (more detail here: Celery, docker, and the missing startup banner). This time I could see celery's startup banner saying it had connected to redis, followed by the single message "Killed".

It turns out what was happening is that Kubernetes was killing my pod because it was starting up a bunch of new processes. After reading
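A minimal sketch of one possible mitigation, assuming the extra processes came from celery's default prefork pool (which spawns one worker process per CPU core on the node and can push the pod past its memory limit): cap the pool size explicitly with `--concurrency`. The value shown is illustrative.

```sh
# Limit the prefork pool to 2 worker processes instead of the default
# of one per CPU core on the node; adjust to fit the pod's resources.
celery worker --app=<my app> --without-gossip --events -Ofair --concurrency=2
```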

Keybase proof

I hereby claim:

  • I am chris-w-jarvis on github.
  • I am chrisjarv (https://keybase.io/chrisjarv) on keybase.
  • I have a public key ASDFSfGE2O858wqvfd1BwvHNarO_Xr7xgQiVw6AZmxyX_Qo

To claim this, I am signing this object: