Rails Kubernetes Manifests
apiVersion: v1
kind: ConfigMap
metadata:
  name: example
  namespace: default
data:
  APPLICATION_HOST: example.com
  LANG: en_US.UTF-8
  PIDFILE: /tmp/server.pid
  PORT: "3000"
  RACK_ENV: production
  RAILS_ENV: production
  RAILS_LOG_TO_STDOUT: "true"
  RAILS_SERVE_STATIC_FILES: "true"
---
apiVersion: batch/v1
kind: Job
metadata:
  generateName: db-migrate-
  labels:
    app.kubernetes.io/name: example
  namespace: default
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/name: example
    spec:
      containers:
      - command:
        - rails
        - db:migrate
        envFrom:
        - configMapRef:
            name: example
        - secretRef:
            name: example
        image: docker.io/mycompany/myapplication:abcd123
        imagePullPolicy: IfNotPresent
        name: main
      restartPolicy: Never
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: example
    process: web
  name: example-web
  namespace: default
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: example
      process: web
  template:
    metadata:
      labels:
        app.kubernetes.io/name: example
        process: web
    spec:
      containers:
      - env:
        - name: PORT
          value: "3000"
        envFrom:
        - configMapRef:
            name: example
        - secretRef:
            name: example
        image: docker.io/mycompany/myapplication:abcd123
        imagePullPolicy: IfNotPresent
        name: main
        ports:
        - containerPort: 3000
          name: http
          protocol: TCP
        readinessProbe:
          httpGet:
            httpHeaders:
            - name: Host
              value: example.com
            path: /robots.txt
            port: 3000
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 2
      initContainers:
      - command:
        - rake
        - db:abort_if_pending_migrations
        envFrom:
        - configMapRef:
            name: example
        - secretRef:
            name: example
        image: docker.io/mycompany/myapplication:abcd123
        imagePullPolicy: IfNotPresent
        name: migrations
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  labels:
    app.kubernetes.io/name: example
  name: example
  namespace: default
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-web
            port:
              number: 3000
  tls:
  - hosts:
    - example.com
    secretName: example-tls
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: example
    process: web
  name: example-web
  namespace: default
spec:
  ports:
  - name: http
    port: 3000
    protocol: TCP
    targetPort: http
  selector:
    app.kubernetes.io/name: example
    process: web
  type: ClusterIP
@gee-forr commented Jan 19, 2021

OMG THANK YOU!!!!! This is gold.

@jgrau commented Feb 10, 2021

Yes, this is awesome. I'm running into the issue of kubectl apply -f ... not working with generateName in the Job while kubectl create -f ... does work but generates warnings for each of the existing objects. Is there a good way to solve this?

@jferris (author) commented Feb 10, 2021

@jgrau - you'll need to do something differently depending on how your CD pipeline works.

If you deploy with something like ArgoCD or Helm, you can add a hook annotation and generateName will work fine.

If you're applying your manifests in a CD pipeline manually, you can use apply for most of the manifests and then run create separately for the job definition.

I've been using ArgoCD, which lets me use hooks and generateName, but it also lets you see the effective diff before you apply it and supports tools like Helm and Kustomize as well.
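For reference, the hook approach can be sketched like this using ArgoCD's documented resource hook annotations (the delete policy keeps each generated migration Job from piling up after it succeeds):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  generateName: db-migrate-
  annotations:
    # Run this Job before the rest of the manifests sync.
    argocd.argoproj.io/hook: PreSync
    # Delete the generated Job once it succeeds, so repeated
    # syncs don't accumulate completed Jobs.
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  # ... same template as the db-migrate Job above ...
```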

@jgrau commented Feb 11, 2021

@jferris Thank you! That makes sense. I kept chasing a way to make it all work with kubectl apply but running kubectl create -f <job.yaml> and then kubectl apply -f <the rest of the manifests> is easy and simple and it works.

@lauer commented Apr 21, 2021

Should the Docker image already have the assets precompiled before pushing it to the Docker registry?

That is how we do it:

...
RUN RAILS_ENV=production bundle exec rake assets:precompile
CMD [ "bin/rails", "server", "--binding=0.0.0.0" ]

But thanks to @jferris. Especially the

        readinessProbe:
          httpGet:
            httpHeaders:
            - name: Host
              value: example.com

was something I was missing, because production requires a specific hostname to be set.

@kurt-mueller-osumc commented May 3, 2021

https://gist.github.com/jferris/1aba7433f5318715bda66b98c1d953f0#file-db-migrate-yaml-L18

I think this indentation is wrong? My apologies if it isn't.

@kaka-ruto commented May 24, 2021

@lauer, I've had to compile assets with the Rails master key (on CI)

I use GitHub Actions and store the secret there, then use this:

RAILS_MASTER_KEY=`cat /run/secrets/master_key` RAILS_ENV=production bin/rails assets:precompile

It fails if I do not send it the master key. Is this the case for you?
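A GitHub Actions step along those lines might look like this (a sketch; it assumes the key is stored as a repository secret named RAILS_MASTER_KEY):

```yaml
# Sketch of a GitHub Actions step that precompiles assets,
# passing the Rails master key in from a repository secret.
- name: Precompile assets
  env:
    RAILS_ENV: production
    RAILS_MASTER_KEY: ${{ secrets.RAILS_MASTER_KEY }}
  run: bin/rails assets:precompile
```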

@jandudulski commented Nov 4, 2021

Hi @jferris, why do you actually use generateName for the job instead of just name?

@jandudulski commented Nov 4, 2021

Also, I struggle to use this approach with google_cloud_proxy. An initContainer runs just a single container, and the only solution I found is "build your own image with the proxy binary inside". Do you have a better solution for that?

And the job is not marked as completed while proxy is running inside.

@jferris (author) commented Nov 4, 2021

Why do you actually use generateName for the job instead of just name?

You can use name or generateName, but if you use name your CI process will need to delete the previous job first or migrations won't run.

And the job is not marked as completed while proxy is running inside.

Kubernetes itself doesn't have any special knowledge of essential containers or sidecars so if you use a sidecar container in a job you'll need to manually kill it in the main container once the job is complete.

initContainer runs just a single container and the only solution I found is "build your own image with proxy binary inside". Do you have a better solution for that?

If your initContainer needs a sidecar to connect to the database, that approach might be difficult. A few ideas you could try:

  • Run the cloud proxy as a separate deployment or daemonset rather than a sidecar so that it's available before the pod starts.
  • Replace the initContainer with a readiness probe so the sidecar is running by the time you need to connect. If you do this, you may need to configure it to restart the container if the probe fails so that Rails will cache the latest database definitions.
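The first idea could be sketched roughly like this (hypothetical names throughout; it assumes the Cloud SQL Auth Proxy v2 image and exposes it through a Service so pods reach it over cluster DNS, e.g. DATABASE_HOST=cloud-sql-proxy, instead of localhost):

```yaml
# Sketch: run the proxy as its own Deployment plus Service,
# so Jobs and initContainers can connect before their pod starts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloud-sql-proxy
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: cloud-sql-proxy
  template:
    metadata:
      labels:
        app.kubernetes.io/name: cloud-sql-proxy
    spec:
      containers:
      - name: proxy
        image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.8.0
        args:
        # Listen on all interfaces so other pods can connect.
        - "--address=0.0.0.0"
        - "--port=5432"
        - "my-project:my-region:my-instance"  # hypothetical instance name
---
apiVersion: v1
kind: Service
metadata:
  name: cloud-sql-proxy
spec:
  selector:
    app.kubernetes.io/name: cloud-sql-proxy
  ports:
  - port: 5432
    targetPort: 5432
```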

@jandudulski commented Nov 4, 2021

@jferris thanks for your reply and suggestions.

In the meantime I found why generateName makes sense - jobs are immutable so you cannot patch them to redeploy.

so if you use a sidecar container in a job you'll need to manually kill it in the main container once the job is complete.

that's not trivial on its own but this solution worked for me: https://stackoverflow.com/a/64650086
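The pattern from that answer can be sketched like this (assumptions: an emptyDir shared between the containers, and a hypothetical proxy image that has a shell available):

```yaml
# Sketch: the main container touches a marker file when the
# migration finishes; the sidecar polls for it and exits, so
# the Job can complete.
spec:
  template:
    spec:
      volumes:
      - name: lifecycle
        emptyDir: {}
      containers:
      - name: main
        image: docker.io/mycompany/myapplication:abcd123
        command:
        - /bin/sh
        - -c
        - "bin/rails db:migrate; touch /lifecycle/done"
        volumeMounts:
        - name: lifecycle
          mountPath: /lifecycle
      - name: proxy
        image: example/proxy:latest  # hypothetical proxy image
        command:
        - /bin/sh
        - -c
        - "proxy-binary & while [ ! -f /lifecycle/done ]; do sleep 1; done"
        volumeMounts:
        - name: lifecycle
          mountPath: /lifecycle
      restartPolicy: Never
```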

@kwstannard commented Feb 1, 2022

To improve the speed of this, create a route specifically for the readinessProbe, or even better, use a startupProbe (I am using the health_check gem). That route should, among other things, return 500 if migrations are pending. Then you can get rid of the initContainer. The app pods will start concurrently with the migration job and sit there returning 500s to k8s until the migrations finish, at which point they will return 200 and k8s will mark them as ready for traffic. This cuts deployment time by more than half compared to initContainers.
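In manifest terms, that could look something like this in the web container spec (a sketch; /health is a hypothetical route, served by something like the health_check gem, that returns 500 while migrations are pending):

```yaml
# Replaces the migrations initContainer: the pod starts alongside
# the migration Job and is only marked started/ready once /health
# stops returning 500 (i.e. once migrations have run).
startupProbe:
  httpGet:
    httpHeaders:
    - name: Host
      value: example.com
    path: /health
    port: 3000
  # Allow up to 30 * 5s = 150s for migrations to finish.
  failureThreshold: 30
  periodSeconds: 5
```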
