---
title: Kubernetes Native Support For Sidecar Containers
date: Mar 13, 2024
description: In this article we will go through the classical implementation of sidecars and how they are used, the problems with it and the workarounds the community was using, then we try the new native sidecar feature introduced in k8s 1.28 and how it tackles the problem!
author_image: /profile.jpg
author: Youcef Guichi
image: /native-sidecar-containers.jpeg
---

In this article we will go through the classical implementation of sidecars and how they are used, the problems with it and the workarounds the community was using; then we will try the new native sidecar feature introduced in k8s 1.28 and see how it tackles the problem!

Happy reading!

Sidecars vs Init Containers

In Kubernetes, both sidecars and init containers serve distinct roles within pods. Sidecar containers operate alongside the main application container, providing supplementary functionality such as logging, monitoring, or proxying. On the other hand, init containers execute initialization tasks before the main application container starts. They run to completion independently of the main container, ensuring that prerequisites like configuration setup or data population are fulfilled before the primary application begins.
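As an illustration, a classic (pre-1.28) sidecar is simply a second entry under containers, with no special marker for Kubernetes to distinguish it from the main container. A minimal sketch, with a hypothetical log-shipping image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  - name: app                           # the main application container
    image: nginx
  - name: log-shipper                   # the sidecar, running alongside the app
    image: example/log-shipper:latest   # hypothetical image
```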

The Problem before k8s 1.28

Before 1.28, k8s did not know which container you were using as a sidecar and which one as the main container; that being said, the responsibility of distinguishing and managing sidecars was left to us.

Let's say your main container runs a job, and in order for this job to run it needs another service to be ready so it can communicate with it through localhost. How would we implement that in this scenario? Your main container's job should have a health check for the sidecar service; if the health check passes, that means the service is ready and the main container's job can start.

And now let's say that the job has finished successfully: would that lead the pod to reach the completed state? Unfortunately, no!

Remember, we still have the sidecar container running a service for us, and it has a lifecycle independent from the main container. What would we do in this case? The workaround here is to notify the sidecar container that the main container's job succeeded, for example by writing the info to a file in a shared volume; the sidecar can then check for it periodically, and once it sees the success marker it will exit. Since all containers finished successfully, the pod can now go to a completed state.
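A minimal sketch of this workaround (image names, paths, and commands are illustrative, not from the original article): the main container touches a sentinel file on a shared emptyDir when it finishes, and the sidecar polls for that file and exits when it appears, letting the pod complete.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: job-with-sidecar-workaround
spec:
  restartPolicy: Never
  volumes:
  - name: shared            # shared emptyDir used as a signaling channel
    emptyDir: {}
  containers:
  - name: main
    image: busybox
    # do the actual work, then drop a sentinel file for the sidecar
    command: ["sh", "-c", "echo doing work; touch /shared/done"]
    volumeMounts:
    - name: shared
      mountPath: /shared
  - name: sidecar
    image: busybox
    # poll for the sentinel file and exit so the pod can complete
    command: ["sh", "-c", "while [ ! -f /shared/done ]; do sleep 1; done"]
    volumeMounts:
    - name: shared
      mountPath: /shared
```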

Init containers, on the other hand, can ensure the sequencing for us; however, init containers run to completion before the main container starts, so the main container can't access a service running in an init container.

So What's next?

Native sidecar container support in k8s 1.28

How did the k8s maintainers tackle the issue in 1.28?

They added one extra field to the init container definition: restartPolicy: Always. If it is set, Kubernetes will handle the init container as a proper sidecar. In addition, the sidecar will start first, just like a regular init container; once it is ready, the main container will start.
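In other words, the only change is one field on the init container spec (the container name and image here are hypothetical):

```yaml
initContainers:
- name: log-shipper
  image: example/log-shipper:latest  # hypothetical sidecar image
  restartPolicy: Always              # new in 1.28: treat this init container as a sidecar
```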

Let's do some experiments as follows!

We have to enable SidecarContainers: true in order to experiment with this feature.

#cluster-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
  SidecarContainers: true

Spin up a cluster using kind

kind create cluster --name sidecars --config=cluster-config.yaml

We have a sample job that has an init container with restartPolicy set to Always; the init container is serving an API at 0.0.0.0:80.

The main container is an nginx image that will curl the API served by the init container's image.

apiVersion: batch/v1
kind: Job
metadata:
  name: podbee
spec:
  template:
    spec:
      initContainers:
      - name: podbee
        image: ghcr.io/biznesbees/podbee:v0.1.1
        restartPolicy: Always
      containers:
      - name: nginx
        image: nginx
        command:
        - sh 
        - -c
        - curl 0.0.0.0:80 # podbee's api 
      restartPolicy: Never
  backoffLimit: 4

As we see below, the init container initialized first as expected, then we have pod initialization; the interesting part is that both containers were running at the same time.


➜  root git:(main) ✗ kubectl get po -w
NAME           READY   STATUS     RESTARTS   AGE
podbee-gf9x8   0/2     Init:0/1   0          71s
podbee-gf9x8   1/2     PodInitializing   0          2m16s
podbee-gf9x8   2/2     Running           0          2m21s
podbee-gf9x8   1/2     Completed         0          2m30s

If we describe the pod, we see that after the main container finished, Kubernetes killed the sidecar container. The sidecar's lifecycle becomes tied to the pod and the main container: if the main container succeeded, Kubernetes will kill the sidecar container automatically for us.

Events:
  Normal  Created    98s    kubelet            Created container debian
  Normal  Started    98s    kubelet            Started container debian
  Normal  Killing    89s    kubelet            Stopping container podbee

Also, as we see here in the logs, the main container (127.0.0.1:52756), which is our nginx, was able to curl the sidecar at 0.0.0.0:80.

➜ root git:(main) ✗ kubectl logs podbee-nggdg -c podbee
INFO:     Started server process [13]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:80 (Press CTRL+C to quit)
INFO:     127.0.0.1:52756 - "GET / HTTP/1.1" 200 OK
➜ root git:(main) ✗ kubectl logs podbee-nggdg -c nginx 
{"message":"Thank you for using PodBee!"}

Take Aways

  • The feature looks promising as it takes the overhead off the developer's shoulders and leverages a native solution.
  • For now it is still an experimental feature and not recommended for production use.
  • For more details you may take a look at the official announcement on the Kubernetes website.