Highly Scalable Django Backend (Docker + Kubernetes)

Django Dockerfile

Here's an example Dockerfile for building a Django project:

  # Use an official Python runtime as a parent image
  FROM python:3.8-slim-buster

  # Set the working directory to /app
  WORKDIR /app

  # Copy the requirements file into the container at /app
  COPY requirements.txt /app/

  # Install any needed packages specified in requirements.txt
  RUN pip install --no-cache-dir -r requirements.txt

  # Copy the current directory contents into the container at /app
  COPY . /app/

  # Set environment variables
  ENV DJANGO_SETTINGS_MODULE=myproject.settings.prod
  ENV PYTHONUNBUFFERED=1

  # Expose port 8000 for the Django app
  EXPOSE 8000

  # Start the Django app with Gunicorn
  CMD ["gunicorn", "myproject.wsgi:application", "-w", "4", "-b", "0.0.0.0:8000"]

This Dockerfile assumes that you have a requirements.txt file in your project root directory that lists all of your Python dependencies. You can modify the DJANGO_SETTINGS_MODULE environment variable to specify the settings file for your Django project, and you can adjust the number of Gunicorn workers (-w) and the bind address (-b) to suit your needs.
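
For reference, the sketch below shows what a minimal requirements.txt for the stack used in this gist (Django, Gunicorn, Celery, Redis, PostgreSQL) might contain; the package list and version ranges are assumptions, so pin them to match your project:

  # Hypothetical minimal requirements.txt -- adjust packages and pins for your project
  Django>=4.2,<5.0
  gunicorn>=21.2
  celery>=5.3
  redis>=5.0
  psycopg2-binary>=2.9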

Make sure this Dockerfile is in the same directory as your project code and requirements.txt file; then use the docker build command to build a Docker image for your Django project:

  docker build -t myproject .

This will create a Docker image with the tag myproject that includes your Django app and its dependencies. You can then use this image to run your app locally or to deploy it to a Kubernetes cluster.
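
To sanity-check the image locally before moving on to Compose or Kubernetes, you can run it directly with docker run; the settings module and port below simply mirror the Dockerfile above (views that hit the database will fail until one is available):

  docker run --rm -p 8000:8000 -e DJANGO_SETTINGS_MODULE=myproject.settings.prod myproject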

Docker-Compose (Django, Celery, Traefik, Redis, DB)

Here's a summary of how to scale a Django application using Docker and Kubernetes, including the Docker Compose file and Kubernetes deployment file:

Use Docker to create a scalable and efficient setup for your Django application with Celery, Redis, PostgreSQL, Traefik, and Gunicorn. Here's an example Docker Compose file that includes all of these components:

  version: '3'
  services:
    web:
      build: .
      command: gunicorn myproject.wsgi:application -w 4 -b 0.0.0.0:8000
      expose:
        - "8000"
      depends_on:
        - db
        - redis
      volumes:
        - .:/app
      env_file:
        - .env
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.myproject.rule=Host(`myproject.example.com`)"
        - "traefik.http.routers.myproject.entrypoints=web"
        - "traefik.http.services.myproject.loadbalancer.server.port=8000"
    db:
      image: postgres
      environment:
        POSTGRES_DB: postgres
        POSTGRES_USER: postgres
        POSTGRES_PASSWORD: postgres
      volumes:
        - postgres_data:/var/lib/postgresql/data/
    redis:
      image: redis
      volumes:
        - redis_data:/data
    celery:
      build: .
      command: celery -A myproject worker -l info
      depends_on:
        - db
        - redis
      volumes:
        - .:/app
      env_file:
        - .env
    celerybeat:
      build: .
      command: celery -A myproject beat -l info
      depends_on:
        - db
        - redis
      volumes:
        - .:/app
      env_file:
        - .env
    traefik:
      image: traefik:v2.5
      command:
        - "--providers.docker=true"
        - "--providers.docker.exposedbydefault=false"
        - "--entrypoints.web.address=:80"
      ports:
        - "80:80"
        - "8080:8080"
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock:ro
  volumes:
    postgres_data:
    redis_data:

In your Docker Compose file, allocate enough resources (e.g. CPU and memory) to each service to ensure that it can handle the expected traffic.
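
One way to do this is with the deploy.resources keys from the Compose specification; the sketch below caps the web service with placeholder limits and reservations that you should tune for your workload:

  services:
    web:
      # ... existing web configuration from above ...
      deploy:
        resources:
          limits:
            cpus: "1.0"
            memory: 512M
          reservations:
            cpus: "0.5"
            memory: 256M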

Kubernetes

Use Kubernetes to deploy your application and enable horizontal scaling. To do this, create a Kubernetes deployment file and set the replicas field to the number of copies of your application you want to run. Here's an example Kubernetes deployment file:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: myproject
    labels:
      app: myproject
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: myproject
    template:
      metadata:
        labels:
          app: myproject
      spec:
        containers:
        - name: web
          image: myproject:latest
          ports:
          - containerPort: 8000
          env:
          - name: DJANGO_SETTINGS_MODULE
            value: "myproject.settings.prod"
          - name: PYTHONUNBUFFERED
            value: "1"
        - name: celery
          image: myproject:latest
          command: ["celery", "-A", "myproject", "worker", "-l", "info"]
          env:
          - name: DJANGO_SETTINGS_MODULE
            value: "myproject.settings.prod"
          - name: PYTHONUNBUFFERED
            value: "1"
        - name: celerybeat
          image: myproject:latest
          command: ["celery", "-A", "myproject", "beat", "-l", "info"]
          env:
          - name: DJANGO_SETTINGS_MODULE
            value: "myproject.settings.prod"
          - name: PYTHONUNBUFFERED
            value: "1"

This deployment file creates three replicas of a pod that runs containers for your Django app, Celery worker, and Celery beat. Keep in mind that scaling Celery beat along with the web container runs the scheduler in every replica and schedules each periodic task multiple times, so in practice you would typically move the beat container into its own single-replica deployment.
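
The autoscaler described in the next section measures CPU utilization relative to each container's CPU request, so the containers in this deployment should also declare resources. A sketch for the web container, with placeholder values to tune for your workload:

        - name: web
          image: myproject:latest
          ports:
          - containerPort: 8000
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi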

Kubernetes HPA (Horizontal Pod Autoscaler)

Use Kubernetes' Horizontal Pod Autoscaler (HPA) to automatically scale the number of pod replicas based on CPU usage (the autoscaling/v1 API used below supports CPU only; autoscaling/v2 adds memory and custom metrics). Here's an example Kubernetes HPA file:

  apiVersion: autoscaling/v1
  kind: HorizontalPodAutoscaler
  metadata:
    name: myproject-hpa
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: myproject
    minReplicas: 2
    maxReplicas: 10
    targetCPUUtilizationPercentage: 50

This HPA file sets the minimum number of replicas to two, the maximum number of replicas to ten, and the target CPU utilization percentage to 50%. For the autoscaler to act, the cluster must be running the metrics server and the deployment's containers must declare CPU requests, since utilization is calculated as a percentage of those requests.
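
If you prefer not to maintain a separate manifest, the same autoscaler can be created imperatively with kubectl; this command mirrors the values in the HPA file above:

  kubectl autoscale deployment myproject --cpu-percent=50 --min=2 --max=10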

Monitor the resource usage of your pods using tools like kubectl top and adjust the resource allocation as needed.
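
For example, these standard commands show per-pod usage and the autoscaler's current state:

  # Per-pod CPU and memory usage
  kubectl top pods

  # Current and target utilization plus replica counts for the autoscaler
  kubectl get hpa myproject-hpa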

Optimize your application and infrastructure for efficiency to minimize costs while using autoscaling. This can include things like using caching to reduce database load, optimizing your code for performance, and using efficient networking and storage solutions.

Overall, by using Docker and Kubernetes to create a scalable and efficient setup for your Django application and optimizing your resources and infrastructure for efficiency, you can ensure that your application can handle the expected traffic and provide a good user experience.
