Notes on Gitlab CI/CD


Useful links:

General

How to make my jobs execute in parallel?

When you define stages, all jobs in the same stage are executed in parallel.
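A minimal sketch (the stage and job names here are made up): test_unit and test_lint share the Test stage, so they run in parallel; deploy only starts after both finish.

stages:
  - Test
  - Deploy

# Both jobs are in the Test stage, so they run in parallel
test_unit:
  stage: Test
  script:
    - echo "running unit tests"

test_lint:
  stage: Test
  script:
    - echo "running linters"

# Deploy only starts after every job in the Test stage has finished
deploy:
  stage: Deploy
  script:
    - echo "deploying"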

How to execute a job only when some files were changed?
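The only: changes syntax covers this. A sketch (the paths are hypothetical):

build_docs:
  stage: Build
  script:
    - echo "docs changed, rebuilding"
  only:
    changes:
      - docs/**/*
      - README.md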

What about interactive jobs?

  • Gitlab.com doesn't support interactive web terminals for now (last checked: 2019/02/20); follow this issue for more.

Extending/Templating jobs

You have two options: the extends keyword, or YAML anchors (&, *, <<).

When you're templating/extending, keep in mind that it's better to avoid some of the simplified syntaxes, because when merging values Gitlab CI will not merge lists, for example.

Let's say you have something like:

deploy:
  only:
    - master

Now you want to extend it and add:

  only:
    # ...
    changes:
      - ./**/*.py

To avoid having to repeat the first bit in the extended form, use the expanded syntax from the beginning, like this:

deploy:
  only:
    refs:
      - master

Then, when you extend it, you'll get the result you expect:

deploy:
  only:
    refs:
      - master

+

deploy:
  only:
    changes:
      - ./**/*.py

=

deploy:
  only:
    refs:
      - master
    changes:
      - ./**/*.py
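The merge works the same way with the extends keyword, since extends does a deep merge of hashes. A sketch (the template name is arbitrary):

.deploy_template:
  only:
    refs:
      - master

deploy:
  extends: .deploy_template
  only:
    changes:
      - ./**/*.py

The resulting deploy job runs only on master and only when a Python file changes.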

Running locally

Run your jobs locally so you don't have to commit and push just to see if you're writing correct "CI code".

There are some limitations, but for basic checks, it's good enough.

So, install gitlab-runner: https://docs.gitlab.com/runner/

And you'll be running something like:

gitlab-runner exec docker my_awesome_job

Docker-in-Docker doesn't work in gitlab-runner exec docker

I faced a problem with recent versions (19.*) of Docker when using DinD.

It turns out Docker generates certificates and enforces TLS connections for DinD.

This is security by default, so people don't make the mistake of deploying Docker-in-Docker open to the world without authentication.

In Gitlab CI, I think that may not be a problem (please correct me if I'm wrong).

Try for yourself:

stages:
  - Test

testing:
  stage: Test
  image: docker:19
  services:
    - docker:19-dind
    - postgres:11-alpine
  variables:
    DOCKER_TLS_CERTDIR: ""
  script:
    - docker version
    - docker info
Then run it locally with:

gitlab-runner exec docker --docker-privileged testing

Accessing a service container from another container

A service runs in its own container during the job, but it isn't automatically reachable from other containers you start yourself (for example, via docker run in a DinD setup).

My solution at the moment is:

stages:
  - Test

testing:
  stage: Test
  image: docker:19
  services:
    - docker:19-dind
    - name: postgres:11-alpine
      alias: postgres
  variables:
    # https://gist.github.com/douglasmiranda/9b899c748e915173c8f19d948bbdc69c#docker-in-docker-doesnt-work-in-gitlab-runner-exec-docker
    DOCKER_TLS_CERTDIR: ""
  script:
    # Let's get the IP of the postgres service.
    # We need it so we can map it as a host inside the container we run.
    - POSTGRES_IP=$(cat /etc/hosts | awk '{if ($2 == "postgres") print $1;}')
    # Just checking that the IP is reachable from outside the container
    - ping -w 2 $POSTGRES_IP
    # Now we add/map our Postgres service IP inside the container
    # The hostname will be "postgres"
    - docker run --rm --add-host="postgres:$POSTGRES_IP" alpine sh -c "ping -w 5 postgres"

Real world example:

stages:
  - Build/Test

django:
  stage: Build/Test
  image: docker:19
  services:
    - docker:19-dind
    - name: postgres:11-alpine
      alias: postgres
  variables:
    # https://gist.github.com/douglasmiranda/9b899c748e915173c8f19d948bbdc69c#docker-in-docker-doesnt-work-in-gitlab-runner-exec-docker
    DOCKER_TLS_CERTDIR: ""
  script:
    # Let's get the IP for postgres service
    - POSTGRES_IP=$(cat /etc/hosts | awk '{if ($2 == "postgres") print $1;}')
    # Build
    - docker build --target=production -t ubit/django .
    - docker run --rm --add-host="postgres:$POSTGRES_IP" --env="DJANGO_SETTINGS_MODULE=ubit_ads.config.test" --entrypoint="" ubit/django sh -c "pip install --user -r requirements/test.txt && pytest"

Note: it may be better to just do build/test/release as separate jobs, like I do here.

Fail if the environment variable is not defined

job:
  script:
    - 'if [ -z "$MY_PASSWORD" ]; then echo "You must set the variable: MY_PASSWORD"; exit 1; fi'

Of course, you also have a built-in way of executing jobs only if a variable equals something:
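Something like only: variables (the variable name and value here are made up):

deploy_staging:
  stage: Deployment
  script:
    - echo "deploying to staging"
  only:
    variables:
      - $DEPLOY_TARGET == "staging"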

Docker

You can use the image you built in a previous job as the image for your current job

This can be useful for testing, like in a Build > Test > Release scenario.

Let's see a complete example of how that would be:

services:
  - docker:dind

stages:
  - Build
  - Test
  - Release

variables:
  DJANGO_IMAGE_TEST: $CI_REGISTRY_IMAGE/django:$CI_COMMIT_REF_SLUG
  DJANGO_IMAGE: $CI_REGISTRY_IMAGE/django:$CI_COMMIT_SHA

django_build:
  image: docker:stable
  stage: Build
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    # So we can use it as cache (`|| true` means that even if the pull fails, we'll still try to build it)
    - docker pull $DJANGO_IMAGE_TEST || true

    # Using --cache-from we make sure that if nothing is changed here we use what's cached
    # BUILD TEST IMAGE:
    - docker build --target=production --cache-from=$DJANGO_IMAGE_TEST -t $DJANGO_IMAGE_TEST .

    # push so we can use in subsequent jobs
    - docker push $DJANGO_IMAGE_TEST

django_test:
  image: $DJANGO_IMAGE_TEST
  stage: Test
  services:
    - postgres:11-alpine
  variables:
    POSTGRES_DB: postgres
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: ""
    POSTGRES_PORT: "5432"
    # Using the test settings, instead of actual production
    DJANGO_SETTINGS_MODULE: myapp.config.test
  script:
    # Install some packages to run tests
    # Execute pytest
    - pip install --user -r requirements/test.txt
    - pytest

django_release:
  image: docker:stable
  stage: Release
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $DJANGO_IMAGE_TEST
    - docker tag $DJANGO_IMAGE_TEST $DJANGO_IMAGE
    - docker push $DJANGO_IMAGE

Notes on using services

Services are Docker containers running long-lived processes (like a database) that you can access from your jobs.

For example, Postgres: https://docs.gitlab.com/ce/ci/services/postgres.html

  • The service will be available to connect at the hostname postgres (not localhost).
  • The default database, username and password are the defaults from the official image.
  • You can customize some things (see the sketch after this list).
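For example, a job that talks to the Postgres service (a sketch; it uses the postgres image for the job itself just so psql is available):

test_db:
  stage: Test
  image: postgres:11-alpine
  services:
    - postgres:11-alpine
  variables:
    POSTGRES_DB: postgres
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: ""
  script:
    # The service answers at the hostname "postgres", not localhost
    - psql -h postgres -U postgres -c "SELECT 1;"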

IMPORTANT:

You may want to run export in a job so you can see what variables Gitlab CI will inject by default.

This can cause some weird behavior: maybe you're expecting POSTGRES_PORT to be 5432, but if you export the variables you'll see that it's actually something like tcp://172.17.0.3:5432.
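A throwaway job is enough to inspect them (a sketch; the stage name is arbitrary):

debug_variables:
  stage: Test
  script:
    # Print every environment variable the runner injects
    - export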

So you probably want to define some variables, like:

variables:
  POSTGRES_DB: postgres
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: ""
  POSTGRES_PORT: "5432"

How to log in to my Gitlab Registry and stay logged in between jobs?

Define a global before_script; it runs before every job, so each job logs in to the registry:

before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY

Validate a Docker Compose/Stack file syntax

image: docker:stable
services:
  - docker:dind

stages:
  - Linters

test_docker_compose_files:
  stage: Linters
  script:
    # Download and install docker-compose
    - wget https://github.com/docker/compose/releases/download/1.23.2/run.sh -O /usr/local/bin/docker-compose
    - chmod +x /usr/local/bin/docker-compose
    # Validating the main Docker Compose file used in development environment
    - docker-compose -f docker-compose.yml config
    # Validating deployment docker stack files
    - docker-compose -f deployment/docker-stack.django.yml config

Snippets

Create secrets from environment variables before deploy

deploy:
  image: docker:latest
  stage: Deployment
  script:
    # First, let's check that our variable exists:
    - 'if [ -z "$MY_SECRET" ]; then echo "You must set the variable: MY_SECRET"; exit 1; fi'
    # Step two is to check whether MY_SECRET is already stored in Docker Secrets;
    # if not, we create it:
    - docker secret inspect MY_SECRET || echo $MY_SECRET | docker secret create MY_SECRET -
    # and then we deploy to our swarm:
    - docker stack deploy --with-registry-auth -c deployment/docker-stack.yml my_stack
  when: manual

Create secrets with openssl before deploy

deploy:
  image: docker:latest
  stage: Deployment
  script:
    - apk add --no-cache openssl
    - docker secret inspect MY_SECRET || openssl rand -base64 50 | docker secret create MY_SECRET -
    # and then we deploy to our swarm:
    - docker stack deploy --with-registry-auth -c deployment/docker-stack.yml my_stack
  when: manual

Check if my Docker Compose and Docker Stack files are valid

validate_stack_files:
  stage: Validate
  image: docker:stable
  script:
    - wget https://github.com/docker/compose/releases/download/1.23.2/run.sh -O /usr/local/bin/docker-compose
    - chmod +x /usr/local/bin/docker-compose
    # Validating the main Docker Compose file used in development environment
    - docker-compose -f docker-compose.yml config
    # Validating the deployment docker stack files
    - docker-compose -f deployment/docker-stack.django.yml config
  only:
    changes:
      - docker-compose.*
      - deployment/docker-stack.*

Docker TLS remote connection

  • Configure your Docker host to accept remote connections with TLS.
  • Generate your client certificates.
  • In your Gitlab Environment Variables:
    • $TLSCACERT
    • $TLSCERT
    • $TLSKEY

Then a template job looks like this:

remote-docker-template-job:
  image: docker:stable
  variables:
    DOCKER_HOST: tcp://YOUR-DOCKER-HOST-IP-HERE:2376
    DOCKER_TLS_VERIFY: "1"
  before_script:
    - mkdir -p ~/.docker
    - echo "$TLSCACERT" > ~/.docker/ca.pem
    - echo "$TLSCERT" > ~/.docker/cert.pem
    - echo "$TLSKEY" > ~/.docker/key.pem
    - docker login -u $DEPLOY_USER -p $DEPLOY_TOKEN $CI_REGISTRY
  script:
    # Now you can run commands against your remote Docker host from Gitlab CI, e.g.:
    - docker stack deploy ...

Get ID of ONE Docker replicated (service) container that is running and is healthy

Let's say you want to run a one-off command inside a replicated (service) container, for example a DB migration job.

Django DB migration example:

docker exec $(docker ps -q -f name=mystack_django -f health=healthy -n 1) django-admin migrate

And as a Gitlab CI job:

django_dbmigrate:
  # You probably have some configurations for remote Docker here
  <<: *remote_docker_template
  stage: Deployment
  script:
    # $(docker ps -q -f name=${STACK_NAME}_${DJANGO_SERVICE_NAME} -f health=healthy -n 1): get the ID of ONE
    # container from the ${STACK_NAME}_django service that is running and healthy.
    - DJANGO_CONTAINER_ID=$(docker ps -q -f name=${STACK_NAME}_${DJANGO_SERVICE_NAME} -f health=healthy -n 1)
    - DJANGO_MIGRATE_CMD="django-admin migrate"
    # Sometimes you have an additional step before the migrate command, like sourcing a helper
    # (e.g. export-secrets.sh) that reads the postgres credentials from Docker Secrets and
    # exposes them as environment variables:
    # - DJANGO_MIGRATE_CMD="source export-secrets.sh && django-admin migrate"
    - docker exec $DJANGO_CONTAINER_ID sh -c "$DJANGO_MIGRATE_CMD"
  when: manual

Python

Snippets

Check code style with Black

code_style:
  stage: Quality
  # It is simply the official Python image + Black
  image: douglasmiranda/black
  script:
    - black --check --diff my_project/
  only:
    changes:
      - ./**/*.py
  allow_failure: true
  when: on_success
dmpvost commented May 27, 2020

Your doc is perfect, thanks for sharing!

In my case, getting POSTGRES_IP with your bash script didn't work. After some digging, I'd like to share a possibly more stable solution:

POSTGRES_IP=$(cat /etc/hosts | grep postgres | grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}')
