@hc2p
Last active November 13, 2020 04:26
This is a proof of concept for leveraging Travis CI's directory caching to speed up Docker builds. It works by configuring the Docker daemon to use a folder under the current (travis) user's control. That way you have the privileges needed to use Travis CI's caching feature.
sudo: false
services:
- docker
before_script:
- sudo service docker stop
- if [ "$(ls -A /home/travis/docker)" ]; then echo "/home/travis/docker already set"; else sudo mv /var/lib/docker /home/travis/docker; fi
- sudo bash -c "echo 'DOCKER_OPTS=\"-H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock -g /home/travis/docker\"' > /etc/default/docker"
- sudo service docker start
- docker build -f Dockerfile-testenv -t testenv .
script:
- docker run testenv
before_cache:
- sudo service docker stop
- sudo chown -R travis ~/docker
cache:
  directories:
    - ~/docker
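The `before_script` guard above hinges on `ls -A` printing nothing for an empty directory. A minimal sketch of that check, run against a throwaway directory rather than the real /home/travis/docker:

```shell
#!/usr/bin/env bash
# Demonstrate the "directory already populated?" test used in before_script.
dir=$(mktemp -d)  # stands in for /home/travis/docker

# Empty directory: ls -A prints nothing, so the test is false.
if [ "$(ls -A "$dir")" ]; then echo "cache present"; else echo "cache empty"; fi

touch "$dir/layer"  # simulate a restored cache entry

# Non-empty directory: ls -A prints something, so the test is true.
if [ "$(ls -A "$dir")" ]; then echo "cache present"; else echo "cache empty"; fi

rm -r "$dir"
```

On the first Travis run the directory is empty, so `/var/lib/docker` gets moved into place; on cached runs the restored contents make the test succeed and the move is skipped.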
@rmcdaniel

This is a cool idea but there is a limit to how much you can cache and so this might not work for large builds. In my case, I got the error "running casher push took longer than 180 seconds and has been aborted" when I tried to use this. Thank you for sharing!

@hc2p
Author

hc2p commented Sep 21, 2016

Right. I don't know how much cache storage Travis provides, but yes, it's limited.

Another way of cutting down on build time, and of making sure images only differ in actual changes, is to pre-warm the cache by running docker pull before docker build. This might be slow for you since you have big images, and it depends on where your Docker registry is; you want it close to Travis (which, afaik, runs on AWS in us-east-1).
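A sketch of what that pre-warm step could look like in .travis.yml (the image name myregistry/testenv is illustrative; note that on Docker 1.13+ the pulled image only seeds the build cache if you pass `--cache-from`):

```yaml
before_script:
  # Pull may fail on the very first build, so don't let it abort the job.
  - docker pull myregistry/testenv:latest || true
  # Reuse layers from the pulled image when rebuilding.
  - docker build --cache-from myregistry/testenv:latest -t myregistry/testenv:latest .
after_success:
  # Push the refreshed image so the next build can pull it.
  - docker push myregistry/testenv:latest
```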

@promiseofcake

@rmcdaniel it's possible to specify a longer cache timeout in .travis.yml if this indeed speeds up your process. At some point, though, the time it takes to tar up the files and push/pull exceeds just pulling everything down fresh each time. (We discovered this on caches > 1 GB.)
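For reference, raising the timeout looks like this (value in seconds; 1800 is an arbitrary example, the default being the 180 seconds from the error above):

```yaml
cache:
  timeout: 1800
  directories:
    - ~/docker
```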

@itdove

itdove commented May 23, 2017

+1

@casperdcl

casperdcl commented Feb 9, 2018

  • Instead, cache the output of docker save and load from cache using docker load.
  • Or docker push/pull to/from hub.docker.com (after securely setting $DOCKER_PASSWORD in travis)
  • but only do this if (re)building layers is really slower than downloading
services:
  - docker

before_script:
  - mkdir -p ~/docker-images
  - if [[ -f ~/docker-images/testenv.tgz ]]; then docker load -i ~/docker-images/testenv.tgz; fi
  # or: - docker pull casperdcl/testenv
  - docker build -f Dockerfile-testenv -t casperdcl/testenv .
  - docker save casperdcl/testenv | gzip -c > ~/docker-images/testenv.tgz
  # or: - echo "$DOCKER_PASSWORD" | docker login -u casperdcl --password-stdin
  # - docker push casperdcl/testenv

script:
  - docker run casperdcl/testenv

cache:
  directories:
    ~/docker-images
