Our base operating system is Mac OS X, which we will simply call osx.
On top of osx we run a hypervisor (e.g. xhyve, VMware, VirtualBox).
On top of the hypervisor we run a virtual machine with CoreOS as our docker_engine (with or without the help of Vagrant).
On top of our docker_engine we run an arbitrary set of docker_container instances, most of which are built from a "bespoke" docker_image of our own: a certain "microservice" in development.
How do we:
a) map a ("microservice") project's source code directory, located on our osx file system, into the development docker_container currently running for that project
osx => hypervisor => docker_engine => docker_container
b) and propagate any file changes from osx down to the docker_container level as well (e.g. inotify, lsyncd, etc.)
There are at least two options for exposing files into the VM that runs the docker engine.
First, one can expose the source tree into the VM directly using a hypervisor-specific mechanism. For example, VirtualBox allows exposing any osx folder into the VM. The drawback is that permissions/ownership on that folder may not be suitable for a particular case. Another drawback is that the OS in the VM may not have drivers for such a hypervisor-provided file system, as is the case on CoreOS. And even when the drivers are there, the extra mounts mean that the development VM deviates from production in yet another way.
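For illustration, attaching a shared folder from the osx side looks like the following sketch (the VM name and paths are placeholders, not from my setup):

```shell
# Attach an osx folder to the VM as a VirtualBox shared folder.
# "coreos-vm" and the paths are placeholder names for illustration.
VBoxManage sharedfolder add "coreos-vm" \
  --name projects \
  --hostpath /Users/me/projects \
  --automount
```

The guest must then mount the share with the vboxsf driver, which is exactly what CoreOS does not ship.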
The second option is to run synchronization software on the host that copies any changes from the host into the VM. One can either invoke it directly, like running rsync manually, or use a tool like lsyncd that monitors the host filesystem for changes and copies them into the VM. In my setup I use lsyncd. The only catch is that by default lsyncd waits 10 seconds to accumulate changes on the host before sending them to the VM, so for convenient development I changed that to the minimal 1 second. Here is my lsyncd config:
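A minimal config along those lines might look like the following sketch (lsyncd configs are Lua; the source path and the VM target are placeholders, not my actual values):

```lua
-- Sync an osx source tree into the VM via rsync, flushing after 1 second.
sync {
    default.rsync,
    delay  = 1,  -- accumulate changes for 1 second instead of the default 10+
    source = "/Users/me/projects/myservice",            -- placeholder: tree on osx
    target = "core@172.17.8.101:/home/core/myservice",  -- placeholder: path in the VM
    rsync  = {
        -- force permissions suitable for the container user in the VM
        _extra = { "--chmod=Du=rwx,Dgo=rx,Fu=rw,Fgo=r" },
    },
}
```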
The --chmod option ensures the permissions that I need in the VM.
To expose files into a container, the standard approach is to embed them into the image. However, this is very inconvenient during development, especially when changing PHP/JS/HTML code. First, a new image with the updated files has to be built. Second, the container has to be restarted. Docker tries to make this fast using extensive caching, but it can still take some time. So the alternative is to mount the directory with the source files directly into the container as a read-only host volume:
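As a sketch, assuming the source tree has been synced to a path inside the VM, the container can be started with a read-only bind mount (the paths and image name are placeholders for illustration):

```shell
# Bind-mount the synced tree into the container read-only (:ro),
# so the container sees file changes without an image rebuild.
docker run -d \
  -v /home/core/myservice:/var/www/myservice:ro \
  my-microservice-image
```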
This is what I use in my setup. So when I change a file on the host, one second later lsyncd copies it into the VM, where it is immediately available to the application. One can use such a mount either temporarily during development or in the production setup as well. That way, updating HTML or image files on the production machine does not require restarting the container.
The only thing to watch is that permissions on the tree must allow the files to be read by the account used to run the container. On CoreOS this is trivial: my lsyncd setup above is enough. However, on Red Hat or Fedora, which run SELinux, one has to update the security context on the tree exposed into the container and run something like:
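On an SELinux-enforcing host the relabel might look like the following (the SELinux type shown is an assumption based on what Docker used on RHEL/Fedora at the time; newer releases use container_file_t instead):

```shell
# Recursively relabel the exposed tree so containers may read it.
# svirt_sandbox_file_t is assumed here; check your distribution's policy.
chcon -R -t svirt_sandbox_file_t /path-in-vm-to-source-files
```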
Fortunately, this has to be done only once, when path-in-vm-to-source-files is created.