To run something messy with complicated dependencies that was designed for a big distro like Ubuntu and doesn't have a Nix package built for it yet, Docker provides a sledgehammer approach that is simple and reliable.
For things like CI servers, it seems like a good lightweight alternative to VMs.
On the downside: image builds that do network I/O are not repeatable, and there is no cache for those downloads (e.g. for Dockerfiles that run apt-get).
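As a sketch of that problem (the base image tag and package names here are just placeholders), a Dockerfile like this can produce different images on different days, because apt-get fetches whatever the mirrors serve at build time, while Docker's layer cache keys only on the text of the instruction:

```dockerfile
# Hypothetical example; the base image and packages are placeholders.
FROM ubuntu:16.04

# This layer's contents depend on the state of the Ubuntu mirrors at
# build time: two builds can install different package versions, yet
# Docker reuses the cached layer as long as this line is unchanged.
RUN apt-get update && apt-get install -y curl build-essential

# Any change above this line invalidates the cache, and the network
# I/O is redone from scratch with no download cache to fall back on.
COPY . /app
```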
The tooling around it could use more work. I felt some mismatch between what Compose was trying to do and what I wanted it to be.
I started using Docker out of a desire for a single-command, no-fuss way for developers to get our app running. To that end, it worked, but it introduced as many problems as it solved. One team member couldn't get volume mounting to work in his ecryptfs home directory, and didn't have enough disk space allocated to his root filesystem for all of the images. Using an SSH agent within containers was a mess. I had a hard time keeping disk space free and tried several third-party garbage collectors. I spent a lot of time waiting for images to build and containers to restart.
I then started using the Nix package manager, and I've realized that it solves all of the problems that I had wanted Docker to solve, without introducing its problems. I have true repeatable builds because all downloads have their hashes verified. All downloads and package builds are cached in /nix/store (which is immutable and hash-addressed). Garbage collection is built-in. I have isolation only in the way that really matters - packages and their dependencies, so it's trivial to have multiple versions of something running side-by-side. I can use nix-shell to be up and running in an environment with any set of dependencies nearly instantaneously. I don't use Docker anymore.
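As a sketch of the nix-shell workflow described above (the package names are only examples, not a real project's dependencies), a `shell.nix` at the project root gives anyone with Nix a working environment in one command:

```nix
# shell.nix -- hypothetical example; list the packages your project needs.
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  buildInputs = [
    pkgs.python3
    pkgs.nodejs
    pkgs.postgresql
  ];
}
```

Running `nix-shell` in that directory drops you into a shell with those tools on the PATH, fetched or built into /nix/store with their hashes verified, and `nix-collect-garbage` later reclaims anything no longer referenced.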
I think my takeaway lesson here is that, if you have a sufficiently powerful package manager, containers generally aren't the best tool for development.