[since the source blog doesn't support comments and the margins of a tweet are too narrow]
Hmm, interesting distinction. But I don't see why your way is better than having the CI build the image.
-
It is more flexible: it lets me build the image in any ad-hoc way I want, e.g. modify files inside a running container and take a snapshot.
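That snapshot workflow might look like this (image and container names here are hypothetical, purely for illustration):

```shell
# Start a container from an existing image and poke at it by hand
docker run -it --name scratchpad myapp:latest /bin/sh
# ... inside the container: edit files, install packages, etc. ...

# Snapshot the modified container as a new image
docker commit scratchpad myapp:tweaked
```

Convenient, but the resulting `myapp:tweaked` is exactly the kind of image the next paragraph objects to: nothing in source control describes how it was made.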
But my primary Dev instinct is that I don't want this. The last thing I want in prod is an image I don't have sources for. I'm not thinking in terms of "test this image before deploying" but in terms of "test this source code". Which means: build an image from the Dockerfile in a clean env, test this "clean" image (and yes, push this same image to prod). -
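As a sketch, the CI steps would be something like the following (the image name, tag scheme, and test entrypoint are assumptions, not anything prescribed by the post):

```shell
# Clean build straight from the Dockerfile, no cache from earlier builds
docker build --no-cache -t myapp:"$GIT_SHA" .

# Test this exact image
docker run --rm myapp:"$GIT_SHA" ./run-tests.sh

# Push the very same image that was just tested
docker push myapp:"$GIT_SHA"
```

The point of tagging by commit SHA is that the tested artifact and the deployed artifact are provably the same image, and its provenance is a commit you can check out.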
Does your way improve dev/prod parity?
After all, while developing I was probably doing some manual "informal testing" on my dev image. Shouldn't prod image == dev image, then? Perhaps, but: I'd take a clean, reproducible build over bit-for-bit dev/prod parity any day.
- If one really insists, one could use CI builds in local development! For big images that might be painfully slow [^1], and docker gives me enough confidence that local builds ~= CI builds to skip it.
[^1] BTW, network speed is a problem for your design. For big images and users on typical internet connections, uploading just the sources and having the image built and deployed in the cloud will be faster than building a big image locally and uploading that. rsync and especially bup can optimize the upload of similar images.
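A rough sketch of the rsync optimization (host and paths are made up; this assumes consecutive image versions share most of their layers, so the saved tarballs have large identical regions):

```shell
# Export the image as a tar stream of its layers
docker save myapp:latest > myapp.tar

# rsync's delta algorithm transfers only the parts that differ
# from the copy already present on the remote side
rsync --partial --compress myapp.tar user@build-host:/images/myapp.tar
```

bup can do even better here because it content-chunks the data, so shared layers deduplicate across many stored versions, not just against the single previous upload.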