I'd like to offer a counterargument to the claim that Deis should use a Monorepo. Monorepos are not the right fit for what we're trying to do. For a system composed of numerous loosely coupled discrete services, it is far better to use multiple repositories.
To frame the overall argument, I would like to point out that each of the following Deis components is designed to run on its own outside of a greater Deis environment:
- etcd
- Minio
- RiakCS
- Postgres (though it requires an etcd, not necessarily our etcd, and also S3 storage)
- Slugrunner (which requires only S3-style storage)
- Slugbuilder (which requires only S3-style storage)
- Registry (which only optionally requires S3-style storage)
- Logger
- Router mesh (which is currently not done, but which should be runnable separately)
- Git Gateway (aka Builder) ideally only requires the Slugbuilder.
Only Controller and the Deis Client require a substantial subset of the above.
And, of course, Helm stands on its own by design, as it is not part of Deis.
A huge goal of Deis v2 is that anyone who uses Kubernetes should be able to grab any of our components and run just the ones they want, without having to buy into the great Deis PaaS model at all. Only the users that install Controller/CLI will need to install substantial portions.
A secondary goal is that we can provide alternative implementations of various pieces (proof of concept: Minio and RiakCS). And a tertiary goal is that we become purveyors of components in general, of which PaaS is only one.
This alone makes a monorepo unrealistic: putting everything in one repo will at the very least require our users to download and build all of the dependencies for everything listed above. But there are many other reasons why monorepos are not the right fit.
If the idea behind a microservice is that it is a small standalone component that has well-defined entry and exit points, and a well-defined behavioral contract, it seems odd that we'd manage a whole bunch of these in one monorepo.
- A microservice should be deployable independently, and not require the entire repo
- A microservice should be buildable in isolation, and not require building all other microservices
- A microservice should be independently versioned, relying on API contracts
- Microservices should be loosely coupled, and should interact on high-level API agreements (e.g. major version numbers). Storing them in a monorepo tends to dissuade people from developing this way.
Go does not play well with monorepos unless you also control all of the dependencies. Go has a feature-poor dependency system that lends itself to gross version conflicts, and monorepos demonstrably exacerbate this problem.
- Large Go dependency trees perform badly during build and test phases.
- Because Go has no strong versioning, large repos tend to end up with conflicting versions of the same dependency packages. As of this writing, Kubernetes has 42 such conflicts, and they have no idea what to do about them. Deis v1 has plenty as well, in part because it uses Kubernetes.
- It becomes difficult for external projects to re-use our code. Case in point: Kubernetes is a mono-repo. In order to use a few of their packages, we have to import all 108 of their dependencies.
- It is poor open source practice, as it diminishes the ability of other open source projects to use our code in their projects. (To wit, we'd foist on our users the exact problem we have with Kubernetes: vast swaths of dependencies that they don't actually care about.)
- Independent pieces of code should be allowed to use different versions of the same library. There is no reason why a cluster of etcd servers should have to use the same print library as a slug builder when the two do not interact via the printer, but via an API contract. This matters most when upstream APIs change radically, as etcd's did only a few months ago. So it should be acceptable for one repo to use v1.1.1 of `deis/pkg` while another uses v1.2.1. (See the sketch after this list.)
- Monorepos prevent independently versioned library components.
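To make the "API contract, not shared code" point concrete, here is a minimal sketch. The endpoint and payload below are hypothetical and not part of any real Deis component; the point is that because the two sides agree only on a versioned HTTP path and a JSON shape, each one is free to vendor whatever version of `deis/pkg` (or any other library) it likes.

```go
// A minimal sketch of two components that interact only through a versioned
// API contract. The endpoint and payload are hypothetical, not real Deis APIs.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// BuildRequest is the payload agreed on by the (hypothetical) v1 contract.
type BuildRequest struct {
	App    string `json:"app"`
	TarURL string `json:"tar_url"`
}

func main() {
	// The only coupling between caller and callee is this versioned path and
	// the JSON shape above; internal library versions never leak across it.
	http.HandleFunc("/v1/build", func(w http.ResponseWriter, r *http.Request) {
		var req BuildRequest
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		fmt.Fprintf(w, "building %s from %s\n", req.App, req.TarURL)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```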
Building and testing is unequivocally more complicated in a monorepo. Proponents of monorepos almost always claim that this is "easily circumvented" by just writing "more advanced builds". But this argument is making a weird claim: we should knowingly take on more work just to make the monorepo idea function. In practice large projects like this end up with custom build tools because regular tools cannot do the job.
- Monorepos have to be tested in one go, which means one small commit requires building and testing the entire thing. The workaround is... more one-off tooling.
- Test suites take a long time. The workaround... one-off tooling.
- Small breaks in the build system halt large portions of the team. The workaround... running old code on dev boxes so that we don't have to deal with a broken HEAD.
- Builds take a long time, even if you're only working on small changes (again, ref: Kubernetes). The workaround... one-off tooling.
- A particularly bad coding practice tends to happen with monorepos: different things get mistakenly interdependent or overly tightly coupled. Shared code is good when the code that is shared is library code. Shared code is bad when it's unintended cross-referencing. That happens frequently in a monorepo.
Small projects with distinct and discrete build cycles are good. They take few resources, they are easier to debug, and one project's build failure does not halt development for everything. Small repos tend not to require frequent and gigantic test runs that consume large amounts of resources.
Subversion was a great VCS for monorepos, but Git is not. And GitHub revolves around Git, which means that our workflow regarding issues, reviews, milestones, and releases all boil down to Git.
- Monorepos require frequent rebases and often track too many branches to keep straight
- Monorepos accelerate the pace at which Git repos grow. As a result, network operations get slower.
- Tags are repo-wide, which means components cannot be independently versioned or labeled (see the section on experimental code below)
- It falls to Git commit messages to explain exactly what changed, which pushes more of the burden onto developers and makes it harder to make programmatic decisions
- The "major rewrites across the codebase" argument for monorepos has a problematic side: Large PRs with major refactors are reviewed by people who may understand only part of what they are reviewing. And the process of making sure that the reviewers are qualified to review all of a large patch is undefined.
- Monorepos are complex and often organized ad hoc. It can become very difficult for a newcomer to navigate
- All issues get jumbled into one issue queue, and important things get lost or ignored
- Community cannot easily use just one or two of our things. They have to take the whole thing.
- Social conventions around code reviews become complicated: Is this person really qualified to review Builder if they've only ever worked on controller?
- Because of the tight coupling of versions, a small but critical update to one component requires versioning, releasing, and deploying the entire thing.
External libraries like the `deis/pkg` library are a good thing, and they benefit from being outside of a monorepo. Here are several reasons why:
- Clean separation: Libraries built to be stand-alone libraries are cleanly separated, and don't accidentally build up dependencies on other things. An example from Kubernetes: the package that handles parsing label queries also mysteriously defines a set of CLI `flag` types (like `-l`), which is simply bizarre. But it's a result of confusion over what is library-like and what is part of a particular program. (See the sketch after this list.)
- Counterintuitively, monorepos tend to produce repetitive non-library code. Deis v1 had at least 6 different Go functions spread throughout the repo to get a value from etcd. Stand-alone shared libraries encourage people to put common code in the same place.
- Stand-alone libraries can be re-used in projects outside of the main project. For example, Helm uses the `deis/pkg` library even though it was originally conceived as a standalone tool that is not part of Deis.
- Stand-alone libraries are easier for the community to use.
- Stand-alone libraries tend to control dependency bloat more effectively. Again, a huge issue with Kubernetes is that to use it as a library, you get 108 dependencies, most of which are not directly used.
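Here is the sketch referenced above. It is a minimal, hypothetical illustration (the function and flag wiring are invented, not Kubernetes code) of keeping parsing logic in a stand-alone library-style function while the `-l` flag stays in the program that needs it.

```go
// A minimal sketch of separating library code from program code: the parsing
// logic knows nothing about flags or CLIs, and the -l flag belongs to main.
package main

import (
	"flag"
	"fmt"
	"strings"
)

// parseSelector stands in for a stand-alone label-parsing library function.
// It has no knowledge of flags, terminals, or any particular program.
func parseSelector(s string) (map[string]string, error) {
	labels := map[string]string{}
	for _, pair := range strings.Split(s, ",") {
		kv := strings.SplitN(pair, "=", 2)
		if len(kv) != 2 {
			return nil, fmt.Errorf("invalid selector %q", pair)
		}
		labels[kv[0]] = kv[1]
	}
	return labels, nil
}

func main() {
	// The -l flag is a concern of this particular program, not of the library.
	selector := flag.String("l", "", "label selector, e.g. app=router,tier=edge")
	flag.Parse()

	labels, err := parseSelector(*selector)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("selected labels:", labels)
}
```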
By SemVer, I'm referring to the guidelines for Semantic Versioning (v2).
One thing that very clearly emerged with the Deis v1 project is that in a monorepo, stable components are freely and indistinguishably linked with unstable components. Furthermore, stale components often persist for long periods of time, but cause confusion because nobody knows why they are there. (Example: Redis in Deis v1, which is completely unused, and may actually even break some things if enabled.)
- Monorepos mix experimental code with stable code. Swarm, K8s, and Mesos are all experimental code in the main repo of Deis v1
- As a direct result, build systems are often much more complex, as we require lots of switches to enable or disable based on stability markers
- SemVer does not apply cleanly across a monorepo. There is no way to version a new, unstable component separately from a stable one. As a result, early changes to an experimental component lock the project into backward-compatibility issues, because the experimental component cannot be reworked without breaking the version contract that SemVer requires (for example, reworking an experimental router's API would force a major version bump of the entire platform).
- Old projects languish, but cannot be easily deleted because doing so requires a platform version bump (a la semver). Deis Redis was an example.
- Outsiders (and often even insiders) have no way of distinguishing experimental features from stable features. As a result, unstable experimental code often gets used when it should not.
- The general risk-aversion principle tends to prevent experimentation because any experiment either has to live in isolation on a branch (which people rarely use) or it puts the entire project at risk because of its instability.
- "All components always share the same dependency versions." This is an arbitrary constraint that often leads to problems.
- "One commit can touch all components." Why is this a good thing? Why is it better than multiple commits? More often than not, this is a dangerous practice.
- "Search all code at once." This is actually easy for Go. Just search at
github.com/deis
instead ofgithub.com/deis/deis
. It is not a goal of product design because it is easily solved with existing tooling. - "See everything at a glance." In fact, the distributed repo goal is to not see everything at a glance. It's to separate logically discrete things into their own bins.
Unfailingly, the number one issue I hear about Helm and the various Deis components is that there is this huge dependency tree. There is one and only one reason for this huge dependency tree: Kubernetes.
Kubernetes is a monolithic repository. Libraries, servers, clients, and support code are all bundled into one gigantic repository.
- Kubernetes has a tremendously complex build and deployment system. The testing system is even more complex. And these are all custom-crafted special-purpose tools (bash scripts, salt, etc.) designed to deal with the complexity of building a monorepo.
- At any given time, there is a good chance that HEAD will not build. This means that even updating a single tool like `kubectl` may be unachievable on a daily basis even if its (direct) code has not changed.
- There is no way to cleanly use Kubernetes libraries without also sucking in all of the rest of the code
- For simple library usage (e.g. importing just the `api/v1` package), the dependency graph will add at least two seconds to build time.
- Kubernetes has many places where libraries are mixed with non-library code. As an exercise, trace the YAML parsing routines from kubectl, and you will traverse vast portions of the API across many packages that don't seem to have any bearing on parsing YAML.
- The Kubernetes codebase is very difficult to grok. In spite of the fact that it was a well-designed system based on another previously implemented system (Borg), the codebase is vast and complex.
- The Kubernetes issue queue is tremendously chatty, and results in many, many duplicated issues, a lot of effort cross-linking, and many issues that are simply lost because project leads can't keep track.
- Huge amounts of (human) process have grown up around k8s in order to stay on top of issues, milestones, documentation, and so on.
- Kubernetes plugins also exist inside of the Kubernetes tree (because they have to be built at compile time), but not all of these are production ready. This is confusing, because there is no marker to tell what is experimental and what is stable (with the one exception of the `apis/experimental` package).
Had Kubernetes been organized as smaller, self-contained projects, it would not have had these issues. Smaller teams could have focused on specific things. We wouldn't end up with string-parsing libraries that declare CLI flags, or YAML parsers that pull in the runtime package. External projects (like ours) could have used just a few small pieces without having to take the entire thing. And more than anything, social pressure would encourage developers not to tightly couple all of the pieces.
Deis used to have a monorepo, but we redesigned to use smaller repositories. Our design was intentional. There are two classes of things: libraries and applications. Deis Workflow is a snowflake in the sense that it has dependencies on many applications, but code-wise it follows the same patterns as all the others.
- Libraries provide common functionality, but are not stand-alone. `deis/pkg` is currently the only shared library. It is very small, but we figure that as it grows it can be broken into stand-alone libraries where that makes sense. Libraries have their own unit tests.
- An application is a runnable unit, not a library. Applications should never require the code of other applications; they can only require libraries. (It was a mistake that the `deis/deis` code was ever imported into `deis/minio`. That should never have happened.) Each application comes with its own functional and unit tests. (See the sketch after this list.)
- Deis Workflow is a special project that also contains the documentation for the PaaS, as well as integration tests for other components. It is "special" because it makes use of the other projects, while all of the other projects should be able to stand alone with no other required Deis components.
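As a rough sketch of the application rule, here is a hypothetical minimal service. The commented import paths are illustrative only (the `deis/pkg` subpackage shown is not a real one); the point is that an application reaches shared behavior only through the library, never by importing another application.

```go
// A minimal sketch of the application/library rule: an application may depend
// on the shared library, but never on another application such as
// deis/builder or deis/minio.
package main

import (
	"log"
	"net/http"
	// pkglog "github.com/deis/pkg/log" // shared library: allowed (hypothetical path)
	// "github.com/deis/builder/..."    // another application: never allowed
)

// healthz is the application's own runnable behavior.
func healthz(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusOK)
	w.Write([]byte("ok"))
}

func main() {
	http.HandleFunc("/healthz", healthz)
	log.Println("listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```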
I am not suggesting that monorepos are bad in all situations. There are two places where a monorepo makes sense:
- A project that contains only tightly-coupled components and is not designed to be used by external projects (100% inward facing).
- Internal enterprise software (100% private).
Deis is no longer either of these things. Rolling it back into a monorepo will be a long-term negative move.
To respond to your first list of objections to a monorepo:
> A microservice should be deployable independently, and not require the entire repo
Do you mean that - when creating the build environment for your microservice - it's wrong to get more files than you need? Or that it's not wrong but just inconvenient? In either case, I would ask: why? What's wrong about getting too many files? Or: what's inconvenient about getting too many files? And if you find it inconvenient, then I would ask: how does this inconvenience weigh against the inconvenience of having to track dependencies between several git repos? Or the inconvenience of reviewing changes across several pull requests?
> A microservice should be buildable in isolation, and not require building all other microservices
If you have the code that the microservice needs, then you can build it. The fact that you also have other code in your repo (unrelated to the microservice) does not change this (you don't have to build these parts, you just need a makefile that builds the microservice).
> A microservice should be independently versioned, relying on API contracts
You can do that; just add a VERSION.txt file to the microservice directory (with a version number that you can bump).
> Microservices should be loosely coupled, and should interact on high-level API agreements (e.g. major version numbers). Storing them in a monorepo tends to dissuade people from developing this way.
IMO if devs are able to stop at repo-boundaries, then they should be able to stop at module-directory boundaries. It's essential that devs understand the concept of modules. If they don't, then this needs to be addressed and not worked around by using many repos.