Spurred by recent events (https://news.ycombinator.com/item?id=8244700), this is a quick set of jotted-down thoughts about the state of "Semantic" Versioning, and why we should be fighting the good fight against it.
For a long time in the history of software, version numbers indicated the relative progress and change in a given piece of software. A major release (1.x.x) was major, a minor release (x.1.x) was minor, and a patch release was just a small patch. You could evaluate a given piece of software by name + version, and get a feeling for how far away version 2.0.1 was from version 2.8.0.
But Semantic Versioning (henceforth, SemVer), as specified at http://semver.org/, changes this to prioritize a mechanistic understanding of a codebase over a human one. Any "breaking" change to the software must be accompanied with a new major version number. It's alright for robots, but bad for us.
SemVer tries to compress a huge amount of information — the nature of the change, the percentage of users that will be affected by the change, the severity of the change (Is it easy to fix my code? Or do I have to rewrite everything?) — into a single number. And unsurprisingly, it's impossible for that single number to contain enough meaningful information.
If your package has a minor change in behavior that will "break" for 1% of your users, is that a breaking change? Does the answer change if the number of affected users is 10%? Or 20%? How about if, instead, only a small number of users will have to change their code, but the change will be difficult for them (a common situation when deprecating unpopular features)? Semantic versioning treats all of these scenarios in the same way, even though in a perfect world the consumers of your codebase should be reacting to them in quite different ways.
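To put it concretely: from the tooling's point of view, every one of those scenarios collapses into the same yes-or-no answer. Here's a minimal sketch using the `semver` npm package (the same range logic npm itself uses); the version numbers are made up for illustration.

```ts
import * as semver from "semver"; // npm install semver

// A caret range accepts any minor or patch bump, no matter how disruptive
// the behavior change behind it actually is...
console.log(semver.satisfies("1.5.0", "^1.2.0")); // true, even if it breaks 20% of users

// ...and rejects any major bump, even one whose "breaking" change is a
// one-line fix that wouldn't have touched you.
console.log(semver.satisfies("2.0.0", "^1.2.0")); // false

// The "distance" between two versions is reduced to a single label, with no
// notion of how many users are affected or how hard the migration is.
console.log(semver.diff("1.4.0", "1.5.0")); // "minor"
console.log(semver.diff("1.4.0", "2.0.0")); // "major"
```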
Breaking changes are no fun, and we should strive to avoid them when possible. To the extent that SemVer encourages us to avoid changing our public API, it's all for the better. But to the extent that SemVer encourages us to pretend that minor changes in behavior aren't happening all the time, and that it's safe to blindly update packages, it needs to be re-evaluated.
Some pieces of software are like icebergs: a small surface area that's visible, and a mountain of private code hidden beneath. For those types of packages, something like SemVer can be helpful. But much of the code on the web, and in repositories like npm, isn't code like that at all — there's a lot of surface area, and minor changes happen frequently.
Ultimately, SemVer is a false promise that appeals to many developers — the promise of pain-free, don't-have-to-think-about-it updates to dependencies. But it simply isn't true. Node doesn't follow SemVer, Rails doesn't do it, Python doesn't do it, Ruby doesn't do it, jQuery doesn't (really) do it, even npm doesn't follow SemVer. There's a distinction that can be drawn here between large packages and tiny ones — but that only goes to show how inappropriate it is for a single number to "define" the compatibility of any large body of code. If you've ever had trouble reconciling your npm dependencies, then you know that it's a false promise. If you've ever depended on a package that attempted to do SemVer, you've missed out on getting updates that probably would have been lovely to get, because of a minor change in behavior that almost certainly wouldn't have affected you.
If at this point you're hopping on one foot and saying — wait a minute, Node is 0.x.x — SemVer allows pre-1.0 packages to change anything at any time! You're right! And you're also missing the forest for the trees! Keeping a system that's in heavy production use at pre-1.0 levels for many years is effectively the same thing as not using SemVer in the first place.
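(For the curious, npm's own range syntax bakes that pre-1.0 rule in: below 1.0.0, a caret range barely ranges at all. A quick sketch with the `semver` package, using made-up versions:)

```ts
import * as semver from "semver";

// At 1.0.0 and above, a caret range tolerates minor and patch bumps:
console.log(semver.satisfies("1.9.0", "^1.2.0")); // true

// Below 1.0.0, it only tolerates patch bumps, because any 0.x release is
// allowed to break anything:
console.log(semver.satisfies("0.2.9", "^0.2.3")); // true
console.log(semver.satisfies("0.3.0", "^0.2.3")); // false
```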
The responsible way to upgrade isn't to blindly pull in dependencies and assume that all is well just because a version number says so — the responsible way is to set aside five or ten minutes, every once in a while, to go through and update your dependencies, and make any minor changes that need to be made at that time. If an important security fix happens in a version that also contains a breaking change for your app — you still need to adjust your app to get the fix, right?
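For what it's worth, that workflow doesn't need anything fancier than `npm outdated`. Here's a minimal sketch of it as a Node script, assuming a standard npm project run from its root, where `npm test` runs your tests; the upgrade commands it prints are suggestions, not something it executes for you.

```ts
// Sketch of the "ten minutes, every once in a while" routine: list what's
// outdated, then upgrade one dependency at a time, read its changelog, and
// run your tests before moving on.
import { execSync } from "node:child_process";

let raw = "{}";
try {
  raw = execSync("npm outdated --json", { encoding: "utf8" });
} catch (err: any) {
  // npm exits non-zero when anything is outdated; the JSON is still on stdout.
  raw = err.stdout?.toString() ?? "{}";
}

const outdated: Record<string, { current: string; latest: string }> = JSON.parse(raw || "{}");
for (const [name, info] of Object.entries(outdated)) {
  console.log(`${name}: ${info.current} -> ${info.latest}`);
  console.log(`  read the changelog, then: npm install ${name}@${info.latest} && npm test`);
}
```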
SemVer is woefully inadequate as a scheme that determines compatibility between two pieces of code — even a textual changelog is better. Perhaps a better automated compatibility scheme is possible. One based on matching type signatures against a public API, or comparing the runs of a project's public test suite — imagine a package manager that ran the test suite of the version you're currently using against the code of the version you'd like to upgrade to, and told you exactly what wasn't going to work. But SemVer isn't that. SemVer is pretty close to the most reductive compatibility check you would be able to dream up if you tried.
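I'm not claiming any package manager does this today, but here's roughly what a crude version of the idea could look like as a script you run yourself. It's a sketch, not a real tool: it leans on your own `npm test` command rather than the dependency's published test suite, and the package name and version at the bottom are placeholders.

```ts
// Hypothetical sketch: install a candidate version of a dependency without
// touching package.json, run the tests you already have, and report whether
// anything breaks. "npm install --no-save" and "npm test" are standard npm;
// everything else here is made up for illustration.
import { execSync } from "node:child_process";

function tryUpgrade(pkg: string, candidate: string): boolean {
  execSync(`npm install ${pkg}@${candidate} --no-save`, { stdio: "inherit" });
  try {
    execSync("npm test", { stdio: "inherit" });
    console.log(`${pkg}@${candidate}: your current test suite still passes.`);
    return true;
  } catch {
    console.log(`${pkg}@${candidate}: something broke; the failures above tell you exactly what.`);
    return false;
  } finally {
    // Restore whatever package.json and the lockfile actually specify.
    execSync("npm install", { stdio: "inherit" });
  }
}

// Placeholder package and version; substitute a real dependency of yours.
tryUpgrade("some-dependency", "2.0.0");
```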
If you pretend that SemVer is going to save you from ever having to deal with a breaking change — you're going to be disappointed. It's better to keep version numbers that reflect the real state and progress of a project, use descriptive changelogs to mark and annotate changes in behavior as they occur, avoid creating breaking changes in the first place whenever possible, and responsibly update your dependencies instead of blindly doing so.
Basically, Romantic Versioning, not Semantic Versioning.
All that said, okay, okay, fine — Underscore 1.7.0 can be Underscore 2.0.0. Uncle.
(typed in haste, excuse any grammar-os, will correct later)
@pmonks thanks for the response. Rich Hickey's presentation is one of his most controversial, and I even find some of it confusing. Often, the real world and other dependents get in the way of the "accretion without breakage" philosophy. Rich doesn't address that directly in his presentation, but he does mention other examples with solutions.
For example, if your software is not used by a third party (e.g. a client hitting an endpoint), you don't have to apply any of Rich's principles, since you're free to make changes in a controlled environment. However, if it is, it depends on how it's used: are you running a public web server, or providing a public library? You can still apply the principles in both cases, but there are some situations where deletion is the only option, as a last resort.
Software should be designed to support these changes by not coupling too tightly. If you hit `POST /articles` with a new article and a new field is required, a new endpoint (`POST /articles2`) could be created while sharing logic in the presentation layer (`GET /articles/...`), as sketched below.
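A minimal sketch of that shape, assuming Express; the route names, the required `author` field, and the handler logic are all illustrative:

```ts
import express from "express";

const app = express();
app.use(express.json());

// Shared logic that both versions of the endpoint delegate to.
function createArticle(fields: { title: string; author?: string }) {
  return { id: Date.now(), ...fields };
}

// Original contract: "author" is optional, so existing clients keep working.
app.post("/articles", (req, res) => {
  res.status(201).json(createArticle({ title: req.body.title }));
});

// New contract with the extra required field, exposed as a new endpoint
// instead of breaking the old one. ("/articles2" mirrors the naming above;
// "/v2/articles" is the more common convention.)
app.post("/articles2", (req, res) => {
  if (!req.body.author) {
    return res.status(400).json({ error: "author is required" });
  }
  res.status(201).json(createArticle({ title: req.body.title, author: req.body.author }));
});

app.listen(3000);
```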
However, if an external force prohibits you from maintaining your stack, deprecation arrives. In serious cases, deletion arrives (e.g. you were storing something but now you can't). In a library, when a breaking change is necessary, deprecation or even a new namespace can be a solution. When the whole library needs to be refactored, a new library may be the answer.

In Datomic, for example, you have excision, which is only used when you absolutely have to delete data (e.g. the GDPR's "right to be forgotten"). But these measures are often inefficient, not only in performance but in actually deleting the data. If you delete the data but still have a backup, you've failed to honor the user's request to delete it. One solution is crypto-shredding, where the data is encrypted and the decryption key is deleted, so the data is locked away forever.
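Here's a minimal sketch of crypto-shredding with Node's built-in `crypto` module; the in-memory key map stands in for a real key-management system, and everything else is illustrative:

```ts
// Encrypt each user's record with a per-user key, store only ciphertext, and
// "delete" the data later by destroying the key. Every backup of the
// ciphertext becomes unreadable the moment the key is gone.
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

const keys = new Map<string, Buffer>(); // stand-in for a real key store

function encryptForUser(userId: string, plaintext: string) {
  const key = randomBytes(32);
  keys.set(userId, key);
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decryptForUser(userId: string, record: { iv: Buffer; ciphertext: Buffer; tag: Buffer }) {
  const key = keys.get(userId);
  if (!key) throw new Error("key shredded: data is unrecoverable");
  const decipher = createDecipheriv("aes-256-gcm", key, record.iv);
  decipher.setAuthTag(record.tag);
  return Buffer.concat([decipher.update(record.ciphertext), decipher.final()]).toString("utf8");
}

// "Right to be forgotten": drop the key, and the ciphertext is locked forever.
function shred(userId: string) {
  keys.delete(userId);
}
```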
Semantic versioning is popular since it models the real world, but the cost to consumers can be high. Like the author, I don't think it's a great way to identify classes of change. As Rich said, with a patch or a minor bump you don't care; with a major bump, you're screwed. What do versions 1.5.0 and 2.0.0 convey? Is it a small change or a complete overhaul? Will it take Sam 10 seconds to fix and Bob a week? Changelogs answer that, but the version itself doesn't say much. Accretion over breakage is not immune to the real world, but it is one solution to the problem. It has worked in many places (HTML, Unix, and Java, to name a few), though people have mixed feelings about it, and it requires a lot of discipline.