Your cyclomatic complexity is so 1,976


In 1976, Thomas J. McCabe, Sr. developed metrics to determine the complexity of the code we write. One year later, Maurice Howard Halstead formulated the so-called Halstead metrics to achieve something similar. Decades later, we still rely on those abstract numbers to describe the complexity of our code. But do these naked numbers really tell us the truth about our code? Do they really give us the best advice on how to manage our code over its lifetime?
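
Cyclomatic complexity itself is simple to state: it counts the independent paths through a piece of code, which in practice means counting decision points and adding one. A minimal sketch of the idea (my own illustration, not McCabe's original tooling) in Python:

```python
import ast

# Rough approximation of McCabe's cyclomatic complexity for Python code:
# count branching constructs in the AST, then add one for the straight-line
# path. Real tools also weight boolean operators, comprehensions, etc.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    branches = sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))
    return branches + 1

example = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
"""
# Two `if` nodes (the elif is a nested If in the AST) + 1 = 3
print(cyclomatic_complexity(example))  # → 3
```

A function with no branches scores 1; every additional `if`, loop, or exception handler adds a path that, in theory, needs its own test case.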

I believe not. Thanks to our modern, sophisticated toolchain, we have many metrics at hand that Mr. McCabe & Mr. Halstead could only dream of. In this talk, I'll explain how combining them can give us much better & "more human" advice about the flaws of our codebase - not as abstract numbers, but as concrete pointers to the parts of our code that really need our love & attention.


A common "how to" for measuring the complexity of your code looks like this (see *1): "Install it - create the report - look at the report - and now go make your code better!". This is not far from the famous "How to draw an owl" (see *2) & only slightly worse than trying to understand this complex topic from Wikipedia and the related formulas (see *3).

Aside from understanding these abstract numbers, we need to ask ourselves if looking at those metrics in isolation is still the way to go more than 30 years after their invention. Our ecosystem has grown, and so has the available data about our code.

Besides explaining the basics of code complexity metrics, I'd like to evaluate & explore these questions:

  • Are those metrics still relevant today?
  • Is it really worth spending time looking at those numbers as isolated metrics?
  • Is this really the best way of signaling to a developer that something seems to be wrong with their code?
  • What other metrics can be included to make them useful & tell us something about our code & coding behaviour?

As an example, you might have a very complex bit of code (for whatever reason) that the metrics keep pushing you to refactor. However, there has been no other clear reason to touch this code for ages - it runs without problems, there are tests for it, the developer who initially wrote it still regularly commits to the project, etc.

Should this piece of code really be at the top of your refactoring list? It seems not.

I'd like to explain how we can get a clearer picture of our codebase by identifying the parts that really need some love (hotspots), or that are easy targets for refactoring (low-hanging fruit). Further, I'd like to introduce an automated way to identify these, based on metrics like:

  • The git commit history (number of people working on that code)
  • Level of detail in commit messages
  • Number of bugs measured over time (with tools like Sentry or Bugsnag)
  • Code coverage metrics
  • Lines of code & comments added to files over time
  • Linting errors introduced/solved over time
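
To make the idea concrete, here is a hypothetical sketch of how such signals could be combined into a single "hotspot" score. The field names, sample values, and weighting are my own illustrative assumptions, not the output of any real tool:

```python
from dataclasses import dataclass

# Illustrative sketch: combine per-file signals into one hotspot score.
# In practice, commits/authors would come from the git history
# (e.g. `git log` / `git shortlog`), complexity from a linter, and
# coverage from the test runner.
@dataclass
class FileStats:
    path: str
    commits: int      # how often the file changed
    authors: int      # how many people touched it
    complexity: int   # e.g. cyclomatic complexity
    coverage: float   # test coverage ratio, 0.0..1.0

def hotspot_score(s: FileStats) -> float:
    # Complex code that changes often, is touched by many people, and is
    # poorly covered by tests ranks highest. Complex-but-stable,
    # well-tested code ranks low - it is not a refactoring priority.
    churn = s.commits * s.authors
    risk = s.complexity * (1.0 - s.coverage)
    return churn * risk

stats = [
    FileStats("billing.py", commits=40, authors=5, complexity=25, coverage=0.2),
    FileStats("legacy.py", commits=3, authors=1, complexity=30, coverage=0.9),
]
ranked = sorted(stats, key=hotspot_score, reverse=True)
print([s.path for s in ranked])  # → ['billing.py', 'legacy.py']
```

Note how `legacy.py` - more complex than `billing.py`, but stable and well tested - drops to the bottom of the list, matching the intuition from the example above.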


Pitch (Anything else you want us to know about you or your talk?)

I was once a number fanatic - a developer who introduced each and every metric he could get his hands on into the CI system. I didn't always fully understand what the numbers meant, but I still drew (wrong) conclusions. I think many developers do the same - introducing code complexity metrics, but then lowering their thresholds so that they don't raise any more warnings. In other words, introducing tools just for the sake of producing metrics & more numbers.

I got rid of my number obsession. I reflected upon my behaviour & wrong conclusions, and I think others can benefit from what I learned. There are new ways to look at your codebase - helpful, practical ways, not ones full of abstract numbers & metrics.

What will the audience learn from it?

  • History of code metrics
  • What these metrics are trying to tell us
  • How we often fail to use them
  • Why I believe we need to do better to get meaningful data out of metric tools
  • Evaluation of what data we actually have at hand (often without knowing about it)
  • How tools can & should be combined to give us concrete advice
  • How those tools should output their conclusions to provide meaningful advice

Outline

  • McCabe & Halstead, pioneers of code metrics
  • 30 years after: Have we learned a thing?
  • Cyclomatic complexity - the programmer's worst enemy, explained with code
  • "The road to hell is paved with good intentions", or "How we fail to interpret our metrics"
  • What else is there? Metrics we have, but don't know about
  • Making sense of it all: Combining metrics to change the way we look at our code's flaws
  • A better way to present the analysis to developers
  • Demo: How a prototype of such a tool might look

Who is this presentation for?

Web developers who are like I once was: obsessed with numbers, obsessed with metrics, but without proper ways to interpret them. Web developers who work in teams and need to maintain a large codebase over years.
