Last active October 28, 2023 10:19
The citations game: Wolverton Ratios

Rubric: Software Engineering : Factual Claims : Defect Cost Increase : Wolverton Ratios


See previous note on the IBM Systems Sciences Institute

In absolute numbers, the Wolverton ratios are as follows: 139:455:977:7136:14102, the claimed dollar costs of fixing an "average" defect at each phase. (Itself an absurd claim; see Leprechauns. I should perhaps write more on that.)

Normalizing to "it costs one unit to fix at the requirements stage", these work out to 1:3:7:50:100 (requirements, design, coding, testing, maintenance).
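For the record, here is the arithmetic, as a minimal Python sketch (the dollar figures are the ones quoted above; phase labels are as given in parentheses). Note that the exact quotients land closer to 1:3:7:51:101, so the canonical 1:3:7:50:100 already involves some rounding.

```python
# Normalize the claimed per-phase dollar costs to the requirements-stage cost.
costs = {
    "requirements": 139,
    "design": 455,
    "coding": 977,
    "testing": 7136,
    "maintenance": 14102,
}

base = costs["requirements"]
for phase, cost in costs.items():
    # prints: 1.0, 3.3, 7.0, 51.3, 101.5
    print(f"{phase}: {cost / base:.1f}")
```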

It pops up in many books and articles and in various forms, for instance a very 1990s-looking Excel 3D bar chart.

The Big Puzzle is that a bunch of later articles and books attribute these ratios to a paper by Boehm and Basili, "Software Defect Reduction Top 10 List", which, it is easy to verify, does not contain these numbers. (It's a whole two pages long.)

Ergo, these later authors who are citing Boehm and Basili actually HAVE NOT READ that paper and have just copied and pasted a citation that flattered their existing biases.

What is investigated here is "what exactly happened", a forensic investigation. The crime is how little attention we are paying, as a profession, to the question "what process of empirical investigation generated the data we are looking at, and how reliable was that process".

Listing the works chronologically kind of spoils it as storytelling, since the investigation actually happened in reverse: coming across the claim in relatively recent articles, asking "where did this come from" - and asking it again, and again, and again. This document is a reference, not a write-up.

PDFs of "Top 10 List" paper:


1977, Wolverton, RW

"Tutorial, Quantitative Management: Software Cost Estimating"

This was a tutorial given at the inaugural COMPSAC conference. A valued friend with access to a university library holding a copy of the book was kind enough to send me a photo of the relevant page:

COMPSAC '77 proceedings, p236

Apparently the origin of the "data" is some software portion of the Safeguard anti-missile program:

The text credits a "W. E. Stephenson" as having collected data on Safeguard, but the chart itself cites a "R. O. Lewis" as the source of the error cost data specifically. These should therefore more properly be called the "Lewis ratios" - Wolverton's name was the one I found in an initial and slightly less tenacious investigation.

We know this is the source because it next appears cited by:

1981, Radatz, Jane

"Analysis of IV&V data"

Emphasizing the importance of early detection, Wolverton (Reference 30) cites figures stating that a design change costs, on the average, $977 to correct during code and checkout and $7136 during test and integration (1975 figures).

p87: $195:$489:$977:$7136 or 1:2.5:5:36.5

p89 extrapolates this to add a phase @ $14655

The exact same method of calculating an ROI for investment in IV&V that we'll see later on in NASA docs (which I've blasted as being "Flaubert math").

1981, Boehm

The name "Safeguard" appears (without citation or explanation, it seems) in Boehm's "Software Engineering Economics", providing some of the data points on this chart:

Boehm's chart

1992, R.O. Lewis

The Safeguard data and some of the process for collecting it is described (over 15 years later!) in Robert O. Lewis' book "Independent Verification and Validation: A Life Cycle Engineering Process for Quality Software"

Google Books Preview of p275

2003, Cigital

Case Study: Finding Defects Earlier Yields Enormous Savings

Seems to be "patient 0" for the numbers and the Excel chart, and for attributing the costs to Boehm & Basili (or possibly Capers Jones, or Humphrey).

The attribution to Basili & Boehm is obviously bogus.

The attribution to Jones is bogus when you know the slightest thing about Jones, in particular that he's virulently opposed to cost per defect metrics.

The attribution to Humphrey is quickly disproven by looking inside the book (which admittedly would have required buying the book back then; today I can use Google Books).

The numbers are off from the Radatz numbers, but clearly no coincidence; I suspect they were fudged a bit to avoid the appearance of "round" ratios.

2004, Drabick, Roger

"Best Practices for the Formal Software Testing Process"

Page 21/22: "data from a study by J.W. Radatz", this is how I found Radatz in the first place

The numbers are a bit off, 194:489:997:7136, possibly honest transcription errors.

2004, Gage et al.

"We did nothing wrong"

Influential article? Redrawn version of the Cigital chart

2006, Wagner, Stefan

A negative result: comprehensive survey of cost factors in the literature, no mention of the Wolverton ratios

2006-12-07, Everett, Gerald D.

Software Testing: Testing Across the Entire Software Development Life Cycle

p14 - ugly curve "The numbers first published in 1996 were revalidated in 2001"

2008-06, Everett, Gerald D.

No chart but a table of the "cost factors"; same Everett as the 2006 book

2008-08-01, Dallas, Andrew

References Capers Jones "Software Assessments, Benchmarks, and Best Practices", Humphrey, "Introduction to the Personal Software Process" in addition to the usual Top 10 - a semi-honest, "covering all bases" way of citing Cigital indirectly

2008-04, Strickler, John

Blog, crediting Capers Jones "Software Assessments, Benchmarks, and Best Practices"

2008-10, Golze, Andreas

"Reduce Project risk through early defect detection", conference presentation

Excel-style chart

2009, KPMG

More modern-looking chart

2010, Pressman, R.

Figure 14.2, based on data collected by Boehm and Basili [Boe01b] and illustrated by Cigital Inc. [Cig07], illustrates this phenomenon. The industry average cost to correct a defect during code generation is approximately $977 per error.

This is the Pressman of the "Pressman ratios", now in its seventh edition. Boe01b is the "Top 10" article.

It is baffling that the editorial process for possibly the foremost book in the field let this through for the 7th edition. It is apparently gone from the 8th edition, without a retraction as far as I can tell.

2011, Shamieh, Cathleen

p 49. "for dummies" means we round them out…

2011-06, Akella et al.

2012-07, Typemock

This is notable for corrupting the $977 into $937. So if someone is quoting "$937 during coding" at you, they're most likely referencing this Typemock infographic.

2012-05-12, Oehl, Catherine

slide 11, the Excel-style chart

2015-02, Al-lawatiya et al.

2016, Menzies et al.

"Are Delayed Issues Harder to Resolve?"

Solid negative result, still largely ignored (*)

We found no evidence for the delayed issue effect; i.e. the effort to resolve issues in a later phase was not consistently or substantially greater than when issues were resolved soon after their introduction.

(*) detailed citation analysis needed, but early results not hopeful, see below

2017-06-08, Routh, Jim

"Source: me."

2017-07-14, Ivers, Jim

Haha. Jim 2 quotes Jim 1 and adds "If anyone is credible on this, he is… we didn't have empirical data, but now we do."

2017-11-3, McMorrow, Dermot

Excel style chart

2017-11-23, Madou, Matias

"Actual data from Routh, Aetna"

2018-01, Hovorushchenko, Tetiana

Cites the "for dummies" book

2018, Potencier, F.

PHP Code Performance Explained (book)

Classic example of quoting the Typemock $937 figure but attributing it to Boehm and Basili.

2019, Agrawal A.

On the Nature of Software Engineering Data (Implications of ε-Dominance in Software Engineering)

It is also useful to be able to predict issue lifetime specifically when the issue is created, since it is found earlier that delaying to resolve issues can become harder and costlier [Men17].

So here we have a PhD student working under the direction of the author of the one negative result, claiming it as his source for the positive version! I despair.

2019 Marques Pereira, H

"Automatização de testes para plataformas Oracle - Xstore"

Through Figure 1.1, it is possible to verify that the error correction cost grows quite sharply as a project progresses through its various phases. With this in mind, effective and efficient quality control is essential from the earliest stages, and fundamental in the phase before the start of production. [Translated from Portuguese by Google Translate]

Yet another. Feels awful to work in an industry where someone can disprove a result yet be cited a few years later as having proved it.

(Unreliable source: undated, no name)


digitalmacgyver commented Oct 26, 2023

Thank you for this writeup - it was a fun read. I thought you might be interested to hear about a similar oft-repeated software engineering figure which is nonsense when traced to its roots.

Sometimes people will claim an exponentially growing cost to fix defects, based on the development stage in which they are detected:

  • Design: 1x
  • Coding: 5x
  • Integration Testing: 10x
  • Acceptance Testing: 15x
  • Post-release: 30x

The origin of these figures is table 5-1 on page 5-4 of the 2002 NIST Planning Report 02-3, "The Economic Impacts of Inadequate Infrastructure for Software Testing". It is an example table with completely made-up data, included to illustrate how, later in the report, costs will be attributed.

This made-up data was cited in a number of places, including in this paper from IBM / Rational Software, which uses it in its introduction as a source, apparently without understanding that the numbers were not supposed to be indicative of reality.

It shows up frequently in various blog posts etc., often without citation: 1, 2, 3


@digitalmacgyver Yes, the NIST "study" is a train wreck. The above are not far from the Pressman ratios covered here and the comments had some discussion of the NIST document as well. I've also exhumed my own critique of the NIST report.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment