@Morendil
Last active June 21, 2023 18:14
The IBM Systems Science Institute

Rubric: Software Engineering : Factual Claims : Defect Cost Increase : Pressman Ratios

Context

Background: I have been researching the quantity and quality of empirical evidence underlying claims in software engineering. What do we know, and how well-established is that? See in particular https://leanpub.com/leprechauns, which concludes that the answer is, in (too) many cases, "not much, and poor".

This applies in particular to the "Defect Cost Increase" claim, which is poorly supported by evidence. The claim states that the longer a defect stays undiscovered after being introduced in a software system's artefacts, the more expensive it is to correct.

The "Pressman Ratios" are a specific quantitative assessment of this claimed effect:

"Assume that an error uncovered during design will cost 1.0 monetary unit to correct. Relative to this cost, the same error uncovered just before testing commences will cost 6.5 units; during testing, 15 units; and after release, between 60 and 100 units."

We can compress this to "1:6.5:15:60-100", for ease of distinguishing it from other variants of the same claim that differ only in the precise ratios.
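
Purely to pin down what is being claimed, and to make comparison with other variants easier, the quoted ratios can be written out as data. This is a minimal Python sketch, not anything from Pressman; the phase labels are my paraphrase of his wording.

```python
# The Pressman Ratios ("1:6.5:15:60-100") written out as data, normalised to
# the cost of fixing an error uncovered during design. Phase labels paraphrased.
PRESSMAN_RATIOS = {
    "design": 1.0,
    "just before testing": 6.5,
    "during testing": 15.0,
    "after release": (60.0, 100.0),  # quoted as a range
}

def relative_cost(phase, baseline="design"):
    """Claimed cost multiplier for fixing a defect found in `phase`, relative to `baseline`."""
    base = PRESSMAN_RATIOS[baseline]
    value = PRESSMAN_RATIOS[phase]
    if isinstance(value, tuple):
        return tuple(v / base for v in value)
    return value / base

print(relative_cost("during testing"))  # 15.0
print(relative_cost("after release"))   # (60.0, 100.0)
```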

The reference cited by Pressman is initially formatted as follows:

[IBM81] "Implementing Software Inspections," course notes, IBM Systems Sciences Institute, IBM Corporation, 1981

Evidence for the existence of the Institute

Evidence that the Systems Sciences Institute even existed is scant and hard to come by. A search of the IBM corporate web site yields nothing, for instance.

Using Google Scholar with search terms "IBM Systems Sciences Institute" (Scholar) yields a short list of people who seem to have been affiliated with the organization:

I am using middle names and middle initials where available, to allow specific Google searches and disambiguation; the usual citation style has only the last name in full, e.g. "A. O. Allen".

What was the IBM Systems Science Institute?

Quoting Claude W. Burrill:

The Systems Science Institute is an educational organization and is a unit of IBM. We offer a variety of advanced courses for management and for professionals in the computer field.

The Institute was, in essence, an internal training program for IBM employees.

Link to the IBM Systems Research Institute

Confusingly, several biographies of the people listed above associate them with the "IBM Systems Research Institute" (for instance, Burrill's obituary). A New York Times article describes it as follows:

(...) virtually all I.B.M. employees receive some kind of company-financed education or training each year beyond basic job training. The education might range from attendance at special lectures to full-fledged courses, inside or outside corporate facilities. Despite its other programs, I.B.M. believes it needs a graduate-level school for its own use. The Systems Research Institute, founded in 1960, is the closest thing to an academic center at I.B.M. (...) But the Systems Research Institute is not an academic place. "It's all business," said its associate director, Joseph E. Flanagan.

The IBM Systems Research Institute was based in the state of New York, with offices in Manhattan and Thornwood. (Giant, Origins, Thornwood)

Location, history and details

The IBM Systems Science Institute was based in Los Angeles, at 3550 Wilshire Boulevard. (Address).

It started operations as early as 1967, according to an IBM ad that refers to the Institute by its post-1982 name (see next paragraph). (Started)

It ceased to operate under that name before 1982, according to a different ad, becoming the "Information Systems Management Institute". (Rename)

Bearing on the evaluation of the Defect Cost Increase claim

Another article will deal in more detail with the credibility of the Pressman Ratios, and how they are widely but generally inappropriately cited throughout the literature on software engineering.

The main findings of the present entry on the "IBM Systems Science Institute" are as follows:

  • the Institute was a corporate training program, not a research body; it is therefore inappropriate to cite the source of the ratios as "an IBM study" or "a study by the IBM Systems Science Institute", particularly as no source even claims that the Institute collected the underlying data itself;
  • the original project data, if any exist, date from no later than 1981, are probably older, and could be as old as 1967.

References

@trdrake commented Aug 22, 2019

Hi! Hope you don't mind me chiming in - I found your note while trying to run down the same exact IBM paper and failing. Found this paper instead, which is addressing the same basic question: is this "defect cost increase" thing completely apocryphal? Short answer: seems to be, and recent studies don't show it.

https://arxiv.org/pdf/1609.04886.pdf

@Morendil (author)

@trdrake Not at all, it's such a relief to know I'm not the only one wondering :) Thanks for the link!

@Morendil (author) commented Aug 23, 2019

@david-a-wheeler

There's a real paper from a similar timeframe that does show this effect of exponential growth in cost of correction over time.

That paper is Barry W. Boehm, "Software Engineering", IEEE Transactions on Computers, vol. 25, no. 12, December 1976, pp. 1226-1241. DOI: 10.1109/TC.1976.1674590, https://www.computer.org/csdl/journal/tc/1976/12/01674590/13rRUzp02n5. At the time, Boehm was at the TRW Systems and Energy Group.

It says "Fig. 3 shows a summary of current experience at IBM[4], GTE[5], and TRW on the relative cost of correcting software errors as a function of the phase in which they are corrected." and indeed figure 3 (page 1228) shows exponential growth. It only shows averages or ranges for each data source, and that's a legitimate critique. That said, it does show them for multiple companies, and then presents a trend line that plausibly follows from the data provided. Boehm has a good reputation; I expect that this really was a reasonable observation drawn from real data.
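
To make the "trend line" concrete, here is a minimal sketch of the kind of log-linear fit Boehm's Figure 3 presents. The input values are the Pressman-style ratios quoted earlier, used purely as stand-ins; they are not the actual data points from Boehm's figure.

```python
import math

# Exponential (log-linear) fit to phase-indexed cost multipliers, illustrating
# the kind of trend line shown in Boehm's Fig. 3. The values below are the
# Pressman-style ratios quoted earlier, used as stand-ins only; they are NOT
# Boehm's data points.
phases = [1, 2, 3, 4]                 # design, pre-test, test, post-release
multipliers = [1.0, 6.5, 15.0, 80.0]  # midpoint of the 60-100 range for the last phase

# Ordinary least squares on log(cost) versus phase index.
n = len(phases)
logs = [math.log(m) for m in multipliers]
mean_x = sum(phases) / n
mean_y = sum(logs) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(phases, logs)) / sum(
    (x - mean_x) ** 2 for x in phases
)

# exp(slope) is the per-phase growth factor of the fitted exponential curve.
print(f"Fitted cost growth factor per phase: {math.exp(slope):.1f}x")
```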

It's legitimate to question whether or not that is still true. Computer "science" is notorious for having almost no science - experiments are almost non-existent. I would love to see this & many other experiments conducted to see what's true today.

@Morendil (author) commented Jul 22, 2021

Yep, that's one of the ur-papers that you get to if you chase citations around long enough, the "telephone game" pattern I explore at length in Leprechauns.

But it has its problems - some of the data was obtained from student projects Boehm was supervising, for instance. Another problem is that you still have to check the papers cited by Boehm to follow the "chain of data custody" - Boehm's towering reputation notwithstanding.

To pick just one (I have a more thorough survey in the book), the source of the GTE data is a paper by Daly (paywalled by the usual suspects, but Sci-Hub is your friend), the only relevant portions of which are shown below:

[Screenshot: the relevant figure and passage from Daly's paper]

The development cost required to detect an error by reading code is approximately equal to 25 percent of that required to detect the same error via machine debugging. This statistic was gathered from two projects, a 111 000 instructions project and a 12000 instructions project.

As you can see from the figure, the categories of interest are labeled differently. Daly was comparing "reading code" to "machine debugging", not phases. Also importantly, Daly was talking about the cost incurred to find a bug, not the time to fix one. And in the 1970s the primary cost of "machine debugging" would have been the cost of running the computer, which tended to dwarf the labor costs of the human programmers.
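
To make the mismatch concrete, a minimal sketch of the arithmetic: taken at face value, Daly's statistic is a cost ratio between two detection methods applied to the same projects, not a growth factor across lifecycle phases.

```python
# Daly's reported statistic (quoted above): detecting an error by reading code
# costs about 25% of detecting the same error via machine debugging.
reading_vs_debugging = 0.25
print(f"Daly: a 1:{1 / reading_vs_debugging:.0f} cost ratio between detection methods")

# That is a comparison between techniques on the same projects, measured as
# detection cost; a different quantity from a phase-indexed correction
# multiplier like the "fifteen" discussed below.
```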

Anyway, it's not clear on what basis it was decided that "Eyeball" corresponds to "Requirements", "Prototype system" to "Development test", and "Post cutover" to "Operation" - other than Boehm wanting to shoehorn the data into his model. (And what does "eyeball" even mean in this context? Not clear.)

Finally, it's somewhat obvious that Boehm made up the error bars around the log-graph data point that stands for "fifteen" - why would he do that, other than to lend an air of statistical legitimacy to what was in fact a wet-finger-in-the-air approximation from Daly? The intellectually honest thing to do here isn't to draw fake error bars, it's just to plot the damn points, quote the small paragraph above from Daly, and warn the reader about the approximate nature of the equivalence being made. Yes, that adds half a page to the paper, but that's how scientific publications are supposed to be: long, painstakingly detailed, and rigorous.

But Boehm's paper is not science, it's a marketing brochure, selling a new-fangled concept called "Software Engineering". So it has to be short and convincing, and scientific accuracy is a secondary concern.

@ecmonsen

Thank you for this analysis and the much-appreciated source links. I just spent about half an hour trying to track down the original cost ratio chart I keep seeing everywhere, and finally got here by searching DuckDuckGo (not Google) for '"IBM System Science Institute Relative Cost of Fixing Defects" original'.

@Morendil (author) commented Jun 1, 2022

@ecmonsen Very glad this was helpful!
