
@practicingruby
Created May 1, 2014 03:17

There's no general rule you can follow here, because it always depends on context. In my experience, the kind of feedback loops you create and the kind of safety nets you need are defined entirely by the domain, the organization, and the team culture.

Here are a few examples:

  1. I do a bit of work for a medium-sized dental clinic. The business manager there is really fun to work with, but tends to change his mind six times before his ideas settle. So when he asks for a report, I don't put any effort into writing tests, or even into worrying about minor bugs, because my only goal is to flesh out in code something vaguely resembling what he asked for.

Often, this means doing a handful of 30-minute prototypes until the requirements settle, each of which would have taken me 2 hours if I had driven it via TDD. When things finally cool down, I evaluate the complexity and maintainability of the resulting code and either leave it untested, add some acceptance tests, backfill unit tests, or just chuck the whole thing out and reimplement it from scratch using TDD.

  2. Whenever I'm doing something that has a well-defined algorithm to it, I almost always use TDD and unit tests. For example, I had to build a barcode validator for a few different barcode formats. The nominal and off-nominal cases were very well known and would never, ever change. It'd be crazy not to use TDD here, because there's no guesswork involved.

  3. In the case of Practicing Ruby, I learned about a week before I was about to relaunch the business that the publishing platform I intended to use was critically flawed, and I needed to come up with a replacement from scratch because I didn't want to delay the relaunch and lose the faith of my customers. Needless to say, no tests were written, corners were cut, and it was the definition of "dirty coding". But later on we gradually paid down those debts, and though we don't quite use a TDD flow now, we do make sure that every feature that gets merged has good test coverage.

  4. In open source projects (like the Prawn PDF project I maintain), I absolutely insist on having high-quality tests before merging anything. It'd be crazy not to, because the knowledge of the system is so widely distributed, it has to live on for years and years, and it is hard to remove things from libraries without frustrating people. When doing bug fixes, I almost exclusively use TDD, too.
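To make the barcode example in (2) concrete, here's a minimal sketch of what that kind of test-driven validator might look like. The original formats aren't specified, so this assumes EAN-13 (a check-digit scheme with exactly the "well-known, will never change" character described above); the module and test names are illustrative, not taken from the actual project.

```ruby
require "minitest/autorun"

# Validates EAN-13 barcodes: 13 digits, where the last digit is a
# check digit derived from the first twelve (odd positions weighted 1,
# even positions weighted 3, per the GS1 check-digit algorithm).
module EAN13
  def self.valid?(code)
    return false unless code =~ /\A\d{13}\z/

    digits = code.chars.map(&:to_i)
    check  = digits.pop
    sum    = digits.each_with_index.sum { |d, i| i.even? ? d : d * 3 }
    (10 - sum % 10) % 10 == check
  end
end

# The tests enumerate the nominal and off-nominal cases up front,
# which is exactly what makes TDD a natural fit here.
class EAN13Test < Minitest::Test
  def test_accepts_a_known_valid_code
    assert EAN13.valid?("4006381333931")
  end

  def test_rejects_a_code_with_a_bad_check_digit
    refute EAN13.valid?("4006381333932")
  end

  def test_rejects_wrong_length_and_non_digit_input
    refute EAN13.valid?("12345")
    refute EAN13.valid?("400638133393a")
  end
end
```

Because every branch of the spec is known in advance, the test list can be written out completely before the implementation exists — the opposite of the throwaway-prototype situation in (1).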

I could go on, but you get the idea. The important thing behind all of this is that context matters. Also, you can probably ignore the "testability" and "maintainability" bogeymen that extreme TDD advocates rely on for their arguments. A system made up of easily replaceable parts is maintainable even if those parts are absolute garbage, and the idea of building a "testable" system is really more about following the SOLID design principles than it is about practicing TDD. This is not to ignore the contributions TDD has made on both of these fronts, but to say that these things are complementary.

You don't necessarily give up testability and maintainability by not practicing TDD, and you might gain any number of things in return for relaxing those standards: speed, reduced development costs, less code to maintain, and so on. To become a mature developer is to recognize these tradeoffs and use them effectively. I'm still learning how to do that, myself.
