# Thoughts on Testing #
Last night I watched this video rant from last October by [Hampton Catlin][hcatlin] (creator of HAML), in which he makes the argument that test-driven code is often buggier and more fragile than un-tested code:
<catlin video>
Shocking, right? Arguing against testing is the sort of thing that gets one thrown out of the Ruby country club, since we're supposed to be [testing all the fucking time][tatft], and most Ruby job descriptions these days are not only aimed at programmers who test but at ones who test using [the latest, hottest tools.][cucumberjobs]
I, for one, have been sipping more of the testing Kool-Aid in the last several months (thanks to the excellent Shoulda and Factory Girl libraries), but I'm also a testing skeptic. My first programming language was PHP, and my background is more in traditional design and media production than computer science. That's to say: I came to programming from a tradition where quantifiable, provable successes are rare, and knowing (and being able to explain) *why* you did the work can be more important than doing the work in the first place. I personally think that's a harder skill to master, which is why so many of my art school classmates are working at Trader Joe's (or, for that matter, developing software) instead of selling paintings.
But that's one reason why I had such a hard time accepting the necessity of writing tests: because while a huge majority of the developer community has accepted as an article of faith that one should test, I've never heard a compelling argument for *why* one should test.
Sure, sure—there are lots of *potential* benefits to having a good test suite. But many of those presume that your test suite is actually good, and not just busywork. A lot of folks have come to take it on faith that there are two kinds of programmers: lazy ones, and ones who test. Believe it or not, there are such things as lazy programmers _who test_, just as there are talented, hardworking programmers who think testing is a waste of time.
I'm not sure yet whether I agree with Hampton that test-driven coding provides _only_ a false sense of security. In the video he describes situations where programmers will refuse to look at a problem because the tests pass, and refuse to believe in the possibility that tests could pass with broken code. I've dealt with enough lazy, deluded minds to know that simply taking away these programmers' tests won't stop them being lazy or deluded; it will just change the vector of their laziness.
The more crucial thing is to get people to start asking *why* they're testing. Tests are an abstraction, and one reason I was so resistant to testing is because I fancy myself an artist, and artists make things. You might think that because art folks know about color and emotion and subtext, we must be totally nuts for abstraction. If you think that, you've obviously never had to try explaining anything to an art student: artists are visual, tactile people, and many of us are so talented with images because we absolutely suck at words, and at math and science we're even worse.
Thing is, artists—the good ones, anyway—are also obsessive perfectionists. It's not that designers-turned-programmers like myself have avoided testing because we're lazy, or don't care about writing quality code. It's that testing everything, all the fucking time, doesn't seem like a useful abstraction.
Ryan Singer is a designer at 37signals who (if memory serves) went to school for philosophy, knows a few things about code, and has learned about programming in Ruby from working with some of the smartest test-driven developers on the planet. Singer sketches his user interfaces on paper before he starts on the code, and has said that any sketch detailed enough to make sense to you the next day is _too_ detailed. That's to say, the purpose of the sketch isn't to be a rough version of the design, it's to organize your thought process so that you can get to work on a _real_ version of the design.
The conventional wisdom about test suites is that they're like the blueprints for a building: you write the test first to specify what the code _should_ do, then write the code that does it. In my world, we call that a sketch. Sketches are an incredibly valuable tool for clarifying your thinking about what you're doing, a way of looking at your ideas somewhere other than in your head to make sure the thing you want to make bears some resemblance to the thing you're actually making. Used in this way, tests make a lot of sense. This is how I use them. I would speculate that a _lot_ of test-first code is written from this standpoint, either before any real code is written (as features or specs) or hacked together in tandem with the actual stuff. Please don't think I'm saying this kind of test isn't valuable, because it is.
What I'm saying is that the kind of problem Catlin describes in his video creeps in when people confuse _specification tests_, or "sketch" tests, with _acceptance tests_. The job of the former is to help the developer understand what they're writing and provide a faster, more efficient way to get things done. The latter exists to provide an ongoing insurance policy by confirming that stuff that was working yesterday continues to work days, weeks or months down the line.
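To make that distinction concrete, here's a minimal sketch in plain Ruby using Minitest. The `Cart` class and its tests are hypothetical, invented for illustration—not anything from Catlin's video—but they show how the same test file can hold both kinds of test: one written first to pin down what the code should do, and one kept around as insurance against regressions.

```ruby
# Hypothetical example: a tiny Cart class plus two tests that play
# the two different roles described above. All names are illustrative.
require "minitest/autorun"

class Cart
  def initialize
    @items = []
  end

  # Add an item's price (in cents) to the cart.
  def add(price)
    @items << price
  end

  # Total of all item prices; zero for an empty cart.
  def total
    @items.sum
  end
end

class CartTest < Minitest::Test
  # "Sketch" (specification) test: written before the code existed,
  # to clarify what Cart#total was supposed to mean. Its main value
  # is spent once the design is settled.
  def test_total_sums_item_prices
    cart = Cart.new
    cart.add(300)
    cart.add(200)
    assert_equal 500, cart.total
  end

  # Acceptance test: kept running days or months later to confirm
  # that behavior which worked yesterday (the empty-cart edge case)
  # still works after refactoring.
  def test_empty_cart_totals_zero
    assert_equal 0, Cart.new.total
  end
end
```

The code in both tests looks identical; what differs is the job each one is doing, which is exactly why the two are so easy to confuse.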