@adamwathan / 0.md
Last active May 12, 2017

The Isolated Testing Challenge

Background

Yesterday at the Laracon Online conference, I gave a presentation where I argued that collaborator isolation in tests handcuffs you to a specific implementation and cripples your ability to refactor.

Since refactoring is a core activity in Test-Driven Development, I claimed that isolated unit testing is therefore incompatible with TDD (🌶🌶🌶).

It's easy to find claims (whether true or not) that many seasoned professionals believe it's important to always test your code in complete isolation from all collaborators, and that anyone who follows the "mockist" school of TDD would never use a real collaborator in a test.

My conversations with experienced "mockist" testers have taught me otherwise (they use real collaborators in their tests regularly), but nevertheless these ideas continue to be parroted across Twitter, Reddit, and other online discussion forums.

The accompanying attitude is usually something like this:

"If testing something in isolation prevents you from refactoring, you're either doing it wrong, or your tests are trying to tell you that your code is poorly designed."

The Challenge

Below are three implementations of the same class that would all produce the exact same outcome as observed from the outside.

Write a unit test for this class that tests it in isolation from its collaborators, and passes with all three of these implementations.

<?php

class ProductsController extends Controller
{
    private $auth;
    private $commandBus;
    private $redirector;

    public function __construct(Guard $auth, CommandBus $commandBus, Redirector $redirector)
    {
        $this->auth = $auth;
        $this->commandBus = $commandBus;
        $this->redirector = $redirector;
    }

    public function store(Request $request)
    {
        $command = new AddProductCommand(
            $this->auth->id(),
            $request->input('name'),
            $request->input('description'),
            $request->input('price')
        );

        $this->commandBus->dispatch($command);

        return $this->redirector->route('products.index');
    }
}

<?php

class ProductsController extends Controller
{
    private $auth;
    private $commandBus;
    private $redirector;

    public function __construct(Guard $auth, CommandBus $commandBus, Redirector $redirector)
    {
        $this->auth = $auth;
        $this->commandBus = $commandBus;
        $this->redirector = $redirector;
    }

    public function store(Request $request)
    {
        $command = new AddProductCommand(
            $this->auth->user()->id(),
            $request['name'],
            $request['description'],
            $request['price']
        );

        $this->commandBus->dispatch($command);

        return $this->redirector->to('/products');
    }
}

<?php

class ProductsController extends Controller
{
    private $auth;
    private $commandBus;
    private $redirector;

    public function __construct(Guard $auth, CommandBus $commandBus, Redirector $redirector)
    {
        $this->auth = $auth;
        $this->commandBus = $commandBus;
        $this->redirector = $redirector;
    }

    public function store(Request $request)
    {
        $command = new AddProductCommand(
            $request->user()->id(),
            $request->name,
            $request->description,
            $request->price
        );

        $this->commandBus->dispatch($command);

        return $this->redirector->route('products.index');
    }
}
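
To make the challenge concrete, this is a sketch of the kind of fully isolated test such a controller typically gets, written here against the first implementation and assuming PHPUnit and Mockery (the imports assume Laravel's Guard contract, Request, and Redirector; CommandBus and AddProductCommand are the application's own classes from the snippets above). Because every expectation is pinned to a specific collaborator call (input() rather than array access or magic properties, id() on the Guard rather than on the user or the request, route() rather than to()), the same test fails against the second and third implementations even though all three behave identically from the outside.

<?php

use Illuminate\Contracts\Auth\Guard;
use Illuminate\Http\Request;
use Illuminate\Routing\Redirector;
use PHPUnit\Framework\TestCase;

class ProductsControllerTest extends TestCase
{
    public function test_store_dispatches_an_add_product_command_and_redirects()
    {
        // Stub every collaborator in isolation.
        $auth = Mockery::mock(Guard::class);
        $auth->shouldReceive('id')->andReturn(1);

        $request = Mockery::mock(Request::class);
        $request->shouldReceive('input')->with('name')->andReturn('Keyboard');
        $request->shouldReceive('input')->with('description')->andReturn('A mechanical keyboard');
        $request->shouldReceive('input')->with('price')->andReturn('99.99');

        $commandBus = Mockery::mock(CommandBus::class);
        $commandBus->shouldReceive('dispatch')->once()->with(Mockery::on(function ($command) {
            return $command instanceof AddProductCommand;
        }));

        $redirector = Mockery::mock(Redirector::class);
        $redirector->shouldReceive('route')->with('products.index')->andReturn('redirect-to-products-index');

        $controller = new ProductsController($auth, $commandBus, $redirector);

        $response = $controller->store($request);

        $this->assertSame('redirect-to-products-index', $response);

        Mockery::close(); // verifies the once() expectation on the command bus
    }
}
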
martindilling commented Mar 9, 2017

But it wouldn't make sense to attempt to write a unit test (like you did) for a controller method that doesn't actually do anything -.-
You would write unit tests for the command handler to prove it does in fact add a product. And as far as I can see you wouldn't run into any problems there?

I do think you're a bit "aggressive" making this point, and using a bad example to "prove" your point ;)

jbrains commented Mar 9, 2017

Looking first at Redirector, it does the same thing two ways. Raise the level of abstraction so that there's only one way to answer with the "products index" route and that problem disappears. In this way, the controller is too interested in implementation details. One option is to use the Spring WebMVC approach of returning a response and letting the client resolve that into a URL. This collects all the URL resolution into one spot, where it's easier to check.

Apply the same principle to Request (unify [], input(), and the magic that makes name, description, and price work).

Apply the same principle to the two ways to get the "current user ID" (use the Auth or use the Request, but pick one).

It seems that writing isolated tests -- even in my head -- encourages moving some implementation details out of the controller.

So, let me introduce functions parseProduct(request) returning a Product (with user ID, name, description, price), addProduct(product) returning nothing, and respond(routeName) returning whatever Redirector returns, with a single, unified way to describe the "products index" response. Now I have this:

class ProductsController extends Controller
{
    private $parseProduct;
    private $addProduct;
    private $respond;

    // I'm not functionally literate in PHP, so wrap these functions in abstract objects,
    // if that's what it takes to make this work. I'm treating them like function references.
    public function __construct($parseProduct, $addProduct, $respond)
    {
        $this->parseProduct = $parseProduct;
        $this->addProduct = $addProduct;
        $this->respond = $respond;
    }

    public function store(Request $request)
    {
        // Callable properties need the extra parentheses to be invoked (PHP 7+).
        $newProduct = ($this->parseProduct)($request);
        ($this->addProduct)($newProduct);
        return ($this->respond)("products.index");
    }
}

Now we can test-drive implementations of the results:

  • parseProduct() integrates with the HTTP library that gives me Request objects, and I might need to integrate with a live server in order to explore how the library converts HTTP requests into Request objects. I can use that contract info to learn how to gradually build all my parseX() functions without the need to run a real server.
  • addProduct() integrates with CommandBus, and I can use the same principles again -- over time, as I refine my understanding of the CommandBus contract, I don't need to plug the command bus into a real command transport to implement all the addX() methods correctly.
  • respond() integrates with Redirector. Same same same again.

I see fewer integrated tests and if we decide to get the User ID from Guard instead of the Request, then I know to patch the parseX() functions and the Controllers don't know the difference. Same if, for example, Redirector's API changes, or we have to add some before/after filter so that adding a product can happen in a transaction. Sounds quite resilient to change.
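
As a rough sketch of that shape, assuming PHPUnit and the callable-property version of the controller above, store() can be exercised in isolation with plain closures standing in for parseProduct, addProduct, and respond; the names and sentinel values below are illustrative only.

<?php

use Illuminate\Http\Request;
use PHPUnit\Framework\TestCase;

class ProductsControllerTest extends TestCase
{
    public function test_store_parses_the_request_adds_the_product_and_responds()
    {
        // Hypothetical parsed result; in the real app this would be an AddProductCommand.
        $parsedProduct = ['user_id' => 1, 'name' => 'Keyboard', 'price' => '99.99'];
        $addedProducts = [];

        $controller = new ProductsController(
            function ($request) use ($parsedProduct) { return $parsedProduct; },        // parseProduct
            function ($product) use (&$addedProducts) { $addedProducts[] = $product; }, // addProduct
            function ($routeName) { return "response:{$routeName}"; }                   // respond
        );

        $response = $controller->store(Request::create('/products', 'POST'));

        $this->assertSame([$parsedProduct], $addedProducts);
        $this->assertSame('response:products.index', $response);
    }
}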

jbrains commented Mar 9, 2017

@martindilling I can see the controller layer collapsing over time into a single object that follows the pattern of "parse request, dispatch command, return response". If we document the contracts of those pieces (with nice tests, maybe), then it becomes easy to do exactly what you suggest. The tedious work comes from, for example, reverse-engineering the specification for things like Request, so let's document our behavior clearly instead of creating that same work for our clients. :)

martindilling commented Mar 9, 2017

@jbrains
I don't know if you saw his talk that this comes from, but what he did was write end-to-end integration tests, in a pretty good and clean way, and imo that is what I would want for testing controllers, since what you essentially want is to test that the HTTP layer works. The unit tests should be testing the business logic. I can only see it becoming "annoying" in the long run if you attempt to write isolated unit tests for the HTTP layer.

But I like the approach you suggested, it makes sense :)

prcanavan86 commented Mar 9, 2017

I hastily wrote a wordy reply, realised my folly and deleted it. Now I've had a think and have read what @jbrains wrote, but I haven't managed to get to a completely isolated solution that I like.

Tried to break it down to what is going on - "The current user is adding a product" - so could the user be given an "addProduct" method? If other entities needed to add products in future, could we create a "ProductManager" interface?

interface ProductManager
{
    public function addProduct($productInfo);
}

class ProductsController extends Controller {
    private $productManager;
    private $redirector;
    
    //User implements "ProductManager" and we inject the logged in user
    public function __construct(ProductManager $productManager, Redirector $redirector)
    {
        $this->productManager = $productManager;
        $this->redirector = $redirector;
    }

    public function store(Request $request)
    {
        //the command/dispatch stuff could now happen in this method
        $this->productManager->addProduct($this->parseRequestForProduct($request));

        return $this->redirector->route('products.index');
    }

    private function parseRequestForProduct(Request $request)
    {
        return [
            'name' => $request->name,
            'description' => $request->description,
            'price' => $request->price
        ];
    }
}

So I'd fake ProductManager, use the real Redirector/Request objects, and, following the Sandi Metz rules of thumb (incoming command message), have a single test: assert the response.
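
As a sketch of that single test, assuming PHPUnit and Mockery: ProductManager is faked with a small anonymous-class spy and the Request is a real one built via Request::create(); the Redirector is stubbed here rather than fully wired up, purely to keep the sketch short.

<?php

use Illuminate\Http\Request;
use Illuminate\Routing\Redirector;
use PHPUnit\Framework\TestCase;

class ProductsControllerTest extends TestCase
{
    public function test_store_adds_the_parsed_product_and_redirects()
    {
        // Spy implementation of the ProductManager interface defined above.
        $productManager = new class implements ProductManager {
            public $added = [];
            public function addProduct($productInfo)
            {
                $this->added[] = $productInfo;
            }
        };

        $redirector = Mockery::mock(Redirector::class);
        $redirector->shouldReceive('route')->with('products.index')->andReturn('redirect-to-products-index');

        $controller = new ProductsController($productManager, $redirector);

        // A real Request carrying the product fields as POST input.
        $request = Request::create('/products', 'POST', [
            'name' => 'Keyboard',
            'description' => 'A mechanical keyboard',
            'price' => '99.99',
        ]);

        $response = $controller->store($request);

        $this->assertSame([[
            'name' => 'Keyboard',
            'description' => 'A mechanical keyboard',
            'price' => '99.99',
        ]], $productManager->added);
        $this->assertSame('redirect-to-products-index', $response);
    }
}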

coderabbi commented Mar 10, 2017

As the guest straw man, I feel compelled to comment... :-P

For starters, I want to say that I think this is a fantastic vehicle to get the conversation started, Adam.

That said, I am at something of a loss how you get from implementations which are difficult to test to the conclusion that testing in isolation is antithetical to TDD. I'm not sure the "challenge" speaks to TDD at all, given that it's several concrete implementations which predate their tests. TDD is the practice of driving design via tests, not driving tests via design. You, of course, know that, so if you could draw the dotted lines between premise and conclusion, it would be helpful.

Incidentally, if I found myself with this sort of test tension "in the field", it would give me pause - which it should, that's the point - within which I could reconsider my design rather than my approach to testing.

Regarding your claim that all of your mockist friends use concrete collaborators frequently in their tests, I have little doubt that is true. Ask them, though, if they were using concrete collaborators while the units were still in development. I'm relatively certain that you'll receive a different answer. It's difficult to utilize concrete implementations which likely have yet to come into existence.

Personally, I'm sure I delete two or three isolated tests (useful as I am fleshing out my design and discovering unit boundaries, but unnecessary as part of our test suite) for each test that ends up accompanying the stable unit. And, yes, many of those tests which remain use concrete collaborators. The isolated tests which I use to drive the design are what JBrains, above, I believe calls "programmer's tests". Eventually, though, unit boundaries become obvious, and often they include multiple classes working in concert, so these tests are discarded as they have outlived their usefulness.

You are correct in your thinking that the mockist's approach requires more up-front design consideration -- I find that a good thing. Indeed, it is exploratory in nature and yes, often our hypotheses turn out to be false, causing us to reconsider our design. While the classicist finds themselves decomposing over-ambitious classes during the refactor phase, mockists will find themselves consolidating over-abstraction.

I think the point of divergence might be in the very first sentence: "collaborator isolation in tests handcuffs you to a specific implementation and cripples your ability to refactor". Indeed, as JBrains alludes to above, brittle tests are endemic in the mockist's world, but the refactoring of the TDD red-green-refactor cycle doesn't happen en masse at some later date; rather, it's done every cycle - red-green-refactor rather than red-green-red-green-red-green-REFACTOR - and it's done to the tests (abandoning "programmer's tests" when they become unnecessary, lest they lead to the handcuffing you describe) as well as to the design. I'm only handcuffed by them if I allow tests which have outlived their usefulness to remain or if I omit the micro-refactoring inherent to TDD.

Finally, I must confess that I seldom "test drive" controllers. Face it, much of what we build, we've built in some variation previously. In practice, while my controllers have yet to devolve to the single object which JBrains describes above, they are highly repetitive and I'm not sure they'd benefit much at this point from test driving. I don't need the feedback cycle of test driving to vet my design explorations, as they've been vetted previously.

And Adam, what's with the type hints????!
