
Unit Test Scenario Builder Pattern

Reusing test fixtures without hiding what’s being tested

Unit testing is hard…

Only in the most basic situations is unit testing actually “easy”. The only code that’s easy to test involves functions that simply take input and calculate a result. These types of tests fall into the classic testing paradigm of calling a function and running some assertions. The problem is that most code interacts with other components, which makes it extremely difficult to test functionality in isolation. Remember, testing code in isolation is what makes it a unit test.

The only option available in type-safe languages is to replace dependencies with fakes (stubs, mocks, etc.) at runtime. In C#, this requires the dependency to be an interface or an abstract class. On the flip side, in dynamic languages such as JavaScript, Python or Ruby, you can use monkey patching to replace an object’s members (including its methods) at runtime ([http://en.wikipedia.org/wiki/Monkey_patch]). This lets you stub out operations that would usually hit an external system or perform an expensive calculation.

Beyond just using fakes, the code being tested must be modified to use dependency injection: either passing dependencies to the constructor or having properties for them. Generally, creating interfaces and using dependency injection is considered worth the effort in order to support unit testing. It’s become such a common practice that dependency injection is used even when there are no unit tests. Some people claim that designing for testability can improve a system’s overall architecture.
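
As a quick illustration, here is what constructor injection looks like; the interface and class names below are just placeholders, not part of any real system:

public interface IEmailSender
{
	void Send(string address, string body);
}

public class OrderProcessor
{
	private readonly IEmailSender emailSender;
	
	// The dependency is passed in rather than created internally,
	// so a test can substitute a fake implementation.
	public OrderProcessor(IEmailSender emailSender)
	{
		this.emailSender = emailSender;
	}
	
	public void Process(string customerAddress)
	{
		// ... business logic ...
		emailSender.Send(customerAddress, "Your order has shipped.");
	}
}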

Creating fakes by hand is a tedious process. Even for the smallest interfaces, it means creating an entire class. This is such a painful process that tools have been made that generate fakes dynamically at runtime. These tools significantly cut down on the amount of effort needed to write unit tests.
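
To give a sense of the boilerplate, a hand-written fake for the hypothetical IEmailSender above might look like this:

// A hand-written fake that records calls so a test can assert against them.
// Even for a one-method interface, this is a whole extra class to maintain.
public class FakeEmailSender : IEmailSender
{
	public int SendCallCount { get; private set; }
	
	public string LastAddress { get; private set; }
	
	public string LastBody { get; private set; }
	
	public void Send(string address, string body)
	{
		++SendCallCount;
		LastAddress = address;
		LastBody = body;
	}
}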

MockingKernel for automatic injection of mocks

I recently came across a tool that automates mock creation with dependency injection: Ninject’s MockingKernel extension ([http://github.com/ninject/ninject.mockingkernel]). Ninject is a tool that helps create objects by automatically wiring up their dependencies. You write code that specifies which implementation (concrete type) to associate with an interface (or abstract class). When Ninject sees that a constructor expects a particular interface, it looks to see which implementation it should create. It will recursively create dependencies, so if you’ve completely configured your dependency bindings, you only need to ask for the top-level object. The MockingKernel extension goes a little bit further and will automatically detect missing bindings and create mock objects on your behalf.

This automatic mock creation is a massive time saver. Normally, in order to create the object being tested, you need to provide mocks for all of the constructor arguments. However, you might not care how/if your code interacts with the dependency. In that case, defining a mock manually is just overhead that clutters up your tests.

With MockingKernel, you are left just defining bindings for mocks you expect to inspect after the code runs. This is as simple as saying kernel.Bind<IMyDependency>().ToMock().InSingletonScope(). The InSingletonScope() is required so Ninject knows to return the same mock object every time. After your code runs, you can retrieve the mock using kernel.Get<IMyDependency>() and inspect which methods were called with what arguments. Overall, this minimizes the amount of boilerplate code you need to write to simply create a mock and inspect how it was used.
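
Putting that together, a test using the NSubstitute flavor of MockingKernel might look like this sketch (CodeUnderTest, IMyDependency and their methods are placeholders here, and the usual Ninject and NSubstitute namespaces are assumed):

using (var kernel = new NSubstituteMockingKernel())
{
	// Singleton scope makes the kernel hand back the same mock instance every time.
	kernel.Bind<IMyDependency>().ToMock().InSingletonScope();
	
	// Any other constructor arguments are mocked automatically.
	CodeUnderTest code = kernel.Get<CodeUnderTest>();
	code.DoWork();
	
	// Retrieve the same mock and inspect how it was used (NSubstitute syntax).
	kernel.Get<IMyDependency>().Received(1).DoSomething();
}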

Parameterized tests and scenarios

When you are testing a class, it is pretty typical for most of the tests to look similar. In these cases, the only things changing among tests are the values returned by the mock objects and/or arguments passed to the method(s) being called. In order to cut down on duplication, it is important to create helper methods for configuring the mock objects and executing the code under test. If a lot of the assertions are similar, it makes sense to create custom assertions and maybe even move them into the parameterized helper method.
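
To make this concrete, here is a rough sketch of such a helper, written against the CodeToTest class that appears later in this article (the helper name and values are arbitrary):

// A parameterized helper that arranges the mocks, runs the code under test,
// and returns the result, so each test only states what varies.
private int RunScenario(IKernel kernel, bool answer, int positiveResult, int negativeResult)
{
	kernel.Get<IQuery>().GetAnswer().Returns(answer);
	kernel.Get<IPositive>().GetResult().Returns(positiveResult);
	kernel.Get<INegative>().GetResult().Returns(negativeResult);
	
	CodeToTest code = kernel.Get<CodeToTest>();
	return code.Run();
}

[TestMethod]
public void ShouldGetPositiveResult()
{
	using (IKernel kernel = new NSubstituteMockingKernel())
	{
		int result = RunScenario(kernel, answer: true, positiveResult: 123, negativeResult: 234);
		Assert.AreEqual(123, result, "The wrong result was returned.");
	}
}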

Some people argue that duplicating code in unit tests is an acceptable practice. The argument is that unit tests should make it immediately apparent what they are testing. Anything that requires reading more than a few lines of code risks no one reading your tests at all. No one wants to be responsible for fixing an obscure test; it's not always apparent what an obscure test is doing or what caused it to stop working. Some also argue that if a test is hard to read and write, then the code probably needs to be refactored and simplified. However, my personal experience is that any code involving mock objects is immediately complicated. Even though using MockingKernel cuts down on clutter, it implements IDisposable, which means it should be properly disposed of, which leads to using statements in your test methods, which makes your tests less readable... My conclusion is that "readable" is just too subjective.

Whenever I see unit testing taking too much time, it is usually due to test duplication or the use of helper methods that try to do too much. Test duplication leads to problems because changes to the code being tested affect multiple tests simultaneously. A good suite of helper methods should isolate the impact of changes. Unit test code should be treated with the same care as any production code: developers should do whatever is necessary to eliminate complexity. Eliminating complexity can mean adding additional helper methods or even removing code when it is getting too hard to follow.

One of the limitations of parameterized helper methods is that there can be just too many parameters. Even though your class may only have a few dependencies, a single dependency may be used multiple times in various ways within your code under test. You end up having a parameter for each dependency interaction, rather than just one per dependency. Once there are more than about 4 parameters, the readability of the code is drastically reduced. One approach is to create a separate helper method for each mock object being configured. This has the benefit of giving you fine-grained control over configuration. However, you end up with test methods that have a long list of function calls at the top. Another common pattern to address this situation is called "Parameter Object" ([http://refactoring.com/catalog/introduceParameterObject.html]). Basically, create a class with properties for each of the arguments. Then in the test, simply initialize an object and pass it to the helper method.
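
A sketch of that refactoring, again against the CodeToTest example (the class and property names are arbitrary):

// The parameter object: one property per mock interaction instead of a long
// argument list on the helper method.
public class RunParameters
{
	public bool Answer { get; set; }
	
	public int PositiveResult { get; set; }
	
	public int NegativeResult { get; set; }
}

private int RunScenario(IKernel kernel, RunParameters parameters)
{
	kernel.Get<IQuery>().GetAnswer().Returns(parameters.Answer);
	kernel.Get<IPositive>().GetResult().Returns(parameters.PositiveResult);
	kernel.Get<INegative>().GetResult().Returns(parameters.NegativeResult);
	
	CodeToTest code = kernel.Get<CodeToTest>();
	return code.Run();
}

// In a test:
// int result = RunScenario(kernel, new RunParameters() { Answer = true, PositiveResult = 123 });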

In the spirit of object-oriented programming, the next logical thing to do is move the functionality of configuring and running your tests in with the data that they use. This means moving the parameterized helper methods into the parameter objects themselves. At first, I wasn't sure if this was a good idea. I felt moving the code further away from the test method would lead to obscure tests. However, the jump from a helper method to a helper class is minimal, especially if they are in the same file. At this point, I give the parameter object the more meaningful name of "scenario". Different values for the properties allow you to execute different scenarios.

You might ask, "why not move your assertions into the scenario?" In some cases, you can. If something should always happen, regardless of the parameters, it should go into the scenario class. The whole point is to eliminate duplication. However, most of the time, you won't be able to run the same assertions. The entire point of changing the scenario's properties is to alter the flow through the code under test. This means different mock objects will be interacted with and the return values will change. It's a judgement call, but normally you should expect the assertions to change on a test-by-test basis.

Scenario Inheritance

I like to build up my scenario classes as I go. If the code I am testing only configures two mock interactions, then I only create two properties. However, as I add more tests, I realize I need more configuration when going down different code paths. I could add a new property to the scenario class. However, I have to be careful not to break my existing tests. Most of the time, leaving a property uninitialized in old tests doesn't lead to problems. The last thing you want to do is go back to previous tests and update them to set a property for a mock they don't even use!

An alternative is to use inheritance. Define a new scenario subclass that inherits from your old scenario. Add any new properties that you need to the subclass. Then make the configuration method in the base class virtual and override it in the derived scenario, calling into the base class's configuration method as well as configuring any additional mocks. Here is a rather complex example for testing an if/else scenario:

// code under test
public class CodeToTest
{
	private IQuery query;
	private IPositive positive;
	private INegative negative;
	
	public CodeToTest(IQuery query, IPositive positive, INegative negative)
	{
		this.query = query;
		this.positive = positive;
		this.negative = negative;
	}
	
	public int Run()
	{
		if (query.GetAnswer())
		{
			return positive.GetResult();
		}
		else
		{
			return negative.GetResult();
		}
	}
}

// test scenarios (assumes MSTest, Ninject, and NSubstitute)
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Ninject;
using NSubstitute;
public abstract class Scenario
{
	protected bool Answer { get; set; }
	
	public int Result { get; set; }  // only used in derived classes
	
	public virtual void Run(IKernel kernel)
	{
		Configure(kernel);
		
		CodeToTest code = kernel.Get<CodeToTest>();  // let MockingKernel deal with dependencies
		int result = code.Run();
		
		Assert.AreEqual(Result, result, "The wrong result was returned.");
		kernel.Get<IQuery>().Received(1).GetAnswer();  // check that IQuery.GetAnswer is called
	}
	
	protected virtual void Configure(IKernel kernel)
	{
		kernel.Get<IQuery>().GetAnswer().Returns(Answer);
	}
}

public class PositiveScenario : Scenario
{		
	public override void Run(IKernel kernel)
	{
		base.Run(kernel);
		
		kernel.Get<IPositive>().Received(1).GetResult();  // check that IPositive.GetResult is called
	}
	
	protected override void Configure(IKernel kernel)
	{
		Answer = true;
		base.Configure(kernel);  // run base class's configuration first!
		kernel.Get<IPositive>().GetResult().Returns(Result);
	}
}

public class NegativeScenario : Scenario
{
	public override void Run(IKernel kernel)
	{
		base.Run(kernel);
		
		kernel.Get<INegative>().Received(1).GetResult();  // check that INegative.GetResult is called
	}
	
	protected override void Configure(IKernel kernel)
	{
		Answer = false;
		base.Configure(kernel);  // run base class's configuration first!
		kernel.Get<INegative>().GetResult().Returns(Result);
	}
}

First, notice that the Scenario.Run method follows the normal Arrange-Act-Assert (AAA) pattern. Here, we know IQuery.GetAnswer() gets called no matter what, so that check lives in the Run method. Since the Scenario class can't be run without one dependency or the other (IPositive vs INegative), it is marked abstract. The Configure methods set Answer to true or false; the property is protected so only the scenarios themselves set it. The Result property was moved to the Scenario class just to avoid duplicating it in both derived classes. Additional checks are added to the derived classes' Run methods. Optionally, a separate method could be written to do assertions - whatever suits your tastes.

At this point, the test methods simply look like this:

[TestMethod]
public void ShouldGetPositiveResultIfAnswerPositive()
{
	using (IKernel kernel = new NSubstituteMockingKernel())
	{
		PositiveScenario scenario = new PositiveScenario() { Result = 123 };
		scenario.Run(kernel);
	}
}

[TestMethod]
public void ShouldGetNegativeResultIfAnswerNegative()
{
	using (IKernel kernel = new NSubstituteMockingKernel())
	{
		NegativeScenario scenario = new NegativeScenario() { Result = 234 };
		scenario.Run(kernel);
	}
}

By default, any un-configured mock method will return default(T). In other words, if the wrong path was executed, the generated mock would implement GetResult so that it returns 0. So, by using any non-zero value, we are sure our code is working. It would be nice if MockingKernel provided a way to generate saboteurs that would cause the test to fail if interacted with - this would add extra assurance that the code flowed as expected.
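
Until then, a saboteur can be built by hand with NSubstitute; here is a minimal sketch that could go in PositiveScenario.Configure (the exception and message are just illustrative):

// Make the branch that should NOT run fail the test loudly if it is ever hit.
kernel.Get<INegative>().GetResult().Returns(callInfo =>
{
	throw new AssertFailedException("INegative.GetResult should not have been called.");
});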

Scenario inheritance is about as complicated as scenario classes should ever get. Even this example suffers from the complexity of abstract classes, polymorphism, calls into the base class and too many lines of code. Scenario inheritance should be used sparingly; the same code could easily have been written with a single Scenario class. Certainly don't create scenario subclasses for every branch in your code!

My experience is that scenario inheritance should correspond to inheritance found in the code under test. In other words, if you create a derived class, its unit tests will probably reuse a lot of the configuration/execution code used in the base scenario class. Instead of clouding up the scenario with parameters it will never use, just create a derived scenario class. As I mentioned before, use scenario inheritance to keep your existing tests working.
