This document is a reference for common testing patterns in a Django/Python project using Pytest.
Contents:
- Set-up and tear-down
- Mocks
- Controlling external dependencies
- Functional testing
- Running tests
- Using Pytest fixtures
- Writing high quality code and tests
## Set-up and tear-down

Tools and patterns for setting up the world how you want it (and cleaning up afterwards).
For Django models, the basic pattern with factory boy is:

```python
from datetime import datetime

import factory

from foobar import models


class Frob(factory.django.DjangoModelFactory):
    # For fields that need to be unique.
    sequence_field = factory.Sequence(lambda n: f"Bar{n}")

    # For fields where we want to compute the value at runtime.
    datetime_field = factory.LazyFunction(datetime.now)

    # For fields computed from the value of other fields.
    computed_field = factory.LazyAttribute(lambda obj: f"foo-{obj.sequence_field}")

    # Referring to other factories.
    bar = factory.SubFactory("tests.factories.foobar.Bar")

    class Meta:
        model = models.Frob
```
Using post-generation hooks:

```python
class MyFactory(factory.Factory):
    blah = factory.PostGeneration(lambda obj, create, extracted, **kwargs: 42)

MyFactory(
    blah=42,      # Passed in the 'extracted' argument of the lambda
    blah__foo=1,  # Passed in kwargs as 'foo': 1
    blah__baz=2,  # Passed in kwargs as 'baz': 2
    blah_bar=3,   # Not passed (note the single underscore)
)
```
Factory boy can also be used to create other object types, such as dicts. Do
this by specifying the class to be instantiated in the `Meta.model` field:

```python
import factory

class Payload(factory.Factory):
    name = "Alan"
    age = 40

    class Meta:
        model = dict

assert Payload() == {"name": "Alan", "age": 40}
```
There's also a convenient `factory.DictFactory` class that can be used for dict factories.
If the dict has fields that aren't valid Python keyword args (e.g. they include
hyphens or shadow built-in keywords like `from`), use the `rename` meta arg:

```python
class AwkwardDict(factory.DictFactory):
    # Named with a trailing underscore as we can't use 'from'.
    from_ = "Person"
    # Named with an underscore as we can't use a hyphen.
    is_nice = False

    class Meta:
        rename = {"from_": "from", "is_nice": "is-nice"}

assert AwkwardDict() == {"from": "Person", "is-nice": False}
```
This is useful for writing concise tests that pass a complex object as an input.
## Mocks

Python's `mock` library is very flexible. It's helpful to distinguish between two ways that mock objects are used:

- Stubs: where the behaviour of the mock object is specified before the act phase of a test.
- Spies: where the mock calls are inspected after the act phase of a test.

Equivalently, you can think of mocks as either being actors (stubs) or critics (spies).
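To make the distinction concrete, here's a minimal sketch using a hypothetical `apply_discount` function and `pricer` collaborator, where a single mock acts first as a stub and then as a spy:

```python
from unittest import mock

def apply_discount(pricer, amount):
    # Hypothetical system-under-test that delegates to a collaborator.
    return pricer.discount(amount)

def test_apply_discount():
    pricer = mock.Mock()

    # Stub: specify behaviour *before* the act phase.
    pricer.discount.return_value = 90

    result = apply_discount(pricer, 100)

    # Spy: inspect the calls *after* the act phase.
    assert result == 90
    pricer.discount.assert_called_once_with(100)
```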
Stubbing involves replacing an argument or collaborator with your own version so you can specify its behaviour in advance.

When passing stubs as arguments to the target, prefer `mock.create_autospec` so function/attribute calls are checked against the real signature:
```python
from unittest import mock

from foobar.vendor import acme

def test_with_spec():
    # Function stubs will have their arguments checked against the real
    # function signature.
    fn = mock.create_autospec(spec=acme.do_thing, **attributes)

    # Use instance=True when stubbing a class instance.
    client = mock.create_autospec(spec=acme.Client, instance=True, **attributes)
```
- Don't pass instantiated class instances as the `spec` argument; use `instance=True` instead.
- Be aware that `create_autospec` can have poor performance if it needs to traverse a large graph of objects.
- Be aware that you can't stub a `name` attribute when calling `mock.create_autospec` or `mock.Mock()`. Instead, either call `m.configure_mock(name=...)` or assign the `name` attribute in a separate statement:

  ```python
  m = mock.create_autospec(spec=SomeClass, instance=True)
  m.name = "..."
  ```
Use the following formula to create stubbed Django model instances that can be assigned as foreign keys:
```python
from unittest import mock

from django.db import models

def test_django_model_instance():
    instance = mock.create_autospec(
        spec=models.SomeModel,
        instance=True,
        **fields,
        _state=mock.create_autospec(
            spec=models.base.ModelState, spec_set=True, db=None, adding=True
        ),
    )
```
This is useful for writing isolated unit tests that involve Django model instances.
Assign an iterable as a mock `side_effect`:

```python
stub = mock.create_autospec(spec=SomeClass)
stub.method.side_effect = [1, 2, 3]

assert stub.method() == 1
assert stub.method() == 2
assert stub.method() == 3
```
Assign an exception as a mock `side_effect`:

```python
stub = mock.create_autospec(spec=SomeClass)
stub.method.side_effect = ValueError("Bad!")

with pytest.raises(ValueError):
    stub.method()
```
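A `side_effect` can also be a callable: it's invoked with the same arguments as the call itself, and its return value becomes the mock's return value. A small sketch:

```python
from unittest import mock

stub = mock.Mock()
# The callable receives the call's arguments and computes the return value.
stub.method.side_effect = lambda x: x * 2

assert stub.method(3) == 6
assert stub.method(x=5) == 10
```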
Use the `responses` library. It provides
a decorator and a clean API for stubbing the responses to HTTP requests:

```python
import responses

@responses.activate
def test_something():
    responses.add(
        method=responses.POST,
        url="https://taylor.rest/",
        status=200,
        json={
            "author": "Taylor Swift",
            "quote": "Bring on all the pretenders!",
        },
    )
```
- Can pass `body` instead of `json`.
- `url` can be a compiled regex.
Use Django's `@override_settings` decorator to override scalar settings:

```python
from django.test import override_settings

@override_settings(TIME_ZONE="Europe/London")
def test_something():
    ...
```
Pytest-Django includes an equivalent `settings` Pytest fixture:

```python
def test_run(settings):
    # Assignments to the `settings` object will be reverted when this test completes.
    settings.FOO = 1
    run()
```
Use Django's `@modify_settings` decorator to prepend/append to list settings:

```python
from django.test import modify_settings

@modify_settings(MIDDLEWARE={
    "prepend": "some.other.thing",
    "append": "some.alternate.thing",
})
def test_something_with_middleware():
    ...
```
Both `override_settings` and `modify_settings` can be used as class decorators, but only on `TestCase` subclasses.
Calling the system clock in tests is generally a bad idea as it can lead to
flakiness. Better to pass in relevant dates or datetimes, or if that isn't
possible, use `time_machine`:

```python
import time_machine

def test_something():
    # Can pass a string, date/datetime instance, lambda function or iterable.
    with time_machine.travel(dt, tick=True):
        ...
```
or `freezegun`:

```python
import freezegun

def test_something():
    # Can pass a string, date/datetime instance or lambda function.
    with freezegun.freeze_time(dt):
        ...
```
Note:

- Both can be used as decorators.
- Within the context block, call `move_to(other_dt)` on the object yielded by the context manager (a traveller object for `time_machine`, a frozen-time object for `freezegun`) to move time to a specified value.
The `time_machine.travel` decorator is useful for debugging flaky tests that
fail when run at certain times (like during the DST changeover day). To recreate
the flaky failure, pin time to when the test failed on your CI service:

```python
@time_machine.travel("2021-03-28T23:15Z")
def test_that_failed_last_night():
    ...
```
Spying involves replacing an argument to the system-under-test, or one of its collaborators, with a fake version so you can verify how it was called.

Spies can be created as `unittest.mock.Mock` instances using `mock.create_autospec`.

If stubs are actors, then spies are critics.
Here's an example of passing a spy as an argument to the system-under-test:

```python
from unittest import mock

from foobar.vendor import acme
from foobar import usecase

def test_client_called_correctly():
    # Create spy.
    client = mock.create_autospec(spec=acme.Client, instance=True)

    # Pass spy object as an argument.
    usecase.run(client=client, x=100)

    # Check spy was called correctly.
    client.do_the_thing.assert_called_with(x=100)
```
Here's an example of using a spy for a collaborator of the system-under-test:

```python
from unittest import mock

from foobar.vendor import acme
from foobar import usecase

@mock.patch.object(usecase, "get_client")
def test_client_called_correctly(get_client):
    # Create spy and ensure the factory function returns it.
    client = mock.create_autospec(spec=acme.Client, instance=True)
    get_client.return_value = client

    # Here the client object is constructed from within the use case by calling
    # a `get_client` factory function.
    usecase.run(x=100)

    # Check spy was called correctly.
    client.do_the_thing.assert_called_with(x=100)
```
As you can see, the use of dependency injection in the first example leads to simpler tests.
Objects from Python's `unittest.mock` library provide several `assert_*` methods
that can be used to verify how a spy was called:

- `assert_called`
- `assert_called_once`
- `assert_called_with` (only checks the last call to the spy)
- `assert_called_once_with`
- `assert_any_call`
- `assert_not_called`
- `assert_has_calls`
Note `assert_has_calls` shouldn't be used to check all calls to the spy as it
won't fail if additional calls are made. For that it's better to use the
`call_args_list` property. E.g.:

```python
assert spy.call_args_list == [
    mock.call(x=1),
    mock.call(x=2),
]
```
If the order in which a spy is called is not important, then use this pattern:

```python
assert len(spy.call_args_list) == 2
assert mock.call(x=1) in spy.call_args_list
assert mock.call(x=2) in spy.call_args_list
```
If you only want to make an assertion about some of the arguments passed to a
spy, use the `unittest.mock.ANY` helper, which passes equality checks against everything:

```python
m.assert_called_with(x=100, y=mock.ANY)
```
Spies have several attributes that store how they were called:

```python
Mock.called          # bool for whether the spy was called
Mock.call_count      # how many times the spy was called
Mock.call_args       # a tuple of (args, kwargs) of how the spy was LAST called
Mock.call_args_list  # a list of calls
Mock.method_calls    # a list of methods and attributes called
Mock.mock_calls      # a list of ALL calls to the spy (and its methods and attributes)
```
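For illustration, here's how a few of these attributes behave on a throwaway `Mock`:

```python
from unittest import mock

spy = mock.Mock()
spy(1, key="a")
spy(2, key="b")

assert spy.called
assert spy.call_count == 2
# call_args holds the *last* call only.
assert spy.call_args == mock.call(2, key="b")
# call_args_list holds every call, in order.
assert spy.call_args_list == [mock.call(1, key="a"), mock.call(2, key="b")]
```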
The `call` objects returned by `Mock.call_args` and `Mock.call_args_list` are
two-tuples of (positional args, keyword args), but the `call` objects returned by
`Mock.method_calls` and `Mock.mock_calls` are three-tuples of (name, positional
args, keyword args).
Use `unittest.mock.call` objects to make assertions about calls:

```python
assert mock_function.call_args_list == [mock.call(x=1), mock.call(x=2)]
assert mock_object.method_calls == [mock.call.add(1), mock.call.delete(x=1)]
```
To make fine-grained assertions about function or method calls, you can use the
`call_args` property:

```python
_, call_kwargs = some_mocked_function.call_args
assert "succeeded" in call_kwargs["message"]
```
You can wrap an object with a mock so that method calls are forwarded on but also recorded for later examination.

For direct collaborators, use something like:

```python
from unittest import mock

from foobar.vendors import client
from foobar import usecases

def test_injected_client_called_correctly():
    client_spy = mock.Mock(wraps=client)
    usecases.do_the_thing(client_spy, x=100)
    client_spy.some_method.assert_called_with(x=100)
```
For indirect collaborators, use `mock.patch.object`:

```python
from unittest import mock

from foobar.vendors import client
from foobar import usecases

@mock.patch.object(usecases, "client", wraps=client)
def test_collaborator_client_called_correctly(client_spy):
    usecases.do_the_thing(x=100)
    client_spy.some_method.assert_called_with(x=100)
```
Sentinels provide on-demand unique objects and are useful for passing into the system-under-test when the actual value of the argument isn't important:

```python
@mock.patch.object(somemodule, "collaborator")
def test_passing_sentinel(collaborator):
    arg = mock.sentinel.BAZ
    somemodule.target(arg)
    collaborator.assert_called_with(bar=arg)
```
- It makes it explicit that the test is using a stand-in object.
- Any attribute access other than `.name` raises `AttributeError`.
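A quick demonstration of sentinel behaviour:

```python
from unittest import mock

# Repeated access returns the same unique object.
assert mock.sentinel.BAZ is mock.sentinel.BAZ
# Different names give different objects.
assert mock.sentinel.BAZ is not mock.sentinel.FOO
# The repr includes the name, which helps with readable failure messages.
assert repr(mock.sentinel.BAZ) == "sentinel.BAZ"
```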
For tests that need to write something to a file location, use these patterns so we don't leave detritus around after the test run is finished.

This should only be needed where a filepath is an argument to the
system-under-test, such as in functional tests. For other types of tests, it is
preferable to pass file-like objects as arguments so tests can pass
`io.StringIO` instances.
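For example, a unit that accepts any file-like object can be tested entirely in memory (the `import_rows` function here is hypothetical):

```python
import csv
import io

def import_rows(fileobj):
    # Hypothetical unit: reads CSV rows from any file-like object.
    return list(csv.reader(fileobj))

def test_import_rows():
    # No filesystem access needed: pass an in-memory stream.
    f = io.StringIO("EA:E2001BND,01/10/2020,0.584955\n")
    assert import_rows(f) == [["EA:E2001BND", "01/10/2020", "0.584955"]]
```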
Here's how to create a temporary CSV file using Python's `tempfile` module:

```python
import csv
import tempfile

from django.core.management import call_command

def test_csv_import():
    with tempfile.NamedTemporaryFile(mode="w") as f:
        writer = csv.writer(f)
        writer.writerow(["EA:E2001BND", "01/10/2020", 0.584955, -0.229834])
        # Flush so the buffered row is on disk before the command reads it.
        f.flush()

        # Call the management command passing the CSV filepath as an argument.
        call_command("import_csv_file", f.name)
```
The same thing can be done using Pytest's `tmp_path` fixture, which provides a `pathlib.Path` object:

```python
import csv

from django.core.management import call_command

def test_csv_import(tmp_path):
    # Create temporary CSV file.
    csv_file = tmp_path / "temp.csv"
    with csv_file.open("w") as f:
        writer = csv.writer(f)
        writer.writerow(["EA:E2001BND", "01/10/2020", 0.584955, -0.229834])

    # Call the management command passing the CSV filepath as an argument.
    call_command("import_csv_file", csv_file)
```
Pytest provides a few other fixtures for creating temporary files and folders:

- `tmp_path_factory`: a session-scoped fixture for creating `pathlib.Path` temporary directories.
- `tmpdir`: a function-scoped fixture for creating `py.path.local` temporary directories.
- `tmpdir_factory`: a session-scoped fixture for creating `py.path.local` temporary directories.
## Functional testing

End-to-end tests that trigger the system by an external interface such as an HTTP request or CLI invocation.

Functional tests will necessarily be slow and fail with less-than-helpful error messages. That's ok: the value they provide is regression protection. You can sleep well at night knowing that all your units are plumbed together correctly.
Follow these patterns when writing functional tests:

- Explicitly comment each phase of a test to explain what is going on. Don't rely on the test name or a docstring.
- Strive to make the test as end-to-end as possible. Exercise the system using an external call (like an HTTP request) and only mock calls to external services.
- Ensure all relevant settings are explicitly defined in the test set-up. Don't rely on implicit setting values.
Use `django-webtest` for testing Django views. It provides a readable API for clicking on buttons and submitting forms.

Pass `status="*"` so 4XX or 5XX responses don't raise an exception.
To fill in a multi-checkbox widget, assign a list of the values to select. For Django model widgets, this is the PKs of the selected models:

```python
form = page.forms["my_form"]
form["roles"] = [some_role.pk, other_role.pk]
response = form.submit()
```
Use something like this:

```python
import datetime
import io

import time_machine
from dateutil import tz
from django.core.management import call_command

def test_some_command():
    # Capture output streams.
    stdout = io.StringIO()
    stderr = io.StringIO()

    # Control time when the management command runs.
    run_at = datetime.datetime(2021, 2, 14, 12, tzinfo=tz.gettz("Europe/London"))
    with time_machine.travel(run_at):
        call_command("some_command_name", stdout=stdout, stderr=stderr)

    # Check command output (if any).
    assert stdout.getvalue() == "..."
    assert stderr.getvalue() == "..."

    # Check side-effects.
```
or using Octo's private pytest fixtures:

```python
import time_machine
from django.core.management import call_command

def test_some_command(command, factory):
    run_at = factory.local.dt("2021-03-25 15:12:00")

    # Run command at a fixed point in time.
    with time_machine.travel(run_at):
        result = command.run("some_command_name")

    # Check command output (if any).
    assert result.stdout.getvalue() == "..."
    assert result.stderr.getvalue() == "..."

    # Check side-effects.
```
Use something like this:

```python
# tests/functional/conftest.py
import pytest
from click.testing import CliRunner

@pytest.fixture
def runner():
    yield CliRunner(
        # Provide a dictionary of environment variables so that configuration
        # parsing works. Don't provide any values though; ensure tests specify
        # the values relevant to them.
        env=dict(...)
    )
```

```python
# tests/functional/test_command.py
import main
import time_machine

def test_some_command(runner):
    # Run command at a fixed point in time, specifying any relevant env vars.
    with time_machine.travel(dt, tick=True):
        result = runner.invoke(
            main.cli,
            args=["name-of-command"],
            catch_exceptions=False,
            env={
                "VENDOR_API_KEY": "xxx",
            },
        )

    # Check exit code.
    assert result.exit_code == 0, result.exception

    # Check side-effects.
```
## Running tests

By default, pytest captures output but shows it if a test fails.

Use `-s` to prevent output capturing; this is required for `ipdb` breakpoints to work, but not for `pdb` or `pdbpp`.
## Using Pytest fixtures

Fixtures defined in a `conftest.py` module can be used in several ways:

- Apply to a single test by adding the fixture name as an argument.
- Apply to every test in a class by decorating with `@pytest.mark.usefixtures("...")`.
- Apply to every test in a module by defining a module-level `pytestmark` variable:

  ```python
  pytestmark = pytest.mark.usefixtures("...")
  ```

- Apply to every test in a test suite using the `pytest.ini` file:

  ```ini
  [pytest]
  usefixtures = ...
  ```

See the docs on the `usefixtures` mark.
Pytest fixtures are tricky to configure, so it's often best to inject a factory function or class that can be called with configuration arguments.
## Writing high quality code and tests

High quality code is easy to change.

Some anti-patterns for unit tests:
- Lots of mocks - this indicates your unit under test has too many collaborators.
- Nested mocks - this indicates your unit under test knows intimate details about its collaborators (that it shouldn't know).
- Mocking indirect collaborators - it's best to mock the direct collaborators of a unit being tested, not those further down the call chain. Use of `mock.patch` (instead of `mock.patch.object`) is a smell of this problem.
- Careless factory usage - beware of factories creating lots of unnecessary related objects, which can expose test flakiness around ordering (as the test assumes there's only one of something).
- Conditional logic in tests - this is sometimes done to share some set-up steps but often makes the test much harder to understand. It's almost always better to create separate tests (with no conditional logic) and find another way to share common code.
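To illustrate the last point, here's a sketch (with a hypothetical `final_price` function) of replacing one branching test with two straight-line tests:

```python
def final_price(amount, premium):
    # Hypothetical unit: premium customers get a bigger discount.
    return amount - (20 if premium else 10)

# Avoid: one test whose expectation depends on a branch.
def test_final_price(premium=False):
    expected = 80 if premium else 90
    assert final_price(100, premium) == expected

# Prefer: two straight-line tests with obvious expectations.
def test_final_price_for_premium_customer():
    assert final_price(100, premium=True) == 80

def test_final_price_for_standard_customer():
    assert final_price(100, premium=False) == 90
```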
Rules of thumb:

- Design code to use dependency injection and pass in adapters that handle IO. This includes clients for third-party APIs and services for talking to the network, file system or database.
- Keep IO separate from business logic. You want your business logic to live in side-effect-free, pure functions.
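Here's a sketch of both rules together (the `repo` adapter and all names are illustrative): the pure calculation needs no mocks, while the IO edge takes an injected adapter that tests replace with a spy.

```python
from unittest import mock

def calculate_interest(balance, rate):
    # Pure business logic: no IO, trivially testable.
    return balance * rate

def apply_interest(repo, account_id, rate):
    # IO lives at the edge, behind an injected adapter.
    balance = repo.get_balance(account_id)
    repo.set_balance(account_id, balance + calculate_interest(balance, rate))

def test_apply_interest():
    # The adapter is stubbed for input and spied on for output.
    repo = mock.Mock()
    repo.get_balance.return_value = 100

    apply_interest(repo, account_id=1, rate=0.5)

    repo.set_balance.assert_called_once_with(1, 150.0)
```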
Useful talks:

- Fast test, slow test by Gary Bernhardt, Pycon 2012
- Stop using mocks by Harry Percival, Pycon 2020