
Behavioral Tests for Django


Background

This document assumes:

  • django 2+: links to documentation point to version 2.2, current at the time of writing. Everything below has only been tested with django 2.0.5.
  • behave as the python implementation of BDD and Gherkin. Everything below has been tested with behave 1.2.6.
  • behave-django as the django app providing integration with behave. Everything below has been tested with behave-django 1.3.0.

Although I have included extensive links to relevant docs, a basic understanding of the following concepts would help the reader along the way:

  • basic testing concepts (test setup and teardown, fixtures, test isolation), python's unittest, and testing in django
  • django migrations
  • django model fixtures
  • database transactions: in SQL at large, and in django.
  • The Gherkin language for behavioral test specification.
  • Browser automation, for example using selenium. Note that most of what is covered in this document is independent of whether you include browser automation in your behavioral tests or not. The small section on this topic has been tested with selenium python bindings 3.141.0 and Google Chrome 75.

Django background

Django settings

Your tests may need to modify parts of the django settings you use for your project.

If you already have a way of dispatching multiple settings files in your django project, you will probably have no problem specifying and modifying the settings used to run your test suite. If not, you might not find an easy way to override settings from within behavioral Step implementations.
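
One pragmatic option is a blanket override for the entire test run using django.test.override_settings in environment.py; a minimal sketch (the overridden setting below is only an illustration of whatever your tests need):

# environment.py
from django.test import override_settings

def before_all(context):
    # enable the override for the whole test run; the specific setting is only an example
    context.settings_override = override_settings(
        EMAIL_BACKEND='django.core.mail.backends.locmem.EmailBackend',
    )
    context.settings_override.enable()

def after_all(context):
    context.settings_override.disable()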

Django migrations

It is assumed that you know how migrations work. Of particular interest are data migrations, which differ from schema migrations in that they insert records into the database. This particular kind of migration needs to be handled with care in the context of tests; more on this below.

Django model fixtures

Django has support for serialized database contents to be dumped and loaded to and from json, yaml, and xml. I will assume you know how these work; the main points to remember are:

  • These fixtures can be loaded and dumped on a per-app or per-table basis.
  • These fixtures are transparent db dumps: when dumped they include everything that exists in the database tables (including auto-incremented primary keys), and when loaded they are inserted exactly as-is; i.e. model validation and pre/post model save hooks are circumvented, as is the auto-increment policy of the database (since primary key values are provided). This means that handling lots of primary and foreign keys with model fixtures is cumbersome, since the fixtures contain hardcoded values for all columns.
  • Constraints that are implemented at the database level (as opposed to the ORM level) are still in effect when loading such fixtures. Examples include:
    • data type constraints (e.g. feeding an integer to a varchar column),
    • foreign key constraints (e.g. a foreign key column referring to a non-existent primary key in another table),
    • uniqueness constraints
  • Fixture files of this type are auto-discovered, by manage.py or by django/behave test utilities, from <app>/fixtures by simply specifying the filename relative to this directory.
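
For reference, a sketch of producing and consuming such a fixture programmatically; the app name and paths here are hypothetical:

from django.core.management import call_command

# dump everything in the (hypothetical) institutions app into a model fixture
call_command('dumpdata', 'institutions', indent=2,
             output='institutions/fixtures/institutions.json')

# load it back; the filename is resolved against every <app>/fixtures directory
call_command('loaddata', 'institutions.json')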

Behave

It is assumed that you know how to run a Hello World! example using behave.

Behave basics

See the documentation for Gherkin and behave for full details.

Behave tests are structured as follows:

  • A unit of testing is a Scenario. A Scenario is defined by a number of Steps, some of which prepare a user scenario (using Given, When, And) and the rest of which make assertions about the behavior of the app (using Then, And, But).
    • Both kinds of Steps are implemented in python code. This means that you can do whatever you want in the implementation of a Step. For example, you can add a sys.stdin.readline() to the implementation of any Step to make the execution of your test suite pause until you press Enter in the terminal; this gives you time to debug the state of the browser, files, or the database in the middle of the test suite. A sketch of a few Step implementations follows this list.
    • Given, When, Then, And, and But are all the same under the hood; use them for readability. docs.
    • Get a list of available steps via --steps-catalog (same applies to behave-django CLI).
  • Scenarios are grouped together in Features. This provides:
    • semantic coherence relating test cases to user needs (possibly expressed as user stories),
    • a way for groups of related Scenarios to share certain aspects of their preconditions (more on this below).
  • Features and Scenarios can have Tags with the @some_tag syntax (multiple tags are allowed). Tags can be used for:
    • easily filtering tests in the command line using --tags (boolean expressions are allowed)
    • specifying custom behavior to be applied to a Feature or a Scenario using the before_tag and after_tag hooks (more on hooks below).
    • docs
  • Fixtures are needed to simplify the implementation of preconditions for Features, Scenarios, and Steps. These could be anything from launching a (headless) web browser, to creating files, and most importantly preparing the database while providing appropriate isolation between tests.
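
As referenced above, here is a minimal sketch of a few Step implementations; the step wording and the use of django's built-in User model are arbitrary choices for illustration:

# steps/example_steps.py  (behave discovers step modules in the steps/ directory)
from behave import given, when, then
from django.contrib.auth.models import User

@given(u'a user named "{username}" exists')
def create_user(context, username):
    # anything created here lives in the test database until django flushes it
    context.user = User.objects.create_user(username=username, password='secret')

@when(u'I count the users')
def count_users(context):
    context.user_count = User.objects.count()

@then(u'there should be {expected:d} users')
def assert_user_count(context, expected):
    assert context.user_count == expected, context.user_count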

Behave hooks

Behave provides a list of hooks for customizing setup and teardown steps involved at various levels, docs.

  • Feature level: {before,after}_feature(context, feature) are called before and after executing every Feature.
  • Scenario level: {before,after}_scenario(context, scenario) are called before and after executing every Scenario.
  • Step level: {before,after}_step(context, step) are called before and after executing every Step.
    • Also relevant here is the signature of Step implementations, which is required to be:
      @given("I do something")
      def some_step_impl(context, *args):
          # ...
  • Tags: {before,after}_tag(context, tag) are called once per tag for every Feature and Scenario that has tags.

To understand how these hooks work and how to use them effectively you need to understand the context object that is passed around to hooks and Step implementations.

Behave context and its stack

As your test suite gets executed, behave implements its magic via "layers" (organized in a stack, see below) that are dynamically added and removed, docs.

  • testrun is the outermost level and applies to everything in the test suite. Every time you execute your test suite a testrun layer is created and kept for the entire duration of the tests.
  • feature corresponds to a single Feature and applies to all the Scenarios in it. Every time a new Feature is being executed:
    1. a new feature layer is added on top of the testrun layer,
    2. the before_feature hook is called,
    3. all Scenarios in the Feature are executed,
    4. the after_feature hook is called, and
    5. finally, the feature layer is removed.
  • scenario corresponds to a single Scenario and applies to all the Steps in it. Every time a new Scenario is being executed:
    1. a new scenario layer is added on top of the current feature layer,
    2. the before_scenario hook is called,
    3. all Steps in the Scenario are executed,
    4. the after_scenario hook is called,
    5. and finally the scenario layer is removed.
  • step corresponds to a single Step. Every time a new Step is being executed:
    1. a new step layer is created on top of the current scenario layer,
    2. the before_step hook is called,
    3. the Step implementation is executed,
    4. the after_step hook is called, and
    5. finally, the step layer is removed.

What is the context object and what are these layers?

The context object is a single instance of behave.runner.Context that lives for the entirety of your test suite. You can verify this by inspecting id(context) in various places (Step implementations, hooks). The fact that interactions with the context object behave differently (and hopefully appropriately) depending on where it is used, i.e. in which hook, is made possible by behave's internal book-keeping of the layers listed above.

These layers are stored as a stack of dictionaries in the _stack attribute of the context object. Among other things this is where the magic for defining the scope of fixtures happens (more on fixtures below):

  • when using use_fixture to use behave fixtures in a before_X hook, fixture setup is executed and its cleanup is registered in the '@cleanups' key of the appropriate layer (corresponding to X) and executed after X has been executed (more on this below).
  • when setting context.fixtures to use behave-django fixtures in a before_X hook, fixture setup (i.e. loaddata) is executed before X is executed, and before every Y layer that sits atop the X layer (more on this below).

An Example

Suppose we have the following in environment.py:

from behave import given

def print_context(context):
    from pprint import pprint as pp
    print('-> context object (%d) has stack:' % id(context))
    pp(context._stack)

def before_all(context):
    print('-> Before all')
    context.fixtures = ['institutions.json']
    print_context(context)

def before_feature(context, feature):
    print('-> Before feature', feature)
    context.fixtures = ['users.json']
    print_context(context)

def before_scenario(context, scenario):
    print('-> Before scenario', scenario)
    print_context(context)

def before_tag(context, tag):
    print('-> Before tag', tag)
    print_context(context)

def before_step(context, step):
    print('-> Before step', step)
    print_context(context)

@given(u'I do nothing')
def nothing_impl(context):
    print('-> Step implementation', '"I do nothing"')
    print_context(context)

And the following feature definition:

@some_feature_tag
Feature: A Feature

    @some_scenario_tag
    Scenario: A Scenario
        Given I do nothing

Here is the result of executing behave-django, formatted and annotated for clarity:

$ python manage.py behave --no-capture
Creating test database for alias 'default'...

      -> Before all
      -> context object (140105905721240) has stack:
      [{'@cleanups': [], # behave fixture cleanup closures end up here
        '@layer': 'testrun',
        'config': <behave.configuration.Configuration object at 0x7f6cf2f4e438>,
        'feature': None,
        'fixtures': ['institutions.json'],
        'log_capture': <behave.log_capture.LoggingCapture object at 0x7f6cf2bd0208>,
        'stderr_capture': <_io.StringIO object at 0x7f6cf29f8288>,
        # ... truncated
        }]
      -> Before tag some_feature_tag
      -> context object (140105905721240) has stack:
      [{'@cleanups': [],
        '@layer': 'feature',
        'feature': <Feature "A Feature": 1 scenario(s)>,
        'tags': {'some_feature_tag'}},
       {'@cleanups': [],
        '@layer': 'testrun',
        'config': <behave.configuration.Configuration object at 0x7f6cf2f4e438>,
        'feature': None,
        'fixtures': ['institutions.json'],
        'log_capture': <behave.log_capture.LoggingCapture object at 0x7f6cf2bd0208>,
        'stderr_capture': <_io.StringIO object at 0x7f6cf29f8288>,
        # ... truncated
        }]
      -> Before feature <Feature "A Feature": 1 scenario(s)>
      -> context object (140105905721240) has stack:
      [{'@cleanups': [],
        '@layer': 'feature',
        'feature': <Feature "A Feature": 1 scenario(s)>,
        'fixtures': ['users.json'],
        'tags': {'some_feature_tag'}},
       {'@cleanups': [],
        '@layer': 'testrun',
        'config': <behave.configuration.Configuration object at 0x7f6cf2f4e438>,
        'feature': None,
        'fixtures': ['institutions.json'],
        'log_capture': <behave.log_capture.LoggingCapture object at 0x7f6cf2bd0208>,
        'stderr_capture': <_io.StringIO object at 0x7f6cf29f8288>,
        # ... truncated
        }]

@some_feature_tag
Feature: A Feature # portal/tests/features/basic.feature:2

      -> Before tag some_scenario_tag
      -> context object (140105905721240) has stack:
      [{'@cleanups': [],
        '@layer': 'scenario',
        'scenario': <Scenario "A Scenario">,
        'tags': {'some_scenario_tag', 'some_feature_tag'}},
       {'@cleanups': [],
        '@layer': 'feature',
        'feature': <Feature "A Feature": 1 scenario(s)>,
        'fixtures': ['users.json'],
        'tags': {'some_feature_tag'}},
       {'@cleanups': [],
        '@layer': 'testrun',
        'config': <behave.configuration.Configuration object at 0x7f6cf2f4e438>,
        'feature': None,
        'fixtures': ['institutions.json'],
        'log_capture': <behave.log_capture.LoggingCapture object at 0x7f6cf2bd0208>,
        'stderr_capture': <_io.StringIO object at 0x7f6cf29f8288>,
        # ... truncated
        }]
      -> Before scenario <Scenario "A Scenario">
      -> context object (140105905721240) has stack:
      [{'@cleanups': [],
        '@layer': 'scenario',
        'scenario': <Scenario "A Scenario">,
        'tags': {'some_scenario_tag', 'some_feature_tag'}},
       {'@cleanups': [],
        '@layer': 'feature',
        'feature': <Feature "A Feature": 1 scenario(s)>,
        'fixtures': ['users.json'],
        'tags': {'some_feature_tag'}},
       {'@cleanups': [],
        '@layer': 'testrun',
        'config': <behave.configuration.Configuration object at 0x7f6cf2f4e438>,
        'feature': None,
        'fixtures': ['institutions.json'],
        'log_capture': <behave.log_capture.LoggingCapture object at 0x7f6cf2bd0208>,
        'stderr_capture': <_io.StringIO object at 0x7f6cf29f8288>,
        # ... truncated
        }]

  @some_scenario_tag
  Scenario: A Scenario  # portal/tests/features/basic.feature:5
    Given I do nothing  # portal/tests/features/environment.py:38

      -> Before step <given "I do nothing">
      -> context object (140105905721240) has stack:
      [{'@cleanups': [],
        '@layer': 'scenario',
        'log_capture': <behave.log_capture.LoggingCapture object at 0x7f6cf29ddd30>,
        'scenario': <Scenario "A Scenario">,
        'stderr_capture': <_io.StringIO object at 0x7f6cf29f8288>,
        'tags': {'some_scenario_tag', 'some_feature_tag'},
        'test': <behave_django.testcase.BehaviorDrivenTestCase testMethod=runTest>},
       {'@cleanups': [],
        '@layer': 'feature',
        'feature': <Feature "A Feature": 1 scenario(s)>,
        'fixtures': ['users.json'],
        'tags': {'some_feature_tag'}},
       {'@cleanups': [],
        '@layer': 'testrun',
        'config': <behave.configuration.Configuration object at 0x7f6cf2f4e438>,
        'feature': None,
        'fixtures': ['institutions.json'],
        'log_capture': <behave.log_capture.LoggingCapture object at 0x7f6cf2bd0208>,
        'stderr_capture': <_io.StringIO object at 0x7f6cf2ac74c8>,
        # ... truncated
        }]
      -> Step implementation "I do nothing"
      -> context object (140105905721240) has stack:
      [{'@cleanups': [],
        '@layer': 'scenario',
        'log_capture': <behave.log_capture.LoggingCapture object at 0x7f6cf29ddd30>,
        'scenario': <Scenario "A Scenario">,
        'stderr_capture': <_io.StringIO object at 0x7f6cf29f8288>,
        'table': None, # this is where Gherkin Tables end up
        'tags': {'some_scenario_tag', 'some_feature_tag'},
        'test': <behave_django.testcase.BehaviorDrivenTestCase testMethod=runTest>,
        'text': None},
       {'@cleanups': [],
        '@layer': 'feature',
        'feature': <Feature "A Feature": 1 scenario(s)>,
        'fixtures': ['users.json'],
        'tags': {'some_feature_tag'}},
       {'@cleanups': [],
        '@layer': 'testrun',
        'config': <behave.configuration.Configuration object at 0x7f6cf2f4e438>,
        'feature': None,
        'fixtures': ['institutions.json'],
        'log_capture': <behave.log_capture.LoggingCapture object at 0x7f6cf2bd0208>,
        'stderr_capture': <_io.StringIO object at 0x7f6cf2ac74c8>,
        # ... truncated
        }]

    Given I do nothing  # portal/tests/features/environment.py:38 0.001s

1 feature passed, 0 failed, 0 skipped
1 scenario passed, 0 failed, 0 skipped
1 step passed, 0 failed, 0 skipped, 0 undefined
Took 0m0.001s
Destroying test database for alias 'default'...

Other useful Gherkin features

  • You can use Tables in Step definitions to complement fixtures. Use Tables for simple cases involving one or two models and fixtures for more complex cases involving many models; table docs (more on fixtures below). Tables are parsed by behave and handed to the Step implementation via the context; populating the DB accordingly still needs to happen in the Step implementation (a sketch of such a Step follows this list).
  • Each Feature can have a Background section specifying a set of Steps which are interpreted and executed in the same way as Scenario Steps: they both use Given, When, And and both come from the same pool of available steps. The Steps in the Background section effectively get prepended to the steps of every scenario, i.e. are executed again for every scenario after the before_scenario hook. docs
  • Parametric scenarios can be implemented using Scenario Outline (as per behave docs, or equivalently Scenario Template as per gherkin spec) with parameters (variable parts) defined in table format under Examples, docs.
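
As an illustration of the first point, a sketch of a Step implementation that populates the database from a Table attached to the Step; the app and model names are hypothetical:

from behave import given
from institutions.models import Institution  # hypothetical app and model

@given(u'the following institutions exist')
def create_institutions(context):
    # context.table holds the Gherkin Table attached to this Step;
    # each row is addressable by its column headings
    for row in context.table:
        Institution.objects.create(name=row['name'], city=row['city'])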

Test Database

Inevitably you will want to specify a specific starting state for the database to be used for each test. The main django testing framework does a lot of the heavy lifting: a separate test database that is in some way "cleared" after each test case to provide an isolated environment for each test case. Behave-django uses the same underlying plumbing to turn Scenarios into django test cases.

Note that when using SQLite as a database backend (e.g. in dev), by default the test database will reside in memory. This can be changed; see section on debugging the database.

Database isolation

The following inheritance chain (discussed in detail below) captures most of what you need to know about how database isolation is achieved in django and how it's modified by behave-django:
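
Roughly, with class names as found in django 2.2 and behave-django 1.3 (intermediate classes may differ slightly between versions):

unittest.TestCase
  └── django.test.SimpleTestCase
        └── django.test.TransactionTestCase
              └── django.test.LiveServerTestCase
                    └── django.contrib.staticfiles.testing.StaticLiveServerTestCase
                          └── behave_django.testcase.BehaviorDrivenTestCase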

Notes:

  1. Migrations are run only once before the entire test suite (this is where data migrations become problematic; more on this below). How the data put in the database by various tests is flushed in teardown depends on whether you are using django.test.TransactionTestCase (in the above chain; what one has to use with behave-django) or django.test.TestCase (not in the above chain; discussed below).

  2. SimpleTestCase does not provide any DB isolation. It's merely a barebones addition to unittest.TestCase to set up and tear down very basic django things, e.g. settings.

  3. django.test.TransactionTestCase provides DB isolation by "flush[ing] the contents of the database to leave a clean slate" (as per code docs) in its teardown (cf. its _post_teardown()). This means data put in DB by migrations will be blown away after the first test.

  4. There is also django.test.TestCase(TransactionTestCase) (not in the above chain, and not to be confused with unittest.TestCase) which provides isolation by wrapping each test case in a DB transaction that is rolled back (as opposed to the blanket truncation of TransactionTestCase). Django code docs say:

    In most situations, TestCase should be preferred to TransactionTestCase as it allows faster execution. However, there are some situations where using TransactionTestCase might be necessary (e.g. testing some transactional behavior).

    Also see: https://docs.djangoproject.com/en/2.2/topics/testing/tools/#transactiontestcase

  5. The naming might be confusing: TransactionTestCase is for testing transactional code but does not use transactions for isolation (that would not have worked: nested transactions). TestCase uses transactions for test isolation and is therefore not appropriate for testing transactional code.

  6. When using behave-django you are stuck with TransactionTestCase's truncating behavior since BehaviorDrivenTestCase inherits from it. That's ok because if your app needs behavioral testing you probably also have a lot of transactional code.

  7. The way that django.test.TransactionTestCase provides database isolation between tests breaks data migrations. This can be fixed by setting serialized_rollback of all test cases to True:

    from behave_django.testcase import BehaviorDrivenTestCase
    BehaviorDrivenTestCase.serialized_rollback = True

    This will serialize the contents of the database pre-setup, store it in memory, and repopulate the DB to its original, presumably non-blank, state in teardown. docs.

  8. If for some reason your tests rely on hardcoded row ids (PKs), you should set BehaviorDrivenTestCase.reset_sequences = True. Otherwise, despite the database truncation after each Scenario, the internal row counter of the database will continue auto-incrementing and possibly collide with your hardcoded ids. This can get further confusing when confounded with issues like this.
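
Putting notes 7 and 8 together, a minimal sketch of what the top of environment.py could look like (whether you need either flag depends on your migrations and fixtures):

# environment.py -- module level, so it applies to every generated test case
from behave_django.testcase import BehaviorDrivenTestCase

# replay data created by data migrations after each Scenario's flush
BehaviorDrivenTestCase.serialized_rollback = True
# restart auto-increment counters so hardcoded PKs in fixtures stay valid
BehaviorDrivenTestCase.reset_sequences = True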

Behave-django and the database

Since each Scenario is ultimately turned into a django TransactionTestCase, the default behavior is for all tables in the database to be flushed after each Scenario (the tables themselves, i.e. the schema, survive).

Since django's DB isolation logic flushes everything after each Scenario, you might need some way of sharing certain DB states across Scenarios of the same Feature, or across all Scenarios in all Features.

Note: this is a completely separate problem from that of using data migrations that put records in the database; those need serialized_rollback as discussed in the previous section. You have to deal with that problem no matter what testing framework or library you use, since the root of the issue is django itself.

To specify the common DB state at any level broader than a single Scenario in a DRY way, you fundamentally have two options:

  1. You can use context.fixtures (provided by behave-django, discussed below) in before_{all,feature,scenario} to specify django model fixtures to be populated for the corresponding scope. This works by reloading the applicable fixtures for every Scenario (e.g. a Feature-level fixture will be re-populated from scratch in every Scenario of the Feature).
    • you cannot use behave fixtures (discussed below) for DB purposes because they have no way of reproducing themselves when django truncates the tables after each Scenario.
  2. You can use the Background feature of Gherkin (discussed above); docs https://behave.readthedocs.io/en/latest/gherkin.html#background
    • A related useful feature for specifying database contents (among others) in Gherkin, in Background sections or elsewhere in Scenarios, is Tables (discussed above); docs.

Debugging Tests

Debugging python code

If you want to debug your test-suite at an arbitrary Step or in a hook invocation you can either use a debugger or simply:

  1. add a sys.stdin.readline() to the hook function or your Step definition. For easy re-use you can use the following Step:

    @when("I pause to debug")
    def pause_to_debug(context):
        import sys
        sys.stdin.readline()

    which effectively blocks the test-suite, giving you time to debug the database, until you press Enter in the terminal.

  2. In order to use stdout/stderr as a means for debugging (e.g. if you are using pdb) you need to pass the --no-capture argument to python manage.py behave.

When using an automated browser, if you wish to inspect the state of the rendered page (and have access to your favorite dev tools) you can:

  1. Run the browser in non-headless mode (headless is what you probably need in your CI environment) and use the same technique as above to pause the execution of the test-suite, giving you time to debug the state of the browser.
  2. When a non-headless browser is not an option (in CI, or in dev environments without a windowing system) you can use the webdriver's ability to take screenshots, triggered either by a debugging tag or after each failed Step/Scenario (a sketch of such a hook follows this list).
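
For example, a minimal after_step hook along these lines, assuming context.browser is a selenium webdriver started by one of your fixtures and that a screenshots/ output directory is acceptable:

# environment.py
import os

def after_step(context, step):
    if step.status == 'failed' and hasattr(context, 'browser'):
        os.makedirs('screenshots', exist_ok=True)
        filename = 'screenshots/%s.png' % step.name.replace(' ', '_')
        context.browser.save_screenshot(filename)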

Debugging the Database

In order to debug the test database, here are two tricks that address different needs; docs: https://docs.djangoproject.com/en/2.2/topics/testing/overview/#the-test-database

  • If you want the test database to not be dropped after the test-suite has finished you can use the keepdb option which is also available in the behave-django CLI.
    • Note that when using SQLite for tests, the test database is by default kept in memory (super fast); force it onto disk by changing the 'NAME' key of settings.DATABASES['default']['TEST'] from None (the default) to a path on disk (a settings sketch follows this list). This slows down migrations/fixture-loading by a lot; docs
  • If you want to inspect the database mid-test you need to pause the test suite (see previous section) in addition to keeping the db alive (these two are independent, aside from the fact that for SQLite you still need an on-disk database to be able to debug it).
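
A sketch of the relevant settings change, assuming the usual settings.py layout (with os and BASE_DIR available) and the SQLite backend for the default alias:

# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
        'TEST': {
            # None (the default) means in-memory; a path forces the test DB onto disk
            'NAME': os.path.join(BASE_DIR, 'test_db.sqlite3'),
        },
    },
}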

Fixtures

Fixtures are useful for specifying reusable setup-cleanup pairs that perform a task necessary for running some portion of the test suite. They could apply to a single Step, a single Scenario, all Scenarios in a Feature, or all Features.

In addition, tags can offer more specificity and readability when applying fixtures to multiple Scenarios (either via the before_tag hook or by inspecting the tags of a Scenario or Feature in before_scenario and before_feature).

There are two very common use cases for fixtures in this context (there are plenty more depending on the requirements of your specific project):

  • (non-DB-related): specifying setup logic that is unrelated to the django database. The appropriate tool for this type of fixture is use_fixture provided by behave, see below. Examples:
    • starting a browser in setup (of scenario or feature or all tests) and closing it (more on browser automation below)
    • similarly: any ad-hoc temporary files, non-django database connections, or network connections.
  • (DB-related): specifying prior DB state for Scenarios on a per-scenario, per-feature, whole-test-suite, or per-tag basis. This use case relates to the discussion of the test database above. As noted there, the most appropriate tool for this type of fixture is context.fixtures provided by behave-django (see below).

Behave Fixtures

Behave fixtures can be used via use_fixture. There are two supported modes, docs.

  • A function that returns the fixture object, with nothing to do in cleanup.
  • A generator that yields the fixture object (it must yield exactly once). The remaining logic of the generator is captured as a closure by behave and executed upon cleanup.
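
A minimal sketch of the generator flavor, used here to manage a headless browser for the whole test run (selenium usage as per the versions noted in the Background section):

# environment.py
from behave import fixture, use_fixture
from selenium import webdriver

@fixture
def browser_chrome(context):
    # setup: runs when use_fixture() is called
    options = webdriver.ChromeOptions()
    options.add_argument('--headless')
    context.browser = webdriver.Chrome(options=options)
    yield context.browser
    # cleanup: captured as a closure and run when the calling layer is popped
    context.browser.quit()

def before_all(context):
    # the cleanup ends up in the testrun layer's '@cleanups'
    use_fixture(browser_chrome, context)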

For more advanced usage use_fixture_by_tag and use_composite_fixture_with are also available.

Behave-django fixtures

Behave-django provides a mechanism for loading django model fixtures (discussed above) as test fixtures in a way that is compatible with django's table-truncation that happens after each Scenario; docs.

To use these you need to specify the desired model fixtures (as paths relative to <app>/fixtures) via context.fixtures = ['my_fixture.json'] in:

  • before_all, in which case your django model fixture will apply to all Scenarios in the test suite. They will simply be loaded before every single Scenario. No cleanup is necessary since django automatically flushes them after every Scenario.
  • before_feature, similarly, for the model fixture to be loaded for all Scenarios in the given Feature.
  • before_scenario, similarly for the model fixture to be loaded for specified Scenarios.
  • before_step or via a Step implementation decorator. Note that such fixtures will not be cleaned up after the step; they will stick around for upcoming steps and will only be blown away by django's truncation at the end of the Scenario.

The way the above magic works is that behave-django uses the internal stack of the context object (discussed above) to store the fixtures specified in each layer, re-applying all fixtures from the lower layers (e.g. when executing a Scenario, all model fixtures from the testrun and feature layers are also loaded).

Using tags to manage fixtures

Behave tags provide an easy way to standardize custom logic. Fixtures are a prime candidate for this: you can have a convention where @fixtures.my_fixture translates to context.fixtures = ['my_fixture.json'] in an appropriate before hook. docs.
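
A minimal sketch of such a convention (the tag prefix and file naming are hypothetical; adjust to your own conventions):

# environment.py
def before_tag(context, tag):
    # e.g. @fixtures.users on a Feature or Scenario loads users.json for its scope
    if tag.startswith('fixtures.'):
        context.fixtures = [tag[len('fixtures.'):] + '.json']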

Browser automation (or not)

You may or may not need to use browser automation. Certain aspects of your app might lend themselves more appropriately to browser-less tests where all you need are Steps like these:

from behave import when, then
from django.urls import reverse

@when(u'I visit "{url}"')
def visit_url(context, url):
    context.test.client.login(username='test-user', password='test-password')
    context.response = context.test.client.get(reverse(url))

@then(u'I should see "{text}"')
def should_see(context, text):
    assert text in context.response.content.decode('utf-8')

You can make your tests more specific by parsing the DOM using BeautifulSoup:

from behave import then
from bs4 import BeautifulSoup

@then(u'I should see element "{selector}"')
def should_see_element(context, selector):
    html = context.response.content.decode('utf-8')
    doc = BeautifulSoup(html, 'html.parser')
    assert len(doc.select(selector)) > 0

Sessions and Cookies

You might need to log in a test-user as a Scenario step or fixture. This requires the browser to set the appropriate cookie for the appropriate domain (which includes the port and will be the live server provided by django).

Merely submitting the sign-in form in your Steps might not be enough (TODO: why?). A more standard way of automatically logging in the user without manipulating the sign-in form is to set the proper cookie for the browser:

@given("I am logged in")
def login(context):
  context.test.client.login(username='test-user', password='test-password')
  cookie = context.test.client.cookies['sessionid']

  context.browser.get(context.get_url('/')) # set the current domain for browser
  context.browser.add_cookie({
      'name': 'sessionid', # django's session ID cookie name
      'value': cookie.value, # contains django's encoded session data
      'secure': False, # or True if you are running your tests under HTTPS
      'path': '/',
  })

Notes on django sessions

> SELECT * FROM django_session;
session_key|session_data|expire_date
4zxl16eu0q5k8qrv61z3v7u4v5dw7nml|MzRiNjAxNmYyYjFlMDgyMDYzMzViZjkzMjQ3YTZiMDhhNzUwNDBkNjp7Il9jc3JmdG9rZW4iOiJvVXNVaGV3dFlnRWl2YlN2MUwwNWtJVDFqazlNZ3BabTRsWHhVZXVlMTZpRnAyUjZwMWpuZ3dneUFaemlINnhtIiwiX2F1dGhfdXNlcl9pZCI6IjYiLCJfYXV0aF91c2VyX2hhc2giOiI5ZDI2NjkzNGQ2NzE1ZTY1M2VlNzBkMGE4NDg0MzMyOTFlN2RiMWQ4IiwiX2F1dGhfdXNlcl9iYWNrZW5kIjoiZGphbmdvLmNvbnRyaWIuYXV0aC5iYWNrZW5kcy5Nb2RlbEJhY2tlbmQifQ==|2019-07-24
20:06:05.124000

There are two cookies set on the browser side: sessionid (matching session_key in DB) and csrftoken (contained in encoded session_data in DB).

Can I turn sessions into a django fixture?

Probably not. The sessions table can be dumped via python manage.py dumpdata sessions, but it is not easily fixturable since the session data contains an encoded version of the CSRF token, which is re-issued upon every request:

>>> from django.contrib.sessions.models import Session
>>> from django.contrib.auth.models import User
>>> session_key = '4zxl16eu0q5k8qrv61z3v7u4v5dw7nml'
>>> session = Session.objects.get(session_key=session_key)
>>> print(session.get_decoded()) # uses settings.SECRET_KEY to decode
{ '_auth_user_id': '6',
  '_csrftoken': 'f4L1TeMznkrtVwM0eqSz68oLbC4c1dFoWNfkFGGWjSIbk0zMVoSFJZ7NUhGcEts4',
  '_auth_user_hash': '9d266934d6715e653ee70d0a848433291e7db1d8',
  '_auth_user_backend': 'django.contrib.auth.backends.ModelBackend'
}