@hassek
Created March 14, 2018 16:50
Testing

Automated testing

Fixtures vs Factories

  • Anything that may be modified is not a good choice for fixtures; use factories instead.
  • Fixtures hide the testing logic; don't use them to build use cases, use them to ease the testing process.
  • Fixtures still have some use cases: easing big data representations and objects that should never change (emails, user metrics, etc.). A short sketch of the split follows this list.
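
To make the split concrete, here is a minimal sketch (the model, fixture file, and factory module names are hypothetical): static reference data is loaded from a fixture, while anything the test will touch comes from a factory.

from django.test import TestCase

from report.factories import ReportFactory  # hypothetical factory module


class ReportReadinessTest(TestCase):
    # Metric kinds never change, so a fixture keeps the setup short
    fixtures = ["metric_kinds.json"]

    def test_marking_a_report_ready(self):
        # The report is modified by the test, so a factory builds it
        report = ReportFactory(ready=False)
        report.ready = True
        report.save()
        report.refresh_from_db()
        self.assertTrue(report.ready)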

Saving vs not saving in the DB

I like to save everything to the DB until it becomes a problem; this avoids a
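
With factory_boy (used in the examples below) switching between the two is mostly a one-word change: create() persists to the DB while build() keeps the object in memory. A minimal sketch using the ReportFactory defined further down:

saved_report = ReportFactory()          # DjangoModelFactory default: saved to the DB
also_saved = ReportFactory.create()     # explicit equivalent of the line above
unsaved_report = ReportFactory.build()  # in-memory only, never hits the DB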

Types of testing

  • Unit testing
  • Action testing
  • Integration testing
    • Docker is amazing for this type of test (see the sketch after this list)
  • Live testing <In my ideal world>
    • Copy and send a % of the production requests to QA and have an exact copy of production data.
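
As a sketch of what an integration test can look like, assuming docker-compose exposes the service on localhost (the URL and port are hypothetical):

import unittest
import urllib.request


class TeamCsvIntegrationTest(unittest.TestCase):
    def test_csv_endpoint_is_reachable(self):
        # Talks to the real containerised stack instead of mocking it out
        with urllib.request.urlopen("http://localhost:8000/team/csv") as response:
            self.assertEqual(response.status, 200)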

The testing process

  1. Create tests based on requirements
    • Defining the values you expect beforehand is very important.
    • For code migrations/big refactors, the value you expect is the one received before the changes take place (see the sketch after this list).
    • Be careful not to hide the testing logic; it should be easy to understand how and why you are testing.
  2. Build the code
  3. Update the testing logic so it exercises the actual business logic
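
For the refactor case mentioned in step 1, a minimal sketch (the function name is hypothetical, and the expected number stands in for whatever the pre-refactor code returned when run once):

def test_monthly_total_unchanged_by_refactor(self):
    report = ReportFactory()
    # Value captured from the code as it behaved before the refactor
    expected_total = 1234.56
    self.assertEqual(compute_monthly_total(report), expected_total)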

Rules of thumb

  1. Test data should belong only to the test; it shouldn't be modifiable by or available to other tests. In other words, tests should be isolated and idempotent.
  2. DO make third party calls once and copy the response.
  3. Mock the third party calls with the exact response captured in the previous step (a sketch follows this list).
  4. Use real data when possible! Assumptions are where most errors start.
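
Rules 2 and 3 together look roughly like this (the client path, method names, and payload are hypothetical; the payload stands in for a response copied verbatim from one real call):

from unittest.mock import patch

# Response captured once from the real third party API, then reused in every run
RECORDED_RESPONSE = {"user": "team_leader@example.com", "metrics": {"visits": 42}}


@patch("report.clients.fetch_third_party_metrics", return_value=RECORDED_RESPONSE)
def test_report_uses_third_party_metrics(self, mock_fetch):
    report = ReportFactory()
    report.refresh_metrics()  # hypothetical method that normally calls the API
    mock_fetch.assert_called_once()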

Mockers

There are plenty of mocking libraries out there

  • mocker
  • mock
  • mockito
  • etc

I like to use the one that comes by default in Python, which is mock, but this is a personal preference. As of today I haven't found a single case where I couldn't do my mocking right with that tool.

Some examples

Mock an object using a with statement

from unittest.mock import patch

# Mock credentials manager to return the team leader as the authenticated user
with patch.object(CredentialsManager, 'get_most_recent_user_email', return_value=team.leader.user):
    response = self.client.get("/team/csv")

Mock an object for a whole function

@patch('report.models.Report')
def testing_function(self, mock_report):
    mock_report.get_post_process.return_value = ReportPostProcessor()

or even a whole testing class

@patch('report.models.Report')
class TestingMyHomies(TestCase):
    def test_my_case(self, mock_report):
        mock_report.get_post_process.return_value = ReportPostProcessor()

Raise errors

with patch.object(Report, 'get_post_process', side_effect=ValueError):
    logic_function()  # <-- will raise ValueError when the report method is called.

Factory boy examples

Lazy attributes are evaluated when the object is instantiated, so you can add some logic to fields if necessary.

class User(DjangoModelFactory):
    # Meta, first_name, last_name - as above...
    is_staff = False

    @lazy_attribute
    def email(self):
        domain = "myapp.com" if self.is_staff else "example.com"
        return self.username + "@" + domain

Sometimes an object has no meaning without other objects (a report and its metrics, for example), and post-generation hooks help a lot here.

class ReportFactory(DjangoModelFactory):
    class Meta:
        model = Report

    id = Sequence(lambda x: uuid4().hex)
    credentials = SubFactory(CredentialsFactory)
    period_to = LazyAttribute(lambda x: datetime.now(pytz.UTC).replace(day=1, hour=0, minute=0, second=0,
        microsecond=0))
    period_from = LazyAttribute(lambda x: (datetime.now(pytz.UTC).replace(day=1, hour=0, minute=0, second=0,
        microsecond=0) - timedelta(days=1)).replace(day=1))
    ready = True

    @post_generation
    def metrics(self, create, extracted, **kwargs):
        if not create or extracted is False:
            return

        # Assuming the metrics fixture was loaded on this case.
        # Good example on when fixtures are useful vs factories
        for mtr in Metric.objects.all():
            MetricFactory(kind=mtr.kind, value=mtr.value, report=self)
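
Because the hook is named metrics, passing metrics=False sets the extracted argument and skips metric creation entirely:

report_with_metrics = ReportFactory()       # post_generation runs, metrics are attached
bare_report = ReportFactory(metrics=False)  # extracted is False, so the hook returns early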

Typical case for the Django User and Profile models

@factory.django.mute_signals(post_save)
class ProfileFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = my_models.Profile

    title = 'Dr'
    # We pass in profile=None to prevent UserFactory from creating another profile
    # (this disables the RelatedFactory)
    user = factory.SubFactory('app.factories.UserFactory', profile=None)

@factory.django.mute_signals(post_save)
class UserFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = auth_models.User

    username = factory.Sequence(lambda n: "user_%d" % n)

    # We pass in 'user' to link the generated Profile to our just-generated User
    # This will call ProfileFactory(user=our_new_user), thus skipping the SubFactory.
    profile = factory.RelatedFactory(ProfileFactory, 'user')
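
With the signals muted and the SubFactory/RelatedFactory pair wired up, either factory yields a consistent linked pair:

user = UserFactory()        # also creates the linked Profile via the RelatedFactory
profile = ProfileFactory()  # also creates the linked User via the SubFactory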

Let's build a test

Based on the reports-portal code on the automation_tests branch, let's do these steps:

  • Build a test to generate a team csv report
    • Build factories for team and team member
    • Build test logic for team csv generation
      • Create Team with 3 members
      • Create reports for all team members
        • A not ready report
        • A ready report with all metrics
        • One team member without reports
        • A ready report with half the metrics? We can get crazy here
      • Define what is expected
        • We want all users to appear in the report
        • Ready report should have all the metrics
        • Non-ready and non-existent reports should appear with just the name and email of the user
        • Any times should be expressed in the leader's timezone
      • When verifying expectations we need to do so in a flexible way so the test doesn't break easily (a sketch follows this list), for example:
        • Adding a new metric shouldn't break our test
        • In general there is no need to verify all the exact metric values in the CSV
        • If I change the leader's timezone, the test should still pass
        • etc
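
A minimal sketch of that flexible style of assertion (TeamFactory and TeamMemberFactory are the factories this exercise asks us to build; the field names are assumptions):

def test_team_csv_lists_every_member(self):
    team = TeamFactory()
    members = [TeamMemberFactory(team=team) for _ in range(3)]
    ReportFactory(credentials=members[0].credentials)  # only one member has a ready report

    response = self.client.get("/team/csv")
    csv_body = response.content.decode()

    # Every member shows up, without pinning exact metric values or timezones
    for member in members:
        self.assertIn(member.user.email, csv_body)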