import unittest - super high level basics

Like other Python code, tests are organized in a package structure, but naming becomes significant. You can name the package anything (in our case it's everything under tests.unit), but the test modules themselves need to be prefixed with "test", e.g., tests.unit.api.test_utils. Inside those modules, classes that subclass unittest.TestCase are imported by the test runner, and methods on those subclasses that begin with "test" are executed. Sticking with the example, tests.unit.api.test_utils includes two suites of TestCases, organized by the type of token we're testing the check_token function against: one for shared tokens and one for rackspace tokens.
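Roughly, that layout looks like the sketch below. Only the tests.unit.api.test_utils path and the shared/rackspace split come from our tree; the class and method names here are placeholders.

    # tests/unit/api/test_utils.py -- discovered because the module name starts with "test"
    import unittest

    class SharedTokenTests(unittest.TestCase):
        """Suite exercising check_token with shared tokens."""

        def test_bad_shared_token(self):  # collected because the name starts with "test"
            ...

    class RackspaceTokenTests(unittest.TestCase):
        """Suite exercising check_token with rackspace tokens."""

        def test_bad_rackspace_token(self):
            ...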

If a test method runs to completion without any assertion failing, it passes. In the case of those check_token tests, we're mostly trying to pass bad creds and ensure the proper exception gets raised, and we have a couple that pass good data and ensure the function returns nothing.
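For the bad-creds cases, that usually means assertRaises. The methods below would sit inside one of the TestCase classes sketched above; they assume check_token takes a single token argument, and InvalidTokenError and VALID_TOKEN are stand-ins for whatever the real exception class and fixture are.

    def test_bad_token_raises(self):
        # Bad creds: expect check_token to raise (InvalidTokenError is a placeholder).
        with self.assertRaises(InvalidTokenError):
            check_token("not-a-real-token")

    def test_good_token_returns_nothing(self):
        # Good creds: the function is expected to return nothing (None).
        self.assertIsNone(check_token(VALID_TOKEN))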

https://docs.python.org/3/library/unittest.html has the full details on what sort of assertions can be made.

Testing TURTLES-421

In the case of this PR, we don't currently have testing for the library end of the ops_review report; we just have the behave tests for it. I think the best thing to do would be to introduce a new tests.unit.lib.test_ops_review module and include a class that inherits from unittest.TestCase, called something like TestOpsReview. The TestCase subclass names don't need to follow any specific convention: the test discovery code knows to load the test_ops_review module, looks at the classes it exposes, and picks up the ones that subclass TestCase.
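As a skeleton, that new module might start out like this; the import path for _count_and_percentage is a guess at where the function actually lives.

    # tests/unit/lib/test_ops_review.py
    import unittest

    # Assumption: _count_and_percentage is a module-level function in lib.ops_review;
    # adjust the import to match wherever it really lives.
    from lib.ops_review import _count_and_percentage

    class TestOpsReview(unittest.TestCase):
        """Unit tests for the ops_review report helpers."""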

For the change to _count_and_percentage, I think it'd be good to have two test methods: one where total is greater than zero, to exercise the "total > 0" branch, and one where total is zero, to ensure we're not dividing by zero.
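For reference, the shape of the fixed function is presumably something like the following. This is my reconstruction from the discussion, not the code in the PR, and the 0.0 value for the zero case is an assumption.

    def _count_and_percentage(count, total):
        # Only divide when total > 0 so a zero total can't raise ZeroDivisionError.
        percentage = (count / total) * 100 if total > 0 else 0.0
        return {"count": count, "percentage": percentage}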

Therefore, in TestOpsReview, I'd probably have a test_count_and_percentage_nonzero and a test_count_and_percentage_zero method. Since these are module-level functions, we don't need to create much scaffolding; if they were methods on a class, we'd have to instantiate that class and potentially mock things out in order to isolate the test to just the piece we're focusing on.

For the nonzero test, I'd probably use a "self.assertEqual" assertion and test that something like {"count": 50, "percentage": 50.0} is equal to the result of calling _count_and_percentage(50, 100). I often like to split such a test into three lines: expected = {"count": ...}, actual = _count_and_percentage(...), and then self.assertEqual(expected, actual), though when these things are small they're fine as a one-liner assert call.
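In that three-line style, the nonzero test might read:

    def test_count_and_percentage_nonzero(self):
        # Known input (50 of 100) should produce a known count/percentage pair.
        expected = {"count": 50, "percentage": 50.0}
        actual = _count_and_percentage(50, 100)
        self.assertEqual(expected, actual)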

The zero test is slightly more involved: we also want to check that no exception was raised by the call, and secondarily that the returned data is correct. I'm only suggesting that first part because this is fixing a bug where an exception was raised; if we were writing this function from scratch right now (in the way that you've fixed it to work), with tests at the same time, I would normally just test that known inputs match expected outputs.

In this case, we want to wrap our "self.assertEqual" check in a try/except block that catches ZeroDivisionError. In the except block, if that exception does get raised, we call "self.fail(<optional_message>)" to signal that the test failed. self.fail is the root failure mechanism that all of the self.assert* methods call under the hood.
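Put together, the zero-total test could look like this; the expected dict for the zero case is a guess at what the fixed function returns.

    def test_count_and_percentage_zero(self):
        try:
            # Assumption: the fixed function reports a 0.0 percentage when total is 0.
            expected = {"count": 0, "percentage": 0.0}
            actual = _count_and_percentage(0, 0)
            self.assertEqual(expected, actual)
        except ZeroDivisionError:
            self.fail("_count_and_percentage raised ZeroDivisionError for a zero total")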
