Some thoughts on testing
  • Tests should be "stupid"
    • Don't include complex logic
  • Tests should be easy to understand
    • Give your tests descriptive names
    • Ask "will the person who sees this test fail understand why it's failing?"
  • Tests should not be DRY
  • Easy-to-test code is good code / Hard-to-test code is bad code
    • your tests should inform the design of your feature code
~good test~
given input X, expect output Y

~bad test~
run method X 100 times, expect it to do Y at least 60% of the time
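A minimal Python sketch of the same contrast, using a hypothetical add function and a deliberately flaky test (both invented for this example):

import random

# hypothetical function under test (invented for this sketch)
def add(a, b):
    return a + b

# ~good test~: given input X, expect output Y
def test_add_returns_sum():
    assert add(2, 3) == 5

# ~bad test~: a probabilistic expectation, so failures happen at random
def test_coin_flip_is_mostly_heads():
    heads = sum(1 for _ in range(100) if random.random() < 0.5)
    assert heads >= 40  # passes most runs, fails occasionally for no code-related reason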
  • Everything should be unit tested / Some things should be integration tested

    • In a codebase with 100% coverage, what mix of tests would be easiest to manage when deleting a certain feature?

      It seems that the mix of tests that would be easiest to update is something like (see the sketch below):

      • 1 unit test per function and/or conditional operation
      • 1 integration test per critical feature
    • As a test's scope inflates, it becomes harder to determine which part of the test is critical.

      Integration tests introduce debt, because changing any part of the code being tested could cause a failure.

      Consider: if you refactor or delete some part of the codebase that your test depends on, should your test fail? (what is it actually testing?)
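As a hedged illustration of that mix, here is a sketch for a hypothetical signup feature (all names invented for this example): one unit test per function or conditional operation, and one integration test for the feature as a whole.

# hypothetical feature code (invented for this sketch)
def normalize_email(email):
    return email.strip().lower()

def is_valid_email(email):
    return '@' in email

def sign_up(email, users):
    email = normalize_email(email)
    if not is_valid_email(email):
        raise ValueError('invalid email')
    users.append(email)

# unit tests: one per function / conditional operation
def test_normalize_email_strips_and_lowercases():
    assert normalize_email('  Bob@Example.com ') == 'bob@example.com'

def test_is_valid_email_rejects_missing_at_sign():
    assert not is_valid_email('bob.example.com')

# integration test: one per critical feature
def test_sign_up_stores_normalized_email():
    users = []
    sign_up('  Bob@Example.com ', users)
    assert users == ['bob@example.com']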

  • Tests should be considered part-of-the-feature-code

  • Test for existence, don't test for absence-of-existence

# if the expected value of my_list is:
['dog', 'dog', 'dog']

# but the actual value of my_list is:
['moose', 'moose', 'moose']

~bad test~
def test_no_cats_in_my_list():
    assert not any(animal == 'cat' for animal in my_list)  # this test would pass, even though expected does not match actual

~good test~
def test_only_dogs_in_my_list():
    assert all(animal == 'dog' for animal in my_list)  # this test would fail (we want this)

good testing habits

Some habits that result in better tests (and feature code)

  • ✅ write your test cases down before writing your code (you don't need to fill them out yet; see the sketch after this list)
  • ✅ do something to break your test, and confirm that it breaks in the way you expect
  • ✅ ask "what could I do to break this feature without breaking the test?"
  • ✅ test the stuff that matters to your feature
    • assume TheNextDeveloper™ will go in with a hammer and chisel, and anything that doesn't make a test fail is fair game to change with abandon
    • it's not enough for a test to pass. If your test is not clear and easy to update, then TheNextDeveloper™ might not bother to (or know they need to) update it
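A minimal sketch of the first habit, assuming pytest and a hypothetical parse_price function that does not exist yet: the test cases are written down as named stubs and filled out later.

import pytest

# test cases written down before parse_price exists; the names describe the behavior,
# the bodies get filled in as the feature code is written
def test_parse_price_handles_plain_integer():
    pytest.skip('not written yet')

def test_parse_price_strips_currency_symbol():
    pytest.skip('not written yet')

def test_parse_price_raises_on_empty_string():
    pytest.skip('not written yet')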

testing antipatterns

Think twice when you catch yourself doing any of the following

  • 🚫 having complex logic in your test
    • you don't have tests for your tests, so bugs in this code will not be apparent
    • if your test is too complex, maybe the method you're testing is too complex
    • see "tests should be stupid"
  • 🚫 using a helper method to DRY up your tests (see the sketch after this list)
  • 🚫 testing things outside of the scope of the method you're testing
  • 🚫 sleep(X)
  • 🚫 making assumptions about time
from datetime import datetime

def test_its_not_five_oclock():
    assert datetime.now().hour != 5  # this test will fail occasionally
  • 🚫 testing mock data
  • 🚫 give a hoot, don't pollute
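A hedged sketch of the helper-method antipattern, using an invented format_name function: the DRY version hides which case broke, while the "stupid" explicit tests point straight at it.

# hypothetical function under test (invented for this sketch)
def format_name(first, last):
    return f'{last}, {first}'

# 🚫 DRY helper: the loop and indirection obscure which case actually failed
def check_cases(cases):
    for first, last, expected in cases:
        assert format_name(first, last) == expected

def test_format_name_dry():
    check_cases([('Ada', 'Lovelace', 'Lovelace, Ada'),
                 ('Alan', 'Turing', 'Turing, Alan')])

# ✅ stupid, explicit tests: a failure names the broken case directly
def test_format_name_ada():
    assert format_name('Ada', 'Lovelace') == 'Lovelace, Ada'

def test_format_name_alan():
    assert format_name('Alan', 'Turing') == 'Turing, Alan'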

testing infrastructure antipatterns

Make it easy for developers to test their code. Make it hard for developers to not test their code.

  • 🚫 slow-running tests
  • 🚫 hard-to-understand output