@ikusalic
Created October 24, 2013 21:12
Exploration: how to do unit testing

Testing notes

Uncle Bob: Test First

Source: http://blog.8thlight.com/uncle-bob/2013/09/23/Test-first.html

  • tests are specs for the system and are more important than the system itself
  • (Tests should be) short, well factored, and well named. They ought to read like specifications; because they are specifications
  • (Goal:) trust your test suite to the extent that, if it passes, you know you can ship! If you trust your tests that much, then those tests must describe everything that the system does.
  • The quality of the production code depends on the tests; but the quality of the tests is independent of the production code.
    • You can (and do) create the system from the tests, but you can't create the tests from the system.
    • The tests are the most important component in the system. They are more important than the production code
  • Guide for writing code: "First make it work, then make it right, then make it small and fast." Kent Beck
  • three magic words: Given, When, and Then. Before you write a test, you should be able to describe the test you are about to write using those three words (see the sketch after this list).
    • all (that) complexity can be extracted from the test into utility functions, leaving behind the three critical statements: Given, When, and Then.
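
A minimal sketch of that shape in JUnit 4. All class and method names here (ShoppingCart, cartContainingOneItemCosting, ...) are hypothetical, invented only to illustrate the Given/When/Then structure:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class ShoppingCartTest {

        // Minimal hypothetical class under test, just enough to compile.
        static class ShoppingCart {
            private int total = 0;
            void addItem(int price) { total += price; }
            int total() { return total; }
        }

        @Test
        public void totalReflectsAddedItems() {
            // Given: a cart that already contains one item
            ShoppingCart cart = cartContainingOneItemCosting(10);

            // When: another item is added
            cart.addItem(15);

            // Then: the total covers both items
            assertEquals(25, cart.total());
        }

        // Setup complexity is extracted into a utility function, leaving
        // only the three critical statements in the test body.
        private ShoppingCart cartContainingOneItemCosting(int price) {
            ShoppingCart cart = new ShoppingCart();
            cart.addItem(price);
            return cart;
        }
    }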

Dan North: Introducing BDD

Source: http://dannorth.net/introducing-bdd/

  • Test method names should be sentences
    • when they wrote the method name in the language of the business domain, the generated documents made sense to business users, analysts, and testers.
  • the naming template – "The class should do something" – means you can only define a test for the current class.
    • leads to single responsibility classes and dependency injection
  • An expressive test name is helpful when a test fails
  • shift from thinking in tests to thinking in behaviour
    • What to call your test is easy – it’s a sentence describing the next behaviour in which you are interested. How much to test becomes moot – you can only describe so much behaviour in a single sentence. When a test fails, simply work through the process described above – either you introduced a bug, the behaviour moved, or the test is no longer relevant.
  • useful way to stay focused was to ask: What’s the next most important thing the system doesn’t do?
    • identify the value of the features you haven’t yet implemented and to prioritize them
    • It also helps you formulate the behaviour method name: The system doesn’t do X (where X is some meaningful behaviour), and X is important, which means it should do X; so your next behaviour method is simply: public void shouldDoX()
      • this also answers another TDD question: where to start.
  • We started describing the acceptance criteria in terms of scenarios (sketched after this list), which took the following form:
    • Given some initial context (the givens),
    • When an event occurs,
    • then ensure some outcomes.
  • Acceptance criteria should be executable
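
A minimal sketch of a behaviour method whose name is a sentence, with the scenario's givens, event, and outcome spelled out; JUnit 4, and the Account class is hypothetical, invented for illustration:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class AccountBehaviourTest {

        // Hypothetical class under test.
        static class Account {
            private int balance;
            Account(int opening) { balance = opening; }
            void withdraw(int amount) { balance -= amount; }
            int balance() { return balance; }
        }

        // The method name is a sentence in the business domain: read with
        // the class name, it says "Account behaviour: should deduct
        // withdrawals from the balance".
        @Test
        public void shouldDeductWithdrawalsFromTheBalance() {
            // Given an account with an opening balance of 100
            Account account = new Account(100);

            // When the holder withdraws 30
            account.withdraw(30);

            // Then ensure the balance is 70
            assertEquals(70, account.balance());
        }
    }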

Sandi Metz: Magic Tricks of Testing (RailsConf)

Source: https://speakerdeck.com/skmetz/magic-tricks-of-testing-railsconf

  • Unit test goals: Thorough, Stable, Fast, Few
  • Message origin: Incoming, Sent to self, Outgoing
  • Message type: Query, Command (a single method should not be both)
    • Query: Return something, change nothing
    • Command: Return nothing, change something
  • Rules (sketched in code after this list):
    • Test the interface, not the implementation

    • Do not test private methods (break this rule only with good reason)

      • do not make assertions about their result
      • do not expect to send them
    • Ensure test doubles stay in sync with the API

    • Test incoming query messages by making assertions about what they send back

      • Assert result
    • Test incoming command messages by making assertions about direct public side effects

      • Assert direct public side effects
    • Do not test messages sent to self

    • Do not test outgoing query messages

      • do not make assertions about their result
      • do not expect to send them
      • if the message has no visible side-effect, the sender should not test it
    • Expect to send outgoing command messages

    • Be a minimalist

    • test everything once

    • test interface, trust collaborators, insist on simplicity
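
A minimal sketch of the rules that call for assertions, assuming JUnit 4 and Mockito; the Bicycle and Wheel names are hypothetical, invented to illustrate the pattern:

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import org.junit.Test;

    public class BicycleTest {

        // Hypothetical collaborator, an interface so it is easy to double.
        interface Wheel {
            double diameter();   // query
            void stop();         // command
        }

        // Hypothetical class under test.
        static class Bicycle {
            private final Wheel wheel;
            private int cog = 18;
            Bicycle(Wheel wheel) { this.wheel = wheel; }
            void setCog(int cog) { this.cog = cog; }   // incoming command
            int cog() { return cog; }                  // incoming query
            void brake() { wheel.stop(); }             // sends an outgoing command
        }

        // Incoming query: assert the result it sends back.
        @Test
        public void cogReturnsTheCurrentValue() {
            assertEquals(18, new Bicycle(mock(Wheel.class)).cog());
        }

        // Incoming command: assert the direct public side effect.
        @Test
        public void setCogChangesTheCog() {
            Bicycle bike = new Bicycle(mock(Wheel.class));
            bike.setCog(11);
            assertEquals(11, bike.cog());
        }

        // Outgoing command: expect to send it (verify on a mock); make no
        // assertion about the collaborator's result, and do not test
        // outgoing queries or messages sent to self at all.
        @Test
        public void brakingSendsStopToTheWheel() {
            Wheel wheel = mock(Wheel.class);
            new Bicycle(wheel).brake();
            verify(wheel).stop();
        }
    }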

Steven Sanderson: Writing Great Unit Tests: Best and Worst Practices

Source: http://blog.stevensanderson.com/2009/08/24/writing-great-unit-tests-best-and-worst-practises/

  • It’s overwhelmingly easy to write bad unit tests that add very little value to a project while inflating the cost of code changes astronomically.
  • Unit testing is not about finding bugs
    • Goal & Strongest technique
      • Finding bugs (things that don’t work as you want them to)
        • Manual testing (sometimes also automated integration tests)
      • Detecting regressions (things that used to work but have unexpectedly stopped working)
        • Automated integration tests (sometimes also manual testing, though time-consuming)
      • Designing software components robustly
        • Unit testing (within the TDD process)
  • TDD is a robust way of designing software components (“units”) interactively so that their behaviour is specified through unit tests. That’s all!
  • Test scale:
    • True Unit Tests (design a single component)
      • cheap to maintain, scales to any size
    • Integration tests (automate the entire system to detect regressions)
      • reasonably cheap to maintain
      • prove what features are actually working
    • Dirty Hybrids - everything in between is bad practice
      • it’s unclear what assumptions you’re making and what you’re trying to prove
      • Refactoring might break these tests, or it might not, regardless of whether the end-user experience still works
      • Any small change to the internal workings of a single unit might force you to fix hundreds of seemingly unrelated hybrid tests
  • Advice:
    • Make each test orthogonal (i.e., independent of all the others)
      • Any given behaviour should be specified in one and only one test. Otherwise if you later change that behaviour, you’ll have to change multiple tests
      • Don’t make unnecessary assertions
        • It’s counterproductive to Assert() anything that’s also asserted by another test: it just increases the frequency of pointless failures without improving unit test coverage one bit
        • if it isn’t the core behaviour under test, then stop making observations about it!
        • have only one logical assertion per test
        • Remember, unit tests are a design specification of how a certain behaviour should work, not a list of observations of everything the code happens to do.
    • Test only one code unit at a time
      • the architecture must support testing units in isolation
        • If you can’t do this, then your architecture is limiting your work’s quality - consider using Inversion of Control
    • Mock out all external services and state (see the sketch after this list)
      • Otherwise, behaviour in those external services overlaps multiple tests, and state data means that different unit tests can influence each other’s outcome.
      • Avoid having common setup code that runs at the beginning of lots of unrelated tests. Otherwise, it’s unclear what assumptions each test relies on, and it indicates that you’re not testing just a single unit.
    • Don’t unit-test configuration settings
    • Name your unit tests clearly and consistently
  • "Many in our industry claim that any unit tests are better than none, but I disagree: a test suite can be a great asset, or it can be a great burden that contributes little."