RelEng bookclub

Software Testing Anti-patterns

Key Terms

Unit test

  • fast, no setup needed, tests a function, low complexity

Integration test

  • slow, needs setup, tests a component or service that makes up a system

Heuristics: a test is probably an integration test if any of these hold (see the sketch after this list)

  • a test uses a database
  • a test uses the network to call another component/application
  • a test uses an external system (e.g. a queue or a mail server)
  • a test reads/writes files or performs other I/O
  • a test does not rely on the source code but instead it uses the deployed binary of the app
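A minimal sketch of the distinction in Python, assuming pytest (for the `tmp_path` fixture) and using sqlite3 as a stand-in for an external database; the names are illustrative, not from the article:

```python
import sqlite3


def add_vat(price, rate=0.21):
    """Pure function: a unit test covers it with no setup."""
    return round(price * (1 + rate), 2)


def test_add_vat():
    # Unit test: fast, no I/O, exercises one function in isolation.
    assert add_vat(100) == 121.0


def test_order_total_is_persisted(tmp_path):
    # Integration test by the heuristics above: it touches a real database
    # file, so it needs setup and runs slower.
    db = sqlite3.connect(tmp_path / "orders.db")
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
    db.execute("INSERT INTO orders (total) VALUES (?)", (add_vat(100),))
    db.commit()
    (total,) = db.execute("SELECT total FROM orders").fetchone()
    assert total == 121.0
```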

e2e/UI tests

  • out of scope for the article
  • not covered here, though they come with their own anti-patterns

1. Unit tests w/o integration tests

  • Integration tests are needed because, “Any connections to other modules either developed by you or external teams need integration tests”

My problems with this section

  • I’m not convinced these are really solved by integration tests:
    • Performance/Timeouts
    • Deadlocks/Livelocks
    • Cross-cutting Security Concerns

2. Integration tests w/o unit tests

  • # of unit tests needed ≈ the sum of the cyclomatic complexity of each module (worked example below)
  • # of integration tests needed ≈ the product of the cyclomatic complexities, since paths combine across modules
  • integration tests are hard to debug
  • Why you need unit tests
    • Unit tests are easier to maintain
    • Unit tests can easily replicate corner cases and not-so-frequent scenarios
    • Unit tests run much faster than integration tests
    • Broken unit tests are easier to fix than broken integration tests
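A quick arithmetic illustration of the sum-vs-product claim, using made-up cyclomatic complexities for three modules (the numbers are not from the article):

```python
from math import prod

# Hypothetical cyclomatic complexities of three modules.
complexities = [2, 5, 3]

# Roughly one unit test per independent path inside each module.
unit_tests = sum(complexities)          # 2 + 5 + 3 = 10

# Integration tests must cover combinations of paths across modules.
integration_tests = prod(complexities)  # 2 * 5 * 3 = 30

print(unit_tests, integration_tests)    # 10 30
```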

3. Wrong kinds of tests

  • understand which types of tests add the most value to your application

4. Testing the wrong functionality

  • Critical code (100% coverage) - This is the code that breaks often, gets most of the new features, and has a big impact on application users
  • Core code (100% coverage) - This is the code that breaks sometimes, gets few new features, and has a medium impact on application users
  • Other code (diminishing returns here) - This is code that rarely changes, rarely gets new features and has minimal impact on application users.

5. Testing internal implementation

  • Tests that need to be refactored all the time suffer from tight coupling with the main code.
  • Similar to programming to “interfaces” rather than implementations (sketch below)
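A sketch of the difference using a made-up `CustomerStore` class (not from the article): the first test is coupled to the internal dict and breaks on any refactor of the storage, the second exercises only public behavior.

```python
class CustomerStore:
    """Toy class; the internal dict is an implementation detail."""

    def __init__(self):
        self._by_id = {}

    def add(self, customer_id, name):
        self._by_id[customer_id] = name

    def name_of(self, customer_id):
        return self._by_id[customer_id]


def test_add_coupled_to_implementation():
    store = CustomerStore()
    store.add(1, "Ada")
    # Anti-pattern: peeks at the private dict, so it breaks if storage changes.
    assert store._by_id == {1: "Ada"}


def test_add_via_public_behavior():
    store = CustomerStore()
    store.add(1, "Ada")
    # Tests the public contract only; refactoring internals won't break it.
    assert store.name_of(1) == "Ada"
```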

6. Paying excessive attention to test coverage

Not sure how you’d actually measure some of these (PTVB, for example)?

| Metric | Meaning | Ideal | Usual | Problematic |
| --- | --- | --- | --- | --- |
| PDWT | % of developers writing tests | 100% | 20%-70% | anything less than 100% |
| PBCNT | % of bugs that create new tests | 100% | 0%-5% | anything less than 100% |
| PTVB | % of tests that verify behavior | 100% | 10% | anything less than 100% |
| PTD | % of tests that are deterministic | 100% | 50%-80% | anything less than 100% |

Mentions the Pareto principle

7. Having flaky or slow tests

In practice, flaky and slow tests are almost always integration tests and/or UI tests

8. Running tests manually

  • Tests should run automagically for every commit
  • “Developers should learn the result of the test for their individual feature after 5-15 minutes of committing code”

9. Treating test code as a second class citizen

  • Testing code is often gross
  • “If you employ tools for static analysis, source formatting or code quality then configure them to run on test code, too.”

10. Not converting production bugs to tests

  • relates back to the PBCNT metric mentioned in #6 (see the sketch after this list)
  • “Bugs that slip into production are perfect candidates for writing software tests”
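A minimal sketch of the bug-to-test loop; the `slugify` bug and the issue number are hypothetical, purely for illustration:

```python
import re


def slugify(title):
    """Turn a title into a URL slug; the fix strips punctuation and spaces."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")


def test_regression_issue_1234_punctuation_in_titles():
    # Written to reproduce a (made-up) production bug report first, then kept
    # in the suite so the same bug can never silently return.
    assert slugify("Hello, World!") == "hello-world"
```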

11. Treating TDD as a religion

  • There are many valid ways to write tests for a piece of software
  • TDD is only one way
  • Some code needs no tests

12. Writing tests without reading documentation first

  • Understand what the test library provides before writing “helper” functions
  • Understand how to (see the sketch after this list):
    • write stubs/mocks
    • conditionally run tests
    • group tests/categorize
    • setup/teardown
    • parameterize tests
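A hedged sketch of those facilities; the article is framework-agnostic, so this assumes pytest plus unittest.mock, and the names here are the pytest ones:

```python
import sys
from unittest import mock

import pytest


@pytest.fixture
def fake_mailer():
    # Setup/teardown and stubbing come from the framework, not hand-rolled helpers.
    mailer = mock.Mock()
    yield mailer
    mailer.reset_mock()


@pytest.mark.parametrize("price,expected", [(100, 121.0), (0, 0.0)])
def test_vat_parameterized(price, expected):
    assert round(price * 1.21, 2) == expected


@pytest.mark.skipif(sys.platform == "win32", reason="POSIX-only behaviour")
def test_conditionally_run():
    assert True


# Grouping/categorizing: select with `pytest -m integration`
# (register the mark in pytest.ini to avoid the unknown-mark warning).
@pytest.mark.integration
def test_sends_receipt(fake_mailer):
    fake_mailer.send("bob@example.com", "receipt")
    fake_mailer.send.assert_called_once_with("bob@example.com", "receipt")
```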

13. Giving testing a bad reputation out of ignorance

  • bad experiences lead folks to badmouth testing, when really they were doing it wrong