- unit tests: fast, no setup needed, test a single function, low complexity
- integration tests: slow, need setup, test a component or service that makes up a system; any of the following makes a test an integration test (contrast sketch below):
- a test uses a database
- a test uses the network to call another component/application
- a test uses an external system (e.g. a queue or a mail server)
- a test reads/writes files or performs other I/O
- a test does not rely on the source code but instead uses the deployed binary of the app
- other test types (e.g. UI tests) are out of scope for the article
- the test types not mentioned come with their own antipatterns
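A minimal contrast of the two categories (my own example, not from the article; sqlite3 is used only to make the "uses a database" criterion runnable):

```python
import sqlite3

def total(prices: list[float]) -> float:
    return sum(prices)

# Unit test: fast, in-memory, no setup beyond the inputs.
def test_total_unit():
    assert total([1.0, 2.5]) == 3.5

# Integration test: crosses a component boundary (a database),
# so it needs setup/teardown and runs slower.
def test_total_from_db_integration():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE prices (value REAL)")
    conn.executemany("INSERT INTO prices VALUES (?)", [(1.0,), (2.5,)])
    rows = [r[0] for r in conn.execute("SELECT value FROM prices")]
    assert total(rows) == 3.5
    conn.close()
```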
- Integration tests are needed because “Any connections to other modules either developed by you or external teams need integration tests”
- I don’t know that these are really solved with integration tests:
- Performance/Timeouts
- Deadlocks/Livelocks
- Cross-cutting Security Concerns
- # of unit tests needed == sum of cyclomatic complexities
- # of integration tests needed == product of cyclomatic complexities (worked example below)
- integration tests are hard to debug
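A quick worked illustration of the sum-vs-product claim (the module count and complexity values here are made up, not from the article):

```python
import math

# Hypothetical system: three modules with cyclomatic complexities 3, 4 and 5.
complexities = [3, 4, 5]

# Unit tests exercise each module's branches in isolation: 3 + 4 + 5 = 12.
unit_tests = sum(complexities)

# Integration tests must cover combinations of paths across modules: 3 * 4 * 5 = 60.
integration_tests = math.prod(complexities)

print(unit_tests, integration_tests)  # 12 60
```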
- Why you need unit tests
- Unit tests are easier to maintain
- Unit tests can easily replicate corner cases and not-so-frequent scenarios
- Unit tests run much faster than integration tests
- Broken unit tests are easier to fix than broken integration tests
understand which types of tests add the most value to your application
- Critical code (100% coverage) - This is the code that breaks often, gets most of the new features and has a big impact on application users
- Core code (100% coverage) - This is the code that breaks sometimes, gets few new features and has medium impact on the application users
- Other code (diminishing returns here) - This is code that rarely changes, rarely gets new features and has minimal impact on application users.
- Tests that need to be refactored all the time suffer from tight coupling with the main code.
- Similar to programming to “interfaces” rather than implementations (see the sketch below)
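A minimal sketch of a test written against an interface so that refactoring the implementation can’t break it (my own illustration; `RateProvider`, `convert` and `FixedRates` are hypothetical names):

```python
from typing import Protocol

class RateProvider(Protocol):
    """Interface the test depends on; implementations can change freely."""
    def rate(self, currency: str) -> float: ...

def convert(amount: float, currency: str, rates: RateProvider) -> float:
    # Production code depends on the interface, not a concrete HTTP client.
    return amount * rates.rate(currency)

class FixedRates:
    """Test double that satisfies the interface without network access."""
    def rate(self, currency: str) -> float:
        return {"EUR": 2.0}[currency]

def test_convert_uses_rate():
    # The test only knows the interface's contract, so refactoring the
    # real provider (caching, retries, a new API) never breaks this test.
    assert convert(10.0, "EUR", FixedRates()) == 20.0
```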
Not sure how to measure some of these (PTVB?)
| Metric | Meaning | Ideal value | Usual value | Problematic value |
| ------ | ------- | ----------- | ----------- | ----------------- |
| PDWT | % of developers writing tests | 100% | 20%-70% | Anything less than 100% |
| PBCNT | % of bugs that create new tests | 100% | 0%-5% | Anything less than 100% |
| PTVB | % of tests that verify behavior | 100% | 10% | Anything less than 100% |
| PTD | % of tests that are deterministic | 100% | 50%-80% | Anything less than 100% |
Mentions the Pareto principle
In practice flaky and slow tests are almost always integration tests and/or UI tests
- Definition (strawman?): a “flaky” test is one that exhibits both a passing and a failing result with the same code.
- Reasons for flaky tests:
- Large tests may be flaky: https://testing.googleblog.com/2017/04/where-do-our-flaky-tests-come-from.html
- Concurrency
- Infra problems
- Dealing with flaky tests (see the sketch after this list)
- Track them
- let developers mark them
- re-run only failing tests
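A minimal sketch of the “mark them” and “re-run only failing tests” ideas, assuming pytest with the pytest-rerunfailures plugin (my choice of tooling, not the article’s):

```python
import random
import pytest

# Requires the pytest-rerunfailures plugin (pip install pytest-rerunfailures).
# A developer marks a known-flaky test so CI retries it instead of
# failing the build outright.
@pytest.mark.flaky(reruns=3)
def test_eventually_consistent_read():
    # Stand-in for a timing-dependent assertion against a real system.
    assert random.random() > 0.1

# Re-running only the tests that failed on the previous run (core pytest):
#   pytest --last-failed
```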
- Tests should run automagically for every commit
- “Developers should learn the result of the test for their individual feature after 5-15 minutes of committing code”
- Testing code is often gross
- “If you employ tools for static analysis, source formatting or code quality then configure them to run on test code, too.”
- relates back to metrics mentioned in #6
- “Bugs that slip into production are perfect candidates for writing software tests”
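A sketch of turning a production bug into a regression test (the bug, `apply_discount`, and the scenario are hypothetical, for illustration only):

```python
# Hypothetical production bug: discounts over 100% produced negative prices.
def apply_discount(price: float, percent: float) -> float:
    # Fix: clamp the discount to the valid range before applying it.
    percent = max(0.0, min(percent, 100.0))
    return price * (1 - percent / 100)

def test_discount_over_100_percent_regression():
    # Encodes the exact production scenario so the bug can never return silently.
    assert apply_discount(50.0, 150.0) == 0.0
```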
- There are many valid ways to write tests for a piece of software
- TDD is only one way
- Some code needs no tests
- Understand what the test library provides before writing “helper” functions
- Understand how to (see the sketch after this list):
- write stubs/mocks
- conditionally run tests
- group tests/categorize
- setup/teardown
- parameterize tests
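A compact sketch of each item using pytest (my choice of library; the notes don’t prescribe one):

```python
import sys
from unittest import mock
import pytest

# stubs/mocks: replace a collaborator without a hand-rolled helper
def test_mocking():
    fake = mock.Mock(return_value=42)
    assert fake() == 42

# conditionally run tests
@pytest.mark.skipif(sys.platform == "win32", reason="POSIX-only behavior")
def test_posix_only():
    assert True

# group/categorize tests; select by marker with: pytest -m slow
# (custom markers should be declared in pytest.ini to avoid warnings)
@pytest.mark.slow
def test_in_slow_group():
    assert True

# setup/teardown via a fixture
@pytest.fixture
def resource():
    data = {"ready": True}   # setup
    yield data
    data.clear()             # teardown

def test_uses_fixture(resource):
    assert resource["ready"]

# parameterize tests
@pytest.mark.parametrize("value,expected", [(2, 4), (3, 9)])
def test_square(value, expected):
    assert value * value == expected
```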
- bad experiences lead to folks badmouthing testing, but really they’re doing it wrong