AssertJS 2018 Notes - www.assertjs.com
* **Takeaways**
* Share test utilities
* Use a lint rule when people don't follow best practices
* Not sure this is worth the effort for us
* It's easy to start writing tests, it's hard to continuously maintain tests and consistently keep coverage high
* How do you test a module?
* In isolation
* With integrations
* Mock integrations
* Test app but mock external dependencies
* 1. What makes a good test
* Topics
* Runs fast
* Doesn't break often
* Easy to read/understand
* Catches bugs
Always test your tests!
* Good coverage to effort ratio
* 2. Testing strategies
* Isolation (less to more)
* Enzyme / Snapshots
* Test multiple modules (integration tests, reducer tests, etc.)
* Unit tests
* General principles
* Testing UI is extremely hard
* Mocking too much reduces test quality
With each mock, you're a bit further away from reality
* More isolation = fewer bugs caught
* Replace UI with a test that pretends it's the UI
* Pros for tests
* Failing test = most likely a bug
* Very easy to read
* Survives major refactoring
* Cons for tests
* Initial state is hard to set up
* Can't pinpoint a bug
* Unclear where tests for files live
This is why we want `__tests__` to be right next to the files they test
* 3. Keep tests clean
* Share test utilities
* `mockInitialData()` instead of massive JSON blob fixtures
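A minimal sketch of the shared-factory idea, assuming Jest-style CommonJS modules; `mockInitialData` is the helper named above, but its fields are hypothetical:

```js
// test-utils/mock-data.js -- shared factory instead of a massive JSON fixture.
const defaultUser = { id: 1, name: 'Test User', role: 'member' };

function mockInitialData(overrides = {}) {
  // Return a fresh object per call so tests can't mutate shared state.
  return {
    user: { ...defaultUser, ...overrides.user },
    features: { darkMode: false, ...overrides.features },
  };
}

module.exports = { mockInitialData };
```

A test then spells out only the field it cares about, e.g. `mockInitialData({ user: { role: 'admin' } })`.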
* Use ESLint Rule Tester for custom ESLint rules
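ESLint ships a `RuleTester` for exactly this; a sketch, where `no-foo` and its module path are hypothetical:

```js
const { RuleTester } = require('eslint');
const rule = require('../rules/no-foo'); // hypothetical custom rule

const ruleTester = new RuleTester({ parserOptions: { ecmaVersion: 2018 } });

// RuleTester throws (failing the test run) if any case misbehaves.
ruleTester.run('no-foo', rule, {
  valid: ['const bar = 1;'],
  invalid: [{ code: 'const foo = 1;', errors: 1 }],
});
```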
* Generate test templates and use them
* Use a lint rule when people don't follow best practices
* Unit / Integration tests
* UI component tests
* Data mutation tests
* App routing tests
* **Takeaways**
* Speed, speed, speed. Use `cy.request()` to avoid UI interactions, for example with login
* Don't use the UI to build state
* Don't use page objects to share UI knowledge, write isolated tests instead
* Testing pyramid: Start at the top!
* What is Cypress
* Often compared to Selenium or Webdriver, but not built on either
* 1 spec per page
* Use folders for further organization
* Use `shared` for header, footer, etc.
* Strategies
* Stub requests
* Pros
* Fast, easy, flexible
* Cypress intercepts network requests
* Cons
* Not true e2e
* Requires fixtures
* API integration tests could create the fixtures!
This is huge
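A sketch of the request-stubbing strategy using Cypress's current `cy.intercept()` API (the 2018-era equivalent was `cy.server()`/`cy.route()`); the route, fixture file, and selector are hypothetical:

```js
it('renders the user list from a fixture', () => {
  // Stub the network instead of hitting a real server: fast, but not true e2e.
  cy.intercept('GET', '/api/users', { fixture: 'users.json' }).as('getUsers');
  cy.visit('/users');
  cy.wait('@getUsers');
  cy.get('[data-test=user-row]').should('have.length', 3);
});
```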
* Static User
* Pros
* Real session e2e
* Cons
* Requires server
* Seed the DB
* Shares the test state
* Dynamic User
* Pros
* No state mutations
* Flexible / powerful
* Cons
* DB Setup / Teardown
* Slow / Complex
* Write all `it` blocks first, then iterate on them one at a time
* Don't test the browser's default behavior
* E.g. don't click a link; instead assert its `href` attribute exists (see the sketch below)
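A minimal sketch; the selector and target URL are hypothetical:

```js
it('links to the pricing page', () => {
  cy.visit('/');
  // Assert the wiring; the browser's own navigation needs no test.
  cy.get('[data-test=pricing-link]').should('have.attr', 'href', '/pricing');
});
```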
* Custom commands
* `cy.login()` for example
* `cy.request()` uses the browser's cookie store
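A sketch of such a command, assuming a hypothetical `/api/login` endpoint; because `cy.request()` goes through the browser's cookie store, the session cookie it receives is picked up by subsequent `cy.visit()` calls:

```js
// cypress/support/commands.js
Cypress.Commands.add('login', (email = 'user@example.com', password = 'secret') => {
  // Build state through the API instead of the login UI.
  cy.request('POST', '/api/login', { email, password });
});
```

Specs then start with `cy.login(); cy.visit('/dashboard');` and only exercise the page under test.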
* Cypress is bound to the web app's origin
* `cy.clock()` to test time-sensitive features
* `cy.stub()`, for example to stub `console.log` (see the sketch below)
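A sketch combining both; the banner selector and logged message are hypothetical:

```js
it('shows a reminder after one minute and logs it', () => {
  cy.clock(); // freeze timers before the app loads
  cy.visit('/', {
    onBeforeLoad(win) {
      cy.stub(win.console, 'log').as('consoleLog');
    },
  });
  cy.tick(60000); // advance the frozen clock by 60 seconds
  cy.get('[data-test=reminder]').should('be.visible');
  cy.get('@consoleLog').should('have.been.calledWith', 'reminder shown');
});
```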
* `window.Cypress` is true when run in Cypress
* Use it to expose the Redux store and call `dispatch()` in tests (sketch below)!
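A sketch of that pattern; the store variable and the action shape are hypothetical:

```js
// App code: expose the store only when running under Cypress.
if (window.Cypress) {
  window.store = store;
}

// In a spec: set up state directly instead of clicking through the UI.
cy.window().its('store').invoke('dispatch', { type: 'LOGIN_SUCCESS' });
cy.window().its('store').invoke('getState').should('have.property', 'user');
```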
* What do we need?
* Real browser (DOM, storage, ...)
* Stub the server
* NPM `cypress-<X>-unit-test`
* Use to stub other components
* What if I told you there are no integration tests?
* Too many passing tests
* Snapshot diffing
* Less maintenance
* Shows only what changed
* Screenshot diffing
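A minimal Jest sketch of the snapshot-diffing idea, assuming `react-test-renderer` and a hypothetical CommonJS `Button` component:

```js
const React = require('react');
const renderer = require('react-test-renderer');
const Button = require('./Button'); // hypothetical component

test('Button renders consistently', () => {
  const tree = renderer
    .create(React.createElement(Button, { label: 'Save' }))
    .toJSON();
  // The first run writes the snapshot; later runs diff against it
  // and show only what changed.
  expect(tree).toMatchSnapshot();
});
```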
* **Next level of testing**
* Don't do pixel by pixel diffing
* Show the diff that caused the UI change
* Take a step back: take the application state before/after into account
* Where did the behavior start to change?
* Ask "why" the page changed. E.g. an API response changed, or the user input was different
* How do we get there? We have to record everything
* Big test data
* You get back to your car and it's scratched. Imagine you could rewind and find out what happened. Now imagine your test fails and you could rewind to see what caused it to fail
* **Takeaways**
* Look into tools to deal with cyclomatic and cognitive complexity
* Our monorepo approach helps with making sure we don't break unrelated code
* Use bots to enforce reviews, it's less personal. A bot acts the same way among all team members, consistently
* The bot doesn't trust company rockstars!
* Make sure the package and service templates come fully tested
* Run half-day testing hackathons per sprint where each developer writes tests
* Share more good code examples and explain why they're good
* Visualize code quality
* Measure code quality over time
* Shipping code without tests means missing the deadline
* "Manual QA doesn't catch everything"
* Testing requirements
* App builds successfully
* All tests pass
* Coverage threshold must be met for new code
* Quality guidelines
* Limit cyclomatic and cognitive complexity
* Avoid known problematic patterns
* E.g. `==` vs `===` (see the lint config sketch after this list)
* Check for security vulnerabilities
* Eliminate duplicate code
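A sketch wiring some of these guidelines into ESLint; it assumes `eslint-plugin-sonarjs` is installed, and the thresholds are illustrative, not from the talk:

```js
// .eslintrc.js
module.exports = {
  plugins: ['sonarjs'],
  rules: {
    complexity: ['error', 10],                 // cap cyclomatic complexity
    'sonarjs/cognitive-complexity': ['error', 15],
    'sonarjs/no-identical-functions': 'error', // flag duplicate code
    eqeqeq: 'error',                           // require === over ==
  },
};
```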
* Automate code quality checks, use IDE plugins
* Run tests in a CI environment
* **Tools:** SonarCloud, Code Climate, Codacy
* Make the right thing the easy thing
This is why we probably spend more time on this than many other companies
* Train executives
* An untested product is fragile
* Fragile products break in unpredictable ways and times
* Unpredictability causes delays
* Delays cost money and frustrate customers
* Make code quality a point of pride
* Don't: I stayed up all night and it works!
* Do: I shipped at 5pm and it works!
* **Takeaways**
* Hundreds of slides in 45 minutes... watch the talk online
* Creator of testdouble
* Don't fake parts of the thing you're testing
* If the test subject is real (as in a real thing out there in the world), mock it
* Agree on a common pattern and follow it, everyone
* **Tests should fail if the subject or its dependencies change**
* Layering !== Abstraction
* Instead of layering, think of a better, more focused design / architecture
* If you see `.forEach`, it implies a side effect, because it doesn't return anything.
* **Takeaways**
* Don't expose private functions just for testing
* Linters are a kind of test
* What is BDD
* https://cucumber.io/
* **Team** collaborates
Not just developers
* Natural language description
* Connect to automated tests
* Living documentation
* Structure
* Feature
* Scenario
* Given ... (context)
* And ...
* And ...
* When ... (action)
* Then ... (outcome)
* Code
* `const { Given, When, Then } = require('cucumber');`
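A sketch of matching step definitions; the feature wording and the World helpers (`createUser`, `signIn`) are hypothetical:

```js
// features/step_definitions/login.steps.js
const { Given, When, Then } = require('cucumber');
const assert = require('assert');

// Given a registered user named "Ada"
Given('a registered user named {string}', async function (name) {
  this.user = await this.createUser(name); // hypothetical World helper
});

// When she signs in
When('she signs in', async function () {
  this.session = await this.signIn(this.user); // hypothetical World helper
});

// Then she sees her dashboard
Then('she sees her dashboard', function () {
  assert.strictEqual(this.session.page, 'dashboard');
});
```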
* Why a new framework?
* Better communication with a living document connected to tests
* Testing **should** give us confidence that software works **well**.
* Works well includes
* Performance
* Visual Correctness
* https://meowni.ca/posts/2017-puppeteer-tests/
* Code Necessity
* Chrome 59 shipped `chrome --headless`
* Use full browser tests on CI
* Chrome 59 also shipped the DevTools protocol
* Puppeteer
* Performance tracing
* Screenshots
* Code coverage
* Chrome browser has a "Coverage" feature
Exposed via the DevTools protocol
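A sketch tying these three features (tracing, screenshots, coverage) together with Puppeteer; the URL and output paths are hypothetical:

```js
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch(); // headless Chrome
  const page = await browser.newPage();

  await page.tracing.start({ path: 'trace.json' }); // performance tracing
  await page.coverage.startJSCoverage();            // code coverage

  await page.goto('https://example.com');

  await page.screenshot({ path: 'page.png' });      // visual correctness
  const jsCoverage = await page.coverage.stopJSCoverage();
  await page.tracing.stop();

  // Sum the byte ranges of JS that actually executed.
  const usedBytes = jsCoverage.reduce(
    (sum, entry) =>
      sum + entry.ranges.reduce((s, r) => s + r.end - r.start, 0),
    0
  );
  console.log(`JS bytes executed: ${usedBytes}`);

  await browser.close();
})();
```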