Testing

Modifications and new features should typically be accompanied by tests in one form or another. The usual exception is a temporary bug fix or workaround that is expected to live for an objectively short amount of time. In that case, there should be a "time-bomb" associated with the feature/hotfix.

time-bomb: a reminder of some sort to remove said feature after some time, one that also allows others to hold you or your team accountable. For now, we do not stipulate how to achieve this; we may consider adding a tool or feature to the CFE to assist in this endeavour.

Getting full coverage

A typical page on the CFE includes:

  • A controller and middleware that:
    • parses a request
    • validates the request
    • pulls data from external services
    • pushes data to external services (eg. tracking and logging)
    • sets up the store
    • chooses the react page to render
    • provides extra data for the surrounding template
  • A client entry file that:
    • loads appropriate global styles
    • re-hydrates the store
    • hydrates the react app
  • Components that:
    • Provide visual elements for the user to interact with
    • Provide an API for our application to hook into (mapProps, mapDispatch)
    • Can introduce effects
  • Selectors that:
    • Transform data from a specific page's store into the structure required for a component's props
  • Thunks and Libraries that:
    • Implement the core business logic
  • Containers that:
    • Tie the application and the React components together (sketched after this list)
    • Provide transformed state for a component
    • Wire the wanted functionality to a component
  • Reducers that:
    • Update the store based on actions sent by components via actionCreators provided by containers
  • Hooks that:
    • encapsulate common, complex behaviours used inside components so they can be re-used
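
To make the container wiring concrete, here is a minimal sketch of a hypothetical container; the component, selector, and thunk names are illustrative, not real CFE code:

```js
// Ties a presentational component to the redux store.
import { connect } from 'react-redux';
import SearchResults from '../components/SearchResults';
import { selectResults } from '../selectors/search-page';
import { fetchResults } from '../thunks/search-page';

// Provide transformed state for the component via a selector.
const mapStateToProps = (state) => ({
  results: selectResults(state),
});

// Wire the wanted functionality to the component via thunks/actionCreators.
const mapDispatchToProps = {
  onSearch: fetchResults,
};

export default connect(mapStateToProps, mapDispatchToProps)(SearchResults);
```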

Testing strategies

Test the controller as an integration test.

Test the containers as integration tests, up to actions.

This will implicitly test:

  • The component features that are used and should not change
  • Selectors
  • Features of Thunks and libraries that are depended upon and should not change
  • Features of Hooks that should not change

```js
it('loads correctly given data in the store');
it('clicking the submit button will trigger a fetch of data', () =>
  expect(fetch).toHaveBeenCalled());
it('clicking the submit button will update the intent data', () =>
  expect(actions).toContainEqual(
    expect.objectContaining({
      type: 'SET_INTENT_DATA',
      payload: {},
    })
  ));
```

Test reducers in isolation

Why? Because when we initialise the fake store for testing, we don't know which reducers will be installed in production; therefore, we cannot know how the store should look after dispatching an action.
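
As a minimal sketch, a reducer unit test needs no store at all; the reducer and action below are hypothetical:

```js
// Hypothetical reducer under test.
function intent(state = {}, action) {
  return action.type === 'SET_INTENT_DATA' ? action.payload : state;
}

it('processes SET_INTENT_DATA by replacing the intent data in the store', () => {
  const next = intent({ stale: true }, {
    type: 'SET_INTENT_DATA',
    payload: { partySize: 2 },
  });

  expect(next).toEqual({ partySize: 2 });
});
```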

Test Thunks, Libraries, Reducers and Components as units to get full edge case coverage

Things we may not have to test

The client entry file

Typically, this file is pretty declarative; there will likely be no loops or if statements. The job of this file is simply to load the client-side dependencies, initialise the redux store, and hydrate the react application. If dependencies are incorrectly provided, end-to-end tests will likely notify us about it.

Selectors

Selectors are also very declarative. Testing a selector will likely leave you writing the same code twice. Selectors are expected to be very simple and any tests written for them will likely be brittle.
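
For example, a typical selector is little more than a property path, so any test ends up restating the implementation (the state shape below is illustrative):

```js
// The selector...
export const selectRestaurantName = (state) => state.restaurant.name;

// ...and its test, which merely writes the same path a second time.
it('selects the restaurant name', () => {
  expect(selectRestaurantName({ restaurant: { name: 'Quaglinos' } }))
    .toBe('Quaglinos');
});
```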


==============================================

{ controller, client-entry, component*, container, selector*, thunk, library, actionCreators*, reducers*, hooks* }

Determining the difference between unit tests and integration tests is unimportant for the sake of this document. Typically, both types of tests use similar techniques (integration tests will likely involve more mocking), both should be fast, and both should describe functionality in plain, clear English. For the sake of this document, we will instead describe three types of tests:

  • Utility testing
  • Feature testing
  • End to end testing

Utility testing

Utility tests are typically written for utility functions. Utility functions tend to take responsibility for one thing, and tend to be created when multiple files or features do similar things and we need to share functionality. An example might be a function that generates a url for a particular page, such as restaurant-profile or booking-flow. Another example might be a function or library responsible for ensuring that data shared via cookies is written to and read from the correct key.

These functions and their respective tests allow us to define new terminology, eg. "search page" or "attribution cookie". These terms encode certain domain knowledge, such as "search pages live on the '/s' endpoint" or "the attribution cookie is stored with the 'ftc' key across all pages". They can also encode the edge-case scenarios that make the utility function valuable: things like parameter validation or default values. Features that rely on these functions may then be described using these new terms, which in turn helps to reduce the duplication of verbose edge-case testing.

Feature testing

Most tests should be feature tests. Typically, a task starts from a Jira ticket that describes the expected customer impact. These tests range from "X should be shown" and "the user should be redirected to Y" to "service X should be informed about Z".

  • controller:
    • should validate the request
    • should pull data from graphql
    • should save data to the redux store that is required for hydration on the client-side
  • actionCreator:
    • validates parameters
    • transforms arguments into the expected action
    • Typically for developer experience tests
  • thunk (sketched after this list):
    • fetches x from service y
    • dispatches action z to the reducer
    • causes effect x using library/utility y
    • handles error case a by doing b
  • action:
    • NA
  • reducer:
    • processes action x by modifying y in the store
    • correctly performs action x given these initial states
  • store:
    • combines reducer x for y functionality
    • Not really worth testing most of the time, e2e or integration tests will catch most issues.
  • component:
    • shows X
    • user action Y causes Z
    • prop A affects B
    • Again, developer-focused tests; they assert the component's finite capabilities, but do not describe them from a user's perspective on a particular page
  • container:
    • Initialises the component with X when the store contains Y
    • Utilises feature X of component when Y happens in the store
    • Applies Y to the store when X happens in the component
    • These are like integration tests, less specific, but they ensure that the component reacts correctly to the important live data changes provided by redux
  • lib:
    • calculates X based on Y
    • employs behaviour Y when X to ensure/influence Z
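
As a sketch of the thunk specifications above, assuming redux-mock-store, redux-thunk, and a hypothetical fetchAvailability thunk:

```js
import configureStore from 'redux-mock-store';
import thunk from 'redux-thunk';

const mockStore = configureStore([thunk]);

it('fetches availability from the booking service and dispatches it to the reducer', async () => {
  // Assumes jest-fetch-mock is installed and enabled.
  fetch.mockResponseOnce(JSON.stringify({ slots: ['19:00'] }));
  const store = mockStore({});

  await store.dispatch(fetchAvailability('restaurant-1'));

  expect(store.getActions()).toContainEqual(
    expect.objectContaining({ type: 'SET_AVAILABILITY' })
  );
});
```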

A web application has three phases:

  • Initial load
  • Hydration: Can this be tested automatically?
  • Interaction

Web components have two phases, across any number of states:

  • Initial load
  • Interaction

Containers are part of an application. They are typically designed for a single page and not shared.

Actions are like components: they are agnostic to the application. Reducers belong to the application, since they define how its store responds to those actions.

  • Action: do something, or broadcast a message
  • Reducer: change the state of "something"
  • Store: include one or more of "something"

example:

  • Action: broadcast page change
  • Reducers:
    • Adjust page number
    • Change theme of page based on page number
      • eg: if red and page is even, stay red. if red and page is odd, go blue. if blue and page is even, stay blue. if blue and page is odd, go red.
  • Store: use "search-results" and "theme" features.
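
A minimal sketch of that example, with hypothetical action and reducer names; note how two independent reducers respond to the same broadcast action:

```js
const PAGE_CHANGED = 'PAGE_CHANGED';

// Adjust the page number.
function pageNumber(state = 1, action) {
  return action.type === PAGE_CHANGED ? action.payload.page : state;
}

// Change the theme based on the page number: the theme flips on odd pages
// and stays the same on even pages, as described above.
function theme(state = 'red', action) {
  if (action.type !== PAGE_CHANGED) return state;
  if (action.payload.page % 2 === 1) {
    return state === 'red' ? 'blue' : 'red';
  }
  return state;
}
```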

----- IT'S ABOUT SIDE EFFECTS! The user clicks a "save" button:

  • Action: Save User Records
  • Reducer: Add new user
  • Reducer: Count total saves made

End to end testing

Unit tests should be written for code that does not attempt to exit the memory space. Pure functions (functions without any dependencies apart from their arguments) are typically the easiest to test using this approach, but you should consider what value testing such a function in isolation provides.

exiting the memory space: JavaScript functions may attempt to make calls to external web APIs, write to disk, etc. Typically, most functions that exit the memory space will be asynchronous, but not all asynchronous methods exit the memory space (eg. setTimeout), and not all synchronous methods remain in memory space (eg. fs.readFileSync).
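
To illustrate the distinction with a sketch (both functions below are asynchronous, but only one exits the memory space):

```js
// Stays in memory space: asynchronous, but no external dependency.
function delay(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// Exits the memory space: calls an external web API.
async function fetchRestaurant(id) {
  const response = await fetch(`/api/restaurants/${id}`);
  return response.json();
}
```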

Integration testing

Typically, if you are mocking modules or http requests, you are likely writing an integration test. The CFE is not a large project with multiple layers, so the distinction between unit and integration tests is rarely worth labouring; as noted above, both should be fast and descriptive.

Examples

Given a fictional utility method: `function createSearchPageUrl(params = {}) { ... }`

(✖) generates a correct url

(✔) creates a relative url pointing to the /s endpoint
(✔) applies the provided parameters as querystrings on the created url
(✔) throws an error when provided params that are unsupported querystring parameters for the search url
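
A hedged Jest sketch of those specifications, assuming a hypothetical implementation of createSearchPageUrl:

```js
describe('createSearchPageUrl', () => {
  it('creates a relative url pointing to the /s endpoint', () => {
    expect(createSearchPageUrl()).toMatch(/^\/s/);
  });

  it('applies the provided parameters as querystrings on the created url', () => {
    expect(createSearchPageUrl({ term: 'sushi' })).toBe('/s?term=sushi');
  });

  it('throws an error when provided unsupported querystring parameters', () => {
    expect(() => createSearchPageUrl({ bogus: 1 })).toThrow();
  });
});
```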

  • Utility functions are usually hardest to formalise. We spend time formulating a good name for the function, and we start to believe that the function is now self explanatory. Unfortunately, things like the definition of what a "SearchPage" is may change over time. It is the job of a unit test to expand on the meaning of what is effectively a correct SearchPage url.

  • Writing multiple tests instead of one may feel repetitive, but these tests will age better; this is important because tests are written to prevent future regression. When something changes, these tests will provide more context for the developer into why the change should be given a second thought and can prevent cross-site bugs that are difficult to detect.

  • Interestingly, you may find that you are able to build a better developer experience by taking the extra time. A typical implementation of a function like this may either blindly apply params to the url as querystring parameters, or may use a whitelist and ignore unsupported params. While writing tests that describe this behaviour, it may become apparent that validation would better benefit the developer who calls the method, because it can prevent logical errors such as misspelling parameter names.

(✖) homepage should have a div with .header when userType is 'concierge'
(✔) homepage should present a header when the user is logged in as a concierge

  • Features should be described from a user perspective. The test itself may reference the .header class, but the test description should give insight into what a .header represents.
  • Avoid describing the implementation of the code; instead of referencing a variable or parameter name, consider what that variable indicates and reword it in plain English.

Example:

(✖) ... the restaurant names should be set correctly
(✔) ... the restaurant names should be capitalised

  • Typically, tests ending in '...set correctly' or '...be valid' will not survive the test of time; the definition of 'correctly' or 'valid' will change over time.
  • In some cases, it is inconvenient to describe all of the transformations that may take place in order to normalise a value. In this case, it may be acceptable to use this description, but this is not encouraged and there should be an accompanying comment to help define what 'valid' or 'correctly' means.

Example:

(✖) it calls the userService
(✖) it calls userService.fetch with the userId
(✔) it fetches the user's details from the user service by providing the user's id

  • Avoid referring to objects and libraries within the codebase; instead describe the overall effect.
  • Ensure that "user service" refers to the external http service, even if you mock the userService module as part of your test.
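
A minimal sketch under these guidelines; the module path and loadProfile thunk are hypothetical, and the test description talks about the external service even though the module is mocked:

```js
jest.mock('../services/userService'); // keep the test inside the memory space
import userService from '../services/userService';
import { loadProfile } from '../thunks/profile'; // hypothetical thunk under test

it("fetches the user's details from the user service by providing the user's id", async () => {
  userService.fetch.mockResolvedValue({ id: '42', name: 'Ada' });

  // A redux thunk is a function of (dispatch, getState).
  await loadProfile('42')(jest.fn(), () => ({}));

  expect(userService.fetch).toHaveBeenCalledWith('42');
});
```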

Example:

(✖) It does not attempt to unannounce if the application is shutting down before announcing itself to discovery

  • Some edge cases behave slightly differently than described by the standard tests; consider how important it is to test these edge cases.
  • In this case, we can manually verify that calling unannounce while the application is shutting down does not prevent the application from terminating, even if the application never announced. The cost of regression is minimal; even if the logic were re-written and the application attempted to unannounce incorrectly, the implications are negligible compared to the confusion or fatigue caused by describing every edge case.

Naming test files

Test files should describe a feature rather than a class, object, or file. Consider using the word "specification" to influence the name of the test file. This should lead to more descriptive and informative tests in general.

Example:

(✖) middlewares.test.js
(✔) common-middleware-features-executed-per-request.test.js

(✖) attribution-middleware.test.js
(✔) integration-with-attribution-service.test.js

(✖) search-controller.test.js
(✔) loading-the-search-page.test.js

  • This would encourage us to create and test functions instead of files
  • Sometimes, it encourages us to increase the scope of what we are testing. We could test a middleware and a service and we can take the opportunity to describe how they work together.
  • It encourages mocking only complex imported dependencies, or those with side-effects, reducing the chances of mocks becoming out of date and producing false positives.