- Use unit tests to cover as many variations as possible in the app, saving ATDD for system-level, broad-coverage tests.
- Assume that other tests or humans are using the system at the same time and that you can't mutate data another test is using. Upholding this constraint is what enables you to parallelize your tests.
- Keep a base set of immutable data that is scripted/reseedable so that you can reset the DB to a given base/seeded state.
- For scenarios that need to mutate data, create a new entity for that scenario. Use the app's API if needed; if it doesn't have an API for that, make one. :)
- You will need to define where the lines are for im/mutable data, e.g. is adding another item to a collection considered "mutating" the data? If not, then your test can add a new item and just use that.
- Use the app's APIs to assist in assertions, e.g. to validate the data set or counts of things, call the backend to make sure the UI is correct.
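The seed/reset and per-scenario-entity rules above can be sketched in a few lines. This is a minimal in-memory illustration, not a real framework: `SEED_DATA`, `reset_db`, and `create_entity` are invented names, and a real setup would reseed an actual database.

```python
# Minimal in-memory sketch of the seed/reset and per-scenario-entity pattern.
# All names here are illustrative, not from any real framework.
import copy
import uuid

# Immutable base data, scripted so the "DB" can always be reseeded.
SEED_DATA = {"products": [{"id": "p1", "name": "widget"}]}

def reset_db():
    """Return a fresh mutable copy of the scripted seed state."""
    return copy.deepcopy(SEED_DATA)

def create_entity(db, collection, **fields):
    """Give each mutating scenario its own new entity instead of touching
    shared seed rows, so parallel tests never collide on the same data."""
    entity = {"id": str(uuid.uuid4()), **fields}
    db.setdefault(collection, []).append(entity)
    return entity

# A mutating test works only against the entity it created:
db = reset_db()
order = create_entity(db, "orders", product_id="p1", qty=2)
assert order in db["orders"]
assert db["products"] == SEED_DATA["products"]  # seed data untouched
```

The same shape carries over to a real system: the seed script plays the role of `SEED_DATA`, and the app's API plays the role of `create_entity`.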
You will never be "done", and your version of done will vary by feature at different points in a product's life cycle. Early prototypes might not have exhaustive tests or pixel-perfect styling. Once you start releasing your code to the "public", you need to tighten things up. Here's what to look for:
- Designs reviewed and understood by the business and developers (make sure everyone knows what "done" means for the story at that point in time)
- Unit tests are written and are green (make sure it works)
- Acceptance tests written for common cases and run in CI (Selenium, HTTPClient, or Appium)
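As a self-contained illustration of the HTTP-level acceptance checks mentioned above, the sketch below spins up a tiny in-process server and asserts against its response. The `/health` handler is invented for the example; a real acceptance test would target the app's own server in CI.

```python
# Sketch of an HTTPClient-style acceptance check. The handler below is a
# stand-in for the real app; the endpoint and payload are illustrative.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep CI output quiet
        pass

# Port 0 lets the OS pick a free port, so parallel CI jobs don't collide.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/health"
with urllib.request.urlopen(url) as resp:
    payload = json.load(resp)

server.shutdown()
assert payload == {"status": "ok"}
```

Because it runs headless and fast, this style of check is cheap to keep green in CI, reserving Selenium/Appium runs for flows that genuinely need a browser or device.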
Every product has a unique risk profile; we need to measure it and create a testing strategy that addresses those risks. Test coverage comes from a variety of methods: no purely automated approach can provide sufficient coverage by itself. Likewise, no purely manual approach can appropriately validate a product; time and staffing constraints usually prevent exhaustive manual testing strategies from being effective.
A balanced approach is required, one that manages risk at each layer and provides crucial feedback to the team via Continuous Integration as well as by manual validation. Broadly speaking, our testing coverage comes from some combination of unit tests, acceptance tests, and exploratory testing.
Ideally, we're able to get the bulk of our feedback on the quality of the product via automated channels, like Continuous Integration. This allows manual/exploratory testing efforts to be better spent on activities that can't be automated.
```
# manages resque-pool, which in turn manages a collection of resque workers
start on runlevel [2345]
stop on runlevel [!2345]
respawn
env TERM_CHILD=1
kill timeout 60
```