Matthew Wilson w1150n

  • Lab Zero Innovations
  • San Francisco, CA
@w1150n
w1150n / resque_pool_upstart
Last active August 31, 2015 22:21
Resque Pool Upstart Script
# manages resque-pool, which in turn manages a collection of resque workers
start on runlevel [2345]
stop on runlevel [!2345]
respawn
env TERM_CHILD=1
kill timeout 60
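
The stanzas above configure supervision but never start a process; a complete upstart job also needs an exec (or script) stanza. A minimal sketch of the missing piece, assuming a deploy user and a Capistrano-style release path (both illustrative, not from the original gist):

# --- assumed completion: user, path, and environment are illustrative ---
setuid deploy
chdir /var/www/app/current
exec bundle exec resque-pool --environment production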
@w1150n
w1150n / gist:f668f40190c03c7686a7
Last active August 29, 2015 14:11
LZ Testing Strategy

Testing Strategy

Overview

Every product has a unique risk profile; we need to measure that profile and create a testing strategy that addresses those risks. Test coverage comes from a variety of methods: no purely automated approach can provide sufficient coverage by itself, and likewise no purely manual approach can appropriately validate a product, since limited time and human resources usually make exhaustive manual testing impractical.

A balanced approach is required, one that manages risk at each layer and provides crucial feedback to the team via Continuous Integration as well as manual validation. Broadly speaking, our test coverage comes from some combination of unit tests, acceptance tests, and exploratory testing.
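
As a hypothetical illustration of those layers (the Cart class and spec below are invented for this sketch, not taken from any LZ project): a unit test exercises one rule in isolation and runs in milliseconds, an acceptance test would drive the same behavior through the UI, and exploratory testing probes whatever neither layer anticipated.

# cart_spec.rb: run with `ruby cart_spec.rb` (requires the rspec gem)
require "rspec/autorun"

# Illustrative domain object; fast unit tests cover its variations.
class Cart
  def initialize
    @prices = []
  end

  def add(price)
    @prices << price
  end

  def total
    @prices.sum
  end
end

RSpec.describe Cart do
  it "sums item prices" do
    cart = Cart.new
    cart.add(5)
    cart.add(7)
    expect(cart.total).to eq(12)
  end

  it "totals zero when empty" do
    expect(Cart.new.total).to eq(0)
  end
end

# An acceptance test would cover the same flow end to end (e.g. Selenium
# driving a checkout page); exploratory testing covers the rest.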

How to spend your testing dollars

Ideally, we're able to get the bulk of our feedback on the quality of the product via automated channels, like Continuous Integration. This allows manual/exploratory testing effort to be better spent on activities that can only be performed by a human.

Definition of Done

You will never be "done", and your version of done will vary by feature at different points in a product's life cycle. Early prototypes might not have exhaustive tests or pixel-perfect styling. Once you start releasing your code to the "public", you need to tighten things up. Here's what to look for:

  • Designs reviewed and understood by the business and developers

  • Unit tests are written and are green (make sure it works)

  • Acceptance tests written for common cases (Selenium or Appium)

@w1150n
w1150n / gist:1bd418381792c7a835f6
Last active August 29, 2015 14:06
Definition of Done

Definition of Done

You will never be "done", and your version of done will vary by feature at different points in a product's life cycle. Early prototypes might not have exhaustive tests or pixel-perfect styling. Once you start releasing your code to the "public", you need to tighten things up. Here's what to look for:

  • Designs reviewed and understood by the business and developers (make sure everyone knows what "done" means for the story at that point in time)

  • Unit tests are written and are green (make sure it works)

  • Acceptance tests written for common cases and run in CI (Selenium, HTTPClient, or Appium); a sketch follows below
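
A hedged sketch of what one such CI-run acceptance test might look like using the Ruby selenium-webdriver bindings; the URL, field names, and selectors are placeholders, not an actual LZ app:

require "selenium-webdriver"

driver = Selenium::WebDriver.for :chrome   # CI would typically run this headless
wait   = Selenium::WebDriver::Wait.new(timeout: 10)

begin
  # Common case: a user can log in and reach the dashboard.
  driver.get "https://qa.example.com/login"   # placeholder URL
  driver.find_element(name: "username").send_keys("qa_user")
  driver.find_element(name: "password").send_keys("not-a-real-password")
  driver.find_element(css: "button[type='submit']").click

  # Wait for the post-login page instead of sleeping a fixed interval.
  wait.until { driver.find_element(css: ".dashboard") }
  raise "expected dashboard after login" unless driver.current_url.include?("dashboard")
ensure
  driver.quit
end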

@w1150n
w1150n / gist:4c6efd41f020e4a8bff3
Last active August 29, 2015 14:05
Guidelines for managing acceptance test data for a shared QA system.
  1. Use unit tests to cover as many variations as possible in the app, saving ATDD for broad, system-level tests.
  2. Assume that other tests or humans are using the system at the same time and that you can't mutate data another test is using. You will need to uphold this rule to be able to parallelize your tests.
  3. Keep a base set of immutable data that is scripted/reseedable so that you can reset the DB to a given base/seeded state.
  4. For scenarios that need to mutate data, create a new entity for that scenario. Use the app's API if needed; if it doesn't have an API for that, then make one. :)
  5. You will need to define where the lines are for im/mutable data, i.e. is adding another item to a collection considered "mutating" the data? If not, your test can add a new thingie and just use that.
  6. Use the app's APIs to assist in assertions, e.g. to validate the data set or counts of things, call the backend to make sure the UI is correct (a sketch of points 4 and 6 follows this list).
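
A sketch of points 4 and 6, with the same caveats: the endpoint, resource name, and JSON shape below are assumptions for illustration, not a real API.

require "net/http"
require "json"
require "uri"

BASE = "https://qa.example.com/api"   # placeholder shared-QA endpoint

# Point 4: each scenario creates its own entity via the app's API, so it
# never mutates data another test (or human) may be relying on.
def create_widget(name)
  res = Net::HTTP.post(URI("#{BASE}/widgets"),
                       { name: name }.to_json,
                       "Content-Type" => "application/json")
  JSON.parse(res.body).fetch("id")
end

# Point 6: use the API as the oracle when asserting against the UI.
def widget_count
  JSON.parse(Net::HTTP.get(URI("#{BASE}/widgets"))).length
end

# A unique-per-run name keeps parallel runs from colliding on shared data.
id = create_widget("widget-#{Time.now.to_i}-#{rand(10_000)}")
puts "backend now reports #{widget_count} widgets (created ##{id})"
# ...a UI assertion would compare the count displayed to widget_count.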