3D testing

Background

"It's one of the various dimension of regression testing: across environments, rather than through time, or across code revisions. It's something I've already started doing with Blah: comparing a, b, and c versions in the way they respond to particular inputs, attempting to diff the metrics, the generated outputs. I guess this might provide a higher level or more structured way of defining what is tested, managing the cluster, reconciling the diffs etc.

You almost need a new DSL to describe the features of a system and the various exposed endpoints/outputs, and to define a differ for each, one that's able to turn the diffs into prioritised blahs (which could be sitecode changes, regressions, env changes, noise). It could then be run against other systems across each change-dimension. Each blah could then be tagged (closed, resolved) with either a Git commit, a JIRA ticket, or an explanatory note. 3D reg testing, perhaps."
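As a purely illustrative sketch (not from the note; every name here is hypothetical), such a DSL might declare each feature with a fetcher and a differ, and turn diffs into prioritised findings that can later be tagged with a commit, a JIRA ticket, or an explanatory note:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Finding:
    feature: str
    priority: int                 # e.g. 1 = sitecode change, 2 = regression, 3 = noise
    summary: str
    tag: Optional[str] = None     # git commit, JIRA ticket, or explanatory note

@dataclass
class Feature:
    name: str
    fetch: Callable[[str], str]          # fetch the output of one system version
    differ: Callable[[str, str], list]   # turn two outputs into a list of diff strings

    def compare(self, system_a: str, system_b: str) -> list:
        a, b = self.fetch(system_a), self.fetch(system_b)
        return [Finding(self.name, priority=2, summary=d) for d in self.differ(a, b)]

# Example: compare how two versions of a search endpoint respond to the same input.
def fetch_search(system: str) -> str:
    # In reality this would hit an HTTP endpoint on `system`; canned here.
    return {"v1": "10 results", "v2": "9 results"}[system]

def line_differ(a: str, b: str) -> list:
    return [] if a == b else [f"{a!r} != {b!r}"]

search = Feature("search-endpoint", fetch_search, line_differ)
for finding in search.compare("v1", "v2"):
    finding.tag = "JIRA-123"      # "close" the finding against a ticket
    print(finding)
```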

Goal

Represent software changes in multiple dimensions (use matrices!). The goal is to automatically test across all dimensions (with clever logic to strip out inconsequential changes, e.g. README-only changes) and to provide a visualisation, or rather a navigable 3D space, of software changes.
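One way to picture the matrix idea (illustrative only; the axis values below are made up): each change dimension is an axis, each combination is a cell of the 3D space, and inconsequential changes are pruned before scheduling runs.

```python
from itertools import product

# Illustrative axes; in practice these would come from git, the environment
# registry, and the test-data store.
subjects     = ["commit-a1b2", "commit-c3d4-docs-only"]   # system under test
environments = ["vm-2gb", "docker-4gb"]                   # where it runs
inputs       = ["dataset-small", "dataset-large"]         # what it is fed

def is_inconsequential(subject: str) -> bool:
    # Placeholder for the "clever logic", e.g. skip commits that only touch the README.
    return subject.endswith("docs-only")

# Every remaining cell of the 3D space is a candidate test run.
cells = [
    (subject, env, data)
    for subject, env, data in product(subjects, environments, inputs)
    if not is_inconsequential(subject)
]
print(len(cells), "runs to schedule")   # 1 consequential commit * 2 envs * 2 inputs = 4
```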

Help with bug and regression discovery. Tags offer searchability and grouping by individual developer, component, etc.

Change(able) dimensions: inputs

  1. Subject ("System under Test"). Varies / diffed by: commit (git, SVN, etc.)
  2. Environment (server, VM, docker image). Varies / diffed by: RAM change, environment variable change, commit against image, etc.
  3. Inputs. Varies / diffed by: change in test data, HTTP request, etc.

Change(able) dimensions: outputs

  1. Outputs (for the user, or test output). Diffed by IO-stream output (file, screen, HTTP response); see the differ sketch after this list.
  2. Characteristics (metrics)
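A hedged sketch of differs for these two output dimensions (the function names and the 5% tolerance are assumptions, not from the note): exact line-level diffing for IO-stream output, tolerance-based comparison for metrics.

```python
import difflib

def diff_stream(old: str, new: str) -> list:
    """Diff captured IO-stream output (file, screen, HTTP response) line by line."""
    lines = difflib.unified_diff(old.splitlines(), new.splitlines(), lineterm="")
    return [l for l in lines
            if l.startswith(("+", "-")) and not l.startswith(("+++", "---"))]

def diff_metrics(old: dict, new: dict, tolerance: float = 0.05) -> list:
    """Flag characteristics (metrics) that drift by more than `tolerance` (5% by default)."""
    drifted = []
    for key in sorted(old.keys() & new.keys()):
        if old[key] and abs(new[key] - old[key]) / abs(old[key]) > tolerance:
            drifted.append(f"{key}: {old[key]} -> {new[key]}")
    return drifted

print(diff_stream("200 OK\nbody=42", "200 OK\nbody=43"))           # ['-body=42', '+body=43']
print(diff_metrics({"latency_ms": 120.0}, {"latency_ms": 180.0}))  # ['latency_ms: 120.0 -> 180.0']
```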

Results / insights

Tests are run across all dimensions; differs are run [TBC]. A rough sketch of the loop follows.
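This is only a self-contained illustration under assumed names (`run_suite` and the cell values are placeholders): execute the suite in every cell of the change space and diff each cell against a chosen baseline.

```python
from itertools import product

def run_suite(subject: str, environment: str, test_input: str) -> str:
    # Placeholder: deploy `subject` in `environment`, drive it with `test_input`,
    # and capture the output (or metrics) to be diffed.
    return f"output[{subject}/{environment}/{test_input}]"

cells = list(product(["commit-a1b2", "commit-c3d4"], ["vm-2gb"], ["dataset-small"]))
baseline, *others = cells
baseline_output = run_suite(*baseline)

# Cells whose output differs from the baseline become candidate findings to triage and tag.
candidates = [cell for cell in others if run_suite(*cell) != baseline_output]
print(candidates)
```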

Questions

  1. [Q] How do dependencies fit in? [A] There's no need to model them separately; just treat them as a change dimension.
  2. [Q] What are the tests? [A] Any type of automated test that can be 'discovered', annotated, parsed, etc. It should be possible to extract a name, a change dimension, and tags (see the discovery sketch below).
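As a hypothetical illustration of that discovery/annotation idea (the decorator and attribute names are made up): a marker records each test's change dimension and tags, and a small scanner extracts name, dimension, and tags.

```python
import inspect
import sys

def dimension_test(dimension: str, tags: tuple = ()):
    """Hypothetical marker recording which change dimension a test exercises, plus its tags."""
    def mark(func):
        func._dimension = dimension
        func._tags = tags
        return func
    return mark

@dimension_test("environment", tags=("caching", "team-a"))
def test_cache_size_change_is_harmless():
    assert True   # a real test would compare captured outputs/metrics

def discover(module) -> list:
    """Scan a module for marked tests and extract name, change dimension, and tags."""
    return [
        {"name": name, "dimension": func._dimension, "tags": func._tags}
        for name, func in inspect.getmembers(module, inspect.isfunction)
        if hasattr(func, "_dimension")
    ]

print(discover(sys.modules[__name__]))
```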