Overview:
- Why? What? How?
- Metrics, Success/Failure criteria
- Given/When/Then? How long? How many? What is required? Inputs/Outputs/Trends?
- Decisions and Artifacts
- Automated metric grabber (per hour, per day); Side effects
STEP 1: Change Request
title: WHAT: "How might we increase the number of orders?"
STEP 2: Confirmation
title: WHY: "We need business to grow."
title: WHAT: "Increase the number of sales per week."
color: 200, 200, 200
title: Assumptions: Results of the brainstorm
- [x] A1: Web traffic can be converted to sales
- [ ] A2: Customer reviews can trigger sales
- [ ] A3: Discount with an expiration date can trigger sales
- [ ] A4: Payback with an expiration date can trigger sales
- [ ] A5: Instagram train/giveaway can trigger sales
- [ ] A6: ...continue...
title: Voting Results: selected A1 as the primary action
STEP 3: Execution
title: Metrics:
- [ ] M1: number of page views/impressions
- [ ] M2: number of orders
- [ ] M3: order amount ($)
- [ ] M4: number of drops/exits/cancellations
- [ ] M5: cost per customer ($)
- [ ] M6: conversion rate (%)
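The "automated metric grabber" that collects these values every hour can be sketched as follows. This is a minimal illustration, not the project's actual implementation: the `fetch` source, table name, and metric keys are assumptions made for the example, with SQLite standing in for whatever database is used.

```python
import sqlite3
from datetime import datetime, timezone

# Illustrative keys corresponding to metrics M1-M6 above.
METRICS = ["impressions", "orders", "order_amount",
           "cancellations", "customer_cost", "conversion_rate"]

def save_snapshot(conn: sqlite3.Connection, snapshot: dict) -> None:
    """Persist one hourly snapshot of the experiment metrics."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS metrics (ts TEXT, name TEXT, value REAL)"
    )
    ts = datetime.now(timezone.utc).isoformat()
    conn.executemany(
        "INSERT INTO metrics (ts, name, value) VALUES (?, ?, ?)",
        [(ts, name, snapshot[name]) for name in METRICS],
    )
    conn.commit()

# Example: one snapshot (numbers are invented), run hourly by a scheduler.
conn = sqlite3.connect(":memory:")
save_snapshot(conn, {
    "impressions": 1200, "orders": 30, "order_amount": 840.0,
    "cancellations": 4, "customer_cost": 6.5, "conversion_rate": 2.5,
})
rows = conn.execute("SELECT COUNT(*) FROM metrics").fetchone()[0]  # one row per metric
```

In practice the hourly trigger would come from a scheduler (cron, a task queue, etc.); storing one row per metric keeps trend queries simple.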
title: Side effects/artifacts:
- [ ] E1: Why was the order canceled? (examples: Price too high, not proper quality, ...)
- [ ] E2: Where does the user exit from the "making order" sequence? (examples: too many options in order, on payment step, on personal details, ...)
- [ ] E3: Session duration
- [ ] E4: Bounce Rate
- [ ] E5: New vs. Returning visitor
- [ ] E6: Average pages/steps per session
- [ ] E7: Cost of Execution
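Side effect E2 (where users exit the "making order" sequence) can be derived from raw per-session step events. A minimal sketch, assuming invented step names and that a session's event list ends at the last step the user reached:

```python
from collections import Counter

def exit_step_counts(sessions):
    """For each session (ordered list of visited steps), count the
    step at which the user left without completing the order."""
    exits = Counter()
    for steps in sessions:
        if steps and steps[-1] != "done":  # "done" means the order completed
            exits[steps[-1]] += 1
    return exits

counts = exit_step_counts([
    ["cart", "options"],                                          # left at options
    ["cart", "options", "personal_details", "payment"],           # left at payment
    ["cart", "options", "personal_details", "payment", "done"],   # completed
])
```

Sorting the resulting counts immediately answers E2's examples ("too many options in order, on payment step, on personal details, ...").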
title: Success/Failure criteria
- [ ] Fail. Customer cost is too high (>$15 per customer)
- [ ] Success. Conversion rate >10%
- [ ] Success. Customer cost is low (<$5 per customer)
- [ ] Success. Too many orders (if needed, stop the experiment or reduce its size)
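The criteria above can be encoded as a simple check. The thresholds ($15, $5, 10%) come from the list; the function name and labels are illustrative, and since the checklist doesn't state a precedence between criteria, this sketch assumes the failure condition wins:

```python
def evaluate(customer_cost: float, conversion_rate: float) -> str:
    """Apply the success/failure thresholds from the checklist."""
    if customer_cost > 15:                        # Fail: customer cost too high
        return "fail"
    if conversion_rate > 10 or customer_cost < 5:  # either success criterion
        return "success"
    return "inconclusive"

r1 = evaluate(18.0, 12.0)  # high cost outweighs good conversion -> "fail"
r2 = evaluate(6.0, 12.0)   # conversion rate above 10% -> "success"
r3 = evaluate(6.0, 4.0)    # neither criterion met -> "inconclusive"
```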
title: Should we execute it?
- [ ] is the experiment atomic?
- [ ] are risks of failure identified (roadblocks)? What can we change to reduce risks?
- [ ] is implementation time short?
- [ ] can we score the test's priority/value/impact?
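One common way to score a test's priority/value/impact, as the last item asks, is an ICE score. The source doesn't prescribe this method; it is shown here only as one possible answer to the question:

```python
def ice_score(impact: int, confidence: int, ease: int) -> float:
    """ICE score: each factor rated 1-10; higher means run the test sooner."""
    return (impact * confidence * ease) / 10.0

score = ice_score(8, 6, 9)  # invented ratings for illustration
```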
STEP 4: Decisions and Artifacts
title: Required for execution: select one option only
- [ ] Organic traffic
- [ ] Purchased traffic (ads networks)
- [ ] Purchased traffic (3rd party, freelance)
- [ ] Purchased traffic (social networks ads)
title: Duration: 1 week (or until we reach 10'000 impressions)
title: Size: 10'000 impressions
- [ ] website is ready to accept this amount of traffic
- [ ] business is ready for a short overload (too many orders)
- [ ] website functions properly in the "order sequence"
- [ ] metrics collected by analytics from all pages (M1-M...)
- [ ] can we divide the experiment into two equal groups? (One group **without** experiment (clean group), another with experiment (test group))
- [ ] clean run (no other experiments executed)
- [ ] metrics extracted every hour and saved to the database
- [ ] tested something that can trigger sales (texts, graphics, layouts, colors, etc.)
- [ ] collected metrics are visualized with calculated trends
- [ ] expected trend defined for detecting anomalous behavior
- [ ] comparison to the clean group's results visualized
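Dividing traffic into two equal groups, as the checklist requires, is usually done deterministically so the same user always lands in the same group regardless of request order. A sketch under that assumption (the hash choice and group names are illustrative):

```python
import hashlib

def assign_group(user_id: str) -> str:
    """Deterministic 50/50 split: hash a stable user id and use the
    parity of the first digest byte to pick the group."""
    digest = hashlib.sha256(user_id.encode()).digest()
    return "test" if digest[0] % 2 == 0 else "clean"

# The split is stable per user and roughly even over many users.
groups = [assign_group(f"user-{i}") for i in range(1000)]
```

The "clean" group sees no experiment; only the "test" group does, which makes the trend comparison between the two groups meaningful.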
title: Decisions (Stop|Adjust|Scale up)
title: Success? How can we apply this at scale?
- [ ] convert experiment to a business rule
- [ ] identify failure criteria that should switch off this business rule
title: Failure? What is the reason for the failure?
color: 255, 0, 0
- [ ] confirm that no outside forces were involved in the test (for example: infrastructure failure)
title: Results are unclear? What influenced the experiment? Can we eliminate the side risks and repeat the experiment?
color: 255, 251, 45
- [ ] test is not atomic
- [ ] test does not produce a statistically significant difference
- [ ] test results are influenced by an outside force
color: 48, 96, 255
- [ ] Metric-collection code should become part of the MAIN codebase
- [ ] Any findings should be documented and added to the backlog