@adamf321
Last active August 4, 2016 05:49

Moxie QA Lead Test Project

Hi there, thanks for your interest in working with Moxie! We have devised the following test to give you the opportunity to show us what you can do.

Overview

Take the following mobile design: https://invis.io/AX6SXHN2F

Devise a test plan and workflow for this website. Take into account the following:

  • A fully responsive site will be built, working from high-res desktops down to small mobile devices.
  • Only consider the login, register, and lost-password functions, and the feed screen.
  • The site will be built as an SPA using Angular. It will use a WordPress backend which will expose data via the WP-API.
  • Development is done using an Agile process with 1-week sprints.
  • Your test plan should cover pixel-perfect implementation as well as functional tests.

Deliverables:

  • A test plan showing all the relevant cases. Use whichever format you think best.
  • An explanation of the whole testing lifecycle and how testing fits into the project workflow.

Bonus Points

  • Explain how automated testing could be used and the benefits.
@carafloyd23

NEWSapp Test Plan

Table of Contents

  1. INTRODUCTION
    1.1. Purpose
    1.2. Project Overview
    1.3. Audience
  2. TEST STRATEGY
    2.1. Test Objectives
    2.2. Test Assumptions
    2.3. Test Principles
    2.4. Data Approach
    2.5. Scope and Levels of Testing
    2.5.1. Automation Test
    2.5.2. Functional Test
    TEST ACCEPTANCE CRITERIA
    TEST DELIVERABLES
    MILESTONE LIST
    2.5.3. User Acceptance Test (UAT)
    TEST DELIVERABLES
    2.6. Test Effort Estimate
  3. EXECUTION STRATEGY
    3.1. Entry and Exit Criteria
    3.2. Test Cycles
    3.3. Validation and Defect Management
    3.4. Test Metrics
    3.5. Defect Tracking & Reporting
  4. TEST MANAGEMENT PROCESS
    4.1. Test Management Tool
    4.2. Test Design Process
    4.3. Test Execution Process
    4.4. Test Risks and Mitigation Factors
    4.5. Communications Plan and Team Roster
    4.6. Role Expectations
    4.6.1. Project Management
    4.6.2. Test Planning (Test Lead)
    4.6.3. Test Team
    4.6.4. Test Lead
    4.6.5. Development Team
  5. TEST ENVIRONMENT
  1. INTRODUCTION
    1.1. Purpose
    This test plan describes the testing approach and overall framework that will drive the testing of NEWSapp, Your Daily News Feed.
    This document introduces:
    • Test Strategy: the rules the test will be based on, including the givens of the project (e.g. start/end dates, objectives, assumptions) and a description of the process for setting up a valid test.
    • Execution Strategy: how the test will be performed, and the process for identifying and reporting defects and for fixing and implementing fixes.
    • Test Management: the process for handling the logistics of the test and all the events that come up during execution.
    1.2. Project Overview
    NEWSapp is a powerful app that gives people the ability to view relevant information in their news feed, with the latest updates on what is going on around them, updating throughout the day and night so they do not miss a thing.
    1.3. Audience
    • Project team members perform the tasks specified in this document and provide input and recommendations on it.
    • The Project Manager plans for the testing activities in the overall project schedule, reviews the document, tracks the performance of the test according to the tasks herein specified, approves the document, and is accountable for the results.
    • The stakeholders' representatives and participants may take part in the UAT to ensure the business is aligned with the results of the test.
    • The Technical Team ensures that the test plan and deliverables are in line with the design, provides the environment for testing, and follows the procedures related to defect fixes.
    • Business Analysts provide input on functional changes.
  2. TEST STRATEGY
    2.1. Test Objectives
    The objective of the test is to verify the functionality of NEWSapp.
    The test will execute and verify the test scripts; identify, fix, and retest all high- and medium-severity defects per the entrance criteria; and prioritize lower-severity defects for future fixing.
    The final product of the test is twofold:
    • A production-ready application;
    • A set of stable test scripts that can be reused for Functional and UAT test execution.
    2.2. Test Assumptions
    Key Assumptions
    • Production-like data is required and must be available in the system prior to the start of Functional Testing.
    General
    • Testing will be carried out once the build is ready for testing.
    • All defects will be accompanied by a snapshot.
    • The Test Team will be provided with access to the test environment via VPN connectivity.
    • Test case design activities will be performed by the QA Group.
    • Test environment and preparation activities will be owned by the Dev Team.
    • The Dev Team will provide defect fix plans based on the defect meetings held during each cycle; these will be communicated to the Test Team prior to the start of defect fix cycles.
    • The Business Analyst will review and sign off all test cases prepared by the Test Team prior to the start of test execution.
    • Defects will be tracked through HP ALM only. Any planned defect fixes will be shared with the Test Team prior to applying the fixes to the test environment.
    • The Project Manager/Business Analyst will review and sign off all test deliverables.
    • The project will provide test planning, test design, and test execution support.
    • The Test Team will manage the testing effort in close coordination with the Project Manager/Business Analyst.
    • The project team has the knowledge and experience necessary, or has received adequate training in the system, the project, and the testing processes.
    • There will be no environment downtime during the test due to outages or defect fixes.

Functional Testing
• During Functional testing, the testing team will use the preloaded data available in the system at the time of execution.

UAT
• UAT test execution will be performed by the end users; the QA Group will provide support in creating the UAT scripts.

2.3. Test Principles
• Testing will be focused on meeting the business objectives.
• There will be common, consistent procedures for all teams supporting testing activities.
• Testing processes will be well defined, yet flexible, with the ability to change as needed.
• Testing activities will build upon previous stages to avoid redundancy or duplication of effort.
• Testing environment and data will emulate a production environment as much as possible.
2.4. Data Approach
• The functional testing environment will contain pre-loaded test data, which will be used for testing activities.
2.5. Scope and Levels of Testing
2.5.1. Automation Test
PURPOSE: to look inside the application and inspect memory contents, data tables, file contents, and internal program states to determine whether the product is behaving as expected.
SCOPE: We will use the existing Selenium test scripts and create new test scripts as needed.
TESTERS: Testing Team.
METHOD: Automation testing is carried out against the application with test scripts covering positive and negative scenarios.
TIMING: at the beginning of each cycle.
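For illustration, a minimal Selenium WebDriver script (Python) of the kind referred to above might look like the sketch below. The base URL, element IDs, and error-message selector are hypothetical placeholders, since the design does not specify them; positive scenarios would follow the same pattern.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

BASE_URL = "https://newsapp.example.com"  # hypothetical test-environment URL

def test_login_with_wrong_password_shows_error():
    """Negative scenario: a wrong password must not log the user in."""
    driver = webdriver.Chrome()
    try:
        driver.get(BASE_URL + "/login")
        # Element IDs below are hypothetical placeholders for the real locators.
        driver.find_element(By.ID, "email").send_keys("user@example.com")
        driver.find_element(By.ID, "password").send_keys("wrong-password")
        driver.find_element(By.ID, "login-submit").click()
        # Expect an inline error message rather than a redirect to the feed.
        error = WebDriverWait(driver, 10).until(
            EC.visibility_of_element_located((By.CSS_SELECTOR, ".error-message"))
        )
        assert "invalid" in error.text.lower()
        assert "/feed" not in driver.current_url
    finally:
        driver.quit()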
2.5.2. Functional Test
PURPOSE: Functional testing will be performed to check the functions of the application, by feeding inputs and validating the outputs.
SCOPE: login failure, password resets, upgrade notification, positive/negative testing.
TESTERS: Testing Team.
METHOD: The test will be performed according to the functional scripts stored in HP ALM.
TIMING: after the exploratory test is completed.
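Because the backend exposes data through the WP-API, some functional checks can also be run directly against the REST layer. A minimal sketch using Python's requests library is shown below; the base URL is a placeholder, while /wp-json/wp/v2/posts is the standard WordPress REST endpoint a feed screen would typically consume.

import requests

BASE_URL = "https://newsapp.example.com"  # hypothetical test-environment URL

def test_feed_endpoint_returns_posts():
    # The feed screen is expected to consume the core WP REST API posts endpoint.
    resp = requests.get(BASE_URL + "/wp-json/wp/v2/posts",
                        params={"per_page": 10}, timeout=10)
    assert resp.status_code == 200
    posts = resp.json()
    assert isinstance(posts, list) and posts, "feed should return at least one post"
    for post in posts:
        # "id" and "title" are standard fields on a WordPress post object.
        assert "id" in post and "title" in post

def test_feed_endpoint_rejects_out_of_range_page():
    # Negative scenario: WordPress returns HTTP 400 for a page beyond the last one.
    resp = requests.get(BASE_URL + "/wp-json/wp/v2/posts",
                        params={"page": 999999}, timeout=10)
    assert resp.status_code == 400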
TEST ACCEPTANCE CRITERIA

  1. The approved Functional Specification and use case documents must be available prior to the start of the test design phase.
  2. Test cases must be approved and signed off prior to the start of test execution.
  3. Development must be complete and unit tested with pass status, with results shared with the Testing Team to avoid duplicate defects.
  4. The test environment must have the application installed, configured, and in a ready-to-use state.

TEST DELIVERABLES

S.No.  Deliverable Name              Author                 Reviewer

  1. Test Plan                       Test Lead              Project Manager / Business Analyst
  2. Functional Test Cases           Test Team              Business Analyst (sign-off)
  3. Logging Defects in HP ALM       Test Team              Test Lead / Programming Lead (Vijay)
  4. Daily/weekly status report      Test Team / Test Lead  Test Lead / Project Manager
  5. Test Closure report             Test Lead              Project Manager

MILESTONE LIST
The milestone list is tentative and may change for any of the reasons below:

a) Issues with system environment readiness
b) Changes or additions in scope
c) Any other dependency that impacts effort and timelines

2.5.3. User Acceptance Test (UAT)
PURPOSE: this test focuses on validating the business logic. It allows the end users to complete one final review of the system prior to deployment.
TESTERS: the UAT is performed by the end users.
METHOD: Since the business users are best placed to provide input on business needs and how the system adapts to them, they may perform some validation not contained in the scripts. The Test Team writes the UAT test cases based on input from the end users and the Business Analyst.
TIMING: after all other levels of testing (Exploratory and Functional) are done. Only after this test is completed can the product be released to production.
TEST DELIVERABLES

S.No.  Deliverable Name  Author     Reviewer

  1. UAT Test Cases      Test Team  Business Analyst (sign-off)
  3. EXECUTION STRATEGY
    3.1. Entry and Exit Criteria
    • The entry criteria are the desirable conditions that must hold in order to start test execution; only the migration of the code and fixes needs to be assessed at the end of each cycle.
    • The exit criteria are the desirable conditions that must be met in order to proceed with the implementation.
    • Entry and exit criteria are flexible benchmarks. If they are not met, the test team will assess the risk, identify mitigation actions, and provide a recommendation. All of this is input to the Project Manager for a final "go/no-go" decision.
    • Entry criteria to start the execution phase of the test: the activities listed in the Test Planning section of the schedule are 100% complete.
    • Entry criteria to start each cycle: the activities listed in the Test Execution section of the schedule are 100% complete at each cycle.
    Exit criteria (verified by the Test Team and the Technical Team):
    • 100% of test scripts executed
    • 95% pass rate of test scripts
    • No open Critical or High severity defects
    • 95% of Medium severity defects closed
    • All remaining defects either cancelled or documented as Change Requests for a future release
    • All expected and actual results captured and documented with the test script
    • All test metrics collected, based on reports from HP ALM
    • All defects logged in HP ALM
    • Test Closure Memo completed and signed off
    • Test environment cleanup completed, and a new backup of the environment taken

3.2. Test Cycles
• There will be two cycles for functional testing. Each cycle will execute all the scripts.
• The objective of the first cycle is to identify any blocking or critical defects and most of the high-severity defects. Some workarounds are expected in order to get through all the scripts.
• The objective of the second cycle is to identify the remaining high- and medium-severity defects, remove the workarounds from the first cycle, correct gaps in the scripts, and obtain performance results.
• The UAT will consist of one cycle.
3.3. Validation and Defect Management
• Testers are expected to execute all the scripts in each of the cycles described above. However, testers may also do additional testing if they identify a possible gap in the scripts. This is especially relevant in the second cycle, when the Business Analysts join the execution of the test, since they have deeper knowledge of the business processes. If a gap is identified, the scripts and traceability matrix will be updated and a defect logged against the scripts.
• Defects will be tracked through HP ALM only. The technical team will gather information daily from HP ALM, request additional details from the Defect Coordinator, and work on fixes.
• Responsibilities in the defect lifecycle:
  - Tester: opens defects, links them to the corresponding script, assigns an initial severity and status, retests, and closes the defect.
  - Defect Manager: reviews the severity of defects, facilitates the fix and its implementation with the technical team, communicates to testers when the test can continue or should be halted, requests retests, and updates status as the defect progresses through the cycle.
  - Technical team: reviews HP ALM daily, asks for details if necessary, fixes the defect, communicates to the Defect Manager when the fix is done, and implements the solution per the Defect Manager's request.
Defects found during testing will be categorized in the bug-tracking tool (HP ALM) as follows:

Severity      Impact
1 (Critical)  The bug crashes the system, causes file corruption, or causes potential data loss; causes an abnormal return to the operating system (crash or system failure message); or causes the application to hang, requiring a reboot.
2 (High)      The bug causes a lack of vital program functionality, with a workaround.
3 (Medium)    The bug degrades the quality of the system, but there is an intelligent workaround for achieving the desired functionality (for example, through another screen); or the bug prevents other areas of the product from being tested, though other areas can be tested independently.
4 (Low)       There is an insufficient or unclear error message with minimal impact on product use.
5 (Cosmetic)  There is an insufficient or unclear error message with no impact on product use.

3.4. Test Metrics
Test metrics to measure the progress and level of success of the test will be developed and shared with the project manager for approval. Some of these metrics are:

• Test preparation & execution status: reports % complete, % pass, and % fail, plus defect status by severity (open, closed, other). Frequency: weekly, or daily (optional).
• Daily execution status: reports pass, fail, and total defects, highlighting showstopper/critical defects. Frequency: daily.
• Project weekly status report: project-driven reporting (as requested by the PM). Frequency: weekly, if the project team needs an update apart from the daily one and a template is available with the project team.
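As a sketch of how these percentages could be derived mechanically, the snippet below computes the preparation/execution figures from an exported list of test results. The field names ("case", "status") are assumptions about the export format, not a documented HP ALM schema.

from collections import Counter

def execution_metrics(results):
    """results: list of dicts like {"case": "TC-001", "status": "Pass"}."""
    counts = Counter(r["status"] for r in results)
    executed = counts["Pass"] + counts["Fail"]
    total = len(results)
    return {
        "% complete": round(100.0 * executed / total, 1) if total else 0.0,
        "% pass": round(100.0 * counts["Pass"] / executed, 1) if executed else 0.0,
        "% fail": round(100.0 * counts["Fail"] / executed, 1) if executed else 0.0,
    }

print(execution_metrics([
    {"case": "TC-001", "status": "Pass"},
    {"case": "TC-002", "status": "Fail"},
    {"case": "TC-003", "status": "Not Run"},
]))  # {'% complete': 66.7, '% pass': 50.0, '% fail': 50.0}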

3.5. Defect Tracking & Reporting
The following flowchart depicts the defect tracking process: [flowchart image not reproduced here]

  4. TEST MANAGEMENT PROCESS

4.1. Test Management Tool
HP Application Lifecycle Management (ALM) is the tool used for test management. All testing artifacts, such as test cases and test results, are maintained in HP ALM.
• A project-specific folder structure will be created in HP ALM to manage the status of this project.
• Each resource on the Testing Team will be provided with read/write access to add and modify test cases in HP ALM.
• During the test design phase, all test cases are written directly into HP ALM. Any change to a test case is updated directly in HP ALM.
• Each tester directly accesses their assigned test cases and updates the status of each executed step in HP ALM.
• Any defect encountered is raised in HP ALM and linked to the particular test case/test step.
• During defect fix testing, defects are re-assigned to the tester to verify the fix. The tester verifies the defect fix and updates the status directly in HP ALM.
• Various reports can be generated from HP ALM to provide the status of test execution: for example, test cases executed, passed, and failed; the number of open defects; defects by severity; etc.
4.2. Test Design Process

• The tester will understand each requirement and prepare corresponding test cases to ensure all requirements are covered.
• Each test case will be mapped to use cases and requirements as part of the traceability matrix.
• Each test case will undergo review by the Business Analyst; review defects are captured and shared with the Test Team. The testers rework the review defects and finally obtain approval and sign-off.
• During the preparation phase, testers will use the prototype, use cases, and functional specification to write step-by-step test cases.
• Testers will maintain a clarification tracker sheet, which will be shared periodically with the Requirements team, and the test cases will be updated accordingly. Clarifications may sometimes lead to change requests, be ruled out of scope, or detail implicit requirements.
• Sign-off for the test cases will be communicated via email by the Business Analyst.
• Any subsequent changes to test cases will be updated directly in HP ALM.
4.3. Test Execution Process

• Once all test cases are approved and the test environment is ready, the testers will start with an exploratory test of the application to ensure it is stable enough for testing.
• Each tester is assigned test cases directly in HP ALM.
• Testers must ensure they have the necessary access to the testing environment and to HP ALM for updating test status and raising defects. Any issues will be escalated to the Test Lead and, in turn, to the Project Manager.
• Any showstopper found during exploratory testing will be escalated to the respective development team for a fix.
• Each tester performs step-by-step execution and updates the execution status, entering Pass or Fail for each step directly in HP ALM.
• Testers will prepare a run chart with day-wise execution details.
• For any failure, a defect will be raised in HP ALM per the severity guidelines, detailing the steps to reproduce along with screenshots where appropriate.
• Daily test execution status and defect status will be reported to all stakeholders.
• The Testing Team will participate in defect triage meetings to ensure all test cases are executed with either a pass or fail result.
• Defects found outside the documented test steps must also be captured in HP ALM and mapped to the test case, or to the specific step where the issue was encountered, after confirming with the Test Lead.
• This process is repeated until all test cases are fully executed with Pass/Fail status.
• During subsequent cycles, any defect fixes applied will be retested and the results updated in HP ALM during the cycle.
Finally, the standard sign-off and project completion process will be followed.
4.4. Test Risks and Mitigation Factors
SCHEDULE: The testing schedule is tight. If the start of testing is delayed by design tasks, the test cannot be extended beyond the scheduled UAT start date.
Probability: High. Impact: High.
Mitigation:
• The testing team can control the preparation tasks (in advance) and communicate early with the parties involved.
• Some buffer has been added to the schedule for contingencies, although not as much as best practice advises.

RESOURCES: Not enough resources, or resources onboarding too late (the onboarding process takes around 15 days).
Probability: Medium. Impact: High.
Mitigation: Holidays and vacation have been estimated and built into the schedule; deviations from the estimate could result in testing delays.

DEFECTS: Defects are found at a late stage of the cycle, or in a late cycle; defects discovered late are most likely due to unclear specifications and are time-consuming to resolve.
Probability: Medium. Impact: High.
Mitigation: A defect management plan is in place to ensure prompt communication and fixing of issues.

SCOPE: Scope not completely defined; changes to the functionality are not yet finalized or keep changing.
Probability: Medium. Impact: Medium.
Mitigation: The core scope is well defined; functional changes will be tracked and assessed as they are finalized.

NATURAL DISASTERS
Probability: Low. Impact: Medium.
Mitigation: Teams and responsibilities have been spread across two geographic areas. In the event of a catastrophe in one area, there will be resources in the other area to continue the testing activities (although at a slower pace).

NON-AVAILABILITY OF AN INDEPENDENT TEST ENVIRONMENT AND ACCESSIBILITY
Probability: Medium. Impact: High.
Notes: If the environment is not available, the schedule is impacted, leading to a delayed start of test execution.

DELAYED TESTING DUE TO NEW ISSUES: During testing there is a good chance that "new" defects will be identified and become issues that take time to resolve. Defects may also be raised because of unclear document specifications; these can likewise become issues that need time to resolve. If such issues become showstoppers, they will greatly impact the overall project schedule.
Probability: Medium. Impact: High.
Mitigation: If new defects are discovered, the defect management and issue management procedures are in place to provide an immediate resolution.
4.5. Communications Plan and Team Roster
4.6. Role Expectations
The following list defines in general terms the expectations related to the roles directly involved in the management, planning or execution of the test for the project.
S.No.  Role  Name  Contact Info

  1. Project Manager
  2. Test Lead
  3. Business Analyst
  4. Development Lead
  5. Testing Team
  6. Development Team
  7. Technical Lead
    4.6.1. Project Management
    • Project Manager: reviews the content of the Test Plan, Test Strategy, and Test Estimates, and signs off on them.
    4.6.2. Test Planning (Test Lead)
    • Ensure the entrance criteria are used as input before starting execution.
    • Develop the test plan and the guidelines to create test conditions, test cases, expected results, and execution scripts.
    • Provide guidelines on how to manage defects.
    • Attend status meetings in person or via the conference call line.
    • Communicate to the test team any changes that need to be made to the test deliverables or application, and when they will be completed.
    • Provide on-premise or telecommute support.
    • Provide functional (Business Analyst) and technical personnel to the test team (if needed).
    4.6.3. Test Team
    • Develop test conditions, test cases, expected results, and execution scripts.
    • Perform execution and validation.
    • Identify, document and prioritize defects according to the guidance provided by the Test lead.
    • Re-test after software modifications have been made according to the schedule.
    • Prepare testing metrics and provide regular status.
    4.6.4. Test Lead
    • Acknowledge the completion of a section within a cycle.
    • Give the OK to start next level of testing.
    • Facilitate defect communications between testing team and technical / development team.
    4.6.5. Development Team
    • Review testing deliverables (test plan, cases, scripts, expected results, etc.) and provide timely feedback.
    • Assist in the validation of results (if requested).
    • Support the development and testing processes being used to support the project.
    • Certify correct components have been delivered to the test environment at the points specified in the testing schedule.
    • Keep project team and leadership informed of potential software delivery date slips based on the current schedule.
    • Define processes/tools to facilitate the initial and ongoing migration of components.
    • Conduct first line investigation into execution discrepancies and assist test executors in creation of accurate defects.
    • Implement fixes to defects according to schedule.
  5. TEST ENVIRONMENT

Testing will be completed in a Windows environment using Internet Explorer, Firefox, and Google Chrome.

@mzilich

mzilich commented Aug 3, 2016

NEWSapp - A Sample Test Strategy and Plan Outline
(see attached MindMap) for Moxie, by Michael Ilich, July 2016
Strategy
• Manual browser-testing methodologies will be used for all feature testing, including functional and user-experience testing

  • Includes the use of debuggers, cross-browser testing tools, and other greybox tools like MySQL, Charles, etc.
  • Desktop Testing (MacOS/Windows; Chrome/Firefox/Safari/IE)
  • Mobile Testing (iOS/Android; Safari/Chrome/Firefox)
  • "COP FLUNG GUN" techniques (Communication, Orientation, Platform, Function, Location, User Scenarios, Network, Gesture, Guidelines, Updates, Notifications;
    see http://moolya.com/uncategorized/test-mobile-applications-with-cop-who-flung-gun/)
  • Performance and load testing will focus on both the client side and the server side (API), to account for the use of JavaScript and Ajax on the client side
  • Testing that does not require manual judgment (aside from login/logout, registration, and basic navigation) will be prioritized for automation first, as it will be most feasible and thus offer the most time-saving rewards
  • Manual testing scripts will be automated using a tool like Selenium WebDriver, or any other tool(s) specific to the WordPress API (WP-API) that might be available; a brief sketch follows below. After a feature achieves stability over multiple iterations and the Product Team confirms there are no functionality changes ahead on the Product Roadmap, more manual test cases may be scripted for time gains in future Sprints.
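As a minimal sketch of that automation approach, the pytest snippet below runs the same smoke check across two desktop browsers with Selenium WebDriver. The base URL and the ".news-item" selector are hypothetical; Safari and IE would join via a remote grid or a cloud testing service.

import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

BASE_URL = "https://newsapp.example.com"  # hypothetical test-environment URL

@pytest.fixture(params=["chrome", "firefox"])
def driver(request):
    # One fixture instance per browser; the test below runs once for each.
    drv = webdriver.Chrome() if request.param == "chrome" else webdriver.Firefox()
    drv.set_window_size(1280, 800)  # desktop breakpoint; mobile sizes would be parametrized too
    yield drv
    drv.quit()

def test_feed_renders(driver):
    driver.get(BASE_URL + "/feed")
    # ".news-item" is a placeholder for the real feed-item selector.
    items = driver.find_elements(By.CSS_SELECTOR, ".news-item")
    assert items, "feed should render at least one news item"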

Test Plan
NEWSapp

⁃ Introduction
⁃ Personalized, Real-Time News Service Application
⁃     Audience
⁃         Registered Users
⁃             Email Signup
⁃         Returning Users
⁃             Login Credentials
⁃ Fully Responsive Site
⁃     Technical Support
⁃         Operating Systems
⁃             Desktop
⁃                 MAC OS
⁃                 Windows
⁃             Mobile
⁃                 iOS
⁃                 Android
⁃         Browsers
⁃             Chrome
⁃             Firefox
⁃             Safari
⁃         Basic Email Support 
⁃             Registration Validation
⁃             Password Retrieval
⁃     Technical Specs
⁃         Front-End
⁃             SPA (AngularJS)
⁃         Back End
⁃             WordPress (WP-API)
⁃ Testing Scope
⁃ Product
⁃     A/B Testing
⁃ Technical
⁃     Unit Tests
⁃     Load Tests
⁃     Performance Tests
⁃     Popular Browser Tests (e.g. Safari, Chrome, Firefox, IE)
⁃         Desktop
⁃         Mobile
⁃     Cross-Browser Tests (i.e. strings of sessions in different browsers)
⁃         Desktop
⁃         Mobile
⁃     Session Management Tests
⁃         Device Management test cases
⁃             Desktop
⁃             Mobile
⁃         Connectivity test cases
⁃             Offline testing
⁃             Reconnecting to network testing
⁃         Concurrent Session Tests
⁃             Desktop Browsers
⁃             Mobile Browsers
⁃     Automated Tests
⁃         All tests that can be successfully scripted and executed reliably 
⁃             Unit test cases
⁃             Load test cases
⁃             Performance test cases
⁃             Cross-Browser test cases
⁃             Session Management test cases
⁃             Functional (Positive/Negative)
⁃                 Login/Logout
⁃                 Site Navigation
⁃         Automation tools/methodologies specific to the WordPress API (WP-API) are preferable
⁃     Back End Testing (White Box)
⁃         API (Client/Server) 
⁃ Basic User Flow
⁃     Functional 
⁃         Register (a pytest sketch of these cases appears after this outline)
⁃             Positive/Negative test cases
⁃             Boundary test cases
⁃             Email Validation test cases
⁃         User Session Management
⁃             Login/Logout
⁃         Password Recovery
⁃         Newsfeed
⁃             Newsfeed Navigation
⁃                 News Items
⁃                 Search
⁃                 Settings
⁃                 Site Preferences
⁃                     Bookmarks
⁃                     "Loves"
⁃ Exceptions
⁃     What won't be tested
⁃         Content validation tests 
⁃             News Items
⁃             Settings
⁃             Site Preferences
⁃                 Bookmarks
⁃                 "Loves"
⁃             Search Results
⁃         Database tests beyond validation of storage and updates of user credentials and site preferences
⁃         Integration tests with any supporting platforms
⁃         Any Product Versions embedded in Native Applications (i.e. full "builds" that require end-to-end regression testing not amenable to 1 week Sprints)
⁃ Project Teams: Domains and [Tools]
⁃ Developers Domain
⁃     [Confluence, Jira, Chat/Email Apps, and countless others!]
⁃     Unit Testing
⁃     Load Testing
⁃     Performance Testing
⁃     Session Management (Technical/White Box)
⁃         API (Client/Server) 
⁃     Environment Management
⁃     Version Management
⁃ QA Domain
⁃     [MindMap tools, Confluence, Chat/Email apps, TestRail, TestRun, Jira, Pivotal Tracker, Chrome Developer Tools, Firebug, Charles, Xcode, JUnit, Jenkins, Selenium etc.]
⁃     Test Strategy
⁃         Test Plan
⁃             Test Cases
⁃                 Functional
⁃                 User Session Management (Black/Grey Box)
⁃                 Email Validation
⁃                 Device Management (Black Box)
⁃                 A/B Tests
⁃         Defect/Bug Management
⁃             Scripted Test case execution 
⁃             Exploratory test case execution
⁃             Defect/Bug Fix Validation
⁃             Regression/Smoke Testing
⁃         Test Completion Criteria
⁃             Work with Product Team to identify "core" test cases to validate Exit Criteria
⁃             "Test Runs" (sets of test cases) organized according to priority (Exit Criteria)
⁃                 Which/How many test cases have passed/failed for the most current version in the Sprint?
⁃                     Which processed bugs/defects have been designated for fix in future Sprints?
⁃             Work with Developers to compile testing results for the Backend/API
⁃ Product Domain
⁃     [Confluence, Jira, Calendar apps, Chat/Email apps, etc.]
⁃     Product Specifications (formal, e.g. in a document, or informal, e.g. via Q&A)
⁃         User Flow Expectations
⁃             Registration
⁃             Email Validation
⁃             Login/Logout
⁃             User Session 
⁃             Password Retrieval
⁃         UI Design Coordination
⁃             Desktop Dimensions
⁃             Mobile Dimensions
⁃         Cataloguing of all UI Functionality
⁃             Desktop mouse-clicks and navigation
⁃             Mobile finger-taps and gestures
⁃         Definition of Product Support Levels
⁃             What will/won't be guaranteed Product support
⁃             Basic Technical Requirements
⁃             Technical Support Levels
⁃         Prioritization of Product Needs
⁃             "Must-have" criteria
⁃             "Nice to have" criteria
⁃             Definition of acceptable standards of product quality for various stages of release (Exit Criteria)
⁃                 Defect/Bug Analytics review prior to release
⁃                     Discerning which defects/bugs can be "tolerated" in this iteration, with their fix priority postponed for a future Sprint
⁃                 Review of Test Case Results
⁃                     Which/How many test cases are core and must pass for validation of the release
⁃         Marketing Campaigns
⁃             A/B Testing Projects
⁃     Iteration Planning, Design and Management
⁃         Sprint Planning, Design and Management
⁃ QA Deliverables [format]
⁃ Product Specification Review Notes [Confluence]
⁃     QA Expectations, Assumptions and Questions for Clarification
⁃ Test Strategy [Mindmap and Outline]
⁃ Test Plan [Mindmap and Outline]
⁃     Master Suite of Test Cases ["Singular-result" format for product spec validation and future script automation]
⁃         Sub-suite of Smoke test cases for basic validation of functionality and user flow
⁃ Testing Results [Test Rail/Test Run Reports, Jira tickets, Powerpoint, etc]
⁃     Iteration-Specific
⁃         "Test Runs" of Master test case suite
⁃         "Test Runs" of Smoke test case suites
⁃         Defect/Bug Reports
⁃         Back-End/API Testing Results
⁃         Post-Iteration Notes
⁃     Sprint-Specific
⁃         "Test Runs" of Master test case suite
⁃         "Test Runs" of Smoke test case suites
⁃         Defect/Bug Reports
⁃         Back-End/API Testing Results
⁃         Post-Sprint Notes
⁃ Team Standards and Project Ownership
⁃ Developers
⁃     Organize/Maintain Code Branches
⁃     Devise Code Check-In/Release Schedules
⁃         Arrange special deployment schedules and/or configurations for features/stories requiring successive and sequential Sprints to complete
⁃     Run Unit tests on code ready for check-in
⁃     Build and Maintain Test Environments
⁃         Perform Data Migration when needed for User Testing
⁃         Perform Load Testing 
⁃         Conduct Performance Testing
⁃         Work with White Box QA staff (if available) to automate tasks
⁃         Configuration/Settings Management (if needed)
⁃     Work with Development and QA teams to Manage/Complete Iterations/Sprints
⁃         Defect/Bug Triage Fixes
⁃         Code Release Requirements
⁃         Post-Release Requirements
⁃         Review/Critique Testing Deliverables
⁃         Review all Defect/Bug Reports immediately
⁃         Submit Code Fixes for Defects/Bugs according to Schedule and/or Product's Priority scale and/or Severity
⁃         Update Project Team of any potential Delays to Schedule
⁃ QA 
⁃     Prepare/Maintain Testing Resources
⁃         Testing tools configured properly for testing environment(s)
⁃         User test accounts available, if needed
⁃         Other Assets established, if needed
⁃         Work with Developers to Prepare/Maintain Testing Environment(s)
⁃             Deploy new versions
⁃             Migrate/Configure data 
⁃             Configure parameters for environment stability
⁃     Devise/Maintain Test Strategy, Plan and Cases
⁃         Drafts are begun as soon as Product Specifications and/or Test Builds are Updated/Deployed
⁃         Identify "Core" Test Cases to fulfill Exit Criteria set by Product Team
⁃     Begin Testing as soon as Test Environment is deemed fit and stable
⁃     Prioritize defects/bugs found according to Severity
⁃         P1/Showstopper
⁃             Bug/Defect causes App to "crash" or "hang"; data and/or content is permanently corrupted or lost; central user flow is blocked/disrupted, without simple workaround options
⁃         P2/Major
⁃             Bug/Defect causes temporary corruption/loss of data and/or content; partially blocks a user flow, but simple workaround options are available
⁃         P3/Average
⁃             Bug/Defect causes an error or unexpected result with any primary functionality; temporarily disrupts a user flow, without blocking it in any way; distorts the display of data and/or content
⁃         P4/Minor
⁃             Bug/Defect is a cosmetically unappealing or inaccurate display of design; unexpected, but acceptable results for all non-essential functionality
⁃     Keep Developer and Product Teams abreast of testing results per current version, with high severity issues prioritized 
⁃     Verify Fixes to Defects/Bugs ASAP
⁃         Perform Smoke tests to ensure no further regressions in surrounding areas
⁃     Work with Developers to help with/automate White Box Testing
⁃     Work with Development and Product Teams to Manage/Complete Iterations/Sprints
⁃         Report/Escalate any Regressions in New Builds
⁃             Discuss Code Reversion Strategies to meet Sprint goals
⁃                 Adjust Test Plan accordingly for new set of "core" test cases
⁃         Update Project Team of any potential Delays to Schedule
⁃         Defect/Bug Triage Testing and Fix Verification
⁃     Exit Criteria Validation Requirements
⁃         Confirm passage of all core test cases and defect/bug fix verification
⁃         Regression/Smoke Testing (on Staging Environment, if available)
⁃         QA Team "Signs off" on final validation of Exit Criteria
⁃     Post-Release Requirements
⁃         Smoke Testing on Production Environment
⁃         Sprint/Iteration Analytics Reporting
⁃             Code Change Overview
⁃             Stories Completed
⁃             Test Cases Passed/Failed
⁃             Defects/Bugs Found/Fixed
⁃             Performance and Load testing review
⁃             Automation results/gains (if applicable)
⁃ Product
⁃     Devise/Maintain Product Specifications
⁃     Review/Critique Testing Deliverables
⁃     Set Schedules for Iterations and Sprints
⁃     Assign Prioritization rating scale (e.g. P1-P4) to "must haves" and "nice to haves"
⁃     Work with Development and QA teams to Manage/Complete Iterations/Sprints
⁃         Update Project Team of any potential Delays to Schedule
⁃         Work with other Project Managers to coordinate "Project Verticals" and their respective place on the Product Roadmap
⁃     Schedule/Facilitate Iteration/Sprint Planning meetings
⁃         Iteration Planning "Kickoffs"
⁃         Daily Stand-ups (if needed)
⁃         Post-Release Review (if needed)
⁃ Schedules
⁃ Iteration Planning
⁃     Product Roadmap and/or Story/Bug/Task Backlog are parsed into "sets" or Iterations to deliver new features, upgrades, and bug fixes
⁃         Product Team works with Development and QA Teams to write necessary stories/tasks, preliminary development and testing plans
⁃         Iterations are composed of X number of 1-week Sprints, as devised/agreed upon by representatives of the Development (including IT), Product and QA teams 
⁃             Sprint Planning
⁃                 Development tasks are parsed as such so that they can be integrated into a stable version (i.e. pass unit tests), and can be tested and verified within 5 days
⁃                     Any Development work to be released in specific sequence must be organized for testing and release in the desired sequential order respective to Sprint duration
⁃                 Story Backlog is parsed according to Product's Roadmap, with selections estimated just above the capacity for a one week Sprint, in case tasks are finished early and more throughput is possible
⁃                     Singular Stories and their associated tasks Not Linked to other work will be prioritized
⁃                         Unfinished stories/tasks will be moved to future Sprints
⁃                     Interdependent Stories and tasks will be parsed as part of sets, spread out over multiple Sprints
⁃                         "Feature" Sprints composed of only interdependent stories/tasks that don't meet Exit Criteria will be postponed to the next week's Release date
⁃                 Pre-existing defects affecting relevant functionality identified and accounted for
⁃                 Possible gains from Automation (if applicable)
⁃ Sprint Execution
⁃     QA Team begins first cycle of testing, executing all core test cases 
⁃         Failed core test cases are reported as defects/bugs with critical or major severity
⁃             Any regressions to existing functionality are reported as defects/bugs with critical or major severity
⁃         Developer fixes to defects/bugs checked-in and deployed are verified immediately
⁃             Core test cases are re-run to pre-validate Exit Criteria
⁃                 Automation of tasks applied appropriately (when applicable)
⁃     QA Team begins second cycle of testing, executing all supplemental tests cases
⁃         Any regressions to existing functionality are reported as defects/bugs with critical or major severity
⁃         Failed supplemental test cases are reported as defects/bugs with average or minor severity
⁃         Developer fixes to defects/bugs checked-in and deployed are verified immediately
⁃             All test cases are re-run to validate Exit Criteria
⁃                 Automation of tasks applied appropriately (when applicable)
⁃     QA Team runs final Smoke Tests for build validation
⁃         Any defects/bugs found will be communicated with Product and Development Teams for immediate consideration with regards to the Quality of the Release
⁃             Defects/bugs deemed to be regressions or failed core test cases may determine that the Release date must be moved to the next Sprint
⁃ Product Release
⁃     After QA Validation of the Sprint, the changes are Released to the Production Environment
⁃         QA Team will perform Smoke Tests on Production Environment to validate the Release
⁃             Automation of tasks applied appropriately (when applicable)
⁃ Post-Sprint Analysis (Optional)
⁃     QA Team prepares Sprint Release Notes 
⁃         Test Strategy Success
⁃         Test Plan Review
⁃             Core Test Case Stats
⁃             Supplemental Test Case Stats
⁃             Defect/Bug Notes
⁃             Automation Stats (if available)
⁃         Outstanding Issues and Implications for Future Sprints
⁃     All Teams reconvene to discuss Sprint Release Notes 
⁃         Team Velocity is calculated
⁃         Lessons Learned/Suggestions for Future Sprints are recorded
⁃         Product Team presents Status Report on Team Velocity
⁃ Post-Iteration Analysis (Optional)
⁃     QA Team prepares Iteration Release Notes 
⁃         Test Strategy Success
⁃         Test Plan Review
⁃             Core Test Case Stats
⁃             Supplemental Test Case Stats
⁃             Defect/Bug Notes
⁃             Automation Stats (if available)
⁃         Outstanding Issues and Implications for Future Iterations
⁃     All Teams reconvene to discuss Iteration Release Notes 
⁃         Team Velocity is calculated
⁃         Lessons Learned/Suggestions for Future Iterations are recorded
⁃         Product Team presents Status Report on Product Roadmap
⁃ Risks
⁃ Tight Testing Schedule
⁃     Easily Disrupted by Delays
⁃         Design/Product task delays
⁃         Developer task delays
⁃         Operations task delays
⁃             Delays in stabilizing Test Environment
⁃             Delays in needed Data Migration/Setup
⁃     Subject to Changes to Product Requirements
⁃     Extended by Automation of Test Scripts 
⁃         Early Time added can result in Time gained in future Sprints/Iterations
⁃     Option: Inserting "Buffer" time for delays and adjustments to Sprints is sometimes helpful to meeting release goals
⁃ Resources
⁃     Each Team has necessary members available to fulfill scope for Sprint
⁃     Each Team Member has tools necessary to complete tasks and functions
⁃ Defects
⁃     Pre-existing defects must be identified and accounted for in Sprint planning
⁃     Critical and Major Defects must be identified/resolved immediately
⁃     All Regressions to existing functionality must be identified/resolved in timely manner
⁃ Unforeseen Circumstances
⁃     "Natural Disasters"
⁃ Human Error
⁃     Miscalculations
⁃     Misreporting
⁃     Misinterpretation
⁃     Plain old mistakes :)
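
As flagged under "Register" in the outline above, here is a minimal pytest sketch of the positive/negative and boundary cases. The registration endpoint and field names are hypothetical, since the WP-API route the SPA would use is not specified in the design.

import pytest
import requests

BASE_URL = "https://newsapp.example.com"              # hypothetical test-environment URL
REGISTER = BASE_URL + "/wp-json/newsapp/v1/register"  # hypothetical custom WP-API route

@pytest.mark.parametrize("email,password,expected_status", [
    ("valid.user@example.com", "S3cure!pass", 201),  # positive case
    ("not-an-email",           "S3cure!pass", 400),  # negative: malformed email
    ("valid.user@example.com", "",            400),  # boundary: empty password
    ("valid.user@example.com", "x" * 256,     400),  # boundary: over-long password
])
def test_register_validation(email, password, expected_status):
    resp = requests.post(REGISTER,
                         json={"email": email, "password": password}, timeout=10)
    assert resp.status_code == expected_status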

@mzilich

mzilich commented Aug 4, 2016

[Attached image: newsapp sample test mindmap]
