- principles of testing
- testing shows presence of defects
- it allows us to see that defects are present, but it cannot prove the opposite (that the software is bug-free)
- exhaustive testing is impossible
- we always select test scope using a variety of criteria (priorities, risk analysis)
- early testing
- start testing asap, ideally already at the requirements and design stage, not just before we ship to production
- defect clustering
- defects often concentrate in select modules
- pesticide paradox
- if we use the same type of tests, we will soon run out of detected bugs and will have to change our methodology
- testing is context-dependent
- mission-critical systems are tested differently than an e-commerce site
- absence-of-errors fallacy
- if the system we built is unusable and does not serve its purpose, then finding and fixing any number of defects is useless
- white / black / grey box testing
- white box - we know the internal structure of the system; black box - we rely only on docs and specs; grey box - partial knowledge of internals
- ex. whitebox - source code analysis, blackbox - functional test cases based on use cases
- verification vs validation
- verification - tests whether products of a dev phase satisfy conditions imposed at the start of such phase
- validation - tests whether products satisfy specified requirements at the end of dev process
- V-model
- creation - spec, design, implementation
- business model, requirements, functional design, technical design, implementation
- testing - removing defects
- development, integration, system, user acceptance, production
- test level
- group of test activities that are organized and managed together
- Boehm's first law
- errors are most frequent during the earlier phases and are more expensive to remove the later they are discovered
- V-model sucks because no testing happens during the creation phases (we only start testing after implementation)
- W-model - pairs each creation phase with a review/test activity
- regression
- defects (re)introduced into previously working parts of the system as a side effect of changes and fixes
- cf. the "99 little bugs in the code" song - fixing one bug spawns new ones
- test coverage
- what proportion of the software is exercised by our test scenarios
- static testing
- testing component at spec or implementation level without execution of software
- allows us to test during design stage
- code review types
- informal review
- ex. pair programming, code review
- walk-through
- driven by author of artifact in question
- author receives feedback
- technical review
- formal type of review chaired by a dedicated person
- documented process, led by chairman (not author)
- inspection
- technical review with usage of metrics
- quality gates
- reviews as part of dev cycle - on transition between phases
- formal
- process can't continue until we meet formal requirements
- transparent process with clear guidelines for each artifact
- risk: can degenerate into unnecessary bureaucracy or be rushed, which reduces quality; also slows down the process
- collaborative
- group session with the customer, looking for inconsistent or undefined parts
- efficient
- requires a communication agenda
- implicit review
- we verify consistency of specification during test preparation
- ex. data lifecycle test - consistency of CRUD matrix
- how to write spec
- designer, tester and customer's view
- do not copy-paste stuff
- keep references to docs updated
- incomplete text should be avoided (TBD, TODO)
- verification checklist
- abstraction - identification of relevant information of particular type
- check - verify the requirements that concern the identified actors
- automatic tests
- allows us to target multiple platforms
- saves time in long run
- helps with regression and smoke testing
- typical workflow
- unit tests
- service tests
- functional tests (UI)
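- a minimal sketch of the lowest level of this workflow (pytest); the `add_item` cart helper is hypothetical:

```python
import pytest

# hypothetical unit under test
def add_item(cart: dict, name: str, qty: int) -> dict:
    if qty <= 0:
        raise ValueError("quantity must be positive")
    cart[name] = cart.get(name, 0) + qty
    return cart

def test_add_item_accumulates():
    cart = add_item({}, "book", 1)
    assert add_item(cart, "book", 2) == {"book": 3}

def test_add_item_rejects_invalid_quantity():
    with pytest.raises(ValueError):
        add_item({}, "book", 0)
```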
- smoke test
- set of quick basic tests that exercise the most important functionality of our system
- test condition
- atomic element or event in the system that can be verified by one or more test cases
- ex. program function, database transaction
- classification tree
- can be created for each decision point in tested application
- can be dependent on user input or internal computation
- nodes - inputs, leaves - equivalence classes
- equivalence class (EC)
- we split input combinations into subsets so that all values from a specific EC have an equal chance of detecting a defect
- types by input type
- interval (we have to find boundaries)
- discrete values
- types by data validity
- data is valid - should work in usual way
- data is invalid - should output error
- from technical view (incompatible data types)
- from business view (account number does not exist)
- consistency rules
- EC must not be empty
- intersection of any two ECs must be empty
- union of all ECs must cover all input options
- boundary value
- let M be a boundary of an EC and I the smallest increment the application can distinguish
- we proceed to test M-I, M, M+I
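- a minimal sketch of boundary-value selection for an interval EC, assuming an integer input where the smallest detectable increment is I = 1:

```python
def boundary_values(low: int, high: int, step: int = 1) -> list[int]:
    """Return the M-I, M, M+I probes for both boundaries of [low, high]."""
    return sorted({low - step, low, low + step, high - step, high, high + step})

# e.g. a field accepting values 1..100
print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```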
- test design techniques
- MCC - complete test of combinations
- 2^N, where N is number of binary conditions
- combination of all values
- MC/DC - all combinations which have impact on decision of expression
- we focus on conditions that can independently influence result of decision
- N+1 combinations
- neutral values - values of the remaining conditions that do not themselves affect the result (e.g. true for AND, false for OR), so the tested condition alone decides (see the MC/DC sketch after this list)
- pairwise testing
- defects are most commonly caused by the value of one specific input or by the combination of values of two inputs
- we therefore cover every pair of input values with at least one test case (see the pairwise sketch after this list)
- CC+DC
- CC - condition coverage
- DC - decision coverage
- manual selection (ex. for smoke test)
- we select main scenarios
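- a minimal MC/DC sketch for a made-up decision A and (B or C); with N = 3 conditions we get N + 1 = 4 cases, each showing one condition independently flipping the outcome:

```python
def decision(a: bool, b: bool, c: bool) -> bool:
    return a and (b or c)

# each case differs from another case in exactly one condition,
# and that single toggle flips the decision outcome
mcdc_cases = [
    (True,  True,  False),  # baseline                   -> True
    (False, True,  False),  # only a flipped vs baseline -> False
    (True,  False, False),  # only b flipped vs baseline -> False
    (True,  False, True),   # only c flipped vs case 3   -> True
]
for case in mcdc_cases:
    print(case, decision(*case))
```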
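- a pairwise sketch over made-up inputs: 4 hand-picked cases cover every value pair that the full 2^3 = 8 MCC combinations would, and the helper verifies it:

```python
from itertools import combinations, product

domains = {"os": ["linux", "windows"],
           "browser": ["chrome", "firefox"],
           "lang": ["en", "cs"]}

pairwise_cases = [
    ("linux",   "chrome",  "en"),
    ("linux",   "firefox", "cs"),
    ("windows", "chrome",  "cs"),
    ("windows", "firefox", "en"),
]

def covers_all_pairs(cases, domains):
    # every value pair of every pair of inputs must appear in some case
    keys = list(domains)
    for (i, a), (j, b) in combinations(enumerate(keys), 2):
        needed = set(product(domains[a], domains[b]))
        seen = {(case[i], case[j]) for case in cases}
        if needed - seen:
            return False
    return True

assert covers_all_pairs(pairwise_cases, domains)  # 4 cases instead of 8
```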
- model-based testing
- concerns selection of test input data
- approaches
- generation of input data from domain model
- generation of test cases from an environment model
- generation of test scripts from abstract tests
- reduces maintenance cost, automates testing process to varying degree
- workflow - model the SUT (system under test), generate abstract tests, concretize them to make them executable, execute on the SUT, analyze results
- models are represented by pre/post notations or FSM/UML
- is the system data-oriented or control-oriented?
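- a minimal model-based sketch, assuming a hypothetical login FSM; one abstract test per transition gives all-transitions coverage:

```python
from collections import deque

# (state, event) -> next state
fsm = {
    ("logged_out", "login_ok"):   "logged_in",
    ("logged_out", "login_fail"): "logged_out",
    ("logged_in",  "logout"):     "logged_out",
}
START = "logged_out"

def path_to(state):
    """BFS over states; returns an event sequence that reaches `state`."""
    seen, queue = {START}, deque([(START, [])])
    while queue:
        current, events = queue.popleft()
        if current == state:
            return events
        for (src, event), dst in fsm.items():
            if src == current and dst not in seen:
                seen.add(dst)
                queue.append((dst, events + [event]))
    raise ValueError(f"{state} is unreachable")

# abstract tests: reach the source state, then fire the transition
abstract_tests = [path_to(src) + [event] for (src, event) in fsm]
print(abstract_tests)  # concretize these into executable scripts next
```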
- testing processes
- manual testing
- we create tests based on requirements and test plan and then execute them manually
- no automation whatsoever, tester goes by a human-readable document
- capture/replay testing
- tester uses a layer that provides ability to record and then use input/output
- ex. Selenium
- fragile - recordings often break when a system property or the UI changes
- script-based testing
- uses a script written in standard programming language
- automatic execution
- test coverage criteria
- structural model CC - dependent on control-flow
- often finds inspiration in code-based (whitebox) tests
- control-flow-oriented
- coverage criteria - statement, decision, path
- transition-based coverage
- coverage - all-states, all-configuration (for parallel systems), all-transitions
- data CC - dependent on input data
- chooses a finite number of values to use as test inputs from a huge pool of possible values
- testing a single variable
- extreme ways to do it (test one value from the entire domain, or all possible values)
- use test selection criteria
- boundaries, statistical (random) data coverage, pairwise testing
- static code analysis
- focuses on error prevention rather than detection (because we don't actually run the code)
- white box approach
- useful to employ coding standards
- peer review, group examination
- formal review output includes a summary and a list of defects
- type, frequency, location, severity
- compiler often provides basic form of static analysis
- data flow anomalies
- undefined value is read
- assigned value becomes undefined without having been read
- assigned value is overwritten without it being used
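- the three anomalies in runnable form (a data-flow analyzer would flag all of them even though the calls below succeed):

```python
def du_dd_example():
    y = 1          # define y
    y = 2          # dd-anomaly: y is overwritten before the first value is read
    z = y + 1      # define z ...
    return y       # du-anomaly: z becomes undefined at exit without being read

def ur_example(flag):
    if flag:
        x = 1
    return x       # ur-anomaly: on the flag=False path, x is read while undefined

print(du_dd_example())   # 2
print(ur_example(True))  # 1; ur_example(False) would raise UnboundLocalError
```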
- process testing
- path coverage
- we want to cover all actions we need to test while minimizing the number of test procedures
- varying intensity
- cover all nodes
- TDL 1 - each edge is covered at least once
- TDL 2 - at each decision point, list combinations of possible inputs and outputs
- TDL 3 - at each decision point, list combinations of possible inputs, outputs, and output at next DP reached
- all paths are covered
- exploratory testing
- used when sufficient test basis is not available
- free testing
- hire testers and let them report any found defects
- we still want to focus on covering the application with tests
- should be systematic
- data consistency tests
- check if data is being handled correctly
- lifecycle consists of CRUD
- CRUD matrix
- rows - application functions, columns - data entities, cells - which operations (C/R/U/D) the function performs on the entity
- completeness test
- verify if all data entities have at least one mention of every operation
- data consistency check
- verify if all functions keep data entities consistent
- exercise R after every function that handles C, U or D
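- a minimal sketch of the completeness test over a made-up CRUD matrix (functions as rows, entities as columns):

```python
# function -> {entity: operations performed}
crud = {
    "create_order": {"Order": "C", "Customer": "R"},
    "show_order":   {"Order": "R"},
    "edit_order":   {"Order": "RU"},
    "cancel_order": {"Order": "UD"},
}

entities = {entity for ops in crud.values() for entity in ops}
for entity in sorted(entities):
    used = {op for ops in crud.values() for op in ops.get(entity, "")}
    missing = sorted(set("CRUD") - used)
    if missing:
        print(f"{entity}: no function exercises {missing}")
# prints: Customer: no function exercises ['C', 'D', 'U']
```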
- error guessing
- based on previous experience, quality of docs, spec
- can bring new perspectives
- we can test unpredicted inputs, situations
- we try to purposefully bring the system to its knees
- focus on exceptions, go crazy approach
- operation profiles
- describes how our system is used by a particular type of user
- estimation (based on similar existing app) or record (based on previous version of our app) of user's actions
- application logs, database server logs, monitoring tool
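- a minimal sketch of sampling a test session from an operational profile; the action names and weights are made up:

```python
import random

# estimated share of each action for one user type
profile = {"search": 0.55, "view_item": 0.30, "checkout": 0.10, "admin": 0.05}

# a test session of 20 actions, distributed like real usage
session = random.choices(list(profile), weights=list(profile.values()), k=20)
print(session)
```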
- test management
- entry criteria
- conditions that have to be met before activity can begin
- exit criteria
- conditions that have to be met before an activity can be considered complete
- test strategy
- high-level description of test levels to be performed and testing within those levels for organization or programme (can concern multiple projects)
- test approach
- implementation of test strategy for particular project
- test plan
- document describing scope, approach, resources, schedule of intended test activities
- has to be kept up to date throughout the project
- test goals
- describe what the goals of testing are, independently of how they are achieved
- can be subdivided into smaller organizational goals
- test prioritization
- we place emphasis on testing modules that have a high probability of failure or can cause significant damage