@Lukasa
Last active January 6, 2020 10:46
Pytest and Twisted Trial help output
(py35) cory@heimdall:hyper-h2/ % py.test --help
usage: py.test [options] [file_or_dir] [file_or_dir] [...]
positional arguments:
file_or_dir
general:
-k EXPRESSION only run tests which match the given substring
expression. An expression is a python evaluatable
expression where all names are substring-matched
against test names and their parent classes. Example:
-k 'test_method or test_other' matches all test
functions and classes whose name contains
'test_method' or 'test_other'. Additionally keywords
are matched to classes and functions containing extra
names in their 'extra_keyword_matches' set, as well as
functions which have names assigned directly to them.
-m MARKEXPR only run tests matching given mark expression.
example: -m 'mark1 and not mark2'.
--markers show markers (builtin, plugin and per-project ones).
-x, --exitfirst exit instantly on first error or failed test.
--maxfail=num exit after first num failures or errors.
--strict run pytest in strict mode, warnings become errors.
-c file load configuration from `file` instead of trying to
locate one of the implicit configuration files.
--fixtures, --funcargs
show available fixtures, sorted by plugin appearance
--import-mode={prepend,append}
prepend/append to sys.path when importing test
modules, default is to prepend.
--pdb start the interactive Python debugger on errors.
--capture=method per-test capturing method: one of fd|sys|no.
-s shortcut for --capture=no.
--runxfail run tests even if they are marked xfail
--lf, --last-failed rerun only the tests that failed at the last run (or
all if none failed)
--ff, --failed-first run all tests but run the last failures first. This
may re-order tests and thus lead to repeated fixture
setup/teardown
--cache-show show cache contents, don't perform collection or tests
--cache-clear remove all cache contents at start of test run.
reporting:
-v, --verbose increase verbosity.
-q, --quiet decrease verbosity.
-r chars show extra test summary info as specified by chars
(f)ailed, (E)error, (s)skipped, (x)failed, (X)passed
(w)pytest-warnings (p)passed, (P)passed with output,
(a)all except pP.
-l, --showlocals show locals in tracebacks (disabled by default).
--report=opts (deprecated, use -r)
--tb=style traceback print mode (auto/long/short/line/native/no).
--full-trace don't cut any tracebacks (default is to cut).
--color=color color terminal output (yes/no/auto).
--durations=N show N slowest setup/test durations (N=0 for all).
--pastebin=mode send failed|all info to bpaste.net pastebin service.
--junit-xml=path create junit-xml style report file at given path.
--junit-prefix=str prepend prefix to classnames in junit-xml output
--result-log=path path for machine-readable result log.
collection:
--collect-only only collect tests, don't execute them.
--pyargs try to interpret all arguments as python packages.
--ignore=path ignore path during collection (multi-allowed).
--confcutdir=dir only load conftest.py's relative to specified dir.
--noconftest Don't load any conftest.py files.
--doctest-modules run doctests in all .py modules
--doctest-glob=pat doctests file matching pattern, default: test*.txt
--doctest-ignore-import-errors
ignore doctest ImportErrors
test session debugging and configuration:
--basetemp=dir base temporary directory for this test run.
--version display pytest lib version and import information.
-h, --help show help message and configuration info
-p name early-load given plugin (multi-allowed). To avoid
loading of plugins, use the `no:` prefix, e.g.
`no:doctest`.
--trace-config trace considerations of conftest.py files.
--debug store internal tracing debug information in
'pytestdebug.log'.
--assert=MODE control assertion debugging tools. 'plain' performs no
assertion debugging. 'reinterp' reinterprets assert
statements after they failed to provide assertion
expression information. 'rewrite' (the default)
rewrites assert statements in test modules on import
to provide assert expression information.
--no-assert DEPRECATED equivalent to --assert=plain
--no-magic DEPRECATED equivalent to --assert=plain
--genscript=path create standalone pytest script at given target path.
coverage reporting with distributed testing support:
--cov=[path] measure coverage for filesystem path (multi-allowed)
--cov-report=type type of report to generate: term, term-missing,
annotate, html, xml (multi-allowed). term, term-
missing may be followed by ":skip-covered".annotate,
html and xml may be followed by ":DEST" where DEST
specifies the output location.
--cov-config=path config file for coverage, default: .coveragerc
--no-cov-on-fail do not report coverage if test run fails, default:
False
--cov-fail-under=MIN Fail if the total coverage is less than MIN.
--cov-append do not delete coverage but append to current, default:
False
distributed and subprocess testing:
-n numprocesses shortcut for '--dist=load --tx=NUM*popen', you can use
'auto' here for auto detection CPUs number on host
system
--max-slave-restart=MAX_SLAVE_RESTART
maximum number of slaves that can be restarted when
crashed (set to zero to disable this feature)
--dist=distmode set mode for distributing tests to exec environments.
each: send each test to each available environment.
load: send each test to available environment.
(default) no: run tests inprocess, don't distribute.
--tx=xspec add a test execution environment. some examples: --tx
popen//python=python2.5 --tx socket=192.168.1.102:8888
--tx ssh=user@codespeak.net//chdir=testcache
-d load-balance tests. shortcut for '--dist=load'
--rsyncdir=DIR add directory for rsyncing to remote tx nodes.
--rsyncignore=GLOB add expression for ignores when rsyncing to remote tx
nodes.
--boxed box each test run in a separate process (unix)
-f, --looponfail run tests in subprocess, wait for modified files and
re-run failing test set until all pass.
custom options:
--hypothesis-profile=HYPOTHESIS_PROFILE
Load in a registered hypothesis.settings profile
[pytest] ini-options in the next pytest.ini|tox.ini|setup.cfg file:
markers (linelist) markers for test functions
norecursedirs (args) directory patterns to avoid for recursion
testpaths (args) directories to search for tests when no files or dire
usefixtures (args) list of default fixtures to be used with this project
python_files (args) glob-style file patterns for Python test module disco
python_classes (args) prefixes or glob names for Python test class discover
python_functions (args) prefixes or glob names for Python test function and m
xfail_strict (bool) default for the strict parameter of xfail markers whe
doctest_optionflags (args) option flags for doctests
addopts (args) extra command line options
minversion (string) minimally required pytest version
rsyncdirs (pathlist) list of (relative) paths to be rsynced for remote dis
rsyncignore (pathlist) list of (relative) glob-style paths to be ignored for
looponfailroots (pathlist) directories to check for changes
environment variables:
PYTEST_ADDOPTS extra command line options
PYTEST_PLUGINS comma-separated plugins to load during startup
PYTEST_DEBUG set to enable debug tracing of pytest's internals
to see available markers type: py.test --markers
to see available fixtures type: py.test --fixtures
(shown according to specified file_or_dir or current dir if not specified)
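A few of the flags above can be combined in one run. The sketch below exercises `-q`, `-x`, `-k`, and `-m` against a throwaway test file; the file name, test names, and the `slow` marker are invented for illustration, and the block is a no-op if `py.test` is not on the PATH.

```shell
# Sketch: try a few of the flags above on a disposable test file.
# All names here are hypothetical; guarded so this is a no-op without pytest.
if command -v py.test >/dev/null 2>&1; then
    workdir=$(mktemp -d)
    cat > "$workdir/test_demo.py" <<'EOF'
import pytest

@pytest.mark.slow
def test_upper():
    assert "h2".upper() == "H2"

def test_lower():
    assert "H2".lower() == "h2"
EOF
    py.test -q -x -k 'upper' "$workdir/test_demo.py"   # filter by name, stop on first failure
    py.test -q -m 'not slow' "$workdir/test_demo.py"   # deselect tests marked @pytest.mark.slow
fi
```

With the pytest-xdist and pytest-cov plugins installed, the same run can be parallelised and measured, e.g. `py.test -n auto --cov=h2 --cov-report=term-missing` (the `h2` path is an example).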
(py27-alldeps-nocov-macos) cory@heimdall:twisted/ % python -m twisted.trial --help
__main__.py [options] [[file|package|module|TestCase|testmethod]...]
Options:
-h, --help Display this help and exit.
-N, --no-recurse Don't recurse into packages
--help-orders Help on available test running orders
--help-reporters Help on available output plugins (reporters)
-e, --rterrors realtime errors, print out tracebacks as soon as they
occur
--unclean-warnings Turn dirty reactor errors into warnings
--force-gc Have Trial run gc.collect() before and after each
test case.
-x, --exitfirst Exit after the first non-successful result (cannot be
specified along with --jobs).
-b, --debug Run tests in a debugger. If that debugger is pdb,
will load '.pdbrc' from current directory if it
exists.
-B, --debug-stacktraces Report Deferred creation and callback stack traces
--nopm don't automatically jump into debugger for
postmorteming of exceptions
-n, --dry-run do everything but run the tests
--profile Run tests under the Python profiler
-u, --until-failure Repeat test until it fails
-o, --order= Specify what order to run test cases and methods. See
--help-orders for more info.
-z, --random= Run tests in random order using the specified seed
--temp-directory= Path to use as working directory for tests. [default:
_trial_temp]
--reporter= The reporter to use for this test run. See --help-
reporters for more info. [default: verbose]
--debugger= the fully qualified name of a debugger to use if
--debug is passed [default: pdb]
-l, --logfile= log file name [default: test.log]
-j, --jobs= Number of local workers to run, a strictly positive
integer.
--disablegc Disable the garbage collector
--without-module= Fake the lack of the specified modules, separated
with commas.
-r, --reactor= Which reactor to use (see --help-reactors for a list
of possibilities)
--spew Print an insanely verbose log of everything that
happens. Useful when debugging freezes or
locks in complex code.
--testmodule= Filename to grep for test cases (-*- test-case-name).
--version Display Twisted version and exit.
--coverage Generate coverage information in the coverage file in
the directory specified by the temp-directory
option.
--help-reactors Display a list of possibly available reactor names.
--tbformat= Specify the format to display tracebacks with. Valid
formats are 'plain', 'emacs', and 'cgitb'
which uses the nicely verbose stdlib
cgitb.text function
--recursionlimit= see sys.setrecursionlimit()
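The trial options compose in much the same way. A hedged sketch follows: only the `--help-reactors` call actually runs, and only when Twisted is importable; the commented invocations use `twisted.test.test_paths` purely as an example module name.

```shell
# Guarded so the block is a no-op when Twisted is not importable.
if python -c 'import twisted' >/dev/null 2>&1; then
    python -m twisted.trial --help-reactors   # list selectable reactor names
fi
# Further illustrative invocations (module name is only an example):
#   python -m twisted.trial --rterrors twisted.test.test_paths   # print tracebacks as they occur
#   python -m twisted.trial -j 4 twisted.test.test_paths         # 4 local workers
#   python -m twisted.trial -u twisted.test.test_paths           # repeat until failure
```

Note that, per the help above, `-j/--jobs` cannot be combined with `-x/--exitfirst`.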