Based on a long-running conversation, mostly on Slack with @oxinabox and @c42f, with a little from @evizero. One issue that led to it is the desire to produce XML reports of test runs, which exposes some issues with the extension mechanisms currently in place for Base.Test. To figure out how much rearchitecting is necessary, I think we should answer a few questions.

Desired Features

@oxinabox says DataStructures.jl has some testing-related hacks; we should investigate what they've had to do to see what might be needed in Base.Test.

Report XML

Run subset of tests

at what granularity? Currently TestSetExtensions splits at the file level, which seems pretty reasonable.

parallelize tests

seems like this should happen at the same level of granularity as running subsets

configure how tests are run

e.g. whether to store all the info on passing tests, which was disabled a while ago to keep memory reasonable when running Base tests, but is sometimes needed. This seems like something that might be useful to do on a per-testset level…maybe.
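One hook that already exists and could carry this: the @testset macro forwards option=value arguments to the testset type's constructor as keyword arguments. Here's a minimal sketch of per-testset configuration built on that, written against Julia 0.6-era Base.Test; RecordingTestSet and keep_passes are made-up names, not existing API:

```julia
using Base.Test
import Base.Test: record, finish
using Base.Test: AbstractTestSet, Pass

# Made-up testset type: stores Pass results only when asked to,
# keeping memory bounded by default.
struct RecordingTestSet <: AbstractTestSet
    description::String
    keep_passes::Bool
    results::Vector{Any}
end
RecordingTestSet(desc; keep_passes=false) = RecordingTestSet(desc, keep_passes, Any[])

function record(ts::RecordingTestSet, res)
    # drop Pass results unless keep_passes was requested
    if ts.keep_passes || !isa(res, Pass)
        push!(ts.results, res)
    end
    res
end
finish(ts::RecordingTestSet) = ts

# the macro forwards keep_passes=true to the constructor above
@testset RecordingTestSet keep_passes=true "memory-hungry tests" begin
    @test 1 + 1 == 2
end
```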

Does Base.Test have the right hooks to be able to extend it?

to figure out whether we can extend it, we should think about the ways people might want to extend it. There are some existing examples in the wild, but we probably also want to think about other possibilities for the future. (A sketch of the hooks that exist today follows this list.)

Change the way that the results are reported at the end

example - TestReports generates an XML report that can be ingested by other tools

Change the way progress/results are reported as tests are run

example - TestSetExtensions prints dots as the tests run, and prints failures more attractively than the default, including showing diffs. Note this is actually not great for running within Atom, so that could be another use-case we could think about.

add features for test authors to use
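For context on what's extendable today: the whole hook surface is essentially record(ts, result) (called once per test result, and once per finished child testset) and finish(ts) (called when the @testset block ends) on a subtype of AbstractTestSet. Here's a sketch, assuming Julia 0.6-era Base.Test, of using finish for custom end-of-run reporting in the spirit of TestReports; XMLSummaryTestSet is a made-up name and the output is nowhere near real JUnit XML:

```julia
using Base.Test
import Base.Test: record, finish
using Base.Test: AbstractTestSet, Pass, get_testset, get_testset_depth

struct XMLSummaryTestSet <: AbstractTestSet
    description::String
    results::Vector{Any}
    XMLSummaryTestSet(desc) = new(desc, Any[])
end

# called once per @test result, and once per nested testset as it finishes
record(ts::XMLSummaryTestSet, res) = (push!(ts.results, res); res)

# called when the @testset block ends
function finish(ts::XMLSummaryTestSet)
    if get_testset_depth() > 0
        record(get_testset(), ts)  # nested: hand results up to the parent
        return ts
    end
    npass = count(r -> isa(r, Pass), ts.results)
    println("<testsuite name=\"$(ts.description)\" tests=\"$(length(ts.results))\" passes=\"$npass\"/>")
    ts
end

@testset XMLSummaryTestSet "report demo" begin
    @test 1 == 1
end
```

Everything else (rendering, error behavior, configuration) has to be squeezed through those two functions, which is what the proposals below poke at.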

Current known testset types

FallbackTestSet

basically an implementation detail so that @test outside of a testset will just throw an error on failure

DefaultTestSet

This is like 99.9% of all @testset uses, since it is the default if you use @testset without ever (in any parent) specifying a testset type. It does console output, and it is quite pretty.

ExtendedTestSet

displays a green dot for every pass and a prettier error for every failure (including diffs). It wraps another testset of configurable type and defers most handling to it.
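This is not TestSetExtensions' actual code, but the wrap-and-delegate pattern looks roughly like this (WrappedTestSet is a made-up name; Julia 0.6-era Base.Test assumed):

```julia
using Base.Test
import Base.Test: record, finish
using Base.Test: AbstractTestSet, DefaultTestSet, Pass

struct WrappedTestSet{T<:AbstractTestSet} <: AbstractTestSet
    inner::T
    WrappedTestSet{T}(desc::AbstractString) where {T} = new{T}(T(desc))
end
# @testset only accepts a bare type name, so default the wrapped type here
WrappedTestSet(desc::AbstractString) = WrappedTestSet{DefaultTestSet}(desc)

function record(ts::WrappedTestSet, res)
    isa(res, Pass) && print(".")  # a dot per passing test (coloring omitted)
    record(ts.inner, res)         # defer all bookkeeping to the wrapped set
end

finish(ts::WrappedTestSet) = finish(ts.inner)

@testset WrappedTestSet "wrapped" begin
    @test true
    @test 2 > 1
end
```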

ReportingTestSet

very similar to DefaultTestSet except that it doesn't throw an error at the end when there's a failing test.

TestSet Behavior purposes

@c42f enumerated the following purposes for testsets:

  • Group tests logically
  • Capture test results
  • Serve as a scope to catch exceptions which occur outside test macros
  • Format or otherwise communicate test results
  • A scope to provide consistency for certain global state, for example seeding global RNGs

Proposals

@oxinabox - have TestSet{B, R}

B is for Behavior, R is for Render
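As a shape, that might look something like the following; every name here is made up purely to illustrate the split, and nothing like it exists in Base.Test:

```julia
abstract type Behavior end
abstract type Render end

struct StrictBehavior <: Behavior end  # e.g. throw at the end on failures
struct ConsoleRender  <: Render end    # e.g. today's pretty console output
struct XMLRender      <: Render end    # e.g. what TestReports needs

struct TestSet{B<:Behavior, R<:Render}
    description::String
    results::Vector{Any}
end

# rendering methods constrain only R, and behavior methods only B, so any
# Behavior/Render combination composes without defining new testset types
report(ts::TestSet{<:Behavior, ConsoleRender}) =
    println("$(ts.description): $(length(ts.results)) results")
report(ts::TestSet{<:Behavior, XMLRender}) =
    println("<testsuite name=\"$(ts.description)\" tests=\"$(length(ts.results))\"/>")

report(TestSet{StrictBehavior, XMLRender}("demo", Any[]))
```

The appeal is that a package like TestReports would only need to add a Render, and a strict/lenient failure policy would only need to add a Behavior.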

Give @test a 2nd argument for a description

I don't actually like this very much or think it's necessary; I think @testset descriptions are good enough. I tend to use pretty small leaf @testsets.

Fix do_test so it doesn't hardcode throwing away Pass data

@oxinabox reported this - it seems like a straight-up bug, and it makes it impossible to do anything with Pass data from a testset (source link, reported here).

specify testset rendering via some global variable in Base.Test
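One possible shape for that, with entirely made-up names (nothing here exists in Base.Test):

```julia
# a module-level setting that the reporting step consults
const TESTSET_RENDERER = Ref{Symbol}(:console)

render(results) = render(Val{TESTSET_RENDERER[]}(), results)
render(::Val{:console}, results) =
    println("passed $(count(identity, results)) of $(length(results))")
render(::Val{:xml}, results) =
    println("<testsuite tests=\"$(length(results))\" failures=\"$(count(!, results))\"/>")

render([true, true, false])   # console summary by default
TESTSET_RENDERER[] = :xml     # flip the global ...
render([true, true, false])   # ... and reporting changes everywhere
```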

@oxinabox - make testsets never throw exceptions, but have Pkg.test check the return type and set the exit status.
