from a user's perspective, we want something that runs in the context of a cabal project.
it should have some notion of a hierarchy of tests (this could come from tasty or hspec; i wouldn't bother trying to make it fully general). when it starts up, it ought to run the whole suite, noting which tests fail (and also which fail to compile: ideally you could still make local progress even when other parts of the program have compile errors, though that's not essential).
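the hierarchy could be modelled as a plain tree over names and file paths. a minimal sketch — the `TestTree` type and `leaves` helper here are illustrative names, not the actual tasty or hspec representation:

```haskell
-- illustrative tree shape, not the tasty/hspec representation
data TestTree
  = Group String [TestTree]  -- a named suite containing sub-suites
  | Leaf String FilePath     -- a single test plus the file it lives in
  deriving Show

-- every test under a subtree, used when rerunning "one layer up"
leaves :: TestTree -> [(String, FilePath)]
leaves (Leaf name fp) = [(name, fp)]
leaves (Group _ kids) = concatMap leaves kids

-- a toy suite for demonstration
example :: TestTree
example = Group "all"
  [ Group "unit" [Leaf "parses" "test/Parse.hs"]
  , Leaf "golden" "test/Golden.hs"
  ]

main :: IO ()
main = print (leaves example)
```

keeping the file path on each leaf is what later lets us map "this file changed" back to "these tests should run".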
when a test is failing, the tool should scope down to that file and keep rerunning that test on every file change until it passes. once it passes, the whole suite one layer up should run, and so on up to the root of the test tree.
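that scope-down-then-bubble-up behaviour is essentially a loop over the path from the failing leaf to the root. a hedged sketch, with the test runner and the file-watch wait abstracted as function arguments — `refocus`, `run`, and `wait` are made-up names, not a real watcher API:

```haskell
import Data.IORef

-- the loop is written over an abstract Monad so it can be exercised without
-- real files; in the tool itself, m would be IO, `run` would invoke the test
-- runner, and `wait` would block on a file-system watcher.
refocus :: Monad m
        => (a -> m Bool)  -- run one node of the test tree; True = passed
        -> m ()           -- block until some file changes
        -> [a]            -- path from the failing leaf up to the root
        -> m ()
refocus _ _ [] = pure ()                    -- past the root: everything green
refocus run wait (t : up) = do
  ok <- run t
  if ok
    then refocus run wait up                -- passed: bubble up one layer
    else wait >> refocus run wait (t : up)  -- failed: wait for a change, retry

main :: IO ()
main = do
  -- simulate a test that fails twice and then passes
  attempts <- newIORef (0 :: Int)
  let run name = do
        n <- atomicModifyIORef' attempts (\k -> (k + 1, k))
        let ok = n >= 2
        putStrLn (name ++ ": " ++ if ok then "pass" else "fail")
        pure ok
  refocus run (putStrLn "  (waiting for a change...)") ["leaf", "suite", "root"]
```

the demo reruns "leaf" until its third attempt succeeds, then runs "suite" and "root" once each, which is the bubbling-up described above.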
When a test file itself changes, that's easy: we just rerun it. When any other file changes, we need to trace back to all the test files affected by it and add them to the queue of tests to run.
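one way to do that trace-back is a reverse import graph — for each module, the modules that import it — and then a transitive closure from the changed file, keeping only the test files. a sketch that assumes the graph has already been extracted somehow (e.g. from parsed import lists; here it's hardcoded):

```haskell
import Data.List (isPrefixOf)
import qualified Data.Map.Strict as M
import qualified Data.Set as S

-- reverse import graph: module file -> files that import it. in the real
-- tool this would be built from import lists or similar, not hardcoded.
type RevDeps = M.Map FilePath [FilePath]

-- everything transitively affected by a change, following reverse edges
affected :: RevDeps -> FilePath -> S.Set FilePath
affected g start = go S.empty [start]
  where
    go seen [] = seen
    go seen (x : xs)
      | x `S.member` seen = go seen xs
      | otherwise         = go (S.insert x seen) (M.findWithDefault [] x g ++ xs)

-- just the test files, which is what goes on the run queue
-- (the "test/" prefix convention is an assumption for this sketch)
affectedTests :: RevDeps -> FilePath -> [FilePath]
affectedTests g fp = filter ("test/" `isPrefixOf`) (S.toList (affected g fp))

-- toy graph for demonstration
deps :: RevDeps
deps = M.fromList
  [ ("src/Parser.hs",   ["src/Compiler.hs", "test/ParserSpec.hs"])
  , ("src/Compiler.hs", ["test/CompilerSpec.hs"])
  ]

main :: IO ()
main = print (affectedTests deps "src/Parser.hs")
```

a change to `src/Parser.hs` picks up `test/CompilerSpec.hs` too, via `src/Compiler.hs`, which is exactly the indirect case the trace-back has to handle.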
expected pain points: