I ran into a problem I didn't expect to have. Over the past few years, frontend development moved from running tests in the browser to mocking the browser entirely via JSDOM or similar on top of Node.js. Since I'm building browser game engines, I did not want to mock the browser. I tried Karma and web-test-runner with Jasmine and similar, but because the dev server/build process for the test code (driven by the test framework) differs from the one the game uses, it was hard to stay as close as possible to the real thing.
I thought: OK, let's hack a dedicated entry point for tests that uses the exact same pipeline as the game. Each test is a function (potentially async, i.e. returning a promise), and to keep things sane I'm using chai for expect goodies like deep comparison. All tests are loaded via imports in the entry point file.
We just need a convention for exporting tests: I'm making every test file default-export an array of test functions, so I can comment some out if needed. I'm also using actual named functions so I can read their names when I introspect them (arrow functions wouldn't work).
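To make the convention concrete, here's a minimal sketch of what a test file could look like. The function names and the tiny `expectEqual` helper are made up for this example; in my actual setup the assertions come from chai's `expect`:

```javascript
// Example test file: default-exports an array of named test functions.
// A tiny stand-in assertion helper is inlined so the sketch is
// self-contained; the real setup uses chai's expect instead.
function expectEqual(actual, expected) {
  if (JSON.stringify(actual) !== JSON.stringify(expected)) {
    throw new Error(
      `expected ${JSON.stringify(expected)}, got ${JSON.stringify(actual)}`
    );
  }
}

// Named function declarations (not arrows), so fn.name is meaningful.
function addsVectors() {
  expectEqual([1 + 2, 3 + 4], [3, 7]);
}

async function loadsConfig() {
  // Async tests simply return a promise; the runner awaits them.
  const config = await Promise.resolve({ fps: 60 });
  expectEqual(config.fps, 60);
}

export default [
  addsVectors,
  loadsConfig,
  // flakyTest,  // easy to disable a test by commenting it out
];
```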
It worked great!
The test runner entry point is pretty straightforward. The only thing missing is test filtering goodies: every test import is explicit, and can be commented out, reordered, etc.
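As a sketch, the entry point boils down to a loop like the one below. The file contents and the result formatting are illustrative; in the real entry point each array comes from a static `import` of a test file:

```javascript
// Entry point sketch: take the explicitly imported test arrays, run
// every function in order (awaiting promises), and report by fn.name.
// In the real entry point these arrays come from static imports, e.g.
//   import mathTests from './math.test.js';
// They are inlined here so the sketch is self-contained.
const mathTests = [
  function addsNumbers() {
    if (1 + 1 !== 2) throw new Error('broken math');
  },
];
const asyncTests = [
  async function resolvesLater() {
    await Promise.resolve();
  },
];

async function runAll(suites) {
  let failures = 0;
  for (const suite of suites) {
    for (const test of suite) {
      try {
        await test(); // works for both sync and async tests
        console.log(`PASS ${test.name}`);
      } catch (err) {
        failures++;
        console.error(`FAIL ${test.name}:`, err.message);
      }
    }
  }
  console.log(failures === 0 ? 'all tests passed' : `${failures} failure(s)`);
  return failures;
}

runAll([mathTests, asyncTests]);
```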
I was just lacking test isolation. I was doing OK without it because I can control RNG seeds and other factors, such as clearing local storage before tests run. Still, it would be cool to be able to run isolated tests...
So, to support it: tests are always imported, but if the page has the iso=1 query param, only a single test is run per session. Test state and console output are captured to a dedicated local storage key; one test runs at a time, its results are saved, and the page is reloaded to run the next, presenting all output once every test has been traversed.
This is considerably slower than running them all in a single session, as expected, but simpler than iframing child pages. It worked beautifully too.
I'm sharing my entry point and an example test file.
I find this approach great: it uses standard ESM imports, and we can exercise things such as canvas/WebGL/SVG and do diffing if necessary. Image/DOM snapshotting would be more complex to capture, but I'll tackle that if need be (unlikely in this scenario). The browser-visiting part can eventually be automated too; it's just a matter of setting up which browsers to run, the window size, and the criteria for when to stop and capture signals, all completely agnostic of the tests themselves.