@JosePedroDias
Last active December 29, 2022 12:14
DIY browser tests without mocking

I faced a problem I didn't expect to have. Over the past few years, frontend development has moved from running tests in the browser to mocking the browser entirely via JSDOM or similar on Node.js. Since I'm building browser game engines, I did not want to mock the browser. I tried Karma and web-test-runner with Jasmine and the like, but the dev server/build process for the test code, driven by the test framework, differs from the one the game uses, which made it harder for me to stay as close as possible to the real thing.

I thought: OK, let's hack a dedicated entry point for tests that uses the exact same pipeline as the game does. Each test is a function (potentially async, i.e. it returns a promise), and to keep things sane I use chai for expect goodies such as deep comparison. All tests are loaded via import() calls in the entry point file.
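The core idea can be sketched as follows (a minimal sketch; the names `Test` and `runAll` are mine for illustration, not from the entry point below):

```typescript
// A test is just a named function, possibly async.
// `Test` and `runAll` are illustrative names, not from the actual entry point.
type Test = () => void | Promise<void>;

async function runAll(tests: Test[]): Promise<string[]> {
  const results: string[] = [];
  for (const test of tests) {
    try {
      await test(); // works for sync and async tests alike
      results.push(`✅ ${test.name}`);
    } catch (ex) {
      results.push(`❌ ${test.name}: ${ex}`);
    }
  }
  return results;
}
```

A failing test is just a thrown exception (chai's `expect` throws on mismatch), so the runner needs no assertion protocol of its own.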

We just need a convention for exporting tests: I make every test file's default export a function returning an array of test functions, so I can comment some out if needed. I also use regular named functions rather than arrow functions, so I can read their names when introspecting them (inline arrow functions wouldn't work).

It worked great!

The test runner entry point is pretty straightforward. The only thing missing is test-filtering goodies (every test import is explicit, so imports can be commented out, reordered, etc.).

I was just lacking test isolation. I was doing OK without it because I can control RNG seeds and other factors, namely clearing local storage before tests run. Still, it would be nice to be able to run tests in isolation...

So, to support it: tests are always imported, but if the page has the iso=1 query param, only a single test is run per session. Test state and console output are captured to a dedicated local storage key. One test runs at a time, its results are saved, and the page reloads to run the next; once all tests have been traversed, the accumulated output is presented. This is considerably slower than running them in series, as expected, but simpler than iframing child pages. It worked beautifully too.

I'm sharing my entry point and an example test file.

I find this approach great: it uses standard ESM imports, and we can exercise things such as canvas/WebGL/SVG and do diffing if necessary. Image/DOM snapshotting would be more complex to capture, but I'll tackle that if the need arises (unlikely in this scenario). The browser-visiting part can eventually be automated too; it's just a matter of setting up which browsers to run, the window size, and the criteria to stop and capture signals, completely agnostic of the tests themselves.
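For canvas/WebGL diffing, the comparison itself can stay browser-free: `ctx.getImageData(...).data` yields a flat RGBA buffer, and diffing two of them is plain arithmetic. A minimal sketch (the `diffRatio` name and the tolerance parameter are mine, not from the gist):

```typescript
// Fraction of pixels that differ between two RGBA buffers (e.g. ImageData.data).
// `tolerance` is a per-channel allowance for antialiasing noise; both the name
// `diffRatio` and the tolerance idea are illustrative assumptions.
function diffRatio(
  a: Uint8ClampedArray,
  b: Uint8ClampedArray,
  tolerance = 0,
): number {
  if (a.length !== b.length) throw new Error('buffers must have the same size');
  const pixels = a.length / 4;
  let differing = 0;
  for (let i = 0; i < a.length; i += 4) {
    // compare R, G, B, A; count the pixel once if any channel deviates
    for (let c = 0; c < 4; ++c) {
      if (Math.abs(a[i + c] - b[i + c]) > tolerance) {
        ++differing;
        break;
      }
    }
  }
  return differing / pixels;
}
```

A test could then assert `diffRatio(expected, actual, 2) === 0` against a reference render.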

// Example test file (ArrayUtils.test.ts)
import { expect } from 'chai';

import { ArrayUtils } from './ArrayUtils';

function sortByNumericAttribute() {
  const arr = [
    { a: 'a', n: 3 },
    { a: 'b', n: 2 },
    { a: 'c', n: 6 },
  ];
  expect(ArrayUtils.sortByNumericAttribute(arr, 'n', false)).to.deep.eq([
    { a: 'b', n: 2 },
    { a: 'a', n: 3 },
    { a: 'c', n: 6 },
  ]);
  expect(ArrayUtils.sortByNumericAttribute(arr, 'n', true)).to.deep.eq([
    { a: 'c', n: 6 },
    { a: 'a', n: 3 },
    { a: 'b', n: 2 },
  ]);
}

function times() {
  expect(ArrayUtils.times(3)).to.deep.equal([0, 1, 2]);
}

export default () => [
  sortByNumericAttribute,
  times,
];
// Test runner entry point
(async () => {
  const KEY_PROGRESS = 'TEST_PROGRESS';

  function loadLS(key: string): any {
    let raw: string | null = null;
    try {
      raw = localStorage.getItem(key);
      if (raw === null) return undefined;
      return JSON.parse(raw);
    } catch (ex) {
      console.error(ex);
      console.error(`ERROR LOADING KEY ${key} WITH RAW DATA:`, raw);
      return undefined;
    }
  }

  function saveLS(key: string, data: any): void {
    try {
      const serialized = JSON.stringify(data);
      localStorage.setItem(key, serialized);
    } catch (_) {
      console.error(`ERROR SAVING KEY ${key} WITH DATA:`, data);
    }
  }

  function clearUnknownLSKeys() {
    // iterate backwards: removing a key shifts the indices of the keys after it
    for (let i = localStorage.length - 1; i >= 0; --i) {
      const k = localStorage.key(i);
      if (k !== null && k !== KEY_PROGRESS) {
        localStorage.removeItem(k);
      }
    }
  }

  function createHeader(label: string) {
    const el = document.createElement('h1');
    el.style.position = 'absolute';
    el.style.top = '0';
    el.style.left = '20px';
    el.style.fontFamily = 'sans-serif';
    el.appendChild(document.createTextNode(label));
    document.body.appendChild(el);
  }

  type Progress =
    | {
        testNames: string[];
        testStatuses: boolean[];
        nextTestIndex: number;
        output: string[];
      }
    | undefined;

  const params = new URLSearchParams(location.search);
  const IN_ISOLATION = params.get('iso');

  createHeader(
    IN_ISOLATION
      ? '🐎 running each test in isolation'
      : '🏎️ running all tests in series',
  );

  clearUnknownLSKeys();

  // every test module's default export returns an array of test functions;
  // flatten them all into a single list
  const tests = [
    await import('../../src/Models/CardModel.test'),
    await import('../../src/Models/GameModel.test'),
    await import('../../src/GameLogic/ArrayUtils.test'),
    await import('../../src/GameLogic/AnimationUtils.test'),
    await import('../../src/GameLogic/GameLogic.test'),
    await import('../../src/GameLogic/TimeUtils.test'),
  ]
    .map((mod) => mod.default())
    .reduce((prev, curr) => prev.concat(curr), []);

  if (IN_ISOLATION) {
    let progress = loadLS(KEY_PROGRESS) as Progress;
    if (progress === undefined) {
      progress = {
        testNames: tests.map((t) => t.name),
        testStatuses: [],
        nextTestIndex: 0,
        output: new Array(tests.length).fill(''),
      };
    }

    if (progress.testNames.length === progress.testStatuses.length) {
      // no more tests to run
      console.warn('🏁 all done!');

      // make room for the output
      document.querySelector('#gamePlace')?.remove();
      document.body.style.padding = '80px 10px 10px 10px';
      document.body.style.overflow = 'visible';

      // present the output
      for (let i = 0; i < tests.length; ++i) {
        const testName = progress.testNames[i];
        const testStatus = progress.testStatuses[i];
        const testOutput = JSON.parse(progress.output[i]);

        const h2El = document.createElement('h2');
        h2El.style.fontFamily = 'sans-serif';
        const testLabel = `${i + 1}/${tests.length}: ${
          testStatus ? '✅' : '❌'
        } ${testName}`;
        h2El.appendChild(document.createTextNode(testLabel));
        document.body.appendChild(h2El);
        console.warn('%c' + testLabel, 'font-weight:bold');

        testOutput.forEach((line) => {
          const method = line.shift();
          console[method](...line);
          const divEl = document.createElement('div');
          divEl.style.fontFamily = 'monospace';
          divEl.appendChild(
            document.createTextNode(
              `${method.toUpperCase()}: ` + line.join(', '),
            ),
          );
          document.body.appendChild(divEl);
        });
      }

      localStorage.clear();
    } else {
      // capture console so the test's output can be persisted and replayed later
      const feedback: any[] = [];
      for (const method of ['log', 'warn', 'trace']) {
        console[method] = (...args) => feedback.push([method, ...args]);
      }

      // run the next test, record its outcome, then reload for the following one
      const test = tests[progress.nextTestIndex];
      try {
        document.title = `test ${progress.nextTestIndex + 1} / ${tests.length}`;
        await test();
        progress.output[progress.nextTestIndex] = JSON.stringify(feedback);
        progress.testStatuses[progress.nextTestIndex] = true;
      } catch (ex) {
        console.log(`** step ${test.name} failed: ${ex.toString()}`);
        progress.output[progress.nextTestIndex] = JSON.stringify(feedback);
        progress.testStatuses[progress.nextTestIndex] = false;
      }
      ++progress.nextTestIndex;
      saveLS(KEY_PROGRESS, progress);
      location.reload();
    }
  } else {
    // run all tests in series...
    console.warn('🚦 start');
    let i = 0;
    for (const test of tests) {
      console.warn(
        `%c🧪 ${i + 1}/${tests.length}: ${test.name} ...`,
        'font-weight:bold',
      );
      await test();
      ++i;
    }
    console.warn('🏁 all done!');
  }
})();