@julien-h
Created January 10, 2020 11:16

Tests

I wrote tests to make sure the code before and after the refactoring produced the same results. The tests are not perfect, and I don't consider them a product of my work to be shipped with the refactored code. Rather, they were a tool that enabled me to work faster and more safely.

As the code was tightly coupled to the GUI and to I/O (writing files rather than returning data), I had to write end-to-end tests that simulate mouse input and check the resulting files. Within the time budget I allowed myself, I couldn't get pytest-qt to work on the lab's machines, and I ended up running the tests on my own machine, which is not Ubuntu based. Since pytest-qt simulates mouse input, the tests are tightly coupled to the window's size and my screen resolution: they are platform dependent, machine dependent, and require significant setup to run on another machine.

I found no easy way to make the tests robust enough to be useful for everyone. They break often for reasons independent of the code, and as such require a good understanding of which failures are false positives. Therefore, I decided not to include them in the final pull request. The whole test file is only around 300 lines of code, so anyone should be able to write a similar tool quickly anyway; the tests also remain available in the git history.

The tests that I used can be categorized as follows:

  1. Setup tests make sure that the files on disk are set up correctly for the following tests.
  2. Basic GUI tests make sure that when a GUI button is clicked, the expected callback is called.
  3. Manual correction tests simulate user input and compare the results with a reference result.

The manual correction tests are the least robust but were the most useful during the refactoring. I used them as follows: (1) create a virtual desktop specifically for running the tests, ensuring that the application window is positioned identically at every run; (2) run the mouse simulation part of the test on the old code, and manually save the output files somewhere; (3) refactor the code; (4) run the mouse simulation part of the test on the new code, and use diff to make sure the produced files are the same as before.
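The comparison in step (4) can be automated with the standard library instead of calling diff by hand. A small sketch, assuming one directory of reference outputs saved from the old code and one from the refactored code (the directory names are hypothetical):

```python
# Compare files produced by the refactored code against reference
# outputs saved from the old code. Directory names are illustrative.
import filecmp
from pathlib import Path


def outputs_match(reference_dir: str, new_dir: str) -> bool:
    """Return True when both directories contain identical files."""
    ref = Path(reference_dir)
    new = Path(new_dir)
    ref_files = sorted(p.name for p in ref.iterdir() if p.is_file())
    new_files = sorted(p.name for p in new.iterdir() if p.is_file())
    if ref_files != new_files:
        return False  # a file appeared or disappeared during refactoring
    # shallow=False compares file contents, not just size and mtime
    _, mismatch, errors = filecmp.cmpfiles(ref, new, ref_files, shallow=False)
    return not mismatch and not errors
```

Passing `shallow=False` matters here: the default only compares `os.stat` signatures, which freshly regenerated files would fail even when their contents are identical.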

By any measure, these are not good tests. But they were good enough to enable my work.
