@douglatornell
Last active December 28, 2015 17:29
{
"metadata": {
"name": ""
},
"nbformat": 3,
"nbformat_minor": 0,
"worksheets": [
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# EOAS Software Workout\n",
"## Testing and Code Design\n",
"18-Nov-2013"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"References:\n",
"\n",
"* [Bootcamp Code Design & Testing Notebook](http://nbviewer.ipython.org/url/douglatornell.github.io/2013-09-26-ubc/lessons/ubc-testing/testing-0-unit-tests.ipynb)\n",
"* [List of Unit Test Frameworks](http://en.wikipedia.org/wiki/List_of_unit_testing_frameworks)\n",
"* [xUnit](http://en.wikipedia.org/wiki/XUnit)\n",
"* [Salish Sea MEOPAR Tools repo](https://bitbucket.org/salishsea/tools)\n",
"\n",
" * [bathy_tools.py Module](https://bitbucket.org/salishsea/tools/src/tip/SalishSeaTools/salishsea_tools/bathy_tools.py)\n",
" * [test_bathy_tools.py Module](https://bitbucket.org/salishsea/tools/src/tip/SalishSeaTools/tests/test_bathy_tools.py)\n",
"* [pytest Docs](http://pytest.org/latest/)\n",
"* [Python Coverage Measurement Tool](http://nedbatchelder.com/code/coverage/)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here's a pretty common pattern of writing a piece of code and testing it interactively.\n",
"I've switched from just printing the result and confirming by eye/brain that it is\n",
"the correct result\n",
"to letting the computer do the work via `assert` statements."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"def mean(data):\n",
" return sum(data) / len(data)\n",
"\n",
"result = mean([1, 2, 3])\n",
"assert result == 2\n",
"\n",
"result = mean([1, 2])\n",
"assert result == 1.5"
],
"language": "python",
"metadata": {},
"outputs": [
{
"ename": "AssertionError",
"evalue": "",
"output_type": "pyerr",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m\n\u001b[0;31mAssertionError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-6-cb6f5bf9e3ac>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m 6\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 7\u001b[0m \u001b[0mresult\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mmean\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m2\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 8\u001b[0;31m \u001b[0;32massert\u001b[0m \u001b[0mresult\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;36m1.5\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
"\u001b[0;31mAssertionError\u001b[0m: "
]
}
],
"prompt_number": 6
},
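{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `AssertionError` above is Python 2's integer division at work:\n",
"`sum([1, 2]) / len([1, 2])` is `3 / 2`, which floor-divides to `1`, not `1.5`.\n",
"A minimal sketch of the distinction\n",
"(runs the same under Python 2 and 3):\n",
"\n",
"```python\n",
"from __future__ import division  # make / do true division under Python 2 as well\n",
"\n",
"assert 3 / 2 == 1.5  # true division\n",
"assert 3 // 2 == 1  # floor division is spelled // in both Python 2 and 3\n",
"```"
]
},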
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You pretty rapidly reach the point where it makes sense to take the code out of the Notebook\n",
"so that you can take advantage of test automation tools.\n",
"So,\n",
"I put the function in a `mean.py` module:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"%load mean.py"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 7
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"def mean(data):\n",
" return sum(data) / len(data)\n"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"and I made the tests into functions whose names start with `test_` and put them in a `test_mean.py` module:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"%load test_mean.py"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 8
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"from mean import mean\n",
"\n",
"\n",
"def test_int_mean():\n",
" result = mean([1, 2, 3])\n",
" assert result == 2\n",
"\n",
"\n",
"def test_float_mean():\n",
" result = mean([1, 2])\n",
" assert result == 1.5\n",
"\n"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With that bit of structure and boilerplate I can use `py.test`\n",
"to automatically collect and run the tests;\n",
"the first run below was made before `test_float_mean` was added,\n",
"the second one after:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```bash\n",
"$ py.test\n",
"============================ test session starts ============================\n",
"platform darwin -- Python 2.7.5 -- pytest-2.4.2\n",
"collected 1 items\n",
"\n",
"test_mean.py .\n",
"\n",
"========================= 1 passed in 0.03 seconds ==========================\n",
"tom:workout$ py.test\n",
"============================ test session starts ============================\n",
"platform darwin -- Python 2.7.5 -- pytest-2.4.2\n",
"collected 2 items\n",
"\n",
"test_mean.py .F\n",
"\n",
"================================= FAILURES ==================================\n",
"______________________________ test_float_mean ______________________________\n",
"\n",
" def test_float_mean():\n",
" result = mean([1, 2])\n",
"> assert result == 1.5\n",
"E assert 1 == 1.5\n",
"\n",
"test_mean.py:11: AssertionError\n",
"==================== 1 failed, 1 passed in 0.04 seconds =====================\n",
"```"
]
},
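{
"cell_type": "markdown",
"metadata": {},
"source": [
"One possible fix for the failing `test_float_mean`\n",
"(a sketch, not part of the original `mean.py`)\n",
"is to enable true division at the top of the module:\n",
"\n",
"```python\n",
"from __future__ import division  # / does true division even on ints\n",
"\n",
"\n",
"def mean(data):\n",
"    return sum(data) / len(data)\n",
"\n",
"\n",
"assert mean([1, 2]) == 1.5\n",
"assert mean([1, 2, 3]) == 2\n",
"```\n",
"\n",
"With that change both tests pass."
]
},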
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The Salish Sea MEOPAR `bathy_tools.py` and `test_bathy_tools.py` modules\n",
"(links in the References section above)\n",
"show a more realistic application of these concepts."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Most languages have some sort of unit test framework and test automation tools.\n",
"See the Wikipedia list linked in the References section above."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Python also has a test coverage tool that monitors the execution of the test suite\n",
"and reports on whether or not each line of code was touched by a test.\n",
"A recent improvement\n",
"(in contrast to the situation at the time of the Bootcamp)\n",
"is that coverage is now available in the Anaconda distribution:\n",
"`conda install coverage`."
]
},
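{
"cell_type": "markdown",
"metadata": {},
"source": [
"One way to use it with the test suite\n",
"(a sketch, assuming coverage's `run -m` and `report -m` options):\n",
"\n",
"```bash\n",
"$ coverage run -m pytest\n",
"$ coverage report -m\n",
"```\n",
"\n",
"`report -m` lists the line numbers that no test touched."
]
},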
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Key Points\n",
"\n",
"* Use `assert` statements to make the computer check your test results for you\n",
"* Writing test functions formalizes testing that you are probably doing manually anyway, and makes it repeatable\n",
"* Test automation tools let you run your tests often; e.g. before every commit\n",
"* A test coverage tool can help you find paths through your code that are untested, and identify declarative sections of code that may not need to be tested\n",
"* Breaking code into small functions makes it not only easier to read and understand, but also easier to test\n",
"* Building larger pieces of functionality out of small pieces of well-tested code gives you confidence that the larger functionality will do what you expect\n",
"* Tests provide \"executable documentation\" for your future self (and others) about how a piece of code is expected to work"
]
}
],
"metadata": {}
}
]
}