Automating tests
Automated tests allow a program’s complete behaviour to be tested every time changes are made, revealing any problems the changes may have caused.
Test frameworks provide tools to make writing tests easier, and test runners will automatically search for tests, run them, and verify that they give the correct results. pytest is an example of both of these.
Write test functions that use assert statements to check that the results are as expected. Name the functions so that they start with test, and put them in files with names starting with test_ or ending with _test.py. Run the tests automatically by calling pytest.
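For example, a minimal test file might look like the following (the add function is a hypothetical example; normally the function under test would be imported from the module being tested):

```python
# test_arithmetic.py -- a minimal example test file.
# `add` is a hypothetical function under test.

def add(a, b):
    return a + b

# pytest discovers these functions automatically because their
# names start with "test" and the file name starts with "test_".
def test_add():
    assert add(2, 3) == 5

def test_add_negatives():
    assert add(-1, -2) == -3
```

Running pytest in the same directory will find and run both tests.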
pytest features
Use the @pytest.mark.parametrize decorator to run the same test multiple times with different data.
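A sketch of parametrization (add is again a hypothetical function under test) — the test function runs once per tuple in the list:

```python
import pytest

# Hypothetical function under test.
def add(a, b):
    return a + b

# The test runs three times, once for each (a, b, expected) tuple.
@pytest.mark.parametrize("a, b, expected", [
    (2, 3, 5),
    (-1, 1, 0),
    (0, 0, 0),
])
def test_add(a, b, expected):
    assert add(a, b) == expected
```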
Use with pytest.raises(SomeException): to define a block that is expected to raise that exception. The test will fail if the exception is not raised.
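For instance (reciprocal is a hypothetical function that raises on invalid input):

```python
import pytest

# Hypothetical function that raises on invalid input.
def reciprocal(x):
    if x == 0:
        raise ZeroDivisionError("cannot take the reciprocal of zero")
    return 1 / x

def test_reciprocal_of_zero():
    # The test passes only if the block raises ZeroDivisionError.
    with pytest.raises(ZeroDivisionError):
        reciprocal(0)
```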
Use pytest --doctest-modules to run the examples given in any docstrings and check that the output shown is correct.
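A docstring example that pytest --doctest-modules would check (square is a hypothetical example function):

```python
# square.py -- the >>> examples in the docstring are run and their
# output checked when you call: pytest --doctest-modules
def square(x):
    """Return x squared.

    >>> square(3)
    9
    >>> square(-2)
    4
    """
    return x * x
```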
Input data for tests
A fixture is a piece of test data that can be passed to multiple tests.
Define a fixture by creating a function with the @pytest.fixture decorator that returns the desired data. Any test that takes an argument of the same name will receive the data in the fixture.
Set the scope parameter of the @pytest.fixture decorator to control if and where the fixture is re-used across multiple tests. For example, scope="session" reuses the fixture for the complete run of tests.
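A sketch of a fixture shared between two tests (sample_data is a hypothetical fixture name):

```python
import pytest

# Hypothetical fixture providing shared input data. With
# scope="session" it is created once for the whole test run.
@pytest.fixture(scope="session")
def sample_data():
    return [1, 2, 3, 4, 5]

# Each test receives the fixture by naming it as an argument.
def test_total(sample_data):
    assert sum(sample_data) == 15

def test_largest(sample_data):
    assert max(sample_data) == 5
```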
Edge and corner cases, and integration testing
In problems that have fixed boundaries, an edge case is where a parameter sits on one of the boundaries.
In multidimensional problems with fixed boundaries, a corner case is where more than one parameter sits on one of the boundaries simultaneously.
Edge and corner cases need specific tests separate from the tests that apply across the whole problem.
Unit tests test the smallest units of functionality, usually functions.
Integration tests test that these units fit together correctly into larger programs.
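The edge-case idea can be sketched with a hypothetical clamp function whose fixed boundaries are low and high:

```python
# Hypothetical function with fixed boundaries: clamp x into [low, high].
def clamp(x, low, high):
    return max(low, min(x, high))

# A unit test for the interior of the problem:
def test_clamp_interior():
    assert clamp(5, 0, 10) == 5

# Edge cases -- x sitting exactly on a boundary -- need their own tests:
def test_clamp_edges():
    assert clamp(0, 0, 10) == 0
    assert clamp(10, 0, 10) == 10
```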
Testing randomness
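One common approach to testing code that uses randomness is to seed the random number generator so that its output is reproducible; a minimal sketch (roll_dice is a hypothetical function under test):

```python
import random

# Hypothetical function under test: it takes a random number
# generator as an argument so tests can control the randomness.
def roll_dice(rng):
    return rng.randint(1, 6)

def test_roll_is_reproducible():
    # Two generators seeded identically produce identical results.
    assert roll_dice(random.Random(42)) == roll_dice(random.Random(42))

def test_roll_in_range():
    # Properties that must hold for any seed can also be tested.
    rng = random.Random(0)
    assert all(1 <= roll_dice(rng) <= 6 for _ in range(100))
```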
Continuous Integration
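As a sketch (the workflow file path, Python version, and action versions here are assumptions), a minimal GitHub Actions configuration that runs pytest on every push might look like:

```yaml
# .github/workflows/test.yml -- hypothetical minimal CI workflow
name: tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install pytest
      - run: pytest
```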
Code coverage
Use pytest's --cov option (provided by the pytest-cov plugin) to monitor which code is exercised by the tests, and then use Codecov to report on the results.
Having every line of code tested isn’t essential to have a good test suite—even one test is better than zero!
Having every line of code tested doesn’t guarantee that your code is bug free. In particular, edge cases and corner cases are often not guarded against.
codecov.io can connect to your GitHub account and pull coverage data to generate coverage reports from your CI workflows.
Putting it all together
Testing and CI work well together to identify problems in research software and allow them to be fixed quickly.
If anything is unclear, or you get stuck, please ask for help!