solver parameterised pytest #780
Attempt to parametrise our pytest test-suite. The default behavior stays the same: generic tests get run on the default solver, OR-Tools, and the more solver-specific tests / across-solver tests check which backends are available on the current system. But sometimes you want to run the test-suite on a single solver without having to uninstall all other solvers or create a fresh environment. Now you can pass an optional argument `--solver`, e.g. `--solver exact`. This will have three consequences:

- the generic tests run on `exact` instead of the default `ortools`
- solver-specific tests targeting other solvers are filtered out
- the across-solver tests are filtered down to their `exact` instances

In general, I opted for filtering instead of skipping tests. So the non-`exact` tests will not count towards the total number of tests. I believe we should reserve "skipping" for tests which don't get run for reasons of which we want to inform the user, e.g. missing dependencies which they need to install. When the user provides the `--solver` option, they already know that tests targeting other solvers won't be run, so it would just clutter the results if we were to skip those tests instead of filtering them.
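For reference, the plumbing behind the `--solver` option could live in `conftest.py`; the following is a minimal sketch where the option and fixture names match the PR, but the implementation details (including the `ortools` fallback) are assumptions:

```python
# conftest.py -- a minimal sketch, not the PR's exact implementation
import pytest

def pytest_addoption(parser):
    # register the optional --solver command-line argument
    parser.addoption("--solver", action="store", default=None,
                     help="run the test-suite on this solver only")

@pytest.fixture(scope="class")
def solver(request):
    # expose the chosen solver on the test class as `self.solver`,
    # falling back to the default solver when --solver was not given
    request.cls.solver = request.config.getoption("--solver") or "ortools"
```

Running the suite on a single backend then becomes e.g. `python -m pytest tests/ --solver exact`.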
To parameterise a unittest class for the "generic" tests, simply decorate it with:

```python
@pytest.mark.usefixtures("solver")
```

After which `self.solver` will be available, matching the user-provided solver argument.
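Put together, a generic test class could then look like this (the class name and model below are illustrative, not taken from the test-suite):

```python
import unittest
import pytest
import cpmpy as cp

@pytest.mark.usefixtures("solver")
class TestGenericExample(unittest.TestCase):
    def test_sum_model(self):
        x = cp.intvar(0, 10, shape=3)
        model = cp.Model(cp.sum(x) == 5)
        # self.solver was injected by the `solver` fixture
        self.assertTrue(model.solve(solver=self.solver))
```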
All solver-specific tests can now be decorated with:

```python
@pytest.mark.requires_solver("<SOLVER_NAME>")
```

And will automatically be skipped if a `--solver` argument has been provided which doesn't match `SOLVER_NAME`.
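For example (the test below is illustrative, not from the PR; a custom mark like this also needs registering in `conftest.py` to avoid warnings):

```python
import pytest
import cpmpy as cp

@pytest.mark.requires_solver("exact")
def test_exact_only():
    x = cp.boolvar(shape=4)
    assert cp.Model(cp.sum(x) >= 2).solve(solver="exact")

# in conftest.py, registering the mark would look like:
# def pytest_configure(config):
#     config.addinivalue_line("markers",
#         "requires_solver(name): test requires the named solver backend")
```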
For the across-solver tests which use generators (those in `test_constraints.py`), the `pytest_collection_modifyitems` hook will filter out parameterised pytest functions which have been instantiated with a solver different from the user-provided one. Both the arguments `solver` and `solver_name` get filtered on.
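A sketch of what that hook could look like, also folding in the centralised `requires_solver` handling mentioned further below; the parameter names `solver` and `solver_name` come from the PR, the rest is an assumption:

```python
# conftest.py -- a sketch, assuming string-valued solver parameters
def pytest_collection_modifyitems(config, items):
    chosen = config.getoption("--solver")
    if chosen is None:
        return  # default behaviour: keep the whole test-suite
    selected = []
    for item in items:
        # parameterised functions carry their arguments in item.callspec
        callspec = getattr(item, "callspec", None)
        if callspec is not None:
            name = callspec.params.get("solver") or callspec.params.get("solver_name")
            if name is not None and str(name) != chosen:
                continue  # filter out rather than skip: no clutter in the results
        # solver-specific tests marked for a different backend
        mark = item.get_closest_marker("requires_solver")
        if mark is not None and mark.args and mark.args[0] != chosen:
            continue
        selected.append(item)
    items[:] = selected
```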
There are still some smaller places (see `test_solveAll.py`) where `cp.SolverLookup.base_solvers()` is used more directly, which can't be filtered without making changes to the test itself (not possible with one of the decorators / callback functions).

As a further improvement, it might be possible to merge the following two (already did it ;) ):
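The hard-to-filter pattern looks roughly like this (illustrative, based on the PR's description rather than copied from `test_solveAll.py`); the loop over solvers happens inside the test body, so collection-time filtering never sees the individual backends:

```python
import cpmpy as cp

def test_solveall_on_all_installed_solvers():
    x = cp.boolvar(shape=2)
    model = cp.Model(cp.any(x))
    for name, solver_class in cp.SolverLookup.base_solvers():
        if solver_class.supported():
            model.solveAll(solver=name)
```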
So: do the skipping when a solver is not available also through the first mark, and skip the tests more centrally in `pytest_collection_modifyitems`.

Using this parameterisation with a solver different from OR-Tools revealed some issues with our test-suite, related to #779.