testmon: selects tests affected by changed files and methods
pytest-testmon is a pytest plugin which selects and executes only tests you need to run. It does this by collecting dependencies between tests and all executed code (internally using Coverage.py) and comparing the dependencies against changes. testmon updates its database on each test execution, so it works independently of version control.
You are reading docs for testmon version 1.0.1 which is a significant rewrite of testmon. For 0.9.x go here. The description of differences is here.
```shell
# build the dependency database and save it to .testmondata
pytest --testmon

# change some of your code (with test coverage)

# only run tests affected by the changes
pytest --testmon
```
Before testmon can select the right subset of your tests you have to run all your tests with the --testmon option.
If you want to read more about how testmon collects and compares changes we wrote about it here.
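To illustrate the idea (this is a sketch of the concept, not testmon's actual implementation, and the test names and files are hypothetical): record a fingerprint of each file a test executed, and re-select a test whenever a recorded fingerprint no longer matches the current source.

```python
import hashlib

def file_fingerprint(source):
    # hash of a file's source text; real testmon fingerprints
    # individual methods and blocks, not whole files
    return hashlib.sha1(source.encode()).hexdigest()

# dependency "database": test name -> {filename: fingerprint at last run}
db = {
    "test_a": {"mod.py": file_fingerprint("def f(): return 1\n")},
    "test_b": {"other.py": file_fingerprint("def g(): return 2\n")},
}

def affected_tests(current_sources):
    # select tests whose recorded fingerprints no longer match the sources
    return [
        test for test, deps in db.items()
        if any(file_fingerprint(current_sources[f]) != fp for f, fp in deps.items())
    ]

# after editing mod.py, only test_a is affected
current = {"mod.py": "def f(): return 42\n", "other.py": "def g(): return 2\n"}
```

With `current` as above, `affected_tests(current)` returns only `test_a`, so `test_b` can be deselected.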
If you want to run your tests on every file save we recommend using testmon with pytest-watch.
Testmon also remembers and re-reports failures even if nothing in their execution path has changed.
pip install pytest-testmon
All command line options
| Option | Description |
|---|---|
| `--testmon` | (select and collect) Select only tests affected by recent changes and update the testmon database. In some circumstances other options are also forced; see below. |
| `--testmon-noselect` | Don't deselect; execute all tests picked up by pytest and create/update the respective records in .testmondata. Forced if you use `--testmon` together with a test selector (`-k`, `-m`, `--last-failed`, `test_file.py::test_x`, etc.). |
| `--testmon-nocollect` | Don't track; just deselect based on the existing database and changes. Also forced when you use `--testmon` with a debugger or Coverage.py. |
| `--testmon-forceselect` | Select only tests which both reach changed code and satisfy the pytest selectors. |
| `--no-testmon` | Turn testmon off (even if it is activated from config by default). |
| | This allows you to have separate coverage data within one .testmondata file, e.g. when the same source code serves different endpoints or django settings. |
Add testmon to pytest.ini
```ini
[pytest]
# you can make --testmon a default if you want
addopts = --testmon

# If you want to separate different environments running the same sources:
testmon_env_expression = ''.join([s[5:] for s in sys.argv if s.startswith('--ds=')]) or os.environ.get('DJANGO_SETTINGS_MODULE')
```
A more complex testmon_env_expression can be written: the os, sys and hashlib modules are available, and there is a helper function md5(s) that returns hashlib.md5(s.encode()).hexdigest().
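As an illustration, the md5(s) helper is equivalent to the following plain Python (the env_key variable and the "default" fallback are hypothetical, just to show a possible use):

```python
import hashlib
import os

def md5(s):
    # equivalent of testmon's md5(s) helper:
    # hex digest of the UTF-8 encoded string
    return hashlib.md5(s.encode()).hexdigest()

# hypothetical expression: hash the active Django settings module so each
# environment gets its own records in .testmondata
env_key = md5(os.environ.get("DJANGO_SETTINGS_MODULE", "default"))
```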
Troubleshooting - usual problems
Testmon selects too many tests for execution: Depending on your change, this is most likely by design. If you changed a method parameter name, you effectively changed the whole hierarchy parameter -> method -> class -> module, so any test using anything from that module will be re-executed.
Tests are failing when running under testmon: It's quite unlikely that testmon influenced the execution of the test itself. However, the set of deselected and executed tests under testmon is highly variable, which means testmon is likely to expose undesired test dependencies. Please fix your test suite. We wrote down a couple of tips and tricks on how to tackle this challenge here.
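For example, a hidden dependency on execution order (a hypothetical pair of tests sharing module-level state) only surfaces when the first test is deselected:

```python
# hypothetical example of an undesired test dependency via shared state
state = {}

def test_login():
    state["user"] = "alice"          # side effect other tests silently rely on
    assert state["user"] == "alice"

def test_profile():
    # passes only if test_login ran first; fails when it is deselected
    assert state.get("user") == "alice"
```

Running both tests in order passes; running test_profile alone fails. That is exactly the kind of coupling a variable selection of tests will expose.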
You can also check whether your test is influenced when run under pytest-cov (coverage) without testmon. For reporting a bug, a repository with a description of the unexpected behavior is best, but please don't hesitate to report even if your project is closed source. We'll try to fix it!
There are many things influencing the outcome of a test. Testmon keeps track of some of them, but not all:
- code inside the tested project itself, which presumably changes frequently
- environment variables (e.g. DJANGO_SETTINGS_MODULE) and the Python version (you can separate testmon data by configuring testmon_env_expression)
- code in third-party packages, which presumably change infrequently
- static files (txt, xml, other project assets)
- external services (reachable through network)
So far, testmon deals with incrementally running tests when faced with the first two categories of changes.
Later versions may implement detection of the other categories.