Code Coverage

Wing can collect and display code coverage statistics while running unit tests, to make it easier to see whether your tests are doing a good job of testing all your code.

Collected coverage statistics also make it possible to identify and re-run only those unit tests that previously reached the code that you are editing, making it faster and easier to identify any problems that have been introduced.

Installing Coverage

Before you can use code coverage in Wing, you will need to install version 6.3 or newer of the coverage package into your Python installation, either using Wing's Packages tool or by invoking pip on the command line:

pip install coverage

Detailed documentation on installing coverage is available at https://coverage.readthedocs.io/.

Wing's code coverage features may not work with coverage versions older than 6.3, and Python 2.x is not supported at all for this feature, since it was added after Python 2.x reached end of life.
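Since a minimum coverage version is required, it can be useful to confirm what is installed from Python itself. The following is a minimal stdlib-only check; the function name and the tuple comparison are illustrative, not part of Wing or coverage:

```python
from importlib.metadata import PackageNotFoundError, version

def coverage_version_ok(minimum=(6, 3)):
    """Return True if the installed coverage package meets the minimum version."""
    try:
        installed = version("coverage")
    except PackageNotFoundError:
        return False  # coverage is not installed at all
    # Compare only the leading numeric components of the version string
    parts = tuple(int(p) for p in installed.split(".")[:2] if p.isdigit())
    return parts >= minimum

print(coverage_version_ok())
```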

Coverage versions older than 7.0 will cause Wing to consume more CPU for some time after unit tests finish running.

Collecting Coverage Data

Once coverage has been installed, collecting code coverage data can be enabled with the Testing > Use Code Coverage menu item. This tells Wing to start collecting code coverage data whenever any unit tests are run (but not debugged) from the Testing tool. After each set of tests is run, Wing merges coverage statistics from that run into all the statistics that have been collected so far.
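The merge step means a line counts as covered once any run has reached it. That semantics can be sketched as a per-file union of reached lines; the data shapes below are hypothetical and chosen for illustration, not Wing's internal storage format:

```python
def merge_coverage(accumulated, new_run):
    """Merge one run's reached lines into the accumulated statistics.

    Both arguments map filename -> set of line numbers that were reached.
    A line counts as covered if any merged run has reached it.
    """
    for filename, lines in new_run.items():
        accumulated.setdefault(filename, set()).update(lines)
    return accumulated

stats = {}
merge_coverage(stats, {"app.py": {1, 2, 5}})
merge_coverage(stats, {"app.py": {2, 7}, "util.py": {3}})
print(sorted(stats["app.py"]))  # [1, 2, 5, 7]
```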

You can clear out all stored code coverage data at any time with Clear Code Coverage Data in the Testing menu.

Important: If your testing code contains import wingdbstub, you will need to disable that when unit tests are being run from Wing. Otherwise, the debugger will conflict with and prevent collection of code coverage data.
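One way to do this without deleting the import is to guard it behind an environment variable that you set only when you actually want the debug stub; `WANT_WINGDB` here is a hypothetical name of your own choosing, not something Wing defines:

```python
import os

# Only load Wing's debug stub when explicitly requested, so that unit
# test runs started from the Testing tool can collect coverage data.
if os.environ.get("WANT_WINGDB") == "1":
    try:
        import wingdbstub  # noqa: F401  (only available when Wing is in use)
    except ImportError:
        pass  # fall through silently when the stub is unavailable
```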

Similarly, if you are using a testing framework like pytest that can enable coverage on its own, you will need to disable coverage in the testing framework's configuration while running the tests from Wing, because starting coverage twice will prevent proper functioning of the code coverage subsystem.
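For example, if the conflict comes from the pytest-cov plugin, its coverage collection can be turned off with the plugin's --no-cov option. A pytest.ini fragment (assuming pytest-cov is installed) might look like:

```ini
# pytest.ini -- leave coverage collection to Wing rather than pytest-cov
[pytest]
addopts = --no-cov
```

Note that pytest.ini applies to all runs; if you also run tests from the command line with their own coverage, you may prefer to pass --no-cov only in the run arguments that Wing's Testing tool uses.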

Integrated Coverage Data Display

While Use Code Coverage is enabled, Wing adds a narrow margin to editors, as a place to indicate lines that have been reached (with a green mark) and lines that have been missed (with a red mark).

In addition to the indicators in this margin, lines of code that were reached or missed may be highlighted by changing their background color. This additional markup is off by default but can be enabled with the Editor > Code Coverage > Set Visited Lines Background Color and Set Missed Lines Background Color preferences.

Whenever code coverage markup is visible on an editor, hovering the mouse cursor over a visited line of code will display a tooltip that lists the unit tests that reached that line of code. This behavior can be disabled with the Editor > Code Coverage > Show Editor Tooltips preference.

As you edit code, lines that are added or changed will be marked as unreached by code coverage, since those lines in their current form were in fact never tested. Once unit tests are re-run, the marks will be updated according to newly available code coverage data.

Viewing and Running Stale Unit Tests

When you save changes to Python code to disk, Wing automatically invalidates the results for any unit tests that were previously seen to reach code that you have changed. These invalidated results are indicated in the Testing tool by changing the color of test result icons to yellow, rather than green for succeeded or red for failed.
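Conceptually, invalidation intersects each test's previously recorded coverage with the lines you have changed. The sketch below only illustrates that idea with hypothetical data shapes; it is not Wing's actual algorithm, which (as noted below) is considerably more involved:

```python
def stale_tests(reached_by_test, changed_lines):
    """Return names of tests whose recorded coverage touches changed code.

    reached_by_test maps test name -> {filename: set of reached line numbers};
    changed_lines maps filename -> set of line numbers edited since the last run.
    """
    stale = set()
    for test, files in reached_by_test.items():
        for filename, lines in files.items():
            if lines & changed_lines.get(filename, set()):
                stale.add(test)
                break
    return stale

recorded = {
    "test_add": {"calc.py": {10, 11}},
    "test_div": {"calc.py": {20, 21}},
}
print(sorted(stale_tests(recorded, {"calc.py": {11}})))  # ['test_add']
```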

All stale tests with invalidated results can be re-run with Run Stale Tests in the Testing menu.

This makes it easy to check whether edits you have made broke any existing unit tests.

The process of deciding which tests a change should invalidate is relatively complex, and should be treated as an approximation and not a final and complete determination of all tests that may be affected by the change. We strongly recommend re-running all tests before releasing changes into production.

See Test Invalidation in How Code Coverage Works for more information.

Exporting Data and Reports

Show HTML Coverage Report in the Testing menu generates an HTML code coverage report and displays it in your web browser.

Coverage data may also be exported with the Export Coverage Data menu item, in JSON, LCOV, XML, HTML, or raw (coverage.py native) format.
