Tests
Tests make development easier for veteran contributors and newcomers alike. Most projects use the unittest framework for their tests, so you should familiarize yourself with it.
Writing tests can be a great way to get involved with a project. It’s an opportunity to get familiar with the codebase and with the code submission and review process. Check the project’s code coverage and write a test for a piece of code missing coverage!
Patches should be accompanied by one or more tests to demonstrate the feature or bugfix works. This makes the review process much easier since it allows the reviewer to run your code with very little effort, and it lets developers know when they break your code.
Test Organization
Having a standard test layout makes tests easy to find. When adding new tests, follow these guidelines:
- Each module in the application should have a corresponding test module. These modules should be organized in the test package to mirror the package they test. That is, if the package contains the `<package>/server/push.py` module, the test module should be called `<test_root>/server/test_push.py`.
- Within each test module, follow the unittest code organization guidelines.
- Include a documentation block for each test case that explains the goal of the test.
- Avoid using mock unless absolutely necessary. It’s easy to write tests using mock that only assert that mock works as expected. When testing code that makes HTTP requests, consider using vcrpy.
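As a minimal sketch of these guidelines, a test module for a hypothetical `server/push.py` module might look like the following. All names here are illustrative; in a real project you would import the actual code under test rather than defining it in the test module:

```python
"""Tests for a hypothetical server/push.py module.

In a real project you would import the code under test, e.g.
``from mypackage.server import push``.
"""
import unittest


def format_channel(name):
    """Stand-in for a function that would live in server/push.py."""
    return "/queue/{}".format(name.strip().lower())


class TestFormatChannel(unittest.TestCase):
    """Tests for the format_channel function."""

    def test_lowercases_name(self):
        """Assert channel names are normalized to lowercase."""
        self.assertEqual("/queue/updates", format_channel("Updates"))

    def test_strips_whitespace(self):
        """Assert surrounding whitespace is removed."""
        self.assertEqual("/queue/updates", format_channel("  updates "))
```

Each test case carries a docstring stating its goal, and the module path mirrors the module it tests.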
You may find projects that do not follow this test layout. In those cases, consider re-organizing the tests to follow the layout described here; until that happens, follow the project’s established conventions.
Test Runners
Projects should include an easy way to run the tests locally, and the steps to run them should be documented. This should be the same way the continuous integration tool (Jenkins, Zuul CI, etc.) runs the tests.
There are many test runners available that can discover unittest-based tests, including pytest, nose2, and the standard library’s own `python -m unittest discover`. Projects should choose whichever runner best suits them.
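As a sketch of what discovery means in practice, the standard library can locate and run unittest-style tests by file name pattern with no extra dependencies; pytest and nose2 discover the same tests. The test module written here is illustrative:

```python
# Write a minimal unittest-style test module to a temporary directory,
# then run it with the standard library's discovery runner.
import pathlib
import tempfile
import unittest

tests_dir = pathlib.Path(tempfile.mkdtemp())
(tests_dir / "test_sample.py").write_text(
    "import unittest\n"
    "\n"
    "class TestSample(unittest.TestCase):\n"
    "    def test_truth(self):\n"
    "        self.assertTrue(True)\n"
)

# Discovery finds modules matching the default pattern "test*.py".
suite = unittest.TestLoader().discover(start_dir=str(tests_dir))
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

On the command line, `python -m unittest discover -s tests` does the same thing for a `tests/` directory.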
Tox
Tox is an easy way to run your project’s tests (using a Python test runner) using multiple Python interpreters. It also allows you to define arbitrary test environments, so it’s an excellent place to run the code style tests and to ensure the project’s documentation builds without errors or warnings.
Here’s an example `tox.ini` file that runs a project’s unit tests on Python 2.7, 3.4, 3.5, and 3.6. It also runs flake8 on the entire codebase, builds the documentation with the Sphinx warnings-treated-as-errors flag (`-W`) enabled, and enforces 100% coverage on lines edited by new patches using diff-cover:
```ini
[tox]
envlist = py27,py34,py35,py36,lint,diff-cover,docs
# If the user is missing an interpreter, don't fail
skip_missing_interpreters = True

[testenv]
deps =
    -rtest-requirements.txt
# Substitute your test runner of choice
commands =
    py.test
# When running in OpenShift you don't have a username, so expanduser
# won't work. If you are running your tests in CentOS CI, this line is
# important so the tests can pass there, otherwise tox will fail to find
# a home directory when looking for configuration files.
passenv = HOME

[testenv:diff-cover]
deps =
    diff-cover
commands =
    diff-cover coverage.xml --compare-branch=origin/master --fail-under=100

[testenv:docs]
changedir = docs
deps =
    sphinx
    sphinxcontrib-httpdomain
    -rrequirements.txt
whitelist_externals =
    mkdir
    sphinx-build
commands =
    mkdir -p _static
    sphinx-build -W -b html -d {envtmpdir}/doctrees . _build/html

[testenv:lint]
deps =
    flake8 > 3.0
commands =
    python -m flake8 {posargs}

[flake8]
show-source = True
max-line-length = 100
exclude = .git,.tox,dist,*egg
```
Coverage
coverage is a good way to collect test coverage statistics. pytest has a pytest-cov plugin that integrates with coverage and nose-cov provides integration for the nose test runner. diff-cover can be used to ensure that all lines edited in a patch have coverage.
It’s possible (and recommended) to have the test suite fail if the coverage percentage goes down. This example `.coveragerc` enforces that:
```ini
[run]
# Track which conditional branches are covered.
branch = True
include = my_python_package/*

[report]
# Fail if the coverage is not 100%
fail_under = 100
# Display results with up to 1/100th of a percent accuracy.
precision = 2
exclude_lines =
    pragma: no cover
    # Don't complain if tests don't hit defensive assertion code
    raise AssertionError
    raise NotImplementedError
    if __name__ == .__main__.:
omit =
    my_python_package/tests/*
```
To configure pytest to collect coverage data on your project, edit `setup.cfg` and add this block, substituting `yourpackage` with the name of the Python package you are measuring coverage on:
```ini
[tool:pytest]
addopts = --cov-config .coveragerc --cov=yourpackage --cov-report term --cov-report xml --cov-report html
```
This causes coverage (and any test-running plugins using coverage) to fail if the coverage level is not 100%. New projects should enforce 100% test coverage. Existing projects should not accept a pull request that lowers test coverage, and should raise the minimum until it reaches 100%.
coverage has great exclusion support, so you can exclude individual lines, conditional branches, functions, classes, and whole source files from your coverage report. If you have code that doesn’t make sense to have tests for, you can exclude it from your coverage report. Remember to leave a comment explaining why it’s excluded!
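For instance, a defensive branch can be excluded inline with the `# pragma: no cover` comment that coverage recognizes (matching the `exclude_lines` pattern in the `.coveragerc` above). The function here is illustrative:

```python
def load_backend(name):
    """Return a driver identifier for a known backend name."""
    if name == "postgres":
        return "postgresql+psycopg2"
    if name == "sqlite":
        return "sqlite3"
    # Excluded from the coverage report: callers validate the backend
    # name beforehand, so this defensive branch is unreachable in tests.
    raise ValueError("unknown backend: " + name)  # pragma: no cover
```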
Licenses
The liccheck checker can verify that every dependency in your project has an acceptable license. The dependencies are checked recursively.
The licenses are validated against a set of acceptable licenses that you define in a file called `.license_strategy.ini` in your project directory. Here is an example of such a file that would accept Free licenses:
```ini
[Licenses]
authorized_licenses:
    bsd
    new bsd
    simplified bsd
    apache
    apache 2.0
    apache software
    gnu lgpl
    gpl v2
    gpl v3
    lgpl with exceptions or zpl
    isc
    isc license (iscl)
    mit
    python software foundation
    zpl 2.1
```
The verification is case-insensitive and is done on both the license and the classifiers metadata fields. See liccheck’s documentation for more details.
You can automate the license check with the following snippet in your `tox.ini` file:
```ini
[testenv:licenses]
deps =
    liccheck
commands =
    liccheck -s .license_strategy.ini
```
Remember to add `licenses` to your Tox `envlist`.
Security
The bandit checker is designed to find common security issues in Python code.
You can add it to the tests run by Tox by adding the following snippet to your `tox.ini` file:
```ini
[testenv:bandit]
deps =
    bandit
commands =
    bandit -r your_project/ -x your_project/tests/ -ll
```
Remember to add `bandit` to your Tox `envlist`.
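As an illustration of the kind of issue bandit reports, invoking a subprocess through the shell with untrusted input is flagged, while passing an argument list avoids the shell entirely. The function and command here are illustrative:

```python
import subprocess


def run_grep(pattern, path):
    """Search a file for a pattern without going through the shell.

    bandit would flag subprocess.run("grep %s %s" % (pattern, path),
    shell=True) because pattern and path could inject shell commands.
    An argument list is passed directly to the program, so nothing is
    interpreted by a shell.
    """
    return subprocess.run(
        ["grep", "--", pattern, path],
        capture_output=True,
        text=True,
    )
```

Keeping `shell=True` out of subprocess calls is the simplest way to clear this class of bandit findings.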