Chapter 7: Pytest fixtures and plugins #
Alfredo Deza
In previous chapters, the extensibility and ease of use of Pytest have been praised. Running tests with its command-line tool and executing test functions or test methods are all very straightforward. What happens when a project needs some tests to have a different setup, or to behave differently based on command-line flags that are not offered as part of the default Pytest installation? The Pytest framework has extensive support for adding to and manipulating almost every aspect of its engine.
At first, I felt intimidated by plugins. My experience with writing plugins for other frameworks and tools (even Python tools) hasn’t been good: lots of quirks and oddities that are hard to remember. Not quite the case for Pytest, however. The project follows its pattern of “getting out of the way”: it allows you to write a plugin as a single function, or as something as complicated as a fully-fledged Python package that needs to hook into entry points at install time.
Start by writing functions, and when the functionality grows beyond a few functions, consider making them a separate package, especially if more than one project can benefit from the collection of functions that are extending the behavior.
Although plugins are essential, this chapter covers a few other vital aspects of the framework: fixtures and parametrization. Reusing utilities and sharing them around, automatically sharing fixtures across tests, and using multiple inputs for the same test (instead of a loop) are all useful techniques that will make tests less repetitive and easier to extend.
What are fixtures? #
Fixtures can be described as a couple of different things: a utility that provides behavior (for example, making an HTTP request), or something that provides some data for tests. These aspects of fixtures make them simple to reason about but they can also be large and highly functional. The framework even provides some built-in ones that are fantastic to work with!
After a few years of using the Pytest framework, I found myself not having invested any time in learning fixtures or wanting to use them at all. I was coming from heavy usage of unittest
and felt that using fixtures was odd, to say the least. If you have some experience with Python already, you know that arguments have to be passed in explicitly. It is common to expect things in Python to happen somewhat explicitly. That explicitness is what doesn’t quite fit the fixture model: declare the fixture name as an argument to a test function (or test method), and the fixture gets injected at test time.
In the end, think of fixtures as flexible helpers that can automate repetitive setup or teardown, and that can be extended to add behavior.
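To make the injection model concrete before diving in, here is a minimal sketch (not part of the project built in this chapter): the test never calls the fixture explicitly, it simply names it as an argument:
import pytest

@pytest.fixture
def greeting():
    return "hello"

def test_greeting(greeting):
    # "greeting" is never passed in explicitly anywhere; the framework
    # injects the fixture's return value because the argument name matches
    assert greeting == "hello"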
Creating new fixtures #
There are a few different ways to create fixtures so that tests can use them. The simplest example is creating one in the same file where the tests consuming it exist. The code under test is a small function that will construct a Python dictionary with interesting information depending on the values from a response object. This response object is coming from an HTTP request made somewhere else in the project.
This is the function that interacts with the response object; it lives in a utils.py
module:
def build_message(response):
    message = {
        "success": True,
        "error": "",
    }
    if response.status >= 400:
        message["success"] = False
        message["error"] = response.body
    return message
The function is small enough to reason about, but presents the difficulty of having to pass a response
object that needs to have an attribute with some values. It is tempting to try and make an actual request, but this can quickly get problematic for unit testing since it might require an actual service to respond. Unit tests should be as contained and reproducible as possible. Reducing (or even eliminating) dependencies on external services or other software that needs to run in order to have a successful test is a good objective.
Create a new test file named test_utils.py
that is able to import this function and create a class that will act as the response object:
class FakeResponse:
    def __init__(self, status=200):
        self.status = status
The FakeResponse
class is going to behave as the response object after being instantiated. I only need to set the status
at this point so the class is only implementing that attribute. Next, the actual fixture!
import pytest

@pytest.fixture
def response():
    def apply(status=200):
        return FakeResponse(status=status)
    return apply
The example code shows pytest
being imported, and the fixture
decorator wrapping the response()
function. If this looks complicated, think of it as a plain function that gets marked so that the framework knows about its existence at runtime. Inside the fixture there is a nested function (sometimes referred to as a closure). This is so that later, when a test needs this fixture, it can pass arguments used for creating the object, instead of the object being created early on. When used in tests, it will make sense why this is useful.
After a few years of writing Python and working as a Software Engineer, I went to interview at a large multinational company. The team I was interviewing for had a lot of senior engineers, a few of them well known in the community, and I certainly looked up to them. I felt very intimidated by all this but decided to still go through the interview process; what a great opportunity to be working with senior engineers! One of the questions was: “Do you know what a closure is?” and I had never heard the terminology. I shook my head, and the interviewer added “It is just a function”. Sometimes people like to use different or less common terminology. Don’t feel intimidated by it; almost always there is a simple way to reason about things!
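For the curious, a closure really is just a nested function that remembers the variables of the function that created it. A tiny sketch, unrelated to the fixture above:
def make_multiplier(factor):
    # The nested function "closes over" factor from the enclosing scope
    def multiply(value):
        return value * factor
    return multiply

double = make_multiplier(2)
assert double(10) == 20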
At this point in our examples, we have a utility module which creates a Python dictionary with useful information, a class that will be used in place of an actual response object, and the first fixture. Now it is time to create a test function to see it all work together. The first test verifies that success behavior works without issue:
def test_build_message_success(response):
    result = build_message(response())
    assert result["success"] is True
The function looks somewhat different from a plain test function. It requires an argument called response
. Where is this coming from? The Pytest framework figures out that it is a fixture and that the name matches one of the fixtures that was registered at collection time. When the framework collected the tests, the decorated fixture (marked with @pytest.fixture
) was detected and later injected into the test that declared it as an argument. Behind the scenes, the framework maps the name of the test function argument to that of the fixture. If you have a typo and the names don’t match exactly, the fixture will not work!
The fixture was created with the ability to accept other status codes as well. Take advantage of that flexibility by adding more tests with the same fixture; in this case, verify the behavior when there is an error response:
def test_build_message_failure(response):
    result = build_message(response(400))
    assert result["success"] is False
A failure is reported when the test runs:
E AttributeError: 'FakeResponse' object has no attribute 'body'
The FakeResponse
class is missing the body
attribute. Add the ability to set custom body messages to the class and then make the fixture accept those as part of its arguments:
class FakeResponse:
    def __init__(self, status=200, body=""):
        self.status = status
        self.body = body
Now modify the fixture so that tests can optionally set the body. Making it optional is important because it will not disrupt existing tests.
@pytest.fixture
def response():
    def apply(status=200, body=""):
        return FakeResponse(status=status, body=body)
    return apply
Run the test again. It now passes because the body
attribute exists, and it is set by default to be an empty string which satisfies the requirements of the build_message
function in the utils
module. There is one more test that has to be added here, and that is one that has a custom body message. This is important if the utility function changes in the future and needs to craft custom wording depending on the body of the response:
def test_build_message_failure_body(response):
    result = build_message(response(400, "not allowed here!"))
    assert result["success"] is False
    assert result["error"] == "not allowed here!"
This introduction to fixtures should demystify how they work. Although the first impression might be that they are too magical, it is only the framework passing functions around when tests require them as named arguments.
Built-in Fixtures #
The Pytest framework comes with a few built-in fixtures that are available to any test that declares them as arguments. This section goes through a few that are commonly used. To find out what fixtures are available, use the pytest
command-line tool to list them:
$ pytest -q --fixtures
Pytest version 5.3 comes with these built-in fixtures:
cache
capsys
capsysbinary
capfd
capfdbinary
doctest_namespace [session scope]
pytestconfig [session scope]
record_property
record_xml_attribute
record_testsuite_property [session scope]
caplog
monkeypatch
recwarn
tmpdir_factory [session scope]
tmp_path_factory [session scope]
tmpdir
tmp_path
The ones I most commonly use are: monkeypatch
, capsys
, and tmpdir
(tmp_path
is similar, but returns a different object). This chapter will cover capsys
and tmpdir
; monkeypatch
involves a lot more and has a whole chapter of its own (Chapter 8).
capsys #
It is great when the code under test returns some value so that we can write tests with expectations based on those returns. Sometimes, production code will alter some behavior and not return anything, or will produce informational output with no real side-effects. This is something that is common in command line tools. Throughout the years, I’ve maintained a lot of command line tools in Python, and I’ve had issues with untested command line output where a modification that broke the output went unnoticed. Capturing stdout
or stderr
is very useful but it is very painful to do without some sort of helper.
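To appreciate what the fixture does, this is roughly what capturing output by hand looks like with the standard library. A sketch with a hypothetical report() function, not part of the tool built next:
import io
from contextlib import redirect_stdout

def report():
    # Hypothetical helper that only prints and returns nothing
    print("3 items processed")

def test_report_output_by_hand():
    captured = io.StringIO()
    with redirect_stdout(captured):
        report()
    assert captured.getvalue() == "3 items processed\n"
The capsys fixture takes care of this bookkeeping for both stdout and stderr, and restores the streams after the test finishes.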
Create a command line tool and save it in a file called cli.py
:
import argparse

def main():
    parser = argparse.ArgumentParser(prog='cli')
    parser.add_argument('--foo', help='the foo option!')
    parser.parse_args()

if __name__ == '__main__':
    main()
This command line tool doesn’t do anything useful yet, but it does define an option, and the argparse
module will create a help menu when the -h
or --help
flag is passed. It will not do anything when called without flags:
$ python cli.py
$ python cli.py -h
usage: cli [-h] [--foo FOO]

optional arguments:
  -h, --help  show this help message and exit
  --foo FOO   the foo option!
Testing this is going to be a bit complicated, because argparse raises SystemExit
when producing help output, even if there aren’t any errors. Additionally, it relies on the values of sys.argv
to detect the flags.
Create a test function in a test file called test_cli.py
, and import the cli
module to verify the main()
function which produces this help output. To set the right values for sys.argv
, import sys
and add the flags asking for help, and import pytest
as well to use a helper that will catch SystemExit
, preventing the test from failing:
import pytest
import sys

import cli


def test_help_output_is_generated(capsys):
    sys.argv = ['cli.py', '-h']
    with pytest.raises(SystemExit):
        cli.main()
    out, err = capsys.readouterr()
    assert 'usage: cli [-h] [--foo FOO]' in out
    assert '--foo FOO' in out
    assert 'the foo option!' in out
The capsys
fixture is required as an argument, and allows the test to call cli.main()
, which is protected from failing by the pytest.raises(SystemExit)
line. After calling the code that produces the output, capsys.readouterr()
is called to retrieve both the stdout
and stderr
that may have been captured. In this test the out
variable was populated with all the stdout
produced, which the test then asserts against.
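The readouterr() call captures both streams. A short sketch (with a hypothetical warn() helper) shows the stderr side, which the argparse example doesn’t exercise:
import sys

def warn(message):
    # Hypothetical helper that writes to stderr instead of stdout
    print(message, file=sys.stderr)

def test_warn_goes_to_stderr(capsys):
    warn('disk almost full')
    out, err = capsys.readouterr()
    assert out == ''
    assert err == 'disk almost full\n'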
tmpdir #
Some code is meant to deal with file-like objects. A file is read somewhere into Python code, which then gets transformed, or just read (for example, configuration files). Python has the configparser
module that reads configuration settings from a file, and tests need to ensure that the options defined there are read and parsed correctly. If a parser in a project adds some error handling, then you want to have that tested to ensure the handler is working correctly.
Although it sounds enticing to create a file-like object (the io.StringIO
class helps with that), there are things that can’t be tested with an object like that. What if the code handles permission errors or ownership on a file? In some situations it is required to have an actual file to verify behavior.
from configparser import ConfigParser, Error


def is_verbose(config_path):
    config = ConfigParser()
    config.read(config_path)
    try:
        return config.getboolean('main', 'verbose')
    except Error:
        return True
In this example, the function is trying to determine if the application should run in verbose mode; if there are any problems, it will default to True
. Potential issues that should be tested are a missing 'main'
section, or a verbose
option that was never set.
Create the first test to check that it defaults to True
when the option is not set:
def test_is_verbose_fails(tmpdir):
    path = tmpdir.mkdir('etc').join('app.conf')
    path.write('[main]')
    assert is_verbose(path.strpath) is True
The tmpdir
fixture can create directories and subdirectories, as well as files. The first line of the test does this in a path that is unique to the current test run and the specific test. The test then writes to that file; in this case a single section with no values or sub-sections is used: only the [main]
line is written. Finally, the strpath
attribute is used to get the full (temporary) path to the file so that the is_verbose()
function can read it and try to parse it.
Other tests can be added to ensure the expected behavior. Create two more tests: one that disables the option by setting it to '0'
, and another that enables it by setting it to '1'
:
def test_is_verbose_succeeds_false(tmpdir):
    path = tmpdir.mkdir('etc').join('app.conf')
    path.write('[main]\nverbose = 0')
    assert is_verbose(path.strpath) is False


def test_is_verbose_succeeds_true(tmpdir):
    path = tmpdir.mkdir('etc').join('app.conf')
    path.write('[main]\nverbose = 1')
    assert is_verbose(path.strpath) is True
The same pattern is followed, and the code under test that reads a file will now be loading real files, but from a temporary path.
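One more case worth covering, sketched here as a possible addition: a configuration file that has no [main] section at all should also fall back to the default:
def test_is_verbose_missing_section(tmpdir):
    path = tmpdir.mkdir('etc').join('app.conf')
    path.write('[other]\nverbose = 0')
    # getboolean() raises a configparser error for the missing section,
    # so is_verbose() falls back to returning True
    assert is_verbose(path.strpath) is True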
Advanced Fixture usage #
So far, the fixture examples are in their simplest form, but fixtures can be mixed, depend on other fixtures, add their own teardown, and even be configured to limit the scope in which they operate.
Dependencies #
Fixtures can also depend on other fixtures, as many as required. The API is similar to requiring a fixture in a test: in a fixture, defining an argument that matches the name of an existing fixture causes the framework to pass it in at runtime.
One fixture that I tend to write a lot is a helper for creating a temporary file. This chapter has already shown the tmpdir
fixture, which allows creating temporary directories and files with the ability to write to them. Sometimes all a test needs is a file with some pre-baked contents. Instead of creating the file and adding the contents in each test, use a fixture so that the repetitive code gets abstracted away.
Let’s reuse the example code that requires an INI-style format, and reduce the repetitive code that the tests are doing. The tests are creating a directory, a file, and finally writing some contents. Create a fixture that uses tmpdir
so that it abstracts away the boilerplate code:
@pytest.fixture
def config(tmpdir):
    template = """
[main]
verbose = {verbose_value}
"""

    def apply(verbose_value='1'):
        path = tmpdir.mkdir('etc').join('app.conf')
        path.write(template.format(verbose_value=verbose_value))
        return path.strpath

    return apply
This fixture requires the tmpdir
fixture just as a test would, and uses it in the nested function called apply()
, which creates the etc
directory and the app.conf
file, optionally allowing tests to tweak the value of the verbose
option. The test function can now harness this new abstraction:
def test_is_verbose_default_is_true(config):
    config_path = config()
    assert is_verbose(config_path) is True
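Because apply() accepts a value, the disabled case is just as short. A sketch of a possible follow-up test using the same fixture:
def test_is_verbose_disabled(config):
    config_path = config('0')
    assert is_verbose(config_path) is False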
Teardown #
A nice feature of fixtures is that they can have a specific way of cleaning up that is not possible with a plain function. This is done through the request
fixture, which can be declared as an argument; calling its addfinalizer()
method registers the cleanup.
I recently had to add functional tests for a container-based project. This fixture starts a container, but I want to have the container stopped and removed at the end of the session:
@pytest.fixture(scope='session', autouse=True)
def inline_scan(client):
    container = start_container(
        client,
        image='anchore/inline-scan',
        name='pytest_inline_scan',
        environment={},
        detach=True,
        ports={'8228/tcp': 8228}
    )
    return container
The fixture is modified by declaring its dependency on the request
fixture, which is an internal framework fixture that allows runtime operations. In this case, we need to tell the framework that the fixture has a cleanup function that needs to happen:
@pytest.fixture(scope='session', autouse=True)
def inline_scan(client, request):
    # If the container is already running, this will return the running
    # container identified with `pytest_inline_scan`
    container = start_container(
        client,
        image='anchore/inline-scan',
        name='pytest_inline_scan',
        environment={},
        detach=True,
        ports={'8228/tcp': 8228}
    )

    # Do not leave the container running; tear it down at the end of the session
    request.addfinalizer(lambda: teardown_container(client, container))
    return container
The request.addfinalizer()
call accepts a callable that gets executed after the test run is complete (as indicated by scope='session'
). The teardown_container()
function is a helper that exists in the same module, and the lambda
usage enforces a lazy call. Without the lambda there, teardown_container
would be called right away, which we don’t want.
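The same pattern works outside of containers. A small self-contained sketch (not from that project): a fixture opens a file and registers its cleanup lazily, so the file is only closed after the test finishes:
import pytest

@pytest.fixture
def logfile(tmpdir, request):
    handle = tmpdir.join('app.log').open('w')
    # The lambda defers the call; writing handle.close() here would
    # close the file immediately, before the test even runs
    request.addfinalizer(lambda: handle.close())
    return handle

def test_logfile_is_writable(logfile):
    logfile.write('started\n')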
Scope #
In this other project, I had the following steps to test operations with Git:
cd <repo name>
git init
touch A
git config user.email 'you@example.com'
git config user.name 'Your Name'
git add A
git commit -m 'A' A
These steps had to be done once for the module, so it was initially implemented as an enormous class with a setup_class
method that did this once for the whole class. But then other classes needed a similar helper and it caused code to be copied over. Not good.
The first take on implementing a fixture for this looked like this:
import os
import tempfile

import pytest


@pytest.fixture
def git_repository():
    d = tempfile.mkdtemp()
    os.system("""
    cd {d}
    git init
    touch A
    git config user.email 'you@example.com'
    git config user.name 'Your Name'
    git add A
    git commit -m 'A' A
    """.format(d=d))
    return d
This is useful already, and gets called every time a test needs it. The problem is that it does this for every test, and there is no cleanup being done (even though it is using mkdtemp()
). The Pytest framework allows adding a teardown, which requires defining request
as an argument to the fixture and then adding the cleanup function. This is how the fixture looks now:
@pytest.fixture
def git_repository(request):
    d = tempfile.mkdtemp()
    os.system("""
    cd {d}
    git init
    touch A
    git config user.email 'you@example.com'
    git config user.name 'Your Name'
    git add A
    git commit -m 'A' A
    """.format(d=d))

    def finalizer():
        os.system("rm -fr " + d)

    request.addfinalizer(finalizer)
    return d
These changes improve things: the fixture now registers a finalizer
function which gets called at the end of the test (just like a teardown
method). But there is something else that can be improved: it could use the tmpdir
fixture instead of having to import tempfile
and do this with a Python module:
@pytest.fixture
def git_repository(tmpdir):
    path = tmpdir.mkdir('repo')
    os.system("""
    cd {d}
    git init
    touch A
    git config user.email 'you@example.com'
    git config user.name 'Your Name'
    git add A
    git commit -m 'A' A
    """.format(d=path.strpath))
    return path.strpath
I removed the finalizer function. But why? Although the idea is to demonstrate how useful finalizer functions are, it is good to understand that in this case the tmpdir
fixture has a finalizer
of its own which cleans up after the fixture is used. No need to define it again with a potentially dangerous rm -rf
command. One last improvement can be done here. I need this to run for the whole module, not for every test. This is where scoping comes in:
@pytest.fixture(scope='module')
def git_repository(tmpdir_factory):
    path = tmpdir_factory.mktemp('repo')
    os.system("""
    cd {d}
    git init
    touch A
    git config user.email 'you@example.com'
    git config user.name 'Your Name'
    git add A
    git commit -m 'A' A
    """.format(d=path.strpath))
    return path.strpath
Using scope='module'
tells the framework that this fixture should run once for every test file (or Python module). One detail worth noting: a module-scoped fixture can’t depend on the function-scoped tmpdir fixture (Pytest reports a scope mismatch), which is why the session-scoped tmpdir_factory built-in and its mktemp() method are used here instead. Exactly what I needed, preventing copying code around and improving code re-use. There are other scopes possible too:
- 'function': the fixture is called once per test.
- 'class': the fixture is called once per class of tests.
- 'module': the fixture is called once per module.
- 'session': the fixture is called once per session.
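To see what the scope changes in practice, here is a small sketch (not related to the Git example): a module-scoped fixture is created once, and its cached result is handed to every test in the file:
import pytest

CALLS = {'count': 0}

@pytest.fixture(scope='module')
def expensive_resource():
    # With scope='module' this body runs once per test file
    CALLS['count'] += 1
    return CALLS['count']

def test_first(expensive_resource):
    assert expensive_resource == 1

def test_second(expensive_resource):
    # Still 1: the fixture result was cached, not created again
    assert expensive_resource == 1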
Parametrizing #
This is another feature of Pytest that took me a while to get used to, or to even find an interest in. It felt very unusual, and I had a hard time grasping what the intent was. It became very clear one day when I found myself writing lots of tests that were asserting the same thing, just changing the inputs. Whenever you have a test that looks repetitive because the assertions are the same and only the inputs change, it is a prime candidate for parametrization.
The production code in this case is a function that tries to convert a string into a boolean. For example a '1'
would mean True
and a '0'
would be False
. This is the function:
def strtobool(val):
    true_vals = ['yes', 'y', '', '1']
    false_vals = ['no', 'n', '0']
    try:
        val = val.lower()
    except AttributeError:
        val = str(val).lower()
    if val in true_vals:
        return True
    elif val in false_vals:
        return False
    else:
        raise ValueError("Invalid input value: %s" % val)
It has some guards for case insensitivity, and it raises an exception if the input is completely unacceptable. Initially the tests were very repetitive and looked like this:
class TestStrToBool(object):

    def test_y_is_true(self):
        assert strtobool('y') is True

    def test_1_is_true(self):
        assert strtobool('1') is True

    def test_capital_y_is_true(self):
        assert strtobool('Y') is True

    def test_yes_is_true(self):
        assert strtobool('yes') is True

    def test_capital_yes_is_true(self):
        assert strtobool('YES') is True
These tests are all making the same assertion for different inputs: they all should return True
. That makes it an ideal candidate for parametrization. It allows writing a single test that will inject the various inputs needed:
import pytest


class TestStrToBool(object):

    @pytest.mark.parametrize('user_input', ['y', 'Y', 'YES', 'yes', '1'])
    def test_true_values(self, user_input):
        assert strtobool(user_input) is True
A decorator is required to configure the test method (this works for functions too). The decorator takes two arguments in this case: the first is a string naming the argument that will get injected into the test (I am using 'user_input'
here, but it can be anything that makes sense), followed by an iterable of the items to pass in. The framework runs the test once for each of those items, as if each were a test of its own.
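The same approach covers the other branches of strtobool(). A sketch of two possible additions to the class: one for the false values, and one that pairs parametrization with pytest.raises() for invalid input:
    @pytest.mark.parametrize('user_input', ['n', 'N', 'no', '0'])
    def test_false_values(self, user_input):
        assert strtobool(user_input) is False

    @pytest.mark.parametrize('user_input', ['maybe', '11'])
    def test_invalid_values(self, user_input):
        with pytest.raises(ValueError):
            strtobool(user_input)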
Another situation that tells me a test would benefit from parametrization is when a test is using a for
loop. Loops for making assertions are not good because they obscure which assertions are being made, and when there is a failure, they stop execution of anything pending. Parametrization fixes all of that. This is the example output of running the above test in verbose mode:
============================= test session starts ==============================
platform darwin -- Python 3.6.2, pytest-5.3.2, py-1.8.1, pluggy-0.13.1
cachedir: .pytest_cache
rootdir: /Users/alfredo/python/testing-in-python/chapter7/parametrize
collecting ... collected 5 items
user_inputs.py::TestStrToBool::test_true_values[y] PASSED [ 20%]
user_inputs.py::TestStrToBool::test_true_values[Y] PASSED [ 40%]
user_inputs.py::TestStrToBool::test_true_values[YES] PASSED [ 60%]
user_inputs.py::TestStrToBool::test_true_values[yes] PASSED [ 80%]
user_inputs.py::TestStrToBool::test_true_values[1] PASSED [100%]
============================== 5 passed in 0.02s ===============================
Using the -v
flag shows the inputs passed for the one test we have in that class (test_true_values
), demystifying what it is testing. If a failure happens, all the other test inputs are still tried, further improving the robustness of the test suite.
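Parametrization is not limited to a single argument name either. When the expected result changes along with the input, both can be passed as pairs; a sketch of another method on the same class showing how the true and false cases could collapse into a single test:
    @pytest.mark.parametrize(
        'user_input, expected',
        [('yes', True), ('Y', True), ('1', True), ('no', False), ('0', False)]
    )
    def test_strtobool(self, user_input, expected):
        assert strtobool(user_input) is expected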