
Chapter 4: Test Classes #

Noah Gift

The devil is in the details with most technology. It is easy to say things like TDD (Test-driven development) is the “only” way to write software. Test-driven development is a method of writing software where a test is written first; then the code is written to make that test pass. It is also easy to talk about the benefits of object-oriented programming with unit testing.

In theory, yes, TDD seems like a good idea. Who wouldn’t want 100% test coverage? In practice, though, this is an expensive operation and not worth the ROI. Even further, it may not be realistically possible. How many real-world examples are there of projects that have been around for several years and maintained complete TDD? Highly experienced and competent software engineers are not zealots; they are pragmatic minimalists. They realize that some testing is appropriate, that no testing is foolish, and they strike a balance.

A good comparison to TDD is counting calories. There have been decades of bad advice in the nutrition industry about counting calories. In theory, counting calories works; in practice, it is nonsense. If counting calories worked, diets would work, and they don’t. Heuristics do work, though, like intermittent fasting or habit changes such as eliminating all highly processed foods. Similarly, technical zealotry feels good and accomplishes little to nothing. Heuristics work because of the reduced complexity of actually doing something day in and day out. The “perfect” solution rarely works in the real world because it isn’t sustainable daily. Just ask the person who goes to the gym in January with a sophisticated new fitness routine. Complexity, purity, and extremism are the exact formula to ensure something will not work in the long term. If you wanted to sabotage a new software project, or any project, as quickly as possible, fill it with the latest fads and make sure they are implemented with religious rigor. The project probably won’t exist in three months.

Likewise, highly experienced developers take a similar approach with object-oriented programming (OOP). They use it when appropriate, but often they decide that YAGNI (“You Ain’t Gonna Need It”) applies. A good use case for OOP is building traditional GUI programs, like a desktop application or an iOS application; the paradigm fits quite well. A model where OOP does not fit so nicely is distributed computing. In distributed computing scenarios, a function that maps to a unit of work is the right solution for the problem.

The built-in unit testing framework in the Python standard library is unittest, and it is heavily reliant on OOP. The upside of this is that it can do some powerful things. The downside is that in many situations it takes more lines of code to test something. It has been my experience that for unit testing, a functional approach is much simpler. The default path to testing in pytest is simple and effective, even in cases where some object-oriented programming is used.
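
To make the contrast concrete, here is a minimal sketch, not part of this chapter’s project, of the same trivial check written in both styles:

{caption: "Example unittest style versus pytest style"}

import unittest


# unittest style: a class, inheritance, and dedicated assert methods
class TestAddition(unittest.TestCase):
    def test_add(self):
        self.assertEqual(1 + 1, 2)


# pytest style: a plain function and a bare assert
def test_add():
    assert 1 + 1 == 2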

Setup and teardown of xunit-style tests #

Many testers are familiar with writing xunit-style tests, and the pytest framework supports them. Typically, the structure is to use a setup_function and a teardown_function. Let’s take a look at how this works with a simple hello.py and a test_hello.py.

The following hello.py has three functions inside. Notice that they all accept a positional argument x.

{caption: "Example hello.py"}

def toyou(x):
    return f"hi {x}"


def add(x):
    return x + 1


def subtract(x):
    return x - 1

Next, to set up the test for this project, I would configure a Makefile.

{caption: "Example Makefile"}

install:
    pip install --upgrade pip &&\
        pip install -r requirements.txt

test:
    python -m pytest -vv test_hello.py


lint:
    pylint --disable=R,C hello.py

all: install lint test

The requirements file has the necessary packages as well.

{caption: "Example requirements.txt"}

pylint
pytest
black
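
With both files in place, running make all installs the required tools, lints hello.py, and runs the tests in one step.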

Next up, I create a setup_function(function) and a teardown_function(function). Notice how I put an optional print statement in both the setup and the teardown functions. This step can be instructive in showing that the function parameter is the actual test function being run. For example, when test_hello_add is run, the setup_function receives the test_hello_add function object.

{caption: "Example test_hello.py"}

from hello import toyou, add, subtract


def setup_function(function):
    print(f" Running Setup: {function.__name__}")
    function.x = 10


def teardown_function(function):
    print(f" Running Teardown: {function.__name__}")
    del function.x


def test_hello_add():
    assert add(test_hello_add.x) == 11

Let’s take a look at how this works. What happens is that the variable function.x is set to 10 before each test runs. Afterward, the variable is deleted in the teardown_function. This step ensures there isn’t a mutation of a test variable that could cause a test to pass or fail for the wrong reason.

➜  chapter4 git:(master) ✗ make test
python -m pytest -vv test_hello.py
============================= test session starts ==============================
platform darwin -- Python 3.7.6, pytest-5.3.2, py-1.8.1, pluggy-0.13.1
cachedir: .pytest_cache
rootdir: /Users/noahgift/src/testing-in-python/chapter4
plugins: cov-2.8.1
collected 1 item

test_hello.py::test_hello_add PASSED                                     [100%]

============================== 1 passed in 0.01s ===============================
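
Note that pytest captures stdout by default, so the print statements from setup_function and teardown_function do not appear for a passing run like the one above. To see them on every run, disable output capturing with the -s flag, for example: python -m pytest -vv -s test_hello.py.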

Things get more interesting by adding a second test and making the first one fail. This time I add a check for the subtract function, and I also change the test_hello_add test to assert 12, which will cause that test to fail.

{caption: "Example of a failing test_hello.py"}

from hello import toyou, add, subtract


def setup_function(function):
    print(f" Running Setup: {function.__name__}")
    function.x = 10


def teardown_function(function):
    print(f" Running Teardown: {function.__name__}")
    del function.x


def test_hello_add():
    assert add(test_hello_add.x) == 12


def test_hello_subtract():
    assert subtract(test_hello_subtract.x) == 9

Looking at the output from the tests, you can see that the print statements now show up. This step is helpful when first using setup_function and teardown_function because of the magic they provide. You can see the messages Running Setup: test_hello_add and Running Teardown: test_hello_add in the following output.

➜  chapter4 git:(master) ✗ make test
python -m pytest -vv test_hello.py
============================= test session starts ==============================
platform darwin -- Python 3.7.6, pytest-5.3.2, py-1.8.1, pluggy-0.13.1
cachedir: .pytest_cache
rootdir: /Users/noahgift/src/testing-in-python/chapter4
plugins: cov-2.8.1
collected 2 items

test_hello.py::test_hello_add FAILED                                     [ 50%]
test_hello.py::test_hello_subtract PASSED                                [100%]

=================================== FAILURES ===================================
________________________________ test_hello_add ________________________________

    def test_hello_add():
>       assert add(test_hello_add.x) == 12
E       assert 11 == 12
E         -11
E         +12

test_hello.py:15: AssertionError
--------------------------- Captured stdout setup ------------------------------
 Running Setup: test_hello_add
-------------------------- Captured stdout teardown ----------------------------
 Running Teardown: test_hello_add
======================= 1 failed, 1 passed in 0.05s ============================
make: *** [test] Error 1

The main takeaway is that setup and teardown functions add a lot of power to testing. They can remove boilerplate code, and they are easy to use if you follow a few conventions. Finally, by selectively using print statements, you can easily debug them.
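
Because this chapter is about test classes, it is worth showing that the same xunit-style convention extends to classes. The following is a minimal sketch, assuming the same hello.py, that groups the tests into a class; pytest calls setup_method and teardown_method around every test method in the class.

{caption: "Example class-based test_hello.py (sketch)"}

from hello import add, subtract


class TestHello:
    def setup_method(self, method):
        # method is the test method about to run
        print(f" Running Setup: {method.__name__}")
        self.x = 10

    def teardown_method(self, method):
        print(f" Running Teardown: {method.__name__}")
        del self.x

    def test_hello_add(self):
        assert add(self.x) == 11

    def test_hello_subtract(self):
        assert subtract(self.x) == 9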