`pytest` is a popular testing framework for Python that simplifies the process of writing and executing tests. To start using `pytest`, install it with `pip` in a virtual environment. `pytest` offers several advantages over `unittest`, which ships with Python, such as less boilerplate code, more readable output, and a rich plugin ecosystem.
`pytest` comes packed with features to boost your productivity. Its fixtures allow for explicit dependency declarations, making tests more understandable and reducing implicit dependencies. Parametrization in `pytest` helps prevent redundant test code by enabling multiple test cases from a single test function definition. This framework is highly customizable, so you can tailor it to your project’s needs.
By the end of this tutorial, you’ll understand that:
- Using `pytest` requires installing it with `pip` in a virtual environment to set up the `pytest` command.
- `pytest` allows for less code, easier readability, and more features compared to `unittest`.
- Managing test dependencies and state with `pytest` is made efficient through the use of fixtures, which provide explicit dependency declarations.
- Parametrization in `pytest` helps avoid redundant test code by allowing multiple test scenarios from a single test function.
- Assertion introspection in `pytest` provides detailed information about failures in the test report.
How to Install pytest
To follow along with some of the examples in this tutorial, you’ll need to install `pytest`. As with most Python packages, `pytest` is available on PyPI. You can install it in a virtual environment using `pip`:
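```console
(venv) $ python -m pip install pytest
```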
The `pytest` command will now be available in your installation environment.
What Makes pytest So Useful?
If you’ve written unit tests for your Python code before, then you may have used Python’s built-in `unittest` module. `unittest` provides a solid base on which to build your test suite, but it has a few shortcomings.
A number of third-party testing frameworks attempt to address some of the issues with `unittest`, and `pytest` has proven to be one of the most popular. `pytest` is a feature-rich, plugin-based ecosystem for testing your Python code.
If you haven’t had the pleasure of using `pytest` yet, then you’re in for a treat! Its philosophy and features will make your testing experience more productive and enjoyable. With `pytest`, common tasks require less code and advanced tasks can be achieved through a variety of time-saving commands and plugins. It’ll even run your existing tests out of the box, including those written with `unittest`.
As with most frameworks, some development patterns that make sense when you first start using `pytest` can start causing pains as your test suite grows. This tutorial will help you understand some of the tools `pytest` provides to keep your testing efficient and effective even as it scales.
Less Boilerplate
Most functional tests follow the Arrange-Act-Assert model:
- Arrange, or set up, the conditions for the test
- Act by calling some function or method
- Assert that some end condition is true
Testing frameworks typically hook into your test’s assertions so that they can provide information when an assertion fails. `unittest`, for example, provides a number of helpful assertion utilities out of the box. However, even a small set of tests requires a fair amount of boilerplate code.
Imagine you’d like to write a test suite just to make sure that `unittest` is working properly in your project. You might want to write one test that always passes and one that always fails:
test_with_unittest.py

```python
from unittest import TestCase


class TryTesting(TestCase):
    def test_always_passes(self):
        self.assertTrue(True)

    def test_always_fails(self):
        self.assertTrue(False)
```
You can then run those tests from the command line using the `discover` option of `unittest`:
```console
(venv) $ python -m unittest discover
F.
======================================================================
FAIL: test_always_fails (test_with_unittest.TryTesting)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "...\effective-python-testing-with-pytest\test_with_unittest.py",
    line 10, in test_always_fails
    self.assertTrue(False)
AssertionError: False is not true

----------------------------------------------------------------------
Ran 2 tests in 0.006s

FAILED (failures=1)
```
As expected, one test passed and one failed. You’ve proven that `unittest` is working, but look at what you had to do:

- Import the `TestCase` class from `unittest`
- Create `TryTesting`, a subclass of `TestCase`
- Write a method in `TryTesting` for each test
- Use one of the `self.assert*` methods from `unittest.TestCase` to make assertions
That’s a significant amount of code to write, and because it’s the minimum you need for any test, you’d end up writing the same code over and over. `pytest` simplifies this workflow by allowing you to use normal functions and Python’s `assert` keyword directly:
test_with_pytest.py

```python
def test_always_passes():
    assert True


def test_always_fails():
    assert False
```
That’s it. You don’t have to deal with any imports or classes. All you need to do is include a function with the `test_` prefix. Because you can use the `assert` keyword, you don’t need to learn or remember all the different `self.assert*` methods in `unittest`, either. If you can write an expression that you expect to evaluate to `True`, then `pytest` will test it for you.
Not only does `pytest` eliminate a lot of boilerplate, but it also provides you with much more detailed and easy-to-read output.
Nicer Output
You can run your test suite using the `pytest` command from the top-level folder of your project:
```console
(venv) $ pytest
============================= test session starts =============================
platform win32 -- Python 3.10.5, pytest-7.1.2, pluggy-1.0.0
rootdir: ...\effective-python-testing-with-pytest
collected 4 items

test_with_pytest.py .F                                                   [ 50%]
test_with_unittest.py F.                                                 [100%]

================================== FAILURES ===================================
______________________________ test_always_fails ______________________________

    def test_always_fails():
>       assert False
E       assert False

test_with_pytest.py:7: AssertionError
________________________ TryTesting.test_always_fails _________________________

self = <test_with_unittest.TryTesting testMethod=test_always_fails>

    def test_always_fails(self):
>       self.assertTrue(False)
E       AssertionError: False is not true

test_with_unittest.py:10: AssertionError
=========================== short test summary info ===========================
FAILED test_with_pytest.py::test_always_fails - assert False
FAILED test_with_unittest.py::TryTesting::test_always_fails - AssertionError:...
========================= 2 failed, 2 passed in 0.20s =========================
```
`pytest` presents the test results differently than `unittest`, and the `test_with_unittest.py` file was also automatically included. The report shows:

- The system state, including which versions of Python, `pytest`, and any plugins you have installed
- The `rootdir`, or the directory to search under for configuration and tests
- The number of tests the runner discovered
These items are presented in the first section of the output:
```text
============================= test session starts =============================
platform win32 -- Python 3.10.5, pytest-7.1.2, pluggy-1.0.0
rootdir: ...\effective-python-testing-with-pytest
collected 4 items
```
The output then indicates the status of each test using a syntax similar to `unittest`:

- A dot (`.`) means that the test passed.
- An `F` means that the test has failed.
- An `E` means that the test raised an unexpected exception.
The special characters are shown next to the name with the overall progress of the test suite shown on the right:
```text
test_with_pytest.py .F                                                   [ 50%]
test_with_unittest.py F.                                                 [100%]
```
For tests that fail, the report gives a detailed breakdown of the failure. In the example, the tests failed because `assert False` always fails:
```text
================================== FAILURES ===================================
______________________________ test_always_fails ______________________________

    def test_always_fails():
>       assert False
E       assert False

test_with_pytest.py:7: AssertionError
________________________ TryTesting.test_always_fails _________________________

self = <test_with_unittest.TryTesting testMethod=test_always_fails>

    def test_always_fails(self):
>       self.assertTrue(False)
E       AssertionError: False is not true

test_with_unittest.py:10: AssertionError
```
This extra output can come in extremely handy when debugging. Finally, the report gives an overall status report of the test suite:
```text
=========================== short test summary info ===========================
FAILED test_with_pytest.py::test_always_fails - assert False
FAILED test_with_unittest.py::TryTesting::test_always_fails - AssertionError:...
========================= 2 failed, 2 passed in 0.20s =========================
```
When compared to `unittest`, the `pytest` output is much more informative and readable.
In the next section, you’ll take a closer look at how `pytest` takes advantage of the existing `assert` keyword.
Less to Learn
Being able to use the `assert` keyword is also powerful. If you’ve used it before, then there’s nothing new to learn. Here are a few assertion examples so you can get an idea of the types of tests you can make:
test_assert_examples.py

```python
def test_uppercase():
    assert "loud noises".upper() == "LOUD NOISES"


def test_reversed():
    assert list(reversed([1, 2, 3, 4])) == [4, 3, 2, 1]


def test_some_primes():
    assert 37 in {
        num
        for num in range(2, 50)
        if not any(num % div == 0 for div in range(2, num))
    }
```
They look very much like normal Python functions. All of this makes the learning curve for `pytest` shallower than it is for `unittest` because you don’t need to learn new constructs to get started.
Note that each test is quite small and self-contained. This is common—you’ll see long function names and not a lot going on within a function. This serves mainly to keep your tests isolated from each other, so if something breaks, you know exactly where the problem is. A nice side effect is that the labeling is much better in the output.
To see an example of a project that creates a test suite along with the main project, check out the Build a Hash Table in Python With TDD tutorial. Additionally, you can work on Python practice problems to try test-driven development yourself while you get ready for your next interview or parse CSV files.
In the next section, you’re going to be examining fixtures, a great pytest feature to help you manage test input values.
Easier to Manage State and Dependencies
Your tests will often depend on types of data or test doubles that mock objects your code is likely to encounter, such as dictionaries or JSON files.
With `unittest`, you might extract these dependencies into `.setUp()` and `.tearDown()` methods so that each test in the class can make use of them. Using these special methods is fine, but as your test classes get larger, you may inadvertently make a test’s dependencies entirely implicit. In other words, by looking at one of the many tests in isolation, you may not immediately see that it depends on something else.
Over time, implicit dependencies can lead to a complex tangle of code that you have to unwind to make sense of your tests. Tests should help to make your code more understandable. If the tests themselves are difficult to understand, then you may be in trouble!
`pytest` takes a different approach. It leads you toward explicit dependency declarations that are still reusable thanks to the availability of fixtures. `pytest` fixtures are functions that can create data, test doubles, or initialize system state for the test suite. Any test that wants to use a fixture must explicitly accept that fixture function as an argument, so dependencies are always stated up front:
fixture_demo.py

```python
import pytest


@pytest.fixture
def example_fixture():
    return 1


def test_with_fixture(example_fixture):
    assert example_fixture == 1
```
Looking at the test function, you can immediately tell that it depends on a fixture, without needing to check the whole file for fixture definitions.
Note: You usually want to put your tests into their own folder called `tests` at the root level of your project. For more information about structuring a Python application, check out the video course on that very topic.
Fixtures can also make use of other fixtures, again by declaring them explicitly as dependencies. That means that, over time, your fixtures can become bulky and modular. Although the ability to insert fixtures into other fixtures provides enormous flexibility, it can also make managing dependencies more challenging as your test suite grows.
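As a quick illustration of how one fixture can build on another, here’s a hedged sketch. The fixture names and data are hypothetical, not part of this tutorial’s project:

```python
import pytest


@pytest.fixture
def example_person():
    # A base fixture that provides raw data.
    return {"given_name": "Alfonsa", "family_name": "Ruiz"}


@pytest.fixture
def example_full_name(example_person):
    # This fixture depends on example_person by declaring it as an argument.
    return f"{example_person['given_name']} {example_person['family_name']}"


def test_full_name(example_full_name):
    assert example_full_name == "Alfonsa Ruiz"
```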
Later in this tutorial, you’ll learn more about fixtures and try a few techniques for handling these challenges.
Easy to Filter Tests
As your test suite grows, you may find that you want to run just a few tests on a feature and save the full suite for later. `pytest` provides a few ways of doing this:

- Name-based filtering: You can limit `pytest` to running only those tests whose fully qualified names match a particular expression. You can do this with the `-k` parameter, as shown in the sketch after this list.
- Directory scoping: By default, `pytest` will run only those tests that are in or under the current directory.
- Test categorization: `pytest` can include or exclude tests from particular categories that you define. You can do this with the `-m` parameter.
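For instance, the filtering commands might look like the following. The name expression, directory, and `database_access` mark here are hypothetical placeholders:

```console
(venv) $ pytest -k "format_data"      # Only tests whose names match the expression
(venv) $ pytest tests/unit/           # Only tests in or under tests/unit/
(venv) $ pytest -m database_access    # Only tests marked with database_access
```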
Test categorization in particular is a subtly powerful tool. `pytest` enables you to create marks, or custom labels, for any test you like. A test may have multiple labels, and you can use them for granular control over which tests to run. Later in this tutorial, you’ll see an example of how `pytest` marks work and learn how to make use of them in a large test suite.
Allows Test Parametrization
When you’re testing functions that process data or perform generic transformations, you’ll find yourself writing many similar tests. They may differ only in the input or output of the code being tested. This requires duplicating test code, and doing so can sometimes obscure the behavior that you’re trying to test.
`unittest` offers a way of collecting several tests into one, but they don’t show up as individual tests in result reports. If one test fails and the rest pass, then the entire group will still return a single failing result. `pytest` offers its own solution in which each test can pass or fail independently. You’ll see how to parametrize tests with `pytest` later in this tutorial.
Has a Plugin-Based Architecture
One of the most beautiful features of `pytest` is its openness to customization and new features. Almost every piece of the program can be cracked open and changed. As a result, `pytest` users have developed a rich ecosystem of helpful plugins.
Although some `pytest` plugins focus on specific frameworks like Django, others are applicable to most test suites. You’ll see details on some specific plugins later in this tutorial.
Fixtures: Managing State and Dependencies
`pytest` fixtures are a way of providing data, test doubles, or state setup to your tests. Fixtures are functions that can return a wide range of values. Each test that depends on a fixture must explicitly accept that fixture as an argument.
When to Create Fixtures
In this section, you’ll simulate a typical test-driven development (TDD) workflow.
Imagine you’re writing a function, `format_data_for_display()`, to process the data returned by an API endpoint. The data represents a list of people, each with a given name, family name, and job title. The function should output a list of strings that include each person’s full name (their `given_name` followed by their `family_name`), a colon, and their `title`:
format_data.py

```python
def format_data_for_display(people):
    ...  # Implement this!
```
In good TDD fashion, you’ll want to first write a test for it. You might write the following code for that:
test_format_data.py

```python
def test_format_data_for_display():
    people = [
        {
            "given_name": "Alfonsa",
            "family_name": "Ruiz",
            "title": "Senior Software Engineer",
        },
        {
            "given_name": "Sayid",
            "family_name": "Khan",
            "title": "Project Manager",
        },
    ]

    assert format_data_for_display(people) == [
        "Alfonsa Ruiz: Senior Software Engineer",
        "Sayid Khan: Project Manager",
    ]
```
While writing this test, it occurs to you that you may need to write another function to transform the data into comma-separated values for use in Excel:
format_data.py

```python
def format_data_for_display(people):
    ...  # Implement this!


def format_data_for_excel(people):
    ...  # Implement this!
```
Your to-do list grows! That’s good! One of the advantages of TDD is that it helps you plan out the work ahead. The test for the `format_data_for_excel()` function would look awfully similar to the one for `format_data_for_display()`:
test_format_data.py

```python
def test_format_data_for_display():
    ...  # As before


def test_format_data_for_excel():
    people = [
        {
            "given_name": "Alfonsa",
            "family_name": "Ruiz",
            "title": "Senior Software Engineer",
        },
        {
            "given_name": "Sayid",
            "family_name": "Khan",
            "title": "Project Manager",
        },
    ]

    assert format_data_for_excel(people) == """given,family,title
Alfonsa,Ruiz,Senior Software Engineer
Sayid,Khan,Project Manager
"""
```
Notably, both tests have to repeat the definition of the `people` variable, which is quite a few lines of code.
If you find yourself writing several tests that all make use of the same underlying test data, then a fixture may be in your future. You can pull the repeated data into a single function decorated with `@pytest.fixture` to indicate that the function is a `pytest` fixture:
test_format_data.py

```python
import pytest


@pytest.fixture
def example_people_data():
    return [
        {
            "given_name": "Alfonsa",
            "family_name": "Ruiz",
            "title": "Senior Software Engineer",
        },
        {
            "given_name": "Sayid",
            "family_name": "Khan",
            "title": "Project Manager",
        },
    ]

# ...
```
You can use the fixture by adding the function reference as an argument to your tests. Note that you don’t call the fixture function; `pytest` takes care of that. The return value of the fixture function will be available to the test under the fixture function’s name:
test_format_data.py

```python
# ...

def test_format_data_for_display(example_people_data):
    assert format_data_for_display(example_people_data) == [
        "Alfonsa Ruiz: Senior Software Engineer",
        "Sayid Khan: Project Manager",
    ]


def test_format_data_for_excel(example_people_data):
    assert format_data_for_excel(example_people_data) == """given,family,title
Alfonsa,Ruiz,Senior Software Engineer
Sayid,Khan,Project Manager
"""
```
Each test is now notably shorter but still has a clear path back to the data it depends on. Be sure to name your fixture something specific. That way, you can quickly determine if you want to use it when writing new tests in the future!
When you first discover the power of fixtures, it can be tempting to use them all the time, but as with all things, there’s a balance to be maintained.
When to Avoid Fixtures
Fixtures are great for extracting data or objects that you use across multiple tests. However, they aren’t always as good for tests that require slight variations in the data. Littering your test suite with fixtures is no better than littering it with plain data or objects. It might even be worse because of the added layer of indirection.
As with most abstractions, it takes some practice and thought to find the right level of fixture use.
Nevertheless, fixtures will likely be an integral part of your test suite. As your project grows in scope, the challenge of scale starts to come into the picture. One of the challenges facing any kind of tool is how it handles being used at scale, and luckily, `pytest` has a bunch of useful features that can help you manage the complexity that comes with growth.
How to Use Fixtures at Scale
As you extract more fixtures from your tests, you might see that some fixtures could benefit from further abstraction. In `pytest`, fixtures are modular. Being modular means that fixtures can be imported, can import other modules, and can depend on and import other fixtures. All this allows you to compose a suitable fixture abstraction for your use case.
For example, you may find that fixtures in two separate files, or modules, share a common dependency. In this case, you can move fixtures from test modules into more general fixture-related modules. That way, you can import them back into any test modules that need them. This is a good approach when you find yourself using a fixture repeatedly throughout your project.
If you want to make a fixture available for your whole project without having to import it, a special configuration module called `conftest.py` will allow you to do that.
`pytest` looks for a `conftest.py` module in each directory. If you add your general-purpose fixtures to the `conftest.py` module, then you’ll be able to use those fixtures throughout the module’s parent directory and in any subdirectories without having to import them. This is a great place to put your most widely used fixtures.
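For example, in a hypothetical layout like the following, fixtures defined in `tests/conftest.py` are visible to every test module under `tests/`:

```text
project/
├── src/
│   └── format_data.py
└── tests/
    ├── conftest.py            # Fixtures here need no import below
    ├── test_format_data.py
    └── api/
        └── test_endpoints.py  # Can also use fixtures from tests/conftest.py
```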
Another interesting use case for fixtures and `conftest.py` is in guarding access to resources. Imagine that you’ve written a test suite for code that deals with API calls. You want to ensure that the test suite doesn’t make any real network calls, even if someone accidentally writes a test that does so. `pytest` provides a `monkeypatch` fixture to replace values and behaviors, which you can use to great effect:
conftest.py

```python
import pytest
import requests


@pytest.fixture(autouse=True)
def disable_network_calls(monkeypatch):
    def stunted_get():
        raise RuntimeError("Network access not allowed during testing!")

    monkeypatch.setattr(requests, "get", lambda *args, **kwargs: stunted_get())
```
By placing `disable_network_calls()` in `conftest.py` and adding the `autouse=True` option, you ensure that network calls will be disabled in every test across the suite. Any test that executes code calling `requests.get()` will raise a `RuntimeError` indicating that an unexpected network call would have occurred.
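To see the guard in action, here’s a minimal sketch, assuming the conftest.py above is active. The `get_data_from_api()` helper is hypothetical, standing in for any code that hits the network:

```python
import pytest
import requests


def get_data_from_api():
    # Hypothetical helper that would call a real endpoint.
    return requests.get("https://api.example.com/people").json()


def test_network_calls_are_blocked():
    # The autouse fixture replaces requests.get, so this raises immediately.
    with pytest.raises(RuntimeError, match="Network access not allowed"):
        get_data_from_api()
```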
Your test suite is growing in numbers, which gives you a great feeling of confidence to make changes and not break things unexpectedly. That said, as your test suite grows, it might start taking a long time. Even if it doesn’t take that long, perhaps you’re focusing on some core behavior that trickles down and breaks most tests. In these cases, you might want to limit the test runner to only a certain category of tests.
Marks: Categorizing Tests
In any large test suite, it would be nice to avoid running all the tests when you’re trying to iterate quickly on a new feature. Apart from the default behavior of `pytest` to run all tests in the current working directory, or the filtering functionality, you can take advantage of markers.
`pytest` enables you to define categories for your tests and provides options for including or excluding categories when you run your suite. You can mark a test with any number of categories.
Marking tests is useful for categorizing tests by subsystem or dependencies. If some of your tests require access to a database, for example, then you could create a `@pytest.mark.database_access` mark for them.
Pro tip: Because you can give your marks any name you want, it can be easy to mistype or misremember the name of a mark. `pytest` will warn you about marks that it doesn’t recognize in the test output.
You can use the `--strict-markers` flag to the `pytest` command to ensure that all marks in your tests are registered in your `pytest` configuration file, `pytest.ini`. It’ll prevent you from running your tests until you register any unknown marks.
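For example, registering the hypothetical `database_access` mark from above and enforcing strict marker checking might look like this:

pytest.ini

```ini
[pytest]
addopts = --strict-markers
markers =
    database_access: marks tests that require access to a database
```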
For more information on registering marks, check out the `pytest` documentation.
When the time comes to run your tests, you can still run them all by default with the `pytest` command. If you’d like to run only those tests that require database access, then you can use `pytest -m database_access`. To run all tests except those that require database access, you can use `pytest -m "not database_access"`. You can even use an `autouse` fixture to limit database access to those tests marked with `database_access`.
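Here’s one hedged way such a guard could look. The `database.connect` target is a hypothetical entry point into your database layer, while `request.node.get_closest_marker()` is the pytest API for inspecting a test’s marks:

conftest.py

```python
import pytest


@pytest.fixture(autouse=True)
def guard_database_access(request, monkeypatch):
    # Allow database access only for tests marked with database_access.
    if request.node.get_closest_marker("database_access") is None:
        def blocked(*args, **kwargs):
            raise RuntimeError("Add the database_access mark to use the database!")

        # database.connect is a hypothetical entry point for illustration.
        monkeypatch.setattr("database.connect", blocked)
```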
Some plugins expand on the functionality of marks by adding their own guards. The `pytest-django` plugin, for instance, provides a `django_db` mark. Any tests without this mark that try to access the database will fail. The first test that tries to access the database will trigger the creation of Django’s test database.
The requirement that you add the `django_db` mark nudges you toward stating your dependencies explicitly. That’s the `pytest` philosophy, after all! It also means that you can much more quickly run tests that don’t rely on the database, because `pytest -m "not django_db"` will prevent the tests from triggering database creation. The time savings really add up, especially if you’re diligent about running your tests frequently.
`pytest` provides a few marks out of the box:

- `skip` skips a test unconditionally.
- `skipif` skips a test if the expression passed to it evaluates to `True`.
- `xfail` indicates that a test is expected to fail, so if the test does fail, the overall suite can still result in a passing status. (The first three marks are sketched in the example after this list.)
- `parametrize` creates multiple variants of a test with different values as arguments. You’ll learn more about this mark shortly.
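As a quick hedged illustration of the first three marks, where the test bodies are placeholders and the reasons are made up:

```python
import sys

import pytest


@pytest.mark.skip(reason="Feature not implemented yet")
def test_future_feature():
    ...


@pytest.mark.skipif(sys.version_info < (3, 10), reason="Requires Python 3.10+")
def test_pattern_matching():
    ...


@pytest.mark.xfail(reason="Known bug in the tracker")
def test_known_bug():
    assert False
```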
You can see a list of all the marks that `pytest` knows about by running `pytest --markers`.
On the topic of parametrization, that’s coming up next.
Parametrization: Combining Tests
You saw earlier in this tutorial how `pytest` fixtures can be used to reduce code duplication by extracting common dependencies. Fixtures aren’t quite as useful when you have several tests with slightly different inputs and expected outputs. In these cases, you can parametrize a single test definition, and `pytest` will create variants of the test for you with the parameters you specify.
Imagine you’ve written a function to tell if a string is a palindrome. An initial set of tests could look like this:
```python
def test_is_palindrome_empty_string():
    assert is_palindrome("")


def test_is_palindrome_single_character():
    assert is_palindrome("a")


def test_is_palindrome_mixed_casing():
    assert is_palindrome("Bob")


def test_is_palindrome_with_spaces():
    assert is_palindrome("Never odd or even")


def test_is_palindrome_with_punctuation():
    assert is_palindrome("Do geese see God?")


def test_is_palindrome_not_palindrome():
    assert not is_palindrome("abc")


def test_is_palindrome_not_quite():
    assert not is_palindrome("abab")
```
All of these tests except the last two have the same shape:
```python
def test_is_palindrome_<in some situation>():
    assert is_palindrome("<some string>")
```
This is starting to smell a lot like boilerplate. `pytest` has helped you get rid of boilerplate so far, and it’s not about to let you down now. You can use `@pytest.mark.parametrize()` to fill in this shape with different values, reducing your test code significantly:
```python
@pytest.mark.parametrize("palindrome", [
    "",
    "a",
    "Bob",
    "Never odd or even",
    "Do geese see God?",
])
def test_is_palindrome(palindrome):
    assert is_palindrome(palindrome)


@pytest.mark.parametrize("non_palindrome", [
    "abc",
    "abab",
])
def test_is_palindrome_not_palindrome(non_palindrome):
    assert not is_palindrome(non_palindrome)
```
The first argument to `parametrize()` is a comma-delimited string of parameter names. You don’t have to provide more than one name, as you can see in this example. The second argument is a list of either tuples or single values that represent the parameter value(s). You could take your parametrization a step further to combine all your tests into one:
```python
@pytest.mark.parametrize("maybe_palindrome, expected_result", [
    ("", True),
    ("a", True),
    ("Bob", True),
    ("Never odd or even", True),
    ("Do geese see God?", True),
    ("abc", False),
    ("abab", False),
])
def test_is_palindrome(maybe_palindrome, expected_result):
    assert is_palindrome(maybe_palindrome) == expected_result
```
Even though this shortened your code, it’s important to note that in this case you actually lost some of the more descriptive nature of the original functions. Make sure you’re not parametrizing your test suite into incomprehensibility. You can use parametrization to separate the test data from the test behavior so that it’s clear what the test is testing, and also to make the different test cases easier to read and maintain.
Durations Reports: Fighting Slow Tests
Each time you switch contexts from implementation code to test code, you incur some overhead. If your tests are slow to begin with, then overhead can cause friction and frustration.
You read earlier about using marks to filter out slow tests when you run your suite, but at some point you’re going to need to run them. If you want to improve the speed of your tests, then it’s useful to know which tests might offer the biggest improvements. `pytest` can automatically record test durations for you and report the top offenders.
Use the `--durations` option to the `pytest` command to include a duration report in your test results. `--durations` expects an integer value `n` and will report the slowest `n` tests. A new section will be included in your test report:
```console
(venv) $ pytest --durations=5
...
============================= slowest 5 durations =============================
3.03s call     test_code.py::test_request_read_timeout
1.07s call     test_code.py::test_request_connection_timeout
0.57s call     test_code.py::test_database_read

(2 durations < 0.005s hidden.  Use -vv to show these durations.)
=========================== short test summary info ===========================
...
```
Each test that shows up in the durations report is a good candidate to speed up because it takes an above-average amount of the total testing time. Note that short durations are hidden by default. As spelled out in the report, you can increase the report verbosity and show these by passing `-vv` together with `--durations`.
Be aware that some tests may have an invisible setup overhead. You read earlier about how the first test marked with `django_db` will trigger the creation of the Django test database. The durations report reflects the time it takes to set up the database in the test that triggered the database creation, which can be misleading.
You’re well on your way to full test coverage. Next, you’ll be taking a look at some of the plugins that are part of the rich `pytest` plugin ecosystem.
Useful pytest Plugins
You learned about a few valuable `pytest` plugins earlier in this tutorial. In this section, you’ll be exploring those and a few others in more depth—everything from utility plugins like `pytest-randomly` to library-specific ones, like those for Django.
pytest-randomly
Often the order of your tests is unimportant, but as your codebase grows, you may inadvertently introduce some side effects that could cause some tests to fail if they were run out of order.
`pytest-randomly` forces your tests to run in a random order. `pytest` always collects all the tests it can find before running them. `pytest-randomly` just shuffles that list of tests before execution.
This is a great way to uncover tests that depend on running in a specific order, which means they have a stateful dependency on some other test. If you built your test suite from scratch in `pytest`, then this isn’t very likely. It’s more likely to happen in test suites that you migrate to `pytest`.
The plugin will print a seed value in the configuration description. You can use that value to run the tests in the same order as you try to fix the issue.
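For example, if the plugin reported a seed of 12345, rerunning with the same ordering might look like the following; to the best of my knowledge, `--randomly-seed` is the pytest-randomly option for this, but treat the exact flag as an assumption:

```console
(venv) $ pytest --randomly-seed=12345
```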
pytest-cov
If you want to measure how well your tests cover your implementation code, then you can use the coverage package. `pytest-cov` integrates coverage, so you can run `pytest --cov` to see the test coverage report and boast about it on your project front page.
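For instance, a common invocation scopes the report to your package and lists uncovered lines in the terminal. The `src` path is a hypothetical package location, while `--cov` and `--cov-report` are pytest-cov options:

```console
(venv) $ pytest --cov=src --cov-report=term-missing
```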
pytest-django
`pytest-django` provides a handful of useful fixtures and marks for dealing with Django tests. You saw the `django_db` mark earlier in this tutorial. The `rf` fixture provides direct access to an instance of Django’s `RequestFactory`. The `settings` fixture provides a quick way to set or override Django settings. These can be a great boost to your Django testing productivity!
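As a hedged sketch of those fixtures in use, where `index` is a hypothetical Django view and the fixture names come from pytest-django:

```python
import pytest

from myapp.views import index  # Hypothetical view under test


@pytest.mark.django_db
def test_index_view(rf):
    # rf is a django.test.RequestFactory instance provided by pytest-django.
    request = rf.get("/")
    response = index(request)
    assert response.status_code == 200


def test_debug_is_off(settings):
    # The settings fixture lets you override a Django setting for one test.
    settings.DEBUG = False
    assert not settings.DEBUG
```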
If you’re interested in learning more about using `pytest` with Django, then check out How to Provide Test Fixtures for Django Models in Pytest.
pytest-bdd
`pytest` can be used to run tests that fall outside the traditional scope of unit testing. Behavior-driven development (BDD) encourages writing plain-language descriptions of likely user actions and expectations, which you can then use to determine whether to implement a given feature. `pytest-bdd` helps you use Gherkin to write feature tests for your code.
You can see which other plugins are available for `pytest` with this extensive list of third-party plugins.
Conclusion
`pytest` offers a core set of productivity features to filter and optimize your tests, along with a flexible plugin system that extends its value even further. Whether you have a huge legacy `unittest` suite or you’re starting a new project from scratch, `pytest` has something to offer you.
In this tutorial, you learned how to use:
- Fixtures for handling test dependencies, state, and reusable functionality
- Marks for categorizing tests and limiting access to external resources
- Parametrization for reducing duplicated code between tests
- Durations to identify your slowest tests
- Plugins for integrating with other frameworks and testing tools
Install `pytest` and give it a try. You’ll be glad you did. Happy testing!
If you’re looking for an example project built with `pytest`, then check out the tutorial on building a hash table with TDD, which will not only get you up to speed with `pytest` but also help you master hash tables!
Frequently Asked Questions
Now that you have some experience with `pytest`, you can use the questions and answers below to check your understanding and recap what you’ve learned.
These FAQs are related to the most important concepts you’ve covered in this tutorial.
You can install `pytest` using `pip` within a virtual environment by running the command `python -m pip install pytest`. Once installed, you can run your test suite using the `pytest` command from the top-level folder of your project.
`pytest` offers numerous benefits. They include less boilerplate code, more informative output, and easier management of test state and dependencies through fixtures. Other powerful features are test parametrization and a plugin-based architecture that allows for extensive customization and integration with other tools.
`pytest` simplifies the testing workflow by allowing the use of normal functions and Python’s `assert` keyword, rather than relying on the classes and specific assertion methods required by `unittest`. It also provides more detailed output, supports fixtures for managing dependencies, and has a rich ecosystem of plugins, making it more flexible and easier to scale than `unittest`.
In `pytest`, you can manage test dependencies and state using fixtures. Fixtures are functions that return data, test doubles, or initialized system state, and they’re declared as dependencies in your test functions. This ensures that dependencies are explicit and reusable across your test suite.
You can use the `@pytest.mark.parametrize` decorator to run a single test function with different sets of parameters. This allows you to test multiple input-output scenarios without duplicating test code, making your tests more concise and easier to maintain.