Write Unit Tests for Your Python Code With ChatGPT

by Leodanis Pozo Ramos Apr 22, 2024

Having a good battery of tests for your code may be a requirement for many Python projects. In practice, writing unit tests is hard and can take a lot of time and effort. Therefore, some developers don’t like to write them. However, with large language models (LLMs) and tools like ChatGPT, you can quickly create robust and complete sets of tests for your Python code.

In Python, you can use several different tools for writing tests. The most commonly used ones include doctest, unittest, and pytest. ChatGPT can be of great help in writing tests with any of these tools.

In this tutorial, you’ll:

  • Prompt ChatGPT to create tests using doctest
  • Use ChatGPT to write unittest tests, fixtures, and suites
  • Craft ChatGPT prompts to write pytest tests and fixtures
  • Use alternative prompts for cases where the code isn’t available

To get the most out of this tutorial, you should set up a ChatGPT account and know the basics of interacting with this tool using prompt engineering. You should also know the basics of how to test code in Python.

Benefits of Using ChatGPT for Testing Python Code

Having good and up-to-date unit tests for your code is a must for any Python project. Poorly tested code, or code without tests, may end up being unreliable and fragile. With automated tests, you can ensure and show that your code works correctly in different scenarios. So, having tests is important from both the technical and the commercial points of view.

Writing good tests is hard and can take a lot of time. That’s why some developers don’t like to write them at all. Using large language models (LLMs) like ChatGPT can be a viable way to provide your projects and code with proper tests.

Some of the benefits of using ChatGPT to write tests for your Python code include the following:

  • Efficiency and speed: It can generate unit tests based on specifications or code snippets. This possibility significantly reduces the time that you need to spend writing tests. So you can focus on writing application logic.
  • Coverage improvement: It can suggest tests for edge cases or scenarios that developers might not immediately consider. This way, you can improve your code’s test coverage.
  • Error reduction: It can reduce human error in writing repetitive or boilerplate test code.
  • Learning and onboarding: It can serve as an educational tool for developers who are new to testing frameworks or unfamiliar with best practices in unit testing. The generated tests can help developers learn about testing patterns, assertions, and ways to effectively write tests.

With ChatGPT, you can generate unit tests for your Python code in almost no time. However, note that even though the generated tests can look good, you should still review and possibly refine them.

In the following sections, you’ll learn the basics of using ChatGPT as an assistant for creating coherent unit tests for your Python projects using different test frameworks and libraries.

Writing doctest Tests With ChatGPT

Python’s doctest module provides a lightweight testing framework that you can use for test automation. It can read the test cases from your project’s documentation and your code’s docstrings. This framework comes with the Python interpreter, so you’ll have it at your disposal with any Python installation, which is great.

With doctest, you can write code that provides examples of how your code works, simulating REPL sessions. These examples work as documentation and also as test cases or unit tests. Each doctest test needs a statement or expression and an expected result.
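
For instance, a minimal doctest test might look like the following sketch, where add() is a hypothetical function used only for illustration:

Python
def add(a, b):
    """Return the sum of two numbers.

    >>> add(2, 3)
    5
    >>> add(-1, 1)
    0
    """
    return a + b

The lines starting with >>> simulate a REPL session, and the line below each one holds the expected output.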

You can use ChatGPT to help you out with writing doctest tests. First, you need some code to test. The following function tries to tackle the FizzBuzz challenge, where you return "fizz" for numbers divisible by three, "buzz" for those divisible by five, and "fizz buzz" for those divisible by both three and five.

Go ahead and create a fizzbuzz.py file and add the following code to it:

Python fizzbuzz.py
def fizzbuzz(number):
    if number % 3 == 0:
        return "fizz"
    elif number % 5 == 0:
        return "buzz"
    elif number % 15 == 0:
        return "fizz buzz"
    else:
        return number

This function works okay for numbers divisible by 3 or 5. However, there’s a slight issue with numbers that are divisible by both. Even though you know this, go ahead and start a new chat thread. Then, prompt ChatGPT to write doctest tests for the function:

You: Write doctest tests for the following function:

Python
def fizzbuzz(number):
    if number % 3 == 0:
        return "fizz"
    elif number % 5 == 0:
        return "buzz"
    elif number % 15 == 0:
        return "fizz buzz"
    else:
        return number

ChatGPT:

Certainly! The function you’ve provided implements a basic FizzBuzz algorithm but with a slight logical error. The check for divisibility by 15 should come before checks for 3 and 5, as every number divisible by 15 is also divisible by both 3 and 5, but your current order will never reach the "fizz buzz" case. Here’s a corrected version of your function with the logical order fixed, followed by doctest tests for it:

Python
def fizzbuzz(number):
    """
    Returns 'fizz' if number is divisible by 3,
    'buzz' if number is divisible by 5,
    'fizz buzz' if number is divisible by 15,
    and the number itself otherwise.

    >>> fizzbuzz(3)
    'fizz'
    >>> fizzbuzz(5)
    'buzz'
    >>> fizzbuzz(15)
    'fizz buzz'
    >>> fizzbuzz(4)
    4
    >>> fizzbuzz(30)
    'fizz buzz'
    >>> fizzbuzz(9)
    'fizz'
    >>> fizzbuzz(10)
    'buzz'
    >>> fizzbuzz(2)
    2
    """
    if number % 15 == 0:
        return "fizz buzz"
    elif number % 3 == 0:
        return "fizz"
    elif number % 5 == 0:
        return "buzz"
    else:
        return number

To use these doctest tests, you’ll need to include them in the docstring of your function as shown above. Then, you can run the tests with the doctest module by adding the following to your script:

Python
if __name__ == "__main__":
    import doctest
    doctest.testmod()

This code will automatically run the tests defined in the docstrings when you execute the Python file. Remember, the corrected order of conditions in the function ensures that numbers divisible by both 3 and 5 return "fizz buzz", which is crucial for the FizzBuzz logic.

Wow! That’s great! The chat not only wrote the doctest tests for you but also fixed the issue, explained important concepts around doctest, and provided instructions on how to run the tests. Note that the tests cover both regular and edge cases, which is pretty convenient.

Go ahead and run the following command from your terminal window to execute the tests and see their results:

Shell
$ python fizzbuzz.py -v
Trying:
    fizzbuzz(3)
Expecting:
    'fizz'
ok
Trying:
    fizzbuzz(5)
Expecting:
    'buzz'
ok
Trying:
    fizzbuzz(15)
Expecting:
    'fizz buzz'
ok
Trying:
    fizzbuzz(4)
Expecting:
    4
ok
Trying:
    fizzbuzz(30)
Expecting:
    'fizz buzz'
ok
Trying:
    fizzbuzz(9)
Expecting:
    'fizz'
ok
Trying:
    fizzbuzz(10)
Expecting:
    'buzz'
ok
Trying:
    fizzbuzz(2)
Expecting:
    2
ok
1 items had no tests:
    __main__
1 items passed all tests:
   8 tests in __main__.fizzbuzz
8 tests in 2 items.
8 passed and 0 failed.
Test passed.

After running the above command, you’ll see a verbose output with detailed information about every doctest test. The -v command-line option is responsible for generating the verbose output. As you can see, all the tests passed, so your function now works okay. The LLM has done a great job.

Creating unittest Tests With ChatGPT

The unittest framework is another standard library tool that you can use to write unit tests for your code. This testing framework uses an object-oriented approach and supports some essential concepts that facilitate code testing:

  • Test case: An individual unit of testing. It examines the output for a given input set.
  • Test suite: A collection of test cases, test suites, or both. They’re grouped together and executed as a whole.
  • Test fixture: A group of actions required to set up an environment for testing, along with the teardown actions that run after the tests finish.
  • Test runner: A component that handles the execution of tests and communicates the results to the user.
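
As a rough sketch, here’s how most of these concepts map onto unittest code. The class and method names below are hypothetical placeholders:

Python
import unittest

class MyTestCase(unittest.TestCase):  # A test case groups related tests
    def setUp(self):  # Fixture: runs before each test method
        ...

    def test_something(self):  # An individual test
        self.assertTrue(True)

    def tearDown(self):  # Fixture: runs after each test method
        ...

if __name__ == "__main__":
    unittest.main()  # The test runner executes the tests and reports results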

In the following sections, you’ll write some quick prompts for ChatGPT to provide test cases, fixtures, and test suites for you.

Writing Test Cases

The unittest module provides the TestCase base class, which you can use to write test cases through subclassing. Unlike doctest, unittest is a tool for creating elaborate test cases in independent modules that you can group in a tests/ folder under your project’s root directory or in any other location.
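
For example, a project layout following that convention might look like the following hypothetical structure:

project/
│
├── tests/
│   └── test_fizzbuzz.py
│
└── fizzbuzz.py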

Again, you can ask ChatGPT to write unittest test cases for you. Here’s a quick prompt to do this. Remember to start a new chat thread before running the prompt:

You: Write unittest test cases for the following function:

Python
def fizzbuzz(number):
    if number % 15 == 0:
        return "fizz buzz"
    elif number % 3 == 0:
        return "fizz"
    elif number % 5 == 0:
        return "buzz"
    else:
        return number

ChatGPT:

To write unittest test cases for the fizzbuzz() function, you need to follow these steps:

  1. Import the unittest module.
  2. Define a test class that inherits from unittest.TestCase.
  3. Write test methods within this class to test different aspects of the fizzbuzz() function.
    • Each test method should start with the word test.
    • Use assertions like self.assertEqual() to check the function’s output against expected results.
  4. Finally, call unittest.main() to run the tests when the script is executed.

Here’s how you can structure your test cases for the fizzbuzz() function:

Python
import unittest

def fizzbuzz(number):
    if number % 15 == 0:
        return "fizz buzz"
    elif number % 3 == 0:
        return "fizz"
    elif number % 5 == 0:
        return "buzz"
    else:
        return number

class TestFizzBuzz(unittest.TestCase):
    def test_fizz(self):
        """Test numbers divisible by 3 but not by 5"""
        self.assertEqual(fizzbuzz(3), "fizz")
        self.assertEqual(fizzbuzz(6), "fizz")
        self.assertNotEqual(fizzbuzz(15), "fizz")

    def test_buzz(self):
        """Test numbers divisible by 5 but not by 3"""
        self.assertEqual(fizzbuzz(5), "buzz")
        self.assertEqual(fizzbuzz(10), "buzz")
        self.assertNotEqual(fizzbuzz(15), "buzz")

    def test_fizz_buzz(self):
        """Test numbers divisible by both 3 and 5"""
        self.assertEqual(fizzbuzz(15), "fizz buzz")
        self.assertEqual(fizzbuzz(30), "fizz buzz")

    def test_neither(self):
        """Test numbers not divisible by 3 or 5"""
        self.assertEqual(fizzbuzz(1), 1)
        self.assertEqual(fizzbuzz(2), 2)
        self.assertEqual(fizzbuzz(4), 4)

if __name__ == "__main__":
    unittest.main()

To run these tests, save the code in a .py file and execute it with Python. The unittest framework will automatically discover the test methods in TestFizzBuzz and execute them, reporting the results.

This response is also great! The chat gives you a clear explanation of how to organize your test cases when you’re using unittest. Then, it generates the code for the tests.

Note that every test includes multiple assertions with different arguments and outputs. This approach might be appropriate when you need to test multiple related properties on an object. In your example, this approach works fine because the combined assertions test for a single return value.

Using multiple assertions helps reduce boilerplate code. In general, this approach has the following pros:

  • Efficiency: Multiple assertions in a single test can reduce repetitive code. It can also make tests run faster in those scenarios where you have setup and teardown requirements for each test.
  • Contextual testing: Multiple assertions might be necessary to check that a function behaves correctly in a specific context.
  • Convenience: Multiple assertions in a test can be more straightforward and less tedious to write compared to writing multiple single-assertion tests.

This approach also has its cons:

  • Clarity and isolation: When a test with multiple assertions fails, it can be harder to immediately identify which assertion caused the failure. This can slow down your debugging process.
  • Breakage risk: When an early assertion in a test fails, the subsequent assertions don’t execute, which can hide additional issues. The sketch after this list shows one way to work around this limitation.
  • Test purpose blurring: When a test has multiple assertions, it can become less focused. This can make the test harder to understand.
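
One way to mitigate the breakage risk in unittest is the .subTest() context manager, which the generated code doesn’t use. Here’s a minimal sketch of how you could apply it to the fizz tests:

Python
import unittest

from fizzbuzz import fizzbuzz

class TestFizzBuzz(unittest.TestCase):
    def test_fizz(self):
        """Test numbers divisible by 3 but not by 5"""
        for number in [3, 6, 9]:
            # Each subtest runs even if an earlier one fails, and
            # failure reports include the offending number.
            with self.subTest(number=number):
                self.assertEqual(fizzbuzz(number), "fizz")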

Finally, ChatGPT gives you quick instructions on running the tests. You already have a fizzbuzz.py file containing the fizzbuzz() function. So, go ahead and create a file called test_fizzbuzz_unittest.py and paste only the test-related code:

Python test_fizzbuzz_unittest.py
import unittest

from fizzbuzz import fizzbuzz

class TestFizzBuzz(unittest.TestCase):
    def test_fizz(self):
        """Test numbers divisible by 3 but not by 5"""
        self.assertEqual(fizzbuzz(3), "fizz")
        self.assertEqual(fizzbuzz(6), "fizz")
        self.assertNotEqual(fizzbuzz(15), "fizz")

    def test_buzz(self):
        """Test numbers divisible by 5 but not by 3"""
        self.assertEqual(fizzbuzz(5), "buzz")
        self.assertEqual(fizzbuzz(10), "buzz")
        self.assertNotEqual(fizzbuzz(15), "buzz")

    def test_fizz_buzz(self):
        """Test numbers divisible by both 3 and 5"""
        self.assertEqual(fizzbuzz(15), "fizz buzz")
        self.assertEqual(fizzbuzz(30), "fizz buzz")

    def test_neither(self):
        """Test numbers not divisible by 3 or 5"""
        self.assertEqual(fizzbuzz(1), 1)
        self.assertEqual(fizzbuzz(2), 2)
        self.assertEqual(fizzbuzz(4), 4)

if __name__ == "__main__":
    unittest.main()

In this file, you’ve copied the test-related code as the chat generated it. Note that you’ve imported the fizzbuzz() function from your fizzbuzz.py file for the tests to run. Now, go ahead and run the following command to execute the tests:

Shell
$ python -m unittest -v test_fizzbuzz_unittest.py
test_buzz (test_fizzbuzz_unittest.TestFizzBuzz.test_buzz)
Test numbers divisible by 5 but not by 3 ... ok
test_fizz (test_fizzbuzz_unittest.TestFizzBuzz.test_fizz)
Test numbers divisible by 3 but not by 5 ... ok
test_fizz_buzz (test_fizzbuzz_unittest.TestFizzBuzz.test_fizz_buzz)
Test numbers divisible by both 3 and 5 ... ok
test_neither (test_fizzbuzz_unittest.TestFizzBuzz.test_neither)
Test numbers not divisible by 3 or 5 ... ok

----------------------------------------------------------------------
Ran 4 tests in 0.000s

OK

This command runs the tests from the test_fizzbuzz_unittest.py file using the command-line interface of unittest. The -v command-line option generates a verbose output that takes advantage of the tests’ docstrings to provide context on what each test is about.

Because the code in the chat response includes the name-main idiom wrapping the unittest.main() function call, you can also run the tests by executing test_fizzbuzz_unittest.py as a regular Python script.

Creating Fixtures

A test fixture is a preparation that you need to perform for one or more tests and any associated cleanup actions. The setup code may involve the creation of temporary files, objects, databases, datasets, connections, and so on.

To illustrate how fixtures work in unittest, say that you’ve created a readers.py file that implements a JSONReader class that looks something like the following:

Python readers.py
import json

class JSONReader:
    def __init__(self, filename):
        self.filename = filename

    def read(self):
        with open(self.filename, encoding="utf-8") as file:
            return json.load(file)

You need to write unittest tests for your class. These tests will need preparation. For example, you can write a setup fixture to create a temporary file with some sample content. Then, you can remove the file after the tests run in the teardown stage.

Here’s a quick prompt that you can use with ChatGPT to write the fixtures and the appropriate tests:

You: Write unittest test cases with fixtures for the following class:

Python
import json

class JSONReader:
    def __init__(self, filename):
        self.filename = filename

    def read(self):
        with open(self.filename, encoding="utf-8") as file:
            return json.load(file)

ChatGPT:

To write unittest test cases with fixtures for the JSONReader class, you’ll need to follow these steps:

  1. Use setup and teardown methods to prepare and clean up test fixtures. These can be .setUp() (runs before each test method) and .tearDown() (runs after each test method) for resource management.
  2. In the setup method, create a temporary JSON file that the JSONReader class will read.
  3. Write test methods to verify that the JSONReader class correctly reads JSON content from a file.
  4. Use the teardown method to clean up any resources, such as deleting the temporary file created for testing.

Here’s how you can implement these steps in a unittest test case:

Python
import json
import os
import tempfile
import unittest

class JSONReader:
    def __init__(self, filename):
        self.filename = filename

    def read(self):
        with open(self.filename, encoding="utf-8") as file:
            return json.load(file)

class TestJSONReader(unittest.TestCase):
    def setUp(self):
        # Create a temporary file and write some JSON data to it
        self.temp_file, self.temp_file_path = tempfile.mkstemp(
            suffix=".json"
        )
        self.test_data = {"name": "Test", "value": 123}
        with os.fdopen(self.temp_file, "w", encoding="utf-8") as file:
            json.dump(self.test_data, file)

    def tearDown(self):
        # Clean up by removing the temporary file
        os.remove(self.temp_file_path)

    def test_read_json(self):
        # Test reading the JSON data
        reader = JSONReader(self.temp_file_path)
        data = reader.read()
        self.assertEqual(data, self.test_data)

    def test_file_not_found(self):
        # Test file not found error handling
        reader = JSONReader("non_existent_file.json")
        with self.assertRaises(FileNotFoundError):
            _ = reader.read()

if __name__ == "__main__":
    unittest.main()

This example demonstrates how to:

  • Use .setUp() to prepare a temporary JSON file that the JSONReader class will use for reading.
  • Write a basic test (.test_read_json()) to verify that the JSONReader can correctly read JSON data from a file.
  • Write a test (.test_file_not_found()) to check how the JSONReader behaves when the file does not exist, expecting a FileNotFoundError.
  • Use .tearDown() to clean up by deleting the temporary file after each test.

To run these tests, save the code in a .py file and execute it with Python. The unittest framework will automatically discover and run the tests, reporting the results.

In its response, the chat first lists the required steps to write test cases and fixtures for the target class. Then, it generates the test code. The .setUp() method creates the required temporary file with some sample data using the tempfile module from the standard library. The .tearDown() method removes the temporary file as a clean-up action. In general, the code looks good and well organized.

Note that you’ll find some coding style issues in the generated code. For example, instead of formal docstrings, the test functions start with a comment. Additionally, some attributes may not have descriptive names and, in some cases, you’ll find unneeded constructs like using a throwaway variable in the last test. You can manually fix these issues to fulfill your coding style requirements.
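
For example, here’s how the two test methods might look after a quick style cleanup that uses docstrings instead of comments and drops the throwaway variable. This is just a sketch of the style fixes, with the setup and teardown methods unchanged:

Python
class TestJSONReader(unittest.TestCase):
    # ...setUp() and tearDown() stay as generated above...

    def test_read_json(self):
        """Test that .read() returns the JSON file's content."""
        reader = JSONReader(self.temp_file_path)
        self.assertEqual(reader.read(), self.test_data)

    def test_file_not_found(self):
        """Test that .read() raises FileNotFoundError for missing files."""
        reader = JSONReader("non_existent_file.json")
        with self.assertRaises(FileNotFoundError):
            reader.read()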

You already have a readers.py file containing your JSONReader class. Go ahead and create a file called test_readers_unittest.py and copy the generated code. Remember to remove the class itself and import it from your readers.py file instead.
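
After those changes, the first few lines of test_readers_unittest.py would look something like this:

Python test_readers_unittest.py
import json
import os
import tempfile
import unittest

from readers import JSONReader

Then, run the following command to execute the tests: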

Shell
$ python -m unittest -v test_readers_unittest.py
test_file_not_found (
test_readers_unittest.TestJSONReader.test_file_not_found
) ... ok
test_read_json (test_readers_unittest.TestJSONReader.test_read_json) ... ok

----------------------------------------------------------------------
Ran 2 tests in 0.001s

OK

Great! The tests passed, and the generated code works correctly.

Creating Test Suites

Using unittest, you can create test suites for grouping tests. This way, you can selectively run a set of tests on your code, which can be convenient in those situations where your tests take a lot of time to run. In this case, you may need a shortcut that allows you to run only the tests that relate to the code that you’re working on.

To illustrate how to create tests and group them with test suites using ChatGPT, say that you have a module called calculations.py that defines basic arithmetic and statistics operations:

Python calculations.py
from collections import Counter

def add(x, y):
    return x + y

def subtract(x, y):
    return x - y

def multiply(x, y):
    return x * y

def divide(x, y):
    if y == 0:
        raise ValueError("Cannot divide by zero.")
    return x / y

def mean(data):
    return sum(data) / len(data)

def median(data):
    n = len(data)
    index = n // 2
    if n % 2:
        return sorted(data)[index]
    return sum(sorted(data)[index - 1 : index + 1]) / 2

def mode(data):
    c = Counter(data)
    return [
        k for k, v in c.items() if v == c.most_common(1)[0][1]
    ]

You need to generate unittest tests for all the functions in this module. Here’s a quick prompt that you can use to complete this task:

You: Write unittest test cases for the following module:

Python
from collections import Counter

def add(x, y):
    return x + y

def subtract(x, y):
    return x - y

def multiply(x, y):
    return x * y

def divide(x, y):
    if y == 0:
        raise ValueError("Cannot divide by zero.")
    return x / y

def mean(data):
    return sum(data) / len(data)

def median(data):
    n = len(data)
    index = n // 2
    if n % 2:
        return sorted(data)[index]
    return sum(sorted(data)[index - 1 : index + 1]) / 2

def mode(data):
    c = Counter(data)
    return [
        k for k, v in c.items() if v == c.most_common(1)[0][1]
    ]

ChatGPT:

To test the module you provided, which contains functions for basic arithmetic operations and statistical calculations, you can follow the structure outlined below. This structure includes tests for each function, ensuring that they handle both typical cases and edge cases, such as division by zero or calculating the mode of a dataset with multiple modes:

Python
import unittest
from collections import Counter

# Assuming the module's functions are defined here or imported

class TestArithmeticOperations(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(10, 5), 15)
        self.assertEqual(add(-1, 1), 0)
        self.assertEqual(add(-1, -1), -2)

    def test_subtract(self):
        self.assertEqual(subtract(10, 5), 5)
        self.assertEqual(subtract(-1, 1), -2)
        self.assertEqual(subtract(-1, -1), 0)

    def test_multiply(self):
        self.assertEqual(multiply(10, 5), 50)
        self.assertEqual(multiply(-1, 1), -1)
        self.assertEqual(multiply(-1, -1), 1)

    def test_divide(self):
        self.assertEqual(divide(10, 5), 2)
        self.assertEqual(divide(-1, 1), -1)
        self.assertEqual(divide(-1, -1), 1)
        with self.assertRaises(ValueError):
            divide(10, 0)

class TestStatisticalOperations(unittest.TestCase):
    def test_mean(self):
        self.assertEqual(mean([1, 2, 3, 4, 5]), 3)
        self.assertEqual(mean([1, 2, 3, 4, 5, 6]), 3.5)

    def test_median_odd(self):
        self.assertEqual(median([1, 3, 3, 6, 7, 8, 9]), 6)

    def test_median_even(self):
        self.assertEqual(median([1, 2, 3, 4, 5, 6, 8, 9]), 4.5)

    def test_median_unsorted(self):
        self.assertEqual(median([7, 1, 3, 3, 2, 6]), 3)

    def test_mode_single(self):
        self.assertEqual(mode([1, 2, 2, 3, 4, 4, 4, 5]), [4])

    def test_mode_multiple(self):
        self.assertEqual(set(mode([1, 1, 2, 3, 4, 4, 5, 5])), {1, 4, 5})

if __name__ == "__main__":
    unittest.main()

This test suite covers each function in the provided module. The arithmetic operations are tested with positive, negative, and zero values. The statistical operations are tested with lists of various lengths and compositions, including edge cases:

  • For divide(), it tests division by zero, expecting a ValueError.
  • For mean(), it tests with lists of both even and odd lengths.
  • For median(), it tests both odd and even number of elements in the list, including a test where the data isn’t initially sorted.
  • For mode(), it tests both when there is a single mode and multiple modes in the list.

To run these tests, ensure the functions from the module are either defined in the same file or properly imported into the test file, and then execute the test file with Python. The unittest framework will automatically discover and run the tests, reporting the results.

Great! You have test cases for all the functions in your calculations.py module. The different tests are grouped into two TestCase subclasses. The first one groups the basic arithmetic functions and the second one groups the statistical functions. This is a great way to structure your tests.

There’s a tiny detail that you must note in this generated code. The chat is importing the Counter class from the collections module. This import isn’t needed because the test code doesn’t use the class. So you can safely remove this import.
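
To fix that, and to run the tests against your own module, the first few lines of your test_calculations.py file might look something like the following sketch:

Python test_calculations.py
import unittest

from calculations import (
    add,
    divide,
    mean,
    median,
    mode,
    multiply,
    subtract,
)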

With those imports in place, run the following command, which executes all the tests from the test_calculations.py file:

Shell
$ python -m unittest test_calculations.py
..........
----------------------------------------------------------------------
Ran 10 tests in 0.000s

OK

All the tests passed! In this case, you didn’t use the -v command-line option, so you don’t get verbose output.

Now, say that you want to run your tests independently. For example, you might want to run the tests on the arithmetic functions separately from the tests on the statistical functions. You can use unittest test suites for this purpose.

Here’s a prompt to ask the chat to help you out. Remember to stay in the same chat thread:

You: Write unittest test suites to group the arithmetic functions and the statistical functions.

ChatGPT:

To group the tests for arithmetic and statistical functions into separate test suites using Python’s unittest framework, you can use the TestSuite class:

Python
import unittest
from collections import Counter

# Assuming the module's functions are defined here or imported

class TestArithmeticOperations(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(10, 5), 15)
        self.assertEqual(add(-1, 1), 0)
        self.assertEqual(add(-1, -1), -2)

    def test_subtract(self):
        self.assertEqual(subtract(10, 5), 5)
        self.assertEqual(subtract(-1, 1), -2)
        self.assertEqual(subtract(-1, -1), 0)

    def test_multiply(self):
        self.assertEqual(multiply(10, 5), 50)
        self.assertEqual(multiply(-1, 1), -1)
        self.assertEqual(multiply(-1, -1), 1)

    def test_divide(self):
        self.assertEqual(divide(10, 5), 2)
        self.assertEqual(divide(-1, 1), -1)
        self.assertEqual(divide(-1, -1), 1)
        with self.assertRaises(ValueError):
            divide(10, 0)

class TestStatisticalOperations(unittest.TestCase):
    def test_mean(self):
        self.assertEqual(mean([1, 2, 3, 4, 5]), 3)
        self.assertEqual(mean([1, 2, 3, 4, 5, 6]), 3.5)

    def test_median_odd(self):
        self.assertEqual(median([1, 3, 3, 6, 7, 8, 9]), 6)

    def test_median_even(self):
        self.assertEqual(median([1, 2, 3, 4, 5, 6, 8, 9]), 4.5)

    def test_median_unsorted(self):
        self.assertEqual(median([7, 1, 3, 3, 2, 6]), 3)

    def test_mode_single(self):
        self.assertEqual(mode([1, 2, 2, 3, 4, 4, 4, 5]), [4])

    def test_mode_multiple(self):
        self.assertEqual(set(mode([1, 1, 2, 3, 4, 4, 5, 5])), {1, 4, 5})

def load_tests(loader, tests, pattern):
    suite = unittest.TestSuite()
    suite.addTests(
        loader.loadTestsFromTestCase(TestArithmeticOperations)
    )
    suite.addTests(
        loader.loadTestsFromTestCase(TestStatisticalOperations)
    )
    return suite

if __name__ == "__main__":
    unittest.main()

This script defines two test classes, TestArithmeticOperations and TestStatisticalOperations, each containing tests relevant to their category. The load_tests() function is used to create a test suite that combines tests from both classes, managing a total of 10 tests. This function is a hook provided by unittest for customizing test loading and suite creation. It’s called automatically when using unittest.main().

To run the tests, ensure that the functions from your module are either defined in the same file or properly imported into the test file, and then execute the test file with Python. The unittest framework will discover the load_tests() function, use it to construct the test suite, and run the tests, reporting the results.

In the above response, the chat generated two TestCase subclasses, which are similar to the ones created before. Then, it generated the load_tests() function. In this example, the function defines a test suite that includes both test cases.

If you run the code as is, then you’ll execute all the tests. However, if you comment out one of the calls to .addTests() in the load_tests() function, then you’ll execute only a specific group of tests. To try these updated tests, go ahead and replace the content of your test_calculations.py file with the above code. Remember to import the functions from calculations.py.

Then, comment out the call that adds the TestArithmeticOperations tests.
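
With that change, your load_tests() function would look something like this:

Python
def load_tests(loader, tests, pattern):
    suite = unittest.TestSuite()
    # suite.addTests(
    #     loader.loadTestsFromTestCase(TestArithmeticOperations)
    # )
    suite.addTests(
        loader.loadTestsFromTestCase(TestStatisticalOperations)
    )
    return suite

Now run the following command: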

Shell
$ python -m unittest -v test_calculations.py
test_mean (test_calculations.TestStatisticalOperations.test_mean) ... ok
test_median_even (test_calculations.TestStatisticalOperations.test_median_even) ... ok
test_median_odd (test_calculations.TestStatisticalOperations.test_median_odd) ... ok
test_median_unsorted (test_calculations.TestStatisticalOperations.test_median_unsorted) ... ok
test_mode_multiple (test_calculations.TestStatisticalOperations.test_mode_multiple) ... ok
test_mode_single (test_calculations.TestStatisticalOperations.test_mode_single) ... ok

----------------------------------------------------------------------
Ran 6 tests in 0.000s

OK

When you comment out the line that adds the arithmetic tests, your code only runs the statistical tests.

Using test suites has several benefits. Some of the most relevant include the following:

  • Organize your tests: You can use test suites to group tests logically so that they’re more maintainable and easy to understand.
  • Run your tests selectively: In the development or debugging phases, you might want to run only a subset of tests that are relevant to the part of the code you’re working on. You can do this with test suites, as shown after this list.
  • Improve performance when running your tests: You can have test suites for different sets of features in your application, so you can avoid running all the tests every time you update the code.
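
Note that test suites aren’t the only selective-run mechanism. The unittest command-line interface also lets you target a single test case class directly, which might look like this:

Shell
$ python -m unittest -v test_calculations.TestStatisticalOperations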

Test suites are a powerful feature of unittest that enhance the organization, execution, and maintenance of tests, making them particularly useful in complex testing scenarios.

Writing pytest Tests With ChatGPT

If you’ve dived deep into Python testing, then you’ll know the pytest framework, which is a popular third-party testing framework for Python. It allows you to test all kinds of Python applications and libraries. It’s well-known for its simplicity, flexibility, and powerful features, which include some of the following:

  • Auto-discovery of tests: It automatically discovers test modules and functions. By default, it looks for files named test_*.py or *_test.py and runs all functions and methods whose names start with test. The sketch after this list shows how minimal a discoverable test can be.
  • Test fixtures: It offers a powerful fixture model that you can use for setting up and tearing down the test environment. You can use fixtures to create objects, manage resources, or establish states before a test run and clean up afterward.
  • Parameterized tests: It supports parameterized tests, which allow you to run tests with multiple different arguments. This feature helps to reduce code duplication and facilitates testing functions with various inputs.
  • Rich plugin architecture: It has a wide plugin ecosystem that extends its functionality, including parallel test execution, test coverage reporting, and integration with other tools.
  • Detailed reports: It issues detailed reports on test execution. The reports include information about the tests that passed, failed, or were skipped. It also reports the reasons for failures and errors.
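
To illustrate the auto-discovery point, the following file is already a complete pytest test module. The filename test_sample.py is a hypothetical example:

Python test_sample.py
# pytest finds this file and function automatically by their names.
def test_addition():
    assert 1 + 1 == 2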

In short, pytest is a full-featured and flexible testing framework that simplifies the process of writing, organizing, and running tests in Python. In the following sections, you’ll use ChatGPT to write pytest tests, create parametrized tests, and write test fixtures.

Writing Tests

In pytest, you can write tests as regular functions. You don’t need to subclass a base class or use specific methods from a framework’s API. So, writing tests with pytest is quick and flexible.

To write some pytest tests with ChatGPT, go ahead and start a new chat thread. Then issue the following prompt:

You: Write pytest test cases for the following function:

Python
def fizzbuzz(number):
    if number % 15 == 0:
        return "fizz buzz"
    elif number % 3 == 0:
        return "fizz"
    elif number % 5 == 0:
        return "buzz"
    else:
        return number

ChatGPT:

Certainly! To write pytest test cases for the fizzbuzz() function, you first need to make sure you have pytest installed. If it’s not installed, you can install it using pip install pytest.

Here are some example test cases that cover all branches of the fizzbuzz() function:

Python
# Import the pytest package and the fizzbuzz function
import pytest
# Assume fizzbuzz is defined in 'your_module.py'
from your_module import fizzbuzz

def test_fizzbuzz_with_number_divisible_by_15():
    assert fizzbuzz(30) == "fizz buzz"

def test_fizzbuzz_with_number_divisible_by_3():
    assert fizzbuzz(9) == "fizz"

def test_fizzbuzz_with_number_divisible_by_5():
    assert fizzbuzz(10) == "buzz"

def test_fizzbuzz_with_number_not_divisible_by_3_or_5():
    assert fizzbuzz(4) == 4

def test_fizzbuzz_with_zero():
    assert fizzbuzz(0) == "fizz buzz"

To run these tests, you’ll save them in a file (let’s say test_fizzbuzz.py) and then execute pytest in the terminal in the directory where your test file is located. Pytest will automatically discover tests that are prefixed with test_ and execute them.

In this response, you have five separate test functions. Each function tests for a different scenario. It’s important to note that you don’t need to import pytest at the beginning of your module for the code to work.

Note that ChatGPT has generated functions with a single assertion each. This approach differs from what it did with the unittest tests. In general, using a single assertion per test has the following pros:

  • Clarity and focus: Tests with single assertions tend to be more focused on a specific expected result.
  • Debugging and maintenance: When a single-assertion test fails, it’s immediately clear what went wrong because there’s only one possible reason for the failure.
  • Granularity in test results: Single-assertion tests provide a granular view of what functionality the test is checking.

Single-assertion tests also have some cons:

  • Increased number of tests: Having one assertion per test can lead to a large number of test functions, which might clutter the test suite and make it harder to navigate and maintain.
  • Performance overhead: Running a large number of tests can be slower than running fewer of them. For example, repeating the same setup code across multiple tests can add overhead.

As ChatGPT says, you need to install pytest to be able to run these tests. So, go ahead and run the following command in your Python environment:

Shell
$ python -m pip install pytest

This command will download pytest from the Python Package Index (PyPI) and install it in your current Python environment.

Now, you’re ready to run the generated tests. Go ahead and create a file called test_fizzbuzz_pytest.py and add the above code to it. Remember to import your fizzbuzz() function. You can also remove the pytest import. Once you have the file in place, run the following command:

Shell
$ python -m pytest test_fizzbuzz_pytest.py
======================== test session starts ========================
platform darwin -- ...
rootdir: .../examples
collected 5 items

test_fizzbuzz_pytest.py .....                                  [100%]

========================= 5 passed in 0.01s =========================

Your tests work as expected. They test five different scenarios, and your code successfully passes all of them.

Parametrizing Tests

As you already learned, pytest allows you to parametrize test functions. This parametrization is pretty handy because it frees you from writing one test function for every set of parameters. So, you can write fewer test functions and have more tests at the same time. This reduces the repetitive boilerplate code and increases your productivity.

You’ll use the @pytest.mark.parametrize decorator to define multiple sets of arguments for a test function or class.

Assuming that you’re following the example of the previous section and have the same chat thread active, you can use the following prompt to ask ChatGPT to update the test for you:

You: Update the previous tests using parametrization to avoid repetition.

ChatGPT:

To update the previous test cases for the fizzbuzz function using pytest’s parametrization feature, we can consolidate them into a single test function that takes different input values and expected outputs as parameters. This approach reduces repetition and makes the test suite more concise and maintainable. Here’s how you can do it:

Python
import pytest
# Assuming fizzbuzz is defined in 'your_module.py'
from your_module import fizzbuzz

@pytest.mark.parametrize(
    "input,expected",
    [
        (30, "fizz buzz"),  # Divisible by 15
        (9, "fizz"),  # Divisible by 3
        (10, "buzz"),  # Divisible by 5
        (4, 4),  # Not divisible by 3 or 5
        (0, "fizz buzz"),  # Edge case: 0 (divisible by 15)
        (33, "fizz"),  # Additional case: Divisible by 3
        (55, "buzz"),  # Additional case: Divisible by 5
        (98, 98),  # Additional case: Not divisible by 3 or 5
    ],
)
def test_fizzbuzz(input, expected):
    assert (
           fizzbuzz(input) == expected
    ), f"Expected {expected} for input {input}"

In this updated version, the @pytest.mark.parametrize decorator is used to run the test_fizzbuzz() function multiple times with different sets of arguments (input and expected). Each tuple in the list passed to the decorator represents a test case, where the first element is the input to the fizzbuzz() function, and the second element is the expected output.

This approach allows you to cover a wide range of test scenarios with a single test function, making the test code cleaner and reducing redundancy. Plus, adding new test cases is as simple as adding a new tuple to the list of parameters, enhancing the maintainability of the test suite.

Now, you have a single test function called test_fizzbuzz(). However, with the parametrization, this function works as eight different tests—one for each pair of parameters.

Using parametrization in tests offers several advantages. It reduces code duplication, making your test suite more concise. It also improves test coverage and simplifies code maintenance. In short, parametrization is a great feature of pytest that you’ll end up loving.

Update your test_fizzbuzz_pytest.py file with the above code. Then import fizzbuzz() and execute the following command to run the tests:

Shell
$ python -m pytest test_fizzbuzz_pytest.py
======================== test session starts ========================
platform darwin -- ...
rootdir: .../examples
collected 8 items

test_fizzbuzz_pytest.py ........                               [100%]

========================= 8 passed in 0.03s =========================

In the output, you can see that pytest collected eight tests in total. This means that it counts every set of parameters as an individual test.
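
Because each parameter set counts as its own test, you can even run a single one by its generated ID. The exact ID depends on the parameter values, but for the (30, "fizz buzz") pair, the command might look like this:

Shell
$ python -m pytest "test_fizzbuzz_pytest.py::test_fizzbuzz[30-fizz buzz]"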

Creating Fixtures

You can also create code fixtures in pytest to handle setup and teardown logic. To do this, you can use the @pytest.fixture decorator.

To continue with the code examples you’ve seen so far, get back to your JSONReader class. Now, issue the following prompt in your ChatGPT window. Remember to continue in the pytest thread:

You: Write pytest test cases with fixtures for the following class:

Python
import json

class JSONReader:
    def __init__(self, filename):
        self.filename = filename

    def read(self):
        with open(self.filename, encoding="utf-8") as file:
            return json.load(file)

ChatGPT:

To write pytest test cases for the JSONReader class, including the use of fixtures for setting up necessary test files, follow the steps below. First, ensure you have pytest installed.

  1. Fixture Setup: Create a fixture that sets up a temporary JSON file, which will be used by the tests. This fixture will create a file before each test runs and delete it afterward to ensure tests do not interfere with each other.
  2. Test Cases: Write test cases to verify that the JSONReader class can correctly read JSON files. This includes testing for correct content reading and handling of possible exceptions (e.g., file not found).

Here is an example of how the test file might look:

Python
import json
import os

import pytest

# Assuming JSONReader is defined in 'your_module.py'
from your_module import JSONReader

@pytest.fixture
def setup_json_file(tmp_path):
    # Create a temporary JSON file
    data = {"name": "John Doe", "age": 30}
    file_path = tmp_path / "test_data.json"
    with open(file_path, "w", encoding="utf-8") as file:
        json.dump(data, file)
    return file_path

def test_read_json_correctly(setup_json_file):
    # Test that JSONReader reads the file correctly
    reader = JSONReader(setup_json_file)
    data = reader.read()
    assert data == {
        "name": "John Doe",
        "age": 30,
    }, "Should correctly read JSON content"

def test_file_not_found():
    # Test for handling of file not found exception
    reader = JSONReader("non_existent_file.json")
    with pytest.raises(FileNotFoundError):
        reader.read()

In this example, tmp_path is a built-in pytest fixture that provides a temporary directory unique to the test function that requests it. We use it to create a temporary JSON file for testing. The setup_json_file fixture creates a temporary file with some JSON content and returns the path to this file, which is then used by the test_read_json_correctly test function.

The test_file_not_found function tests the behavior of the JSONReader class when attempting to read a non-existent file, expecting a FileNotFoundError to be raised.

This setup ensures that tests are isolated and do not depend on or interfere with external files, making your tests more reliable and portable.

In this response, you have a test fixture that creates a temporary file to save the JSON sample content. In pytest, the tmp_path fixture provides a quick way to create temporary directories. You don’t need to explicitly remove the temporary directory because the fixture takes care of this task for you.

The response isn’t perfect. Instead of formal docstrings, the test functions start with a comment, but that’s a coding style issue. Another detail that pops up is that the response includes an unneeded import. The fixture and tests don’t use the os module, so you can remove that import, and your test code will be ready to run. In this case, the import pytest line is necessary because you use pytest to define the fixture and to check for the expected exception.

Go ahead and create a file called test_readers_pytest.py with the generated code, remembering to import JSONReader from your readers.py file. Here’s how to run the tests:

Shell
$ python -m pytest test_readers_pytest.py
======================== test session starts ========================
platform darwin -- Python 3.11.3, pytest-7.4.0, pluggy-1.2.0
rootdir: .../examples
collected 2 items

test_readers_pytest.py ..                                      [100%]

========================= 2 passed in 0.01s =========================

The tests run successfully! So the fixture is working correctly.

Using Other Types of Prompts for Writing Tests

Up to now, you’ve included the code that you want to write tests for in every ChatGPT prompt. Sometimes, you can’t do this because you must fulfill privacy requirements or keep your company’s code safe.

In this scenario, you can create prompts that only use the signature of your classes, methods, and functions. You can also use general descriptions or specifications of your code to develop working prompts.

In the following sections, you’ll explore examples of creating prompts with signatures and descriptions.

Including Only Signatures in Your Prompts

To illustrate how to create prompts that use the signature of your classes, methods, and functions, consider the signature of a function that checks if a number is in a given interval. Remember to start a new chat thread before running the prompt below:

You: Write pytest tests for a function with the following signature:

Python
def in_interval(number: int, start: int, end: int) -> bool:

ChatGPT:

To write pytest tests for the function in_interval(number: int, start: int, end: int) -> bool, we first need to define the function based on its signature. This function should return True if number is between start and end (inclusive), and False otherwise. Let’s define this function and then write several pytest test cases to cover various scenarios, including edge cases.

First, here’s a simple implementation of the in_interval function:

Python
def in_interval(number: int, start: int, end: int) -> bool:
    return start <= number <= end

Now, let’s write the pytest tests. Create a new Python file for your tests, e.g., test_in_interval.py, and include the following code:

Python
import pytest
from your_module import in_interval

@pytest.mark.parametrize(
    "number, start, end, expected",
    [
        (5, 1, 10, True),  # Test case inside the interval
        (1, 1, 10, True),  # Edge case: number equals start
        (10, 1, 10, True),  # Edge case: number equals end
        (0, 1, 10, False),  # Number below the interval
        (11, 1, 10, False),  # Number above the interval
        (5, 5, 5, True),  # Edge case: start equals end equals number
        (-1, -5, 5, True),  # Test case with negative numbers
        (-6, -5, 5, False),  # Number below the interval with negatives
    ],
)
def test_in_interval(number, start, end, expected):
    assert in_interval(number, start, end) == expected

This test file uses pytest.mark.parametrize to define multiple test cases for the in_interval function. Each test case specifies a number, start, end, and the expected result. This approach allows you to efficiently test your function with a variety of inputs to ensure it behaves correctly under different conditions.

In this example, the function’s signature is the only context that you provide to ChatGPT. The language model is smart enough to examine the function’s name and arguments and generate a suitable implementation. This may not be the case with real-world functions.

Then, the chat generates a test function with eight sets of parameters. Each set of parameters tests the function in one specific scenario.

Note that having descriptive function and argument names is critical in this kind of prompt. Adding type hints to the function’s signature will also improve the quality of the generated tests.

Writing Prompts That Use Code Descriptions

You can also craft a prompt by just describing the input and output of your code or by providing a detailed description of what your code does. Again, these prompts are helpful when you’re not allowed to upload your code as part of your prompt.

To illustrate, here’s a prompt that asks ChatGPT to write tests for a function that determines whether an integer number is prime:

You: Write pytest tests for a function that takes an integer as an argument and determines whether it’s a prime number.

ChatGPT:

To write pytest tests for a function that determines whether a given integer is a prime number, we first need to define the function. Here’s a simple implementation of such a function, which we’ll name is_prime. This function returns True if the number is a prime, and False otherwise:

Python
def is_prime(number: int) -> bool:
    if number <= 1:
        return False
    for i in range(2, int(number ** 0.5) + 1):
        if number % i == 0:
            return False
    return True

Now, let’s write the pytest tests. Create a new Python file for your tests, for example, test_is_prime.py, and include the following code:

Python
import pytest
from your_module import is_prime

@pytest.mark.parametrize(
    "number, expected",
    [
        (2, True),  # Smallest prime
        (3, True),  # Prime
        (4, False),  # Composite (2*2)
        (5, True),  # Prime
        (11, True),  # Prime
        (12, False),  # Composite (2*6)
        (13, True),  # Prime
        (25, False),  # Composite (5*5)
        (29, True),  # Prime
        (1, False),  # Not prime by definition
        (0, False),  # Not prime
        (-1, False),  # Negative number
        (-11, False),  # Negative number
    ],
)
def test_is_prime(number, expected):
    assert is_prime(number) == expected

This test file uses pytest.mark.parametrize to specify multiple test cases for the is_prime function. Each test case provides a number to test and the expected result (True or False). This setup allows you to comprehensively test your function against a variety of inputs, including edge cases such as the smallest prime number, composite numbers, the number one, zero, and negative numbers.

Again, the chat generated a possible implementation of the function. After that, it generated the tests. You should provide more detailed descriptions for more complex functions, classes, or other pieces of code so that ChatGPT can generate better responses.

It’s also important to note that these tests are just a starting point. It’s up to you to modify and tweak them as needed to match your specific code. In any case, they can save you from writing a lot of boilerplate code.

Conclusion

Now you know how to leverage the power of tools like ChatGPT to write unit tests for your Python code. You’ve used the LLM to generate tests using different tools, including doctest, unittest, and pytest. In the process, you’ve learned how to create test cases, fixtures, test suites, and more.

In this tutorial, you’ve learned how to:

  • Ask ChatGPT to create quick tests using doctest
  • Use ChatGPT to generate unittest tests, fixtures, and suites
  • Craft ChatGPT prompts to write pytest tests and fixtures
  • Use alternative prompts for cases where the code isn’t available

With these skills, you’re ready to start creating robust and complete test batteries for your Python code and projects.

About Leodanis Pozo Ramos

Leodanis is an industrial engineer who loves Python and software development. He's a self-taught Python developer with 6+ years of experience. He's an avid technical writer with a growing number of articles published on Real Python and other sites.
