Python 3.14 Preview: Lazy Annotations

by Bartosz Zaczyński · Aug 27, 2025 · 52 min read · intermediate · python

Recent Python releases have introduced several small improvements to the type hinting system, but Python 3.14 brings a single major change: lazy annotations. This change delays annotation evaluation until explicitly requested, improving performance and resolving issues with forward references. Library maintainers might need to adapt, but for regular Python users, this change promises a simpler and faster development experience.

By the end of this tutorial, you’ll understand that:

  • Although annotations are used primarily for type hinting in Python, they support both static type checking and runtime metadata processing.
  • Lazy annotations in Python 3.14 defer evaluation until needed, enhancing performance and reducing startup time.
  • Lazy annotations address issues with forward references, allowing types to be defined later.
  • You can access annotations via the .__annotations__ attribute or use annotationlib.get_annotations() and typing.get_type_hints() for more robust introspection.
  • typing.Annotated enables combining type hints with metadata, facilitating both static type checking and runtime processing.

Explore how lazy annotations in Python 3.14 streamline your development process, offering both performance benefits and enhanced code clarity. If you’re just looking for a brief overview of the key changes in 3.14, then expand the collapsible section below:

Python 3.14 introduces lazy evaluation of annotations, solving long-standing pain points with type hints. Here’s what you need to know:

  • Annotations are no longer evaluated at definition time. Instead, their processing is deferred until you explicitly access them.
  • Forward references work out of the box without needing string literals or from __future__ import annotations.
  • Circular imports are no longer an issue for type hints because annotations don’t trigger immediate name resolution.
  • Startup performance improves, especially for modules with expensive annotation expressions.
  • Standard tools, such as typing.get_type_hints() and inspect.get_annotations(), still work but now benefit from the new evaluation strategy.
  • inspect.get_annotations() becomes deprecated in favor of the enhanced annotationlib.get_annotations().
  • You can now request annotations at runtime in alternative formats, including strings, values, and proxy objects that safely handle forward references.

These changes make type hinting faster, safer, and easier to use, mostly without breaking backward compatibility.


Python Annotations in a Nutshell

Before diving into what’s changed in Python 3.14 regarding annotations, it’s a good idea to review some of the terminology surrounding annotations. In the next sections, you’ll learn the difference between annotations and type hints, and review some of their most common use cases. If you’re already familiar with these concepts, then skip straight to lazy evaluation of annotations for details on how the new annotation processing works.

Annotations vs Type Hints

Arguably, type hints are the most common use case for annotations in Python today. However, annotations are a more general-purpose feature with broader applications. They’re a form of syntactic metadata that you can optionally attach to your Python functions and variables.

Although annotations can convey arbitrary information, they must follow the language’s syntax rules. In other words, you won’t be able to define an annotation representing a piece of syntactically incorrect Python code.

To be even more precise, annotations must be valid Python expressions, such as string literals, arithmetic operations, or even function calls. On the other hand, annotations can't be simple or compound statements, like assignments or conditionals, because Python's grammar only allows expressions in annotation position.

Python supports two flavors of annotations, as specified in PEP 3107 and PEP 526:

  1. Function annotations: Metadata attached to signatures of callable objects, including functions and methods—but not lambda functions, which don’t support the annotation syntax.
  2. Variable annotations: Metadata attached to local, nonlocal, and global variables, as well as class and instance attributes.

The syntax for function and variable annotations looks almost identical, except that functions support additional notation for specifying their return value. Below is the official syntax for both types of annotations in Python. Note that <annotation> is a placeholder, and you don’t need the angle brackets when replacing this placeholder with the actual annotation:

Python Syntax Python 3.6+
class Class:
    # These two could be either class or instance attributes:
    attribute1: <annotation>
    attribute2: <annotation> = value

    def method(
        self,
        parameter1,
        parameter2: <annotation>,
        parameter3: <annotation> = default_value,
        parameter4=default_value,
    ) -> <annotation>:
        self.instance_attribute1: <annotation>
        self.instance_attribute2: <annotation> = value
        ...

def function(
    parameter1,
    parameter2: <annotation>,
    parameter3: <annotation> = default_value,
    parameter4=default_value,
) -> <annotation>:
    ...

variable1: <annotation>
variable2: <annotation> = value

To annotate a variable, attribute, or function parameter, put a colon (:) just after its name, followed by the annotation itself. Conversely, to annotate a function’s return value, place the right arrow (->) symbol after the closing parenthesis of the parameter list. The return annotation goes between that arrow and the colon denoting the start of the function’s body.

As shown, you can mix and match function and method parameters, including optional parameters, with or without annotations. You can also annotate a variable without assigning it a value, effectively making a declaration of an identifier that might be defined later.

Declaring a variable doesn’t allocate memory for its storage or even register it in the current namespace. Still, it can be useful for communicating the expected type to other people reading your code or a static type checker. Another common use case is instructing the Python interpreter to generate boilerplate code on your behalf, such as when working with data classes. You’ll explore these scenarios in the next section.
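To see this in action, consider a small sketch (the Config class and its attribute are illustrative). Annotating a class attribute without a value records it in .__annotations__ but never creates the attribute itself:

```python
class Config:
    retries: int  # declared with a type, but never assigned

# The annotation is recorded on the class...
recorded = "retries" in Config.__annotations__

# ...but no attribute was actually created:
has_value = hasattr(Config, "retries")

print(recorded, has_value)  # True False
```

The same holds for module-level variables: the name shows up in the module's annotations, yet referencing it before assignment raises a NameError.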

To give you a better idea of what Python annotations might look like in practice, below are concrete examples of syntactically correct variable annotations:

Python Python 3.6+
>>> temperature: float
>>> pressure: {"unit": "kPa", "min": 220, "max": 270}

You annotate the variable temperature with float to indicate its expected type. For the variable pressure, you use a Python dictionary to specify the air pressure unit along with its minimum and maximum values. This kind of metadata could be used to validate the actual value at runtime, generate documentation based on the source code, or even automatically build a command-line interface for a Python script.
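As a sketch of the runtime-validation idea, a processor could read such dictionary metadata back from .__annotations__ and check values against it. The Sensor class and check_range() helper below are hypothetical, not from any library:

```python
class Sensor:
    # Dict-style metadata instead of a conventional type hint:
    pressure: {"unit": "kPa", "min": 220, "max": 270}

def check_range(value, meta):
    """Validate a value against dict-style annotation metadata."""
    return meta["min"] <= value <= meta["max"]

meta = Sensor.__annotations__["pressure"]
print(check_range(250, meta))  # True
print(check_range(300, meta))  # False
```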

Now, contrast this with a few failed attempts at writing incorrect annotations:

Python Python 3.6+
>>> lambda x: int: x + 1
  ...
SyntaxError: illegal target for annotation

>>> valve: (1 => "Schrader", 2 => "Presta", 3 => "Dunlop");
  ...
SyntaxError: invalid syntax

>>> point: type Point = tuple[float, float]
  ...
SyntaxError: invalid syntax

First, you try to annotate the parameter of an anonymous lambda function but quickly discover that lambdas don’t support annotations by design. The second annotation adopts Perl’s syntax for associative arrays, causing a syntax error in Python. Finally, the third annotation is a valid Python statement rather than an expression.

Note that you can still use this statement as a standalone instruction outside of the annotation context:

Python Python 3.12+
>>> type Point = tuple[float, float]

>>> Point.__class__
<class 'typing.TypeAliasType'>

>>> Point.__name__
'Point'

>>> Point.__value__
tuple[float, float]

The type statement was introduced in Python 3.12 as a new way of defining type aliases.

Python specifies the syntax for annotations but doesn’t give them inherent meaning. It’s up to you or third-party tools to interpret and use annotations as needed. In practice, Python annotations are used almost exclusively as type hints for static type checking. While they became synonymous with type hints in everyday speech, annotations technically provide a more universal mechanism, which often ends up being used for type hinting.

Notably, annotations are optional, enabling gradual typing in an otherwise dynamically typed language like Python. While the interpreter evaluates annotations at runtime, incurring additional computational cost, it doesn’t enforce them. Python ignores annotations, remaining backward compatible and sticking to its duck typing philosophy. At the same time, Python supports static duck typing through protocols, which build upon annotations.

Now you know that type hints are specialized annotations used for one specific purpose. But annotations have much more to offer. So, what are other common uses of annotations in Python?

Common Uses of Annotations

Even though the syntax for Python annotations was intentionally put into place without implying any particular semantics, it has always been implicitly motivated by the desire to facilitate static type checking, which remains their primary use case. The goal of annotations was to standardize a variety of informal documentation conventions that had emerged within the Python community to express type expectations and establish interfaces.

Before annotations came into existence, developers would rely on external tools and libraries to help document and validate their Python functions. Tools like typecheck, PyContracts, or mypy piggybacked on existing metaprogramming features of the language.

To illustrate a few alternative approaches to type checking before the introduction of annotations, have a look at this sample code written in legacy Python 2:

Python Python 2
from contracts import contract
from typecheck import accepts, returns

# Using typecheck
@accepts(int, int)
@returns(int)
def add(a, b):
    return a + b

# Using PyContracts
@contract
def sub(a, b):
    """
    :type a: int,>0
    :type b: int
    :rtype: int
    """
    return a - b

# Using mypy
def mul(a, b):
    # type: (int, int) -> int
    return a * b

Both typecheck and PyContracts provided custom decorators to enforce types at runtime. The latter library also allowed specifying more complex constraints directly in docstrings, using a syntax based on a domain-specific language. Meanwhile, mypy continues to respect inline type comments in older Python versions that lack native support for annotations.

Nearly a decade later, it became abundantly clear that most Python developers tend to use annotations for type checking. This led to the creation of the PEP 484 document, which outlined type hints—a higher-level abstraction built on top of annotations. At the same time, that document didn’t change anything about Python’s runtime behavior, nor did it prevent other uses of annotations. What are those uses exactly?

Broadly speaking, you can group the use cases for Python annotations into two main categories:

  1. Static code analysis
    • Checking types statically
    • Documenting code
    • Refactoring code safely
    • Suggesting autocompletions
  2. Runtime processing
    • Enforcing types at runtime
    • Generating code
    • Validating data
    • Facilitating dependency injection
    • Parsing command-line arguments
    • Mapping database queries
    • Marshaling parameters in RPC

With this high-level overview in mind, you’ll now explore how static tools leverage annotations without executing Python code.

Static Processing of Annotations

Static processing of annotations occurs outside of Python. Therefore, it requires a separate tool to read and analyze the source code, such as a type checker or a code editor. In this context, annotations often become type hints carrying information about the expected types of various code objects.

Type hints can optionally live in separate stub files (.pyi), where they provide type information without the corresponding implementation. In that sense, they’re analogous to header files in C and C++, allowing you to annotate code beyond your control. Stub files can help with C extension modules, third-party libraries, or code utilizing Python’s dynamic features, which prevent the source code from being annotated directly.

Here’s a sample stub file for an imaginary calculator.py module with missing or conflicting type hints:

Python calculator.pyi
type Number = int | float

class Calculator:
    history: list[Number]

    def __init__(self) -> None: ...

    def add(self, a: Number, b: Number) -> Number: ...

def add(a: Number, b: Number) -> Number: ...

You’d put this stub file in the same directory as the corresponding Python module. To mark an empty body, you use an ellipsis (...) instead of the pass statement, which serves as a placeholder for the implementation.

In their simplest form, type hints can refer to any built-in or custom type, including your own classes. To express more sophisticated constraints, like type unions, aliases, guards, covariant, contravariant, and invariant types, as well as generics, you can leverage the building blocks provided by the typing module in the standard library.

While annotations are most commonly used for static type checking, they can also be accessed while your program is running. This is where their dynamic processing comes into the picture.

Dynamic Processing of Annotations

Because the Python interpreter effectively ignores annotations, processing them at runtime generally requires a third-party library that can access and interpret them. Once you learn how to introspect annotations in Python, you’ll be able to write your own annotation processors, should you ever need to.

A familiar example of dynamic annotation processing occurs in data classes and class-based named tuples. Their type hints serve a dual purpose: static code analysis tools use them to perform ordinary type checking, while Python interprets them at runtime to register fields and automatically generate code for you:

Python Python 3.10+
>>> from dataclasses import dataclass, field
>>> from typing import ClassVar

>>> @dataclass(order=True, unsafe_hash=True)
... class User:
...     num_instances: ClassVar[int] = 0
...     id: int = field(init=False)
...     email: str
...     password: str | None = field(repr=False, default=None)
...
...     def __post_init__(self) -> None:
...         type(self).num_instances += 1
...         self.id = type(self).num_instances
...
>>> vars(User)
mappingproxy({
    '__module__': '__main__',

    '__replace__': <function _replace at 0x7c237a2db6a0>,
    '__hash__': <function User.__hash__ at 0x7c2378b08680>,
    '__init__': <function User.__init__ at 0x7c2378d460c0>,
    '__repr__': <function User.__repr__ at 0x7c2378b08360>,
    '__eq__': <function User.__eq__ at 0x7c2378b08220>,
    '__lt__': <function User.__lt__ at 0x7c2378b08400>,
    '__le__': <function User.__le__ at 0x7c2378b084a0>,
    '__gt__': <function User.__gt__ at 0x7c2378b08540>,
    '__ge__': <function User.__ge__ at 0x7c2378b085e0>,
    '__match_args__': ('email', 'password')
})

The special methods in the output above, such as .__init__(), .__repr__(), and the rich comparison methods, were synthesized by Python based on the type hints you provided in the class definition. Additionally, you got a few special attributes like .__match_args__ or .__static_attributes__ for free. If you don’t believe this, then just call help(User) in the Python REPL to see a full list of methods and attributes of your data class. There will be many that you never explicitly defined.
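You can verify this kind of code generation with a smaller, self-contained example (the Point class below is illustrative):

```python
from dataclasses import dataclass, fields

@dataclass(order=True)
class Point:
    x: float
    y: float

# The fields were registered purely from the annotations:
print([f.name for f in fields(Point)])  # ['x', 'y']

# And the ordering methods were generated for free:
print("__lt__" in vars(Point))            # True
print(Point(1.0, 2.0) < Point(2.0, 1.0))  # True (compares fields in order)
```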

Apart from conventional type hints such as ClassVar[int] or str | None, you may want to process domain-specific annotations that serve purposes beyond type checking. For example, libraries like Pydantic, FastAPI, and Typer deliver rich sets of custom classes and functions designed for use as annotations.

Before the release of Pydantic 2.0, you’d use some of its validators directly as annotations:

Python Python 3.7+
>>> from pydantic.v1 import BaseModel, NameEmail, constr

>>> class GitCommit(BaseModel):
...     author: NameEmail
...     sha1_hash: constr(regex=r"^[0-9a-f]{40}$")
...     message: constr(min_length=1, strip_whitespace=True)
...
>>> GitCommit(author="John Doe", message="")
Traceback (most recent call last):
  ...
pydantic.v1.error_wrappers.ValidationError: 3 validation errors for GitCommit
author
  value is not a valid email address (type=value_error.email)
sha1_hash
  field required (type=value_error.missing)
message
  ensure this value has at least 1 characters (type=...; limit_value=1)

Here, you declare your class fields using the library’s NameEmail type and the constr() wrapper function, constraining values according to specific rules. When you instantiate your model later, Pydantic automatically validates the data and raises an error if any constraint is violated.

Historically, the static and dynamic use cases for annotations have been at odds. In other words, you could either use annotations for type hinting or to supply metadata for runtime processors like Pydantic. Choosing one approach would prevent the other from working as intended. Fortunately, things have changed with the introduction of Python’s annotated type hints, which combine the best of both worlds.

Type Hints Annotated With Metadata

Pydantic was one of the major forces behind the real-world adoption of typing.Annotated. This special form allows for an elegant composition of many orthogonal features, such as validation, schema generation, and serialization, on top of type hinting.

By wrapping your type hint and one or more pieces of structured metadata in Annotated, you can satisfy the type checkers as well as runtime processors in one go. The first argument to Annotated must be a regular type hint, whereas the rest can be arbitrary annotations:

Python Syntax Python 3.9+
variable: Annotated[<type>, <annotation>, <annotation>, ...]

For type checkers, this declaration is equivalent to variable: <type>, while runtime processors can access a tuple of metadata fields through the special form’s .__metadata__ attribute. When you use Annotated, you must provide at least one metadata argument after the type.
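Here’s a minimal, self-contained sketch showing both sides of this contract. The Positive alias and clamp() function are hypothetical names used only for illustration:

```python
from typing import Annotated, get_type_hints

# A reusable annotated alias combining a type with metadata:
Positive = Annotated[int, {"min": 1}]

def clamp(value: Positive) -> int:
    return max(1, value)

# Runtime processors read the metadata tuple:
print(Positive.__metadata__)  # ({'min': 1},)

# By default, get_type_hints() strips the metadata...
print(get_type_hints(clamp)["value"])  # <class 'int'>

# ...unless you explicitly ask to keep the Annotated wrapper:
print(get_type_hints(clamp, include_extras=True)["value"] == Positive)  # True
```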

To continue with the data validation theme, consider the following example:

Python Python 3.9+
>>> from typing import Annotated

>>> from pydantic import (
...     AfterValidator,
...     BaseModel,
...     Field,
...     PlainSerializer,
...     StringConstraints
... )
>>> from pydantic.json_schema import Examples

>>> Price = Annotated[
...     float,
...     Field(strict=True, gt=0, description="Unit price in USD"),
...     Field(deprecated="Will be replaced by decimal.Decimal in the future"),
...     PlainSerializer(lambda value: f"${value:.2f}", return_type=str)
... ]

>>> class Product(BaseModel):
...     ean: Annotated[str, StringConstraints(pattern=r"^\d{13}$")]
...     name: Annotated[str, AfterValidator(str.title), Examples(["milk"])]
...     price: Price
...
>>> Product(ean="4056489255475", name="hass avocado", price=2.1)
Product(ean='4056489255475', name='Hass Avocado', price=2.1)

>>> _.model_dump()
{'ean': '4056489255475', 'name': 'Hass Avocado', 'price': '$2.10'}

To allow for code reuse and improve readability, you can assign your annotated type hint to a variable. This helps in more complicated cases where multiple annotations wouldn’t fit on a single line. On the other hand, you can use Annotated directly in the field definition if it’s still readable and you don’t need to reference it later.

At this point, you understand the difference between annotations and type hints, along with their use cases, and you know how they’re processed. Now it’s time to see how different Python versions evaluate annotations at runtime.

Runtime Evaluation of Annotations

Python largely ignores annotations at runtime, leaving their processing to type checkers and third-party libraries. However, that doesn’t mean the interpreter remains completely oblivious to them. In fact, quite the opposite is true!

Recall that annotations must be valid Python expressions, which the interpreter evaluates into concrete values. If they aren’t valid, then the interpreter complains because it actually executes the underlying code. In this section, you’ll learn when annotations are evaluated at runtime.

Eagerly Evaluated Annotations

Since their inception in Python 3.0, annotations have been evaluated eagerly as soon as you define them. This worked similarly to default argument values in functions and methods, which also execute immediately upon definition. This eager evaluation of annotations remained the default behavior up to and including Python 3.13:

Python Python 3.6–3.13
>>> variable1: " ".join(["This", "has", "no", "side", "effects"])

>>> variable2: print("But this one has")
But this one has

Although the first annotation has no visible side effects, Python still evaluates it by joining the strings under the surface. In contrast, the second annotation is a call to the Python print() function, which evaluates to None while displaying a text message in your terminal.

You can access the computed values at runtime via the .__annotations__ attribute, which is available on modules, classes, and callables. And, because module attributes are visible in the global namespace, you can use the __annotations__ variable instead. That might be particularly handy when you’re working in the Python REPL:

Python Python 3.6–3.13
>>> __annotations__
{'variable1': 'This has no side effects', 'variable2': None}

>>> import sys; sys.modules[__name__].__annotations__
{'variable1': 'This has no side effects', 'variable2': None}

Both names refer to the same object in memory, which is a mutable dictionary that maps global identifiers to their eagerly evaluated annotations. Notice how the second annotation was replaced with None, which print() returns implicitly.

Since annotations are expressions, their evaluation boils down to running Python code. This approach has both pros and cons, which you’ll explore in more detail later. However, two notable pain points related to using annotations for type hinting were the following:

  1. Performance overhead: Complex expressions take time to compute, slowing down startup time and module imports.
  2. Name errors: Referencing types that haven’t been defined yet results in runtime errors.

These issues have led to the introduction of postponed evaluation of annotations, which provided only a partial solution while bringing some new challenges. You’ll learn about this mechanism next.

Automatically Stringified Annotations

Starting with Python 3.7, you could opt into an alternative strategy for evaluating annotations, which aimed to address the problems you’d often encounter in the type-hinting scenario. With the postponed evaluation of annotations (PEP 563) turned on, Python would no longer evaluate them at runtime. Instead, it would preserve annotations as strings, leaving their interpretation up to you.

This was an optional behavior, which you could enable on a per-module basis with a future import, typically placed at the top of the file. The session below contrasts the default behavior with the postponed one:

Python Python 3.7+
>>> variable: print("This will show up now")
This will show up now
>>> __annotations__
{'variable': None}

>>> from __future__ import annotations
>>> variable: print("This won't show up")
>>> __annotations__
{'variable': 'print("This won\'t show up")'}

Look at how drastically the behavior and results change! Enabling postponed evaluation prevents the print() call from firing and alters the __annotations__ dictionary. This approach reduces the runtime cost of annotations and fixes the problem of referencing undefined names, which is pretty common in the static type checking scenario.

Turning annotations into strings makes no difference to static type checkers, which can still interpret them correctly. In fact, you can achieve a similar effect by manually wrapping your annotations in string literals.

At the same time, this subtle shift can easily break existing code that relies on the dynamic processing of annotations, where tools require the computed objects rather than strings.

In relatively simple cases, you can call eval() on the stringified annotations to do the evaluation yourself:

Python Python 3.7+
>>> __annotations__["variable"]
'print("This won\'t show up")'

>>> eval(__annotations__["variable"])
This won't show up

This works, but isn’t ideal. Calling eval() is considered unsafe since it can execute arbitrary code, leading to security vulnerabilities if the input isn’t properly sanitized. Alternatively, you can use one of the introspection functions from the standard library, which you’ll learn about later.
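One such function is typing.get_type_hints(), which evaluates stringified annotations for you. Here’s a minimal sketch, where the Node class is illustrative and the localns argument is passed explicitly to keep the example self-contained:

```python
from typing import get_type_hints

class Node:
    # These strings are what PEP 563 would produce automatically:
    value: "int"
    next_node: "Node"

# get_type_hints() evaluates the strings into real objects:
hints = get_type_hints(Node, localns={"Node": Node})
print(hints["value"] is int)       # True
print(hints["next_node"] is Node)  # True
```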

Either way, reconstructing complex objects from strings can be surprisingly tricky. Plus, you may need to import one or more dependencies to do this correctly. And, since you perform a postponed evaluation, the required modules may no longer be easily reachable from where the evaluation happens.

This makes the postponed evaluation of annotations backward-incompatible, causing a lot of headaches for library maintainers, such as those working on Pydantic:

The problem however is that trying to evaluate those strings to get the real annotation objects is really hard, perhaps impossible to always do correctly. (Source)

Furthermore, you might find yourself running into edge cases. One surprising artifact of the automatically stringified annotations is the doubly-wrapped string literals:

Python Python 3.7+
>>> name: "str"

>>> __annotations__
{'name': "'str'"}

In this case, the original annotation "str" becomes "'str'".

The postponed evaluation of annotations was supposed to become the default in Python 3.10, but its inclusion was rolled back and postponed (no pun intended) until Python 3.11. Then, it was put on hold again until finally being superseded by a more elegant, simple, and versatile solution that made its way to Python 3.14.

Next up, you’ll learn about the ultimate solution that addresses the challenges of both type hinting and dynamic processing of annotations.

Annotations as Data Descriptors

In Python 3.14, annotations are no longer evaluated eagerly, nor do they become strings. Instead, Python evaluates them lazily only when you explicitly request them, reducing their computational cost. If you don’t ask for annotations, then they won’t be evaluated at all, even when they would normally produce an error:

Python Python 3.14+
>>> variable: 1 / 0

>>> 1 / 0
Traceback (most recent call last):
  ...
ZeroDivisionError: division by zero

When used as a variable annotation, the expression 1 / 0 is disregarded by Python, but otherwise, it raises a ZeroDivisionError.

To trigger the evaluation of annotations, you can access the .__annotations__ attribute of a module, class, or function at least once:

Python Python 3.14+
>>> import sys
>>> module = sys.modules[__name__]
>>> module.__annotations__
Traceback (most recent call last):
  ...
ZeroDivisionError: division by zero

It’s only now that Python executes your faulty annotation, raising the familiar exception. But there’s something even more interesting happening behind the scenes:

Python Python 3.14+
>>> variable: print("Python is calling me now")

>>> module.__annotations__
Python is calling me now
{'variable': None}

>>> module.__annotations__
{'variable': None}

Notice that annotations are computed only when you first access them, while subsequent reads retrieve the cached values without executing the underlying expressions again. That’s because modules, functions, and classes implement .__annotations__ as a data descriptor, which stores the resulting values on each instance:

Python Python 3.14+
>>> descriptor = type(module).__dict__["__annotations__"]

>>> descriptor
<attribute '__annotations__' of 'module' objects>

>>> type(descriptor)
<class 'getset_descriptor'>

For the record, earlier Python versions also implemented .__annotations__ as a data descriptor, but it would merely return a dictionary of the eagerly evaluated annotations. In Python 3.14, this descriptor calls a new special method, .__annotate__(), which is responsible for computing the annotations on demand.

The Python compiler automatically generates a default implementation of .__annotate__() when it sees annotations being declared. Otherwise, it’ll set .__annotate__ to None. At the same time, you can provide your own implementation by overriding .__annotate__() if you need to customize the evaluation logic.
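To build intuition for this cache-on-first-access behavior, here’s a toy, version-independent sketch of the underlying idea. It’s not the real CPython machinery, which is implemented in C as a getset descriptor, and all the names below are made up:

```python
class CachedAnnotations:
    """Toy descriptor stand-in: compute once, then cache per instance."""

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        if "_cache" not in instance.__dict__:
            # Stands in for calling the compiler-generated __annotate__():
            instance.__dict__["_cache"] = instance.compute_annotations()
        return instance.__dict__["_cache"]

class FakeModule:
    annotations = CachedAnnotations()

    def __init__(self):
        self.calls = 0

    def compute_annotations(self):
        self.calls += 1
        return {"variable": None}

module = FakeModule()
print(module.annotations)  # {'variable': None} (runs the hook)
print(module.annotations)  # {'variable': None} (cache hit)
print(module.calls)        # 1
```

The key property mirrored here is that the expensive computation runs at most once per object, no matter how many times the attribute is read.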

This new special method takes a mandatory parameter, which determines the desired format of the evaluated annotations to support various use cases. The new annotationlib module provides a Format enumeration with the available formats:

Python Python 3.14+
>>> from annotationlib import Format
>>> list(Format)
[<Format.VALUE: 1>,
 <Format.VALUE_WITH_FAKE_GLOBALS: 2>,
 <Format.FORWARDREF: 3>,
 <Format.STRING: 4>]

The compiler-generated .__annotate__() method supports only the first format and raises NotImplementedError for the others, which are mainly intended for libraries that must process annotations at runtime. So, when you access the .__annotations__ attribute, Python invokes .__annotate__() with the VALUE format, producing conventional objects.

Although it’s generally recommended to use a higher-level abstraction, you can still call .__annotate__() directly if needed:

Python Python 3.14+
>>> module.__annotate__(Format.VALUE)
Python is calling me now
{'variable': None}

>>> module.__annotate__(Format.VALUE)
Python is calling me now
{'variable': None}

When you do, Python re-evaluates the annotation expressions, producing a new dictionary each time. That said, avoid calling this method directly in regular code: it bypasses the cache, which can lead to unpredictable behavior due to repeated side effects. There are better ways to obtain annotations, which you’ll learn about soon.

Deferred evaluation solves many of the annotation-related issues that plagued codebases before Python 3.14. Simultaneously, it remains mostly backward-compatible with existing code. Now that you understand how the evaluation of annotations has evolved over the years, it’s time to look at the classic problems and limitations that existed prior to this improvement.

Flaws of Annotations Before Python 3.14

You’ve already examined the various mechanisms Python introduced to address long-standing issues with type hinting and the dynamic processing of annotations. In this section, you’ll take a closer look at those issues and their workarounds, some of which might still be valid in Python 3.14.

Forward References

You’d often require forward references when using annotations to declare a type that hasn’t been fully defined yet. As an example, consider two classes representing a linked list and its nodes. Intuitively, you might want to create the LinkedList class first because it’s a higher-level abstraction than Node, which the former depends on. Unfortunately, this leads to an error:

Python Python 3.7–3.13
>>> from dataclasses import dataclass

>>> @dataclass
... class LinkedList:
...     head: Node
...
Traceback (most recent call last):
  ...
NameError: name 'Node' is not defined. Did you mean: 'None'?

Python doesn’t recognize the type you’re referring to in the annotation for the .head attribute because it hasn’t seen the definition of Node yet.

You can try to fix this issue by reordering your class definitions so that subordinate classes always appear first:

Python Python 3.7–3.13
>>> from typing import Any, Optional

>>> @dataclass
... class Node:
...     value: Any
...     next: Optional[Node] = None
...
Traceback (most recent call last):
  ...
NameError: name 'Node' is not defined. Did you mean: 'None'?

Alas, you’re back to square one. While your Node class doesn’t depend on custom classes defined later, it’s a self-referential type that contains a circular reference to itself. Since Python evaluates type annotations at class definition time, the name Node isn’t yet fully defined when it appears in its own body.

As you can see, refactoring your code so that classes follow a logical order isn’t always possible. When classes have intricate relationships, there might be no solution at all. Besides, placing lower-level classes before their higher-level counterparts can feel a bit jarring, as it goes against the natural top-down reading order.

In those cases, you can fall back to one of the following workarounds before Python 3.14:

  1. Wrap the type hints of undefined types into string literals
  2. Use the from __future__ import annotations directive

Both approaches achieve the same goal by introducing forward references, which type checkers can interpret correctly. The key difference is that the first method requires a manual process, whereas the second automatically stringifies all type hints in the given module.

Here’s how you might apply the first solution:

Python forward_references_strings.py
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class LinkedList:
    head: "Node"

@dataclass
class Node:
    value: Any
    next: Optional["Node"] = None

When you wrap the type hints in string literals, Python no longer complains about unresolved references at runtime. Instead, it treats the stringified annotations just like any other strings.
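You can verify this behavior yourself. The snippet below mirrors forward_references_strings.py and shows that the stringified annotation is stored as a plain string at runtime rather than being resolved to the Node class:

```python
# Runs on Python 3.7+: a stringified annotation stays a plain string at runtime
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class LinkedList:
    head: "Node"

@dataclass
class Node:
    value: Any
    next: Optional["Node"] = None

# No NameError at definition time, and the stored annotation is just "Node"
print(LinkedList.__annotations__)  # {'head': 'Node'}
```

Because the interpreter never evaluates the string, any tool that wants the actual Node class has to resolve it later, which is exactly the burden that fell on runtime libraries before Python 3.14.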

The other solution entails placing a future directive at the beginning of your script without wrapping any type hints by hand:

Python forward_references_future.py
from __future__ import annotations

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class LinkedList:
    head: Node

@dataclass
class Node:
    value: Any
    next: Optional[Node] = None

This works similarly to the previous example, addressing the challenges of type hinting through forward references. Unfortunately, neither of these patterns plays nicely with libraries performing runtime processing of annotations.

In Python 3.14, you don’t need to do anything extra for this to work across the board. Thanks to the deferred evaluation of annotations, the interpreter resolves all references automatically, even if the referenced types are defined later in the file. References to incorrect or missing types won’t cause runtime errors unless and until the annotation is actually evaluated.

Meanwhile, third-party tools and libraries don’t need to evaluate annotations themselves. Python 3.14 provides a built-in mechanism that produces consistent results while supporting various use cases.

Circular Imports

The problem of circular imports is a more extreme version of the forward reference issue. It occurs when referenced types reside in separate modules. When two modules try to import names from each other at load time, Python raises an ImportError, and such mutual dependencies are often a sign of poor design.

Yet, in some cases, you may legitimately need mutual imports solely for type hinting purposes. That’s because the type checker can’t infer where a given symbol is defined by itself. Consider the following validators module as an example:

Python validators.py
from models import User

def validate_email(user: User):
    if user.email is None:
        raise ValueError("email is required")

Notice that you only need the User class to declare the expected type of a function parameter. You never actually access the User class at runtime.

This contrasts with models, which imports the validate_email() function in order to actually call it at runtime:

Python models.py
from dataclasses import dataclass
from validators import validate_email

@dataclass
class User:
    email: str
    password: str

    def __post_init__(self):
        validate_email(self)

Now, when you try to import either of the two modules, you’ll get the notorious ImportError:

Python Python 3.7+
>>> import models
Traceback (most recent call last):
  ...
ImportError: cannot import name 'User' from 'models' (...)

In this situation, you attempt to import the models module, which itself imports validate_email() from validators. In turn, validators imports the User class from models, but that fails. At this point, Python hasn’t finished loading the models module, so the User class isn’t yet defined. If you reverse the order and import validators first, then you’ll encounter a similar issue, but in the opposite direction.

To break such a circular reference, you can leverage the typing.TYPE_CHECKING constant, which evaluates to False at runtime but allows type checkers to find the necessary code for static analysis:

Python validators.py
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from models import User

def validate_email(user: User):
    if user.email is None:
        raise ValueError("email is required")

Python will now skip the import statement that caused the circular dependency. At the same time, the type checker can still warn you ahead of time when you accidentally pass an object without an .email attribute.

Importing modules conditionally with the TYPE_CHECKING constant also addresses the performance overhead of evaluating annotations. You’ll learn more about it in the next section.

Performance Overhead

Annotations used as type hints usually have a negligible performance overhead, even when Python evaluates them eagerly. More complex annotations—for example, those involving generic types—might incur a slightly bigger cost at module import time. That’s because such annotations involve multiple name lookups when the function or class is defined. Still, their impact shouldn’t be very noticeable these days.

However, if your annotations contain expressions that are expensive to compute, or when they require importing modules with costly side effects, then it’s a different story. Here’s a contrived example that illustrates the problem:

Python fib.py
from typing import Annotated

def fib(n: int) -> int:
    return n if n < 2 else fib(n - 2) + fib(n - 1)

def increment(x: Annotated[int, fib(35)]) -> int:
    return x + 1

In this case, you annotate the parameter x of your increment() function with a call to fib(35). This annotation computes the 35th element of the Fibonacci sequence using a deliberately inefficient recursive implementation.

Unless you use a future directive to postpone the evaluation of annotations, you’ll observe a significant delay when running or importing this module on earlier Python versions:

Shell
$ time -p python3.13 -c 'import fib'
real 1.93
user 1.88
sys 0.06

It takes almost two seconds for Python 3.13 to load the function definitions from fib, even though the module itself doesn’t execute any code. In contrast, running the same code through Python 3.14 is almost instantaneous:

Shell
$ time -p python3.14 -c 'import fib'
real 0.03
user 0.02
sys 0.00

The difference is that Python 3.14 doesn’t execute annotations immediately, but defers their evaluation until necessary.

Additionally, performance may suffer when you rely on third-party libraries that make heavy use of annotations at runtime. One such example is typeguard, which reads runtime type information (RTTI) from annotations to enforce types during execution:

Python rtti.py
from typeguard import typechecked

@typechecked
def add(a: int, b: int) -> int:
    return a + b

Using this library can slow down your code because it checks types on every function call. It’s great for development, testing, and debugging, but you might want to disable it in production for performance-sensitive code.
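The kind of check typeguard performs can be sketched with standard library tools alone. The checked() decorator below is a deliberately simplified, hypothetical stand-in—real libraries also handle generics, containers, and return values—that validates plain-class hints on every call:

```python
# A toy sketch of runtime type checking, not typeguard's actual implementation.
# It only validates arguments whose hints are plain classes.
import functools
import inspect
from typing import get_type_hints

def checked(func):
    hints = get_type_hints(func)
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        bound.apply_defaults()
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            # Skip generics and unannotated parameters
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(
                    f"{name} must be {expected.__name__}, "
                    f"got {type(value).__name__}"
                )
        return func(*args, **kwargs)
    return wrapper

@checked
def add(a: int, b: int) -> int:
    return a + b
```

Even this toy version pays for signature binding and isinstance() checks on every invocation, which illustrates why you might reserve such checks for development and testing.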

Lazy Evaluation of Annotations in Python 3.14

Most of this tutorial so far has laid the groundwork for understanding annotations, including their purpose and connection to type hinting. Along the way, you’ve explored a bit of historical context to see how their evaluation evolved. Now it’s time to dive deeper and understand what these changes mean from a practical standpoint for you as a developer.

Static Type Checking of Annotated Code

When you only use annotations as type hints, you no longer need to wrap forward references in string literals, nor do you need to enable stringified annotations through a future directive. This is what your earlier example of a linked list implementation might look like in Python 3.14:

Python linked_list.py
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class LinkedList:
    head: Node

@dataclass
class Node:
    value: Any
    next: Optional[Node] = None

Because Python evaluates annotations lazily now, you can safely refer to a type, such as Node, defined much later. This feels natural and intuitive, allowing you to write cleaner and more readable code without worrying about the order of class or function definitions.

Beyond that, you can keep writing type hints exactly as before. The deferred evaluation of annotations in Python 3.14 has minimal direct impact on your code: there’s no change in the syntax or meaning of type hints, and static type checkers will continue to rely on their own logic to read and interpret annotations.

Runtime Processing of Annotations

If you’re a maintainer of a library that processes annotations dynamically, then you’ll mostly benefit from Python’s new runtime behavior. Some of the improvements include:

  • Elimination of manual evaluation for stringified type hints
  • Automatic resolution of forward references
  • Improved performance of runtime type introspection
  • Greater flexibility in using annotations for different purposes
  • A consistent evaluation mechanism provided by Python

But, it’s not all sunshine and roses. If you intend to maintain backward compatibility with older Python releases, then you should audit your code for reliance on legacy quirks. Besides, Python will continue to support automatically stringified annotations for the foreseeable future.

The most visible change in Python 3.14 is that the .__annotations__ dictionary will almost always contain evaluated Python objects—since you don’t need to wrap them in string literals anymore. Therefore, you can start using those objects immediately without having to parse the corresponding strings first.

Another benefit is that Python will automatically resolve any forward references for free. Previously, you’d have to do it yourself by searching the current namespace with globals() or by calling utility functions from the standard library, like below:

Python Python 3.6–3.13
>>> from typing import Optional, get_type_hints

>>> class Person:
...     best_friend: Optional["Person"]
...
>>> Person.__annotations__
{'best_friend': typing.Optional[ForwardRef('Person')]}

>>> get_type_hints(Person)
{'best_friend': typing.Optional[__main__.Person]}

Depending on how you declare your .best_friend attribute and whether you enable automatically stringified annotations, the corresponding type might appear as a string 'Person' wrapped in a ForwardRef. To deal with such situations—and many other surprising edge cases—you should use typing.get_type_hints(), which untangles stringified type hints and resolves forward references.

But this is just one tool in Python’s toolbox. You’ll now explore the broader set of techniques you can use to introspect and work with annotations directly from your code.

Introspection of Annotations From Within Python

In this section, you’ll explore the various ways to access and evaluate annotations from within Python. You’ll start with low-level dictionary access and advance to higher-level utility functions in the standard library designed for more robust and flexible introspection.

Access the Low-Level Dictionary With .__annotations__

As you know by now, Python modules, classes, functions, and methods define a special attribute called .__annotations__. This attribute is a dictionary that stores the annotations associated with variables, parameters, and return types.

Below is an example illustrating what this attribute looks like on various code objects from the Polars library:

Python Python 3.7+
>>> import polars

>>> module = polars
>>> function = polars.read_csv
>>> class_ = polars.Array
>>> method = polars.Array.__init__

>>> module.__annotations__
{'__version__': <class 'str'>}

>>> function.__annotations__
{'source': 'str | Path | IO[str] | IO[bytes] | bytes',
 'has_header': 'bool',

 'return': 'DataFrame'}

>>> class_.__annotations__
{'inner': 'PolarsDataType',
 'size': 'int',
 'shape': 'tuple[int, ...]'}

>>> method.__annotations__
{'inner': 'PolarsDataType | PythonDataType',
 'shape': 'int | tuple[int, ...] | None',
 'width': 'int | None',
 'return': 'None'}

In each case, annotations serve as type hints to document expected types and let type checkers flag mismatches before the code runs.

Note that module-level annotations captured by .__annotations__ include only annotated global variables, such as variable1 and variable3 in the example below:

Python module.py
variable1: int = 42
variable2 = 42
variable3: int

When you define a variable like variable2 without annotating it, then it won’t show up in the module’s .__annotations__ dictionary:

Python Python 3.6+
>>> import module
>>> vars(module)
{'__name__': 'module',
 ...
 '__annotations__': {
     'variable1': <class 'int'>,
     'variable3': <class 'int'>
 },
 'variable1': 42,
 'variable2': 42}

On the other hand, variables that are actually registered in the global namespace must be assigned some initial value. If you declare a variable with an annotation but without a value, like variable3 in this example, then it’ll remain undefined.
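You can confirm this with a quick check. Running the following as a script shows that the bare annotation declares the name for type checkers without ever creating the variable:

```python
# An annotation without an assignment never creates the variable
variable1: int = 42
variable3: int

print(variable1)  # 42

try:
    variable3
except NameError as error:
    print("undefined:", error)
```

Accessing variable3 raises a NameError at runtime, even though type checkers happily treat it as a declared int.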

Even though you can annotate local variables in a function body, Python doesn’t store their annotations at runtime. These kinds of annotations are discarded during bytecode compilation without being preserved in any special attribute:

Python Python 3.6+
>>> def function():
...     local_variable: int = 42
...
>>> function.__annotations__
{}

Type checkers can generally infer the types of local variables from the context. Still, there are a few reasons why you might want to add explicit annotations to your local variables:

Python Python 3.8+
from typing import Final

def function():
    # Suppress ambiguous inference of generic types
    fruits: list[str] = []

    # Mark variables as read-only with Final
    max_connections: Final[int] = 10

    # Document intent for maintainability
    user_ids: list[int] = extract_user_ids(records)

    # Use forward declarations in conditional branches
    result: str
    if condition:
        result = "yes"
    else:
        result = "no"

    # Help infer types of dynamic code
    config: dict[str, int] = {}
    for key, value in data.items():
        if isinstance(value, int):
            config[key] = value

For example, types of highly dynamic code may not be trivial to infer, or there might be ambiguities if there’s not enough contextual information.

Before Python 3.14, the .__annotations__ dictionary was computed eagerly at the time of class or function definition. This caused problems with forward references and reduced performance, which you could sidestep by enabling stringified annotations or by manually wrapping select type hints in string literals.

Because these fixes solved problems for type checkers but created new challenges for libraries, Python 3.14 introduces a better approach: annotations are now evaluated lazily. This means that the .__annotations__ dictionary isn’t created upfront. Instead, Python populates it with the evaluated annotations on demand and caches the result.

That said, because accessing this attribute directly is subject to many surprising quirks that have changed over time, you should only use it in the simplest cases. For the most robust processing of annotations at runtime, prefer one of the helper functions from the standard library, which you’ll learn about next.

Evaluate Annotations With annotationlib.get_annotations()

Python offers a safer and more versatile way to introspect annotations at runtime than accessing the .__annotations__ attribute directly. The annotationlib.get_annotations() utility function is considered best practice for reliably fetching annotations from any Python object.

Originally introduced as inspect.get_annotations() in Python 3.10, this function has been moved to the new annotationlib module in Python 3.14. The existing function remains as an alias for backward compatibility. The motivation behind introducing a smaller, more focused module was to avoid the overhead of importing inspect, which has grown too large, serving purposes far beyond runtime type introspection.

A key reason to prefer get_annotations() over .__annotations__ is that it addresses common pitfalls when working with annotations. The Python documentation contains the following warning:

Accessing the __annotations__ attribute of a class object directly may yield incorrect results in the presence of metaclasses. In addition, the attribute may not exist for some classes. Use inspect.get_annotations() to retrieve class annotations safely. (Source)

Unlike directly accessing .__annotations__, get_annotations() is guaranteed to return a dictionary. This ensures your code won’t break when inspecting classes, modules, or callable objects that lack annotations altogether. Take a partial function defined with functools as an example:

Python Python 3.10+
>>> import functools

>>> def add(a: int, b: int) -> int:
...     return a + b
...

>>> increment = functools.partial(add, a=1)
>>> increment.__annotations__
Traceback (most recent call last):
  ...
AttributeError: 'functools.partial' object has no attribute '__annotations__'

>>> import inspect
>>> inspect.get_annotations(increment)
{}

>>> inspect.get_annotations(increment.func)
{'a': <class 'int'>, 'b': <class 'int'>, 'return': <class 'int'>}

The object returned by partial() doesn’t preserve the original function’s annotations, so trying to access them through .__annotations__ raises an error. However, get_annotations() gracefully handles this situation by returning an empty dictionary. To retrieve the actual annotations from the wrapped function, you can inspect the .func attribute of the partial object.

In addition to being safer and more consistent, get_annotations() provides a useful feature, allowing you to evaluate stringified annotations:

Python Python 3.10+
>>> def div(a: "int", b: int) -> "float":
...     return a / b
...

>>> inspect.get_annotations(div)
{'a': 'int', 'b': <class 'int'>, 'return': 'float'}

>>> inspect.get_annotations(div, eval_str=True)
{'a': <class 'int'>, 'b': <class 'int'>, 'return': <class 'float'>}

By flipping the eval_str flag to True, you make Python unstringify annotations into corresponding Python objects with eval() behind the curtain. If an annotation is already a non-string object, then no additional evaluation is applied.

Compared to its prototype in the inspect module, the new annotationlib.get_annotations() function has been revised and updated. It now supports an optional and keyword-only format parameter, similar to the one used by .__annotate__(). This parameter lets you control the format of the returned annotations:

Python Python 3.14+
>>> from annotationlib import Format, get_annotations
>>> from dataclasses import dataclass

>>> @dataclass
... class LinkedList:
...     head: Node
...
>>> get_annotations(LinkedList, format=Format.STRING)
{'head': 'Node'}

>>> get_annotations(LinkedList, format=Format.FORWARDREF)
{'head': ForwardRef('Node', is_class=True, owner=<class '__main__.LinkedList'>)}

>>> get_annotations(LinkedList, format=Format.VALUE)
Traceback (most recent call last):
  ...
NameError: name 'Node' is not defined. Did you mean: 'None'?

When you request the Format.STRING or Format.FORWARDREF formats, get_annotations() returns either the raw string representation or a ForwardRef wrapper object for the declared types, respectively. However, attempting to resolve forward references to actual values with Format.VALUE raises a NameError if the referenced types haven’t been defined yet.

Once you define the missing classes, you’ll be able to obtain evaluated Python objects using either Format.VALUE or Format.FORWARDREF:

Python Python 3.14+
>>> from typing import Any

>>> @dataclass
... class Node:
...     value: Any
...     next: Node | None = None
...
>>> get_annotations(LinkedList, format=Format.VALUE)
{'head': <class '__main__.Node'>}

>>> get_annotations(LinkedList, format=Format.FORWARDREF)
{'head': <class '__main__.Node'>}

In most cases, the Format.FORWARDREF option will produce an evaluated object, only falling back to the ForwardRef placeholder for undefined types.

The get_annotations() function is a general-purpose utility that gives you the ultimate control over handling annotations in Python. It can evaluate stringified annotations for you, optionally avoiding side effects with ForwardRef placeholders. At the same time, it can turn them into strings for further manual processing.

However, when you’re specifically using annotations as type hints, then Python offers a more convenient tool tailored to that purpose. You’ll explore it next.

Obtain Runtime Type Information With typing.get_type_hints()

When you treat annotations purely as type hints, then you should use Python’s specialized typing.get_type_hints() utility. This function focuses on runtime type introspection for functions, classes, and modules. It can evaluate stringified types and follows inheritance hierarchies, making it ideal for tools like linters and runtime type checkers.

As the name suggests, get_type_hints() interprets annotations as type hints, expecting them to represent actual types rather than arbitrary metadata. So, if your annotations don’t correspond to valid data types, then they won’t evaluate properly:

Python Python 3.14+
>>> from typing import get_type_hints

>>> class Broken1:
...     field: UndefinedType
...
>>> class Broken2:
...     field: "42 is not a type"
...
>>> get_type_hints(Broken1)
Traceback (most recent call last):
  ...
NameError: name 'UndefinedType' is not defined

>>> get_type_hints(Broken2)
Traceback (most recent call last):
  ...
SyntaxError: Forward reference must be an expression -- got '42 is not a type'

Python raises a NameError when the evaluated type isn’t defined in the available namespace, and a SyntaxError when an annotation wrapped in a string literal isn’t valid Python code.

Those were artificial examples. Here’s a more realistic one:

Python Python 3.14+
>>> def main(size: {S, M, L, XL}, debug: -d = False):
...     pass
...
>>> get_type_hints(main)
Traceback (most recent call last):
  ...
NameError: name 'S' is not defined

The main() function above uses annotations to describe command-line arguments with expressions that don’t reflect any meaningful types. Calling get_type_hints() on this function triggers their evaluation, and since S can’t be found in the current namespace, Python raises an error.

In this case, you might prefer to leverage get_annotations() with format=Format.STRING to retrieve stringified expressions, avoiding their premature evaluation:

Python Python 3.14+
>>> from annotationlib import Format, get_annotations

>>> get_annotations(main, format=Format.STRING)
{'size': '{S, XL, M, L}', 'debug': '-d'}

This lets you process the annotations in custom ways, for example, by building a command-line argument parser and injecting the parsed values back into the main() function.
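Here’s one hypothetical way that idea could look. The build_parser() helper below isn’t part of any real library—it’s just an illustration of turning string annotations like "-d" into argparse flags and feeding the parsed values back into the function:

```python
# Hypothetical sketch: build an argparse parser from annotation strings.
# build_parser() is illustrative only, not a real library helper.
import argparse

def main(debug: "-d" = False, verbose: "-v" = False):
    print(f"debug={debug}, verbose={verbose}")

def build_parser(func):
    parser = argparse.ArgumentParser()
    for name, flag in func.__annotations__.items():
        # String-literal annotations stay plain strings at runtime
        if isinstance(flag, str) and flag.startswith("-"):
            parser.add_argument(flag, f"--{name}", dest=name,
                                action="store_true")
    return parser

args = build_parser(main).parse_args(["-d"])
main(**vars(args))  # debug=True, verbose=False
```

Because the annotations here are ordinary string literals, this pattern behaves the same on Python 3.14 and earlier versions.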

Alternatively, when working with legacy code that follows a different annotation style, you may still want to introduce type hints elsewhere for better readability and tooling support. To make these mixed styles coexist, you can selectively mark certain functions or classes to be ignored by type checkers.

You do this by applying the @no_type_check decorator from the typing module, which indicates annotations that aren’t type hints:

Python Python 3.14+
>>> from typing import no_type_check

>>> @no_type_check
... def main(size: {S, M, L, XL}, debug: -d = False):
...     pass
...
>>> get_type_hints(main)
{}

When this decorator is present, get_type_hints() will skip all annotations in that object, effectively opting them out of type checking.

However, a better approach to harmonize the two might be to use typing.Annotated, which get_type_hints() supports. It allows you to combine optional metadata with type hints in a more deliberate and expressive way:

Python Python 3.12+
>>> from typing import Annotated, Literal

>>> type Size = Literal["S", "M", "L", "XL"]

>>> def main(
...     size: Annotated[Size, set(Size.__value__.__args__)],
...     debug: Annotated[bool, "-d", "--debug"] = False,
... ):
...     pass
...

The first argument to Annotated is the type hint, such as Size or bool. The rest are metadata, like a fixed set of value choices or a list of Boolean flag names.

By default, get_type_hints() discards the metadata attached to annotations through typing.Annotated. The function simply returns the underlying types:

Python Python 3.5+
>>> get_type_hints(main)
{'size': Size,
 'debug': <class 'bool'>}

To reveal the additional details, you must explicitly set include_extras=True:

Python Python 3.9+
>>> get_type_hints(main, include_extras=True)
{'size': typing.Annotated[Size, {'M', 'S', 'L', 'XL'}],
 'debug': typing.Annotated[bool, '-d', '--debug']}

Having the flexibility to preserve this metadata lets you support use cases like runtime validation or schema generation, where these extra details play a crucial role.
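As a small taste of that, here’s a minimal, hypothetical validation sketch—the validate() helper and the positive() rule aren’t from any library—that runs every callable attached as Annotated metadata against an instance’s attributes:

```python
# Minimal sketch of metadata-driven validation; validate() and positive()
# are hypothetical helpers, not part of any library.
from typing import Annotated, get_type_hints

def positive(value):
    if value <= 0:
        raise ValueError("must be positive")

class Order:
    quantity: Annotated[int, positive]
    note: str

def validate(obj):
    hints = get_type_hints(type(obj), include_extras=True)
    for name, hint in hints.items():
        # Annotated[...] exposes its extra arguments through .__metadata__
        for extra in getattr(hint, "__metadata__", ()):
            if callable(extra):
                extra(getattr(obj, name))

order = Order()
order.quantity = 5
order.note = "rush"
validate(order)  # passes; quantity = -1 would raise ValueError
```

Libraries like pydantic build far more elaborate machinery on the same foundation: type hints for structure, metadata for constraints.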

Unlike get_annotations(), which gives you a choice, get_type_hints() always evaluates stringified annotations, treating them as forward references. This makes it convenient when types are declared later in the code, especially in class bodies:

Python Python 3.6+
>>> class Book:
...     title: str
...     isbn: "str"
...     author: "Person"
...
... class Person:
...     pass
...
>>> get_type_hints(Book)
{'title': <class 'str'>,
 'isbn': <class 'str'>,
 'author': <class '__main__.Person'>}

Whether you declare your type hints using native Python objects or wrap them in string literals, get_type_hints() resolves them correctly.

Another subtle difference between get_annotations() and get_type_hints() is their handling of None as an annotation:

Python Python 3.14+
>>> class Empty:
...     field: None
...
>>> get_annotations(Empty)
{'field': None}

>>> get_type_hints(Empty)
{'field': <class 'NoneType'>}

The former returns None as is, while the latter converts it to NoneType.

One of the most powerful features of get_type_hints() is how it resolves annotations in class hierarchies. Unlike naive attribute lookups, it merges annotations from base classes by walking the method resolution order (MRO). This ensures you get a complete and consistent view of type hints even in multiple inheritance scenarios:

Python Python 3.14+
>>> class Person:
...     first_name: str
...     last_name: str
...
... class User(Person):
...     email: str
...     password: str
...
... class Employee(User):
...     employee_id: int
...     department: str
...
>>> get_annotations(Employee)
{'employee_id': <class 'int'>,
 'department': <class 'str'>}

>>> get_type_hints(Employee)
{'first_name': <class 'str'>,
 'last_name': <class 'str'>,
 'email': <class 'str'>,
 'password': <class 'str'>,
 'employee_id': <class 'int'>,
 'department': <class 'str'>}

This behavior sets get_type_hints() apart from manual introspection or even get_annotations(), which treats annotations as belonging only to the immediate object.

In summary, typing.get_type_hints() is your go-to tool for interpreting annotations as type hints. It automatically evaluates forward references, filters out metadata unless requested, and unifies type hints from base classes. Use it when you want strict, type-focused introspection that aligns with the intent of PEP 484.

Conclusion

With Python 3.14, annotations are evaluated only when needed, bringing a welcome improvement to both their performance and usability. This change resolves long-standing issues with forward references, circular imports, and runtime overhead, all while preserving backward compatibility.

Throughout this tutorial, you’ve learned how Python annotations work, why they matter, and how their evaluation strategy has evolved over time. More importantly, you’ve seen how this affects your daily coding.

In this tutorial, you’ve learned how to:

  • Distinguish annotations from type hints and understand their broader purpose
  • Apply annotations for both static type checking and runtime processing
  • Avoid common pitfalls like forward references and circular imports
  • Introspect annotations safely using standard library tools
  • Understand the practical impact of Python 3.14’s lazy evaluation

Python’s dynamic nature doesn’t mean you have to sacrifice structure or clarity. Lazy annotations help bring the best of both worlds—maintainable code that’s easier to reason about and faster to load.

Ready to give lazy annotations a try? Fire up Python 3.14 and see the difference for yourself.

Frequently Asked Questions

Now that you have some experience with lazy annotations in Python 3.14, you can use the questions and answers below to check your understanding and recap what you’ve learned.

These FAQs are related to the most important concepts you’ve covered in this tutorial. Click the Show/Hide toggle beside each question to reveal the answer.

You use Python annotations primarily for type hinting, which assists with static type checking, code documentation, and providing runtime metadata for libraries to process.

In Python 3.14, lazy annotations improve performance by deferring their evaluation until you explicitly request them, reducing startup time and avoiding unnecessary computations.

Lazy annotations solve issues with forward references and circular imports by not evaluating annotations until necessary, allowing references to types defined later in the code.

You can access annotations using the .__annotations__ attribute or by using utility functions like annotationlib.get_annotations() and typing.get_type_hints() for safer and more flexible introspection.

Using typing.Annotated allows you to combine type hints with metadata, enabling both static type checking and runtime processing without conflict.


About Bartosz Zaczyński

Bartosz is an experienced software engineer and Python educator with an M.Sc. in Applied Computer Science.

