Caching in Python With lru_cache (Summary)
Caching is an essential optimization technique for improving the performance of any software system. Understanding how caching works is a fundamental step toward incorporating it effectively in your applications.
In this video course, you learned:
- What the different caching strategies are and how they work
- How to use Python's `@lru_cache` decorator (see the sketch after this list)
- How to create a new decorator to extend the functionality of `@lru_cache`
- How to measure your code's runtime using the `time` module
- What recursion is and how to solve a problem using it
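To tie these ideas together, here's a minimal sketch (not the course's exact code) of a recursive Fibonacci function cached with `@lru_cache` and timed with the `time` module:

```python
import time
from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci(n):
    """Naive recursion, made fast by memoizing every result."""
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

start = time.perf_counter()
result = fibonacci(36)
elapsed = time.perf_counter() - start
print(f"fibonacci(36) = {result} in {elapsed:.6f} seconds")
```

Try commenting out the decorator to see how dramatically the runtime grows without the cache.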
The next step toward implementing different caching strategies in your applications is to look at the `cachetools` module. This library provides several collections and decorators covering some of the most popular caching strategies, so you can start using them right away.
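As a quick, hypothetical taste of what that looks like (assuming `pip install cachetools`; the `get_article` function here is made up for illustration):

```python
from cachetools import TTLCache, cached

# A TTL (time-to-live) cache: entries expire after 10 seconds, and the
# least recently used entry is evicted once 128 entries are stored.
@cached(cache=TTLCache(maxsize=128, ttl=10))
def get_article(article_id):
    print(f"Fetching article {article_id}...")
    return f"Article {article_id} body"

get_article(1)  # Runs the function and caches the result
get_article(1)  # Within 10 seconds: served from the cache, no "Fetching" output
```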
For more information on the concepts covered in this course, check out:
- Cache replacement policies
- functools Module (a lesson in Python Coding Interviews: Tips & Best Practices)
- Primer on Python Decorators
- The Real Python Podcast Episode 68: Exploring the functools Module and Complex Numbers in Python
- Python Timer Functions: Three Ways to Monitor Your Code
- Recursion in Python: An Introduction
- Exploring the Fibonacci Sequence With Python
Congratulations, you made it to the end of the course! What’s your #1 takeaway or favorite thing you learned? How are you going to put your newfound skills to use? Leave a comment in the discussion section and let us know.
00:00 In the previous lesson, I showed you how to augment the LRU cache to add time-based expiration. In this lesson, I'll summarize the course and point you toward some areas for further investigation.
00:13 You've seen how caching can make a big performance difference speed-wise. In fact, you saw a six-order-of-magnitude difference when calculating the first thirty-six values of the Fibonacci sequence. This speed-up comes at the cost of memory.
00:26 Life is all about trade-offs. Early on in the course, I showed you several different caching policies but focused on LRU, the Least Recently Used policy.
00:36 This policy keeps things that were recently accessed, attempting to maximize locality while compromising on cost. Python provides an LRU cache through a decorator in the `functools` library.
00:48 As long as your function’s arguments are hashable, it can be cached with the addition of an import and a single line of code. And as decorators are just functions, you can compose and add to them by calling them inside of other functions.
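To illustrate the hashability requirement, here's a small snippet (mine, not from the course): tuples work as arguments because they're hashable, while lists don't.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def total(numbers):
    return sum(numbers)

total((1, 2, 3))  # Fine: a tuple is hashable, so the call can be cached
total([1, 2, 3])  # TypeError: unhashable type: 'list'
```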
01:01 You saw how to implement an LRU cache that also had a time expiration feature by calling the LRU decorator inside a decorator of your very own.
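Here's a minimal sketch of that composition, assuming a whole-cache reset on expiry; the name `timed_lru_cache` and the use of `time.monotonic()` are my choices and may differ from the course's implementation:

```python
import time
from functools import lru_cache, wraps

def timed_lru_cache(seconds, maxsize=128):
    """Wrap lru_cache and clear the whole cache once a lifetime elapses."""
    def decorator(func):
        cached = lru_cache(maxsize=maxsize)(func)
        expiration = time.monotonic() + seconds

        @wraps(func)
        def wrapper(*args, **kwargs):
            nonlocal expiration
            # Once the lifetime is up, drop every entry and restart the clock.
            if time.monotonic() >= expiration:
                cached.cache_clear()
                expiration = time.monotonic() + seconds
            return cached(*args, **kwargs)

        wrapper.cache_info = cached.cache_info  # Keep lru_cache introspection
        return wrapper
    return decorator

@timed_lru_cache(seconds=10)
def fetch_data(url):
    print(f"Fetching {url}...")
    return f"<response from {url}>"
```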
01:12 For more information on cache policies, you can see this Wikipedia article. If you'd like to learn more about `functools`, there's a lesson inside the longer course Python Coding Interviews: Tips & Best Practices that goes into some commonly used parts of the library.
01:29 And if you’d like to learn more about decorators, this is a very detailed course with better explanations than you got in my little tangent. That’s all for me.