More lru_cache Features
The @lru_cache decorator takes a couple of extra arguments. maxsize specifies how many items are kept in the cache; it defaults to 128. And typed changes the behavior of the decorator, caching arguments of different types separately, so that, for example, f(3) and f(3.0) get their own cache entries.
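For a quick illustration of typed (my own toy example, not code from the course):

    from functools import lru_cache

    @lru_cache(maxsize=256, typed=True)
    def double(x):
        return x * 2

    double(3)    # cached under the key for the int 3
    double(3.0)  # typed=True means the float 3.0 gets its own entry
    print(double.cache_info())
    # CacheInfo(hits=0, misses=2, maxsize=256, currsize=2)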
00:39 You might recall I made a snide comment about Europeans getting credit for writing something down. Well, let’s correct that. Let me introduce you to Pingala, an Indian mathematician who wrote down the concept now called the Fibonacci sequence 1,400 years before Fibonacci. Yep, you heard that right: 200 BC. In honor of Pingala, I’ve written a Fibonacci-like function that, instead of being based on the previous two values in the sequence, takes the previous seven values.
01:21 Instead of using the command-line argument to specify which value in the sequence to calculate, I’m now using it to specify the size of the cache. I’ve also added a bit more debug information, this time tracking the depth of the call stack.
Since I’m using the command-line argument to mean cache size, the for loop is now hard-coded to calculate the seventh through eleventh values in the sequence. At the bottom here, I’m taking advantage of a feature built into the decorator.
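The course’s actual code isn’t reproduced here, but a minimal sketch along those lines might look like this (the function name pingala, the all-ones base case, and the debug formatting are my own assumptions):

    import sys
    from functools import lru_cache

    depth = 0  # tracks how deep into the recursion each call is

    @lru_cache(maxsize=int(sys.argv[1]))  # cache size comes from the command line
    def pingala(n):
        """Fibonacci-like: each value is the sum of the previous seven."""
        global depth
        depth += 1
        print(f"{'  ' * depth}pingala({n})")  # debug: indent by call depth
        if n < 7:
            result = 1  # assumed base case: the first seven values are all 1
        else:
            result = sum(pingala(n - i) for i in range(1, 8))
        depth -= 1
        return result

    # Hard-coded to calculate the seventh through eleventh values
    for n in range(7, 12):
        print(n, pingala(n))

    print(pingala.cache_info())  # the feature built into the decorator

With that all-ones base case, the eighth value (n = 7) works out to 7, which lines up with the result described next.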
The result for the eighth number is a value of 7, which took about twice as long to calculate. But now the cache is primed. The ninth through eleventh values can rely on the cache, and speedy is their name.
Now let me run it again with a smaller cache size. Lots more to unpack this time around. Let me scroll back up. The calculation for 7 was the same, as was 8. The calculation for 9 is a little different. Because the cache is limited to five values, you start to get misses.
Because I’m still only calculating low values, this is all still happening quickly, but you can see how it could get out of control. There were only 4 hits this time versus 57 misses, and the cache is full.
By using the .cache_info() method, you can fine-tune your caching needs. If you’re finding that your cache is eating up a lot of memory, turning the maxsize parameter down will help you, at the potential cost of some of that speed, of course. I haven’t shown it here, but when playing around to write the course, I ran this same code without a cache. Even with small values being calculated, there was a four-fold speed-up gained by attaching the LRU cache.
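Here’s a toy example of that tuning loop, again my own sketch rather than the course code:

    from functools import lru_cache

    @lru_cache(maxsize=5)
    def square(x):
        return x * x

    for x in [1, 2, 3, 1, 2]:
        square(x)

    print(square.cache_info())
    # CacheInfo(hits=2, misses=3, maxsize=5, currsize=3)

    square.cache_clear()  # empties the cache and resets the statistics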
How is all this built? Well, you can see for yourself if you’re really curious. The underlying decorator is actually written in both Python and C. If you dig around on your computer and find the functools library, you’ll find the lru_cache decorator, followed by an import of the same thing from the _functools library. That second one is the C version, which overrides the Python version if it is available.
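From memory, the pattern in CPython’s functools.py looks roughly like this, so check the copy on your own machine:

    # Near the end of the pure-Python definition in functools.py:
    try:
        from _functools import _lru_cache_wrapper  # C implementation
    except ImportError:
        pass  # fall back to the pure-Python version defined above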
04:58 Even if your system is using the C version, you can still have a look at how the Python version is implemented. The code is more complicated than you might first guess. For starters, it’s thread-safe.
05:09 That means two threads can call your function without running into a race condition when they both invoke the decorator at the same time. As I mentioned before, the underlying cache is stored as a dictionary.
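As a quick illustration of that thread-safety claim (my own sketch, not course code), it’s safe to hit the same cached function from several threads at once:

    import threading
    from functools import lru_cache

    @lru_cache(maxsize=32)
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    # Several threads calling the same cached function concurrently
    threads = [threading.Thread(target=fib, args=(25,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(fib.cache_info())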
This gives good performance when you’re trying to determine whether a call is a hit or a miss. The function’s arguments are the keys to the dictionary. The arguments are mushed together into a single hashable sequence inside of a function logically enough named _make_key.
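The real _make_key has more optimizations, but a simplified sketch of the idea, assuming the behavior described above, could look like this:

    kwd_mark = (object(),)  # unique sentinel separating positional from keyword args

    def make_key(args, kwargs, typed=False):
        """Simplified take on functools._make_key: flatten a call into one hashable key."""
        key = args
        if kwargs:
            key += kwd_mark
            for item in kwargs.items():
                key += item
        if typed:
            key += tuple(type(v) for v in args)
            if kwargs:
                key += tuple(type(v) for v in kwargs.values())
        return key

    print(make_key((1, 2), {"a": 3}))      # (1, 2, <sentinel>, 'a', 3)
    print(make_key((3,), {}, typed=True))  # (3, <class 'int'>)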