# Fibonacci and Optimizations

In this lesson, you’ll go over the **Fibonacci** series and see how to solve it recursively. You’ll also learn how to use a **cache** to drastically improve runtime:

```
from functools import lru_cache


def fibonacci_recursive(n):
    # base cases
    if n == 0:
        return 0
    elif n == 1:
        return 1
    return fibonacci_recursive(n - 1) + fibonacci_recursive(n - 2)


@lru_cache(maxsize=None)
def fibonacci_recursive_optimized(n):
    # base cases
    if n == 0:
        return 0
    elif n == 1:
        return 1
    return fibonacci_recursive_optimized(n - 1) + fibonacci_recursive_optimized(n - 2)
```

**00:00**
Let’s talk about the Fibonacci numbers. The Fibonacci numbers were originally defined by the Italian mathematician Fibonacci to model the growth of rabbit populations.

**00:10**
The number of pairs of rabbits in a given year is equal to the sum of the numbers of pairs in the two previous years. So for any *n*-th year, *F*(*n*) is equal to *F*(*n* - 1) + *F*(*n* - 2). The base cases are *F*(0) = 0—the number of pairs of rabbits in the zeroth year is 0—and *F*(1) = 1—the number of pairs in the first year is 1.

**00:39**
The first 10 Fibonacci numbers are 0, 1, 1, 2, 3, 5, 8, 13, 21, and 34—for example, 3 is 1 + 2, 5 is 2 + 3, and 8 is 3 + 5. Let’s code this up using recursion and then talk about some optimizations.
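As a quick sanity check on those values, here’s a small iterative sketch (the helper name `first_fibonacci` is made up for this example) that builds the sequence directly from the recurrence:

```python
def first_fibonacci(count):
    # Build the sequence iteratively from F(n) = F(n - 1) + F(n - 2),
    # starting from the base cases F(0) = 0 and F(1) = 1.
    numbers = [0, 1]
    while len(numbers) < count:
        numbers.append(numbers[-1] + numbers[-2])
    return numbers[:count]

print(first_fibonacci(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```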

**00:55**
So let’s define the `fibonacci_recursive()` function. Fib-o-na-cci.

**01:04**
So, we already know the base cases. The base cases are `if n == 0:` `return 0` and `elif n == 1:` `return 1`. We could combine these into one case, like `if n == 0 or n == 1:` `return n`, but we’ll just leave them separated to make it really clear.

**01:23**
And then we call `fibonacci_recursive(n - 1) + fibonacci_recursive(n - 2)`. Let’s actually just put these on multiple lines so it’s easier to see. Let’s load this file and call `fibonacci_recursive()`: of `1` is `1`, of `2` is `1`, of `3` is `2`, and of `4` is `3`.

**01:45**
And there we go. It seems pretty good. So, before we look at optimizations, let’s actually look at how many times the function is called with different arguments.

**02:01**
So let’s say `"fibonacci_recursive called with n ="`, like that. So now when I load my file, `fibonacci_recursive(1)`,

**02:17**
it’s called with `1`. It hits the base case, returns `1`. Then `fibonacci_recursive(2)` goes here, calls `fibonacci_recursive(1)` and then `fibonacci_recursive(0)`. `fibonacci_recursive(3)` goes here.

**02:30**
`fibonacci_recursive(3)` goes down here, calls `fibonacci_recursive(2)`, which prints out this segment because we already know that that’s what it’s going to do.

**02:41**
Then we do `fibonacci_recursive(3 - 2)`, which is `1`. `fibonacci_recursive(5)`—a lot of duplicate work. Like, look, we already evaluated `n` equals `1` here—why are we evaluating it again here?

**02:57**
`fibonacci_recursive(10)`—just, yeah. A lot of duplicate work. What’s even more interesting, let’s try a really large number—`1000`. Okay, `1000` is not that large, but it’s a pretty large number when we’re doing a lot of duplicate work like this.

**03:12**
And we’re seeing it’s taking a really, really long time. I’m just going to stop it and exit out of here. That was just taking way too long. So, how do we make it better? Well, let’s use something called a cache.
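To see just how much duplicate work the naive version does, one option is a small counter sketch (the `fibonacci_counted` name and the global counter are purely illustrative):

```python
call_count = 0

def fibonacci_counted(n):
    # Same naive recursion as fibonacci_recursive(), but tally every call
    # so we can see the duplicate work explode.
    global call_count
    call_count += 1
    if n == 0:
        return 0
    elif n == 1:
        return 1
    return fibonacci_counted(n - 1) + fibonacci_counted(n - 2)

result = fibonacci_counted(20)
print(result, call_count)  # 6765 21891 -> 21,891 calls just to compute F(20)
```

The call count grows roughly as fast as the Fibonacci numbers themselves, which is why `1000` was never going to finish.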

**03:24**
A cache is useful if you realize you’re doing a lot of repeated work and you don’t need to be. So what we do is we save the value once we’ve evaluated it into a cache, and then whenever we have to evaluate it again, we just look into the cache and return the value if we’ve already evaluated it—if it exists in our cache. I’m not going to go too in-depth with caches.
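That idea can be sketched by hand with a plain dictionary before reaching for the built-in decorator (the `fibonacci_cached` name and the module-level `cache` dict are just for this example):

```python
# Seed the cache with the base cases; computed values get added as we go.
cache = {0: 0, 1: 1}

def fibonacci_cached(n):
    if n not in cache:
        # Only compute a value the first time; afterwards it's a lookup.
        cache[n] = fibonacci_cached(n - 1) + fibonacci_cached(n - 2)
    return cache[n]

print(fibonacci_cached(10))  # 55
```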

**03:48**
There’s a really useful built-in cache decorator that we can just use for the purposes of this demo. We can define something like `fibonacci_recursive_optimized()`.

**04:02**
We put our decorator on top, `@lru_cache()` with `maxsize=None`,

**04:09**
then we copy all this code here. Oops. Remember to change all instances of the function calls. And then let’s load our file. So now when we do `fibonacci_recursive_optimized(1)`, we only call it with `n` equals `1`.

**04:30**
Let’s wrap our text editor a little bit, like that. Sorry, a little hard to read. There we go. So, we’ll print it. We’ll do `fibonacci_recursive_optimized(n - 1) + fibonacci_recursive_optimized(n - 2)`.

**04:46**
Call it with `1`, then call it with `2`. We call it with `n` equals `2` and `n` equals `0`. Then we’ll call it with `3`, which adds `n = 3`, and with `4`, which adds `n = 4`.

**05:00**
Look how we don’t actually have to call it again with `n` equals `4`, `3`, `2`, et cetera, because it’s already been evaluated. Let’s do a number like `10`. It’ll evaluate `10`, `9`, `8`, `7`, `6`, because we haven’t evaluated those before, and then we don’t evaluate any more. Let’s try `1000`. Wow, look at that.
. Wow, look at that.

**05:19**
That was so fast. I mean, it printed out from `1000` all the way down to `10` or so, because those were the values we hadn’t already evaluated, and it evaluated Fibonacci of `1000`.

**05:30**
So you can see the power of caching: optimizing our recursive calls speeds things up drastically.
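One way to quantify that power is `cache_info()`, which the `lru_cache` wrapper exposes. For example, computing the 30th Fibonacci number computes each argument exactly once:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci_recursive_optimized(n):
    # base cases
    if n == 0:
        return 0
    elif n == 1:
        return 1
    return fibonacci_recursive_optimized(n - 1) + fibonacci_recursive_optimized(n - 2)

fibonacci_recursive_optimized(30)
# The wrapper tracks its own hit/miss statistics:
print(fibonacci_recursive_optimized.cache_info())
# CacheInfo(hits=28, misses=31, maxsize=None, currsize=31)
```

There are exactly 31 misses, one per distinct argument from 0 to 30, instead of the roughly 2.7 million calls the uncached version would make.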

**Bartosz Zaczyński** RP Team on Feb. 23, 2022

**@matteoarellano** I’m not sure if I understood your question correctly, but there’s no inherent concept of order when you use the cache. The `@lru_cache` decorator replaces your function with a wrapper function, which maintains a map of argument values and corresponding calculated results. If a particular combination of argument values is already present in the cache, then the wrapper returns a suitable value calculated previously without calling your function. Otherwise, it will call your function and keep the result for future invocations with this specific set of arguments.
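That wrapper can be approximated by hand. This simplified sketch (the `memoize` name is made up for this example) keeps a dict keyed by the positional arguments, with no order involved:

```python
import functools

def memoize(func):
    # Simplified stand-in for @lru_cache(maxsize=None): a map from
    # argument tuples to previously computed results.
    cache = {}

    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            # First time seeing these arguments: call through and remember.
            cache[args] = func(*args)
        return cache[args]

    return wrapper

@memoize
def add(a, b):
    print(f"Calling add({a}, {b})...")
    return a + b

print(add(1, 2))  # prints "Calling add(1, 2)..." and then 3
print(add(1, 2))  # cache hit: prints only 3
```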

You can test this out with the following code snippet. By the way, `lru_cache(maxsize=None)` is equivalent to `@cache`, which is available since Python 3.9:

```
>>> from functools import cache
>>> @cache
... def add(a, b):
...     print(f"Calling add({a}, {b})...")
...     return a + b
...
>>> add(1, 2)
Calling add(1, 2)...
3
>>> add(1, 2)
3
>>> add(1, 3)
Calling add(1, 3)...
4
>>> add(1, 3)
4
```

Notice that the underlying function only gets called the first time you pass a particular set of arguments. Subsequent calls with the same arguments will reuse the remembered value.

**matteoarellano** on Feb. 24, 2022

This is amazing explanation. Thank you so much!


**matteoarellano** on Feb. 23, 2022

Does the cache work in sequential ascending order? Let’s say you evaluate first on `n=500` on another type of function that is not Fibonacci—does the same principle apply, or only for a specific type of function?