Wow, the third blog post. I’ve been thinking of how to write this all out for some time, but I didn’t think I had this much information about Fibonacci numbers. And that’s not even getting too far into the math side of things.

Anyways, at this stage in the problem, we’re knee-deep. We’ve got three solutions to the Fibonacci problem, each with varying levels of efficiency. We’ll first compare them, but I’ll have to introduce a new tool into our toolbox.

Runtime Analysis

In computer science, we often want to know how an algorithm behaves in the worst case, with respect to the size of its input.

This is really useful because it tells us the approximate number of operations, and thus the load we're putting on our computers. For this reason, we use mathematical functions to bound the number of operations.

In our case, $n$ is the input, and we want to find the $n$th Fibonacci number.

If a function uses one for loop with $n$ iterations, then the function operates in $O(n)$ time.
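As a quick illustration (this is my own toy example, not code from the earlier posts), here's a function with a single loop of $n$ iterations, so it runs in $O(n)$ time:

```python
def sum_first_n(n):
    # One loop, n iterations: the work grows linearly with n, i.e., O(n).
    total = 0
    for i in range(1, n + 1):
        total += i
    return total
```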

Let's look at the previous solutions:

  • Recursive solution. If you remember the graph we drew of the recursive Fibonacci calls, you can imagine a worst case where every call spawns two more calls. In that case, the call tree would be a full binary tree (that's what they're called), which has about $2^n$ nodes in it. So we can say that our program is $O(2^n)$. How inefficient! The exponential function grows really fast, so you can probably see how bad this is.

Note: The tightest upper bound is actually $O(\phi^n)$. Not entirely surprising, but if you know some big O notation properties and want to look at the closed form we solved for in the last blog post, you can figure this out too.

  • Dynamic programming solution. It's easy to see that we compute each Fibonacci number once and only once (up to and including $n$), so we only do $n$ operations. So this solution is $O(n)$. Amazing! (The memoized recursive solution in the footnote is also $O(n)$.)

  • Closed form solution. It might be tempting to think you can compute this all at once (doing only one operation), but this is actually $O(\log n)$. Why? Well, unlike multiplication and addition (which have been optimized to be $O(1)$, more or less), exponentiation uses a divide-and-conquer technique. Instead of computing $a^b$ directly, we first compute $a^{b/2}$, then multiply that result by itself, and we keep halving the exponent the same way. Halving $b$ down to $1$ takes approximately $\log n$ steps, which gives us our bound of $O(\log n)$.

Strictly speaking, you can also naively compute the exponentiation by accumulating a running product with repeated multiplication, buuuut that's still $O(n)$, because you're multiplying $n$ times.
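To make the divide-and-conquer idea concrete, here's a minimal sketch of exponentiation by squaring (the function name and the integer-only setting are my own assumptions, not code from the earlier posts):

```python
def fast_pow(a, b):
    """Compute a**b using O(log b) multiplications (exponentiation by squaring)."""
    result = 1
    while b > 0:
        if b % 2 == 1:   # odd exponent: peel off one factor of a
            result *= a
        a *= a           # square the base
        b //= 2          # halve the exponent
    return result
```

Each pass through the loop halves the exponent, which is exactly where the $\log$ comes from.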

If you were to look at a graph of $n$ and $\log n$, you'd see that our new solution is much faster. When $n = 10^6$, $\log_{10} n$ is only $6$! However, the issue of conditioning from the last post still persists. How do we fix that while maintaining our amazing speed?

Matrices, exponentiation and systems of equations

Prelude

Before we get into matrices, I'd like to explain how I came to discover this solution. Story time!

It’s a Wednesday, and I’ve been waiting for this specific Wednesday to attend Math Club. Normally I do stuff on Wednesday nights, but this time around, I dropped my normal Wednesday plans to stick around on campus.

Before you shame me, just keep in mind that I was waiting in anticipation for the guest speaker, Dr. Khanin. I really enjoyed my introduction to discrete math the semester prior, so I was coerced by my girlfriend and others into attending this session. Also, free pizza!

The talk was actually not related to the Fibonacci numbers at all. It was about a famous problem in mathematics called the Basel problem. For anyone who knows the mathematician Leonhard Euler, this was the problem that brought him into public recognition, because it stumped many of the mathematicians of the time. It's a really interesting problem, and if you ever get the chance to hear the story, or even learn about the discovery, I'd definitely suggest checking it out.

But it was what was at the end that blew me away. He started showing relations between different mathematical concepts, and then he wrote this down:

$$\begin{bmatrix} f_{n+1} \\ f_{n} \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} f_{n} \\ f_{n-1} \end{bmatrix}$$

This is a way of writing the Fibonacci sequence as a system of equations. Immediately, I was wondering why there was a redundant equation. “Isn’t the Fibonacci recurrence just one equation?”

Solution

Yes, but with this you can solve for any Fibonacci number like this:

$$\begin{bmatrix} f_{n+1} \\ f_{n} \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}^n \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$

Crazy, isn’t it? And if you remember the remarks from the last part, exponentiation is $O(\log n)$. If we employ the same exponentiation method, adapted for matrices, we get something that won’t suffer from the conditioning issues (because we’re using integers instead of irrationals and fractions and all that mess).
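Here's a minimal sketch of that idea: the same squaring trick as before, applied to the $2 \times 2$ Fibonacci matrix. The function names are mine, and everything is plain integer arithmetic, so there are no conditioning worries:

```python
def mat_mult(A, B):
    """Multiply two 2x2 integer matrices."""
    return [
        [A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
        [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]],
    ]

def fib(n):
    """nth Fibonacci number via O(log n) matrix exponentiation."""
    result = [[1, 0], [0, 1]]   # identity matrix
    base = [[1, 1], [1, 0]]     # the Fibonacci matrix
    while n > 0:
        if n % 2 == 1:
            result = mat_mult(result, base)
        base = mat_mult(base, base)
        n //= 2
    # result is now the Fibonacci matrix to the nth power, whose
    # off-diagonal entry is exactly f_n.
    return result[0][1]
```

Only $O(\log n)$ matrix multiplications happen, and each one is a constant number of integer operations, so the whole thing is exact and fast.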

Surprisingly, that just brings us to the end of the journey for efficiency. There are always more optimizations to be made, but none as drastic as the ones I’ve shown. Thanks for joining me on this journey!

In the next blog post I’ll tie up loose ends and other miscellaneous facts about Fibonacci numbers.