# 21.5: A Feel for Recurrences

We’ve guessed and verified, plugged and chugged, found roots, computed integrals, and solved linear systems and exponential equations. Now let’s step back and look for some rules of thumb. What kinds of recurrences have what sorts of solutions?

Here are some recurrences we solved earlier:

\[\nonumber \begin{array}{lll} & \text{Recurrence} & \text{Solution} \\ \text{Towers of Hanoi} & T_n = 2T_{n-1} + 1 & T_n \sim 2^n \\ \text{Merge Sort} & T_n = 2T_{n/2} + n - 1 & T_n \sim n \log n \\ \text{Hanoi variation} & T_n = 2T_{n-1} + n & T_n \sim 2 \cdot 2^n \\ \text{Fibonacci} & T_n = T_{n-1} + T_{n-2} & T_n \sim (1.618 \ldots)^{n+1} / \sqrt{5} \end{array}\]

Notice that the recurrence equations for Towers of Hanoi and Merge Sort are somewhat similar, but the solutions are radically different. Merge Sorting \(n = 64\) items takes a few hundred comparisons, while moving \(n = 64\) disks takes more than \(10^{19}\) steps!
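
The gap is easy to check numerically. Here is a small sketch that iterates both recurrences, assuming the standard base cases \(T_1 = 1\) for Towers of Hanoi and \(T_1 = 0\) for Merge Sort:

```python
def hanoi(n):
    # Towers of Hanoi: T_1 = 1, T_n = 2*T_{n-1} + 1
    t = 1
    for _ in range(n - 1):
        t = 2 * t + 1
    return t

def merge_sort(n):
    # Merge Sort comparisons: T_1 = 0, T_n = 2*T_{n/2} + n - 1
    # (n assumed to be a power of 2)
    if n == 1:
        return 0
    return 2 * merge_sort(n // 2) + n - 1

print(hanoi(64))       # 2^64 - 1, more than 10^19 steps
print(merge_sort(64))  # 321 comparisons
```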

Each recurrence has one strength and one weakness. In the Towers of Hanoi, we broke a problem of size \(n\) into two subproblems of size \(n - 1\) (which is large), but needed only 1 additional step (which is small). In Merge Sort, we divided the problem of size \(n\) into two subproblems of size \(n/2\) (which is small), but needed \(n - 1\) additional steps (which is large). Yet Merge Sort is faster by a mile!

This suggests that *generating smaller subproblems is far more important to algorithmic speed than reducing the additional steps per recursive call*. For example, shifting to the variation of Towers of Hanoi increased the last term from \(+1\) to \(+n\), but the solution only doubled. And one of the two subproblems in the Fibonacci recurrence is just *slightly* smaller than in Towers of Hanoi (size \(n - 2\) instead of \(n - 1\)). Yet the solution is exponentially smaller! More generally, linear recurrences (which have big subproblems) typically have exponential solutions, while divide-and-conquer recurrences (which have small subproblems) usually have solutions bounded above by a polynomial.
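
These claims are easy to check numerically. The sketch below iterates the three linear recurrences from the table above; the base cases (\(T_1 = 1\) for both Hanoi recurrences, \(T_1 = T_2 = 1\) for Fibonacci) are assumptions for illustration:

```python
def hanoi(n):              # T_n = 2*T_{n-1} + 1, T_1 = 1
    t = 1
    for _ in range(n - 1):
        t = 2 * t + 1
    return t

def hanoi_variation(n):    # T_n = 2*T_{n-1} + n, T_1 = 1
    t = 1
    for k in range(2, n + 1):
        t = 2 * t + k
    return t

def fib(n):                # T_n = T_{n-1} + T_{n-2}, T_1 = T_2 = 1
    a, b = 1, 1
    for _ in range(n - 2):
        a, b = b, a + b
    return b

n = 30
# Raising the additive term from +1 to +n roughly doubles the solution...
print(hanoi_variation(n) / hanoi(n))   # close to 2
# ...but shrinking one subproblem from n-1 to n-2 helps exponentially:
print(hanoi(n) / fib(n))               # more than a thousandfold gap
```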

All the examples listed above break a problem of size \(n\) into two smaller problems. How does the number of subproblems affect the solution? For example, suppose we increased the number of subproblems in Towers of Hanoi from 2 to 3, giving this recurrence:

\[\nonumber T_n = 3T_{n - 1} + 1\]

This increases the root of the characteristic equation from 2 to 3, which raises the solution exponentially, from \(\Theta(2^n)\) to \(\Theta(3^n)\).
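
Iterating the recurrence confirms the jump; the closed form \(T_n = (3^n - 1)/2\) follows from the assumed base case \(T_1 = 1\):

```python
def hanoi3(n):
    # T_n = 3*T_{n-1} + 1, with assumed base case T_1 = 1
    t = 1
    for _ in range(n - 1):
        t = 3 * t + 1
    return t

# Matches the closed form (3^n - 1) / 2, which is Theta(3^n):
print(hanoi3(10))          # 29524
print((3 ** 10 - 1) // 2)  # 29524
```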

Divide-and-conquer recurrences are also sensitive to the number of subproblems. For example, for this generalization of the Merge Sort recurrence:

\[\begin{aligned} T_1 &= 0 \\ T_n &= aT_{n/2} + n - 1, \end{aligned}\]

the Akra-Bazzi formula gives:

\[\nonumber T_n = \left\{\begin{array}{ll} \Theta(n) & \text { for } a < 2 \\ \Theta(n \log n) & \text { for } a = 2 \\ \Theta(n^{\log_2 a}) & \text { for } a > 2. \end{array}\right.\]

So the solution takes on three completely different forms as \(a\) goes from 1.99 to 2.01!
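
Unrolling the recurrence for sample values of \(a\) shows the three regimes directly. In this sketch \(n\) is restricted to powers of 2; the closed forms in the comments are straightforward to verify by substitution:

```python
def T(n, a):
    # T_1 = 0, T_n = a*T_{n/2} + n - 1, for n a power of 2
    if n == 1:
        return 0
    return a * T(n // 2, a) + n - 1

n = 2 ** 16
print(T(n, 1))  # 2n - log2(n) - 2    -> Theta(n)
print(T(n, 2))  # n*log2(n) - n + 1   -> Theta(n log n)
print(T(n, 4))  # (2n^2 - 3n + 1) / 3 -> Theta(n^2) = Theta(n^{log2 4})
```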

How do boundary conditions affect the solution to a recurrence? We’ve seen that they are almost irrelevant for divide-and-conquer recurrences. For linear recurrences, the solution is usually dominated by an exponential whose base is determined by the number and size of subproblems. Boundary conditions matter greatly only when they give the dominant term a zero coefficient, which changes the asymptotic solution.
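
The Towers of Hanoi recurrence makes this concrete: a generic boundary value only rescales the \(2^n\) term, but the special value \(T_1 = -1\) zeroes its coefficient, and the solution stops growing altogether. A sketch:

```python
def hanoi_from(t1, n):
    # T_n = 2*T_{n-1} + 1 with boundary condition T_1 = t1
    t = t1
    for _ in range(n - 1):
        t = 2 * t + 1
    return t

print(hanoi_from(1, 20))    # 2^20 - 1: the usual Theta(2^n) solution
print(hanoi_from(100, 20))  # 101 * 2^19 - 1: same growth, bigger constant
print(hanoi_from(-1, 20))   # -1: the coefficient of 2^n vanishes
```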

So now we have a rule of thumb! The performance of a recursive procedure is usually dictated by the size and number of subproblems, rather than the amount of work per recursive call or time spent at the base of the recursion. In particular, if the subproblems are smaller than the original by only an additive amount (as in \(n - 1\) or \(n - 2\)), the solution is most often exponential. But if the subproblems are only a fraction the size of the original (as in \(n/2\)), then the solution is typically bounded above by a polynomial.