
6.3: Recursive Functions on Nonnegative Integers

    • Eric Lehman, F. Thomson Leighton, & Albert R. Meyer
    • Google and Massachusetts Institute of Technology via MIT OpenCourseWare

    The nonnegative integers can be understood as a recursive data type.

    Definition \(\PageIndex{1}\)

    The set, \(\mathbb{N}\), is a data type defined recursively as:

    • \(0 \in \mathbb{N}\).
    • If \(n \in \mathbb{N}\), then the successor, \(n + 1\), of \(n\) is in \(\mathbb{N}\).

    The point here is to make it clear that ordinary induction is simply the special case of structural induction on the recursive Definition 6.3.1. This also justifies the familiar recursive definitions of functions on the nonnegative integers.

    Some Standard Recursive Functions on \(\mathbb{N}\)

    Example \(\PageIndex{2}\) The factorial function

    This function is often written “\(n!\).” You will see a lot of it in later chapters. Here, we’ll use the notation \(\text{fac}(n)\):

    • \(\text{fac}(0) ::= 1\).
    • \(\text{fac}(n+1) ::= (n+1) \cdot \text{fac}(n) \text{ for } n \geq 0.\)
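As an illustrative sketch (not part of the original text), the two rules translate directly into a recursive Python function, with one branch per rule:

```python
def fac(n):
    """Factorial via the recursive rules: fac(0) ::= 1, fac(n+1) ::= (n+1) * fac(n)."""
    if n == 0:
        return 1              # base case
    return n * fac(n - 1)     # recursive step on the smaller argument n - 1

print(fac(5))  # 5 * 4 * 3 * 2 * 1 = 120
```

Note that the recursive call is always on a strictly smaller nonnegative integer, so the evaluation must reach the base case.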

    Example \(\PageIndex{3}\) The Fibonacci numbers

    Fibonacci numbers arose out of an effort 800 years ago to model population growth. They have a continuing fan club of people captivated by their extraordinary properties (see Problems 5.8, 5.21, 5.26). The \(n\)th Fibonacci number, \(F(n)\), can be defined recursively by:

    \[\nonumber \begin{array}{l}
    F(0)::=0, \\
    F(1)::=1, \\
    F(n)::=F(n-1)+F(n-2) \quad \text { for } n \geq 2 .
    \end{array}\]

    Here the recursive step starts at \(n = 2\), with base cases for both 0 and 1. Both base cases are needed because the recursive step relies on the two previous values.

    What is \(F(4)?\) Well, \(F(2)=F(1)+F(0)=1, F(3)=F(2)+F(1)=2\), so \(F(4)=3\). The sequence starts out \(0,1,1,2,3,5,8,13,21, \ldots\)
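The three rules can be sketched in Python (an illustration, not from the text) exactly as written, one branch per rule:

```python
def fib(n):
    """Fibonacci numbers: F(0) ::= 0, F(1) ::= 1, F(n) ::= F(n-1) + F(n-2) for n >= 2."""
    if n == 0:
        return 0                       # first base case
    if n == 1:
        return 1                       # second base case
    return fib(n - 1) + fib(n - 2)     # recursive step for n >= 2

print([fib(i) for i in range(9)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21]
```

This direct translation recomputes the same values many times, so it takes exponentially many calls; it is meant to mirror the definition, not to be efficient.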

    Example \(\PageIndex{4}\) Summation notation

    Let "\(S(n)\)" abbreviate the expression "\(\sum_{i=1}^{n} f(i).\)" We can recursively define \(S(n)\) with the rules

    • \(S(0)::=0\)
    • \(S(n+1)::=f(n+1)+S(n)\) for \(n \geq 0\).
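These two rules can likewise be sketched in Python (an illustration, not from the text), taking the summand \(f\) as a parameter:

```python
def S(f, n):
    """Recursive summation: S(0) ::= 0, S(n+1) ::= f(n+1) + S(n)."""
    if n == 0:
        return 0             # base case: the empty sum
    return f(n) + S(f, n - 1)  # peel off the top term f(n)

print(S(lambda i: i, 100))  # 1 + 2 + ... + 100 = 5050
```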

    Ill-formed Function Definitions

    There are some other blunders to watch out for when defining functions recursively. The main problems come when recursive definitions don’t follow the recursive definition of the underlying data type. Below are some function specifications that resemble good definitions of functions on the nonnegative integers, but really aren’t.

    \[\label{6.3.1} f_{1}(n)::=2+f_{1}(n-1).\]

    This “definition” has no base case. If some function, \(f_{1}\), satisfied (\ref{6.3.1}), so would a function obtained by adding a constant to the value of \(f_{1}\). So equation (\ref{6.3.1}) does not uniquely define an \(f_{1}\).

    \[\label{6.3.2} f_{2}(n)::=\left\{\begin{array}{ll}0, & \text { if } n=0 \\ f_{2}(n+1) & \text { otherwise }\end{array}\right.\]

    This "definition" has a base case, but still doesn't uniquely determine \(f_{2}\). Any function that is 0 at 0 and constant everywhere else would satisfy the specification, so (\ref{6.3.2}) also does not uniquely define anything.

    In a typical programming language, evaluation of \(f_{2}(1)\) would begin with a recursive call of \(f_{2}(2)\), which would lead to a recursive call of \(f_{2}(3), \ldots\) with recursive calls continuing without end. This "operational" approach interprets (\ref{6.3.2}) as defining a partial function, \(f_{2}\), that is undefined everywhere but 0.
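The runaway evaluation can be observed directly. In this sketch (an illustration, not from the text), Python's recursion limit cuts off the otherwise endless chain of calls:

```python
import sys

def f2(n):
    if n == 0:
        return 0          # base case
    return f2(n + 1)      # recurses on a *larger* argument

assert f2(0) == 0         # the base case itself is fine
sys.setrecursionlimit(1000)
try:
    f2(1)                 # calls f2(2), f2(3), ... never reaching the base case
except RecursionError:
    print("f2(1) does not terminate; Python gives up at the recursion limit")
```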

    \[\label{6.3.3} f_{3}(n)::=\left\{\begin{array}{ll}
    0, & \text { if } n \text { is divisible by } 2 \\
    1, & \text { if } n \text { is divisible by } 3 \\
    2, & \text { otherwise }
    \end{array}\right.\]

    This "definition" is inconsistent: it requires \(f_{3}(6)=0\) and \(f_{3}(6)=1\), so (\ref{6.3.3}) doesn't define anything.

    Mathematicians have been wondering for a while about the following function specification, known as the Collatz conjecture:

    \[\label{6.3.4} f_{4}(n)::=\left\{\begin{array}{ll}
    1, & \text { if } n \leq 1 \\
    f_{4}(n / 2) & \text { if } n>1 \text { is even } \\
    f_{4}(3 n+1) & \text { if } n>1 \text { is odd }
    \end{array}\right.\]

    For example, \(f_{4}(3)=1\) because

    \[\nonumber f_{4}(3)::=f_{4}(10)::=f_{4}(5)::=f_{4}(16)::=f_{4}(8)::=f_{4}(4)::=f_{4}(2)::=f_{4}(1)::=1.\]

    The constant function equal to 1 will satisfy (\ref{6.3.4}), but it's not known if another function does as well. The problem is that the third case specifies \(f_{4}(n)\) in terms of \(f_{4}\) at arguments larger than \(n\), and so cannot be justified by induction on \(\mathbb{N}\). It's known that any \(f_{4}\) satisfying (\ref{6.3.4}) equals 1 for all \(n\) up to over \(10^{18}\).
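The evaluation chain for \(f_{4}(3)\) shown above can be reproduced with a short sketch (an illustration, not from the text) that records every argument visited:

```python
def collatz_trajectory(n):
    """Arguments visited when evaluating f4(n) operationally.
    Whether this loop terminates for every n > 0 is the open conjecture."""
    seq = [n]
    while n > 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

print(collatz_trajectory(3))  # [3, 10, 5, 16, 8, 4, 2, 1]
```

The printed trajectory matches the chain \(f_{4}(3) = f_{4}(10) = f_{4}(5) = \cdots = f_{4}(1) = 1\) in the text.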

    A final example is the Ackermann function, an extremely fast-growing function of two nonnegative arguments. Its inverse is correspondingly slow-growing: it grows more slowly than \(\log n, \log \log n, \log \log \log n, \ldots\), but it does grow unboundedly. This inverse actually comes up in the analysis of a useful, highly efficient procedure known as the Union-Find algorithm. Union-Find was conjectured to run in a number of steps that grew linearly in the size of its input, but it turned out to be "linear" with a slowly growing coefficient nearly equal to the inverse Ackermann function. This means that pragmatically, Union-Find is linear, since this coefficient is less than 5 for any input that could conceivably come up.

    The Ackermann function can be defined recursively as the function, \(A\), given by the following rules:

    \[\begin{align} A(m, n)&=2n & \text{if } m=0 \text{ or } n \leq 1, \\ A(m, n)&=A(m-1, A(m, n-1)), & \text{otherwise.} \end{align}\]

    Now these rules are unusual because the definition of \(A(m, n)\) involves an evaluation of \(A\) at arguments that may be a lot bigger than \(m\) and \(n\). The definition of \(f_{2}\) above showed how defining function values at small arguments in terms of values at larger ones can easily lead to nonterminating evaluations. The definition of the Ackermann function is actually OK, but proving this takes some ingenuity (see Problem 6.17).
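For very small arguments the rules can be evaluated directly. This sketch (an illustration, not from the text) adds memoization via `functools.lru_cache` to avoid recomputing values; even so, the function explodes so quickly that only tiny arguments are feasible:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def A(m, n):
    """Ackermann function: A(m, n) = 2n if m = 0 or n <= 1,
    else A(m, n) = A(m-1, A(m, n-1))."""
    if m == 0 or n <= 1:
        return 2 * n
    return A(m - 1, A(m, n - 1))

print(A(1, 4), A(2, 3), A(3, 3))  # 16 16 65536
```

For instance, \(A(1, n) = 2^n\) and \(A(2, n)\) is a tower of exponentials of height \(n\); already \(A(4, 4)\) is far too large to compute.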


    This page titled 6.3: Recursive Functions on Nonnegative Integers is shared under a CC BY-NC-SA license and was authored, remixed, and/or curated by Eric Lehman, F. Thomson Leighton, & Albert R. Meyer (MIT OpenCourseWare) .
