
2.2: The O Notation


    Definition

    The \(O\) notation (pronounced "big-oh") is the formal way of expressing an upper bound on an algorithm's running time. It measures the longest amount of time the algorithm could possibly take to complete, so it is commonly read as describing the "worst-case scenario" of a program.

    More formally, for non-negative functions \(f(n)\) and \(g(n)\), if there exist an integer \(n_{0}\) and a constant \(c>0\) such that for all integers \(n>n_{0}\), \(f(n)\leq cg(n)\), then \(f(n)\) is big-O of \(g(n)\). This is denoted \(f(n)=O(g(n))\). If graphed, \(cg(n)\) serves as an upper bound to the curve you are analyzing, \(f(n)\).

    Note that if \(f\) takes on only finite values (as it normally does), this definition implies that there exists some constant \(C\) (potentially larger than \(c\)) such that \(f(n)\leq Cg(n)\) for all values of \(n\). An appropriate value for \(C\) is the maximum of \(c\) and \(\underset{1\leq n\leq n_{0}}{\max}{\dfrac {f(n)}{g(n)}}\).

    Theory Examples

    So, let's take an example of Big-O. Say that \(f(n)=2n+8\) and \(g(n)=n^{2}\). Can we find a threshold \(n_{0}\) such that \(2n+8\leq n^{2}\) for all \(n\geq n_{0}\)? The number \(4\) works here, giving us \(16\leq 16\), and the inequality continues to hold for every \(n\) greater than \(4\). Since we are generalizing for large values of \(n\), and small values like \(\{1,2,3\}\) aren't that important, we can say that \(f(n)\) grows more slowly than \(g(n)\); that is, \(f(n)\) is bounded above by \(g(n)\) for all sufficiently large \(n\) (though not necessarily for small \(n\)).
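This bound can be checked numerically. A minimal Python sketch (the function names `f` and `g` simply mirror the text's definitions, with witnesses \(n_0 = 4\) and \(c = 1\)):

```python
# Numerically verify that f(n) = 2n + 8 is bounded above by
# g(n) = n^2 for all n >= n0 = 4, taking c = 1.

def f(n):
    return 2 * n + 8

def g(n):
    return n ** 2

n0, c = 4, 1  # the witnesses described in the text

# The inequality f(n) <= c * g(n) holds from n0 onward...
assert all(f(n) <= c * g(n) for n in range(n0, 10_000))

# ...but fails for small n (e.g. f(1) = 10 > g(1) = 1),
# which is exactly why the definition only requires n > n0.
assert f(1) > g(1)
```

Running the script completes silently because every assertion holds, confirming the hand calculation above.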

    It could then be said that \(f(n)\) runs in \(O(n^{2})\) time: "f-of-n runs in Big-O of n-squared time".

    To find the upper bound - the Big-O time - assuming we know that \(f(n)\) is exactly \(2n+8\), we can take a few shortcuts. First, we can drop additive constants from the runtime, since at some value of \(n\) they become irrelevant; this makes \(f(n)=2n\). Second, for convenience of comparison, we drop constant multipliers - in this case, the \(2\) - leaving \(f(n)=n\). We can therefore also say that \(f(n)\) runs in \(O(n)\) time, which puts a tighter (closer) upper bound on the estimate.

    Practical Examples

    \(O(n)\): printing a list of \(n\) items to the screen, looking at each item once.
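A sketch of this linear-time pattern in Python (the function name is ours, for illustration):

```python
# O(n): visit each of the n items exactly once.

def print_all(items):
    """Prints every item; the work grows linearly with len(items)."""
    count = 0
    for item in items:
        print(item)
        count += 1
    return count  # exactly n items visited
```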

    \(O(\log _{2}{n})\): taking a list of \(n\) items, cutting it in half repeatedly until there's only one item left.
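The repeated-halving pattern can be sketched as follows (again, the function name is ours; binary search is the classic algorithm with this shape):

```python
# O(log2 n): cut the list in half until one item remains.

def halvings(items):
    """Counts how many times a list can be halved before one item is left."""
    steps = 0
    while len(items) > 1:
        items = items[: len(items) // 2]  # keep one half, discard the other
        steps += 1
    return steps
```

For a list of 1024 items this returns 10, matching \(\log_{2} 1024 = 10\).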

    \(O(n^{2})\): taking a list of \(n\) items, and comparing every item to every other item.
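The all-pairs comparison pattern, sketched in Python (a hypothetical helper that only counts the comparisons rather than doing anything with them):

```python
# O(n^2): compare every item to every other item.

def count_comparisons(items):
    """Counts pairwise comparisons between distinct items: n * (n - 1)."""
    n = len(items)
    comparisons = 0
    for i in range(n):
        for j in range(n):
            if i != j:
                comparisons += 1  # would compare items[i] with items[j]
    return comparisons
```

Doubling the list size roughly quadruples the count, which is the signature of quadratic growth.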


    This page titled 2.2: The O Notation is shared under a CC BY-SA license and was authored, remixed, and/or curated by Wikibooks - Data Structures (Wikipedia) .
