
4.2: What Qualifies as "Negligible" Success Probability?


    It is not enough to consider only the running time of an attack. For example, consider an attacker who just tries to guess a victim’s secret key, making a single guess. This attack is extremely cheap, but it still has a nonzero chance of breaking security!

    In addition to an attack’s running time, we also need to consider its success probability. We don’t want to worry about attacks that are as expensive as a brute-force attack, and we don’t want to worry about attacks whose success probability is as low as a blind-guess attack.

    An attack with success probability \(2^{-128}\) should not really count as an attack, but an attack with success probability \(1 / 2\) should. Somewhere in between \(2^{-128}\) and \(2^{-1}\) we need to find a reasonable place to draw a line.

    Example

Now we are dealing with extremely tiny probabilities that can be hard to visualize. Again, it can be helpful to conceptualize these probabilities with a more familiar reference:

     

probability          equivalent
\(2^{-10}\)          full house in 5-card poker
\(2^{-20}\)          royal flush in 5-card poker
\(2^{-28}\)          you win this week’s Powerball jackpot
\(2^{-40}\)          royal flush in 2 consecutive poker games
\(2^{-60}\)          the next meteorite that hits Earth lands in one specific, tiny square
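If you want to see where these powers of two come from, here is a small Python sketch (an illustration, not part of the original text) that recomputes them from standard counting arguments; the Powerball figure uses the currently advertised 1-in-292,201,338 jackpot odds, and the results are only rough matches to the rounded powers of two in the table.

```python
# Recomputing the table's probabilities as (negative) powers of two.
# Assumes standard 5-card poker counting and 1-in-292,201,338 Powerball odds.
from math import comb, log2

hands = comb(52, 5)                              # 2,598,960 five-card hands
full_house = 13 * comb(4, 3) * 12 * comb(4, 2)   # 3,744 ways to make a full house
royal_flush = 4                                  # one royal flush per suit

print(log2(full_house / hands))                  # about -9.4   (roughly 2^-10)
print(log2(royal_flush / hands))                 # about -19.3  (roughly 2^-20)
print(log2(1 / 292_201_338))                     # about -28.1  (roughly 2^-28)
print(log2((royal_flush / hands) ** 2))          # about -38.6  (roughly 2^-40)
```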

As before, it is not clear exactly where to draw the line between "reasonable" and "unreasonable" success probability for an attack. Just as we did for running time, we can use an asymptotic approach to define when a probability is negligibly small: where "polynomial time" considers how fast an algorithm’s running time approaches infinity as its input grows, here we consider how fast a success probability approaches zero as the security parameter grows.

    In a scheme with \(\lambda\)-bit keys, a blind-guessing attack succeeds with probability \(1 / 2^{\lambda}\). Now what about an attacker who makes 2 blind guesses, or \(\lambda\) guesses, or \(\lambda^{42}\) guesses? Such an attacker would still run in polynomial time, and has success probability \(2 / 2^{\lambda}, \lambda / 2^{\lambda}\), or \(\lambda^{42} / 2^{\lambda}\). However, no matter what polynomial you put in the numerator, the probability still goes to zero. Indeed, \(1 / 2^{\lambda}\) approaches zero so fast that no polynomial can "rescue" it; or, in other words, it approaches zero faster than 1 over any polynomial. This idea leads to our formal definition:
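To get a feel for this, here is a small Python sketch (an illustration, not from the original text) that tracks the success bound of these guessing attacks in log2, to avoid floating-point underflow. A value of \(-121\) in the output means the bound is \(2^{-121}\); a non-negative value just means the number of guesses already covers the whole keyspace, which for \(\lambda^{42}\) guesses only happens at small key lengths.

```python
# log2 of the blind-guessing success bound (#guesses / 2^lambda) for a
# lambda-bit key. Both columns eventually plunge toward -infinity, because
# 2^lambda outgrows every polynomial number of guesses.
from math import log2

print(f"{'lambda':>6} {'lambda guesses':>16} {'lambda^42 guesses':>18}")
for lam in (128, 256, 512, 1024, 2048):
    single = log2(lam) - lam          # log2( lambda    / 2^lambda )
    poly   = 42 * log2(lam) - lam     # log2( lambda^42 / 2^lambda )
    print(f"{lam:>6} {single:>16.0f} {poly:>18.0f}")
```

For \(\lambda = 512\) the \(\lambda^{42}\)-guess bound is already around \(2^{-134}\), and it only falls faster from there.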

    Definition \(4.2\) (Negligible)

    A function \(f\) is negligible if, for every polynomial \(p\), we have \(\lim _{\lambda \rightarrow \infty} p(\lambda) f(\lambda)=0\).

In other words, a negligible function approaches zero so fast that you can never catch up when multiplying by a polynomial. This is exactly the property we want from a security guarantee that is supposed to hold against all polynomial-time adversaries: if a polynomial-time attacker succeeds with probability \(f\), then repeating the same attack \(p\) independent times (for a polynomial \(p\)) is still an overall polynomial-time attack, and by the union bound its success probability is at most \(p \cdot f\), which remains negligible whenever \(f\) is.

When you want to check whether a function is negligible, you only have to consider polynomials \(p\) of the form \(p(\lambda)=\lambda^{c}\) for some constant \(c\):

    Claim \(4.3\)

    If for every integer \(c, \lim _{\lambda \rightarrow \infty} \lambda^{c} f(\lambda)=0\), then \(f\) is negligible.

    Proof

Suppose \(f\) has this property, and take an arbitrary polynomial \(p\). We want to show that \(\lim _{\lambda \rightarrow \infty} p(\lambda) f(\lambda)=0\).

    If \(d\) is the degree of \(p\), then \(\lim _{\lambda \rightarrow \infty} \frac{p(\lambda)}{\lambda^{d+1}}=0 .\) Therefore,

\[\lim _{\lambda \rightarrow \infty} p(\lambda) f(\lambda)=\lim _{\lambda \rightarrow \infty}\left[\frac{p(\lambda)}{\lambda^{d+1}}\left(\lambda^{d+1} \cdot f(\lambda)\right)\right]=\left(\lim _{\lambda \rightarrow \infty} \frac{p(\lambda)}{\lambda^{d+1}}\right)\left(\lim _{\lambda \rightarrow \infty} \lambda^{d+1} \cdot f(\lambda)\right)=0 \cdot 0=0.\]

    The second equality is a valid law for limits since the two limits on the right exist and are not an indeterminate expression like \(0 \cdot \infty\). The final equality follows from the hypothesis on \(f\).

    Example 

The function \(f(\lambda)=1 / 2^{\lambda}\) is negligible, since for any integer \(c\) we have: \[\lim _{\lambda \rightarrow \infty} \lambda^{c} / 2^{\lambda}=\lim _{\lambda \rightarrow \infty} 2^{c \log (\lambda)} / 2^{\lambda}=\lim _{\lambda \rightarrow \infty} 2^{c \log (\lambda)-\lambda}=0,\] since \(c \log (\lambda)-\lambda\) approaches \(-\infty\) in the limit, for any constant \(c\).

Using similar reasoning, one can show that the following functions are also negligible: \[\frac{1}{2^{\lambda / 2}}, \quad \frac{1}{2^{\sqrt{\lambda}}}, \quad \frac{1}{2^{\log ^{2} \lambda}}, \quad \frac{1}{\lambda^{\log \lambda}} .\]

Functions like \(1 / \lambda^{5}\) approach zero, but not fast enough to be negligible. To see why, take the polynomial \(p(\lambda)=\lambda^{6}\) and observe that the resulting limit does not satisfy the requirement from Definition 4.2: \[\lim _{\lambda \rightarrow \infty} p(\lambda) \frac{1}{\lambda^{5}}=\lim _{\lambda \rightarrow \infty} \lambda=\infty \neq 0 .\]

In this class, when we see a negligible function, it will typically be one that is easy to recognize as negligible (just as in an undergraduate algorithms course, you won’t really encounter algorithms where it’s hard to tell whether the running time is polynomial).
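If you want to experiment with these examples, the following Python sketch (an illustration, not part of the text) evaluates \(\log _{2}\left(\lambda^{c} \cdot f(\lambda)\right)\) for the fixed exponent \(c=6\) at a few large values of \(\lambda\). For the negligible candidates this quantity heads off to \(-\infty\), while for \(f(\lambda)=1 / \lambda^{5}\) it keeps growing.

```python
# Numerically probing Definition 4.2 with p(lambda) = lambda^6: we print
# log2( lambda^6 * f(lambda) ) for each candidate f at increasing lambda.
# All arithmetic is done in log2 to avoid overflow/underflow.
from math import log2, sqrt

c = 6
candidates = {                                   # name -> log2(f(lambda))
    "1 / 2^(lambda/2)":        lambda lam: -lam / 2,
    "1 / 2^(sqrt(lambda))":    lambda lam: -sqrt(lam),
    "1 / 2^(log^2 lambda)":    lambda lam: -log2(lam) ** 2,
    "1 / lambda^(log lambda)": lambda lam: -log2(lam) ** 2,   # same as above when log is base 2
    "1 / lambda^5":            lambda lam: -5 * log2(lam),
}

for name, log2_f in candidates.items():
    values = [round(c * log2(lam) + log2_f(lam)) for lam in (2**10, 2**20, 2**40)]
    print(f"{name:26s} {values}")
```

The first four rows decrease without bound (the \(2^{-\sqrt{\lambda}}\) row starts out positive at \(\lambda=2^{10}\) but is soon overtaken), whereas the last row grows like \(\log _{2} \lambda\), confirming that \(1 / \lambda^{5}\) is not negligible.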

Definition \(4.4\) \((f \approx g)\)

    If \(f, g: \mathbb{N} \rightarrow \mathbb{R}\) are two functions, we write \(f \approx g\) to mean that \(|f(\lambda)-g(\lambda)|\) is a negligible function.
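As a quick illustration: the functions \(f(\lambda)=\frac{1}{2}\) and \(g(\lambda)=\frac{1}{2}+\frac{1}{2^{\lambda}}\) satisfy \(f \approx g\), since \(|f(\lambda)-g(\lambda)|=1 / 2^{\lambda}\) is negligible, whereas \(f(\lambda)=\frac{1}{2}\) and \(h(\lambda)=\frac{1}{2}+\frac{1}{\lambda^{5}}\) do not satisfy \(f \approx h\), since \(1 / \lambda^{5}\) is not negligible.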

    We use the terminology of negligible functions exclusively when discussing probabilities, so the following are common:

    \begin{align*}\textrm{Pr}\left [ X \right ]\approx 0&\Leftrightarrow \text{"event X almost never happens"}\\
    \textrm{Pr}\left [ Y \right ]\approx 1&\Leftrightarrow \text{"event Y almost always happens"}\\
\textrm{Pr}\left [ A \right ]\approx \textrm{Pr}\left [ B \right ] &\Leftrightarrow\text{"events A and B happen} \\ &\quad \quad \text{with essentially the same probability"}^{6}\end{align*}

    Additionally, the \(\approx\) symbol is transitive: \({ }^{7}\) if \(\operatorname{Pr}[X] \approx \operatorname{Pr}[Y]\) and \(\operatorname{Pr}[Y] \approx \operatorname{Pr}[Z]\), then \(\operatorname{Pr}[X] \approx\) \(\operatorname{Pr}[Z]\) (perhaps with a slightly larger, but still negligible, difference).
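One way to see why: by the triangle inequality, \[|\operatorname{Pr}[X]-\operatorname{Pr}[Z]| \leq|\operatorname{Pr}[X]-\operatorname{Pr}[Y]|+|\operatorname{Pr}[Y]-\operatorname{Pr}[Z]|,\] and the sum of two negligible functions is itself negligible (multiplying the sum by a polynomial just multiplies each of the two negligible terms by that polynomial).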


    \({ }^{6} \operatorname{Pr}[A] \approx \operatorname{Pr}[B]\) doesn’t mean that events \(A\) and \(B\) almost always happen together (when \(A\) and \(B\) are defined over a common probability space) - imagine \(A\) being the event "the coin came up heads" and \(B\) being the event "the coin came up tails." These events have the same probability but never happen together. To say that " \(A\) and \(B\) almost always happen together," you’d have to say something like \(\operatorname{Pr}[A \oplus B] \approx 0\), where \(A \oplus B\) denotes the event that exactly one of \(A\) and \(B\) happens.

    \({ }^{7}\) It’s only transitive when applied a polynomial number of times. So you can’t define a whole series of events \(X_{i}\), show that \(\operatorname{Pr}\left[X_{i}\right] \approx \operatorname{Pr}\left[X_{i+1}\right]\), and conclude that \(\operatorname{Pr}\left[X_{1}\right] \approx \operatorname{Pr}\left[X_{2^{n}}\right]\). It’s rare that we’ll encounter this subtlety in this course.


    This page titled 4.2: What Qualifies as "Negligible" Success Probability? is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Mike Rosulek (Open Oregon State) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.