4.1: What Qualifies as a "Computationally Infeasible" Attack?


    Schemes like one-time pad cannot be broken, even by an attacker that performs a brute-force attack, trying all possible keys (see Exercise 1.5). However, all future schemes that we will see can indeed be broken by such an attack. Nash is quick to point out that, for a scheme with \(\lambda\)-bit keys:

    The most direct computation procedure would be for the enemy to try all \(2^{\lambda}\) possible keys, one by one. Obviously this is easily made impractical for the enemy by simply choosing \(\lambda\) large enough.

We call \(\lambda\) the security parameter of the scheme. It is like a knob that allows the user to tune the security to any desired level. Increasing \(\lambda\) makes the difficulty of a brute-force attack grow exponentially fast. Ideally, when using \(\lambda\)-bit keys, every attack (not just a brute-force attack) will have difficulty roughly \(2^{\lambda}\). However, sometimes faster attacks are inevitable. Later in this chapter, we will see why many schemes with \(\lambda\)-bit keys have attacks that cost only \(2^{\lambda / 2}\). It is common to see a scheme described as having \(n\)-bit security if the best known attack requires \(2^{n}\) steps.
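As a concrete illustration, here is a minimal sketch (in Python) of the brute-force attack Nash describes. The predicate is_correct_key is hypothetical, standing in for something like a trial decryption against a known plaintext/ciphertext pair; it is not part of any scheme discussed here.

def brute_force(lam, is_correct_key):
    # Try every lam-bit key, one by one: at most 2**lam candidates.
    for key in range(2 ** lam):
        if is_correct_key(key):
            return key
    return None

# Toy usage: lam = 20 finishes instantly; lam = 128 would require ~2**128
# iterations and is hopeless in practice.
secret = 0xBEEF
print(brute_force(20, lambda k: k == secret))

Increasing \(\lambda\) by one doubles the worst-case number of iterations, which is exactly the exponential growth described above.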

    Just how impractical is a brute-force computation on a 64-bit key? A 128-bit key? Huge numbers like \(2^{64}\) and \(2^{128}\) are hard to grasp at an intuitive level.

Example \(\PageIndex{1}\)

It can be helpful to think of the cost of a computation in terms of monetary value, and a convenient way to assign such monetary costs is to use the pricing model of a cloud computing provider. Below, I have calculated roughly how much a computation involving \(2^{\lambda}\) CPU cycles would cost on Amazon EC2, for various choices of \(\lambda\).\({}^{3}\)

clock cycles | approx. cost | reference
\(2^{50}\) | \(\$3.50\) | cup of coffee
\(2^{55}\) | \(\$100\) | decent tickets to a Portland Trailblazers game
\(2^{65}\) | \(\$130,000\) | median home price in Oshkosh, WI
\(2^{75}\) | \(\$130\) million | budget of one of the Harry Potter movies
\(2^{85}\) | \(\$140\) billion | GDP of Hungary
\(2^{92}\) | \(\$20\) trillion | GDP of the United States
\(2^{99}\) | \(\$2\) quadrillion | all of human economic activity since 300,000 BC\({}^{4}\)
\(2^{128}\) | really a lot | a billion human civilizations’ worth of effort

    Remember, this table only shows the cost to perform \(2^{\lambda}\) clock cycles. A brute-force attack checking \(2^{\lambda}\) keys would take many more cycles than that! But, as a disclaimer, these numbers reflect only the retail cost of performing a computation, on fairly standard general-purpose hardware. A government organization would be capable of manufacturing special-purpose hardware that would significantly reduce the computation’s cost. The exercises explore some of these issues, as well as non-financial ways of conceptualizing the cost of huge computations.
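As a sanity check, the table's ballpark figures follow directly from the per-cycle rate derived in footnote 3 (about \(2^{-48}\) USD per clock cycle). A minimal Python sketch; since both that rate and the table entries are rounded, the results only roughly match the table:

USD_PER_CYCLE = 2.0 ** -48                # rate from footnote 3 (rounded down)

for lam in [50, 55, 65, 75, 85, 92, 99, 128]:
    cost = (2.0 ** lam) * USD_PER_CYCLE   # = 2**(lam - 48) dollars
    print(f"2^{lam} cycles: about ${cost:,.0f}")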

    Example \(\PageIndex{2}\)

    In 2017, the first collision in the SHA-1 hash function was found (we will discuss hash functions later in the course). The attack involved evaluating the SHA-1 function \(2^{63}\) times on a cluster of GPUs. An article in Ars Technica \({ }^{5}\) estimates the monetary cost of the attack as follows:

    Had the researchers performed their attack on Amazon’s Web Services platform, it would have cost \(\$ 560,000\) at normal pricing. Had the researchers been patient and waited to run their attack during off-peak hours, the same collision would have cost \(\$ 110,000 .\)
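Dividing the quoted figure by the number of SHA-1 evaluations gives a rough per-evaluation cost (a back-of-the-envelope estimate, not one made in the article itself):

\[\frac{\$ 560{,}000}{2^{63}} \approx 6 \times 10^{-14} \text{ USD} \approx 2^{-44} \text{ USD per SHA-1 evaluation.}\]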

    Asymptotic Running Time

    It is instructive to think about the monetary cost of an enormous computation, but it doesn’t necessarily help us draw the line between "feasible" attacks (which we want to protect against) and "infeasible" ones (which we agreed we don’t need to care about). We need to be able to draw such a line in order to make security definitions that say "only feasible attacks are ruled out."

Once again, John Nash thought about this question. He suggested considering the asymptotic cost of an attack: how does the cost of the computation scale as the security parameter \(\lambda\) goes to infinity?

So a logical way to classify enciphering processes is by the way in which the computation length for the computation of the key increases with increasing length of the key. This is at best exponential and at worst probably a relatively small power of \(\lambda\), \(a \cdot \lambda^{2}\) or \(a \cdot \lambda^{3}\), as in substitution ciphers.

    Nash highlights the importance of attacks that run in polynomial time:

    Definition 4.1

    A program runs in polynomial time if there exists a constant \(c>0\) such that for all sufficiently long input strings \(x\), the program stops after no more than \(O\left(|x|^{c}\right)\) steps.
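For instance, a program that stops after at most \(5|x|^{3}+|x|\) steps runs in polynomial time (take \(c=3\)), whereas a brute-force attack taking \(2^{\lambda}\) steps on a \(\lambda\)-bit key does not: for every constant \(c\), the ratio \(2^{\lambda} / \lambda^{c}\) tends to infinity, so \(2^{\lambda}\) eventually exceeds \(\lambda^{c}\).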

    Polynomial-time algorithms scale reasonably well (especially when the exponent is small), but exponential-time algorithms don’t. It is probably no surprise to modern readers to see "polynomial-time" as a synonym for "efficient." However, it’s worth pointing out that, again, Nash is years ahead of his time relative to the field of computer science.

    In the context of cryptography, our goal will be to ensure that no polynomial-time attack can successfully break security. We will not worry about attacks like brute-force that require exponential time.

Polynomial time is not a perfect match to what we mean when we informally talk about "efficient" algorithms. Algorithms with running time \(\Theta\left(n^{1000}\right)\) are technically polynomial-time, while those with running time \(\Theta\left(n^{\log \log \log n}\right)\) aren’t. Despite that, polynomial time is extremely useful because of the following closure property: repeating a polynomial-time process a polynomial number of times results in a polynomial-time process overall.
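To see why this closure property holds, here is a sketch of the standard argument (assuming each invocation runs on an input of length \(O(n)\)): if a subroutine takes at most \(c_{1} n^{a}\) steps and is invoked at most \(c_{2} n^{b}\) times, the total work is

\[c_{2} n^{b} \cdot c_{1} n^{a}=c_{1} c_{2}\, n^{a+b}=O\left(n^{a+b}\right),\]

which is again polynomial in \(n\).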

    Potential Pitfall: Numerical Algorithms

When we study public-key cryptography, we will discuss algorithms that operate on very large numbers (e.g., thousands of bits long). You must remember that representing the number \(N\) on a computer requires only \(\sim \log _{2} N\) bits. This means that \(\log _{2} N\), rather than \(N\), is our security parameter! We will therefore be interested in whether certain operations on the number \(N\) run in polynomial time as a function of \(\log _{2} N\), rather than as a function of \(N\). Keep in mind that the difference between running time \(O(\log N)\) and \(O(N)\) is the difference between writing down a number and counting to the number.
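The following minimal Python sketch (the specific sizes are illustrative, not from the text) shows the difference in practice: operations that are polynomial in \(\log _{2} N\) finish instantly even when \(N\) is a 2048-bit number, while anything that takes \(\sim N\) steps is hopeless.

import secrets

N = secrets.randbits(2048) | 1   # a random odd number of about 2048 bits
print(N.bit_length())            # ~2048 bits suffice to write N down

g, x = 3, secrets.randbits(2048)
y = pow(g, x, N)                 # modular exponentiation: roughly (log N)^3
print(y.bit_length())            # bit operations, so it finishes instantly

# By contrast, an O(N)-time loop ("counting to N") would need about 2**2048
# iterations, even though N itself fits in a few hundred bytes.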

    For reference, here are some numerical operations that we will be using later in the class, and their known efficiencies:

[Figure \(\PageIndex{1}\): a two-column table of numerical operations; the left-hand column lists operations with known efficient algorithms, and the right-hand column lists problems with no known efficient algorithm.]

Again, "efficient" means polynomial-time. Furthermore, we only consider polynomial-time algorithms that run on standard, classical computers. In fact, all of the problems in the right-hand column do have known polynomial-time algorithms on quantum computers.


    \({ }^{3}\) As of October 2018, the cheapest class of CPU that is suitable for an intensive computation is the m5.large, which is a \(2.5 \mathrm{GHz}\) CPU. Such a CPU performs \(2^{43}\) clock cycles per hour. The cheapest rate on EC2 for this CPU is \(0.044\) USD per hour (3-year reserved instances, all costs paid upfront). All in all, the cost for a single clock cycle (rounding down) is \(2^{-48}\) USD.

\({ }^{4}\) I found some estimates (https://en.Wikipedia.org/wiki/Gross_world_product) of the gross world product (like the GDP, but for the entire world) throughout human history, and summed them up for every year.

    \({ }^{5}\) https://arstechnica.com/information-technology/2017/02/at-deaths-door-for-years-widely-used-sha1-function-is-now-de


    This page titled 4.1: What Qualifies as a "Computationally Infeasible" Attack? is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Mike Rosulek (Open Oregon State) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.