4: Basing Cryptography on Limits of Computation

John Nash was a mathematician who earned the 1994 Nobel Prize in Economics for his work in game theory. His life was made into a successful movie, A Beautiful Mind. In 1955, Nash was in correspondence with the United States National Security Agency (NSA),1 discussing new methods of encryption that he had devised. In these letters, he also proposes some general principles of cryptography (bold highlighting not in the original):

    We see immediately that in principle the enemy needs very little information to begin to break down the process. Essentially, as soon as \(r\) bits2 of enciphered message have been transmitted the key is about determined. This is no security, for a practical key should not be too long. But this does not consider how easy or difficult it is for the enemy to make the computation determining the key. If this computation, although possible in principle, were sufficiently long at best then the process could still be secure in a practical sense.

    The most direct computation procedure would be for the enemy to try all \(2^r\) possible keys, one by one. Obviously this is easily made impractical for the enemy by simply choosing \(r\) large enough.

    In many cruder types of enciphering, particularly those which are not autocoding, such as substitution ciphers [letter for letter, letter pair for letter pair, triple for triple..] shorter means for computing the key are feasible, essentially because the key can be determined piece meal, one substitution at a time.

    So a logical way to classify enciphering processes is by the way in which the computation length for the computation of the key increases with increasing length of the key. This is at best exponential and at worst probably a relatively small power of \(r\), \(ar^2\) or \(ar^3\), as in substitution ciphers.

    Now my general conjecture is as follows: For almost all sufficiently complex types of enciphering, especially where the instructions given by different portions of the key interact complexly with each other in the determination of their ultimate effects on the enciphering, the mean key computation length increases exponentially with the length of the key, or in other words, with the information content of the key. 

    The significance of this general conjecture, assuming its truth, is easy to see. It means that it is quite feasible to design ciphers that are effectively unbreakable. As ciphers become more sophisticated the game of cipher breaking by skilled teams, etc. should become a thing of the past.

Nash’s letters were declassified only in 2012, so they did not directly influence the development of modern cryptography. Still, his letters illustrate the most important idea of “modern” cryptography: that security can be based on the computational difficulty of an attack rather than on the impossibility of an attack. In other words, we are willing to accept that breaking a cryptographic scheme may be possible in principle, as long as breaking it would require too much computational effort to be feasible.
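To put rough numbers on the brute-force attack Nash describes, here is a small back-of-the-envelope sketch in Python. The guessing rate of \(10^9\) keys per second is an illustrative assumption, not a figure from the letters; the point is only how quickly the cost \(2^r\) grows with the key length \(r\).

```python
# Back-of-the-envelope cost of brute-force key search: trying all 2^r keys.
# The attacker speed below is an assumed, illustrative figure.

GUESSES_PER_SECOND = 10**9              # assumed attacker speed (keys tried per second)
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for r in [32, 64, 128]:                 # key length in bits
    total_keys = 2**r                   # keys to try in the worst case
    seconds = total_keys / GUESSES_PER_SECOND
    years = seconds / SECONDS_PER_YEAR
    print(f"r = {r:3d}: {total_keys:.3e} keys, about {years:.3e} years")
```

Even at this generous rate, exhaustively searching a 128-bit key space takes on the order of \(10^{22}\) years, which is exactly the “secure in a practical sense” that Nash anticipates.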

We have already discussed one-time-secret encryption and secret sharing. For both of these tasks we were able to achieve a level of security guaranteeing that attacks are impossible in principle. But that’s essentially the limit of what can be accomplished with such ideal security. Everything else we will see in this class, and every well-known product of cryptography (public-key encryption, hash functions, etc.) has a “modern”-style level of security, which guarantees that attacks are merely computationally infeasible, not impossible.

1The original letters, handwritten by Nash, are available at: http://www.nsa.gov/public_info/press_room/2012/nash_exhibits.html.

2Nash is using \(r\) to denote the length of the key, in bits.