
9.3: Defining CCA Security

    Our goal now is to develop a new security definition - one that considers an adversary that can construct malicious ciphertexts and observe the effects caused by their decryption. We will start with the basic approach of CPA security, with left and right libraries that differ only in which of two plaintexts they encrypt.

    In a typical system, an adversary might be able to learn only some specific partial information about the Dec process. In the padding oracle attack, the adversary was able to learn only whether the result of decryption had valid padding.

    However, we are trying to come up with a security definition that is useful no matter how the encryption scheme is deployed. How can we possibly anticipate every kind of partial information that might make its way to the adversary in every possible usage of the encryption scheme? The safest choice is to be as pessimistic as possible, as long as we end up with a security notion that we can actually achieve in the end. So let’s just allow the adversary to totally decrypt arbitrary ciphertexts of its choice. In other words, if we can guarantee security when the adversary has full information about decrypted ciphertexts, then we certainly have security when the adversary learns only partial information about decrypted ciphertexts (as in a typical real-world system).

    But this presents a significant problem. An adversary can do \(c^{*}:=\operatorname{EAVESDROP}\left(m_{L}, m_{R}\right)\) to obtain a challenge ciphertext, and then immediately ask for that ciphertext \(c^{*}\) to be decrypted. This will obviously reveal to the adversary whether it is linked to the left or right library.

    So, simply providing unrestricted Dec access to the adversary cannot lead to a reasonable security definition (it is a security definition that can never be satisfied). The simplest way to patch this obvious problem with the definition is to allow the adversary to ask for the decryption of any ciphertext, except those produced in response to EAVESDROP queries. In doing so, we arrive at the final security definition: security against chosen-ciphertext attacks, or CCA security:

    Definition 9.1 (CCA security)

    Let \(\Sigma\) be an encryption scheme. We say that \(\Sigma\) has security against chosen-ciphertext attacks (CCA security) if \(\mathcal{L}_{\mathrm{cca}-\mathrm{L}}^{\Sigma} \approx \mathcal{L}_{\mathrm{cca}-\mathrm{R}}^{\Sigma}\), where:

    [Figure: the libraries \(\mathcal{L}_{\mathrm{cca}-\mathrm{L}}^{\Sigma}\) and \(\mathcal{L}_{\mathrm{cca}-\mathrm{R}}^{\Sigma}\). Each samples a key, records every ciphertext returned by EAVESDROP in a set \(\mathcal{S}\), and has DECRYPT refuse members of \(\mathcal{S}\); the libraries differ only in whether EAVESDROP encrypts \(m_{L}\) or \(m_{R}\).]

    In this definition, the set \(\mathcal{S}\) keeps track of the ciphertexts that have been generated by the EAVESDROP subroutine. The DECRYPT subroutine refuses to decrypt these ciphertexts, but will gladly decrypt any other ciphertext of the adversary’s choice.
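    To make the bookkeeping concrete, here is a minimal Python sketch of the two libraries. The sketch assumes a hypothetical scheme object exposing KeyGen, Enc, and Dec, and uses a sentinel value to stand in for the library's error response; these names are illustrative only and are not part of the formal definition.

        # Minimal sketch of the L_cca-L / L_cca-R libraries (Definition 9.1).
        # `scheme` is a hypothetical object with KeyGen(), Enc(k, m), Dec(k, c).

        ERR = object()   # sentinel standing in for the library's error response

        class CcaLibrary:
            def __init__(self, scheme, which):        # which is "L" or "R"
                self.scheme = scheme
                self.which = which
                self.k = scheme.KeyGen()
                self.S = set()                        # ciphertexts returned by EAVESDROP

            def EAVESDROP(self, m_L, m_R):
                m = m_L if self.which == "L" else m_R
                c = self.scheme.Enc(self.k, m)
                self.S.add(c)                         # remember c so DECRYPT can refuse it
                return c

            def DECRYPT(self, c):
                if c in self.S:                       # refuse challenge ciphertexts
                    return ERR
                return self.scheme.Dec(self.k, c)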

    An Example

    The padding oracle attack already demonstrates that CBC mode does not provide security in the presence of chosen-ciphertext attacks. But that attack was quite complicated, since the adversary was restricted to learning just 1 bit of information at a time about a decrypted ciphertext. An attack against full CCA security can be much more direct, since the adversary has full access to decrypted ciphertexts.

    Example

    Consider the adversary below, which attacks the CCA security of CBC mode (with block length blen):

    [Figure: an adversary that calls \(c_{0}\left\|c_{1}\right\| c_{2}:=\operatorname{EAVESDROP}\left(m_{L}, m_{R}\right)\) on two chosen plaintexts and then asks DECRYPT to decrypt the truncated ciphertext \(c_{0} \| c_{1}\).]

    It can easily be verified that this adversary achieves advantage 1 in distinguishing \(\mathcal{L}_{\mathrm{cca}-\mathrm{L}}\) from \(\mathcal{L}_{\mathrm{cca}-\mathrm{R}}\). The attack uses the fact (also used in the padding oracle attack) that if \(c_{0}\left\|c_{1}\right\| c_{2}\) encrypts \(m_{1} \| m_{2}\), then \(c_{0} \| c_{1}\) encrypts \(m_{1}\). To us, it is obvious that the ciphertext \(c_{0} \| c_{1}\) is related to \(c_{0}\left\|c_{1}\right\| c_{2}\). Unfortunately for CBC mode, the security definition is not very clever: since \(c_{0} \| c_{1}\) is simply different from \(c_{0}\left\|c_{1}\right\| c_{2}\), the DECRYPT subroutine happily decrypts it.
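    In code, the attack might look like the following sketch, written against the library interface sketched after Definition 9.1. It assumes CBC ciphertexts are byte strings made of blen-byte blocks with the IV as block \(c_{0}\); the particular plaintexts are just one convenient choice, since any pair differing in the first block works.

        def truncation_adversary(lib, blen):
            """Distinguish L_cca-L from L_cca-R for CBC mode with advantage 1."""
            m_L = bytes(2 * blen)                    # 0^(2*blen)
            m_R = bytes([0xFF]) * (2 * blen)         # 1^(2*blen)
            c = lib.EAVESDROP(m_L, m_R)              # c = c0 || c1 || c2
            c_trunc = c[:2 * blen]                   # c0 || c1 encrypts only the first block
            m = lib.DECRYPT(c_trunc)                 # allowed: never returned by EAVESDROP
            return m == bytes(blen)                  # all-zero first block <=> left library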

    Example

    Perhaps unsurprisingly, there are many very simple ways to catastrophically attack the CCA security of CBC-mode encryption. Here are some more (where \(\bar{x}\) denotes the result of flipping every bit in \(x\)):

    [Figure: two further adversaries; the first submits \(c_{0}\left\|c_{1}\right\| \bar{c}_{2}\) to DECRYPT, the second submits \(\bar{c}_{0} \| c_{1}\).]

    The first attack uses the fact that modifying \(c_{2}\) has no effect on the first plaintext block. The second attack uses the fact that flipping every bit in the IV flips every bit in \(m_{1}\).

    Again, in all of these cases, the DECRYPT subroutine is being called on a different (but related) ciphertext than the one returned by EAVESDROP.
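    Sketches of these two attacks, in the same style and under the same assumptions as the previous one:

        def modify_last_block_adversary(lib, blen):
            """Flipping the bits of c2 leaves the first plaintext block untouched."""
            c = lib.EAVESDROP(bytes(2 * blen), bytes([0xFF]) * (2 * blen))   # c0 || c1 || c2
            mauled = c[:2 * blen] + bytes(b ^ 0xFF for b in c[2 * blen:])    # c0 || c1 || ~c2
            m = lib.DECRYPT(mauled)                  # mauled != c, so DECRYPT accepts it
            return m[:blen] == bytes(blen)           # first block still reveals left vs. right

        def flip_iv_adversary(lib, blen):
            """Flipping every bit of the IV (c0) flips every bit of m1."""
            c = lib.EAVESDROP(bytes(blen), bytes([0xFF]) * blen)             # c0 || c1
            flipped_iv = bytes(b ^ 0xFF for b in c[:blen])
            m = lib.DECRYPT(flipped_iv + c[blen:])   # ~c0 || c1
            return m == bytes([0xFF]) * blen         # left library: 0...0 flips to 1...1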

    Discussion

    So if I use a CCA-secure encryption scheme, I should never decrypt a ciphertext that I encrypted myself?

    Remember: when we define the Enc and Dec algorithms of an encryption scheme, we are describing things from the normal user’s perspective. As a user of an encryption scheme, you can encrypt and decrypt whatever you like. It would indeed be strange if you encrypted something knowing that it should never be decrypted. What’s the point?

    The security definition describes things from the attacker’s perspective. The \(\mathcal{L}_{\text{cca-}\star}\) libraries tell us the circumstances under which the encryption scheme provides security. They say (roughly):

    an attacker can’t tell what’s inside a ciphertext \(c^{*}\), even if she has some partial information about that plaintext, even if she had some partial influence over the choice of that plaintext, and even if she is allowed to decrypt any other ciphertext she wants.

    Of course, if a real-world system allows an attacker to learn the result of decrypting \(c^{*}\), then by definition the attacker learns what’s inside that ciphertext.

    CCA security is deeply connected with the concept of malleability. Malleability means that, given a ciphertext that encrypts an unknown plaintext \(m\), it is possible to generate a different ciphertext that encrypts a plaintext that is related to \(m\) in a predictable way. For example:

    • If \(c_{0}\left\|c_{1}\right\| c_{2}\) is a CBC encryption of \(m_{1} \| m_{2}\), then \(c_{0} \| c_{1}\) is a CBC encryption of \(m_{1}\).
    • If \(c_{0}\left\|c_{1}\right\| c_{2}\) is a CBC encryption of \(m_{1} \| m_{2}\), then \(c_{0}\left\|c_{1}\right\| c_{2} \| 0^{\text {blen }}\) is a CBC encryption of some plaintext that begins with \(m_{1} \| m_{2}\).
    • If \(c_{0} \| c_{1}\) is a CBC encryption of \(m_{1}\), then \(\left(c_{0} \oplus x\right) \| c_{1}\) is a \(\mathrm{CBC}\) encryption of \(m_{1} \oplus x\).

    Note from the second example that we don’t need to know the exact relationship between the old and new plaintexts.
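    To see why the third fact holds, recall how CBC decryption recovers the first plaintext block from \(c_{0} \| c_{1}\), writing \(F\) for the block cipher:
    \[ m_{1} = F^{-1}(k, c_{1}) \oplus c_{0} . \]
    Replacing \(c_{0}\) with \(c_{0} \oplus x\) while keeping \(c_{1}\) therefore yields \(F^{-1}(k, c_{1}) \oplus (c_{0} \oplus x) = m_{1} \oplus x\).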

    If an encryption scheme is malleable, then a typical attack against its CCA security would work as follows:

    1. Request an encryption \(c\) of some plaintext.
    2. Applying the malleability of the scheme, modify \(c\) to some other ciphertext \(c^{\prime}\).
    3. Ask for \(c^{\prime}\) to be decrypted.

    Since \(c^{\prime} \neq c\), the security library allows \(c^{\prime}\) to be decrypted. The malleability of the scheme says that the contents of \(c^{\prime}\) should be related to the contents of \(c\). In other words, seeing the contents of \(c^{\prime}\) should allow the attacker to determine what was initially encrypted in \(c\).
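    This pattern can be captured once and for all in a small template, parameterized by a mauling function that encodes the scheme’s particular malleability. A sketch, against the same hypothetical library interface as before:

        def malleability_attack(lib, m_L, m_R, maul, looks_left):
            """Generic CCA distinguisher template against a malleable scheme (sketch).

            `maul(c)` must return a ciphertext different from c whose plaintext is
            predictably related to c's; `looks_left(m)` decides whether the decrypted,
            mauled plaintext is consistent with m_L having been encrypted.
            """
            c = lib.EAVESDROP(m_L, m_R)
            c_prime = maul(c)                        # c' != c, so DECRYPT will not refuse it
            m = lib.DECRYPT(c_prime)
            return looks_left(m)

    For instance, the truncation attack above is malleability_attack(lib, bytes(2*blen), bytes([0xFF])*(2*blen), lambda c: c[:2*blen], lambda m: m == bytes(blen)).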

    Pseudorandom Ciphertexts

    We can also modify the pseudorandom-ciphertexts security definition (CPA$ security) in a similar way:

    Definition 9.2 (CCA$ security)

    Let \(\Sigma\) be an encryption scheme. We say that \(\Sigma\) has pseudorandom ciphertexts in the presence of chosen-ciphertext attacks (CCA$ security) if \(\mathcal{L}_{\mathrm{cca} \$ \text{-real}}^{\Sigma} \approx \mathcal{L}_{\mathrm{cca} \$ \text{-rand}}^{\Sigma}\), where:

    [Figure: the libraries \(\mathcal{L}_{\mathrm{cca} \$ \text{-real}}^{\Sigma}\) and \(\mathcal{L}_{\mathrm{cca} \$ \text{-rand}}^{\Sigma}\).]
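    As with Definition 9.1, a short Python sketch can make the difference between the two libraries concrete. The sketch assumes, as in the CPA$ definition, that the "rand" library returns a uniformly chosen string of the appropriate ciphertext length and that both libraries refuse to decrypt ciphertexts they produced; the challenge-subroutine name and the ctxt_len helper are illustrative assumptions, not part of the formal definition.

        import os

        class CcaDollarLibrary:
            def __init__(self, scheme, which):       # which is "real" or "rand"
                self.scheme = scheme
                self.which = which
                self.k = scheme.KeyGen()
                self.S = set()

            def CTXT(self, m):
                if self.which == "real":
                    c = self.scheme.Enc(self.k, m)
                else:
                    # assumption for this sketch: ciphertexts for a plaintext of this
                    # length are uniform bytestrings of scheme.ctxt_len(len(m)) bytes
                    c = os.urandom(self.scheme.ctxt_len(len(m)))
                self.S.add(c)
                return c

            def DECRYPT(self, c):
                if c in self.S:                      # refuse challenge ciphertexts
                    return ERR                       # ERR as in the earlier sketch
                return self.scheme.Dec(self.k, c)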

    Just like for CPA security, if a scheme has CCA$ security, then it also has CCA security, but not vice-versa. See the exercises.


    This page titled 9.3: Defining CCA Security is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Mike Rosulek (Open Oregon State) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
