
6.1: Definition

    You can think of a PRF as a way to get random access to a very long [pseudo]random string. That long string is the truth table of the function \(F(k,·)\). A distinguisher is allowed to query this function, and the results should look as though they came from a random function: a function whose truth table was chosen uniformly at random. This perspective is the source of the name "pseudorandom function"; it is an object that (when the seed is chosen uniformly) is indistinguishable from a random function, to polynomial-time computations.

    Under this perspective, we can consider a function with any input/output length, not just a single bit of output. Consider the set of all functions from \(\{0,1\}^{in}\) to \(\{0,1\}^{out}\). If \(F\) is a good PRF, then, when instantiated with a uniformly chosen \(k\), the function \(F(k,·)\) should look just like a function chosen uniformly from this set, to anyone given query access to the function.
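The "random access to a long string" perspective can be made concrete with a toy example. The sketch below uses a tiny input length so the entire truth table fits in memory; the function \(F\) here is an arbitrary stand-in built from SHA-256, chosen only to make the sketch runnable, not a claim of PRF security.

```python
import hashlib

IN_BITS = 8  # toy input length so the full truth table fits in memory

def F(k: bytes, x: int) -> bytes:
    """Toy deterministic stand-in function (NOT a secure PRF);
    inputs are the 256 values of {0,1}^8, outputs are 4 bytes."""
    return hashlib.sha256(k + bytes([x])).digest()[:4]

k = b"\x00" * 16

# The "very long string": the entire truth table of F(k, .).
truth_table = [F(k, x) for x in range(2 ** IN_BITS)]

# Querying F(k, x) is random access into that long string:
assert F(k, 42) == truth_table[42]
```

For a realistic input length like \(in = 128\), the truth table has \(2^{128}\) entries and could never be written down, which is exactly why query access is the right model.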

    We can therefore formalize security for PRFs by defining two libraries that provide an interface to query a function on arbitrary inputs. The two libraries differ in whether the responses are determined by evaluating a PRF (with a randomly chosen seed) or by evaluating a function chosen truly at random.

    For technical reasons, it is important that the libraries in the PRF definition run in polynomial time. This means we must be careful with the library that provides access to a truly random function. We cannot explicitly sample the entire truth table of a random function upfront, since the truth table could be exponential in size. Instead, outputs of the function are sampled on demand and cached in an associative array for later queries. The formal definition looks like this:

    Definition \(\PageIndex{1}\): PRF Security

    Let \(F : \{0,1\}^{\lambda} \times \{0,1\}^{in} \rightarrow \{0,1\}^{out}\) be a deterministic function. We say that \(F\) is a secure pseudorandom function (PRF) if \(\mathscr{L}^F_{\text{prf-real}} ≋ \mathscr{L}^F_{\text{prf-rand}}\), where:

    \[\underline{\mathscr{L}^F_{\text{prf-real}}}\\k\leftarrow\{0,1\}^{\lambda}\\\underline{\text{QUERY}(x\in\{0,1\}^{in}):}\\\text{return}\space F(k,x)\nonumber\]

    \[\underline{\mathscr{L}^F_{\text{prf-rand}}}\\T:=\text{empty assoc. array}\\\underline{\text{QUERY}(x\in\{0,1\}^{in}):}\\\quad\text{if}\space T[x]\space\text{undefined:}\\\qquad T[x]\leftarrow\{0,1\}^{out}\\\quad\text{return}\space T[x]\nonumber\]
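The two libraries can be sketched in Python as follows. The function `F` is only a runnable placeholder (SHA-256 of key and input, truncated); it is not a claim that this construction is a secure PRF, and the lengths are illustrative choices.

```python
import hashlib
import os

IN_LEN = 16   # "in"  in bytes (illustrative choice)
OUT_LEN = 16  # "out" in bytes (illustrative choice)

def F(k: bytes, x: bytes) -> bytes:
    """Placeholder deterministic function, for illustration only
    (NOT a claim that truncated SHA-256(k || x) is a secure PRF)."""
    return hashlib.sha256(k + x).digest()[:OUT_LEN]

class PrfReal:
    """L_prf-real: sample k once, then answer queries with F(k, x)."""
    def __init__(self):
        self.k = os.urandom(16)  # k <- {0,1}^lambda

    def query(self, x: bytes) -> bytes:
        return F(self.k, x)

class PrfRand:
    """L_prf-rand: lazily sample a random function, caching in T."""
    def __init__(self):
        self.T = {}  # associative array T

    def query(self, x: bytes) -> bytes:
        if x not in self.T:                   # if T[x] undefined:
            self.T[x] = os.urandom(OUT_LEN)   #   T[x] <- {0,1}^out
        return self.T[x]
```

Note that both libraries answer repeated queries consistently: `PrfReal` because `F` is deterministic and `k` is fixed, `PrfRand` because of the cache `T`.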


    • We measure the security of a PRF by comparing its outputs to those of a “random function.” The idea of a “random function” is tricky. We do not mean a function that gives different outputs when called twice on the same input. Instead, we mean a function whose truth table is chosen uniformly at random, after which the truth table is absolutely fixed. Calling the function twice on the same input will result in the same output.

    But due to the technical limitations mentioned above, we don’t actually write the code of \(\mathscr{L}_{\text{prf-rand}}\) in this way. The truth table is too large to sample and store explicitly. Rather, we have to sample the values appearing in the truth table on-demand. But we see that once they are chosen, the variable \(T\) ensures that they remain fixed.

    We see also that in \(\mathscr{L}_{\text{prf-real}}\), the function \(F\) is deterministic and the key \(k\) is static between calls to QUERY. So calling QUERY twice on the same input gives the same answer.

    • If the input length \(in\) is large (e.g., if \(in = \lambda\)), then the calling program will not be able to query the function on all inputs \(x \in \{0,1\}^{in}\). In fact, it can only query on a negligible fraction of the input space \(\{0,1\}^{in}\)!
    • Suppose we know that the calling program is guaranteed to never call the QUERY subroutine on the same x twice. Then there is no need for \(\mathscr{L}_{\text{prf-rand}}\) to keep track of \(T\). The library could just as well forget the value that it gave in response to \(x\) since we have a guarantee that it will never be requested again. In that case, we can considerably simplify the code of \(\mathscr{L}_{\text{prf-rand}}\) as follows:

    \[\underline{\text{QUERY}(x\in\{0,1\}^{in}):}\\z\leftarrow\{0,1\}^{out}\\\text{return}\space z\nonumber\]
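Under the distinct-queries assumption, this simplified library can be sketched in Python; the output length is an illustrative choice.

```python
import os

OUT_LEN = 16  # "out" in bytes (illustrative choice)

def query(x: bytes) -> bytes:
    """Simplified L_prf-rand: since no x is ever repeated, returning
    fresh randomness on every call behaves identically (from the
    caller's point of view) to caching values in the array T."""
    return os.urandom(OUT_LEN)
```

The argument `x` is ignored entirely, which is exactly the point: when inputs never repeat, a truly random function is indistinguishable from a source of fresh uniform samples.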


    In implementations of cryptographic systems, “provable security” typically starts only above the level of PRFs. In other words, we typically build a cryptographic system and prove its security under the assumption that we have used a secure PRF as one of the building blocks. PRFs are a kind of “ground truth” of applied cryptography.

    By contrast, the algorithms that we use in practice as PRFs are not proven to be secure PRFs based on any simpler assumption. Rather, these algorithms are subjected to intense scrutiny by cryptographers, shown to resist all known classes of attacks, and so on. All of these things taken together serve as evidence supporting the assumption that these algorithms are secure PRFs.
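One widely deployed example of such an algorithm is HMAC-SHA256, which is commonly used in practice as a PRF. A minimal sketch using Python's standard library (here with a 256-bit key, 256-bit outputs, and, unlike the fixed-length definition above, arbitrary-length inputs):

```python
import hashlib
import hmac
import os

def F(k: bytes, x: bytes) -> bytes:
    """HMAC-SHA256 used as a conjectured PRF: 256-bit key k,
    256-bit output, arbitrary-length input x."""
    return hmac.new(k, x, hashlib.sha256).digest()

k = os.urandom(32)       # k <- {0,1}^256
y = F(k, b"any input")   # deterministic once k is fixed
```

Consistent with the definition, `F` is deterministic: querying it twice on the same input under the same key always gives the same 32-byte answer.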

    An example of a conjectured PRF is the Advanced Encryption Standard (AES). It is typically classified as a block cipher, which refers to a PRF with some extra properties that will be discussed in the next chapter. In particular, AES can be used with a security parameter of \(\lambda \in \{128,192,256\}\), and has input/output length \(in = out = 128\). The amount of scrutiny applied to AES since it was first published in 1998 is considerable. To date, it is not known to be vulnerable to any severe attacks. The best known attacks on AES are only a tiny bit faster than brute-force, and still vastly beyond the realm of feasible computation.