8.12: What has SAT got to do with it?


So why does society, or at least everybody’s secret codes, fall apart if there is an efficient test for satisfiability (SAT), as we claimed in Section 3.5? To explain this, remember that RSA is computationally practical because multiplying two primes is fast, while factoring a product of two primes appears to be hopelessly time-consuming.

Let’s begin with the observation from Section 3.2 that a digital circuit can be described by a bunch of propositional formulas of about the same total size as the circuit. So testing circuits for satisfiability is equivalent to the SAT problem for propositional formulas (see Problem 3.18).
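To make this concrete at the scale of a single gate, here is a small sketch of one standard encoding (the Tseitin-style clauses — a named technique the text itself doesn't spell out, since it defers to Problem 3.18). Three constant-size clauses force a wire `z` to equal the output of an AND gate with inputs `x` and `y`, which is why the formulas describing a whole circuit have about the same total size as the circuit:

```python
from itertools import product

def and_gate_clauses_hold(x, y, z):
    """Tseitin-style clauses asserting that wire z carries AND(x, y):
    (~z or x) and (~z or y) and (z or ~x or ~y)."""
    return (not z or x) and (not z or y) and (z or not x or not y)

# The clauses are satisfied exactly when z == (x and y).
assert all(and_gate_clauses_hold(x, y, z) == (z == (x and y))
           for x, y, z in product([False, True], repeat=3))
```

OR and NOT gates get analogous constant-size clause sets, so a circuit with \(g\) gates yields a formula of size proportional to \(g\).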

Now designing digital multiplication circuits is completely routine. We can easily build a digital “product checker” circuit out of \(\text{AND}\), \(\text{OR}\), and \(\text{NOT}\) gates with 1 output wire and \(4n\) digital input wires. The first \(n\) inputs are for the binary representation of an integer \(i\), the next \(n\) inputs for the binary representation of an integer \(j\), and the remaining \(2n\) inputs for the binary representation of an integer \(k\). The output of the circuit is 1 iff \(ij = k\) and \(i, j > 1\). A straightforward design for such a product checker uses a number of gates proportional to \(n^2\).
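The gate-level design is routine but tedious, so as a sketch we can just state the circuit's input-output behavior as a Python function (this function is a hypothetical stand-in for the circuit, not the circuit itself):

```python
def product_checker(i_bits, j_bits, k_bits):
    """Input-output behavior of the product-checker circuit:
    i_bits and j_bits are n bits each, k_bits is 2n bits
    (most significant bit first). Output is 1 iff i*j == k
    and both i > 1 and j > 1."""
    def to_int(bits):
        return int("".join(map(str, bits)), 2)
    i, j, k = to_int(i_bits), to_int(j_bits), to_int(k_bits)
    return 1 if (i * j == k and i > 1 and j > 1) else 0
```

For example, with \(n = 2\), the inputs \(i = 3\), \(j = 2\), \(k = 6\) (bits `[1,1]`, `[1,0]`, `[0,1,1,0]`) make the checker output 1, while \(i = 1\) forces output 0 because of the \(i > 1\) requirement.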

Now here’s how to factor any number \(m\) with a length-\(2n\) binary representation using a SAT solver. First, fix the last \(2n\) digital inputs—the ones for the binary representation of \(k\)—so that \(k\) equals \(m\).

Next, set the first of the \(n\) digital inputs for the representation of \(i\) to be 1. Do a SAT test to see if there is a satisfying assignment of values for the remaining \(2n - 1\) inputs used for the \(i\) and \(j\) representations. That is, see if the remaining inputs for \(i\) and \(j\) can be filled in to cause the circuit to give output 1. If there is such an assignment, fix the first \(i\)-input to be 1, otherwise fix it to be 0. So now we have set the first \(i\)-input equal to the first digit of the binary representation of an \(i\) such that \(ij = m\).

Now do the same thing to fix the second of the \(n\) digital inputs for the representation of \(i\), and then the third, proceeding in this way through all the \(n\) inputs for the number \(i\). At this point, we have the complete \(n\)-bit binary representation of an \(i > 1\) such that \(ij = m\) for some \(j > 1\). In other words, we have found an integer \(i\) that is a factor of \(m\). We can now find \(j\) by dividing \(m\) by \(i\).
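The bit-fixing loop above can be sketched in Python. The whole point of the argument is a hypothetical efficient SAT solver; since none is assumed here, `sat_test` below is a brute-force stand-in oracle (an assumption of this sketch) that answers the same yes/no question a SAT test on the product-checker circuit would:

```python
from itertools import product

def sat_test(fixed_i_bits, n, m):
    """Stand-in SAT oracle: can the remaining i-bits and some n-bit j
    be filled in so the product checker outputs 1, given that the
    leading bits of i are already fixed? A real reduction would call
    an efficient SAT solver here; brute force keeps the sketch
    self-contained (and of course forfeits the speedup)."""
    free = n - len(fixed_i_bits)
    for rest in product([0, 1], repeat=free):
        i = int("".join(map(str, list(fixed_i_bits) + list(rest))), 2)
        # i*j == m with j > 1 fitting in n bits, i.e. 1 < m//i < 2**n.
        if i > 1 and m % i == 0 and 1 < m // i < 2 ** n:
            return True
    return False

def factor_via_sat(m, n):
    """Factor m (a 2n-bit number with an n-bit factor) using n SAT tests,
    fixing one bit of i per test. Returns (i, j) with i*j == m."""
    bits = []
    for _ in range(n):
        # Tentatively set the next bit of i to 1; keep it iff satisfiable.
        bits.append(1 if sat_test(bits + [1], n, m) else 0)
    i = int("".join(map(str, bits)), 2)
    return i, m // i
```

For instance, `factor_via_sat(35, 3)` fixes the three bits of \(i\) to `1`, `1`, `1` and returns the factorization \(7 \times 5\), using exactly \(n = 3\) oracle calls.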

So after \(n\) SAT tests, we have factored \(m\). This means that if SAT for digital circuits with \(4n\) inputs and about \(n^2\) gates could be determined by a procedure taking a number of steps bounded above by a degree \(d\) polynomial in \(n\), then \(2n\)-digit numbers could be factored in \(n\) times this many steps, that is, with a number of steps bounded by a polynomial of degree \(d + 1\) in \(n\). So if SAT could be solved in polynomial time, then so could factoring, and consequently RSA would be “easy” to break.


This page titled 8.12: What has SAT got to do with it? is shared under a CC BY-NC-SA license and was authored, remixed, and/or curated by Eric Lehman, F. Thomson Leighton, & Albert R. Meyer (MIT OpenCourseWare).