
1.2.2: Substitution Laws


    It’s important to understand that the propositional variables in the laws of Boolean algebra can stand for any propositions, including compound propositions. It is not just true, as the Double Negation Law states, that ¬(¬p) ≡ p. It is also true that ¬(¬q) ≡ q, that ¬(¬(p ∧ q)) ≡ (p ∧ q), that ¬(¬(p → (q ∧ ¬p))) ≡ (p → (q ∧ ¬p)), and an infinite number of other statements of the same form. Here, a “statement of the same form” is one that can be obtained by substituting something for p in both places where it occurs in ¬(¬p) ≡ p. How can I be sure that all these infinitely many statements are valid when all that I’ve checked is one little two-line truth table? The answer is that any given proposition, Q, no matter how complicated, has a particular truth value, either true or false. So, the question of the validity of ¬(¬Q) ≡ Q always reduces to one of the two cases I already checked in the truth table. (Note that for this argument to be valid, the same Q must be substituted for p in every position where it occurs.) While this argument may be ‘obvious’, it is not exactly a proof, but for now we will just accept the validity of the following theorem:
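    The reduction argument above can be checked mechanically. The following Python sketch (the helper name `equivalent` and the particular compound Q are our own illustration, not part of the text) verifies the two-row truth table for double negation and then confirms that a compound Q behaves the same way, since it too only ever takes the values true or false:

```python
from itertools import product

def equivalent(f, g, n):
    """Check that two n-variable Boolean functions agree on every assignment."""
    return all(f(*vals) == g(*vals) for vals in product([False, True], repeat=n))

# The two-row truth table for the Double Negation Law: not(not p) == p.
assert equivalent(lambda p: not (not p), lambda p: p, 1)

# A compound Q, here Q = p -> (q and not p), still takes only the values
# True or False, so the same two rows cover every case.
Q = lambda p, q: (not p) or (q and not p)   # p -> x is encoded as (not p) or x
assert equivalent(lambda p, q: not (not Q(p, q)), Q, 2)
print("double negation holds for the compound Q as well")
```

    Brute-force enumeration over all assignments is exactly the truth-table argument: it works because every proposition, however complicated, denotes a Boolean function.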

    Theorem 2.1

    (First Substitution Law). Suppose that Q is any proposition, and that p is a propositional variable. Consider any tautology. If (Q) is substituted for p in all places where p occurs in the tautology, then the result is also a tautology.

    Since logical equivalence is defined in terms of tautology, it is also true that when (Q) is substituted for p in a logical equivalence, the result is again a logical equivalence.

    The First Substitution Law lets you do algebra! For example, you can substitute r ∧ q for p in the law of double negation, ¬(¬p) ≡ p. This allows you to ‘simplify’ the expression ¬(¬(r ∧ q)) to r ∧ q with confidence that the resulting expression has the same logical value as the expression you started with. (That’s what it means for ¬(¬(r ∧ q)) and r ∧ q to be logically equivalent.) You can play similar tricks with all the laws in Figure 2.2. Even more important is the Second Substitution Law, which says that you can substitute an expression for a logically equivalent expression, wherever it occurs. Once again, we will accept this as a theorem without trying to prove it here. It is surprisingly hard to put this law into words:
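    The substitution instance produced here can itself be confirmed by enumeration. A minimal Python sketch (the helper name `taut` is our own) checks that substituting a conjunction for p in the Double Negation Law yields another tautology, using the fact that P ≡ Q means P ↔ Q is a tautology:

```python
from itertools import product

def taut(f, n):
    """Check that an n-variable Boolean function is true on every assignment."""
    return all(f(*vals) for vals in product([False, True], repeat=n))

# Logical equivalence means the biconditional is a tautology; in Python,
# the biconditional of two Booleans is just ==.
# Substituting (r and q) for p in not(not p) <-> p gives another tautology:
assert taut(lambda r, q: (not (not (r and q))) == (r and q), 2)
print("the substitution instance is a tautology")
```

    Note that the same expression must be substituted at every occurrence of p; substituting at only one occurrence (say, checking ¬(¬(r ∧ q)) ↔ p) would not produce a tautology.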

    Theorem 2.2

    (Second Substitution Law). Suppose that P and Q are any propositions such that P ≡ Q. Suppose that R is any compound proposition in which (P) occurs as a subproposition. Let R′ be the proposition that is obtained by substituting (Q) for that occurrence of (P) in R. Then R ≡ R′.

    Note that in this case, the theorem does not require (Q) to be substituted for every occurrence of (P) in R. You are free to substitute for one, two, or as many occurrences of (P) as you like, and the result is still logically equivalent to R.

    The Second Substitution Law allows us to use the logical equivalence ¬(¬p) ≡ p to ‘simplify’ the expression q → (¬(¬p)) by substituting p for ¬(¬p). The resulting expression, q → p, is logically equivalent to the original q → (¬(¬p)). Once again, we have to be careful about parentheses: The fact that p ∨ p ≡ p does not allow us to rewrite q ∨ p ∨ p ∨ r as q ∨ p ∨ r. The problem is that q ∨ p ∨ p ∨ r means (q ∨ p) ∨ (p ∨ r), so that (p ∨ p) is not a subexpression. This again underlines the importance of always writing parentheses in your propositional formulas.
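    Both claims in this paragraph can be checked by truth-table enumeration. The Python sketch below (reusing a hypothetical `equivalent` helper of our own) verifies the simplification of q → (¬(¬p)) to q → p, and then confirms that the rewrite in the parentheses example does in fact produce an equivalent formula; the point in the text is only that this follows from the Associative and Idempotent Laws applied to properly parenthesized subexpressions, not from a direct substitution for (p ∨ p), which is not a subexpression:

```python
from itertools import product

def equivalent(f, g, n):
    """Check that two n-variable Boolean functions agree on every assignment."""
    return all(f(*vals) == g(*vals) for vals in product([False, True], repeat=n))

# Second Substitution Law: replacing the subproposition not(not p) by p
# inside q -> (not(not p)) gives the logically equivalent q -> p.
assert equivalent(lambda p, q: (not q) or (not (not p)),
                  lambda p, q: (not q) or p, 2)

# (q or p) or (p or r) and (q or p) or r are equivalent, but justifying the
# step needs the Associative and Idempotent Laws: (p or p) is not a
# subexpression of (q or p) or (p or r), so substituting for it directly
# is not licensed by the Second Substitution Law.
assert equivalent(lambda q, p, r: (q or p) or (p or r),
                  lambda q, p, r: (q or p) or r, 3)
print("both equivalences check out")
```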

    1.2.2: Substitution Laws is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.
