3.1: Propositions and Logical Operators
Propositional Logic
Within this subunit, we encounter basic definitions and operators. Fundamental symbology is also presented and discussed. There are several examples that help you understand the math in terms of human language. You will see several ways to say the same thing, while remaining logically and mathematically correct.
Logic is the study of consequence. Given a few mathematical statements or facts, we would like to be able to draw some conclusions. For example, if I told you that a particular real-valued function f is continuous on the interval \([0,1]\), with \(f(0) = -1\) and \(f(1) = 5\), can we conclude that there is some point in \([0,1]\) where the graph of the function crosses the x-axis? Yes, we can, thanks to the Intermediate Value Theorem from calculus. Can we conclude that there is exactly one such point? No. Whenever we find an "answer" in math, we really have a (perhaps hidden) argument. Mathematics is really about proving general statements (like the Intermediate Value Theorem), and this too is done via an argument, usually called a proof. We start with some given conditions, the premises of our argument, and from these we find a consequence of interest, our conclusion.
The problem is, as you no doubt know from arguing with friends, not all arguments are good arguments. A "bad" argument is one in which the conclusion does not follow from the premises, i.e., the conclusion is not a consequence of the premises. Logic is the study of what makes an argument good or bad. In other words, logic aims to determine in which cases a conclusion is, or is not, a consequence of a set of premises.
By the way, "argument" is actually a technical term in math (and philosophy, another discipline which studies logic):
Arguments
An argument is a set of statements, one of which is called the conclusion and the rest of which are called premises. An argument is said to be valid if the conclusion must be true whenever the premises are all true. An argument is invalid if it is not valid; it is possible for all the premises to be true and the conclusion to be false.
For example, consider the following two arguments:

If Edith eats her vegetables, then she can have a cookie. Edith eats her vegetables. Therefore Edith gets a cookie.

Florence must eat her vegetables in order to get a cookie. Florence eats her vegetables. Therefore Florence gets a cookie.
Are these arguments valid? Hopefully, you agree that the first one is but the second one is not. Logic tells us why by analyzing the structure of the statements in the argument. Notice the two arguments above look almost identical. Edith and Florence both eat their vegetables. In both cases, there is a connection between the eating of vegetables and cookies. But we claim that it is valid to conclude that Edith gets a cookie, but not that Florence does. The difference must be in the connection between eating vegetables and getting cookies. We need to be skilled at reading and comprehending these sentences. Do the two sentences mean the same thing? Unfortunately, in everyday language, we are often sloppy, and you might be tempted to say they are equivalent. But notice that just because Florence must eat her vegetables, we have not said that doing so would be enough (she might also need to clean her room, for example). In everyday (non-mathematical) practice, you might be tempted to say this "other direction" is implied. In mathematics, we never get that luxury.
Before proceeding, it might be a good idea to quickly review Section 0.2 where we first encountered statements and the various forms they can take. The goal now is to see what mathematical tools we can develop to better analyze these, and then to see how this helps read and write proofs.
3.1 Propositional Logic
Investigate!
You stumble upon two trolls playing Stratego®. They tell you:
Troll 1: If we are cousins, then we are both knaves.
Troll 2: We are cousins or we are both knaves.
Could both trolls be knights? Recall that all trolls are either always-truth-telling knights or always-lying knaves.
Attempt the above activity before proceeding.
A proposition is simply a statement. Propositional logic studies the ways statements can interact with each other. It is important to remember that propositional logic does not really care about the content of the statements. For example, in terms of propositional logic, the claims, "if the moon is made of cheese then basketballs are round", and "if spiders have eight legs then Sam walks with a limp" are exactly the same. They are both implications: statements of the form, \(P \to Q\).
Truth Tables
Here's a question about playing Monopoly:
If you get more doubles than any other player then you will lose, or if you lose then you must have bought the most properties.
True or false? We will answer this question, and won't need to know anything about Monopoly. Instead, we will look at the logical form of the statement.
We need to decide when the statement \((P \to Q) \vee (Q \to R)\) is true. Using the definitions of the connectives in Section 0.2, we see that for this to be true, either \(P \to Q\) must be true or \( Q \to R\) must be true (or both). Those are true if either P is false or Q is true (in the first case) and Q is false or R is true (in the second case). So – yeah, it gets kind of messy. Luckily, we can make a chart to keep track of all the possibilities. Enter truth tables. The idea is this: on each row, we list a possible combination of T's and F's (for true and false) for each of the sentential variables, and then mark down whether the statement in question is true or false in that case. We do this for every possible combination of T's and F's. Then we can clearly see in which cases the statement is true or false. For complicated statements, we will first fill in values for each part of the statement, as a way of breaking up our task into smaller, more manageable pieces.
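If you would rather let a computer do the bookkeeping, here is a minimal Python sketch of the same idea (the helper name truth_table and its output format are our own choices, not anything from the text): it lists every combination of T's and F's for the sentential variables and evaluates the statement on each row.

```python
from itertools import product

def truth_table(variables, statement):
    """Print one row per combination of truth values of the variables.

    `statement` is a function taking one boolean argument per variable.
    """
    print(*variables, "value", sep="\t")
    for row in product([True, False], repeat=len(variables)):
        cells = ["T" if v else "F" for v in row]
        cells.append("T" if statement(*row) else "F")
        print(*cells, sep="\t")

# Example: the table for P -> Q, encoded by its definition
# (false exactly when P is true and Q is false).
truth_table(["P", "Q"], lambda P, Q: not (P and not Q))
```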
Since the truth value of a statement is completely determined by the truth values of its parts and how they are connected, all you really need to know is the truth tables for each of the logical connectives. Here they are:

\(\begin{array}{c|c|c|c|c}
P & Q & P \wedge Q & P \vee Q & P \to Q \\ \hline
T & T & T & T & T \\
T & F & F & T & F \\
F & T & F & T & T \\
F & F & F & F & T
\end{array}\)

The truth table for negation looks like this:

\(\begin{array}{c|c}
P & \neg P \\ \hline
T & F \\
F & T
\end{array}\)
None of these truth tables should come as a surprise; they are all just restating the definitions of the connectives. Let's try another one.
Example 3.1.1
Make a truth table for the statement \( \neg P \vee Q\).
Solution
Note that this statement is not \( \neg (P \vee Q)\); the negation belongs to P alone. Here is the truth table:

\(\begin{array}{c|c|c|c}
P & Q & \neg P & \neg P \vee Q \\ \hline
T & T & F & T \\
T & F & F & F \\
F & T & T & T \\
F & F & T & T
\end{array}\)
We added a column for \(\neg P\) to make filling out the last column easier. The entries in the \(\neg P\) column were determined by the entries in the P column. Then to fill in the final column, look only at the column for Q and the column for \(\neg P\) and use the rule for \(\vee\).
Now let's answer our question about Monopoly:
Example 3.1.2:
Analyze the statement, "if you get more doubles than any other player you will lose, or if you lose you must have bought the most properties", using truth tables.
Solution

Represent the statement in symbols as \((P \to Q) \vee (Q \to R)\), where P is the statement "you get more doubles than any other player", Q is the statement "you will lose", and R is the statement "you must have bought the most properties". Now make a truth table.
The truth table needs to contain 8 rows in order to account for every possible combination of truth and falsity among the three statements. Here is the full truth table:

\(\begin{array}{c|c|c|c|c|c}
P & Q & R & P \to Q & Q \to R & (P \to Q) \vee (Q \to R) \\ \hline
T & T & T & T & T & T \\
T & T & F & T & F & T \\
T & F & T & F & T & T \\
T & F & F & F & T & T \\
F & T & T & T & T & T \\
F & T & F & T & F & T \\
F & F & T & T & T & T \\
F & F & F & T & T & T
\end{array}\)
The first three columns are simply a systematic listing of all possible combinations of T and F for the three statements (do you see how you would list the 16 possible combinations for four statements?). The next two columns are determined by the values of P, Q, and R and the definition of implication. Then, the last column is determined by the values in the previous two columns and the definition of \(\vee\). It is this final column we care about.
Notice that in each of the eight possible cases, the statement in question is true. So our statement about Monopoly is true (regardless of how many properties you own, how many doubles you roll, or whether you win or lose).
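For those following along in code, here is a small sketch of the same check: evaluate \((P \to Q) \vee (Q \to R)\) on all eight combinations and confirm that every row comes out true. The helper implies is our own encoding of the conditional.

```python
from itertools import product

def implies(a, b):
    # a -> b is false only when a is true and b is false
    return not (a and not b)

# (P -> Q) or (Q -> R) comes out true on all eight rows: a tautology.
rows = product([True, False], repeat=3)
print(all(implies(P, Q) or implies(Q, R) for P, Q, R in rows))  # True
```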
The statement about Monopoly is an example of a tautology, a statement which is true on the basis of its logical form alone. Tautologies are always true, but they don't tell us much about the world. No knowledge about Monopoly was required to determine that the statement was true. In fact, it is equally true that "If the moon is made of cheese, then Elvis is still alive, or if Elvis is still alive, then unicorns have 5 legs".
Logical Equivalence
You might have noticed in Example 3.1.1 that the final column in the truth table for \(\neg P \vee Q\) is identical to the final column in the truth table for \(P \to Q\):

\(\begin{array}{c|c|c|c}
P & Q & P \to Q & \neg P \vee Q \\ \hline
T & T & T & T \\
T & F & F & F \\
F & T & T & T \\
F & F & T & T
\end{array}\)
This says that no matter what P and Q are, the statements \(\neg P \vee Q\) and \( P \to Q\) are either both true or both false. We therefore say these statements are logically equivalent.
Logical Equivalence.
Two (molecular) statements P and Q are logically equivalent provided P is true precisely when Q is true. That is, P and Q have the same truth value under any assignment of truth values to their atomic parts.
To verify that two statements are logically equivalent, you can make a truth table for each and check whether the columns for the two statements are identical.
Recognizing two statements as logically equivalent can be very helpful. Rephrasing a mathematical statement can often lend insight into what it is saying, or how to prove or refute it. By using truth tables we can systematically verify that two statements are indeed logically equivalent.
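Here is one way that check might look in code, as a rough sketch: compare the two final columns row by row. The names check_equivalent and implies are our own.

```python
from itertools import product

def implies(a, b):
    # a -> b is false only when a is true and b is false
    return not (a and not b)

def check_equivalent(n, stmt1, stmt2):
    """True when the two statements agree on every assignment of n variables."""
    return all(stmt1(*row) == stmt2(*row)
               for row in product([True, False], repeat=n))

# not P or Q  versus  P -> Q
print(check_equivalent(2,
                       lambda P, Q: (not P) or Q,
                       lambda P, Q: implies(P, Q)))  # True
```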
Example 3.1.3
Are the statements, "it will not rain or snow" and "it will not rain and it will not snow" logically equivalent?
Solution
We want to know whether \(\neg (P \vee Q)\) is logically equivalent to \(\neg P \wedge \neg Q\). Make a truth table which includes both statements:

\(\begin{array}{c|c|c|c}
P & Q & \neg(P \vee Q) & \neg P \wedge \neg Q \\ \hline
T & T & F & F \\
T & F & F & F \\
F & T & F & F \\
F & F & T & T
\end{array}\)
Since in every row the truth values for the two statements are equal, the two statements are logically equivalent.
Notice that this example gives us a way to "distribute" a negation over a disjunction (an "or"). We have a similar rule for distributing over conjunctions ("and"s):
De Morgan's Laws.
\(\neg (P \wedge Q)\) is logically equivalent to \(\neg P \vee \neg Q\).
This suggests there might be a sort of "algebra" you could apply to statements (okay, there is: it is called Boolean algebra) to transform one statement into another. We can start collecting useful examples of logical equivalence, and apply them in succession to a statement, instead of writing out a complicated truth table.
De Morgan's laws do not directly help us with implications, but as we saw above, every implication can be written as a disjunction:
Implications are Disjunctions.
\(P \to Q\) is logically equivalent to \(\neg P \vee Q\).
Example: "If a number is a multiple of 4, then it is even" is equivalent to, "a number is not a multiple of 4 or (else) it is even".
With this and De Morgan's laws, you can take any statement and simplify it to the point where negations are only being applied to atomic propositions. Well, almost: you could end up with multiple negations stacked up. But these are easily dealt with:
Double Negation.
\(\neg \neg P\) is logically equivalent to P.
Example: "It is not the case that c is not odd" means "c is odd".
Let's see how we can apply the equivalences we have encountered so far.
Example 3.1.4:
Prove that the statements \(\neg (P \to Q)\) and \(P \wedge \neg Q\) are logically equivalent without using truth tables.
Solution
We want to start with one of the statements and transform it into the other through a sequence of logically equivalent statements. Start with \(\neg (P \to Q)\). Rewriting the implication as a disjunction, this is logically equivalent to
\(\neg ( \neg P \vee Q)\).
Now apply DeMorgan's law to get
\(\neg \neg P \wedge \neg Q\).
Finally, use double negation to arrive at \(P \wedge \neg Q\).
Notice that the above example illustrates that the negation of an implication is NOT an implication: it is a conjunction! We saw this before, in Section 0.2, but it is so important and useful, it warrants a second blue box here:
Negation of an Implication.
The negation of an implication is a conjunction:
\(\neg(P \to Q)\) is logically equivalent to \(P \wedge \neg Q\).
That is, the only way for an implication to be false is for the hypothesis to be true AND the conclusion to be false.
To verify that two statements are logically equivalent, you can use truth tables or a sequence of logically equivalent replacements. The truth table method, although cumbersome, has the advantage that it can verify that two statements are NOT logically equivalent.
Example 3.1.5:
Are the statements \((P \vee Q) \to R\) and \((P \to R) \vee (Q \to R)\) logically equivalent?
Solution
Note that while we could start rewriting these statements with logically equivalent replacements in the hopes of transforming one into the other, we will never be sure whether our failure is due to their lack of logical equivalence or to our lack of imagination. So instead, let's make a truth table:

\(\begin{array}{c|c|c|c|c}
P & Q & R & (P \vee Q) \to R & (P \to R) \vee (Q \to R) \\ \hline
T & T & T & T & T \\
T & T & F & F & F \\
T & F & T & T & T \\
T & F & F & F & T \\
F & T & T & T & T \\
F & T & F & F & T \\
F & F & T & T & T \\
F & F & F & T & T
\end{array}\)
Look at the fourth (or sixth) row. In this case, \((P \to R) \vee (Q \to R)\) is true, but \((P \vee Q) \to R\) is false. Therefore the statements are not logically equivalent.
While we don't have logical equivalence, it is the case that whenever \((P \vee Q) \to R\) is true, so is \((P \to R) \vee (Q \to R)\). This tells us that we can deduce \((P \to R) \vee (Q \to R)\) from \((P \vee Q) \to R\), just not the reverse direction.
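Both observations can be checked with a short sketch like the following (the helper names are our own): the two statements disagree on at least one row, yet every row that makes \((P \vee Q) \to R\) true also makes \((P \to R) \vee (Q \to R)\) true.

```python
from itertools import product

def implies(a, b):
    return not (a and not b)

def s1(P, Q, R):
    # (P or Q) -> R
    return implies(P or Q, R)

def s2(P, Q, R):
    # (P -> R) or (Q -> R)
    return implies(P, R) or implies(Q, R)

rows = list(product([True, False], repeat=3))
print(all(s1(*r) == s2(*r) for r in rows))   # False: not logically equivalent
print(all(s2(*r) for r in rows if s1(*r)))   # True: s2 is true whenever s1 is
```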
Deductions
Investigate!
Holmes owns two suits: one black and one tweed. He always wears either a tweed suit or sandals. Whenever he wears his tweed suit and a purple shirt, he chooses to not wear a tie. He never wears the tweed suit unless he is also wearing either a purple shirt or sandals. Whenever he wears sandals, he also wears a purple shirt. Yesterday, Holmes wore a bow tie. What else did he wear?
Attempt the above activity before proceeding.
Earlier we claimed that the following was a valid argument:
If Edith eats her vegetables, then she can have a cookie. Edith ate her vegetables. Therefore Edith gets a cookie.
How do we know this is valid? Let's look at the form of the statements. Let P denote "Edith eats her vegetables" and Q denote "Edith can have a cookie". The logical form of the argument is then:

\(\begin{array}{l}
P \to Q \\
P \\ \hline
\therefore Q
\end{array}\)

To see why this form is valid, make a truth table that has a column for each line of the argument:

\(\begin{array}{c|c|c}
P & Q & P \to Q \\ \hline
T & T & T \\
T & F & F \\
F & T & T \\
F & F & T
\end{array}\)
This is just the truth table for \(P \to Q\), but what matters here is that all the lines in the deduction rule have their own column in the truth table. Remember that an argument is valid provided the conclusion must be true given that the premises are true. The premises in this case are \(P \to Q\) and P. Which rows of the truth table correspond to both of these being true? P is true in the first two rows, and of those, only the first row has \(P \to Q\) true as well. And lo-and-behold, in this one case, Q is also true. So if \(P \to Q\) and P are both true, we see that Q must be true as well.
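This row-by-row reasoning is easy to automate. Below is a minimal sketch (the helper is_valid is our own, not part of the text): an argument form counts as valid exactly when the conclusion is true in every row where all the premises are true, illustrated on the Edith argument.

```python
from itertools import product

def implies(a, b):
    return not (a and not b)

def is_valid(n, premises, conclusion):
    """Valid: the conclusion is true in every row where all the premises are true."""
    return all(conclusion(*row)
               for row in product([True, False], repeat=n)
               if all(p(*row) for p in premises))

# P -> Q, P, therefore Q
print(is_valid(2,
               [lambda P, Q: implies(P, Q), lambda P, Q: P],
               lambda P, Q: Q))  # True
```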
Here are a few more examples.
Example 3.1.6
Show that

\(\begin{array}{l}
P \to Q \\
\neg P \to Q \\ \hline
\therefore Q
\end{array}\)

is a valid deduction rule.
Solution
We make a truth table which contains all the lines of the argument form:

\(\begin{array}{c|c|c|c|c}
P & Q & \neg P & P \to Q & \neg P \to Q \\ \hline
T & T & F & T & T \\
T & F & F & F & T \\
F & T & T & T & T \\
F & F & T & T & F
\end{array}\)
(We include a column for \(\neg P\) just as a step to help in getting the column for \(\neg P \to Q\).)
Now, look at all the rows for which both \(P \to Q\) and \(\neg P \to Q\) are true. This happens only in rows 1 and 3. Hey! In those rows, Q is true as well, so the argument form is valid (it is a valid deduction rule).
Example 3.1.7
Decide whether

\(\begin{array}{l}
P \to R \\
Q \to R \\
R \\ \hline
\therefore P \vee Q
\end{array}\)

is a valid deduction rule.
Solution
Let's make a truth table containing all four statements:

\(\begin{array}{c|c|c|c|c|c}
P & Q & R & P \to R & Q \to R & P \vee Q \\ \hline
T & T & T & T & T & T \\
T & T & F & F & F & T \\
T & F & T & T & T & T \\
T & F & F & F & T & T \\
F & T & T & T & T & T \\
F & T & F & T & F & T \\
F & F & T & T & T & F \\
F & F & F & T & T & F
\end{array}\)
Look at the second to last row. Here all three premises of the argument are true, but the conclusion is false. Thus this is not a valid deduction rule.
While we have the truth table in front of us, look at rows 1, 3, and 5. These are the only rows in which all of the statements \(P \to R\), \(Q \to R\), and \(P \vee Q\) are true. It also happens that R is true in these rows as well. Thus we have discovered a new deduction rule we know is valid:

\(\begin{array}{l}
P \to R \\
Q \to R \\
P \vee Q \\ \hline
\therefore R
\end{array}\)
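Running the same kind of validity check as before (again only a sketch, with our own helper names), the original argument form fails while the repaired one passes:

```python
from itertools import product

def implies(a, b):
    return not (a and not b)

def is_valid(n, premises, conclusion):
    return all(conclusion(*row)
               for row in product([True, False], repeat=n)
               if all(p(*row) for p in premises))

common = [lambda P, Q, R: implies(P, R), lambda P, Q, R: implies(Q, R)]

# P -> R, Q -> R, R, therefore P or Q: invalid
print(is_valid(3, common + [lambda P, Q, R: R], lambda P, Q, R: P or Q))  # False
# P -> R, Q -> R, P or Q, therefore R: valid
print(is_valid(3, common + [lambda P, Q, R: P or Q], lambda P, Q, R: R))  # True
```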
Beyond Propositions
As we saw in Section 0.2, not every statement can be analyzed using logical connectives alone. For example, we might want to work with the statement:
All primes greater than 2 are odd.
To write this statement symbolically, we must use quantifiers. We can translate as follows:
\(\forall x((P(x) \wedge x > 2) \to O(x))\).
In this case, we are using P(x) to denote "x is prime" and O(x) to denote "x is odd". These are not propositions, since their truth-value depends on the input x. Better to think of P and O as denoting properties of their input. The technical term for these is predicates and when we study them in logic, we need to use predicate logic.
It is important to stress that predicate logic extends propositional logic (much in the way quantum mechanics extends classical mechanics). You will notice that our statement above still used the (propositional) logical connectives. Everything that we learned about logical equivalence and deductions still applies. However, predicate logic allows us to analyze statements at a higher resolution, digging down into the individual propositions P, Q, etc.
A full treatment of predicate logic is beyond the scope of this text. One reason is that there is no systematic procedure for deciding whether two statements in predicate logic are logically equivalent (i.e., there is no analogue to truth tables here). Rather, we end with two examples of logical equivalence and deduction, to pique your interest.
Example 3.1.8
Suppose we claim that there is no smallest number. We can translate this into symbols as
\(\neg \exists x \forall y (x \le y)\)
(literally, "it is not true that there is a number x such that for all numbers y, x is less than or equal to y").
However, we know how negation interacts with quantifiers: we can pass a negation over a quantifier by switching the quantifier type (between universal and existential). So the statement above should be logically equivalent to
\(\forall x \exists y (y < x)\).
Notice that \(y < x\) is the negation of \(x \le y\). This literally says, "for every number x there is a number y which is smaller than x". We see that this is another way to make our original claim.
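On a finite domain the two symbolic forms can be compared directly, since Python's all and any play the roles of \(\forall\) and \(\exists\). Here is a small sketch over the domain \(\{0, 1, \ldots, 9\}\), a choice made purely for illustration; both forms come out false, as the equivalence predicts, because a finite domain does have a smallest element.

```python
D = range(10)  # a small stand-in domain, chosen only for illustration

# not exists x forall y (x <= y)
left = not any(all(x <= y for y in D) for x in D)
# forall x exists y (y < x)
right = all(any(y < x for y in D) for x in D)

print(left, right)  # False False: the two forms agree on this domain
```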
Example 3.1.9
Can you switch the order of quantifiers? For example, consider the two statements:
\(\forall x \exists y P(x,y)\) and \(\exists y \forall x P(x,y)\).
Are these logically equivalent?
Solution
These statements are NOT logically equivalent. To see this, we should provide an interpretation of the predicate P(x, y) which makes one of the statements true and the other false.
Let P(x, y) be the predicate \(x < y\). It is true, in the natural numbers, that for all x there is some y greater than that x (since there are infinitely many numbers). However, there is not a natural number y which is greater than every number x. Thus it is possible for \(\forall x \exists y P(x,y)\) to be true while \(\exists y \forall x P(x,y)\) is false.
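A finite-domain sketch makes the failure of equivalence concrete. With the predicate \(x < y\) the difference only shows up on an infinite domain, so here we swap in the illustrative predicate P(x, y) meaning \(x = y\) on a three-element domain; this predicate and domain are our own choices for the example.

```python
D = range(3)                # a three-element domain, chosen for illustration
P = lambda x, y: x == y     # an illustrative predicate, not the x < y from the text

forall_exists = all(any(P(x, y) for y in D) for x in D)  # forall x exists y P(x, y)
exists_forall = any(all(P(x, y) for x in D) for y in D)  # exists y forall x P(x, y)

print(forall_exists, exists_forall)  # True False: the order of quantifiers matters
```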
We cannot do the reverse of this though. If there is some y for which every x satisfies P(x, y), then certainly for every x there is some y which satisfies P(x, y). The first is saying we can find one y that works for every x. The second allows different y's to work for different x's, but there is nothing preventing us from using the same y that works for every x. In other words, while we don't have logical equivalence between the two statements, we do have a valid deduction rule:

\(\begin{array}{l}
\exists y \forall x P(x,y) \\ \hline
\therefore \forall x \exists y P(x,y)
\end{array}\)
Put yet another way, this says that the single statement
\(\exists y \forall x P(x,y) \to \forall x \exists y P(x,y)\)
is always true. This is sort of like a tautology, although we reserve that term for necessary truths in propositional logic. A statement in predicate logic that is necessarily true gets the more prestigious designation of a law of logic (or sometimes logically valid, but that is less fun).