PropPred
LEO GOLDMAKHER
The goal of this essay is to describe two types of logic: Propositional Calculus (also called 0th order logic)
and Predicate Calculus (also called 1st order logic). Both work with propositions and logical connectives, but
Predicate Calculus is more general than Propositional Calculus: it allows variables, quantifiers, and relations.
1. PROPOSITIONAL CALCULUS
Given two numbers, we have various ways of combining them: add them, multiply them, etc. We can also
take the negative or absolute value or square of a single number, and apply various functions to a given number.
In other words, we can perform various operations on both individual numbers and on collections of numbers,
and this endows the set of all numbers with a rich structure (e.g. arithmetic).
Can we do the same for mathematical argument? Is there an arithmetic of mathematical assertions?
1.1. Propositions. The first thing to do is to formally define what ‘mathematical assertion’ means. We shall
refer to a mathematical assertion as a proposition; the book uses the word statement for this concept.
Definition. A proposition is a statement that is either true or false: not both, not neither, and not sometimes one
and sometimes the other.
For example:
(1) Williams College is located in Williamstown. is a proposition (because it’s true).
(2) Leo is a frog. is a proposition (because it’s false).
(3) You are located in Williamstown. is not a proposition, because it’s sometimes true and sometimes false.
(4) This statement is false. is not a proposition, because it is neither true nor false.
(5) Every even number larger than 2 is the sum of two primes. is a proposition, because it’s either true or
false. (No one knows which! This is called Goldbach’s conjecture, and it’s been open for at least 270
years.)
The last example is a special type of proposition called a predicate, which we’ll discuss later. In the meantime,
we return to our original question: given two propositions, how can we combine them?
1.2. Boolean Algebra. We use logical connectives: and, or, not, thus, etc. These have fancy names and
symbols:
(1) ‘and’ is called conjunction, denoted ∧
(2) ‘or’ is called disjunction, denoted ∨
(3) ‘not’ is called negation, denoted ¬
(4) ‘thus’ or ‘if... then...’ is called conditional, denoted =⇒
Now we can study the “arithmetic” formed by propositions under these (and other) logical connectives. For
brevity, set
P := Williams College is in Williamstown and Q := Leo is a frog
and consider the following.
(1) ¬Q is true
(2) P ∧ Q is false
(3) P ∨ Q is true
(4) P =⇒ Q: true or false?
(5) P ⇐= Q: true or false?
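The first three evaluations can be checked mechanically by modeling propositions as booleans; a quick sketch (the variable names simply mirror this section’s P and Q):

```python
# Model the two propositions as Python booleans.
P = True   # P: Williams College is in Williamstown
Q = False  # Q: Leo is a frog

print(not Q)    # negation: True
print(P and Q)  # conjunction: False
print(P or Q)   # disjunction: True
```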
The last two are tricky, because it’s not clear at all what the connection is between P and Q and how to evaluate
the validity of the statement as a whole. Let’s abstract these. The first is of the form
true proposition =⇒ false proposition.
Ideally one shouldn’t be able to logically deduce a false statement from a true one (at least, this goes against
the intuition of what logic is for!). So this hints that we should declare the proposition P =⇒ Q as false.
The second questionable statement was of the form
false proposition =⇒ true proposition.
Is this something we want to accept? Should it be possible to logically deduce a true statement from a false
one? There are several ways to argue why it’s reasonable to accept this as a logical argument. First, it’s a
proposition that’s vacuously true. To see this, consider the assertion
Every human made of cheese is named Bill.
This is vacuously true—they are all named Bill, because there are no such humans. We can restate this as a
conditional statement:
If a human is made of cheese, then the human’s name is Bill.
In other words, whenever we have an assertion of the form (false statement) =⇒ (any statement) we might
consider it true vacuously, because the initial condition is never met, so anything can happen.¹ This argument is
a bit suspect, though: ‘the human’s name is Bill’ isn’t a proposition at all!
In a similar vein, one can think about promises made and kept. Suppose I announced:
If you give me a Lamborghini, then I’ll give you an A.
If you gave me a Lamborghini and I gave you an A, I was telling the truth. And if you didn’t give me a
Lamborghini and I didn’t give you an A, I was still telling you the truth. What if you didn’t give me a Lambo
and I gave you an A? I was still telling the truth! The only situation in which I was not telling the truth is if
you gave me a Lambo and I didn’t give you an A. Thus we’re tempted to assert that (True =⇒ True) is true,
(False =⇒ True) is true, (False =⇒ False) is true, and (True =⇒ False) is false. Once again, this argument
is suspect, since our example doesn’t involve propositions. (“You give me a Lamborghini” is neither true nor
false, so it’s not a proposition.)
A more convincing argument comes from a mathematical example involving actual propositions. Consider
the following logical deduction:
2 = 3 =⇒ 0 × 2 = 0 × 3 (both sides being 0).
In other words, if we assume that 2 really does equal 3, then it logically follows that 0 equals 0. Of course, it
turns out that 0 equals 0 even without making this weird assumption! But that isn’t relevant here: we’ve shown
that we can deduce a true statement from a false one. Of course, we can also logically deduce a false proposition
from a false one:
2 = 3 =⇒ 2 = 3.
And we can deduce a true proposition from a true one:
0 = 0 =⇒ 0 = 0.
The only thing we cannot accomplish is to deduce a false proposition from a true one. (Try to deduce 2 = 3
from the starting point 0 = 0!)
Inspired by this, we define the truth value of P =⇒ Q using the following ‘truth table’:
P Q P =⇒ Q
T T T
T F F
F T T
F F T
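A minimal sketch of this table in code: Python has no built-in =⇒ operator, so we define a helper (the name `implies` is ours, not a standard function) that is false only in the second row:

```python
def implies(p: bool, q: bool) -> bool:
    """Truth value of p => q: false only when p is true and q is false."""
    return (not p) or q

# Reproduce the truth table above.
for p in (True, False):
    for q in (True, False):
        print(p, q, implies(p, q))
```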
¹ This type of argument is summed up in a famous idiom: If my grandmother had wheels, she’d be a truck.
Note that the two statements P =⇒ Q and Q =⇒ P are logically independent. They might both be true, or
one might be true and the other false. (Can you come up with examples of each?) A visual way to see this
is to compare their truth tables:
P Q P =⇒ Q P ⇐= Q
T T T T
T F F T
F T T F
F F T T
Their outputs aren’t always the same, so P =⇒ Q and P ⇐= Q are not logically equivalent. By the way,
these have fancy names: given the conditional statement P =⇒ Q, the proposition P ⇐= Q is called its
converse.
We can create similar truth tables for other combinations of propositions, for example for (¬P ) ⇐= (¬Q):
P Q ¬P ¬Q (¬P ) ⇐= (¬Q)
T T F F T
T F F T F
F T T F T
F F T T T
Notice anything? This is the same output as P =⇒ Q! In other words, (¬P ) ⇐= (¬Q) is logically
equivalent to P =⇒ Q; we represent this symbolically as
(¬P ) ⇐= (¬Q) ≡ P =⇒ Q.
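This equivalence can be verified by brute force over all four truth assignments; a small sketch (`implies` is our own helper, defined to match the conditional’s truth table):

```python
from itertools import product

def implies(p, q):
    """p => q: false only when p is true and q is false."""
    return (not p) or q

# (not P) <= (not Q) means (not Q) => (not P); check it agrees with
# P => Q for every assignment of truth values.
for P, Q in product((True, False), repeat=2):
    assert implies(P, Q) == implies(not Q, not P)
print("equivalent for all truth values")
```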
The odd-looking proposition on the left hand side is called the contrapositive of the conditional P =⇒ Q,
and the fact that it’s logically equivalent is super useful. In fact, we’ve already used it in this course: the way
we justified that
a² is even =⇒ a is even
was by arguing
a is odd =⇒ a² is odd,
which is the contrapositive of the first statement.
1.3. Transforming boolean algebra into a genuine arithmetic. Given a bunch of propositions, we have four
operations (¬, ∧, ∨, and =⇒ ) that we can apply to them (either to a single proposition, or to two propositions)
to create new propositions. This is highly reminiscent of arithmetic: given a bunch of numbers, we have four
operations (+, −, ×, ÷) that we can apply to them to create new numbers.
One fun observation is that we can turn Boolean algebra into literal arithmetic. To do this, we assign numer-
ical values to the truth of a given proposition, say
#P := 1 if P is true, and 0 if P is false.
Thus #(Leo is a frog) = 0. Using this notation, we can rewrite our truth table for ∧:
#P #Q #(P ∧ Q)
1 1 1
1 0 0
0 1 0
0 0 0
Note that this is indistinguishable from a (very short) multiplication table. So ∧ is the logical equivalent of
multiplication, i.e.
#(P ∧ Q) = #P · #Q.
A bit of thought shows we can play the same game for ¬P :
#(¬P ) = 1 − #P.
There are similar formulas one can derive for the other boolean operations.
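These identities (together with one such formula for ∨, via inclusion-exclusion) can be checked by exhausting all truth values, with # modeled as conversion to 0 or 1; a sketch:

```python
from itertools import product

def num(p: bool) -> int:
    """#P: 1 if P is true, 0 if P is false."""
    return 1 if p else 0

for P, Q in product((True, False), repeat=2):
    assert num(P and Q) == num(P) * num(Q)   # conjunction is multiplication
    assert num(not P) == 1 - num(P)          # negation is 1 - #P
    # one formula for disjunction (inclusion-exclusion):
    assert num(P or Q) == num(P) + num(Q) - num(P) * num(Q)
print("all identities verified")
```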
1.4. Biconditionals. Now we come to a logical connective that will play a fundamental role in our work this
semester: the biconditional. To motivate it, first consider the following question: what does it mean for a
number to be a perfect square? A number is a perfect square if it’s the square of an integer. Right?
Wrong! This ‘definition’ tells us that 4 is a perfect square, but it doesn’t tell us whether or not 5 is a perfect
square. Using ‘if and only if’ (usually abbreviated iff) resolves the issue, however:
Definition. We say an integer n is a perfect square iff n = k² for some integer k.
The boolean operator that represents this is called a biconditional and is denoted ⇐⇒ . More precisely, we
define ⇐⇒ via the following truth table:
P Q P ⇐⇒ Q
T T T
T F F
F T F
F F T
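A brute-force check of this table: P ⇐⇒ Q holds exactly when P and Q have the same truth value, and it agrees with the conjunction of the two one-way conditionals (a sketch; `implies` is our own helper):

```python
from itertools import product

def implies(p, q):
    """p => q: false only when p is true and q is false."""
    return (not p) or q

# P <=> Q is true exactly when P and Q have the same truth value,
# which matches (P => Q) and (Q => P).
for P, Q in product((True, False), repeat=2):
    biconditional = (P == Q)
    assert biconditional == (implies(P, Q) and implies(Q, P))
print("table verified")
```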
Iff captures the idea that P and Q are logically equivalent: each one implies the other. Another way to write
this idea down is
(P ⇐⇒ Q) ≡ (P =⇒ Q) ∧ (P ⇐= Q)
a logical equivalence that is easily proved via a truth table. In practice this is usually how we prove biconditional
statements. Let’s do an example.
Proposition 1.1. n² − 1 is a perfect square iff n = ±1.
Proof. We prove the two directions separately. We begin with the easier of the two:
( ⇐= ) If n = ±1, then n² − 1 = 0, which is a perfect square.
( =⇒ ) If n² − 1 is a perfect square, then there exists an integer a such that
n² − 1 = a².
Thus (n − a)(n + a) = 1. This implies that both n + a and n − a equal 1, or both equal −1; in either case,
n + a = n − a. Thus a = 0, whence n² = 1, so n = ±1 as claimed. □
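As a numerical sanity check (not a proof), we can search a range of integers for those n making n² − 1 a perfect square; a minimal sketch:

```python
import math

def is_perfect_square(m: int) -> bool:
    """True iff m is the square of an integer."""
    return m >= 0 and math.isqrt(m) ** 2 == m

solutions = [n for n in range(-100, 101) if is_perfect_square(n * n - 1)]
print(solutions)  # prints [-1, 1]
```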
2. PREDICATE CALCULUS
In practice we often want to make statements like
Conjecture 2.1 (Goldbach’s Conjecture). Every even number larger than 2 is the sum of two primes.
This is a proposition, but it’s a rather fancy type of proposition, because it contains within it multiple proposi-
tions: one for each even number larger than 2. This type of statement is called a predicate.
Definition. A predicate is a sentence written in terms of a finite set of variables that becomes a proposition for
any choice of those variables allowed by the sentence.
What are the variables in Goldbach’s conjecture?
• the even number, and
• the two primes.
We can rewrite Goldbach’s conjecture in a way that makes the variables explicit:
Conjecture 2.2. For all even integers n ≥ 4, there exist primes p and q such that n = p + q.
To be propositions, predicates need to have some quantifiers and relations that specify what the variables are
allowed to be. For the purposes of this course, we’ll require just two quantifiers: ‘for all’ (aka ‘for every’ or
‘for any’) and ‘there exist(s)’. We’ve encountered examples of these already, albeit implicitly:
(1) For all positive integers N we have 1 + 2 + · · · + N = N(N + 1)/2.
(2) For all integers a, b with b ≠ 0, we have a/b ≠ √2.
(3) For every integer n ≥ 2 there exists a prime p such that n is divisible by p.
(4) For any finite collection of primes, there exists a prime not in our collection.
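Over finite ranges, ‘for all’ corresponds to Python’s all() and ‘there exists’ to any(); here is a finite-range check of statements (1) and (3). (Since the quantifiers really range over infinite sets, this is evidence, not a proof.)

```python
def is_prime(p: int) -> bool:
    """Trial-division primality test."""
    return p >= 2 and all(p % d != 0 for d in range(2, int(p ** 0.5) + 1))

# (1) For all positive integers N (up to a bound): 1 + 2 + ... + N = N(N+1)/2.
assert all(sum(range(1, N + 1)) == N * (N + 1) // 2 for N in range(1, 200))

# (3) For every integer n >= 2 (up to a bound), there exists a prime p dividing n.
assert all(any(is_prime(p) and n % p == 0 for p in range(2, n + 1))
           for n in range(2, 200))
print("both statements hold on the tested ranges")
```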
But these statements have something in common other than the quantifiers: they all contain relations specifying
the set where the variables are allowed to live. For example, we can rewrite Goldbach’s conjecture in the form
Conjecture 2.3. For all n belonging to {even integers ≥ 4}, there exist p, q belonging to {primes} such that
n = p + q.
The other predicates we wrote down can be translated into formal predicate form as well:
(1) For all N belonging to {positive integers} we have 1 + 2 + · · · + N = N(N + 1)/2.
(2) For all a belonging to {integers} and all b belonging to {nonzero integers}, a/b ≠ √2.
(3) For all n belonging to {integers ≥ 2} there exists a prime p such that n is divisible by p.
(4) For all n belonging to {positive integers} and for all p1, p2, . . . , pn belonging to {primes}, there exists
q belonging to {primes} such that q ≠ pi for all i belonging to {integers ≥ 1 and ≤ n}.
The fact that we’re writing the same phrases over and over again motivates introducing some notation!
(1) ∀ means ‘for all’
(2) ∃ means ‘there exists’
(3) ∈ means ‘belongs to’
Thus, for example, Goldbach’s conjecture becomes
Conjecture 2.4. ∀n ∈ {even integers ≥ 4}, ∃p, q ∈ {primes} such that n = p + q.
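We can test Goldbach’s conjecture for small even numbers; checking finitely many cases proves nothing about the conjecture itself, of course. A sketch (`goldbach_pair` is our own name):

```python
def is_prime(p: int) -> bool:
    """Trial-division primality test."""
    return p >= 2 and all(p % d != 0 for d in range(2, int(p ** 0.5) + 1))

def goldbach_pair(n: int):
    """Return primes (p, q) with p + q = n, or None if no such pair exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

# Every even n with 4 <= n < 100 is a sum of two primes.
for n in range(4, 100, 2):
    assert goldbach_pair(n) is not None
print(goldbach_pair(28))  # prints (5, 23)
```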
Here’s how math works. You start with a finite set of axioms; these are propositions that are defined to be
true. Next, you create theorems, which are true propositions that can be logically deduced from propositions
already known to be true. (At the beginning, only the axioms are known to be true; from these you deduce
some theorems; from these theorems and the axioms, you deduce other theorems; etc.) A mathematical theory
is a finite set of axioms and the set of all propositions that are true with respect to these axioms. Here are some
examples of mathematical theories:
(1) Euclidean geometry (generated by 10 axioms)
(2) Non-Euclidean geometry (generated by the same axioms as Euclidean geometry, apart from the parallel
postulate)
(3) Number theory (generated by the 9 Peano axioms)
(4) Real analysis (generated by 13 axioms)
(5) Set theory (generated by the 9 axioms of ZFC)
R EMARK . This is all extremely different from science and philosophy, in which theory means a conjecture.
Broadly speaking, the goal of science is to discover a minimal set of axioms from which explanations for all
observed phenomena can be derived. The goal of math, by contrast, is exactly the opposite: to explore the
landscape of a given theory by discovering new theorems.
In practice, mathematicians don’t take proofs down all the way to the axioms; instead, they reduce down to
a bunch of statements that are known consequences of the axioms. We’ll do the same in this course, relying on
an unnecessarily huge set of axioms: things you learned up through high school. To see honest reductions all
the way down to the level of axioms, you should take abstract algebra (355) and real analysis (350).
Here are a few different types of theorems you’ll encounter in the wild:
(1) A Lemma is a theorem whose primary interest is its utility in the proof of another theorem.
(2) A Proposition is a little theorem.
(3) A Corollary is a theorem that’s a quick consequence of another theorem.
(4) A Conjecture is a proposition, made in the language of the axioms, that is suspected to be true but has not yet been proved.
A much less common word (but extremely useful concept) is Porism: a theorem that’s a quick consequence of
the proof of another theorem.
5. FINAL REMARKS ON LOGIC
Mathematical logic is the formal study of mathematical theories. It is a beautiful (albeit difficult) field.
Here’s a very brief discussion of some of the major breakthroughs from the 20th century. The main thrust is
the question of whether a given mathematical theory is nice. Here are some attributes we might wish a nice
theory to possess:
(1) Soundness: every finite formal deduction from the axioms yields a true proposition. (Colloquially:
every valid proof yields a true statement.)
(2) Completeness: every true proposition can be attained via a finite sequence of logical deductions from
the axioms. (Colloquially: every true statement can be proved.)
(3) Consistency: no two theorems derived from the axioms will contradict one another.
Can one prove that a given set of axioms leads to a theory that possesses the above desirable qualities?
The first progress towards this was made by Paul Bernays (in 1918) and Emil Post (in 1921), who proved
that propositional calculus (viewed as a mathematical theory itself) is sound and complete. (Colloquially, this
says that a proposition is true iff it is a theorem.) This implies that propositional calculus is also consistent.
This is neat, but propositional calculus is extremely simplistic as a model of mathematics. In his 1929 doctoral
dissertation, Kurt Gödel proved that predicate calculus – a much richer theory than propositional calculus – is
also sound and complete. This result is now known as the Completeness Theorem.
Even predicate calculus doesn’t quite model the complexity of doing mathematics. What about taking an
actually practiced mathematical theory, like number theory? Is it a nice theory? Gödel surprised working
mathematicians by proving that any theory that’s strong enough to do arithmetic cannot simultaneously be
consistent and complete. In other words, number theory is either inconsistent (has contradictory theorems) or
incomplete (there exist propositions about numbers that, if true, are not provable from the axioms). This crazy
result is now known as Gödel’s first incompleteness theorem.
The first incompleteness theorem shows that consistency and completeness are incompatible in any suffi-
ciently interesting mathematical theory. But which one fails? Can we check whether or not number theory is
consistent, for example? Following up on his first theorem, Gödel then proved a second fundamental limitation
on interesting mathematical theories: he proved that any theory that’s strong enough to do some arithmetic
cannot prove its own consistency (assuming that it is consistent). This is now known as Gödel’s second
incompleteness theorem. It’s worth pointing out that we can sometimes prove consistency from within a larger
framework, e.g. the consistency of the Peano axioms for arithmetic is provable in the broader set theory of
ZFC.
All of this seems super abstract – largely because it is – but it led to fundamental breakthroughs in computer
science as well. One of the most famous is Turing’s work on the Halting Problem. The problem is this: does
there exist an algorithm that, given as inputs a computer program and an input into the program, predicts
whether the program will terminate (halt) after a finite number of steps? (This is desirable, since the alternative
is for a program to enter an infinite loop and never produce an output.) Inspired by Gödel’s work, Turing proved
(in 1936) that such an algorithm cannot exist. This reveals a fundamental limitation on the ability of computer
programs to analyze other programs.
D EPT OF M ATHEMATICS & S TATISTICS , W ILLIAMS C OLLEGE , W ILLIAMSTOWN , MA, USA
E-mail address: leo.goldmakher@williams.edu