Coming to the right conclusion has been the theme of this book. We learned how to represent statements formally in Chapter 2, and how to prove or disprove statements in Chapter 3. We looked at further foundational parts of computer science in Chapter 4: sets, functions and relations.
Last chapter, I said that the foundations of mathematics are in “a bit of a philosophical muddle”. That was at the end of our discussion about counting past infinity (Section 4.6). The questions from the work of Cantor, Russell and others became more profound in the 1930s thanks to Kurt Gödel. All this just before practical computers were invented—yes, the theoretical study of computing began before computers existed!
Since this book is about the foundations of computation, let’s mention Gödel and his contemporary, Alan Turing.
Gödel published his two incompleteness theorems in 1931. The first incompleteness theorem states that for any self-consistent recursive axiomatic system powerful enough to describe the arithmetic of the natural numbers1, there are true propositions about the naturals that cannot be proved from the axioms. In other words, in any such formal system of logic, there will always be statements that you can neither prove nor disprove: you don’t know.
These two theorems ended a half-century of attempts, beginning with the work of Frege and culminating in the work of Russell and others, to find a set of axioms sufficient for all mathematics.
John von Neumann, one of the pioneers of the first computers, wrote “Kurt Gödel’s achievement in modern logic is singular and monumental—indeed it is more than a monument, it is a landmark which will remain visible far in space and time.” John von Neumann was a brilliant Hungarian-American mathematician, physicist and computer scientist. Among many other things, he invented the von Neumann architecture (familiar from Computer Organisation?) and is called the ‘midwife’ of the modern computer.
Source: en.wikipedia.org/wiki/Kurt_Gödel and en.wikipedia.org/wiki/John_von_Neumann.
Around the same time, one of the early models of computation was developed by the British mathematician Alan Turing. Turing was interested in studying the theoretical abilities and limitations of computation. (This was before the first computers! Von Neumann knew Turing and emphasized that “the fundamental conception [of the modern computer] is owing to Turing” and not to himself.2) His model of computation is a very simple abstract computing machine which has come to be known as a Turing machine. While Turing machines are not very practical, their use as a fundamental model for computation means that every computer scientist should be familiar with them, at least in a general way.3 You’ll learn about them in Automata, Languages, and Computability.
We can also use Turing machines to define ‘computable languages’. The idea is that anything that can be computed at all can be computed by a Turing machine (if perhaps very slowly). Turing, together with the American mathematician Alonzo Church, made a hypothesis about the nature of computable functions.4 It states that a function on the natural numbers is computable by a human being following an algorithm (ignoring resource limitations) if and only if it is computable by a Turing machine.
So Gödel established that there are some statements we can never determine to be true or false, and Turing established a computational model for the things we can compute.
Is there a link between these results? The halting problem is to determine, given a program and an input to the program, whether the program will eventually halt when run with that input. Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist.
We end the book with the idea behind the proof:
Proof. Consider a model of computation, such as a Turing machine. Suppose some program f could determine whether programs halt. Construct a ‘pathological’ program g as follows. When called with an input, g passes its own source code and that input to f, and when f returns its verdict (halts / does not halt), g then specifically does the opposite of what f predicts g will do. So f is wrong about g; and since f was arbitrary, no program that solves the halting problem can exist.
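The diagonal construction above can be sketched in Python. This is only an illustration, not real code: instead of passing g’s source text we pass the function object g itself, and we use a deliberately wrong candidate decider (one that claims every program loops) to make the contradiction visible.

```python
def make_pathological(halts):
    """Given any candidate decider halts(prog, inp), build a program g
    that does the opposite of whatever halts predicts about g itself."""
    def g(inp):
        if halts(g, inp):       # the decider predicts g halts on inp...
            while True:         # ...so g loops forever instead
                pass
        else:                   # the decider predicts g loops...
            return "halted"     # ...so g halts immediately
    return g

# A (wrong) candidate decider: it claims every program loops forever.
def naive_halts(prog, inp):
    return False

g = make_pathological(naive_halts)
print(g(0))  # prints "halted" -- the opposite of what naive_halts predicted
```

Whatever decider you plug in, its own pathological program defeats it: if the decider says g halts, g loops, and if it says g loops, g halts. That is exactly why no correct f can exist.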