Although the roots of the Incompleteness Theorems can in some ways be traced back to the beginnings of mathematics, the relevant history begins in the nineteenth century with logicians such as Augustus De Morgan, George Boole, and Charles Sanders Peirce, who gave birth to the notion of rigorous, formal logic in the modern sense. Hermann Grassmann, father of Linear Algebra, showed that many properties of arithmetic can be reduced to the operations of succession and induction, which spurred the work of such illustrious scientists as Giuseppe Peano and Gottlob Frege. Somewhat separately, Georg Cantor developed the theory of sets while working in Topology. All of this increase in rigor and precision, all of this focus on reducing mathematics to bare logic, led David Hilbert, one of the most prominent mathematicians of the early twentieth century, to begin in 1920 a program of research (now remembered simply as "Hilbert's Program"), the aim of which was to uncover a minimal set of complete, consistent logical axioms from which all of mathematics could be derived - no small task indeed. The mathematicians of the philosophical school of Formalism championed this noble pursuit, and for the next several years, mathematicians around the world chased this holiest of holy grails of science. Then Kurt Gödel's Incompleteness Theorems completely dismantled Hilbert's Program by showing that its intended goals were impossible to achieve.

Let's back up a moment. "Axioms", "completeness", "consistency", "formal logic"... What does it all mean? Why does logic need to be *formalized*? Isn't logic, by definition, intuitive? Yes... Well, at least that's the idea. To find motivation, we might look to the theory of mathematical equations. A mathematical equation, properly formulated, demonstrates an absolute, inalienable truth; it expresses a relationship between two quantities, and furthermore expresses the notion that these two quantities are identical. As any high school algebra student will tell you, however, sometimes such identities are not as readily apparent as we would like them to be. Sometimes, in order to *prove* identity, we must follow a complicated series of steps to rigorously transform one representation of the quantity into the other. Intuitively, one could argue that identity, the property of two objects being *identical*, should be relatively easy to recognize. However, the example of equations demonstrates an important principle: an object can have a myriad of different representations, and it is not always trivial to recognize that these representations are in fact guises of the same abstract object.
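The point can be made concrete even in elementary arithmetic. Here is a small illustration (the particular expressions are my own, chosen only for the sake of example) of how syntactically different representations can denote one and the same object:

```python
from fractions import Fraction

# Three different written "representations" of the same abstract number.
a = Fraction(1, 2)
b = Fraction(2, 4)   # reduces to 1/2
c = Fraction(3, 6)   # also reduces to 1/2

# The representations differ as written, but denote one object.
print(a == b == c)   # True

# The same idea with algebraic expressions: (x + 1)^2 and x^2 + 2x + 1
# agree everywhere, though recognizing this in general requires a proof;
# here we merely spot-check a range of values.
same = all((x + 1) ** 2 == x ** 2 + 2 * x + 1 for x in range(-100, 101))
print(same)          # True
```

Recognizing that `Fraction(2, 4)` "is" `Fraction(1, 2)` takes a computation (reducing to lowest terms) - a tiny instance of the proof steps the algebra student must carry out.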

Here we discover the larger purpose of mathematics: to discover absolute truths. In general, we do not care for the laws of the physical Universe; we only care for what can be proven in an abstract context. However, in order to study such truths, we must begin with *some* assumptions... After all, how can one possibly arrive at any results given no data nor truths to begin with? These *assumptions*, which we necessarily take to be the foundation of our studies, are referred to as **axioms**, and the sum total of our axioms, together with the rules by which they can be manipulated, is called a **deductive system**. In this way, the notion of arriving at new truths from old ones, the fundamental goal of logic, is formulated almost in terms of a game. This game lies at the heart of modern mathematics.

Let us return for a moment to our example of the more familiar science of equations. Suppose that we have two numbers - 3 and 4 - and we wish to combine them to form new numbers. Then we can use addition to obtain 3 + 4 = 7. A similar notion applies to our axioms, for we can combine them to obtain *new* results not previously a part of our set of known truths. Once these new truths have been derived from our axioms - once these **theorems** have been established - they can be used just as the axioms are in order to *derive* new theorems. The exact manner in which known truths, whether they be axioms or previously constructed theorems, are combined in order to produce some new theorem is called a **proof**, and if the deductive system is in a certain sense correct, then the proof is absolutely and unconditionally valid so long as the axioms underlying the whole system are themselves valid. The set of all truths derivable from a deductive system is known as a **theory**.
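To make the "game" concrete, here is a minimal toy sketch of a deductive system in Python - entirely my own construction, not any standard formal system. The axioms are strings, the single inference rule resembles modus ponens, and the theory is the closure of the axioms under that rule:

```python
# A toy deductive system: statements are strings, and the single
# inference rule is modus ponens over "->" implications.
AXIOMS = {"p", "p -> q", "q -> r"}

def modus_ponens(known):
    """From 'a' and 'a -> b' in the known set, derive 'b'."""
    derived = set()
    for statement in known:
        if " -> " in statement:
            antecedent, consequent = statement.split(" -> ", 1)
            if antecedent in known:
                derived.add(consequent)
    return derived

def theory(axioms):
    """The theory: every statement derivable from the axioms."""
    known = set(axioms)
    while True:
        new = modus_ponens(known) - known
        if not new:          # nothing new derivable: closure reached
            return known
        known |= new         # theorems join the axioms as known truths

print(sorted(theory(AXIOMS)))
# ['p', 'p -> q', 'q', 'q -> r', 'r']
```

Note how `q` and `r` play the role of theorems: truths not among the axioms, derived by the rules, and then immediately available for deriving further truths.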

In this way, the grand acropolis of mathematics is constructed from a relatively small set of assumed truths. There is a subtlety in the above description, however, which should be questioned: in what way must a deductive system - a set of axioms and rules for their manipulation - be *correct* in order for a proof to be valid? Although we can never (by definition) really know whether the axioms of a system are correct, we *can* consider whether the axioms make logical sense when used with one another. We make an appeal to human intuition, understanding, and "logic" in the traditional sense when we assert that no deductive system should ever be able to prove that some statement is both true and false simultaneously. The danger of such a situation is readily apparent: not only does the logician lose the valuable "proof by contradiction" which is so often utilized in everyday reasoning and mathematics alike, but the deductive system may well fall victim to the Principle of Explosion, which informally states that "from contradiction, everything follows". If a statement is both true and false at the same time, then all statements which can be derived from the statement *or* its negation are true. In general, mathematicians, logicians, scientists, and philosophers alike rule out this situation as absurd and focus on systems which do not allow for such **contradictions** (although, for those interested, some fringe logicians *do* study such dialetheic statements, along with the so-called paraconsistent systems in which they are allowed).

Theories which do not contain any contradictions - that is, theories which do not admit statements which are both true and false at the same time - are called **consistent**, and it is very much desirable and necessary that the framework in which mathematics takes place has this property. If a mathematical theory is consistent - if every statement is exclusively either true or false - then it is also highly desirable that every such statement's truth value can be decided. If every statement within a theory can be determined by proof to be either true or false, then the deductive system is called **complete**. It is important to recognize that a deductive system can be consistent but not complete (in which case there are statements whose validity cannot be decided), but if a theory is inconsistent, then the Principle of Explosion implies that *every* statement in the theory is true.
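The Principle of Explosion is not merely informal folklore; in a proof assistant it can be checked mechanically. A sketch in Lean 4 syntax (using the standard library's `absurd`, which derives any goal from a proposition and its negation):

```lean
-- Ex falso quodlibet: from P together with ¬P, any Q whatsoever follows.
theorem explosion (P Q : Prop) (h : P) (hn : ¬P) : Q :=
  absurd h hn
```

The proposition `Q` here is completely arbitrary - exactly the sense in which an inconsistent theory proves everything.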

Thus it was the goal of David Hilbert and his contemporaries (including Kurt Gödel) to develop a theory which could determine the truth (or falsehood) of any mathematical statement while assuming as few axiomatic truths as possible. Should such a theory exist, it would in principle be possible to encode the theory into a computer, and in this way have available a machine which could take as input a mathematical statement in the language of formal logic and produce as output a value of "true" or "false". The only additional requirement which must be imposed to permit such a program is that the axioms of the theory be "enumerable": there are not so many axioms that we cannot, in principle, list them. Note that this does not imply that the number of axioms is finite, but it does imply that we can tell the difference between an axiom and a theorem (this notion is made more precise by the concept of a recursively enumerable set, but such technicality is outside the scope of this post). In general, the requirement that the axioms be in a sense "knowable" is a reasonable one, as it allows us to carry out proofs in a mechanical fashion. A theory whose axioms can be enumerated is called **effectively generated**, as the theorems which it produces can be proved in an *effective* manner. If the theory is consistent in the sense that it contains no contradictions, complete in the sense that all statements can be proven to be either true or false, and effectively generated in the sense that proofs can be derived from a reasonable set of axioms, then a computer program can be written which can decide the truth or falsehood of any statement of the theory.
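The envisioned machine can be sketched directly: enumerate all theorems in order and wait for either the statement or its negation to appear; completeness guarantees that one of the two will eventually turn up, and consistency guarantees they cannot both. The sketch below simulates this with a stand-in enumerator over a four-statement toy theory (`enumerate_theorems` is my own placeholder, not any real library function):

```python
def enumerate_theorems():
    """Stand-in for enumerating, one by one, every theorem of an
    effectively generated theory. Here: a fixed toy theory in which
    p and r are provable, and q and s are refutable."""
    toy_theory = ["p", "not q", "r", "not s"]
    for theorem in toy_theory:
        yield theorem

def decide(statement):
    """Decide a statement of a complete, consistent theory by searching
    the enumeration for a proof of it or of its negation."""
    negation = "not " + statement
    for theorem in enumerate_theorems():
        if theorem == statement:
            return True    # a proof of the statement was found
        if theorem == negation:
            return False   # a proof of its negation was found
    raise RuntimeError("theory is incomplete for this statement")

print(decide("p"))   # True
print(decide("q"))   # False
```

For a genuinely complete theory the search always halts; it is precisely this halting guarantee that the First Incompleteness Theorem, below, takes away.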

What an idea! I can think of no more ambitious project in all the history of science. Alas, it was not meant to be. In 1931, Kurt Gödel published a paper entitled "On Formally Undecidable Propositions of Principia Mathematica and Related Systems", which not only struck a crippling blow to Hilbert's Program (and to the theory of Types proposed by Alfred North Whitehead and Bertrand Russell) but shook the very foundations of science. In this paper, Gödel proved two theorems, now universally recognized as "Gödel's Incompleteness Theorems":

- Any consistent, effectively generated theory which is powerful enough to express the arithmetic of the natural numbers is necessarily incomplete.
- Any effectively generated theory which is powerful enough to express the arithmetic of the natural numbers and which can prove its own consistency is necessarily inconsistent.

...Wow. So after all of that, we arrive at the result that __not only are there true statements in Mathematics which can never be proved, but Mathematics *itself* can never be proved correct__. Just... let that sink in for a moment. Let it sink in how cocky the scientific world sometimes is. Let it sink in how confidently biologists and sociologists and psychologists assert that they understand the world; let it sink in how fundamentally those disciplines rely on chemistry and physics; let it sink in how absolutely dependent physics and chemistry are upon the tools of mathematics. And let it sink in how *deeply* this pair of theorems lies within the heart of the hearts of science.

And yet, as profound as the Incompleteness Theorems are, we must be careful not to misuse them. In the past 90 years or so, Gödel's Theorems have been used to assert everything from the existence of God to the impossibility of a physical Theory of Everything to the unreliability of Mathematics, and much of this stems from a lack of understanding as to what the theorems say (although Gödel did attempt to produce his own, separate proof of the existence of God). The Theorems only apply within a very specific context - that of effectively generated theories which are powerful enough to express the natural numbers - and while that is indeed a very *large* and *useful* class of theories, it is not a class under which we can arbitrarily assume *all* theories to fall. Nor should we confuse the notion of a "theory" in a field such as Biology or Physics with a "theory" in the logico-mathematical sense; it would very much be a mistake to apply the Incompleteness Theorems to the Theory of Evolution, for example. The Incompleteness Theorems are indeed profound and far-reaching, but they, like any statements of truth, only apply to situations in which all of their hypotheses are precisely met.

Still, even though the Incompleteness Theorems don't (necessarily) prove the impossibility of a Theory of Everything, they do stand as shining, unbreakable monoliths which shall forever implicitly remind logicians, mathematicians, and scientists in general of their own mortality. The story of Hilbert's Program and the Incompleteness Theorems is almost biblical: the scientists of the early twentieth century boldly threw caution to the wind and attempted to claim the ultimate scientific power, the Judgement of Truth, but just as their goal seemed within reach - Hilbert announced in 1929 that the consistency of second-order arithmetic, and thus of most of mathematics, would soon be proven - the Incompleteness Theorems were erected, and the logicians' dreams were crushed. The story of the Incompleteness Theorems serves as a reminder that science should always retain humility.

Fortunately for us, the deductive system in which Mathematics is currently defined - Zermelo-Fraenkel Set Theory with the Axiom of Choice (ZFC) - has stood for almost 100 years without anybody finding a contradiction, and is thus widely believed to be consistent - though, of course, the Second Incompleteness Theorem tells us that we can never prove it to be so. The predictions of the First Incompleteness Theorem have since come to fruition as well, as several statements have been found within mathematics which cannot be decided within ZFC; for a particularly famous example, see my post on the Continuum Hypothesis.

Well, there you have it...Gödel's Incompleteness Theorems. One of the most devastating discoveries ever to strike Mathematics, and one of the most incredible results ever proven by the human mind.