(This post was originally published on the NKS Forum.)
Last Friday (April 28, 2006) would have been Kurt Gödel’s 100th birthday. I agreed to try to write something about it for publication in a newspaper … which had the dual misfeatures that (a) I had to compress what I was saying and (b) it didn’t actually get done…
Still, I thought people on the Forum might find my draft interesting … so here it is. Please recognize that this wasn’t polished for final publication…
When Kurt Gödel was born—one hundred years ago today—the field of mathematics seemed almost complete. Two millennia of development had just been codified into a few axioms, from which it seemed one should be able almost mechanically to prove or disprove anything in mathematics—and, perhaps with some extension, in physics too.
Twenty-five years later things were proceeding apace, when at the end of a small academic conference, a quiet but ambitious fresh PhD involved with the Vienna Circle ventured that he had proved a theorem showing that this whole program must ultimately fail.
In the seventy-five years since then, what became known as Gödel’s theorem has been ascribed almost mystical significance, has sown the seeds of the computer revolution, and has meanwhile been practically ignored by working mathematicians—and viewed as irrelevant for broader science.
The ideas behind Gödel’s theorem have, however, yet to run their course. And in fact I believe that today we are poised for a dramatic shift in science and technology for which its principles will be remarkably central.
Gödel’s original work was quite abstruse. He took the axioms of logic and arithmetic, and asked a seemingly paradoxical question: can one prove the statement “this statement is unprovable”?
One might not think that mathematical axioms alone would have anything to say about this. But Gödel demonstrated that in fact his statement could be encoded purely as a statement about numbers.
Yet the statement says that it is unprovable. So here, then, is a statement within mathematics that is unprovable by mathematics: an “undecidable statement”. And its existence immediately shows that there is a certain incompleteness to mathematics: there are mathematical statements that mathematical methods cannot reach.
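The self-reference at the heart of Gödel’s construction has a concrete programming analog: a quine, a program whose output reproduces its own code. Here is a minimal Python sketch (a loose illustration of self-encoding, of course, not Gödel’s actual arithmetization): a string that serves as both data and program text, so that the two code lines print themselves.

```python
# A quine: the string s is used both as data and as program text,
# echoing the way Gödel's statement encodes itself as a number.
s = 's = %r\nprint(s %% s)'
print(s % s)   # prints the two code lines above, exactly
```

The trick is the same one Gödel used: `%r` inserts a quoted copy of the string into itself, just as Gödel numbering inserts a description of a statement into that statement.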
It could have been that these ideas would rest here. But from within the technicalities of Gödel’s proof there emerged something of incredible practical importance. For Gödel’s seemingly bizarre technique of encoding statements in terms of numbers was a critical step towards the idea of universal computation—which implied the possibility of software, and launched the whole computer revolution.
Thinking in terms of computers gives us a modern way to understand what Gödel did: although he himself in effect only wanted to talk about one computation, he proved that logic and arithmetic are actually sufficient to build a universal computer, which can be programmed to carry out any possible computation.
Not all areas of mathematics work this way. Elementary geometry and elementary algebra, for example, have no universal computation, and no analog of Gödel’s theorem—and we even now have practical software that can decide any statement about them.
But universal computation—when it is present—has many deep consequences.
The exact sciences have always been dominated by what I call computational reducibility: the idea of finding quick ways to compute what systems will do. Newton showed how to find out where an (idealized) Earth will be in a million years—we just have to evaluate a formula; we do not have to trace a million orbits.
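The contrast can be sketched in a few lines of Python. The orbit below is an idealized, perfectly circular one (a toy model I am introducing for illustration, not real celestial mechanics): one function traces every day of the motion, the other jumps straight to the answer with a formula.

```python
import math

# Angular velocity of an idealized circular orbit, in radians per day
# (a hypothetical toy parameter, not real celestial mechanics).
OMEGA = 2 * math.pi / 365.25

def angle_by_stepping(days):
    """Trace the orbit one day at a time: work proportional to `days`."""
    theta = 0.0
    for _ in range(days):
        theta = (theta + OMEGA) % (2 * math.pi)
    return theta

def angle_by_formula(days):
    """Computational reducibility: one evaluation, however large `days` is."""
    return (OMEGA * days) % (2 * math.pi)
```

For a million years the formula is as cheap as for one day; the stepping version is not. That gap is what computational reducibility buys—and what universal computation takes away.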
But if we study a system that is capable of universal computation we can no longer expect to “outcompute” it like this; instead, to find out what it will do may take us irreducible computational work.
And this is why it can be so difficult to predict what computers will do—or to prove that software has no bugs. It is also at the heart of why mathematics can be difficult: it can take an irreducible amount of computational work to establish a given mathematical result.
And it is what leads to undecidability. For if we want to know, say, whether any number of any size has a certain property, computational irreducibility may mean that short of checking infinitely many cases we cannot decide for sure.
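A concrete example of this situation is the well-known Collatz (3n+1) problem: halve a number if it is even, triple it and add one if it is odd, and ask whether every starting number eventually reaches 1. Any finite range of cases can be checked by direct computation, as in the sketch below—but no finite amount of checking settles the question for all numbers. (This is an illustration of the shape of the problem, not a claim that Collatz itself is undecidable.)

```python
def reaches_one(n, max_steps=100_000):
    """Follow the Collatz rule (halve if even, else 3n+1) until reaching 1,
    giving up after max_steps so the check is guaranteed to terminate."""
    while n != 1 and max_steps > 0:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        max_steps -= 1
    return n == 1

# Any finite range can be verified case by case...
assert all(reaches_one(n) for n in range(1, 10_000))
# ...but no such check decides whether EVERY number reaches 1.
```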
Working mathematicians, though, have never worried much about undecidability. For certainly Gödel’s original statement is remote, being astronomically long when translated into mathematical form. And the few alternatives constructed over the years have seemed almost as irrelevant in practice.
But my own work with computer experiments suggests that in fact undecidability is much closer at hand. And indeed I suspect that quite a few of the famous unsolved problems in mathematics today will turn out to be undecidable within the usual axioms.
The reason undecidability has not been more obvious is just that—despite their reputation for abstract generality—mathematicians, like most scientists, tend to concentrate on questions that their methods succeed with.
Back in 1931, Gödel and his contemporaries were not even sure whether Gödel’s theorem was something general, or just a quirk of their formalism for logic and arithmetic. But a few years later, when Turing machines and other models for computers showed the same phenomenon, it began to seem more general.
Still, Gödel wondered whether there would be an analog of his theorem for human minds, or for physics. We still do not know the complete answer, though I certainly expect that both minds and physics are in principle just like universal computers—with Gödel-like theorems.
One of the great surprises of my own work has been just how easy it is to find universal computation. If one systematically explores the abstract universe of possible computational systems one does not have to go far. One needs nothing like the billion transistors of a modern electronic computer, or even the elaborate axioms of logic and arithmetic. Simple rules that can be stated in a short sentence—or summarized in a three-digit number—are enough.
And it is almost inevitable that such rules are common in nature—bringing with them undecidability. Is a solar system ultimately stable? Can a biochemical process ever go out of control? Can a set of laws have a devastating consequence? We can now expect general versions of such questions to be undecidable.
This might have pleased Gödel—who once said he had found a bug in the US Constitution, who gave his friend Einstein a paradoxical model of the universe for his birthday—and who told a physicist I knew that for theoretical reasons he “did not believe in natural science”.
Even in the field of mathematics, Gödel—like his results—was always treated as somewhat alien to the mainstream. He continued for decades to supply central ideas to mathematical logic, even as “the greatest logician since Aristotle” (as John von Neumann called him) became increasingly isolated, worked to formalize theology using logic, became convinced that discoveries of Leibniz from the 1600s had been suppressed, and in 1978, with his wife’s health failing, died of starvation, suspicious of doctors and afraid of being poisoned.
He left us the legacy of undecidability, which we now realize affects not just esoteric issues about mathematics, but also all sorts of questions in science, engineering, medicine and more.
One might think of undecidability as a limitation to progress, but in many ways it is instead a sign of richness. For with it comes computational irreducibility, and the possibility for systems to build up behavior beyond what can be summarized by simple formulas. Indeed, my own work suggests that much of the complexity we see in nature has precisely this origin. And perhaps it is also the essence of how from deterministic underlying laws we can build up apparent free will.
In science and technology we have normally crafted our theories and devices by careful design. But thinking in the abstract computational terms pioneered by Gödel’s methods we can imagine an alternative. For if we represent everything uniformly in terms of rules or programs, we can in principle just explicitly enumerate all possibilities.
In the past, though, nothing like this ever seemed even faintly sensible. For it was implicitly assumed that to create a program with interesting behavior would require explicit human design—or at least the efforts of something like natural selection. But when I started actually doing experiments and systematically running the simplest programs, what I found instead was that the computational universe is teeming with diverse and complex behavior.
Already there is evidence that many of the remarkable forms we see in biology just come from sampling this universe. And perhaps by searching the computational universe we may find—even soon—the ultimate underlying laws for our own physical universe. (To discover all their consequences, though, will still require irreducible computational work.)
Exploring the computational universe puts mathematics too into a new context. For we can also now see a vast collection of alternatives to the mathematics that we have ultimately inherited from the arithmetic and geometry of ancient Babylon. And for example, the axioms of basic logic, far from being something special, now just appear as roughly the 50,000th possibility. And mathematics, long a purely theoretical science, must adopt experimental methods.
The exploration of the computational universe seems destined to become a core intellectual framework in the future of science. And in technology the computational universe provides a vast new resource that can be searched and mined for systems that serve our increasingly complex purposes. It is undecidability that guarantees an endless frontier of surprising and useful material to find.
And so it is that from Gödel’s abstruse theorem about mathematics has emerged what I believe will be the defining theme of science and technology in the twenty-first century.