Happy 100th Birthday, Alan Turing

(This is an updated version of what I wrote for Alan Turing’s 98th birthday.)

Today (June 23, 2012) would have been Alan Turing’s 100th birthday—if he had not died in 1954, at the age of 41.

I never met Alan Turing; he died five years before I was born. But somehow I feel I know him well—not least because many of my own intellectual interests have had an almost eerie parallel with his.

And by a strange coincidence, Mathematica’s “birthday” (June 23, 1988) is aligned with Turing’s—so that today is also the celebration of Mathematica’s 24th birthday.

I think I first heard about Alan Turing when I was about eleven years old, right around the time I saw my first computer. Through a friend of my parents, I had gotten to know a rather eccentric old classics professor, who, knowing my interest in science, mentioned to me this “bright young chap named Turing” whom he had known during the Second World War.

One of the classics professor’s eccentricities was that whenever the word “ultra” came up in a Latin text, he would repeat it over and over again, and make comments about remembering it. At the time, I didn’t think much of it—though I did remember it. Only years later did I realize that “Ultra” was the codename for the British cryptanalysis effort at Bletchley Park during the war. In a very British way, the classics professor wanted to tell me something about it, without breaking any secrets. And presumably it was at Bletchley Park that he had met Alan Turing.

A few years later, I heard scattered mentions of Alan Turing in various British academic circles. I heard that he had done mysterious but important work in breaking German codes during the war. And I heard it claimed that after the war, he had been killed by British Intelligence. At the time, at least some of the British wartime cryptography effort was still secret, including Turing’s role in it. I wondered why. So I asked around, and started hearing that perhaps Turing had invented codes that were still being used. (In reality, the continued secrecy seems to have been intended to prevent it being known that certain codes had been broken—so other countries would continue to use them.)

I’m not sure where I next encountered Alan Turing. Probably it was when I decided to learn all I could about computer science—and saw all sorts of mentions of “Turing machines”. But I have a distinct memory from around 1979 of going to the library, and finding a little book about Alan Turing written by his mother, Sara Turing.

And gradually I built up quite a picture of Alan Turing and his work. And over the 30+ years that have followed, I have kept on running into Alan Turing, often in unexpected places.


In the early 1980s, for example, I had become very interested in theories of biological growth—only to find (from Sara Turing’s book) that Alan Turing had done all sorts of largely unpublished work on that.

And for example in 1989, when we were promoting an early version of Mathematica, I decided to make a poster of the Riemann zeta function—only to discover that Alan Turing had at one time held the record for computing zeros of the zeta function. (Earlier he had also designed a gear-based machine for doing this.)

Recently I even found out that Turing had written about the “reform of mathematical notation and phraseology”—a topic of great interest to me in connection with both Mathematica and Wolfram|Alpha.

And at some point I learned that a high school math teacher of mine (Norman Routledge) had been a friend of Turing’s late in his life. But even though my teacher knew my interest in computers, he never mentioned Turing or his work to me. And indeed, 35 years ago, Alan Turing and his work were little known, and it is only fairly recently that Turing has become as famous as he is today.

Turing’s greatest achievement was undoubtedly his construction in 1936 of a universal Turing machine—a theoretical device intended to represent the mechanization of mathematical processes. And in some sense, Mathematica is precisely a concrete embodiment of the kind of mechanization that Turing was trying to represent.

In 1936, however, Turing’s immediate purpose was purely theoretical. Indeed, it was to show not what could be mechanized in mathematics, but what could not. In 1931, Gödel’s theorem had shown that there were limits to what could be proved in mathematics, and Turing wanted to understand the boundaries of what could ever be done by any systematic procedure in mathematics.

Turing was a young mathematician in Cambridge, England, and his work was couched in terms of mathematical problems of his time. But one of his steps was the theoretical construction of a universal Turing machine capable of being “programmed” to emulate any other Turing machine. In effect, Turing had invented the idea of universal computation—which was later to become the foundation on which all of modern computer technology is built.

At the time, though, Turing’s work did not make much of a splash, probably largely because the emphasis of Cambridge mathematics was elsewhere. Just before Turing published his paper, he learned about a similar result by Alonzo Church from Princeton, formulated not in terms of theoretical machines, but in terms of the mathematics-like lambda calculus. And as a result Turing went to Princeton for a year to study with Church—and while he was there, wrote the most abstruse paper of his life.

The next few years for Turing were dominated by his wartime cryptanalysis work. I learned a few years ago that during the war Turing visited Claude Shannon at Bell Labs in connection with speech encipherment. Turing had been working on a kind of statistical approach to cryptanalysis—and I am extremely curious to know whether Turing told Shannon about this, and potentially launched the idea of information theory, which itself was first formulated for secret cryptanalysis purposes.

After the war, Turing got involved with the construction of the first actual computers in England. To a large extent, these computers had emerged from engineering, not from a fundamental understanding of Turing’s work on universal computation.

There was, however, a definite, if circuitous, connection. In 1943 Warren McCulloch and Walter Pitts in Chicago wrote a theoretical paper about neural networks that used the idea of universal Turing machines to discuss general computation in the brain. John von Neumann read this paper, and used it in his recommendations about how practical computers should be built and programmed. (John von Neumann had known about Turing’s paper in 1936, but at the time did not recognize its significance, instead describing Turing in a recommendation letter as having done interesting work on the central limit theorem.)

It is remarkable that in just over a decade Alan Turing was transported from writing theoretically about universal computation, to being able to write programs for an actual computer. I have to say, though, that from today’s vantage point, his programs look incredibly “hacky”—with lots of special features packed in, and encoded as strange strings of letters. But perhaps to reach the edge of a new technology it’s inevitable that there has to be hackiness.

And perhaps too it required a certain hackiness to construct the very first universal Turing machine. The concept was correct, but Turing quickly published an erratum to fix some bugs, and in later years, it’s become clear that there were more bugs. But at the time Turing had no intuition about how easily bugs can occur.

Turing also did not know just how general or not his results about universal computation might be. Perhaps the Turing machine was just one model of a computational process, and other models—or brains—might have quite different capabilities. But gradually over the course of several decades, it became clear that a wide range of possible models were actually exactly equivalent to the machines Turing had invented.

It’s strange to realize that Alan Turing never appears to have actually simulated a Turing machine on a computer. He viewed Turing machines as theoretical devices relevant for proving general principles. But he does not appear to have thought about them as concrete objects to be explicitly studied.

And indeed, when Turing came to make models of biological growth processes, he immediately started using differential equations—and appears never to have considered the possibility that something like a Turing machine might be relevant to natural processes.

When I became interested in simple computational processes around 1980, I also didn’t consider Turing machines—and instead started off studying what I later learned were called cellular automata. And what I discovered was that even cellular automata with incredibly simple rules could produce incredibly complex behavior—which I soon realized could be considered as corresponding to a complex computation.
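
To give a concrete sense of what such an experiment involves, here is a minimal sketch, written in Python purely for illustration (it is not, of course, the code actually used at the time). It evolves the elementary cellular automaton known as rule 30 from a single black cell; even this tiny rule quickly produces an intricate, seemingly random pattern.

    # A minimal sketch (Python, purely for illustration; not the code actually
    # used in the 1980s): evolve the elementary cellular automaton rule 30
    # from a single black cell and print the first rows of its evolution.
    RULE = 30  # the rule number encodes the new cell value for each of the 8 possible neighborhoods

    def step(cells):
        """Apply the rule once, treating everything beyond both ends as 0."""
        padded = [0, 0] + cells + [0, 0]
        return [
            (RULE >> (4 * padded[i - 1] + 2 * padded[i] + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)
        ]

    cells = [1]  # a single black cell on an otherwise blank background
    for _ in range(20):
        print("".join("#" if c else "." for c in cells).center(50))
        cells = step(cells)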

I probably simulated my first explicit Turing machine only in 1991. To me, Turing machines were built a little bit too much like engineering systems—and not like something that would likely correspond to a system in nature. But I soon found that even simple Turing machines, just like simple cellular automata, could produce immensely complex behavior.
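
Simulating a Turing machine takes remarkably little code: the whole “program” is just a lookup table from (state, scanned color) to (color to write, direction to move, next state). Here is a minimal sketch, again in Python purely for illustration; the particular 2-state, 2-color rule is an arbitrary choice made for this example, not Turing’s machine or any of the machines discussed in this essay. It simply shows how little machinery such a simulation needs.

    # A minimal Turing machine simulator. The rule below is an arbitrary
    # 2-state, 2-color machine chosen only to show the mechanics of the
    # simulation; it is not any particular machine discussed in this essay.
    from collections import defaultdict

    RULE = {
        # (state, scanned color): (color to write, head move, next state)
        (1, 0): (1, +1, 2),
        (1, 1): (0, -1, 2),
        (2, 0): (1, -1, 1),
        (2, 1): (1, +1, 1),
    }

    tape = defaultdict(int)  # unbounded tape of 0s, stored sparsely
    state, head = 1, 0

    for t in range(30):
        window = "".join(str(tape[i]) for i in range(-20, 21))
        print(f"step {t:2d}   state {state}   head {head:3d}   {window}")
        write, move, state = RULE[(state, tape[head])]
        tape[head] = write   # write at the current head position
        head += move         # then move the head one cell left or right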

In a sense, Alan Turing could easily have discovered this. But his intuition—like my original intuition—would have told him that no such phenomenon was possible. So it would likely only have been luck—and access to easy computation—that would have led him to find the phenomenon.

Had he done so, I am quite sure he would have become curious about just where the threshold for his concept of universality lies, and just how simple a Turing machine could be and still be universal. In the mid-1990s, I searched the space of simple Turing machines and found the smallest possible candidate for universality, a 2-state, 3-color machine. And after I put up a $25,000 prize, Alex Smith showed in 2007 that this Turing machine is indeed universal.

No doubt Alan Turing would quite quickly have grasped the significance of such results for thinking about both natural processes and mathematics. But without the empirical discoveries, his thinking did not progress in this direction.

Instead, he began to consider, from a more engineering point of view, to what extent computers would be able to emulate brains, and he invented ideas like the Turing test. Reading through his writings today, it is remarkable how many of his conceptual arguments about artificial intelligence still need to be made—though some, like his discussion of extrasensory perception, have become quaintly dated.

And looking at his famous 1950 article “Computing Machinery and Intelligence”, one sees a discussion of programming into a machine the contents of Encyclopædia Britannica—a task he estimated would take 60 workers 50 years. I wonder what Alan Turing would think of Wolfram|Alpha, which, thanks to progress over the past 60 years, and perhaps some cleverness, has so far taken at least slightly less human effort.

In addition to his intellectual work, Turing has in recent times become something of a folk hero, most notably through the story of his death. Almost certainly it will never be known for sure whether his death was in fact intentional. But from what I know and have heard I must say that I rather doubt that it was.

When one first hears that Alan Turing died by eating an apple impregnated with cyanide, one assumes it must have been suicide. But when one later discovers that he was quite a tinkerer, had recently made cyanide for the purpose of electroplating spoons, kept chemicals alongside his food, and was rather a messy individual, the picture becomes a lot less clear.

I often wonder what Alan Turing would have been like to meet. I do not know of any recording of his voice (though he did do some BBC radio broadcasts). But I gather that even near the end of his life he giggled a lot, and talked with a kind of stutter that seemed to come from thinking faster than he was talking. He seems to have found it easiest to talk to mathematicians. He thought a little about physics, though he never seems to have gotten deeply into it. And he seems to have maintained a child-like enthusiasm and wonder for many intellectual questions throughout his life.

He was something of a loner, working largely on his own on a succession of projects. He was gay, and lived alone. He was no organizational politician, and towards the end of his life seems to have found himself largely ignored both by people working on computers and by people working on his new interest of biological growth and morphogenesis.

He was in some respects a quintessential British amateur, dipping his intellect into different areas. He achieved a high level of competence in pure mathematics, and used that as his professional base. His contributions to traditional mathematics were certainly perfectly respectable, though not spectacular. But in every area he touched, there was a certain crispness to the ideas he developed—even if their technical implementation was sometimes shrouded in arcane notation and masses of detail.

In some ways he was fortunate to live when he did. For he was at just the right time to be able to take the formalism of mathematics as it had been developed, and to combine it with the emerging engineering of his day, to see for the first time the general concept of computation.

It is perhaps a shame that he died 25 years before computer experiments became widely feasible. I certainly wonder what he would have discovered tinkering with Mathematica. I don’t doubt that he would have pushed it to its limits, writing code that would horrify me. But I fully expect that long before I did, he would have discovered the main elements of A New Kind of Science, and begun to understand their significance.

He would probably be disappointed that 60 years after he invented the Turing test, there is still no full human-like artificial intelligence. And perhaps long ago he would have begun to campaign for the creation of something like Wolfram|Alpha, to turn human knowledge into something computers can handle.

If he had lived a few decades longer, he would no doubt have applied himself to a half dozen more areas. But there is still much to be grateful for in what Alan Turing did achieve in his 41 years, and his modern reputation as the founding father of the concept of computation—and the conceptual basis for much of what I, for example, have done—is well deserved.

Happy posthumous 100th birthday, Alan Turing.

A few additional pointers:

Turing machine history in A New Kind of Science »
TuringMachine function in Mathematica »
Turing machines in the Wolfram Demonstrations Project »
Turing machines in Wolfram|Alpha »
The Alan Turing Year »
