Learning about the Future from 2001: A Space Odyssey, Fifty Years Later

2001: A Space Odyssey

A Glimpse of the Future

It was 1968. I was 8 years old. The “space race” was in full swing. For the first time, a space probe had recently landed on another planet (Venus). And I was eagerly studying everything I could to do with space.

Then on April 3, 1968 (May 15 in the UK), the movie 2001: A Space Odyssey was released—and I was keen to see it. So in the early summer of 1968 there I was, the first time I’d ever been in an actual cinema (yes, it was called that in the UK). I’d been dropped off for a matinee, and was pretty much the only person in the theater. And to this day, I remember sitting in a plush seat and eagerly waiting for the curtain to go up, and the movie to begin.

It started with an impressive extraterrestrial sunrise. But then what was going on? Those weren’t space scenes. Those were landscapes, and animals. I was confused, and frankly a little bored. But just when I was getting concerned, there was a bone thrown in the air that morphed into a spacecraft, and pretty soon there was a rousing waltz—and a big space station turning majestically on the screen.

This essay is also in WIRED and on SoundCloud.

Images from the beginning of 2001: A Space Odyssey

The next two hours had a big effect on me. It wasn’t really the spacecraft (I’d seen plenty of them in books by then, and in fact made many of my own concept designs). And at the time I didn’t care much about the extraterrestrials. But what was new and exciting for me in the movie was the whole atmosphere of a world full of technology—and the notion of what might be possible there, with all those bright screens doing things, and, yes, computers driving it all.

Control screens from 2001: A Space Odyssey

It would be another year before I saw my first actual computer in real life. But those two hours in 1968 watching 2001 defined an image of what the computational future could be like that I carried around for years.

I think it was during the intermission to the movie that some seller of refreshments—perhaps charmed by a solitary kid so earnestly pondering the movie—gave me a “cinema program” about the movie. Half a century later I still have that program, complete with a food stain, and faded writing from my 8-year-old self, recording (with some misspelling) where and when I saw the movie:

Cinema program from 1968 release of 2001: A Space Odyssey

What Actually Happened

A lot has happened in the past 50 years, particularly in technology, and it’s an interesting experience for me to watch 2001 again—and compare what it predicted with what’s actually happened. Of course, some of what’s actually been built over the past 50 years has been done by people like me, who were influenced in larger or smaller ways by 2001.

When Wolfram|Alpha was launched in 2009—showing some distinctly HAL-like characteristics—we paid a little homage to 2001 in our failure message (needless to say, one piece of notable feedback we got at the beginning was someone asking: “How did you know my name was Dave?!”):

Early Wolfram|Alpha error message: "I'm sorry Dave, I'm afraid I can't do that..."

One very obvious prediction of 2001 that hasn’t panned out, at least yet, is routine, luxurious space travel. But like many other things in the movie, it doesn’t feel like what was predicted was off track; it’s just that—50 years later—we still haven’t got there.

So what about the computers in the movie? Well, they have lots of flat-screen displays, just like real computers today. In the movie, though, one obvious difference is that there’s one physical display per functional area; the notion of windows, or dynamically changeable display areas, hadn’t arisen yet.

Another difference is in how the computers are controlled. Yes, you can talk to HAL. But otherwise, it’s lots and lots of mechanical buttons. To be fair, cockpits today still have plenty of buttons—but the centerpiece is now a display. And, yes, in the movie there weren’t any touchscreens—or mice. (Both had actually been invented a few years before the movie was made, but neither was widely known.)

There also aren’t any keyboards to be seen (and in the high-tech spacecraft full of computers going to Jupiter, the astronauts are writing with pens on clipboards; presciently, no slide rules and no tape are shown—though there is one moment when a printout that looks awfully like a punched card is produced). Of course, there were keyboards for computers back in the 1960s. But in those days, very few people could type, and there probably didn’t seem to be any reason to think that would change. (Being something of a committed tool user, I myself was routinely using a typewriter even in 1968, though I didn’t know any other kids who were—and my hands at the time weren’t big or strong enough to do much other than type fast with one finger, a skill whose utility returned decades later with the advent of smartphones.)

What about the content of the computer displays? That might have been my favorite thing in the whole movie. They were so graphical, and communicating so much information so quickly. I had seen plenty of diagrams in books, and had even painstakingly drawn quite a few myself. But back in 1968 it was amazing to imagine that a computer could generate information, and display it graphically, so quickly.

Of course there was television (though color only arrived in the UK in 1968, and I’d only seen black and white). But television wasn’t generating images; it was just showing what a camera saw. There were oscilloscopes too, but they just had a single dot tracing out a line on the screen. So the computer displays in 2001 were, at least for me, something completely new.

At the time it didn’t seem odd that in the movie there were lots of printed directions (how to use the “Picturephone”, or the zero-gravity toilet, or the hibernation modules). Today, any such instructions (and they’d surely be much shorter, or at least broken up a lot, for today’s less patient readers) would be shown onscreen. But when 2001 was made, the idea of word processing, and of displaying text to read onscreen, was still several years in the future—probably not least because at the time people thought of computers as machines for calculation, and there didn’t seem to be anything calculational about text.

There are lots of different things shown on the displays in 2001. Even though there isn’t the idea of dynamically movable windows, the individual displays, when they’re not showing anything, go into a kind of “iconic” state, just showing in large letters codes like NAV or ATM or FLX or VEH or GDE.

When the displays are active they sometimes show things like tables of numbers, and sometimes show lightly animated versions of a whole variety of textbook-like diagrams. A few of them show 1980s-style animated 3D line graphics (“what’s the alignment of the spacecraft?”, etc.)—perhaps modeled after analog airplane controls.

But very often there’s also something else—and occasionally it fills a whole display. There’s something that looks like code, or a mixture of code and math.

Docking-control display from 2001: A Space Odyssey

It’s usually in a fairly “modern-looking” sans serif font (well, actually, a font called Manifold, made for IBM Selectric typewriters). Everything’s uppercase. And with stars and parentheses and names like TRAJ04, it looks a bit like early Fortran code (except that, given the profusion of semicolons, it was more likely modeled on IBM’s PL/I language). But then there are also superscripts, and built-up fractions—like math.

Looking at this now, it’s a bit like trying to decode an alien language. What did the makers of the movie intend this to be about? A few pieces make sense to me. But a lot of it looks random and nonsensical—meaningless formulas full of unreasonably high-precision numbers. Considering all the care put into the making of 2001, this seems like a rare lapse—though perhaps 2001 started the long and somewhat unfortunate tradition of showing meaningless code in movies. (A recent counterexample is my son Christopher’s alien-language-analysis code for Arrival, which is actual Wolfram Language code that genuinely makes the visualizations shown.)

But would it actually make sense to show any form of code on real displays like the ones in 2001? After all, the astronauts aren’t supposed to be building the spacecraft; they’re only operating it. But here’s a place where the future is only just now arriving. During most of the history of computing, code has been something that humans write, and computers read. But one of my goals with the Wolfram Language is to create a true computational communication language that is high-level enough that not only computers, but also humans, can usefully read.

Yes, one might be able to describe in words some procedure that a spacecraft is executing. But one of the points of the Wolfram Language is to be able to state the procedure in a form that directly fits in with human computational thinking. So, yes, on the first real manned spacecraft going to Jupiter, it’ll make perfect sense to display code, though it won’t look quite like what’s in 2001.

Accidents of History

I’ve watched 2001 several times over the years, though not specifically in the year 2001 (that year for me was dominated by finishing my magnum opus A New Kind of Science). But there are several very obvious things in the movie 2001 that don’t ring true for the real year 2001—quite beyond the very different state of space travel.

One of the most obvious is that the haircuts and clothing styles and general formality look wrong. Of course these would have been very hard to predict. But perhaps one could at least have anticipated (given the hippie movement etc.) that clothing styles and so on would get less formal. But back in 1968, I certainly remember, for example, getting dressed up even to go on an airplane.

Another thing that today doesn’t look right in the movie is that nobody has a personal computer. Of course, back in 1968 there were still only a few thousand computers in the whole world—each weighing at least some significant fraction of a ton—and basically nobody imagined that one day individual people would have computers, and be able to carry them around.

As it happens, back in 1968 I’d recently been given a little plastic kit mechanical computer (called Digi-Comp I) that could (very laboriously) do 3-digit binary operations. But I think it’s fair to say that I had absolutely no grasp of how this could scale up to something like the computers in 2001. And indeed when I saw 2001 I imagined that to have access to technology like I saw in the movie, I’d have to be joining something like NASA when I was grown up.

What of course I didn’t foresee—and I’m not sure anyone did—is that consumer electronics would become so small and cheap. And that access to computers and computation would therefore become so ubiquitous.

In the movie, there’s a sequence where the astronauts are trying to troubleshoot a piece of electronics. Lots of nice computer-aided, engineering-style displays come up. But they’re all of printed circuit boards with discrete components. There are no integrated circuits or microprocessors—which isn’t surprising, because in 1968 microprocessors hadn’t been invented yet, and integrated circuits were only just coming into use. (Correctly, there aren’t vacuum tubes, though. Apparently the actual prop used—at least for exterior views—was a gyroscope.)

Troubleshooting electronics in 2001: A Space Odyssey

It’s interesting to see all sorts of little features of technology that weren’t predicted in the movie. For example, when they’re taking commemorative pictures in front of the monolith on the Moon, the photographer keeps tipping the camera after each shot—presumably to advance the film inside. The idea of digital cameras that could electronically take pictures simply hadn’t been imagined then.

In the history of technology, there are certain things that just seem inevitable—even though sometimes they may take decades to finally arrive. An example is the videophone. There were early ones even back in the 1930s. And there were attempts to consumerize them in the 1970s and 1980s. But even by the 1990s they were still exotic—though I remember that with some effort I successfully rented a pair of them in 1993—and they worked OK, even over regular phone lines.

On the space station in 2001, there’s a Picturephone shown, complete with an AT&T logo—though it’s the old Bell System logo that looks like an actual bell. And as it happens, when 2001 was being made, there was a real project at AT&T called the Picturephone.

Picturephone in 2001: A Space Odyssey

Of course, in 2001 the Picturephone isn’t a cellphone or a mobile device. It’s a built-in object, in a kiosk—a pay Picturephone. In the actual course of history, though, the rise of cellphones occurred before the consumerization of videochat—so payphone and videochat technology basically never overlapped.

Also interesting in 2001 is that the Picturephone is a push-button phone, with exactly the same numeric button layout as today (though without the * and # [“octothorpe”]). Push-button phones actually already existed in 1968, although they were not yet widely deployed. And, as it happens, given the details of our technology today, I don’t know of any videochat setup in which one actually ends up pushing mechanical buttons.

There’s a long list of instructions printed on the Picturephone—but in actuality, just like today, its operation seems quite straightforward. Back in 1968, though, even direct long-distance dialing (without an operator) was fairly new—and wasn’t yet possible at all between different countries.

To use the Picturephone in 2001, one inserts a credit card. Credit cards had existed for a while even in 1968, though they were not terribly widely used. The idea of automatically reading credit cards (say, using a magnetic stripe) had actually been developed in 1960, but it didn’t become common until the 1980s. (I remember that in the mid-1970s in the UK, when I got my first ATM card, it consisted simply of a piece of plastic with holes like a punched card—not the most secure setup one can imagine.)

At the end of the Picturephone call in 2001, there’s a charge displayed: $1.70. Correcting for inflation, that would be about $12 today. By the standards of modern cellphones—or internet videochatting—that’s very expensive. But for a present-day satellite phone, it’s not so far off, even for an audio call. (Today’s handheld satphones can’t actually support the necessary data rates for videocalls, and networks on planes still struggle to handle them.)
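The inflation adjustment here is easy to reproduce. A minimal sketch, assuming approximate U.S. Consumer Price Index values of about 34.8 for 1968 and about 251 for 2018 (both figures are assumptions taken from published CPI tables, not from the essay itself):

```python
# Adjust a 1968 price for inflation by scaling with the ratio of CPI values.
# The CPI figures below are approximate assumptions (~34.8 in 1968, ~251 in 2018).
CPI_1968 = 34.8
CPI_2018 = 251.0

def adjust_for_inflation(price, cpi_then=CPI_1968, cpi_now=CPI_2018):
    """Scale a historical price by the change in the consumer price index."""
    return price * cpi_now / cpi_then

picturephone_charge_1968 = 1.70  # the charge shown in the movie
print(round(adjust_for_inflation(picturephone_charge_1968), 2))  # roughly $12
```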

On the space shuttle (or, perhaps better, space plane) the cabin looks very much like a modern airplane—which probably isn’t surprising, because things like Boeing 737s already existed in 1968. But in a correct (at least for now) modern touch, the seat backs have TVs—controlled, of course, by a row of buttons. (And there’s also futuristic-for-the-1960s programming, like a televised women’s judo match.)

A curious film-school-like fact about 2001 is that essentially every major scene in the movie (except the ones centered on HAL) shows the consumption of food. But how would food be delivered in the year 2001? Well, like everything else, it was assumed that it would be more automated, with the result that in the movie a variety of elaborate food dispensers are shown. As it’s turned out, however, at least for now, food delivery is something that’s kept humans firmly in the loop (think McDonald’s, Starbucks, etc.).

In the part of the movie concerned with going to Jupiter, there are “hibernaculum pods” shown—with people inside in hibernation. And above these pods there are vital-sign displays that look very much like modern ICU displays. In a sense, that was not such a stretch of a prediction, because even in 1968 there had already been oscilloscope-style EKG displays for some time.

Of course, how to put people into hibernation isn’t something that’s yet been figured out in real life. That it—and cryonics—should be possible has been predicted for perhaps a century. And my guess is that—like cloning or gene editing—to do it will take inventing some clever tricks. But in the end I expect it will pretty much seem like a historical accident in which year it’s figured out. It just so happens not to have happened yet.

There’s a scene in 2001 where one of the characters arrives on the space station and goes through some kind of immigration control (called “Documentation”)—perhaps imagined to be set up as some kind of extension to the Outer Space Treaty from 1967. But what’s particularly notable in the movie is that the clearance process is handled automatically, using biometrics, or specifically, voiceprint identification. (The US insignia displayed are identical to the ones on today’s US passports, but in typical pre-1980s form, the system asks for “surname” and “Christian name”.)

There had been primitive voice recognition systems even in the 1950s (“what digit is that?”), and the idea of identifying speakers by voice was certainly known. But what was surely not obvious is that serious voice systems would need the kind of computer processing power that only became available in the late 2000s.

And in just the last few years, automatic biometric immigration control systems have started to become common at airports—though using face and sometimes fingerprint recognition rather than voice. (Yes, it probably wouldn’t work well to have lots of people talking at different kiosks at the same time.)

In the movie, the kiosk has buttons for different languages: English, Dutch, Russian, French, Italian, Japanese. It would have been very hard to predict what a more appropriate list for 2001 might have been.

Even though 1968 was still in the middle of the Cold War, the movie correctly portrays international use of the space station—though, like in Antarctica today, it portrays separate moon bases for different countries. Of course, the movie talks about the Soviet Union. But the fact that the Berlin Wall would fall 21 years after 1968 isn’t the kind of thing that ever seems predictable in human history.

The movie shows logos from quite a few companies as well. The space shuttle is proudly branded Pan Am. And in at least one scene, its instrument panel has “IBM” in the middle. (There’s also an IBM logo on spacesuit controls during an EVA near Jupiter.)  On the space station there are two hotels shown: Hilton and Howard Johnson’s. There’s also a Whirlpool “TV dinner” dispenser in the galley of the spacecraft going to the Moon. And there’s the AT&T (Bell System) Picturephone, as well as an Aeroflot bag, and a BBC newscast. (The channel is “BBC 12”, though in reality the expansion has only been from BBC 2 to BBC 4 in the past 50 years.)

Companies have obviously risen and fallen over the course of 50 years, but it’s interesting how many of the ones featured in the movie still exist, at least in some form. Many of their logos are even almost the same—though AT&T and BBC are two exceptions, and the IBM logo got stripes added in 1972.

It’s also interesting to look at the fonts used in the movie. Some seem quite dated to us today, while others (like the title font) look absolutely modern. But what’s strange is that at times over the past 50 years some of those “modern” fonts would have seemed old and tired. But such, I suppose, is the nature of fashion. And it’s worth remembering that even those “serifed fonts” from stone inscriptions in ancient Rome are perfectly capable of looking sharp and modern.

Something else that’s changed since 1968 is how people talk, and the words they use. The change seems particularly notable in the technospeak. “We are running cross-checking routines to determine reliability of this conclusion” sounds fine for the 1960s, but not so much for today. There’s mention of the risk of “social disorientation” without “adequate preparation and conditioning”, reflecting a kind of behaviorist view of psychology that at least wouldn’t be expressed the same way today.

It’s sort of charming when a character in 2001 says that whenever they “phone” a moon base, they get “a recording which repeats that the phone lines are temporarily out of order”. One might not say something too different about landlines on Earth today, but it feels like with a moon base one should at least be talking about automatically finding out if their network is down, rather than about having a person call on the phone and listen to a recorded message.

Of course, had a character in 2001 talked about “not being able to ping their servers”, or “getting 100% packet loss” it would have been completely incomprehensible to 1960s movie-goers—because those are concepts of a digital world which basically had just not been invented yet (even though the elements for it definitely existed).

What about HAL?

The most notable and enduring character from 2001 is surely the HAL 9000 computer, described (with exactly the same words as might be used today) as “the latest in machine intelligence”. HAL talks, lipreads, plays chess, recognizes faces from sketches, comments on artwork, does psychological evaluations, reads from sensors and cameras all over the spaceship, predicts when electronics will fail, and—notably to the plot—shows a variety of human-like emotional responses.

It might seem remarkable that all these AI-like capabilities would be predicted in the 1960s. But actually, back then, nobody yet thought that AI would be hard to create—and it was widely assumed that before too long computers would be able to do pretty much everything humans can, though probably better and faster and on a larger scale.

But already by the 1970s it was clear that things weren’t going to be so easy, and before long the whole field of AI basically fell into disrepute—with the idea of creating something like HAL beginning to seem as fictional as digging up extraterrestrial artifacts on the Moon.

In the movie, HAL’s birthday is January 12, 1992 (though in the book version of 2001, it was 1997). And in 1997, in Urbana, Illinois, fictional birthplace of HAL (and, also, as it happens, the headquarters location of my company), I went to a celebration of HAL’s fictional birthday. People talked about all sorts of technologies relevant to HAL. But to me the most striking thing was how low the expectations had become. Almost nobody even seemed to want to mention “general AI” (probably for fear of appearing kooky), and instead people were focusing on solving very specific problems, with specific pieces of hardware and software.

Having read plenty of popular science (and some science fiction) in the 1960s, I certainly started from the assumption that one day HAL-like AIs would exist. And in fact I remember that in 1972 I ended up delivering a speech to my whole school—picking as my topic what amounts to AI ethics. I’m afraid that what I said I would now consider naive and misguided (and in fact I was perhaps partly misled by 2001). But, heck, I was only 12 at the time. And what I find interesting today is just that I thought AI was an important topic even back then.

For the remainder of the 1970s I was personally mostly very focused on physics (which, unlike AI, was thriving at the time). AI was still in the back of my mind, though, when for example I wanted to understand how brains might or might not relate to statistical physics and to things like the formation of complexity. But what made AI really important again for me was that in 1981 I had launched my first computer language (SMP) and had seen how successful it was at doing mathematical and scientific computations—and I got to wondering what it would take to do computations about (and know about) everything.

My immediate assumption was that it would require full brain-like capabilities, and therefore general AI. But having just lived through so many advances in physics, this didn’t immediately faze me. And in fact, I even had a fairly specific plan. You see, SMP—like the Wolfram Language today—was fundamentally based on the idea of defining transformations to apply when expressions match particular patterns. I always viewed this as a rough idealization of certain forms of human thinking. And what I thought was that general AI might effectively just require adding a way to match not just precise patterns, but also approximate ones (e.g. “that’s a picture of an elephant, even though its pixels aren’t exactly the same as in the sample”).
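The contrast between exact transformation rules and the hoped-for approximate ones can be sketched in miniature. The rule format and the string-similarity threshold below are my own illustrative inventions, not SMP’s or the Wolfram Language’s actual design:

```python
# A toy rewrite system. Exact matching applies a transformation only when an
# expression matches a pattern precisely; "approximate" matching applies the
# rule whose pattern the expression is merely close enough to.
from difflib import SequenceMatcher

rules = {"square(x)": "x*x", "double(x)": "x+x"}

def rewrite_exact(expr):
    # Apply a rule only on a precise match; otherwise leave the expression alone
    return rules.get(expr, expr)

def rewrite_approx(expr, threshold=0.8):
    # Apply the rule whose pattern is most similar, if it is similar enough
    best, score = None, 0.0
    for pattern, result in rules.items():
        similarity = SequenceMatcher(None, expr, pattern).ratio()
        if similarity > score:
            best, score = result, similarity
    return best if score >= threshold else expr

print(rewrite_exact("square(x)"))   # exact match, rule applies
print(rewrite_approx("sqare(x)"))   # misspelled, but close enough to match
```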

I tried a variety of schemes for doing this, one of them being neural nets. Somehow, though, I could never formulate experiments that were simple enough to even have a clear definition of success. But by making simplifications to neural nets and a couple of other kinds of systems, I ended up coming up with cellular automata—which quickly allowed me to make some discoveries that started me on my long journey of studying the computational universe of simple programs, and made me set aside approximate pattern matching and the problem of AI.
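Cellular automata of the kind mentioned here are simple to implement. A minimal sketch of an elementary cellular automaton, using rule 30 as a representative example (the grid width and step count are arbitrary choices for display):

```python
# Evolve an elementary cellular automaton: each cell's new value is a function
# (given by an 8-entry rule table, encoded as the bits of a number 0-255)
# of itself and its two neighbors, with wraparound at the edges.
def ca_step(cells, rule=30):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def ca_run(width=31, steps=15, rule=30):
    cells = [0] * width
    cells[width // 2] = 1  # start from a single black cell in the middle
    history = [cells]
    for _ in range(steps):
        cells = ca_step(cells, rule)
        history.append(cells)
    return history

for row in ca_run():
    print("".join("#" if c else " " for c in row))
```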

At the time of HAL’s fictional birthday in 1997, I was actually right in the middle of my intense 10-year process of exploring the computational universe and writing A New Kind of Science—and it was only out of my great respect for 2001 that I agreed to break out of being a hermit for a day and talk about HAL.

It so happened that just three weeks before there had been the news of the successful cloning of Dolly the sheep.

And, as I pointed out, just like general AI, people had discussed cloning mammals for ages. But it had been assumed to be impossible, and almost nobody had worked on it—until the success with Dolly. I wasn’t sure what kind of discovery or insight would lead to progress in AI. But I felt certain that eventually it would come.

Meanwhile, from my study of the computational universe, I’d formulated my Principle of Computational Equivalence—which had important things to say about artificial intelligence. And at some level, what it said is that there isn’t some magic “bright line” that separates the “intelligent” from the merely computational.

Emboldened by this—and with the Wolfram Language as a tool—I then started thinking again about my quest to solve the problem of computational knowledge. It certainly wasn’t an easy thing. But after quite a few years of work, in 2009, there it was: Wolfram|Alpha—a general computational knowledge engine with a lot of knowledge about the world. And particularly after Wolfram|Alpha was integrated with voice input and voice output in things like Siri, it started to seem in many ways quite HAL-like.

HAL in the movie had some more tricks, though. Of course he had specific knowledge about the spacecraft he was running—a bit like the custom Enterprise Wolfram|Alpha systems that now exist at various large corporations. But he had other capabilities too—like being able to do visual recognition tasks.

As computer science developed, such things hardened into problems that, it seemed, computers simply couldn’t do. To be fair, there was lots of practical progress in things like OCR for text, and face recognition. But it didn’t feel general. And then in 2012, there was a surprise: a trained neural net was suddenly discovered to perform really well on standard image recognition tasks.

It was a strange situation. Neural nets had first been discussed in the 1940s, and had seen several rounds of waxing and waning enthusiasm over the decades. But suddenly just a few years ago they really started working. And a whole bunch of “HAL-like tasks” that had seemed out of range suddenly began to seem achievable.

In 2001, there’s the idea that HAL wasn’t just “programmed”, but somehow “learned”. And in fact HAL mentions at one point that he had a (human) teacher. And perhaps the gap between HAL’s creation in 1992 and deployment in 2001 was intended to correspond to HAL’s human-like period of education. (Arthur C. Clarke probably changed the birth year to 1997 for the book because he thought that a 9-year-old computer would be obsolete.)

But the most important thing that’s made modern machine learning systems actually start to work is precisely that they haven’t been trained at human-type rates. Instead, they’ve immediately been fed millions or billions of example inputs—and then they’ve been expected to burn huge amounts of CPU time systematically finding what amount to progressively better fits to those examples. (It’s conceivable that an “active learning” machine could be set up to basically find the examples it needs within a human-schoolroom-like environment, but this isn’t how the most important successes in current machine learning have been achieved.)
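The “progressively better fits” idea can be illustrated in miniature: feed in many input/output examples and incrementally adjust parameters to reduce the error. Here is a deliberately tiny stand-in for a neural network—plain gradient descent on a one-parameter model, with an example set and learning rate of my own choosing:

```python
# Fit y = w * x to example pairs by repeatedly nudging w to reduce the squared
# error -- the essence of training as "progressively better fits" to examples.
examples = [(x, 3.0 * x) for x in range(1, 11)]  # target relationship: y = 3x

w = 0.0  # initial guess for the single weight
learning_rate = 0.001
for _ in range(1000):  # many passes over the examples
    for x, y in examples:
        error = w * x - y
        w -= learning_rate * 2 * error * x  # gradient of (w*x - y)**2 w.r.t. w

print(round(w, 3))  # converges to about 3.0
```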

So can machines now do what HAL does in the movie? Unlike a lot of the tasks presumably needed to run an actual spaceship, most of the tasks the movie concentrates on HAL doing are ones that seem quintessentially human. And most of these turn out to be well-suited to modern machine learning—and month by month more and more of them have now been successfully tackled.

But what about knitting all these tasks together, to make a “complete HAL”? One could conceivably imagine having some giant neural net, and “training it for all aspects of life”. But this doesn’t seem like a good way to do things. After all, if we’re doing celestial mechanics to work out the trajectory of a spacecraft, we don’t have to do it by matching examples; we can do it by actual calculation, using the achievements of mathematical science.

We need our HAL to be able to know about a lot of kinds of things, and to be able to compute about a lot of kinds of things, including ones that involve human-like recognition and judgement.

In the book version of 2001, the name HAL was said to stand for “Heuristically programmed ALgorithmic computer”. And the way Arthur C. Clarke explained it is that this was supposed to mean “it can work on a program that’s already set up, or it can look around for better solutions and you get the best of both worlds”.

And at least in some vague sense, this is actually a pretty good description of what I’ve built over the past 30 years as the Wolfram Language. The “programs that are already set up” happen to try to encompass a lot of the systematic knowledge about computation and about the world that our civilization has accumulated.

But there’s also the concept of searching for new programs. And actually the science that I’ve done has led me to do a lot of work searching for programs in the computational universe of all possible programs. We’ve had many successes in finding useful programs that way, although the process is not as systematic as one might like.

In recent years, the Wolfram Language has also incorporated modern machine learning—in which one is effectively also searching for programs, though in a restricted domain defined for example by weights in a neural network, and constructed so that incremental improvement is possible.

Could we now build a HAL with the Wolfram Language? I think we could at least get close. It seems well within range to be able to talk to HAL in natural language about all sorts of relevant things, and to have HAL use knowledge-based computation to control and figure out things about the spaceship (including, for example, simulating components of it).

The “computer as everyday conversation companion” side of things is less well developed, not least because it’s not as clear what the objective might be there. But it’s certainly my hope that in the next few years—in part to support applications like computational smart contracts (and yes, it would have been good to have one of those set up for HAL)—things like my symbolic discourse language project will provide a general framework for doing this.

“Incapable of Error”

Do computers “make mistakes”? When the first electronic computers were made in the 1940s and 1950s, the big issue was whether the hardware in them was reliable. Did the electrical signals do what they were supposed to, or did they get disrupted, say because a moth (“bug”) flew inside the computer?

By the time mainframe computers were developed in the early 1960s, such hardware issues were pretty well under control. And so in some sense one could say (and marketing material did) that computers were “perfectly reliable”.

HAL reflects this sentiment in 2001. “The 9000 series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error.”

From a modern point of view, saying this kind of thing seems absurd. After all, everyone knows that computer systems—or, more specifically, software systems—inevitably have bugs. But in 1968, bugs weren’t really understood.

After all, computers were supposed to be perfect, logical machines. And so, the thinking went, they must operate in a perfect way. And if anything went wrong, it must, as HAL says in the movie, “be attributable to human error”. In other words, if the human were smart and careful enough, the computer would always “do the right thing”.

When Alan Turing did his original theoretical work in 1936 to show that universal computers could exist, he did it by writing what amounts to a program for his proposed universal Turing machine. And even in this very first program (which is only a page long), it turns out that there were already bugs.

But, OK, one might say, with enough effort, surely one can get rid of any possible bug. Well, here’s the problem: to do so requires effectively foreseeing every aspect of what one’s program could ever do. But in a sense, if one were able to do that, one almost wouldn’t need the program in the first place.

And actually, pretty much any program that’s doing nontrivial things is likely to show what I call computational irreducibility, which implies that there’s no way to systematically shortcut what the program does. To find out what it does, there’s basically no choice but to run it and watch. Sometimes this might be seen as a desirable feature—for example if one’s setting up a cryptocurrency that one wants to take irreducible effort to mine.

And, actually, if there isn’t computational irreducibility in a computation, then it’s a sign that the computation isn’t being done as efficiently as it could be.

What is a bug? One might define it as a program doing something one doesn’t want. So maybe we want the pattern on the left created by a very simple program to never die out. But the point is that there may be no way in anything less than an infinite time to answer the “halting problem” of whether it can in fact die out. So, in other words, figuring out if the program “has a bug” and does something one doesn’t want may be infinitely hard.

A cellular automaton. Will it halt?
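The same flavor shows up even in simple arithmetic. A hypothetical Python sketch of the only strategy available—run the program and watch, up to some step bound—using the famously unresolved Collatz map, where nobody has proved that every starting number eventually reaches 1:

```python
# The flavor of the halting problem in a very simple program: the Collatz
# map (halve if even, else triple and add one). Whether it reaches 1 for
# every start is unknown; all one can do is run it, up to a step bound.

def collatz_halts(n, max_steps=10_000):
    """Return the number of steps to reach 1, or None if the bound is hit."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
        if steps >= max_steps:
            return None  # inconclusive: maybe it loops forever, maybe it's just slow
    return steps

print(collatz_halts(27))  # prints 111
```

A `None` result settles nothing: no finite bound can, in general, distinguish "runs forever" from "just hasn't stopped yet"—which is exactly the situation with asking whether a pattern like the one above ever dies out.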

And of course we know that bugs are not just a theoretical problem; they exist in all large-scale practical software. And unless HAL only does things that are so simple that we foresee every aspect of them, it’s basically inevitable that HAL will exhibit bugs.

But maybe, one might think, HAL could at least be given some overall directives—like “be nice to humans”, or other potential principles of AI ethics. But here’s the problem: given any precise specification, it’s inevitable that there will be unintended consequences. One might call these “bugs in the specification”, but the problem is they’re unavoidable. When computational irreducibility is present, there’s basically never any finite specification that can rule out every conceivable “unintended consequence”.

Or, said in terms of 2001, it’s inevitable that HAL will be capable of exhibiting unexpected behavior. It’s just a consequence of being a system that does sophisticated computation. It lets HAL “show creativity” and “take initiative”. But it also means HAL’s behavior can’t ever be completely predicted.

The basic theoretical underpinnings to know this already existed in the 1950s or even earlier. But it took experience with actual complex computer systems in the 1970s and 1980s for intuition about bugs to develop. And it took my explorations of the computational universe in the 1980s and 1990s to make it clear how ubiquitous the phenomenon of computational irreducibility actually is, and how much it affects basically any sufficiently broad specification.

How Did They Get It Right?

It’s interesting to see what the makers of 2001 got wrong about the future, but it’s impressive how much they got right. So how did they do it? Well, between Stanley Kubrick and Arthur C. Clarke (and their “scientific consultant” Fred Ordway III), they solicited input from a fair fraction of the top technology companies of the day—and (though there’s nothing in the movie credits about them) received a surprising amount of detailed information about the plans and aspirations of these companies, along with quite a few designs custom-made for the movie as a kind of product placement.

In the very first space scene in the movie, for example, one sees an assortment of differently shaped spacecraft, that were based on concept designs from the likes of Boeing, Grumman and General Dynamics, as well as NASA. (In the movie, there are no aerospace manufacturer logos—and NASA also doesn’t get a mention; instead the assorted spacecraft carry the flags of various countries.)

But so where did the notion of having an intelligent computer come from? I don’t think it had an external source. I think it was just an idea that was very much “in the air” at the time. My late friend Marvin Minsky, who was one of the pioneers of AI in the 1960s, visited the set of 2001 during its filming. But Kubrick apparently didn’t ask him about AI; instead he asked about things like computer graphics, the naturalness of computer voices, and robotics. (Marvin claimed to have suggested the configuration of arms that was used for the pods on the Jupiter spacecraft.)

But what about the details of HAL? Where did those come from? The answer is that they came from IBM.

IBM was at the time by far the world’s largest computer company, and it also conveniently happened to be headquartered in New York City, which is where Kubrick and Clarke were doing their work. IBM—as now—was always working on advanced concepts that they could demo. They worked on voice recognition. They worked on image recognition. They worked on computer chess. In fact, they worked on pretty much all the specific technical features of HAL shown in 2001. Many of these features are even shown in the “Information Machine” movie IBM made for the 1964 World’s Fair in New York City (though, curiously, that movie has a dynamic multi-window form of presentation that wasn’t adopted for HAL).

From a marketing brochure for the IBM System/360 mainframe computer

In 1964, IBM had proudly introduced their System/360 mainframe computers:

And the rhetoric about HAL having a flawless operational record could almost be out of IBM’s marketing material for the 360. And of course HAL was physically big—like a mainframe computer (actually even big enough that a person could go inside the computer). But there was one thing about HAL that was very non-IBM. Back then, IBM always strenuously avoided ever saying that computers could themselves be smart; they just emphasized that computers would do what people told them to. (Somewhat ironically, the internal slogan that IBM used for its employees was “Think”. It took until the 1980s for IBM to start talking about computers as smart—and for example in 1980 when my friend Greg Chaitin was advising the then-head of research at IBM he was told it was deliberate policy not to pursue AI, because IBM didn’t want its human customers to fear they might be replaced by AIs.)

An interesting letter from 1966 surfaced recently. In it, Kubrick asks one of his producers (a certain Roger Caras, who later became well known as a wildlife TV personality): “Does I.B.M. know that one of the main themes of the story is a psychotic computer?”. Kubrick is concerned that they will feel “swindled”. The producer writes back, talking about IBM as “the technical advisor for the computer”, and saying that IBM will be OK so long as they are “not associated with the equipment failure by name”.

But was HAL supposed to be an IBM computer? The IBM logo appears a couple of times in the movie, but not on HAL. Instead, HAL has a nameplate that looks like this:

HAL's nameplate in 2001: A Space Odyssey

It’s certainly interesting that the blue is quite like IBM’s characteristic “big blue” blue. It’s also very curious that if you go one step forward in the alphabet from the letters H A L, you get I B M. Arthur C. Clarke always claimed this was a coincidence, and it probably was. But my guess is that at some point, that blue part of HAL’s nameplate was going to say “IBM”.
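The letter-shift curiosity is easy to check for oneself; here is a tiny illustrative Python function (purely for fun—the names are made up):

```python
# The oft-noted curiosity: shifting each letter of "HAL" forward by one
# place in the alphabet yields "IBM". (Clarke always said: coincidence.)

def shift_letters(word, k=1):
    """Shift each A-Z letter of `word` by k places, wrapping around."""
    return "".join(chr((ord(c) - ord("A") + k) % 26 + ord("A")) for c in word)

print(shift_letters("HAL"))  # prints IBM
```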

Like some other companies, IBM was fond of naming its products with numbers. And it’s interesting to look at what numbers they used. In the 1960s, there were a lot of 3- and 4-digit numbers starting with 3’s and 7’s, including a whole 7000 series, etc. But, rather curiously, there was not a single one starting with 9: there was no IBM 9000 series. In fact, IBM didn’t have a single product whose name started with 9 until the 1990s. And I suspect that was due to HAL.

By the way, the IBM liaison for the movie was their head of PR, C. C. Hollister, who was interviewed in 1964 by the New York Times about why IBM—unlike its competitors—ran general advertising (think Super Bowl), given that only a thin stratum of corporate executives actually made purchasing decisions about computers. He responded that their ads were “designed to reach… the articulators or the 8 million to 10 million people that influence opinion on all levels of the nation’s life” (today one would say “opinion makers”, not “articulators”).

He then added “It is important that important people understand what a computer is and what it can do.” And in some sense, that’s what HAL did, though not in the way Hollister might have expected.

Predicting the Future

OK, so now we know—at least over the span of 50 years—what happened to the predictions from 2001, and in effect how science fiction did (or did not) turn into science fact. So what does this tell us about predictions we might make today?

In my observation things break into three basic categories. First, there are things people have been talking about for years, that will eventually happen—though it’s not clear when. Second, there are surprises that basically nobody expects, though sometimes in retrospect they may seem somewhat obvious. And third, there are things people talk about, but that potentially just won’t ever be possible in our universe, given how its physics works.

Something people have talked about for ages, that surely will eventually happen, is routine space travel. When 2001 was released, no humans had ever ventured beyond Earth orbit. But by the very next year, they’d landed on the Moon. And 2001 made what might have seemed like a reasonable prediction that by the year 2001 people would routinely be traveling to the Moon, and would be able to get as far as Jupiter.

Now of course in reality this didn’t happen. But actually it probably could have, if it had been considered a sufficient priority. But there just wasn’t the motivation for it. Yes, space has always been more broadly popular than, say, ocean exploration. But it didn’t seem important enough to put the necessary resources into.

Will it ever happen? I think it’s basically a certainty. But will it take 5 years or 50? It’s very hard to tell—though based on recent developments I would guess about halfway between.

People have been talking about space travel for well over a hundred years. They’ve been talking about what’s now called AI for even longer. And, yes, at times there’ve been arguments about how some feature of human intelligence is so fundamentally special that AI will never capture it. But I think it’s pretty clear at this point that AI is on an inexorable path to reproduce any and all features of whatever we would call intelligence.

A more mundane example of what one might call “inexorable technology development” is videophones. Once one had phones and one had television, it was sort of inevitable that eventually one would have videophones. And, yes, there were prototypes in the 1960s. But for detailed reasons of computer and telecom capacity and cost, videophone technology didn’t really become broadly available for a few more decades. But it was basically inevitable that it eventually would.

In science fiction, basically ever since radio was invented, it was common to imagine that in the future everyone would be able to communicate through radio instantly. And, yes, it took the better part of a century. But eventually we got cellphones. And in time we got smartphones that could serve as magic maps, and magic mirrors, and much more.

An example that’s today still at an earlier stage in its development is virtual reality. I remember back in the 1980s trying out early VR systems. But back then, they never really caught on. But I think it’s basically inevitable that they eventually will. Perhaps it will require having video that’s at the same quality level as human vision (as audio has now been for a couple of decades). And whether it’s exactly VR, or instead augmented reality, that eventually becomes widespread is not clear. But something like that surely will. Though exactly when is not clear.

There are endless examples one can cite. People have been talking about self-driving cars since at least the 1960s. And eventually they will exist. People have talked about flying cars for even longer. Maybe helicopters could have gone in this direction, but for detailed reasons of control and reliability that didn’t work out. Maybe modern drones will solve the problem. But again, eventually there will be flying cars. It’s just not clear exactly when.

Similarly, there will eventually be robotics everywhere. I have to say that this is something I’ve been hearing will “soon happen” for more than 50 years, and progress has been remarkably slow. But my guess is that once it’s finally figured out how to really do “general-purpose robotics”—like we can do general-purpose computation—things will advance very quickly.

And actually there’s a theme that’s very clear over the past 50+ years: what once required the creation of special devices is eventually possible by programming something that is general purpose. In other words, instead of relying on the structure of physical devices, one builds up capabilities using computation.

What is the end point of this? Basically it’s that eventually everything will be programmable right down to atomic scales. In other words, instead of specifically constructing computers, we’ll basically build everything “out of computers”. To me, this seems like an inevitable outcome. Though it happens to be one that hasn’t yet been much discussed, or, say, explored in science fiction.

Returning to more mundane examples, there are other things that will surely be possible one day, like drilling into the Earth’s mantle, or having cities under the ocean (both subjects of science fiction in the past—and there’s even an ad for a “Pan Am Underwater Hotel” visible on the space station in 2001). But whether these kinds of things will be considered worth doing is not so clear. Bringing back dinosaurs? It’ll surely be possible to get a good approximation to their DNA. How long all the necessary bioscience developments will take I don’t know, but one day one will surely be able to have a live stegosaurus again.

Perhaps one of the oldest “science fiction” ideas ever is immortality. And, yes, human lifespans have been increasing. But will there come a point where humans can for practical purposes be immortal? I am quite certain that there will. Quite whether the path will be primarily biological, or primarily digital, or some combination involving molecular-scale technology, I do not know. And quite what it will all mean, given the inevitable presence of an infinite number of possible bugs (today’s “medical conditions”), I am not sure. But I consider it a certainty that eventually the old idea of human immortality will become a reality. (Curiously, Kubrick—who was something of an enthusiast for things like cryonics—said in an interview in 1968 that one of the things he thought might have happened by the year 2001 is the “elimination of old age”.)

So what’s an example of something that won’t happen? There’s a lot we can’t be sure about without knowing the fundamental theory of physics. (And even given such a theory, computational irreducibility means it can be arbitrarily hard to work out the consequences for some particular issue.) But two decent candidates for things that won’t ever happen are Honey-I-Shrunk-the-Kids miniaturization and faster-than-light travel.

Well, at least these things don’t seem likely to happen the way they’re typically portrayed in science fiction. But it’s still possible that things that are somehow functionally equivalent will happen. For example, it could perfectly well be possible to “scan an object” at an atomic scale, then “reinterpret it”, and use molecular-scale construction to build up at least a very good approximation to it that happens to be much smaller.

What about faster-than-light travel? Well, maybe one will be able to deform spacetime enough that it’ll effectively be possible. Or conceivably one will be able to use quantum mechanics to effectively achieve it. But these kinds of solutions assume that what one cares about are things happening directly in our physical universe.

But imagine that in the future everyone has effectively been “uploaded” into some digital system—so that the “physics” one’s experiencing is instead something virtualized. And, yes, at the level of the underlying hardware maybe there will be restrictions based on the speed of light. But for purposes of the virtualized experience, there’ll be no such constraint. And, yes, in a setup like this, one can also imagine another science fiction favorite: time travel (notwithstanding its many philosophical issues).

OK, so what about surprises? If we look at the world today, compared to 50 years ago, it’s easy to identify some surprises. Computers are far more ubiquitous than almost anyone expected. And there are things like the web, and social media, that weren’t really imagined (even though perhaps in retrospect they seem “obvious”).

There’s another surprise, whose consequences are so far much less well understood, but that I’ve personally been very involved with: the fact that there’s so much complexity and richness to be found in the computational universe.

Almost by definition, “surprises” tend to occur when understanding what’s possible, or what makes sense, requires a change of thinking, or some kind of “paradigm shift”. Often in retrospect one imagines that such changes of thinking just occur—say in the mind of one particular person—out of the blue. But in reality what’s almost always going on is that there’s a progressive stack of understanding developed—which, perhaps quite suddenly, allows one to see something new.

And in this regard it’s interesting to reflect on the storyline of 2001. The first part of the movie shows an alien artifact—a black monolith—that appears in the world of our ape ancestors, and starts the process that leads to modern civilization. Maybe the monolith is supposed to communicate critical ideas to the apes by some kind of telepathic transmission.

But I like to have another interpretation. No ape 4 million years ago had ever seen a perfect black monolith, with a precise geometrical shape. But as soon as they saw one, they could tell that something they had never imagined was possible. And the result was that their worldview was forever changed. And—a bit like the emergence of modern science as a result of Galileo seeing the moons of Jupiter—that’s what allowed them to begin constructing what became modern civilization.

The Extraterrestrials

When I first saw 2001 fifty years ago nobody knew whether there would turn out to be life on Mars. People didn’t expect large animals or anything like that. But lichens or microorganisms seemed, if anything, more likely than not.

With radio telescopes coming online, and humans just beginning to venture out into space, it also seemed quite likely that before long we’d find evidence of extraterrestrial intelligence. But in general people seemed neither particularly excited nor particularly concerned about this prospect. Yes, there would be mention of the time when a radio broadcast of H. G. Wells’s War of the Worlds story was thought to be a real alien invasion in New Jersey. But 20 or so years after the end of World War II, people were much more concerned about the ongoing Cold War, and what seemed like the real possibility that the world would imminently blow itself up in a giant nuclear conflagration.

The seed for what became 2001 was a rather nice 1951 short story by Arthur C. Clarke called “The Sentinel” about a mysterious pyramid discovered on the Moon, left there before life emerged on Earth, and finally broken open by humans using nuclear weapons, but found to have contents that were incomprehensible. Kubrick and Clarke worried that before 2001 was released, their story might have been overtaken by the actual discovery of extraterrestrial intelligence (and they even explored taking out insurance against this possibility).

But as it is, 2001 became basically the first serious movie exploration of what the discovery of extraterrestrial intelligence might be like. As I’ve recently discussed at length, deciding in the abstract whether or not something was really “produced by intelligence” is a philosophically deeply challenging problem. But at least in the world as it is today, we have a pretty good heuristic: things that look geometrically simpler (with straight edges, circles, etc.) are probably artifacts. Of course, at some level it’s a bit embarrassing that nature seems to quite effortlessly make things that look more complex than what we typically produce, even with all our engineering prowess. And, as I’ve argued elsewhere, as we learn to take advantage of more of the computational universe, this will no doubt change. But at least for now, the “if it’s geometrically simple, it’s probably an artifact” heuristic works quite well.

And in 2001 we see it in action—when the perfectly cuboidal black monolith appears on the 4-million-year-old Earth: it’s visually very obvious that it isn’t something that belongs, and that it’s something that was presumably deliberately constructed.

A little later in the movie, another black monolith is discovered on the Moon. It’s noticed because of what’s called in the movie the “Tycho Magnetic Anomaly” (“TMA-1”)—probably named by Kubrick and Clarke after the South Atlantic Anomaly associated with the Earth’s radiation belts, that was discovered in 1958. The magnetic anomaly could have been natural (“a magnetic rock”, as one of the characters says). But once it’s excavated and found to be a perfect black cuboidal monolith, extraterrestrial intelligence seems the only plausible origin.

As I’ve discussed elsewhere, it’s hard to even recognize intelligence that doesn’t have any historical or cultural connection to our own. And it’s essentially inevitable that this kind of alien intelligence will seem to us in many ways incomprehensible. (It’s a curious question, though, what would happen if the alien intelligence had already inserted itself into the distant past of our own history, as in 2001.)

Kubrick and Clarke at first assumed that they’d have to actually show extraterrestrials somewhere in the movie. And they worried about things like how many legs they might have. But in the end Kubrick decided that the only alien that had the degree of impact and mystery that he wanted was an alien one never actually saw.

And so, for the last 17% of 2001, after Dave Bowman goes through the “star gate” near Jupiter, one sees what was probably supposed to be purposefully incomprehensible—if aesthetically interesting. Are these scenes of the natural world elsewhere in the universe? Or are these artifacts created by some advanced civilization?

Images from the "star gate" scenes in 2001: A Space Odyssey

We see some regular geometric structures, that read to us like artifacts. And we see what appear to be more fluid or organic forms, that do not. For just a few frames there are seven strange flashing octahedra.

Flashing octahedra from 2001: A Space Odyssey

I’m pretty sure I never noticed these when I first saw 2001 fifty years ago. But in 1997, when I studied the movie in connection with HAL’s birthday, I’d been thinking for years about the origins of complexity, and about the differences between natural and artificial systems—so the octahedra jumped out at me (and, yes, I spent quite a while wrangling the LaserDisc version of 2001 I had back then to try to look at them more carefully).

I didn’t know what the octahedra were supposed to be. With their regular flashing, I at first assumed they were meant to be some kind of space beacons. But I’m told that actually they were supposed to be the extraterrestrials themselves, appearing in a little cameo. Apparently there’d been an earlier version of the script in which the octahedra wound up riding in a ticker tape parade in New York City—but I think the cameo was a better idea.

When Kubrick was interviewed about 2001, he gave an interesting theory for the extraterrestrials:  “They may have progressed from biological species, which are fragile shells for the mind at best, into immortal machine entities—and then, over innumerable eons, they could emerge from the chrysalis of matter transformed into beings of pure energy and spirit. Their potentialities would be limitless and their intelligence ungraspable by humans.”

It’s interesting to see Kubrick grappling with the idea that minds and intelligence don’t have to have physical form. Of course, in HAL he’d already in a sense imagined a “non-physical mind”. But back in the 1960s, with the idea of software only just emerging, there wasn’t yet a clear notion that computation could be something meaningful in its own right, independent of the particulars of its “hardware” implementation.

That universal computation was possible had arisen as an essentially mathematical idea in the 1930s. But did it have physical implications? In the 1980s I started talking about things like computational irreducibility, and about some of the deep connections between universal computation and physics. But back in the 1950s, people looked for much more direct implications of universal computation. And one of the notable ideas that emerged was of “universal constructors”—that would somehow be able to construct anything, just as universal computers could compute anything.

In 1952—as part of his attempt to “mathematicize” biology—John von Neumann wrote a book about “self-reproducing automata” in which he came up with what amounts to an extremely complicated 2D cellular automaton that can have a configuration that reproduces itself. And of course—as was discovered in 1953—it turns out to be correct that digital information, as encoded in DNA, is what specifies the construction of biological organisms.

But in a sense von Neumann’s efforts were based on the wrong intuition. For he assumed (as I did, before I saw evidence to the contrary) that to make something that has a sophisticated feature like self-reproduction, the thing itself must somehow be correspondingly complicated.

But as I discovered many years later by doing experiments in the computational universe of simple programs, it’s just not true that it takes a complicated system to show complicated behavior: even systems (like cellular automata) with some of the simplest imaginable rules can do it. And indeed, it’s perfectly possible to have systems with very simple rules that show self-reproduction—and in the end self-reproduction doesn’t seem like a terribly special feature at all (think computer code that copies itself, etc.).
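The “computer code that copies itself” point can be made concrete with a classic two-line Python quine—a standard programming trick (not anything specific to the systems discussed here) in which a program’s output is exactly its own source:

```python
# A minimal quine: running the two lines below prints exactly those two lines.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The whole mechanism is one string that gets substituted into itself—self-reproduction from almost nothing, with no complicated machinery required.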

But back in the 1950s von Neumann and his followers didn’t know that. And given the enthusiasm for things to do with space, it was inevitable that the idea of “self-reproducing machines” would quickly find its way into notions of self-reproducing space probes (as well as self-reproducing lunar factories, etc.).

I’m not sure if these threads had come together by the time 2001 was made, but certainly by the time of the 2010 sequel, Arthur C. Clarke had decided that the black monoliths were self-reproducing machines. And in a scene reminiscent of the modern idea that AIs, when given the directive to make more paperclips, might turn everything (including humans) into paperclips, the 2010 movie includes black monoliths turning the entire planet of Jupiter into a giant collection of black monoliths.

What are the aliens trying to do in 2001? I think Kubrick recognized that their motivations would be difficult to map onto anything human. Why for example does Dave Bowman wind up in what looks like a Louis-XV-style hotel suite—that’s probably the most timeless human-created backdrop of the movie (except for the fact that in keeping with 1960s practices, there’s a bathtub but no shower in the suite)?

It’s interesting that 2001 contains both artificial and extraterrestrial intelligence. And it’s interesting that 50 years after 2001 was released, we’re getting more and more comfortable with the idea of artificial intelligence, yet we believe we’ve seen no evidence of extraterrestrial intelligence.

As I’ve argued extensively elsewhere, I think the great challenge of thinking about extraterrestrial intelligence is defining what we might mean by intelligence. It’s very easy for us humans to have the analog of a pre-Copernican view in which we assume that our intelligence and capabilities are somehow fundamentally special, just like the Earth used to be assumed to be at the center of the universe.

But what my Principle of Computational Equivalence suggests is that in fact we’ll never be able to define anything fundamentally special about our intelligence; what’s special about it is its particular history and connections. Does the weather “have a mind of its own”? Well, based on the Principle of Computational Equivalence I don’t think there’s anything fundamentally different about the computations it’s doing from the ones that go on in our brains.

And similarly, when we look out into the cosmos, it’s easy to see examples of sophisticated computation going on. Of course, we don’t think of the complex processes in a pulsar magnetosphere as “extraterrestrial intelligence”; we just think of them as something “natural”. In the past we might have argued that however complex such a process looks, it’s really somehow fundamentally simpler than human intelligence. But given the Principle of Computational Equivalence we know this isn’t true.

So why don’t we consider a pulsar magnetosphere to be an example of “intelligence”? Well, because in it we don’t recognize anything like our own history, or our own detailed behavior. And as a result, we don’t have a way to connect what it does with purposes that we humans understand.

The computational universe of all possible programs is full of sophisticated computations that aren’t aligned with any existing human purposes. But as we try to develop AI, what we are effectively doing is mining that computational universe for programs that do things we want done.

Out there in the computational universe, though, there’s an infinite collection of “possible AIs”. And there’s nothing less capable about the ones that we don’t yet choose to use; we just don’t see how they align with things we want.

Artificial intelligence is in a sense the first example of alien intelligence that we’re seeing (yes, there are animals too, but it’s easier to connect with AI). We’re still at the very early stages of getting widespread intuition about AI. But as we understand more about what AI really can be, and how it relates to everything else in the computational universe, I think we’ll get a clearer perspective on the forms intelligence can take.

Will we find extraterrestrial intelligence? Well, in many respects I think we already have. It’s all around us in the universe—doing all kinds of sophisticated computations.

Will there ever be a dramatic moment, like in 2001, where we find extraterrestrial intelligence that’s aligned enough with our own intelligence that we can recognize the perfect black monoliths it makes—even if we can’t figure out their “purpose”? My current suspicion is that it’ll be more “push” than “pull”: instead of seeing something that we suddenly recognize, we’ll instead gradually generalize our notion of intelligence, until we start to be comfortable attributing it not just to ourselves and our AIs, but also to other things in the universe.

Personal Journey

When I first saw 2001 I don’t think I ever even calculated how old I’d be in the year 2001. I was always thinking about what the future might be like, but I didn’t internalize actually living through it. Back when I was 8 years old, in 1968, space was my greatest interest, and I made lots of little carefully stapled booklets, full of typewritten text and neatly drawn diagrams. I kept detailed notes on every space probe that was launched, and tried to come up with spacecraft (I wrote it “space-craft”) designs of my own.

What made me do this? Well, presaging quite a bit that I’ve done in my life, I did it just because I found it personally interesting. I never showed any of it to anyone, and never cared what anyone might think of it. And for nearly 50 years I’ve just had it all stored away. But looking at it again now, I found one unique example of something related to my interests that I did for school: a booklet charmingly titled “The Future”, written when I was 9 or 10 years old, and containing what’s to me now a cringingly embarrassing page of my predictions for the future of space exploration (complete with a nod to 2001):

"The Future," by Stephen Wolfram (at age 9 or 10)

Fortunately perhaps, I didn’t wait around to find out how wrong these predictions were, and within a couple of years my interest in space had transformed into interests in more foundational fields, first physics and then computation and the study of the computational universe. When I first started using computers around 1972, it was a story of paper tape and teleprinters—far from the flashing screens of 2001.

But I’ve been fortunate enough to live through a time when the computer technology of 2001 went from pure fiction to something close to fact. And I’ve been even more fortunate to have been able to contribute a bit to that.

I’ve often said—in a kind of homage to 2001—that my favorite personal aspiration is to build “alien artifacts”: things that are recognizable once they’re built, but which nobody particularly expected would exist or be possible. I like to think that Wolfram|Alpha is some kind of example—as is what the Wolfram Language has become. And in a sense so have my efforts been in exploring the computational universe.

I never interacted with Stanley Kubrick. But I did interact with Arthur C. Clarke, particularly when my big book A New Kind of Science was being published. (I like to think that the book is big in content, but it is definitely big in size, with 1280 pages, weighing nearly 6 pounds.) Arthur C. Clarke asked for a pre-publication copy, which I duly sent, and on March 1, 2002, I received an email from him saying that “A ruptured postman has just staggered away from my front door… Stay tuned…..”.

Then, three days later, I got another piece of mail: “Well, I have &lt;looked&gt; at (almost) every page and am still in a state of shock. Even with computers, I don’t see how you could have done it.” Wow! I actually succeeded in making what seemed to Arthur C. Clarke like an alien artifact!

He offered me a back-cover quote for the book: “… Stephen’s magnum opus may be the book of the decade, if not the century. It’s so comprehensive that perhaps he should have called it ‘A New Kind of Universe’, and even those who skip the 1200 pages of (extremely lucid) text will find the computer-generated illustrations fascinating. My friend HAL is very sorry he hadn’t thought of them first…” (In the end Steve Jobs talked me out of having quotes on the book, though, saying “Isaac Newton didn’t have back-cover quotes; why do you want them?”)

It’s hard for me to believe it’s been 50 years since I first saw 2001. Not all of 2001 has come true (yet). But for me what was important was that it presented a vision of what might be possible—and an idea of how different the future might be. It helped me set the course of my life to try to define in whatever ways I can what the future will be. And not just waiting for aliens to deliver monoliths, but trying to build some “alien artifacts” myself.

9 comments

  1. I saw 2001, also in 1968, in Cinerama. That visual experience cannot be duplicated, even on today’s big screens. I was 22.
    Speaking of the picture phone, I had a (very) low level job at Bell Labs in Murray Hill NJ in 1967. I got to play with the picture phone, but the only people I could call were the executives who had offices at the ends of the side corridors of the building. (The main corridor was a mile long.) It was brilliant for the time.

    As wonderful as the future has been, I still miss moon bases and Mars colonies. It is as if there was a collective loss of nerve.

  2. Wolfram Research can do it! – you can create the world’s first artificial general intelligence (AGI). To succeed, it’s essential that the Wolfram Language is extended to deal with cognitive and social entities (goal-directed dynamical systems). As I suggested in my previous comment, cognitive and social entities are *quasi-abstract* (localized in time but not space). Once the Wolfram Language can model cognitive systems, it will have the capacity for *reflection* (self-reference), which seems to me to be the essential step needed for AGI.

    Here’s my current best model of ‘intelligence’:

    I think there are 3 levels:

    (1) Prediction (Learning): Network level (level of machine learning). Sub-domains: Statistical Inference, Neural Networks and Network Science

    https://en.wikipedia.org/wiki/User:Zarzuelazen/Books/Reality_Theory:_Machine_Learning

    (2) Optimization: Landscape level (level of fitness landscapes). Sub-domains: Search & Evolutionary Computation, Mathematical Optimization and Dynamical Systems

    https://en.wikipedia.org/wiki/User:Zarzuelazen/Books/Reality_Theory:_Optimization

    (3) Reflection: Conceptual level (level of conceptual models). Sub-domains: Linguistics, Ontology engineering and World Models (Epistemology)

    https://en.wikipedia.org/wiki/User:Zarzuelazen/Books/Reality_Theory:_Machine_Psychology%26NLP

    Wolfram Research very much concentrates on (3), the level of reflection or symbolic language representation, which I think is indeed the correct approach, since it seems to me that symbolic modeling of knowledge subsumes all the other levels. The key to AGI, as I mentioned, is to extend the Wolfram Language to model cognitive systems (then Wolfram|Alpha will be able to model itself!). At this point, you will have fully fledged AGI, I think!

  3. This article is really interesting/fascinating, in fact much more than that.

    What I appreciated more, and I think very few other people could have come to that conclusion, is the push vs pull idea of our understanding of intelligence.

    I think you touched a fundamental aspect of the universe(s) we live in.

    To George: come down to Italy to see 2001 in 70mm on the 30-meter screen of the Arcadia “cinema”. It runs almost every year; there’s no other place to see it (http://www.in70mm.com/news/2017/arcadia/index.htm).

  4. “there was television (though color only arrived in the UK in 1968, and I’d only seen black and white).” Color TV began in the U.S. in 1954 (I remember watching it). Why was the UK so slow to begin broadcasting color TV?

  5. “SMP—like the Wolfram Language today—was fundamentally based on the idea of defining transformations to apply when expressions match particular patterns. I always viewed this as a rough idealization of certain forms of human thinking. And what I thought was that general AI might effectively just require adding a way to match not just precise patterns, but also approximate ones.” This seems correct. Analogical thinking is essentially ‘imprecise pattern matching’, and one could argue, as does Hofstadter, that analogy-making is foundational in thinking. Computation (following recipes or algorithms) is not a substitute for thinking, by humans or by computers.

  6. Is there any value to qualia, consciousness, etc. when discussing a computational universe? This reminds me of a video game where every piece of the thing that you build is useless without a fundamental component that ‘activates’ it all… in some way.

  7. “When Kubrick was interviewed about 2001, he gave an interesting theory for the extraterrestrials: ‘They may have progressed from biological species, which are fragile shells for the mind at best, into immortal machine entities—and then, over innumerable eons, they could emerge from the chrysalis of matter transformed into beings of pure energy and spirit. Their potentialities would be limitless and their intelligence ungraspable by humans.’”

    Terence McKenna’s DMT experience is basically what Kubrick was hinting toward. Starts at the 29m50s mark:
    youtu.be/ZFrihMdTetg?t=29m49s

  8. I see you are sufficiently confident in human beings to predict a long term future for us, despite our damage to the global environment. I too am optimistic (though not completely certain) that we shall overcome our problems.

  9. Re: your comments about television just showing what the camera saw, vs. the computer displays in 2001:

    Most of the displays in 2001 were actually showing photographic information. A good friend of mine, Colin Cantwell, worked on many of the models and concept designs for 2001, and he explained that the technology to generate graphics like that just didn’t exist yet.

    I spluttered a bit and asked “But what about the computer image of the radio dish that they aimed back at Earth? That was a computer-generated wireframe animation, wasn’t it?”

    No, Colin said. It was a WIRE FRAME. He built it by cutting and soldering pieces of wire into the desired shape. Then they painted it white and filmed it against a black background with high-contrast film!

    Just another way that things have changed since 1968…