Latest Perspectives on the Computation Age

This is an edited version of a short talk I gave last weekend at The Nantucket Project—a fascinatingly eclectic event held on an island that I happen to have been visiting every summer for the past dozen years.

Lots of things have happened in the world in the past 100 years. But I think in the long view of history one thing will end up standing out among all others: this has been the century when the idea of computation emerged.

We’ve seen all sorts of things “get computerized” over the last few decades—and by now a large fraction of people in the world have at least some form of computational device. But I think we’re still only at the very beginning of absorbing the implications of the idea of computation. And what I want to do here today is to talk about some things that are happening, and that I think are going to happen, as a result of the idea of computation.

[Image: word cloud]

I’ve been working on this stuff since I was a teenager—which is now about a third of a century. And I think I’ve been steadily understanding more and more.

Our computational knowledge engine, Wolfram|Alpha, which was launched on the web about three years ago now, is one of the latest fruits of this understanding.

What it does—many millions of times every day—is to take questions people ask, and try to use the knowledge that it has inside it to compute answers to them. If you’ve used Siri on the iPhone, or a bunch of other services, you’ll probably have seen Wolfram|Alpha answers.

Here’s the basic idea of Wolfram|Alpha: we want to take all the systematic knowledge that’s been accumulated in our civilization, and make it computable. So that if there’s a question that can in principle be answered on the basis of that knowledge, we can just compute the answer.
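
Just to make that concrete: the same engine is callable programmatically from Mathematica, and a one-line query returns a computed result. (The particular question here is just an arbitrary example.)

    (* ask Wolfram|Alpha a free-form question from inside Mathematica; *)
    (* "Result" extracts just the computed answer from the full report *)
    WolframAlpha["how far is the Moon from the Earth right now", "Result"]

The point is that the answer isn’t looked up anywhere; it’s computed from the underlying data at the moment you ask.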

So how do we do that? Well, one starts off from data about the world. And we’ve been steadily accumulating data from primary sources about thousands of different kinds of things. Cities. Foods. Movies. Spacecraft. Species. Companies. Diseases. Sports. Chemicals. Whatever.

We’ve got a lot of data now, with more flowing in every second. And actually by now our collection of raw structured data is about as big in bytes as the text of all the human-written pages that one can find on the web.

But even all that data on its own isn’t enough. Because most questions require one not just to have the data, but to compute some specific answer from it. You want to know when some satellite is going to fly overhead? Well, we may have recent data about the satellite’s orbit. But we still have to do a bunch of physics and so on to figure out when it’s going to be over us.

And so in Wolfram|Alpha a big thing we’ve done is to try to take all those models and methods and algorithms—from science, and technology, and other areas—and just implement them all.

You might be thinking: there’s got to be some trick, some master algorithm, that you’re using. Well, no, there isn’t. It’s a huge project. And it involves experts from a zillion different areas. Giving us their knowledge, so we can make it computable.

Actually, even having the knowledge and being able to compute from it isn’t enough. Because we still have to solve the problem of how we communicate with the system. And when one’s dealing with, sort of, any kind of knowledge, any question, there’s only one practical way: we have to use human natural language.

So another big problem we’ve had to solve is how to take those ugly messy utterances that humans make, and turn them into something computable. Actually, I thought this might be just plain impossible. But it turned out that particularly as a result of some science I did—that I’ll talk about a bit later—we made some big breakthroughs.

The result is that when you type to Wolfram|Alpha, or talk to Siri… if you say something that humans could understand, there’s a really good chance we’ll be able to understand it too.

So we can communicate to our system with language. How does it communicate back to us?

What we want to do is to take whatever you ask, and generate the best report we can about it. Not just give you one answer, but contextualize it. Organize the information in a way that’s optimized for humans to understand.

All of this, as I say, happens many millions of times every day. And I’m really excited about what it means for the democratization of knowledge.

It used to be that if you wanted to answer these kinds of questions, you’d have to go find an expert, and have them figure out the answer. But now in a sense we’ve automated a lot of those experts. So that means anyone, anywhere, anytime, can immediately get answers.

People are used to being able to search for things on the web. But this is something quite different.

We’re not finding web pages where what you’ve asked for was already written down by someone. We’re taking your specific question, and computing for you a specific answer. And in fact most of the questions we see every day never appear on the web; they’re completely new and fresh.

When you search the web, it’s like asking a librarian a question, and having them hand you a pile of books—well, in this case, links to web pages—to read. What we’re trying to do is to give you an automated research analyst, who’ll instantly generate a research report about your question, complete with custom-created charts and graphs and so on.

OK. So this all seems like a pretty huge project. What’s made it possible?

Actually, I’d been thinking about basically this project since I was a kid. But at the beginning I had no idea in what decade—or even century—it would become possible. And it was a big piece of basic science I did—that I’ll talk about a bit later—that convinced me it might actually be possible.

I’ve been involved in some big technology projects over the years. But Wolfram|Alpha as a practical matter is by far the most complicated, with the largest number of different kinds of moving parts inside it.

And actually, it builds on something I’ve been working on for 25 years: a system called Mathematica. It’s a computer language that I guess one could say is these days by far the most algorithmically sophisticated computer language that exists.

Mathematica is the language that Wolfram|Alpha is implemented in. And the point is that in Mathematica, doing something like solving a differential equation is just one command. That’s how we manage to implement all those methods and models and so on. We’re starting from this very sophisticated language we already have.
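
Here’s what that looks like in practice, with a couple of minimal examples (the equations themselves are just illustrative):

    (* solve a differential equation symbolically: one command *)
    DSolve[y''[x] + y[x] == 0, y[x], x]

    (* or solve one numerically over a range, and plot the solution *)
    sol = NDSolve[{y'[x] == -y[x]^2 + Sin[x], y[0] == 1}, y, {x, 0, 10}];
    Plot[Evaluate[y[x] /. sol], {x, 0, 10}]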

Wolfram|Alpha is still about 15 million lines of code in the Mathematica language, though.

Wolfram|Alpha is about knowing everything it can about the world—with all its messiness—and letting humans interact with it quickly using natural language.

Mathematica is about creating a precise computer language, that has built into it, in a very coherent way, all the kinds of algorithmic functionality that we know about.

Over the past 25 years, Mathematica has become very widely used. There’s broad use on essentially all large university campuses, and all sophisticated corporate R&D operations around the world. And lots and lots of things have been discovered and invented with Mathematica.

In a sense, I see Mathematica as the implementation language for the idea of computation. Wolfram|Alpha is where that idea intersects with the sort of collective accumulation of knowledge that’s happened in our civilization.

So where does one go from here? Lots and lots of places.

First, Wolfram|Alpha is using public knowledge. What happens when we use internal knowledge of some kind?

Over the last couple of years there’ve been lots of custom versions of Wolfram|Alpha created, that take internal knowledge of some company or other organization, combine it with public knowledge, and compute answers.

What’s emerging is something pretty interesting. There’s lots of talk of “big data”. But what about “big answers”?

What one needs to do is to set things up so that all that data is computable. So that it’s possible to just ask a question in natural language, and automatically get answers, and automatically generate the most useful possible reports.

So far this is something that we’ve done as a custom thing for a limited number of large organizations. But we know how to generalize this, and in a sense provide a general way to automatically get analytics done, from data. We actually introduced the first step toward this a few months ago in Wolfram|Alpha.

You can not only ask Wolfram|Alpha questions, but you can also upload data to it. You can upload all kinds of data. Like a spreadsheet, or even an image. And then Wolfram|Alpha’s goal is to automatically tell you something interesting about that data. Or, if you ask a specific question, be able to give a report about the answer.
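
Just to sketch the flavor of this (this isn’t the actual Wolfram|Alpha pipeline, just a minimal illustration in Mathematica, with a hypothetical file name), the simplest version of “tell me something about this data” might be:

    (* import a hypothetical file of numbers and summarize it automatically *)
    data = Flatten[Import["data.csv", "CSV"]];
    {Mean[data], Median[data], StandardDeviation[data]}
    Histogram[data]

The real system has to try many kinds of analyses, and decide which results are interesting enough to report.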

Right now what we have works rather nicely for decently small lumps of data. We’re gradually scaling it up to huge quantities of data.

Here’s a kind of fun example that I did. It relates to personal analytics—or what’s sometimes called “quantified self”. I’ve been a data-oriented guy for a long time. So I’ve been collecting all kinds of data about myself. Every email for 23 years. Every keystroke for a dozen years. Every walking step for a bunch of years. And so on. I’ve found these things pretty useful in sort of keeping my life organized and productive.

Earlier this year I thought I’d take all this data I’ve accumulated, and feed it to Mathematica and Wolfram|Alpha. And pretty soon I’m getting all these plots and analyses and so on. Sort of my automated personal historian, showing me all these events and trends in my life and so on.

I have to say that I thought there must be lots of people who were collecting all sorts of data about themselves. But when I wrote about this stuff earlier this year—and it got picked up in all the usual media places—I was pretty surprised to realize that nobody came out and said “I’ve got more data than you”.

So, a little bit embarrassingly, I think I have to conclude that for now, I might be the data-nerdiest—or maybe the most computable—human around. Though we’re working to change that.

Just a few weeks ago, for example, we released Wolfram|Alpha Personal Analytics for Facebook. So people can connect their Facebook accounts to Wolfram|Alpha and immediately get all this analytics about themselves and their friends and so on.

And so far a few million people have done this. It’s kind of fun to see people’s lives made computable like this. There are all these different friend networks, for example. Each one tells a story. And tells one something about psychology too.
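
In Mathematica itself one can sketch the same kind of analysis (assuming a connected Facebook account; the friend-network property shown here is the one in the social-media data framework):

    (* fetch one's Facebook friend network as a graph, and plot the *)
    (* communities, the clusters of mutual friends, within it       *)
    friends = SocialMediaData["Facebook", "FriendNetwork"];
    CommunityGraphPlot[friends]

Those clusters are usually things like family, school, work: the “story” each network tells.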

So we’re talking about making things computable. What can we really make computable? What about a city?

There’s all this data in a city, collected by all sorts of municipal agencies. There are permits, there are reports, there’s GIS data. And so on. And if you’re a sophisticated city, you’ve got lots of this data on the web somehow. But it’s in raw form, where really only an expert can use it.

Well, what if we were to feed it through the Wolfram|Alpha technology stack? If there’s a question that could be answered about the city on the basis of the data that exists, it could be answered.

What electric power line gets closest to such-and-such a building? What’s the voltage drop between some point and the nearest substation? Imagine just being able to ask those questions to a mobile phone, and having it automatically compute the answers.
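
The core of the first question is a nearest-neighbor computation, which is easy to sketch with made-up coordinates standing in for real GIS data:

    (* hypothetical substation positions; real GIS data would supply these *)
    substations = {{41.28, -70.10}, {41.30, -70.08}, {41.27, -70.13}};
    building    = {41.29, -70.09};

    (* find the substation closest to the building (Euclidean on      *)
    (* lat/long here; a real version would use geodesic distances and *)
    (* the actual power-line network topology)                        *)
    First[Nearest[substations, building]]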

Well, there are a lot of details about actually setting this up in the world, but we now have the technology to do it. To make a computable city. Or, for that matter, to make a computable country. Where all the government data that’s being generated can be set up so we can automatically answer questions from it. Either for the citizens of the country, or for the people who run it. It’ll be interesting to see what the first computable country is… but from a technology—and workflow—point of view, we’re now ready to do this.

So what else can be computable like this?

Here’s another example: large engineering systems. These days there’s a language called Modelica—yes, it was a Mathematica-inspired name—that’s an open standard for people who create large engineering systems. There used to be just spec sheets for engineering components. Now there are effectively little algorithms that describe each component.

We acquired a company recently that had been using Mathematica for many years to do large-scale systems engineering. And just a couple of months ago we released an integrated systems modeling product, which allows one to take, say, 50,000 components in an airplane, represent them in computable form, and then automatically compute how they’ll behave in some particular situation.

We haven’t yet assembled it all, but we now have the technology stack to do the following: you’ve got some big engineering system in front of you, and maybe it’s been sending sensor data back to our servers. Now you talk to your mobile phone and you say “If I push it to 300 rpm, what will happen?” We understand the query, then run a model of the system, then tell you the answer; say “That wouldn’t be a very good idea” (preferably not in a HAL voice or something).
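
The end-to-end pipeline isn’t something I can show here, but the shape of the computation at its core is easy to sketch. Here’s a toy lumped thermal model, with completely made-up coefficients, answering the “300 rpm” question:

    (* toy model: heating grows like rpm^2, cooling is linear; *)
    (* every coefficient here is made up for illustration      *)
    rpm = 300; heat = 0.01; cool = 0.08; ambient = 25;
    sol = NDSolve[
      {T'[t] == heat rpm^2/100 - cool (T[t] - ambient), T[0] == ambient},
      T, {t, 0, 120}];

    (* "would that be a good idea?" becomes "does the temperature stay safe?" *)
    If[Max[Table[T[t] /. First[sol], {t, 0, 120, 1}]] > 90,
     "That wouldn't be a very good idea", "Looks fine"]

A real model, of course, has tens of thousands of coupled components rather than one equation; but the workflow is the same.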

So that’s about the operation of engineering systems. What about design?

Well, with everything being computable, it’s easy to run optimization algorithms on designs, or even to search a large space of possible designs. And increasingly what we’ll be doing is stating some design goal, then having the computer automatically figure out how to achieve that goal. It’ll know for example what components are available, with what specifications, and at what cost, and it’ll then figure out how to assemble what’s needed to achieve the design goal. Actually, there’s a much more everyday example of this that will come soon.

In Wolfram|Alpha, for example, we’ve been working with retailers to get data on consumer products. And the future will be to just ask in natural language for some product that meets some requirements, and then have the system automatically figure out what that is.

Or, more interestingly, to say: “I’m doing such-and-such a piece of home improvement. Figure out how much of what products I need to get to do that.” And the result should be an automatically generated bill of materials, and then the instructions about what to do with them.
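
At the bottom of such a computation there’s often quite simple arithmetic. A minimal sketch, with made-up numbers, for the paint in one room:

    (* hypothetical bill-of-materials arithmetic: paint for one room *)
    wallArea = 4*3*2.5;    (* four walls, 3 m x 2.5 m each, in m^2   *)
    coverage = 11;         (* m^2 covered per liter of this paint    *)
    coats    = 2;
    Ceiling[wallArea coats/coverage/2]   (* number of 2-liter cans: 3 *)

The hard part isn’t the arithmetic; it’s knowing, computably, what products exist, what they cost, and what the job actually requires.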

There are just all these areas ripe to be made computable. Here’s another one: law.

Actually, 300 years ago Leibniz was thinking about this when he first invented some precursors to the modern idea of computation. He imagined having some encoding of human laws, set up so one can ask a machine in effect to automatically figure out: “Is this legal or not?”

Well, today, there are some kinds of contracts that have already been “made computable”. Like contracts for derivative financial instruments and so on. But what if we could make the tax code computable? Or a mortgage computable? Or, more extremely, a patent?

Actually, some contracts like service-level agreements are beginning to become computable, of necessity, because in effect they have to be interpreted by computers in real time. And of course once things become computable, they can be applied in a much more democratized way, without all the experts needed, and so on.

Here’s a completely different area that I think is going to become computable, and actually that we’re planning to spin off a company to do. And that’s medical diagnosis.

When I look at the medical world, and the healthcare system, diagnosis is really a central problem. I mean, if you don’t have the right diagnosis, all the wonderful and expensive treatment in the world isn’t going to help, and actually it’s probably going to hurt.

Well, diagnosis is really hard for humans. I actually think it’s going to turn out not to be so hard for computers. It’s a lot easier for them to know more, and to not get confused about probabilities, and so on.

Of course, it’s a big project. You start off by encoding all those specialized decision trees and so on. But then you go on and grind up the medical literature and figure out what’s in there. Then you get lots of actual patient records—probably at first realistically from outside the US—and start doing analysis on those. There’s a lot about the getting of histories, and the delivery of diagnoses, that actually becomes a lot easier in an automated system.
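
Just to fix the idea of “encoding decision trees”: a tiny, completely made-up fragment might look like this (an illustration of the data structure, certainly not medical advice):

    (* a made-up two-question fragment of a diagnostic decision tree *)
    diagnose[temp_, coughDays_] :=
      Which[
       temp >= 39 && coughDays > 14, "refer for further investigation",
       temp >= 38, "likely viral; recheck in 48 hours",
       True, "no action suggested"]

    diagnose[38.5, 3]   (* -> "likely viral; recheck in 48 hours" *)

Multiply that by thousands of conditions, then layer on everything mined from the literature and from patient records, and you get a sense of the scale of the project.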

But, you know, there’s actually something that’s inevitably going to disrupt existing medical diagnosis, and that’s sensor-based medicine. These days there are only a handful of consumer-level medical sensors, like thermometers and things. Very soon there are going to be lots. And—a little bit like the personal analytics I was talking about earlier—people are going to be routinely recording all sorts of medical information about themselves.

And the question is: how is this going to be used in diagnosis? Because when you come in with 10 megabytes of time series, that’s not just a “were you sweating a lot” question. That’s something that will have to be analyzed with an algorithm.
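
Here’s the simplest possible sketch of what “analyzed with an algorithm” might mean for such a time series: flag the points that stray far from the baseline (the data here is simulated, and a real analysis would use a proper model of the signal):

    (* simulated sensor series with one injected anomaly *)
    series = Table[Sin[t/10.] + RandomReal[{-0.1, 0.1}], {t, 1, 500}];
    series[[300]] = 5;

    (* flag points more than 5 standard deviations from the mean *)
    m = Mean[series]; s = StandardDeviation[series];
    Position[series, x_ /; Abs[x - m] > 5 s]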

Actually, I think the whole medical process is going to end up being very algorithmic. Because you’ll be analyzing symptoms with algorithms, but then the treatment will also be specified by some algorithm. In fact, even though right now diagnosis is really important, I think in the end that’s sort of going to go away. One will be going straight from the observed data to the algorithm for treatment.

It’s sort of like in finance. You observe some behavior of some stock in the market. And, yes, there are technical traders who’ll start telling you that’s a “head and shoulders pattern” or something. But mostly—at least in the quant world—you’ll just be using an algorithm to decide what to do, without caring about the sort of “descriptive diagnosis” of what’s happening.

And in medicine, I expect that the whole computation idea will extend all the way down to the molecules we use as drugs. Today drugs tend to just be molecules that do one particular thing. In the future, I think we’re going to have molecules that each act like little computers, looking around at cells they encounter, and effectively running algorithms to decide how to act.

You know, there are some very basic questions about medical diagnosis. I like to think of the analogy of software diagnosis. You have a computer. It’s running an operating system. Things happen to it. All kinds of crud builds up, it starts running slower—and eventually it crashes; it dies.

And of course you can restart it—from the same program, effectively the same “genetic material” giving you the next generation. That’s all pretty analogous to biology. But it’s much less advanced. I mean, we have all those codes for medical conditions; there’s nothing analogous for software.

But in software, unlike in biology, in principle we can monitor every single bit of what’s happening. And we’ve just started doing some experiments trying to understand in a general way, sort of what’s optimal to monitor to do “software diagnosis”, or more interestingly, what do you have to fix on an ongoing basis to effectively “extend the lifespan” of the running program.
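
The simplest form of such monitoring is easy to show in Mathematica itself; the genuinely open question is which quantities are worth watching:

    (* sample this session's own memory use once a second for a minute, *)
    (* then look at the trend: a toy version of "software diagnosis"    *)
    samples = Table[Pause[1]; MemoryInUse[], {60}];
    ListLinePlot[samples]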

OK. So I’m going through lots of areas and talking about how computation affects them. Here’s another completely different one: journalism.

We’re in the interesting position now with Wolfram|Alpha of having by quite a large margin more data feeds—whose meaning we understand—coming into our system than anyone has ever had before. In other words, we sort of have this giant sensory system connected to lots of things in the world.

Now the question is: what’s interesting that’s going on in the world? We see all this data coming in. What’s unexpected? What’s newsworthy? In a sense what we want to create is computational journalism: automatically finding each hour what the “most interesting things happening in the world” are.

You know, in addition to algorithms to just monitor what’s going on, there are also algorithms to predict consequences. It might be solving the equations for the propagation of a tsunami across an ocean. I think we can pretty much do those. Or it might be—and this I’m much less sure will work—figuring out some economic or supply chain model, in kind of the same way that we figure out behavior of large engineering systems. So that we don’t just see raw news, but also compute consequences.
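
The tsunami case really is close to textbook physics: in the shallow-water approximation the wave speed is about Sqrt[g h], so a back-of-envelope travel time is a one-liner (the depth and distance here are just assumed round numbers):

    (* shallow-water wave speed Sqrt[g h], and an ocean-crossing time *)
    grav = 9.81; depth = 4000;      (* m/s^2; assumed mean depth in m *)
    speed = Sqrt[grav depth]        (* about 198 m/s, roughly 710 km/h *)
    distance = 8000*1000;           (* assumed 8000 km crossing, in m *)
    distance/speed/3600.            (* travel time: about 11 hours   *)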

So that’s computation in journalism. What about computation in books? How can those be computational?

Well, actually it’s rather easy. In fact, we started a company a couple of years ago that’s effectively making computational books. It’s called Touch Press. Our first book was an interactive tour of the chemical elements, that conveniently came out the day the iPad shipped, and that showed up in lots and lots of iPad ads. I’m actually surprised there aren’t lots more entrants here. But Touch Press has become by far the most successful publisher of highly interactive ebooks—in effect computational books. And, yes, underneath it’s using pieces of our technology stack, like Mathematica and Wolfram|Alpha. And producing books on all sorts of things. The most recent two being Egyptian pyramids, and Shakespeare’s sonnets.

And actually, from Mathematica we’ve built what we call CDF—the Computable Document Format—which lets one systematically define computable documents: documents where there’s interaction and computation going on right in the document.
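
The atom of such a document is a live computation with controls. In Mathematica that’s a Manipulate, and it can be exported straight into a CDF file (the file name here is arbitrary):

    (* an interactive element of the kind a computable document embeds: *)
    (* a plot whose frequency the reader changes with a slider          *)
    Manipulate[Plot[Sin[k x], {x, 0, 10}], {k, 1, 5}]

    (* deploy it as a standalone computable document *)
    Export["sine-demo.cdf", Manipulate[Plot[Sin[k x], {x, 0, 10}], {k, 1, 5}]]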

And from CDF—in addition to all sorts of corporate reports—we’re beginning to see a generation of textbooks that can interactively illustrate their points, perhaps pulling in real-time data too, and that can interactively let students try things out, or test themselves.

There’s actually a lot more to say about how computation relates to the future of education, both in form and content. We’ve been working to define a computer-based math curriculum that reflects what’s worth teaching in the 21st century, now that, for example, a large fraction of US students routinely use Wolfram|Alpha to do their homework every day. It’s actually exciting how much more we can teach now that knowledge and computation have been so much more democratized.

We’re also realizing—particularly with Mathematica—how much it’s possible to teach about computation, and programming, even at very early stages in education.

Some other time, perhaps, I can talk about the thinking we’ve done about how to change the structure of education—in certain ways to de-institutionalize it.

Before I finish I’d like to make sure I say just a tiny bit about what computation means not just about all the practical things I’ve been discussing, but also at a sort of deeper intellectual level. Like in science. Some of you may know that I’ve spent a great many years—in a sense as a user of Mathematica—doing basic science.

My main idea was to depart from essentially 300 years of scientific tradition, that had to do with using mathematical equations to describe the natural world, and instead sort of generalize them to arbitrary computer programs.

Well, my big discovery was that in the universe of possible computer programs, it takes only a very simple program to get incredibly complex behavior.
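
The classic example is the rule 30 cellular automaton: an update rule simple enough to state in a sentence, whose behavior is complex enough to pass statistical tests of randomness. In Mathematica it’s one line:

    (* evolve the rule 30 cellular automaton from a single black cell *)
    (* for 200 steps, and display the resulting pattern               *)
    ArrayPlot[CellularAutomaton[30, {{1}, 0}, 200]]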

And I think that’s very important in understanding many systems in nature. Maybe even in giving us a fundamental theory of physics for our whole universe. It also gives us new ways of thinking about things. For philosophy. For understanding systems, and organizations and so on.

Newtonian science gave us notions like momentum and forces and integrals, which we talk about nowadays in all kinds of contexts. The new kind of science gives us notions like computational irreducibility, and computational equivalence, that give us new ways to think about things.

There are also some very practical implications. Like in technology. In a sense, technology is all about taking what exists in the world, and seeing how to harness it for human purposes. Figuring out what good a magnetic material, or a certain kind of gas, is.

In the computational universe, we’ve got all these little programs and algorithms that do all these remarkable things. And now there’s a new kind of technology that we can do. Where we define some goal. Then we search this computational universe for a program that achieves it. In a sense, what this does is to make invention free.
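
A toy version of that kind of search: scan all 256 elementary cellular automaton rules for ones that achieve some stated goal, say growing a pattern on only one side (the goal here is arbitrary, chosen just to show the shape of the search):

    (* search the 256 elementary CA rules for ones whose pattern, *)
    (* grown from a single cell, never spreads to the left        *)
    Select[Range[0, 255],
     Max[CellularAutomaton[#, {{1}, 0}, 100][[All, 1 ;; 100]]] == 0 &]

Most rules fail; the ones that pass are found, not designed.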

Actually, we’ve used this for example for creating music, and other people have used it in areas like architecture. Using a computer to do creative work. And, if one wants, to do it very efficiently. Making it economical, for example, to do mass customization.

At a very practical level, for more than a decade now we’ve routinely been creating technology not by having human engineers build it up step by step, but instead by searching the computational universe—and finding all this stuff out there that we can harness for technology. It’s pretty interesting. Sometimes what one finds is readily understandable to a human. Sometimes one can verify it works, but it’s really a very non-human solution. Something that no human on their own would have come up with. But something that one just finds out there in the computational universe.

Well, this methodology of algorithm discovery—and related methodologies for finding actual structures, for mechanical devices, molecules, and so on—will, I think, inevitably grow in importance. In fact, I’m guessing that within a few decades we’re going to find that there’s more new technology being created by those methods than by all existing traditional engineering methods put together.

Today, we tend to create things using, in a sense, only simplified computations—because that’s what our existing methods let us work with. But in the future we’re going to be seeing, in every aspect of our world, much, much more that’s visibly doing sophisticated computation.

I want to leave you with the thought that even after everything that’s happened with computers over the past 50 years, we haven’t seen anything yet. Computation is a much stronger concept—and actually my guess is it’s going to be the defining concept for much of the future of human history.
