“The Emerging Computation Revolution”–A Talk

Last week I gave a talk at the 2010 Emerging Technologies conference at MIT. I talked about many of my favorite topics, but with a particular orientation toward the future of the technology industry.

Stephen Wolfram at EmTech

Here’s a transcript of the talk:

The Emerging Computation Revolution

When we look back on the history of technology, I think we’ll see that the greatest revolution of the 20th century was the arrival of the concept of computation.

And in these years today, I think we’re seeing something else happen: the emergence of a second set of revolutions made possible by the concept of computation.

And it’s those revolutions that I want to talk about here today.

Now, needless to say, I’m quite involved in these.  And for me it’s really been about a 30-year journey getting to the point we’re at today—slowly understanding what’s possible.

Well, behind me here I have one of the fruits of that—Wolfram|Alpha.

And I want to talk about that, and about the idea of knowledge-based computing that it’s making possible.

There’s a lot of knowledge in the world.  A lot of data that’s been systematically collected.  A lot of methods, models, algorithms, expertise that have been built up.

And ever since I was a kid I’ve wondered whether we could somehow make all of this computable. Whether we could somehow build something that’s a bit like those old science fiction computers.

So that we could just walk up to a machine, and immediately be able to answer any question that can be answered on the basis of the knowledge that our civilization has accumulated.

It’s an ambitious goal. And when I first thought about this nearly 40 years ago, it seemed very far off.

But every decade or so since then I’ve returned to this. And finally, earlier this past decade, I started to think that perhaps it wasn’t crazy to actually try to build something like this.

There were several things that made that possible.

For me, particularly two things that I’ve worked on for nearly 30 years.

The first was Mathematica.

Long ago I was a physicist who needed to compute all sorts of things. And back then I had to use some strange combination of handwork and random different computer systems to get things done.

And at some point I decided that really it should be possible to build one unified, integrated, system that just automates all this stuff I’d want to do.

And the way I thought about building such a system was a bit like the way I thought about physics.

Figure out what the fundamental components are—in this case, of the computations one wants to do—and then see how to build up from them.

The big technical idea is what’s called “symbolic programming”—the notion that any formal computational structure, or operation, can be represented in a very uniform way, as a symbolic expression.
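
To make that concrete, here's a tiny illustration (a minimal sketch in Mathematica): whatever you write—math, lists, conditionals, whole programs—is held as the same kind of nested symbolic expression, which you can always inspect with FullForm.

    FullForm[x^2 + Sin[x]]
    (* Plus[Power[x, 2], Sin[x]] *)

    FullForm[{a, {b, c}}]
    (* List[a, List[b, c]] *)

    FullForm[If[x > 0, "yes", "no"]]
    (* If[Greater[x, 0], "yes", "no"] *)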

Well, starting from that idea, we built Mathematica.  Basically just systematically implementing every method, every algorithm, that can be cast into pure computational form—and inventing lots of new ones.

And setting everything up so that it’s automated—so the human just has to sort of say what to do, and then all the figuring out of how to do things gets done by the computer.
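
Here's the flavor of that automation (just a sketch): you state the "what"—an integral, an equation, a minimization—and the choice of method is left to the system.

    Integrate[x^2 Exp[-x], {x, 0, Infinity}]
    (* 2 *)

    Solve[x^3 - 2 x + 1 == 0, x]
    (* {{x -> 1}, {x -> (-1 - Sqrt[5])/2}, {x -> (-1 + Sqrt[5])/2}} *)

    NMinimize[{(x - 3)^2 + (y + 1)^2, x >= 0}, {x, y}]
    (* {0., {x -> 3., y -> -1.}} *)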

Well, for the last couple of decades, Mathematica has been the tool of choice at the high end of R&D across pretty much every industry.

At first for math-oriented kinds of things, but now for pretty much anything that involves computation, analysis, visualization—or representing any kind of sophisticated knowledge.

Well, so, with Mathematica I had a way to compute things once they’re in any kind of formal form.

But so, what’s involved in putting the world’s knowledge into that sort of computable form?

For a while, that just seemed too big, too daunting.

But actually, in addition to building Mathematica, I’d been using Mathematica.

I viewed it a little bit like my version of Galileo’s telescope. But pointed not at the astronomical universe, but instead at the computational universe.

Usually when we have computer programs, they’re kind of complicated things—that we build up step by step to perform particular tasks we want.

But here’s the question that I first asked about 30 years ago now: what does the whole universe of possible programs look like?

Say we just start with the very simplest possible program. Just enumerate possibilities.

Well, here are some examples. These are called cellular automata.

Cellular Automata

Each one has a slightly different program. In a definite sequence.

But the result is kind of a zoo. Lots of different things going on.

Sometimes the behavior is very simple—like the programs underneath.

But here’s the big discovery: that’s not always true.

Out in the computational universe of possible programs, it’s very easy to find cases where very simple rules can just spontaneously create immense complexity.

Here’s my favorite example, because it’s the first example I discovered. It’s called “rule 30”.

Rule 30
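
If you want to reproduce these pictures, a couple of lines of Mathematica will do it—CellularAutomaton takes a rule number, an initial condition, and a number of steps (the parameters here are just illustrative):

    (* rule 30, started from a single black cell, run for 200 steps *)
    ArrayPlot[CellularAutomaton[30, {{1}, 0}, 200]]

    (* a sampler of the zoo: the first 64 elementary rules *)
    GraphicsGrid[
      Partition[
        Table[ArrayPlot[CellularAutomaton[r, {{1}, 0}, 50]], {r, 0, 63}], 8]]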

And this discovery has gradually changed my whole world view—and led me to create a whole big new kind of science.

I think, for example, that what we’re seeing here is at the core of the big secret that nature seems to have—that allows it to so effortlessly create so much that seems to us so complex.

Well, the new kind of science around all this is leading to some pretty exciting new directions—in modeling nature, in understanding fundamental issues in biomedicine, and perhaps even in finding a fundamental theory of physics.

But it’s also leading to some exciting directions in technology.

You see, normally when you create programs you do it step-by-step. Because that seems like the only way to get a program that will really do something interesting.

But what we’re learning from NKS—the new kind of science—is that that’s not correct.

Just lying around out there in the computational universe are programs that do very interesting things.

The one I’m showing is a great generator of randomness. There are others we know of that do all sorts of things, from image analysis to network routing to linguistics to function evaluation.
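
To see that randomness concretely, here's a small sketch: take the center column of rule 30 and treat it as a stream of bits (rule 30 is in fact the basis of one of the random number generators used in Mathematica).

    steps = 1000;
    evolution = CellularAutomaton[30, {{1}, 0}, steps];
    bits = evolution[[All, steps + 1]];    (* the center column *)
    N[Mean[bits]]                          (* comes out close to 0.5 *)
    ListPlot[Accumulate[2 bits - 1]]       (* looks like an unbiased random walk *)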

It’s a little like in the world of materials. You go out there and find all these different things. That are ferrimagnetic, or superconducting, or whatever.

And then you find out that actually you can harness these things to make technology.

Well, for the last decade or so—at an accelerating rate—we’ve been doing that in the computational universe.

Creating programs and algorithms not step-by-step—but just by “mining” the computational universe.

This is going to be a huge thing.

In fact, my guess is that within 50 years, more technology of all sorts will be created this way than by all forms of traditional engineering put together.

And of course the economics of things change.

It becomes cheap to make original, custom, stuff. Mass customization.

It becomes possible to effectively make discoveries on-the-fly all the time.

Whether in algorithmic drugs, new kinds of transaction systems, whatever.

But let’s get back to computable knowledge.

You see, working on NKS really changed my view of things.

I realized that even though all that knowledge out there in the world looks really messy and complicated, actually there can be manageably simple rules—a manageably simple framework—for handling it.

So it was really the new paradigm of NKS that made me think that perhaps it wasn’t so crazy to try to build a whole system for making the world’s knowledge computable.

Now even if one has an idea like that, it’s usually pretty difficult to actually execute it.

But I was in a pretty unique position.

Our company—Wolfram Research—had had more than 20 years of profitable growth, as a closely held private business.

And within the company I had collected a remarkable set of top people from a huge range of fields.

We were still fairly small—then about 500 people—and very used to doing very innovative things.  Used to just inventing pretty big stuff when we needed it.

So between Mathematica as an implementation language and deployment platform, NKS as a paradigm, and our company as an environment, we had sort of a perfect storm of what was needed to embark on what seemed like the insane project of making the world’s knowledge computable.

Well, in the middle of last year, we released what we’d done to the world—Wolfram|Alpha.

Let’s let it go on doing its stuff here.

And it’s been really satisfying to see what’s happened.

Zillions of people using it every day. Really democratizing knowledge.

You see, our goal has been to take expert-level knowledge in all areas, and make it computable. So that anyone can just walk up to the system, and ask a question that they might ask a human expert. And then automatically get a response.

So how does it work? What’s inside?

It’s not like a search engine. You see, a search engine takes the words you give as input, then tries to match them with pages that exist on the web—then gives you links to those.

But what we’re doing in Wolfram|Alpha is something different. We’re trying to compute answers to questions.

We’re taking the specific question you ask, then using the built-in computable knowledge we have, to compute a custom answer to that particular question—whether or not anyone has ever asked that particular question before.

So what’s involved in doing that?

First, we have to pull in all the data about the world. Thousands of domains.  Zillions of real-time feeds.  And so on.

And we have to curate this data. Make it computable.

Just having the raw data—even if it’s pretty clean—isn’t enough. You have to understand it, connect it to everything else, see how to compute from it.

We’ve built a whole pipeline for this kind of curation. A mixture of automated analysis with Mathematica, together with human experts—and it turns out you always need human experts if you actually want to get the right answers.

We figure actually that getting the raw data ingested is about 5% of what we have to do; the other 95% is the whole curation process to make the data computable.

And there’s no magic bullet. If you actually want reliable, computable data, every piece has to be validated. You can’t use natural language processing where you’re proud to have 85% success—because you don’t know which 15% didn’t work. In the end, the only thing to do is to start from clean primary sources, and then really go through the proper curation process.
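
I can't show our actual pipeline here, but just to make the flavor concrete, here's a hypothetical fragment of the kind of automated sanity check that runs before a human curator ever looks at anything (the data, entity names and thresholds are made up for illustration):

    (* hypothetical raw records: {entity, year, population} *)
    raw = {{"France", 2009, 64.4*10^6}, {"France", 2010, -1},
           {"Monaco", 2010, 3.3*10^7}};

    (* plausible ranges per entity, from independent sources *)
    plausible = {"France" -> {5.*10^7, 8.*10^7}, "Monaco" -> {2.*10^4, 5.*10^4}};

    (* flag non-positive values, or values outside the plausible range *)
    suspect = Select[raw,
      With[{range = #[[1]] /. plausible},
        #[[3]] <= 0 || #[[3]] < First[range] || #[[3]] > Last[range]] &]

    (* everything flagged here goes to a human expert, not into the system *)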

Well, so OK. In Wolfram|Alpha we’ve accumulated and correlated and validated far more data from far more domains than has ever been possible before.

And as we get yet more domains, it becomes easier and easier to go further.

But once you have the data, what do you do with it?

Well, it turns out that fairly few of the questions people want to ask involve just looking up data.

Usually one wants to use the data, then compute an answer from it.

And over the course of the past few hundred years, lots of methods and models and algorithms have been invented for doing that.

So another part of the Wolfram|Alpha project is just to implement all those.

And at first that might just seem like an absurdly impossible task. But we have Mathematica. So we have all the raw material we need. And now it’s just work.

Well, Wolfram|Alpha is now about 10 million lines of Mathematica code.

Mathematica is a very succinct language. Probably the most succinct of the commonly used computer languages. So that might be equivalent to more like 30 or 50 million lines of a lower-level language.

So it’s big. But in all that code, we’ve now captured an awfully broad swath of all the things our civilization knows how to compute.

So, OK.  We’ve got data. Facts. And we can compute things from it.

But how are we supposed to interact with this?

Well, systems of different scope require different interaction mechanisms.

You know, if you have just a few choices about what to do, use a menu. A few more choices, give people a form to fill out.

Then there’s a big jump when one goes to a scripting language or a full computer language—when one can actually start writing programs.

But when things get really big, even that breaks down.

And in trying to interact with all the world’s knowledge, a formal computer language would just have to be too big. Too complex for us humans to learn or remember.

And really the only choice for having humans interact with the system is to use our own human, natural language.

So then there’s a huge algorithmic problem. How do we take those random human utterances, and actually understand them?

Well, particularly using ideas from NKS, I think we’ve made some pretty big practical breakthroughs in that.

I wasn’t at all sure it was going to be possible. But slowly, with new kinds of algorithms, informed by big corpuses and by watching zillions of actual queries by our users, we’re able to understand more and more of that strange natural language that humans feed to Wolfram|Alpha.

And actually, these days we’re typically running at about a 93% success rate: 93% of the time a query can successfully be interpreted—and converted from vague human natural language into our precise internal symbolic language.

Well, OK. So if we understand the question, we can usually compute all sorts of things with which to answer it.

So then the final step is to automate figuring out how to generate the best possible “report” to send back to a user.

What should be shown?  How should things be optimally visualized? What’s the best hierarchy of information?

And this is a big area, where we’ve developed all sorts of new algorithms and heuristics—a whole area of computational aesthetics, for example.

Well, so we have all this technology. But then the good news is that Mathematica lets us really deploy it in a very large-scale production environment—so that we can have the Wolfram|Alpha website running robustly for the world, and supplying computational knowledge.

You know, not just from an intellectual and technology point of view, but also from a management and innovation point of view, Wolfram|Alpha is an interesting project.

I’m a person who does large projects. But Wolfram|Alpha is by far the most complex project I’ve ever tackled. More moving parts. More different areas of expertise involved.

It’s been fascinating to watch all those pieces grow and organize within our company. All sorts of strange new job titles: “linguistic curators”, “scanner design analysts”, “computable content managers”, “domain expert coordinators”, and so on.

It’s always fun to see a new kind of technology get built out. Which is what’s happening here.

But OK. So we have this website, which lots of people use, to do lots of things.

But that’s just the tip of the iceberg.

And over the next little while—particularly as we gradually understand just what’s possible—more and more of that iceberg is going to become visible.

There are mobile versions. There’s an API, for computers as well as humans to interact with Wolfram|Alpha.
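
To give a sense of the API (a sketch only—the AppID below is a placeholder, obtained from the developer site): a query is just an HTTP request, and the structured result comes back in a form a program can pick apart, for instance from within Mathematica.

    (* illustrative: replace YOUR-APPID with a real application ID *)
    result = Import[
      "http://api.wolframalpha.com/v2/query?appid=YOUR-APPID&input=uranium",
      "XML"];

    (* pull out the plain-text content of each result pod *)
    Cases[result, XMLElement["plaintext", _, {text_}] :> text, Infinity]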

There are ways to get computable knowledge into ebooks. And we just recently released a general Widget Builder, which makes it trivial to create a custom widget that computes a particular thing with Wolfram|Alpha, and to deploy it on websites, and so on.

Wolfram|Alpha is also getting into technology stacks, like Bing at Microsoft, and Siri at Apple. And there are going to be lots and lots more.

But the big picture is that what Wolfram|Alpha is doing is introducing a new kind of computing: knowledge-based computing.

In the past, one expected to write programs from the ground up, starting with raw computation primitives.

But with Wolfram|Alpha the idea is to start from the knowledge of the world—then build from there.

It makes all sorts of things suddenly easy—and all sorts of new things suddenly possible.

You know, one of the big areas that’s emerged for Wolfram|Alpha is in the enterprise. Large companies and other organizations that have all sorts of data, and all sorts of internal knowledge. That they want to really make computable.

We’re gradually figuring out how to make things more automated. But already we’re able to do some pretty impressive things with corporate data. Taking the Wolfram|Alpha experience, and combining internal corporate data with all our existing knowledge, and so on.

And you know, as I watch the process of data becoming computable, it reminds me of the transition from paper to digital. There was a time when it was enough to have data on paper. But then there were all these things that made one have to take it digital. And soon, data that isn’t computable will seem just as marooned as data on paper does today.

Well, so, I think knowledge-based computing is going to become ubiquitous, just like the web, and search, and so on have done. All those science-fiction scenes of having computers answer questions are going to happen.

But there’s going to be more.

So what can we do with this technology stack that we’ve assembled?

Well, here’s one thing.

Right now we mostly think of computers as giving outputs from computations—static outputs.

But from all the work we’ve done on computing things in Mathematica over the past 20-plus years, we’re soon going to be rolling out what we call CDF—the Computable Document Format.

Which lets one produce dynamic interactive output from computations. And lets one easily embed dynamic interactivity in any document.

There are about 6000 examples of this technology on our Demonstrations site.

Demonstrations
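
At the heart of each of those Demonstrations is a Manipulate—a single symbolic expression that says what to display and what the controls are. A minimal sketch:

    Manipulate[
      Plot[Sin[n x + p], {x, 0, 2 Pi}, PlotRange -> {-1, 1}],
      {n, 1, 10, 1}, {p, 0, 2 Pi}]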

And soon Wolfram|Alpha will actually be able to produce CDF as well. Effectively generating dynamic reports on the fly.

Well, I think CDF is going to be pretty interesting for all sorts of publishing—initially technical publishing. And in fact, as a pilot example there’s a calculus textbook being released using CDF technology.

But it’s also part of the future of how people will expect to consume structured information of all kinds.

A way to make it economically feasible to have interactivity all over the place—to really make use of the fact that we’re now reading things on computers, not on paper.

Actually, we’ve been involved in another publishing adventure recently.

If you watch Apple’s ads for the iPad you might have seen something like this:

Touch Press

That’s part of a dynamic ebook that a spin-off of ours, Touch Press, has created.

It’s yet another interesting Mathematica application. Using Mathematica to manage all those digital assets, do the image processing, and produce the final structure to deploy on the iPad.

Touch Press is going to be doing lots of trade ebooks. Making use of Wolfram|Alpha as a knowledge source.

And in time CDF too.

Well, OK. But there’s even more.

It’s interesting to compare Wolfram|Alpha with Mathematica.

In Mathematica we have this precise language—this precise way of specifying computations—that you can build potentially huge things with.  In Wolfram|Alpha we have this very broad, kind of drive-by way of specifying things.

What happens when one brings them together?

Well, it’s pretty interesting.

You can start talking to Mathematica not in its precise native language. But in plain English.

Having it figure out how to turn your utterances—the things you might say to another person—into precise specifications that it can understand, and process.

Well, you can do that for simple computations. But you can also do that for programming.

You can use plain English. Then use Wolfram|Alpha’s technology to create a precise program that corresponds to what you’ve asked.

This is pretty exciting. Because right now programming is necessarily a kind of expert activity. You have to know the language that computers speak to be able to do it.

But with this Wolfram|Alpha approach, we’re breaking down that barrier. If you as a human can describe in plain natural language what you want to do, then we can automatically create you a program to do it.

Later this fall we’ll be bringing out the first version of this capability. It’ll grow over the next few years. But I think this kind of free-form programming is really going to transform the learning curve for people using computers.
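
Just to give a flavor of what this looks like (an illustrative sketch; the released form may differ in detail): you type English, and the system produces—and can then evaluate—the corresponding precise Mathematica expression.

    (* English in... *)
    WolframAlpha["integrate x^2 sin x from 0 to pi"]

    (* ...which corresponds internally to the precise expression *)
    Integrate[x^2 Sin[x], {x, 0, Pi}]
    (* Pi^2 - 4 *)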

OK, so you can start using plain English to specify a program.

But what if you only know what general objective you want to achieve, not how to achieve it?

Well, here’s what we’re thinking about. You can try to use NKS. You can try to achieve your objective by automatically searching the computational universe to find a program that fits.

Automated on-the-fly discovery.
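
Here's a toy version of that kind of search, confined to the 256 elementary cellular automaton rules: state a crude objective—"behave randomly"—and enumerate candidates until something fits (the randomness test here is deliberately simplistic):

    (* toy objective: a rule whose center column looks statistically random *)
    looksRandom[rule_] := Module[{col},
      col = CellularAutomaton[rule, {{1}, 0}, 400][[All, 401]];
      Abs[N[Mean[col]] - 0.5] < 0.05 &&
        Length[Union[Partition[col, 20]]] > 15]

    Select[Range[0, 255], looksRandom]
    (* rule 30 comes out, along with a handful of others *)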

A few years ago we did this for an artistic area: we created the WolframTones website. Which pulls in musical compositions from the computational universe. And that actually has become rather popular among human composers.

Tones

Similar things have been done in the visual and mechanical domains. And as I mentioned earlier, we use a lot of automated algorithmic discovery in actually creating algorithms for Mathematica and for Wolfram|Alpha.

Making this work in general is still some distance away. And maybe it’ll require faster computers and so on to really be practical. But I think it’s an important part of the future of computing.

Well, I should wrap up soon.

In our company, we try to maintain a kind of portfolio of technology development projects, from ones where we can deliver results in the weekly code-pushes for Wolfram|Alpha, to ones which are perhaps a decade out.

With NKS it’s kind of scary: I can see potential applications—whether to nanotechnology, or biomedicine, or whatever. But I worry that it’s like saying one has calculus and Newton’s laws in the 17th century, then immediately starting a satellite launch company.

One has to pick the right century, and the right decade, to start applying the ideas.

For computable knowledge, that decade turned out to be this decade. For other things, it’s further out.

But my approach is to watch, and see when things are beginning to be ready. But always to build the tools and the platform to be able to make very practical useful things at every stage.

And, you know, we’re in an interesting position. One sees all these companies with 5 employees and so on that are creating things on the web.  And one thinks: “It’s amazing anything big enough to be important can be done with 5 people.” Well, of course, sometimes it can’t. But sometimes it can.

And the reason that’s possible is that the tools that exist now for web development and so on are good enough that one’s building on top of a very tall platform.

Well, we’re in that situation too. Though far fewer people understand it. With Mathematica, with Wolfram|Alpha and knowledge-based computing, with NKS, and in the future with CDF, we’ve got a bunch of platforms.

On which amazing things can now be built—comparatively easily.

I always want to push forward the basic research underneath. And it’s difficult to know quite how to do that in the modern world. Universities don’t seem to be set up to do the kind of innovation that’s needed.

They’re too bound by the structures that got set up in them half a century or so ago.

We’ve had success with some new educational approaches, and we’re considering rolling out a pretty broad new approach to this.

But when it comes to the company, and to creating products and things, we’re in a rather remarkable situation.

We have these platforms, and we can see a lot of things to do with them. Now we have to work out how to structure the business around them.

And we’re trying to develop a kind of internal-external startup structure—kind of a piece of business structure R&D. We’ll see how that goes.

I’m pretty proud of our record for innovation over the past nearly 25 years.

But really the most exciting thing is seeing all these intellectual and technological directions coming together.

We’ve seen the first stage of the computer revolution. Now we’re seeing just what computation really means for the future. Whether it’s computable knowledge. Or computable documents. Or automatic discovery in the computational universe.

These are going to be defining themes of technology in the 21st century. This is not only the future of computation, but the future of our technological world.

Thank you very much.
