A World Run with Code

This is an edited transcript of a recent talk I gave at a blockchain conference, where I said I’d talk about “What will the world be like when computational intelligence and computational contracts are ubiquitous?”

We live in an interesting time today—a time when we’re just beginning to see the implications of what we might call “the force of computation”. In the end, it’s something that’s going to affect almost everything. And what’s going to happen is really a deep story about the interplay between the human condition, the achievements of human civilization—and the fundamental nature of this thing we call computation.

So what is computation? Well, it’s what happens when you follow rules, or what we call programs. Now of course there are plenty of programs that we humans have written to do particular things. But what about programs in general—programs in the abstract? Well, there’s an infinite universe of possible programs out there. And many years ago I turned my analog of a telescope towards that computational universe. And this is what I saw:

Cellular automata
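(* run each of the 256 elementary cellular automaton rules for 30 steps, starting from a single black cell, and lay the results out in a 16x16 grid *)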

GraphicsGrid[
 Partition[
  Table[ArrayPlot[CellularAutomaton[n, {{1}, 0}, {30, All}], 
    ImageSize -> 40], {n, 0, 255}], 16]]

Each box represents a different simple program. And often they just do something simple. But look more carefully. There’s a big surprise. This is the first example I saw—rule 30:

Rule 30
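(* run rule 30 for 300 steps from a single black cell, and also show a plot of the rule itself *)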

ArrayPlot[CellularAutomaton[30, {{1}, 0}, {300, All}], 
 PixelConstrained -> 1]
RulePlot[CellularAutomaton[30]]

You start from one cell, and you just follow that simple program—but here’s what you get: all that complexity. At first it’s hard to believe that you can get so much from so little. But seeing this changed my whole worldview, and made me realize just how powerful the force of computation is.

Because that’s what’s making all that complexity. And that’s what lets nature—seemingly so effortlessly—make the complexity it does. It’s also what allows something like mathematics to have the richness it does. And it provides the raw material for everything it’s possible for us humans to do.

Now the fact is that we’re only just starting to tap the full force of computation. And actually, most of the things we do today—as well as the technology we build—are specifically set up to avoid it. Because we think we have to make sure that everything stays simple enough that we can always foresee what’s going to happen.

But to take advantage of all that power out there in the computational universe, we’ve got to go beyond that. So here’s the issue: there are things we humans want to do—and then there’s all that capability out there in the computational universe. So how do we bring them together?

Well, actually, I’ve spent a good part of my life trying to solve that—and I think the key is what I call computational language. And, yes, there’s only basically one full computational language that exists in the world today—and it’s the one I’ve spent the past three decades building—the Wolfram Language.

Traditional computer languages—“programming languages”—are designed to tell computers what to do, in essentially the native terms that computers use. But the idea of a computational language is instead to take the kind of things we humans think about, and then have a way to express them computationally. We need a computational language to be able to talk not just about data types and data structures in a computer, but also about real things that exist in our world, as well as the intellectual frameworks we use to discuss them.

And with a computational language, we have not only a way to help us formulate our computational thinking, but also a way to communicate to a computer on our terms.
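For instance, here are a couple of tiny Wolfram Language examples (the particular cities and units are just illustrative) of computing directly with real-world things like places and physical quantities:

GeoDistance[Entity["City", {"Paris", "IleDeFrance", "France"}], 
 Entity["City", {"NewYork", "NewYork", "UnitedStates"}]]
UnitConvert[Quantity[1, "LightYears"], "Kilometers"]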

I think the arrival of computational language is something really important. There’s some analog of it in the arrival of mathematical notation 400 or so years ago—that’s what allowed math to take off, and in many ways launched our modern technical world. There’s also some analog in the whole idea of written language—which launched so many things about the way our world is set up.

But, you know, if we look at history, probably the single strongest systematic trend is the advance of technology: over time there's more and more that we've been able to automate. And with computation that's dramatically accelerating. And in the end, in some sense, we'll be able to automate almost everything. But there's still something that can't be automated: the question of what we want to do.

It’s the pattern of technology today, and it’s going to increasingly be the pattern of technology in the future: we humans define what we want to do—we set up goals—and then technology, as efficiently as possible, tries to do what we want. Of course, a critical part of this is explaining what we want. And that’s where computational language is crucial: because it’s what allows us to translate our thinking to something that can be executed automatically by computation. In effect, it’s a bridge between our patterns of thinking, and the force of computation.

Let me say something practical about computational language for a moment. Back at the dawn of the computer industry, we were just dealing with raw computers programmed in machine code. But soon there started to be low-level programming languages, then we started to be able to take it for granted that our computers would have operating systems, then user interfaces, and so on.

Well, one of my goals is to make computational intelligence also something that’s ubiquitous. So that when you walk up to your computer you can take for granted that it will have the knowledge—the intelligence—of our civilization built into it. That it will immediately know facts about the world, and be able to use the achievements of science and other areas of human knowledge to work things out.

Obviously with Wolfram Language and Wolfram|Alpha and so on we’ve built a lot of this. And you can even often use human natural language to do things like ask questions. But if you really want to build up anything at all sophisticated, you need a more systematic way to express yourself, and that’s where computational language—and the Wolfram Language—is critical.
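As a tiny illustration (the particular input string and property are just examples), natural language can get you to a precise entity, and then computational language lets you build on it systematically:

Interpreter["City"]["nyc"]
EntityValue[Interpreter["City"]["nyc"], "Population"]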

OK, well, here’s an important use case: computational contracts. In today’s world, we’re typically writing contracts in natural language, or actually in something a little more precise: legalese. But what if we could write our contracts in computational language? Then they could always be as precise as we want them to be. But there’s something else: they can be executed automatically, and autonomously. Oh, as well as being verifiable, and simulatable, and so on.

Computational contracts are something more general than typical blockchain smart contracts. Because by their nature they can talk about the real world. They don’t just involve the motion of cryptocurrency; they involve data and sensors and actuators. They involve turning questions of human judgement into machine learning classifiers. And in the end, I think they’ll basically be what run our world.
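Just to make this concrete, here is a deliberately toy sketch in the Wolfram Language (the weather station, threshold and "payouts" are all made up, and a real computational contract would also need things like identity, oracles and escrow): a little "rain contract" whose outcome depends on real-world weather data:

(* pay one hypothetical party or the other depending on measured rainfall *)
rainContract[date_] := 
 If[WeatherData["KBOS", "TotalPrecipitation", date] > 
   Quantity[10, "Millimeters"], "pay Alice", "pay Bob"]

rainContract[{2019, 5, 1}]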

Right now, most of what the computers in the world do is to execute tasks we basically initiate. But increasingly our world is going to involve computers autonomously interacting with each other, according to computational contracts. Once something happens in the world—some computational fact is established—we’ll quickly see cascades of computational contracts executing. And there’ll be all sorts of complicated intrinsic randomness in the interactions of different computational acts.

In a sense, what we’ll have is a whole AI civilization. With its own activities, and history, and memories. And the computational contracts are in effect the laws of the AI civilization. We’ll probably want to have a kind of AI constitution, that defines how generally we want the AIs to act.

Not everyone or every country will want the same one. But we’ll often want to say things like “be nice to humans”. But how do we say that? Well, we’ll have to use a computational language. Will we end up with some tiny statement—some golden rule—that will just achieve everything we want? The complexity of human systems of laws doesn’t make that seem likely. And actually, with what we know about computation, we can see that it’s theoretically impossible.

Because, basically, it’s inevitable that there will be unintended consequences—corner cases, or bugs, or whatever. And there’ll be an infinite hierarchy of patches one needs to apply—a bit like what we see in human laws.

You know, I keep on talking about computers and AIs doing computation. But actually, computation is a more general thing. It’s what you get by following any set of rules. They could be rules for a computer program. But they could also be rules, say, for some technological system, or some system in nature.

Think about all those programs out in the computational universe. In detail, they’re all doing different things. But how do they compare? Is there some whole hierarchy of who’s more powerful than whom? Well, it turns out that the computational universe is a very egalitarian place—because of something I discovered called the Principle of Computational Equivalence.

Because what this principle says is that all programs whose behavior is not obviously simple are actually equivalent in the sophistication of the computations they do. It doesn’t matter if your rules are very simple or very complicated: there’s no difference in the sophistication of the computations that get done.

It’s been more than 80 years since the idea of universal computation was established: that it’s possible to have a fixed machine that can be programmed to do any possible computation. And obviously that’s been an important idea—because it’s what launched the software industry, and much of current technology.

But the Principle of Computational Equivalence says something more: it says that not only is something like universal computation possible, it's ubiquitous. Out in the computational universe of possible programs many achieve it, even ones with very simple rules: rule 110 has been proved universal, and I expect even rule 30 is too. And, yes, in practice that means we can expect to make computers out of much simpler—say molecular—components than we might ever have imagined. And it means that all sorts of even rather simple software systems can be universal—and can't be guaranteed secure.
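Here, for example, is rule 110, one of the simplest rules that has been proved computation universal, run from random initial conditions (the width and number of steps are arbitrary):

ArrayPlot[CellularAutomaton[110, RandomInteger[1, 400], 300], 
 PixelConstrained -> 1]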

But there’s a more fundamental consequence: the phenomenon of computational irreducibility. Being able to predict stuff is a big thing, for example in traditional science-oriented thinking. But if you’re going to predict what a computational system—say rule 30—is going to do, what it means is that somehow you have to be smarter than it is. But the Principle of Computational Equivalence says that’s not possible. Whether it’s a computer or a brain or anything else, it’s doing computations that have exactly the same sophistication.

So it can’t outrun the actual system itself. The behavior of the system is computationally irreducible: there’s no way to find out what it will do except in effect by explicitly running or watching it. You know, I came up with the idea of computational irreducibility in the early 1980s, and I’ve thought a lot about its applications in science, in understanding phenomena like free will, and so on. But I never would have guessed that it would find an application in proof-of-work for blockchains, and that measurable fractions of the world’s computers would be spending their time purposefully grinding computational irreducibility.

By the way, it’s computational irreducibility that means you’ll always have unintended consequences, and you won’t be able to have things like a simple and complete AI constitution. But it’s also computational irreducibility that in a sense means that history is significant: that there’s something irreducible achieved by the course of history.

You know, so far in history we’ve only really had one example of what we’re comfortable calling “intelligence”—and that’s human intelligence. But something the Principle of Computational Equivalence implies is that actually there are lots of things that are computationally just as sophisticated. There’s AI that we purposefully build. But then there are also things like the weather. Yes, we might say in some animistic way “the weather has a mind of its own”. But what the Principle of Computational Equivalence implies is that in some real sense it does: that the hydrodynamic processes in the atmosphere are just as sophisticated as anything going on in our brains.

And when we look out into the cosmos, there are endless examples of sophisticated computation—that we really can’t distinguish from “extraterrestrial intelligence”. The only difference is that—like with the weather—it’s just computation going on. There’s no alignment with human purposes. Of course, that’s a slippery business. Is that graffiti on the blockchain put there on purpose? Or is it just the result of some computational process?

That’s why computational language is important: it provides a bridge between raw computation and human thinking. If we look inside a typical modern neural net, it’s very hard to understand what it does. Same with the intermediate steps of an automated proof of a theorem. The issue is that there’s no “human story” that can be told about what’s going on there. It’s computation, alright. But—a bit like the weather—it’s not computation that’s connected to human experience.

It’s a bit of a complicated thing, though. Because when things get familiar, they do end up seeming human. We invent words for common phenomena in the weather, and then we can effectively use them to tell stories about what’s going on. I’ve spent much of my life as a computational language designer. And in a sense the essence of language design is to identify what common lumps of computational work there are, that one can make into primitives in the language.

And it’s sort of a circular thing. Once one’s developed a particular primitive—a particular abstraction—one then finds that one can start thinking in terms of it. And then the things one builds end up being based on it. It’s the same with human natural language. There was a time when the word “table” wasn’t there. So people had to start describing things with flat surfaces, and legs, and so on. But eventually this abstraction of a “table” appeared. And once it did, it started to get incorporated into the environment people built for themselves.

It’s a common story. In mathematics there are an infinite number of possible theorems. But the ones people study are ones that are reached by creating some general abstraction and then progressively building on it. When it comes to computation, there’s a lot that happens in the computational universe—just like there’s a lot that happens in the physical universe—that we don’t have a way to connect to.

It’s like the AIs are going off and leading their own existence, and we don’t know what’s going on. But that’s the importance of computational language, and computational contracts. They’re what let us connect the AIs with what we humans understand and care about.

Let’s talk a little about the more distant future. Given the Principle of Computational Equivalence I have to believe that our minds—our consciousness—can perfectly well be represented in purely digital form. So, OK, at some point the future of our civilization might be basically a trillion souls in a box. There’ll be a complicated mixing of the alien intelligence of AI with the future of human intelligence.

But here’s the terrible thing: looked at from the outside, those trillion souls that are our future will just be doing computations—and from the Principle of Computational Equivalence, those computations won’t be any more sophisticated than the computations that happen, say, with all these electrons running around inside a rock. The difference, though, is that the computations in the box are in a sense our computations; they’re computations that are connected to our characteristics and our purposes.

At some level, it seems like a bad outcome if the future of our civilization is a trillion disembodied souls basically playing videogames for the rest of eternity. But human purposes evolve. I mean, if we tried to explain to someone from a thousand years ago why today we might walk on a treadmill, we'd find it pretty difficult. And I think the good news is that at any time in history, what's happening then can seem completely meaningful at that time.

The Principle of Computational Equivalence tells us that in a sense computation is ubiquitous. Right now the computation we define exists mostly in the computers we’ve built. But in time, I expect we won’t just have computers: everything will basically be made of computers. A bit like a generalization of how it works with biological life, every object and every material will be made of components that do computations we’ve somehow defined.

But the question again is how we do that definition. Physics gives some basic rules. But we get to say more than that. And it's computational language that makes what we say meaningful to us humans.

In the much nearer term, there’s a very important transition: the point at which literacy in computational language becomes truly commonplace. It’s been great with the Wolfram Language that we can now give kids a way to actually do computational thinking for real. It’s great that we can now have computational essays where people get to express themselves in a mixture of natural language and computational language.

But what will be possible with this? In a sense, human language was what launched civilization. What will computational language do? We can rethink almost everything: democracy that works by having everyone write a computational essay about what they want, that’s then fed to a big central AI—which inevitably has all the standard problems of political philosophy. New ways to think about what it means to do science, or to know things. Ways to organize and understand the civilization of the AIs.

A big part of this is going to start with computational contracts and the idea of autonomous computation—a kind of strange merger of the world of natural law, human law, and computational law. Something anticipated three centuries ago by people like Leibniz—but finally becoming real today. Finally a world run with code.
