## My All-Time Favorite Science Discovery

June 1, 1984—forty years ago today—is when it would be fair to say I made my all-time favorite science discovery. As with basically all significant science discoveries (despite the way histories often present them), it didn’t happen without several long years of buildup. But June 1, 1984, was when I finally had my “aha” moment—even though in retrospect the discovery had actually been hiding in plain sight for more than two years.

My diary from 1984 has a cryptic note that shows what happened on June 1, 1984:

There’s a part that says “BA 9 pm → LDN”, recording the fact that at 9pm that day I took a (British Airways) flight to London (from New York; I lived in Princeton at that time). “Sent vega monitor → SUN” indicates that I had sent the broken display of a computer I called “vega” to Sun Microsystems. But what’s important for our purposes here is the little “side” note:

*Take C10 pict.*

*R30*

*R110*

What did that mean? *C10*, *R30* and *R110* were my shorthand designations for particular, very simple programs of types I’d been studying: “code 10”, “rule 30” and “rule 110”. And my note reminded me that I wanted to take pictures of those programs with me that evening, making them on the laser printer I’d just got (laser printers were rare and expensive devices at the time).

I’d actually made (and even published) pictures of all these programs before, but at least for rule 30 and rule 110 those pictures were very low resolution:

But on June 1, 1984, my picture was much better:

For several years I’d been studying the question of “where complexity comes from”, for example in nature. I’d realized there was something very computational about it (and that had even led me to the concept of computational irreducibility—a term I coined just a few days before June 1, 1984). But somehow I had imagined that “true complexity” must come from something already complex or at least random. Yet here in this picture, plain as anything, complexity was just being “created”, basically from nothing. And all it took was following a very simple rule, starting from a single black cell.

Our usual intuition that making something complex requires “complex effort” was, I realized, simply wrong. In the computational universe one needed a new intuition. And the picture of rule 30 I generated that day was what finally made me understand that. Still, although I hadn’t internalized it before, several years of work had prepared me for this. And just days later I was at a conference, already talking confidently about the implications of what I’d seen in rule 30.

Over the years that followed, rule 30 became basically the face of the phenomenon I had discovered. By 1985 I had devoted a whole paper to it; in *A New Kind of Science* it was my initial and quintessential example; for the past quarter century a picture of rule 30 has adorned my personal business cards; and in 2019 we launched the Rule 30 Prizes to promote the rich basic science of rule 30:

But what about “*C10*”—the first item in my cryptic note? What was that? And what became of it?

## First Sightings of Code 10

Well, *C10* was “code 10”, or, more fully, “*k* = 2, *r* = 2 totalistic code 10 cellular automaton”. (I used the term “code” as a way to indicate a totalistic, rather than general, “rule”.) And, actually, I had looked at code 10 several times before, never really paying much attention to it.

The first explicit mention I find in my archives is from February 1983 (apparently reporting on something I’d done in January of that year). I had been doing all sorts of computer experiments on cellular automata, recording the results in a lab notebook. One page has observations about what I then called “summational rules” (I would soon rename these “totalistic”). And there’s code 10:

Mostly I had been studying the behavior starting from random initial conditions, but for code 10 I noted: “very irregular, even from simple initial state”. Within a couple of months I had even made (on an electrostatic printer) a high-resolution picture of code 10 starting from a single black cell—and here it is, prepared for publication, Scotch tape and all:

It appeared in a paper I wrote in May 1983. But the paper (entitled “Universality and Complexity in Cellular Automata”) was mostly about other things (for example, introducing my four general classes of cellular automaton behavior and talking quite a lot about code 20 as an example of a class 4 rule), and it contained only a passing comment about code 10:

Code 10 is a range-2 rule, which means that the patterns it generates can grow by 2 cells on each side at each step. And the result is that the patterns quickly get quite wide, so that if one cuts them off when they “hit the edge of the page” (as my early programs “conveniently” tended to do) they don’t go very far, and one doesn’t get to see much of code 10’s behavior.

And it was this piece of “ergonomics” that caused me to basically ignore code 10—and not to recognize the “rule 30 phenomenon” until I happened to produce that high-resolution image of rule 30 on June 1, 1984.

I didn’t entirely forget code 10, for example mentioning it in a note to “Why These Discoveries Were Not Made Before” in my 2002 book *A New Kind of Science*:

But now that forty years have passed since I made—and basically ignored—that “*C10*” picture, I thought it would be nice to go back and see what I missed, and to use our modern Wolfram Language tools to spend a few hours checking out the story of code 10.

It’s an exercise in what I now call “ruliology”—the basic science of studying what simple rules do. And whenever one does ruliology there are certain standard things one can look at—that I showed many examples of in *A New Kind of Science*. But in a quintessential reflection of computational irreducibility there are also always “surprises”, and special phenomena one did not expect. And so it is with code 10.

## Code 10: The Basic Story

*Note: Click any diagram to get Wolfram Language code to reproduce it.*

Code 10 is a cellular automaton operating on a line of black and white cells, at each step adding up the values of the 5 cells up to distance 2 from any given cell (black is 1, white is 0). If the total is 1 or 3, the cell is black on the next step; otherwise it’s white (in base 2 the number 10 is 001010):
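The rule just described is easy to check by hand for a few steps. Here’s a minimal standalone sketch of it in Python (my own illustration; the article’s clickable diagrams supply Wolfram Language code instead):

```python
def evolve(init, steps):
    """Evolve the k=2, r=2 totalistic code 10 cellular automaton.

    Each cell looks at the 5 cells within distance 2 of it; it becomes
    black (1) on the next step iff their total is 1 or 3 (10 = 001010 in
    base 2). A range-2 pattern can grow by 2 cells per side per step, so
    each row is widened accordingly.
    """
    rows = [list(init)]
    for _ in range(steps):
        cur = [0, 0] + rows[-1] + [0, 0]   # room for the pattern to grow
        padded = [0, 0] + cur + [0, 0]     # white background beyond the edges
        rows.append([1 if sum(padded[i:i + 5]) in (1, 3) else 0
                     for i in range(len(cur))])
    return rows

rows = evolve([1], 3)            # start from a single black cell
print(rows[1])                   # [1, 1, 1, 1, 1]
print(rows[2])                   # [1, 0, 1, 0, 0, 0, 1, 0, 1]
```

(Row *t* has width 1 + 4*t*, so its center column is at index 2*t*; this is the representation used in the sketches below as well.)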

And, yes, in many ways this rule is even simpler to describe—at least in words—than rule 30. And if one thinks of it in terms of Boolean expressions, it can also be written in a very simple form:

(By the way, as a general *k* = 2, *r* = 2 rule, code 10 is rule 376007062.)
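That conversion from totalistic code to general rule number can be checked directly. In the standard numbering, the output bit for the neighborhood whose 5 cells read as the binary number *v* contributes 2^*v* to the rule number; for code 10 the output depends only on the neighborhood total. A quick Python check (mine, for illustration):

```python
# Verify that totalistic code 10 corresponds to general k=2, r=2 rule 376007062.
# For each of the 2^5 = 32 possible five-cell neighborhoods, the output is 1
# iff the neighborhood total (its popcount) is 1 or 3.
rule_number = sum(
    (1 if bin(neighborhood).count("1") in (1, 3) else 0) << neighborhood
    for neighborhood in range(32)
)
print(rule_number)  # 376007062
```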

So what does code 10 do? Here are a few steps of its evolution starting from a single black cell:

And here are 2000 steps:

And, yes, even though it’s a simple rule, its behavior looks highly complex, and in many ways quite random. One immediate observation is that—unlike rule 30—code 10 is symmetric, so the pattern it generates is left-right symmetric. The center column isn’t interesting: after having black cells for 2 steps, it’s white thereafter. (And by substituting values *yx*0*xy* into the Boolean expression above, it’s easy to prove this.)

Filling the white region around the center column with red we get:

There doesn’t seem to be any long-range regularity to the way the width of this region changes:

And indeed the (even) widths seem at least close to exponentially distributed:

What if one goes one column to the left or right of the center? Here’s the beginning of the sequence one gets:

And, yes, every other cell is white. Picking only “even-numbered positions” we get:

Looking at the accumulated mean for 100,000 steps suggests that this sequence isn’t “uniformly random”, and that slightly fewer than 50% of the cells end up being black:

Going away from the center line, every other column has white cells every two steps. Sampling the pattern only at “odd positions” in both “space and time” we get a pattern that looks similar—though not identical—to our original one:

Looking at every cell, the overall density of the pattern seems to approach about 0.361. Looking only at “odd positions” the overall density seems to be about 0.49. And, yes, the fact that it doesn’t seem to become exactly 1/2 is one of those typical “not-quite-as-expected” things that one routinely finds in doing ruliology.

There are some aspects of the code 10 pattern, though, that inevitably work in particular ways. For example, if we “rotate” the pattern so that its boundary is vertical, we can see that close to the boundary the pattern is periodic:

The period progressively doubles at depths separated by 1, 1, 4, 6, 8, 14, 124, …—yielding what may perhaps ultimately be a logarithmic growth of period with depth:

## Other Initial Conditions, and a Surprise

We’ve looked at what happens with an initial condition consisting of a single black cell. But what about other initial conditions? Here are a few examples:

We might have thought that the “strength of randomness” would be large enough that we’d get patterns that look basically the same in all cases. But one of these cases looks different. So what’s going on there? Running it twice and five times as long reveals that it’s actually nothing special; there just happen to be a few large triangles near the top:

So will nothing else notable happen with larger initial conditions?

And what about the next initial condition? Let’s run it a little longer:

And OMG! It’s not random and unpredictable at all. It’s a nested pattern!

Even in the midst of all that randomness and computational irreducibility, here is a dash of computational reducibility—and a reminder that there are always pockets of reducibility to be found in any computationally irreducible system, though there’s no guarantee how difficult they will be to find in any given case.

The particular nested pattern we get here is a bit like the one from the additive elementary rule 150, that simply computes the total mod 2 of the three cells in each neighborhood:

And it turns out to be almost exactly the *r* = 2 analog of this—the additive rule (code 42) that takes the total mod 2 of the five cells in the *r* = 2 neighborhood:
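Additivity is what makes code 42 so analyzable: its pattern from a single cell must equal, at step *t*, the coefficients of (1 + *x* + *x*² + *x*³ + *x*⁴)^*t* mod 2. Here’s a standalone Python sketch checking that correspondence (my own illustration, not the article’s code):

```python
def code42_step(cells):
    # additive k=2, r=2 rule (code 42): new cell = total mod 2 of the
    # 5 cells within distance 2; rows widen by 2 cells per side per step
    cur = [0, 0] + list(cells) + [0, 0]
    padded = [0, 0] + cur + [0, 0]
    return [sum(padded[i:i + 5]) % 2 for i in range(len(cur))]

def poly_row(t):
    # coefficients of (1 + x + x^2 + x^3 + x^4)^t mod 2,
    # built up by repeated polynomial multiplication over GF(2)
    coeffs = [1]
    for _ in range(t):
        new = [0] * (len(coeffs) + 4)
        for i, c in enumerate(coeffs):
            if c:
                for j in range(5):
                    new[i + j] ^= 1
        coeffs = new
    return coeffs

row = [1]
for t in range(1, 21):
    row = code42_step(row)
    assert row == poly_row(t)   # additivity: CA evolution == polynomial powers
```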

The limiting fractal dimension of this pattern is:

Is this initial condition unique, or does the same phenomenon happen with other “seeds”? It turns out to happen again for:

So what’s going on here? Comparing the detailed pattern in the code 10 case with the additive rule case, there’s no immediate obvious correspondence:

But if we look at the rules for code 10 and code 42 respectively:

We notice that there’s really only one difference: the case where the neighborhood total is 5. In code 10, five black cells give white, while in code 42 they give black. In other words, if code 10 avoids ever generating a run of five consecutive black cells, it will inevitably behave just like code 42—and show nesting as before. And that’s what happens for the initial conditions above: they can lead to shorter runs of black cells, but never five in a row.
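This “single difference” argument can be tested directly: since the two outcome tables agree on every neighborhood total except 5, identical configurations evolve identically until a total-5 neighborhood first appears. From a single black cell that happens immediately (step 1 is solid black across 5 cells), so there the patterns diverge at step 2. A Python sketch of the comparison (mine, for illustration):

```python
def totalistic_step(cells, outputs):
    # one step of a k=2, r=2 totalistic rule; outputs[s] is the new cell
    # value when the 5-cell neighborhood total is s
    cur = [0, 0] + list(cells) + [0, 0]
    padded = [0, 0] + cur + [0, 0]
    return [outputs[sum(padded[i:i + 5])] for i in range(len(cur))]

CODE10 = [0, 1, 0, 1, 0, 0]  # 10 = 001010: totals 1 and 3 give black
CODE42 = [0, 1, 0, 1, 0, 1]  # 42 = 101010: totals 1, 3 and 5 give black

a1 = totalistic_step([1], CODE10)
b1 = totalistic_step([1], CODE42)
print(a1 == b1)   # True: no total-5 neighborhood has occurred yet

a2 = totalistic_step(a1, CODE10)
b2 = totalistic_step(b1, CODE42)
print(a2 == b2)   # False: step 1 was 5 black cells, so the center now differs
```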

Another notable and at first unexpected phenomenon concerns the overall density of black cells in patterns from different initial conditions:

And what we find is that for even-length initial blocks the density is about 0.47, while for odd ones it’s about 0.36. At first it might seem very strange that something as global as overall density could be affected by the initial conditions. But once again, it’s a story of what blocks can occur: in the odd-length case, there’s a checkerboard of guaranteed-white cells, which just doesn’t exist in the even-length case.

## Other Things to Study

We’ve been looking at what code 10 does with specific, simple initial conditions. What about with random initial conditions? Well, it’s not terribly exciting. It basically just looks random all the way through—which, by the way, is part of the reason I didn’t pay much attention to code 10 back in 1983:

But even though this looks quite random, it’s for example not the case that every single possible block of values can occur. Though it’s very close. Let’s say we start from all possible sequences of 0s and 1s in the initial conditions. Then—using methods I developed in 1984 based on finite automata—it’s possible to determine that even after 1 step there are some blocks of values that can’t occur. But it turns out that one has to go all the way to blocks of length 36 before one finds an example:
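For short blocks, the corresponding check is easy to brute-force (the length-36 result itself needs the finite-automaton methods mentioned above): a block of length *n* can occur after 1 step iff some input of length *n* + 4 produces it. A Python sketch (my own illustration):

```python
from itertools import product

def one_step_blocks(block_len):
    # Blocks of length block_len producible by one step of code 10:
    # enumerate every input of length block_len + 4 (2 extra cells per
    # side) and apply the rule to collect all reachable output blocks.
    seen = set()
    for cells in product((0, 1), repeat=block_len + 4):
        out = tuple(
            1 if sum(cells[i:i + 5]) in (1, 3) else 0
            for i in range(block_len)
        )
        seen.add(out)
    return seen

# consistent with the statement that the first excluded block has length 36:
# all 2^8 blocks of length 8 are still producible after one step
print(len(one_step_blocks(8)))  # 256
```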

Although the patterns generated by code 10 generally look quite random, if we look closely we can see at least patches that are fairly regular. The most obvious examples are white triangles. But there are other examples, most notably associated with regions consisting of repetitions of blocks with periodic behavior:

Complementary to this is the question of what code 10 does in regions of limited size—say with cyclic boundary conditions, starting from a single black cell. The result is quite different for regions of different sizes:

For a region of size *n*, a symmetric rule like code 10 must repeat with a period of at most 2^(*n*/2). Here are the actual repetition periods as a function of size, shown on a log plot:
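Such repetition periods can be found by direct simulation, at least for small *n* (a brute-force Python sketch of mine; termination is guaranteed because there are only 2^*n* possible states):

```python
def cyclic_step(cells):
    # one step of code 10 on a cyclic region: neighborhood indices wrap around
    n = len(cells)
    return tuple(1 if sum(cells[(i + d) % n] for d in (-2, -1, 0, 1, 2)) in (1, 3)
                 else 0 for i in range(n))

def repetition_period(n):
    # evolve from a single black cell until some state recurs; the gap
    # between the two visits of that state is the eventual repetition period
    state = (1,) + (0,) * (n - 1)
    seen = {state: 0}
    t = 0
    while True:
        t += 1
        state = cyclic_step(state)
        if state in seen:
            return t - seen[state]
        seen[state] = t

for n in range(5, 13):
    print(n, repetition_period(n))
```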

These are results specifically for the single-cell initial condition. We can also generate state transition diagrams for all 2^*n* possible states in a size-*n* code 10 system:
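The contraction is easy to quantify for small *n*: map all 2^*n* states through one step and count how many distinct images remain (an illustrative Python sketch, not the article’s code):

```python
from itertools import product

def cyclic_step(cells):
    # one step of code 10 on a cyclic region of cells
    n = len(cells)
    return tuple(1 if sum(cells[(i + d) % n] for d in (-2, -1, 0, 1, 2)) in (1, 3)
                 else 0 for i in range(n))

# Map every one of the 2^n states of a size-n system through one step and
# count distinct images; the shortfall measures how contractive the map is.
n = 10
images = {cyclic_step(state) for state in product((0, 1), repeat=n)}
print(len(images), "reachable states out of", 2 ** n)
# e.g. the all-black state (every neighborhood total is 5) and the all-white
# state both map to all white, so the map cannot be surjective
```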

And mostly what we see is highly contractive behavior, with many different initial states evolving to the same final state—even though “eventually” we should start seeing larger cycles of the kind we picked up above when we looked at evolution from a single-cell initial condition.

And, yes, I could go on, for example repeating analyses I’ve done in the past for rule 30. A lot of what we’d see would be at least qualitatively much the same as for rule 30—and essentially the result of the appearance of computational irreducibility in both cases. But it’s a feature of the computational universe—and indeed one of the many consequences of computational irreducibility—that different computational systems will inevitably have different “idiosyncrasies”. And so it is for rule 30 and code 10. Rule 30 has an “Xor on one side” which gives it special surjectivity properties. Code 10 on the other hand has its block emulations, which lead, for example, to the surprise of nesting.

I’ve now spent many years studying the ruliology of simple programs, and if there’s one thing that still amazes me after all that time it’s that there are always surprises. Even with very simple underlying rules one can never be sure what will happen; there’s no choice but to just do the experiments and see. And, in my experience, pretty much whenever one thinks one’s “got to the end” and “seen everything there is to see”, something completely unexpected will pop out—a reminder that, as the Principle of Computational Equivalence tells us, these simple computational systems are in some sense a microcosm of everything that’s possible.

Ruliology is in many ways the ultimate foundational science—a science concerned with pure abstract rules not set up with any particular reference either to nature or to human choice. In a sense ruliology is our best path to ultimate pure abstraction—and unfettered exploration of the ruliad. And at least for me, it’s also something very satisfying to do. These days, with modern Wolfram Language, it’s all very streamlined and fast. Sitting at one’s computer, one can immediately start visiting vast uncharted areas of the computational universe, seeing things—and often very beautiful things—that have never been seen before, and discovering new but everlasting things anchored in the bedrock of computation and of simple programs.

It’s been fun spending a few hours studying the ruliology of code 10. Essentially everything I’ve done here I could have done (though not nearly as efficiently) back in 1983 when I first came up with code 10. But as it was, code 10 in a sense had to “wait patiently” for someone to come and look at it. The form of the rule 30 pattern is in some ways more “human-scaled” than that of code 10. But, as we’ve seen here, code 10 still manifests the same core phenomenon as rule 30. And now, forty years after printing that “*C10*” picture, I’m happy to be able to say that I think I’ve finally gotten at least a passing acquaintance with another remarkable “computational world” out there in the computational universe: the world of code 10.