*Based on a talk at Numerous Numerosity: An interdisciplinary meeting on the notions of cardinality, ordinality and arithmetic across the sciences.*

## Everyone Has to Have Numbers… Don’t They?

The aliens arrive in a starship. Surely, one might think, to have all that technology they must have the idea of numbers. Or maybe one finds an uncontacted tribe deep in the jungle. Surely they too must have the idea of numbers. To us numbers seem so natural—and “obvious”—that it’s hard to imagine that anyone wouldn’t have them. But if one digs a little deeper, it’s not so clear.

It’s said that there are human languages that have words for “one”, “a pair” and “many”, but no words for specific larger numbers. In our modern technological world that seems unthinkable. But imagine you’re out in the jungle, with your dogs. Each dog has particular characteristics, and most likely a particular name. Why should you ever think about them collectively, as all “just dogs”, amenable to being counted?

Imagine you have some sophisticated AI. Maybe it’s part of the starship. And in it this computation is going on:

```wl
ArrayPlot[
 CellularAutomaton[{126403, {5, 1}},
  BlockRandom[SeedRandom[234234]; RandomInteger[4, 400]], {150, All}],
 ColorRules -> {0 -> Black, 1 -> Red, 2 -> Blue, 3 -> Yellow, 4 -> Green},
 Frame -> None, ImageSize -> 600]
```

Where are the numbers here? What is there to count?

Let’s change the rule for the computation a bit. Now here’s what we get:

```wl
ArrayPlot[
 CellularAutomaton[{641267, {5, 1}},
  BlockRandom[SeedRandom[234234]; RandomInteger[4, 400]], {220, All}],
 ColorRules -> {0 -> Black, 1 -> Red, 2 -> Blue, 3 -> Yellow, 4 -> Green},
 Frame -> None, ImageSize -> 600]
```

And now we’re beginning to have something where numbers seem more relevant. We can identify a bunch of structures. They’re not all the same, but they have certain characteristics in common. And we can imagine describing what we’re seeing by just saying for example “There are 11 objects…”.

## What Underlies the Idea of Numbers?

Dogs. Sheep. Trees. Stars. It doesn’t matter what kinds of things they are. Once you have a collection that you view as all somehow being “of the same kind”, you can imagine producing a count of them. Just consider each of them in turn, at every step applying some specific operation to the latest result from your count—so that computationally you build up something like:

```wl
Append[NestList[s, 0, 8], \[Ellipsis]]
```

For our ordinary integers, we can interpret *s* as being the “successor function”, or “add 1”. But at a fundamental level all that really matters is that we’ve reduced considering each of our original things separately to just repeatedly applying one operation that gives a chain of results.
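As an illustrative sketch (in Python here, purely for concreteness), counting a collection is just repeated application of some operation `s`, which for ordinary integers is “add 1”:

```python
# Counting as repeated application of a "successor" operation.
# s is "add 1" for ordinary integers, but at this level of
# abstraction any operation producing a chain of results would do.

def s(n):
    return n + 1  # the successor function for ordinary integers

def count(things):
    """Count a collection by applying s once per thing considered."""
    result = 0
    for _ in things:
        result = s(result)
    return result

# The analog of NestList[s, 0, 8]: the chain 0, s(0), s(s(0)), ...
chain = [0]
for _ in range(8):
    chain.append(s(chain[-1]))
```

The point is that `count` never needs to know anything about the things themselves, only that each one triggers one more application of `s`.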

To get to this point, however, there’s a crucial earlier step: we have to have some definite concept of “things”—or essentially a notion of distinct objects. Our everyday world is of course full of these. There are distinct people. Distinct giraffes. Distinct chairs. But it gets a lot less clear if we think about clouds, for example. Or gusts of wind. Or abstract ideas.

So what is it that makes us able to identify some definite “countable thing”? Somehow the “thing” has to have some distinct existence—some degree of permanence or universality, and some ability to be independent and separated from other things.

There are many different specific criteria we could imagine. But there’s one general approach that’s very familiar to us humans: the way we talk about “things” in human language. We take in some visual scene. But when we describe it in human language we’re always in effect coming up with a symbolic description of the scene.

There’s a cluster of orange pixels over there. Brown ones over there. But in human language we try to reduce all that detail to a much simpler symbolic description. There’s a chair over there. A table over there.

It’s not obvious that we would be able to do this kind of “symbolicization” in any meaningful way. But what makes it possible is that pieces of what we see are repeatable enough that we can consider them “the same kind of thing”, and, for example, give them definite names in human language. “That’s a table; that’s a chair; etc.”.

There’s a complicated feedback loop that I’ve written about elsewhere. If we see something often enough, it makes sense to give it a name (“that’s a shrub”; “that’s a headset”). But once we’ve given it a name, it’s much easier for us to talk and think about it. And so we tend to find or produce more of it—which makes it more common in our environment, and more familiar to us.

In the abstract, it’s not obvious that “symbolicization” will be possible. It could be that the fundamental behavior of the world will always just generate more and more diversity and complexity, and never produce any kind of “repeated objects” that could, for example, reasonably be given consistent names.

One might imagine that as soon as one believes that the world follows definite laws, then it’d be inevitable that there’d be enough regularity to guarantee that “symbolicization” is possible. But that ignores the phenomenon of computational irreducibility.

Consider the rule:

```wl
RulePlot[CellularAutomaton[{11497, 3, 1/2}],
 ColorRules -> {0 -> Darker[Yellow, .05], 1 -> Darker[Red],
   2 -> Darker[Blue]}]
```

We might imagine that with such a simple rule we’d inevitably be able to describe the behavior it produces in a simple way. And, yes, we can always run the rule to find out what behavior it produces. But it’s a fundamental fact of the computational universe that the result doesn’t have to be simple:

```wl
ArrayPlot[CellularAutomaton[{11497, 3, 1/2}, {{1}, 0}, {300, All}],
 ColorRules -> {0 -> Yellow, 1 -> Red, 2 -> Blue}, Frame -> None]
```

And in general we can expect that the behavior will be computationally irreducible, in the sense that there’s no way to reproduce it without effectively tracing through each step in the application of the rule.
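To make “tracing through each step” concrete, here is a minimal sketch in Python (rather than the Wolfram Language used for the pictures) of a simulator for a two-color elementary cellular automaton; the rules pictured here are three-color totalistic ones, but the point is the same: for an irreducible rule like rule 30, running steps one by one is the only known general way to find the pattern:

```python
def ca_step(cells, rule):
    """One step of an elementary (2-color, nearest-neighbor) cellular
    automaton, with cyclic boundaries. rule is the usual 0-255 number."""
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2)
                  | (cells[i] << 1)
                  | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def ca_run(rule, init, steps):
    """All rows from the initial condition through the given step."""
    rows = [init]
    for _ in range(steps):
        rows.append(ca_step(rows[-1], rule))
    return rows

# Rule 30 from a single black cell: a simple rule whose behavior
# shows no known computational shortcut.
rows = ca_run(30, [0] * 15 + [1] + [0] * 15, 10)
```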

With behaviors like these

```wl
ArrayPlot[CellularAutomaton[{#, 3, 1/2}, {{1}, 0}, {100, All}],
   ColorRules -> {0 -> Yellow, 1 -> Red, 2 -> Blue},
   Frame -> None] & /@ {16451, 4983, 8624}
```

it’s perfectly possible to imagine giving a complete symbolic description of what’s going on. But as soon as there’s computational irreducibility, this won’t be possible. There’ll be no way to have a “compressed” symbolicized description of the whole behavior.

So how come we manage to describe so much with language, in a “symbolic” way? It turns out that even when a system—such as our universe—is fundamentally computationally irreducible, it’s inevitable that it will have “pockets” of computational reducibility. And these pockets of computational reducibility are crucially important to how we operate in the universe. Because they’re what let us have a coherent experience of the world, with things happening predictably according to identifiable laws, and so on.

And they also mean that—even though we can’t expect to describe everything symbolically—there’ll always be some things we can. And some places where we can expect the concept of numbers to be useful.

## What the Universe Is Like

The history of physics might make one think that numbers would be a necessary part of the structure of any fundamental theory of our physical universe. But the models of physics suggested by our Physics Project have no intrinsic reference to numbers.

Instead, they just involve a giant network of elements that’s continually getting rewritten according to certain rules. There are intrinsically no coordinates, or quantities, or anything else that would normally be associated with numbers. And even though the underlying rules may be simple, the detailed overall behavior of the system is highly complex, and full of computational irreducibility.

But the key point is that as observers with particular characteristics embedded in this system we’re only sampling certain features of it. And the features we sample in effect tap into pockets of reducibility. Which is where “simplifying concepts” like numbers can enter.

Let’s talk first about time. We’re used to the experience that time progresses in some kind of linear fashion, perhaps marked off by something like counting rotations of our planet (i.e. days). But at the lowest level in our models, time doesn’t work that way. Instead, what happens is that the universe evolves by virtue of lots of elementary updating events happening throughout the network.

These updating events have certain causal relationships. (A particular updating event, for example, might “causally depend” on another event because it uses as “input” something that’s the “output” of the other event.) In the end, there’s a whole “causal graph” of causal relationships between updating events:

```wl
CloudGet["https://wolfr.am/KXgcRNRJ"]; (* drawFoliation *)
gg = Graph[ResourceFunction["WolframModel"][
    {{x, y}, {z, y}} -> {{x, z}, {y, z}, {w, z}},
    {{0, 0}, {0, 0}}, 14, "LayeredCausalGraph"]];
semiRandomWMFoliation = {{1},
   {1, 2, 3, 4, 5, 6, 7, 8, 10},
   {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
    20, 21, 22, 23, 24, 25, 26, 28, 30, 42, 43, 58, 59},
   {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
    20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36,
    37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53,
    58, 59, 61, 62, 64, 65, 66, 68, 69, 70, 79, 80, 81, 83, 84, 95},
   {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
    20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36,
    37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53,
    54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70,
    71, 72, 73, 74, 75, 76, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88,
    89, 90, 91, 92, 93, 94, 95, 96, 97, 102, 101, 104, 109, 110, 111,
    113, 112, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124,
    125, 127, 128, 130, 131, 132, 133, 134, 147, 148, 166}};
Quiet[drawFoliation[gg, semiRandomWMFoliation, Directive[Red]],
 FindRoot::cvmit]
```

The full causal graph is immensely complex, and suffused with computational irreducibility. But we—as the observers we are—sample only certain features of this graph. And—as I’ve recently discussed elsewhere—it seems that the essence of our concept of consciousness is to define certain aspects of that sampling. In particular, despite all the updating events in the universe, and the complex causal relationships between them, we end up “parsing” the samples we take by imagining that we have a definite “sequentialized” thread of experience, or in effect that time progresses in a purely linear fashion.

How do we achieve this? One convenient idealization—developed for thinking about spacetime and relativity—is to set up a “reference frame” in which we imagine dividing the causal graph into a sequence of slices (as in the picture above) that we consider to correspond to “instantaneous complete states of the universe” at successive “moments in time”. It’s not obvious that it’ll be consistent to do this. But between causal invariance and assumptions about the computational boundedness of the observer it turns out that it is—and that the “experience” of the universe for such an observer must follow the laws of physics that we know from general relativity.

So what does this tell us about the emergence of numbers? At the lowest level, the universe is full of computational irreducibility in which there’s no obvious sign of anything like numbers. But in experiencing the universe through the basic features of our consciousness we essentially force some kind of “number-like” sequentiality in time, reflected in the validity of general relativity, with its “essentially numericalized” notion of time. Or, in other words, “time” (or the “progress of the universe”) isn’t intrinsically “numerical”. But the way we—as “conscious observers”—sample it, it’s necessarily sequentialized, with one moment of time being succeeded by another, in a fundamentally “numerical” sequence.

It’s one thing, though, to sample the behavior of the universe in “time slices” in which all of space has been elided together. But to be able to “count” the moments in the passage of time (say aggregated into days), there has to be a certain “sameness” to those moments. The universe can’t do wildly different things at each successive moment; it has to have a certain coherence and uniformity that let us consider different moments as somehow “equivalent enough” that they can simply be “counted”.

And in fact the emergence of general relativity as the large-scale limit of our models (as viewed by observers like us) pretty much guarantees this result, except in certain pathological or extreme cases.

OK, so for observers like us, time in our universe is in some sense “inevitably numerical”. But what about space? At the lowest level in our models, space just consists of a giant and continually updating network of “atoms of space”. And to talk about something like “distance in space” we first have to get some kind of “time-consistent” version of the network. It’s very much the same situation as with time. To get a simple definition of how time works, we have to elide space. Now, to have any chance of getting a simple definition of how space works, we have to somehow “elide time”.

Or, put another way, we have to think about dividing up the causal graph into “spatial regions” (the vertical “timelike” analog of the horizontal “spacelike slices” we used above) where we can in effect combine all events that occur at any time, in that “region of space”. (Needless to say, in practice we don’t want it to be “any time”—just some span of time that is long compared to what elapses between individual updating events.)

What is the analog for space of the “consciousness assumption” that time progresses in a single, sequential thread? Presumably it’s that we can sample space without having to think about time, or in other words, that we can consistently construct a stable notion of space.

Let’s say we’re trying to find the shortest “travel path” between two “points in space”. At the outset, the definition is quite subtle—not least because there are no “statically defined” “points in space”. Every part of the network is being continually rewritten, so in a sense by the time you “get to the other point”, it certainly won’t be the same “atom of space” as when you started out. And to avoid this, you essentially have to elide time. And just like for the case of spacelike slices for sequentialization in time, there are certain consistent choices of timelike slices that can be made.

And assuming such a choice is made, there will then be “time-elided” (or, roughly, time-independent) paths between points in space, analogous to our previous “space-elided” “path through time”. So then how might we measure the length of a path in space, or, effectively, the distance between two points? In direct analogy to the case of time, if there is sufficient uniformity in the spatial structure then we can expect to just “count things” to get a numerical version of distance.

Sequentialization in time is what allows us to have the sense that we maintain a coherent existence—and a coherent thread of experience—through time. The ability to do something similar in space is what gives us the sense that we have a coherent existence through space, or, in other words, that we can maintain our identity when we move around in space.

In principle, there might be nothing like “pure motion”: it might be that any “movement in space” would necessarily change the structure and character of things. But the point is that one can consistently label positions in space so that this doesn’t happen, and “pure motion” is possible. And once we’ve done that, we’re again essentially forcing there to be a notion of distance, that can be measured with numbers.

OK, so if we sample the universe the way we expect a conscious observer who maintains their identity to, then there’s a certain inevitable “numerical character” to the way we measure time and space. But what about the “stuff in the universe”? Can we expect that also to be characterized by numbers? We talked above about “things”. Can the universe contain “things” that can, for example, readily be counted?

Remember that in our models the whole universe—and everything in it—is just a giant network. And at the lowest level this network is just atoms of space and connections between them—and nothing that we can immediately consider a “thing”. But we expect that within the structure of the network there are essentially topological features that are more like “things”.

A good example is black holes. When we look at the network—and particularly the causal graph—we can potentially identify the signature of event horizons and a black hole. And we can imagine “counting black holes”.

What makes this possible? First, that black holes have a certain degree of permanence. And second, that they can be to a large extent treated as independent. And third, that they can all readily be identified as “the same kind of thing”. Needless to say, none of these features is absolute. Black holes form, merge, evaporate—and so aren’t completely permanent. Black holes can have gravitational—and also presumably quantum—effects on each other, and so aren’t completely independent. But they’re permanent and independent enough that it’s a useful approximation to treat them as “definite things” that can readily be counted.

Beyond black holes, there’s another clear example of “countable” things in the universe: particles, like electrons, photons, quarks and so on. (And, yes, it won’t be a big surprise if there’s a deep connection between particles and black holes in our models.) Particles—like black holes—are somewhat permanent, somewhat independent and have a high degree of “sameness”.

A defining feature of particles is that they’re somewhat localized (for us, presumably in both physical and branchial space), and maintain their identity with time. They can be emitted and absorbed, so aren’t completely permanent, but somehow they exist for long enough to be identified.

It’s then a fundamental observation in physics that particles come only in certain discrete species—and within these species every particle (say, every electron) is identical, save for its position and momentum (and spin direction). We don’t yet know within our models exactly how such particles work, but the assumption is that they correspond to certain discrete possible “topological obstructions” in the behavior of the network. And much like a vortex in a fluid, their topological character endows them with a certain permanence.

It’s worth understanding that in our models, not everything that “goes on in the universe” can necessarily be best characterized in terms of particles. In principle one might be able to think of every piece of activity in the network as somehow related to a sufficiently small or short-lived “particle”. But mostly there won’t be “room for” the characteristics of something we can identify as a particular “countable” particle to emerge.

An extreme case is what would be considered zero-point fluctuations in traditional quantum field theory: an ever-present infinite collection of short-lived virtual particle pairs. In our models this is not something one immediately thinks of in terms of particles: rather, it is continual activity in the network that in effect “knits space together”.

But in answering the question of whether physics inevitably leads to a notion of numbers, one can certainly point to situations where definite “countable” particles can be identified. But is this like the case of time and space that we discussed above: the numbers are somehow “not intrinsic” but just appear for “observers like us”?

Once again I suspect the answer is “yes”. But now the special feature of us as observers is that we think about the universe in terms of multiple, independent processes or experiments. We set things up so that we can concentrate, say, on the scattering of two particles that are initially sufficiently separated from everything else to be independent of it. But without this separation, we’d have no real way to reliably “count the particles”, and characterize what’s happening in terms of specific particles.

There’s actually a direct analog of this in a simple cellular automaton. On the left is a process involving “separated countable particles”; on the right—using exactly the same rule—is one where there are no similar particle-based “asymptotic states”:

```wl
GraphicsRow[{
  With[{bkg =
     ResourceData["7ef9f422-9541-4a0f-bec2-cf3e310fe5f2", All][
      "Background"],
    collisions =
     Normal@ResourceData["7ef9f422-9541-4a0f-bec2-cf3e310fe5f2", All][
       "Collisions"]},
   ArrayPlot[CellularAutomaton[110, #, {300, 200 + {-100, 100}}],
      ColorRules -> {0 -> LightYellow, 1 -> Red}, Frame -> None] &[
    collisions[[1, "InitialConditions"]][[1]]]],
  ArrayPlot[
   CellularAutomaton[110,
    BlockRandom[SeedRandom[2423]; RandomInteger[1, 200]], 300],
   ColorRules -> {0 -> LightYellow, 1 -> Red}, Frame -> None]}]
```

## Is All Computational Reducibility Numerical?

As we’ve discussed, even with simple underlying rules, many systems behave in computationally irreducible ways. But when there’s computational reducibility—and when, in a sense, we can successfully “jump ahead” in the computation—are numbers always involved in doing that?

In cases like these

```wl
ArrayPlot[CellularAutomaton[{#, 3, 1/2}, {{1}, 0}, {100, All}],
   ColorRules -> {0 -> Yellow, 1 -> Red, 2 -> Blue},
   Frame -> None, ImageSize -> 200] & /@ {3363, 2098, 13753}
```

where there’s clear repetition in the behavior, numbers are an obvious path to figuring out what’s going to happen. Want to know what the system will do at step number *t*? Just take the number *t* and do some “numerical computation” on it (typically here involving modulo arithmetic) and immediately get the result.
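That kind of “numerical computation” can be sketched in a couple of lines (a Python toy with a made-up period, not one of the rules pictured): if some behavior cycles with period *p*, the state at step *t* follows from *t* mod *p* alone, with no need to run the intervening steps:

```python
# Jumping ahead in a repetitive system: if the behavior cycles with
# period p, step t is determined by t mod p, so step one million is
# as cheap to find as step one.

cycle = ["A", "B", "C"]  # one full period of some repetitive behavior
p = len(cycle)

def state_at(t):
    """State at step t, computed directly from t mod p."""
    return cycle[t % p]

# Step 1,000,000 without simulating a million steps:
state_at(1_000_000)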

But very often you end up treating *t* as a “number in name only”. Consider nested patterns like these:

```wl
GraphicsRow[
 ArrayPlot[CellularAutomaton[{#, 3, 1/2}, {{1}, 0}, 200],
    ColorRules -> {0 -> Yellow, 1 -> Red, 2 -> Blue},
    Frame -> None, ImageSize -> {Automatic, 170}] & /@
  {17920, 18363, 18323, 4358}]
```

It’s possible to work out the behavior at step *t* in a computationally reduced way, but it involves treating *t* not so much as a number (that one might, say, do arithmetic on) but more as a sequence of bits on which one computes bitwise functions like `BitXor`.
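A concrete instance is the additive rule 90 cellular automaton (a two-color example, not one of those pictured above): by Lucas’ theorem, the cell at step *t* and offset *x* from the initial cell is determined by a bitwise test on the binary digits of *t*, with no step-by-step running needed. A hedged Python sketch, cross-checked against direct simulation:

```python
def rule90_cell(t, x):
    """Cell at step t, offset x from the single initial black cell, for
    the rule 90 (additive, XOR) cellular automaton. By Lucas' theorem,
    C(t, k) is odd exactly when the bits of k are a subset of those of t,
    so the answer is read off the binary digits of t, not computed by
    running t steps."""
    if abs(x) > t or (t + x) % 2 != 0:
        return 0
    k = (t + x) // 2
    return 1 if (t & k) == k else 0

def rule90_run(steps):
    """Direct step-by-step simulation for comparison: each cell is the
    XOR of its two neighbors on the previous step."""
    width = 2 * steps + 1
    row = [0] * steps + [1] + [0] * steps
    rows = [row]
    for _ in range(steps):
        row = [row[i - 1] ^ row[(i + 1) % width] for i in range(width)]
        rows.append(row)
    return rows
```

Here the bitwise shortcut and the explicit simulation agree everywhere, which is the sense in which *t* is being used “for its digits”.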

There are definitely other cases where the ability to jump ahead in a computation relies specifically on the properties of numbers. A somewhat special example is a cellular automaton whose rows can be thought of as digits of a number in base 6, that at each step gets multiplied by 3 (it’s not obvious that this procedure will be local to digits, “cellular-automaton-style”, but it is):

```wl
ArrayPlot[
 PadLeft[Table[IntegerDigits[3^t, 6], {t, 200}]],
 ColorRules ->
  Flatten[{0 -> White, Table[i -> Darker[Cyan, .1 i], {i, 5}]}]]
```

In this case, repeated squaring of the rows, thought of as numbers, quickly gets the result—though actually *t* is again used more for its digits than for its “numerical value”.
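The squaring shortcut can be sketched explicitly (in Python, for concreteness): computing 3^*t* by squaring according to the binary digits of *t* takes only about log₂ *t* multiplications, after which the row at step *t* is just the base-6 digits of the result:

```python
def power_by_squaring(base, t):
    """Compute base**t by repeated squaring, steered by the binary
    digits of t: about log2(t) multiplications instead of t of them.
    (Python's built-in pow uses the same idea internally.)"""
    result = 1
    while t > 0:
        if t & 1:          # this binary digit of t is 1: multiply in
            result *= base
        base *= base       # square for the next binary digit
        t >>= 1
    return result

def row_at_step(t):
    """The cellular automaton's row at step t: base-6 digits of 3^t,
    most significant digit first."""
    digits = []
    n = power_by_squaring(3, t)
    while n > 0:
        digits.append(n % 6)
        n //= 6
    return digits[::-1]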

When one explores the computational universe, by far the most common sources of computational reducibility are repetition and nesting. But other examples do show up. A few are obviously “numerical”. But most are not. And typically what happens is just that there is an alternative, very much more efficient program that exists to compute the same results as the original program. But the more efficient program is still “just a program” with no particular connection to anything involving numbers.

Fast numbers-based ways to do particular computations are often viewed as representing “exact solutions” to corresponding mathematical problems. Such exact solutions tend to be highly prized. But they also tend to be few and far between—and rather specific.

Could there be other “generic” forms of computational reducibility beyond repetition and nesting? In general we don’t know—though it’d be an important thing to find out. Still, there is in a sense one other kind of computational reducibility that we do know about, and that’s been very widely used in mathematical science: the phenomenon of continuity.

So far, we’ve mostly been talking about numbers that are integers, and that can at some level be used to “count distinct things”. But in mathematics and mathematical science it’s very common to think not about discrete integers, but about the continuum of real numbers.

And even when there’s some discrete process going on underneath—that might even show computational irreducibility—it can still be the case that in the continuum limit there’s a “numerical description”, say in terms of a differential equation. If one looks, say, at cellular automata, it’s fairly rare to find examples that have such continuum limits. But in the models from our Physics Project—that have much less built-in structure—it seems to be almost a generic feature that there’s a continuum limit that can be described by continuous equations of just the kind that have shown up in traditional mathematical physics.
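A standard toy illustration of such a continuum limit (not drawn from the Physics Project models, just a textbook example): a discrete ±1 random walk, whose endpoint distribution approaches the Gaussian solution of the continuous diffusion equation, with variance growing linearly in the number of steps:

```python
import random

def random_walk_endpoints(steps, trials, seed=0):
    """Endpoints of many independent discrete +/-1 random walks."""
    rng = random.Random(seed)
    return [sum(rng.choice((-1, 1)) for _ in range(steps))
            for _ in range(trials)]

ends = random_walk_endpoints(steps=100, trials=2000)
mean = sum(ends) / len(ends)
var = sum((e - mean) ** 2 for e in ends) / len(ends)
# For a +/-1 walk the variance of the endpoint grows linearly with the
# number of steps (here ~100): the discrete signature of the continuum
# diffusion behavior, even though each individual walk is irregular.
```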

But beyond taking limits to derive continuum behavior, one can also just symbolically specify equations whose variables are from the start, say, real numbers. And in such cases one might think that everything would always “work out in terms of numbers”. But actually, even in cases like this, things can be more complicated.

Yes, for the equations that are typically discussed in textbooks, it’s common to get solutions that can be represented just as evaluating certain functions of numbers. But if one looks at other equations and other situations, there’s often no known way to get these kinds of “exact solutions”. And instead one basically has to try to find an explicit computation that can approximate the behavior of the equation.

And it seems likely that in many cases such computations will end up being computationally irreducible. Yes, they’re in principle being done in terms of numbers. But the dominant force in determining what happens is a general computational process, not something that depends on the specific structure of numbers.

And, by the way, it’s no coincidence that in the past couple of decades, as more and more modeling of systems with complex behavior is done, there’s been an overwhelming shift away from models that are based on equations (and numbers) to ones that are based directly on computation and computational rules.

## But Do We Have to Use Numbers? The Computational Future

Why do we use numbers so much? Is it something about the world? Or is it more something about us?

We discussed above the example of fundamental physics. And we argued that even though at the most fundamental level numbers really aren’t involved, our sampling of what happens in the universe leads us to a description that does involve numbers. And in this case, the origin of the way we sample the universe has deep roots in the nature of our consciousness, and our fundamental way of experiencing the universe, with our particular sensory apparatus, place in the universe, etc.

What about the appearance of numbers in the history of science and engineering? Why are they so prevalent there? In a sense, like the situation with the universe, I don’t think it’s that the underlying systems we’re dealing with have any fundamental connection to numbers. Rather, I think it’s that we’ve chosen to “sample” aspects of these systems that we can somehow understand or control, and these often involve numbers.

In science—and particularly physical science—we have tended to concentrate on setting up situations and experiments where there’s computational reducibility and where it’s plausible that we can make predictions about what’s going to happen. And similarly in engineering, we tend to set up systems that are sufficiently computationally reducible that we can foresee what they’re going to do.

As I discussed above, working with numbers isn’t the only way to tap into computational reducibility, but it’s the most familiar way, and it’s got an immense weight of historical experience behind it.

But do we even expect that computational reducibility will be a continuing feature of science and engineering? If we want to make the fullest use of computation, it’s inevitable that we’ll have to bring in computational irreducibility. It’s a new kind of science, and it’s a new kind of engineering. And in both cases we can expect that the role of numbers will be at least much reduced.

If we look at human history, numbers have played a crucial role in the organization of human society. They’re used to keep records, specify value in commerce, define how resources should be allocated, determine how governance should happen, and do countless other things.

But does it have to be that way, or is it merely that numbers provide a convenient way to set things up so that we humans can understand what’s going on? Let’s say that we’re trying to achieve the objective of having an efficient transportation system for carrying people around. The traditional “numbers-based” way of doing that would be to have, say, trains that run at specific “numerical” times (“every 15 minutes”, or whatever).

In a sense, this is a simple, “computationally reducible” solution—that for example we can easily understand. But there’s potentially a much better solution, at least if we’re able to make use of sophisticated computation. Given the complete pattern of who wants to go where, we can dispatch specific vehicles to drive in whatever complicated arrangement is needed to optimally deliver people to their destinations. It won’t be like the trains, with their regular times. Instead, it’ll be something that looks more complex, and computationally irreducible. And it won’t be easy to characterize in terms of numbers.
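To make the contrast concrete, here is a deliberately simplified sketch in Python (the setup of riders and vehicles on a line, and the greedy matching, are hypothetical and purely illustrative) of dispatch that is computed per request rather than read off a timetable:

```python
# Toy "computed dispatch": no fixed schedule; each request is matched to
# whichever vehicle can reach its pickup point soonest. Positions are
# points on a line; travel time equals distance. All of this is an
# illustrative assumption, not a real dispatch algorithm.

def dispatch(requests, vehicles):
    """requests: list of (request_time, pickup, dropoff).
    vehicles: list of (available_from, position).
    Returns one (request_index, vehicle_index, pickup_time) per request."""
    assignments = []
    fleet = [list(v) for v in vehicles]   # mutable copies
    for ri, (t, pickup, dropoff) in enumerate(requests):
        # Pick the vehicle that can reach the pickup point earliest.
        best = min(range(len(fleet)),
                   key=lambda vi: max(t, fleet[vi][0])
                                  + abs(fleet[vi][1] - pickup))
        arrive = max(t, fleet[best][0]) + abs(fleet[best][1] - pickup)
        # The vehicle becomes free again after the drop-off.
        fleet[best][0] = arrive + abs(dropoff - pickup)
        fleet[best][1] = dropoff
        assignments.append((ri, best, arrive))
    return assignments

# Three riders along a line, two vehicles starting at positions 0 and 10.
requests = [(0, 2, 8), (1, 9, 3), (5, 0, 4)]
vehicles = [(0, 0), (0, 10)]
print(dispatch(requests, vehicles))  # -> [(0, 0, 2), (1, 1, 2), (2, 1, 11)]
```

The resulting pattern of pickups has no periodicity of the “every 15 minutes” kind; it is simply whatever the computation produces from the particular pattern of demand.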

And I think it’s a pretty general phenomenon: numbers provide a good “computationally reducible” way to set something up. But there are other—perhaps much more efficient—ways, that make more serious use of computation, and involve computational irreducibility, but don’t rely on numbers.

None of these computational approaches are possible until we have sophisticated computation everywhere. And even today we’re just in the early stages of broadly deploying the level of computational sophistication that’s needed. But as another example of how this can play out, consider economic systems.

One of the first and historically strongest uses of numbers has been in characterizing amounts of money and prices of things. But are “numerical prices” the only possible setup for an economic system? We already have plenty of examples of dynamic pricing, where there’s no “list price”, but instead AIs or bots are effectively bidding in real time to determine what transaction will happen.

Ultimately an economic system is based on a large network of transactions. One person wants to get a cookie. The person they’re getting it from wants to rent a movie. Somewhat in analogy to the transportation example above, with enough computation available, we could imagine a situation where at every node in the network there are bots dynamically arranging transactions and deciding what can happen and what cannot, ultimately based on certain goals or preferences expressed by people. This setup is slightly reminiscent of our model of fundamental physics—with causal graphs from physics now being something like supply chains.
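As a toy sketch of how transactions might be arranged with no numeric prices at all (in Python, with hypothetical agents and items): give each agent one item and one want, and a feasible trade is exactly a cycle in the graph of who owns what each agent wants.

```python
# Toy price-free exchange: each agent owns one item and wants one item.
# A multi-way trade is possible exactly when following "who owns what I
# want" leads back to the starting agent, i.e. a cycle in the wants graph.

def trade_cycles(owns, wants):
    """owns: item -> agent; wants: agent -> item.
    Returns the list of agent cycles that can trade among themselves."""
    next_agent = {a: owns[item] for a, item in wants.items()}
    seen, cycles = set(), []
    for start in next_agent:
        if start in seen:
            continue
        path, a = [], start
        while a not in seen:
            seen.add(a)
            path.append(a)
            a = next_agent[a]
        if a in path:                    # the walk closed into a cycle
            cycles.append(path[path.index(a):])
    return cycles

owns = {"cookie": "Alice", "movie": "Bob", "book": "Carol"}
wants = {"Alice": "movie", "Bob": "book", "Carol": "cookie"}
print(trade_cycles(owns, wants))  # -> [['Alice', 'Bob', 'Carol']]
```

This is essentially the idea behind “top trading cycles” in matching theory: an allocation mechanism that involves no prices at all.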

And as in the physics case, there’s no necessity to have numbers involved at the lowest level. But if we want to “sample the system in a human way” we’ll end up describing it in collective terms, and potentially end up with an emergent notion of price a bit like the way there’s an emergent notion of gravitational field in the case of physics.

So in other words, if it’s just the bots running our economic system, they’ll “just be doing computation” without any particular need for numbers. But if we try to understand what’s going on, that’s when numbers will appear.

And so it is, I suspect, with other examples of the appearance of numbers in the organization of human society. If things have to be implemented—and understood—by humans, there’s no choice but to leverage computational reducibility, which is most familiarly done through numbers. But when things are instead done by AIs or bots, there’s no such need for computational reducibility.

Will there still be “human-level descriptions” that involve numbers? No doubt there’ll at least be some “natural-science-like” characterizations of what’s going on. But perhaps they’ll most conveniently be stated in terms of computational reducibility that’s set up using concepts other than numbers—that humans in the future will learn about. Or perhaps numbers will be such a convenient “implementation layer” that they’ll end up being used for essentially all human-level descriptions.

But at a fundamental level my guess is that ultimately numbers will fall away in importance in the organization of human society, giving way to more detailed computation-based decision making. And maybe in the end numbers will come to seem a little like the way logic as used in the Middle Ages might seem to us today: a framework for determining things that’s much less complete and powerful than what we now have.

## Are Numbers Even Inevitable in Mathematics?

Whatever their role in science, technology and society, one place where numbers seem fundamentally central is mathematics. But is this really something that is necessary, or is it instead somehow an artifact of the particular history or presentation of human mathematics?

A common view is that at the most fundamental level mathematics should be thought of as an exploration of the consequences of certain abstract underlying axioms. But which axioms should these be? Historically a fairly small set has been used. And a first question is whether these implicitly or explicitly lead to the appearance of numbers.

The axioms for ordinary logic (which are usually assumed in all areas of mathematics) don’t have what’s needed to support the usual concept of numbers. The same is true of axioms for areas of abstract algebra like group theory—as well as basic Euclidean geometry (at least for integers). But the Peano axioms for arithmetic are specifically set up to support integers.
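For reference, the first-order Peano axioms can be stated in standard notation as:

```latex
\begin{align*}
& \forall x\; \bigl(S(x) \neq 0\bigr) \\
& \forall x\,\forall y\; \bigl(S(x) = S(y) \rightarrow x = y\bigr) \\
& \forall x\; \bigl(x + 0 = x\bigr), \qquad
  \forall x\,\forall y\; \bigl(x + S(y) = S(x + y)\bigr) \\
& \forall x\; \bigl(x \cdot 0 = 0\bigr), \qquad
  \forall x\,\forall y\; \bigl(x \cdot S(y) = x \cdot y + x\bigr) \\
& \bigl(\varphi(0) \wedge \forall x\,(\varphi(x) \rightarrow \varphi(S(x)))\bigr)
  \rightarrow \forall x\,\varphi(x)
\end{align*}
```

Here $S$ is the successor function, and the last line is the induction schema, contributing one axiom for every formula $\varphi$.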

But there is a subtlety here. What the Peano axioms actually do is effectively define certain constraints on abstract constructs. Ordinary integers are one “solution” to those constraints. But Gödel’s theorem shows that there are also an infinite number of other solutions: non-standard “numbers” with weird properties that also happen to follow the same overall axioms.

So in a sense mathematics based on the Peano axioms can be interpreted as being “about” ordinary numbers—but it can also be interpreted as being about other, exotic things. And it’s pretty much the same story with the standard axioms of set theory: the mathematics they generate can be interpreted as covering ordinary numbers, but it can also be interpreted as covering other things.

But what happens if we ignore the historical development of human mathematics, and just start picking axiom systems “at random”? Most likely they won’t have any immediately recognizable interpretation, but we can still go ahead and build up a whole network of theorems and results from them. So will such axiom systems end up leading to constructs that can be interpreted as numbers?

This is again a somewhat tricky question. The Principle of Computational Equivalence suggests that axiom systems with nontrivial behavior will typically show computation universality. And that means that (at least in some metamathematical sense) it’s possible to set up an encoding of any other axiom system within them.

So in particular it should be possible to reproduce what’s needed to support numbers. (Again, there are subtleties here to do with axiom schemas, and their use in supporting the concept of induction, which seems quite central to the idea of numbers.) But if we just look at the raw theorems from a particular axiom system—say as generated by an automated theorem-proving system—it’ll be very hard to tell what can be interpreted as being “related to numbers”.

But what if we restrict ourselves to mathematical results that have been proved by humans—of which there are a few million? There are a number of recent efforts to formalize at least a few tens of thousands of these, and show how they can be formally derived from specific axioms.

But now we can ask what the dependencies of these results are. How many of them need to “go through the idea of numbers”? We can get a sense of this by doing “empirical metamathematics” on a particular math formalization system (here Metamath):

extensibleStructures = {"df-struct","df-ndx","df-slot","df-base","df-base","df-sets","df-ress","brstruct","isstruct2","isstruct","structcnvcnv","structfun","structfn","slotfn","strfvnd","wunndx","strfvn","strfvn","strfvss","wunstr","ndxarg","ndxid","ndxid","strndxid","reldmsets","setsvalg","setsval","setsval","setsidvald","fvsetsid","fsets","wunsets","setsres","setsres","setsabs","setscom","setscom","strfvd","strfv2d","strfv2","strfv","strfv","strfv3","strssd","strssd","strss","strss","str0","str0","base0","strfvi","setsid","setsid","setsnid","setsnid","sbcie2s","sbcie3s","baseval","baseid","elbasfv","elbasov","strov2rcl","strov2rcl","basendx","reldmress","ressval","ressid2","ressval2","ressbas","ressbas2","ressbasss","ressbasss","resslem","resslem","ress0","ress0","ressid","ressinbas","ressval3d","ressress","ressress","ressabs","wunress","df-rest","df-rest","df-topn","restfn","topnfn","restval","restval","elrest","elrest","elrestr","elrestr","0rest","restid2","restsspw","firest","restid","restid","topnval","topnid","topnpropd","df-0g","df-gsum","df-gsum","df-gsum","df-topgen","df-pt","df-prds","df-prds","reldmprds","reldmprds","df-pws","prdsbasex","imasvalstr","imasvalstr","imasvalstr","prdsvalstr","prdsvalstr","prdsvalstr","prdsvallem","prdsvallem","prdsval","prdsval","prdsval","prdssca","prdssca","prdssca","prdsbas","prdsbas","prdsbas","prdsplusg","prdsplusg","prdsplusg","prdsmulr","prdsmulr","prdsmulr","prdsvsca","prdsvsca","prdsvsca","prdsip","prdsle","prdsle","prdsless","prdsds","prdsds","prdsdsfn","prdstset","prdstset","prdshom","prdshom","prdsco","prdsco","prdsbas2","prdsbas2","prdsbasmpt","prdsbasfn","prdsbasprj","prdsplusgval","prdsplusgval","prdsplusgfval","prdsmulrval","prdsmulrfval","prdsleval","prdsdsval","prdsvscaval","prdsvscafval","prdsbas3","prdsbasmpt2","prdsbasmpt2","prdsbascl","prdsdsval2","prdsdsval3","pwsval","pwsbas","pwselbasb","pwselbas","pwselbas","pwsplusgval","pwsmulrval","pwsle","pwsleval","pwsvscafval","pwsvscaval","pwssca","pwsdiagel"
,"pwssnf1o","df-ordt","df-xrs","df-qtop","df-imas","df-qus","df-xps","imasval","imasval","imasval","imasbas","imasbas","imasbas","imasds","imasds","imasds","imasdsfn","imasdsval","imasdsval2","imasplusg","imasplusg","imasplusg","imasmulr","imasmulr","imasmulr","imassca","imassca","imasvsca","imasvsca","imasip","imastset","imasle","f1ocpbllem","f1ocpbl","f1ovscpbl","f1olecpbl","imasaddfnlem","imasaddvallem","imasaddflem","imasaddfn","imasaddfn","imasaddval","imasaddf","imasmulfn","imasmulval","imasmulf","imasvscafn","imasvscaval","imasvscaf","imasless","imasleval","qusval","quslem","qusin","qusbas","quss","divsfval","divsfval","ercpbllem","ercpbl","ercpbl","erlecpbl","erlecpbl","qusaddvallem","qusaddflem","qusaddval","qusaddf","qusmulval","qusmulf","xpsc","xpscg","xpscfn","xpsc0","xpsc1","xpscfv","xpsfrnel","xpsfeq","xpsfrnel2","xpscf","xpsfval","xpsff1o","xpsfrn","xpsfrn2","xpsff1o2","xpsval","xpslem","xpsbas","xpsaddlem","xpsadd","xpsmul","xpssca","xpsvsca","xpsless","xpsle","df-plusg","df-plusg","df-mulr","df-mulr","df-starv","df-starv","df-sca","df-sca","df-vsca","df-vsca","df-ip","df-ip","df-tset","df-tset","df-ple","df-ple","df-ocomp","df-ocomp","df-ds","df-ds","df-unif","df-hom","df-cco","strlemor0","strlemor1","strlemor1","strlemor2","strlemor2","strlemor3","strlemor3","strleun","strle1","strle2","strle3","plusgndx","plusgid","1strstr","1strbas","1strwunbndx","1strwun","2strstr","2strbas","2strop","grpstr","grpstr","grpbase","grpbase","grpplusg","grpplusg","ressplusg","grpbasex","grpplusgx","mulrndx","mulrid","rngstr","rngstr","rngbase","rngbase","rngplusg","rngplusg","rngmulr","rngmulr","starvndx","starvid","ressmulr","ressstarv","srngfn","srngfn","srngbase","srngbase","srngplusg","srngmulr","srnginvl","scandx","scaid","vscandx","vscaid","vscaid","lmodstr","lmodstr","lmodbase","lmodbase","lmodplusg","lmodplusg","lmodsca","lmodsca","lmodvsca","lmodvsca","ipndx","ipid","ipsstr","ipsstr","ipsstr","ipsbase","ipsbase","ipsbase","ipsaddg","ipsaddg","ipsaddg","ipsm
ulr","ipsmulr","ipsmulr","ipssca","ipssca","ipssca","ipsvsca","ipsvsca","ipsvsca","ipsip","ipsip","ipsip","resssca","ressvsca","ressip","phlstr","phlstr","phlbase","phlbase","phlplusg","phlplusg","phlsca","phlsca","phlvsca","phlvsca","phlip","phlip","tsetndx","tsetid","topgrpstr","topgrpbas","topgrpplusg","topgrptset","resstset","plendx","pleid","otpsstr","otpsbas","otpstset","otpsle","ressle","ocndx","ocid","dsndx","dsid","unifndx","unifid","odrngstr","odrngbas","odrngplusg","odrngmulr","odrngtset","odrngle","odrngds","ressds","homndx","homid","ccondx","ccoid","resshom","ressco","slotsbhcdif"}; metamathGraph = EdgeDelete[CloudGet["https://wolfr.am/PLbmdhRv"], Select[EdgeList[CloudGet["https://wolfr.am/PLbmdhRv"]], MemberQ[extensibleStructures, #[[2]]] &]]; metamathAssoc =CloudGet["https://wolfr.am/PLborw8R"]/. {"TG (TARSKI-GROTHENDIECK) SET THEORY"-> "ARITHMETIC & SET THEORY", "ZFC (ZERMELO-FRAENKEL WITH CHOICE) SET THEORY"-> "ARITHMETIC & SET THEORY", "ZF (ZERMELO-FRAENKEL) SET THEORY"-> "ARITHMETIC & SET THEORY"}; metamathDomains = Union[Values[metamathAssoc]]; metamathInfrastructure = {"SUPPLEMENTARY MATERIAL (USER'S MATHBOXES)", "GUIDES AND MISCELLANEA"}; metamathColors = Merge[{AssociationThread[Complement[metamathDomains, metamathInfrastructure] -> Take[ColorData[54, "ColorList"], Length[Complement[metamathDomains, metamathInfrastructure]]]], AssociationThread[metamathInfrastructure -> LightGray]}, Identity]; metamathDomainWeights = Tally[Values[metamathAssoc]]; metamathEdgeWeights = Tally[{metamathAssoc[#[[1]]], metamathAssoc[#[[2]]]} & /@ EdgeList[metamathGraph]]; metamathEdgesOutSimple = Append[Merge[AssociationThread[{#[[1, 1, 1]]}-> Total[#[[2]]]] & /@ (Transpose /@ GatherBy[Select[metamathEdgeWeights, #[[1, 1]] != #[[1, 2]] &], #[[1, 1]] &]), Identity], "CLASSICAL FIRST-ORDER LOGIC WITH EQUALITY" -> {7649}]; metamathNormalizedEdgeWeights = DirectedEdge[#[[1, 1]], #[[1, 2]]] -> #[[2]]/ Flatten[metamathEdgesOutSimple[#[[1,1]]]] & /@ metamathEdgeWeights; 
diskedLine[{line_,radii_}]:={RegionIntersection[Line[line],Circle[line[[1]],radii[[1]]]][[1,1]], RegionIntersection[Line[line],Circle[line[[2]],radii[[2]]]][[1,1]]}; weightedArrow[line_,weight_]:= Module[{len,start,end,angle, thick, rec, mid}, start=line[[1]]; end=line[[2]]; mid=Mean[line]; len=EuclideanDistance[start,end]; angle=Arg[(start-end).{1,I}]; thick=weight/len; rec= #+mid&/@(RotationMatrix[angle].#&/@{{-len/2,- thick/2},{len/2,- thick/2},{len/2, thick/2},{-len/2, thick/2}}); Polygon[rec]]; Show[VertexDelete[SimpleGraph[Graph[metamathDomains, First /@ metamathNormalizedEdgeWeights, EdgeStyle->Thread[First/@metamathNormalizedEdgeWeights -> ({AbsoluteThickness[15Last[#][[1]]],Arrowheads[0.15*Last[#][[1]]], GrayLevel[0.5, 0.5]}&/@metamathNormalizedEdgeWeights)], VertexSize->Thread[First/@metamathDomainWeights -> (Sqrt[#]/100&/@(Last/@ metamathDomainWeights))], VertexStyle -> (# -> {Lighter /@ metamathColors[#]} & /@ metamathDomains), VertexLabels->{"BASIC ALGEBRAIC STRUCTURES" -> "algebraic structures","BASIC CATEGORY THEORY" -> "category theory","BASIC LINEAR ALGEBRA" -> "linear algebra","BASIC ORDER THEORY" -> "order theory","BASIC REAL AND COMPLEX ANALYSIS" -> "real & complex analysis","BASIC REAL AND COMPLEX FUNCTIONS" -> "real & complex functions","BASIC STRUCTURES" -> "basic structures","BASIC TOPOLOGY" -> "topology","CLASSICAL FIRST-ORDER LOGIC WITH EQUALITY" -> "logic","ELEMENTARY GEOMETRY" -> "geometry","ELEMENTARY NUMBER THEORY" -> "number theory","GRAPH THEORY" -> "graph theory","GUIDES AND MISCELLANEA" -> "miscellaneous","REAL AND COMPLEX NUMBERS" -> "real & complex numbers","SUPPLEMENTARY MATERIAL (USER'S MATHBOXES)" -> "supplementary material","ARITHMETIC & SET THEORY" -> "arithmetic & set theory"}, GraphLayout -> "SpringElectricalEmbedding", PerformanceGoal->"Quality", AspectRatio->1]], {"SUPPLEMENTARY MATERIAL (USER'S MATHBOXES)","CLASSICAL FIRST-ORDER LOGIC WITH EQUALITY", "GUIDES AND MISCELLANEA"}], Editable -> True] |

And what we see is that at least in a human formalization of mathematics, numbers do indeed seem to play a very central role. Of course, this doesn’t tell us whether in principle results, say in topology, could be proved “without numbers”; it just tells us that in this particular formalization numbers are used to do that.

We also can’t tell whether numbers were just “convenient for proofs” or whether in fact the actual mathematical results picked to formalize were somehow based on their “accessibility” through numbers.

Given any (universal) axiom system there are an infinite number of theorems that can be proved from it. But the question is: which of these theorems will be considered “interesting”? And one should expect that theorems that can be interpreted in terms of concepts—like numbers—that have historically become well known in human mathematics will be preferred.

But is this just a story of accidents of the history of mathematics, or is there more to it?

The traditional view of the foundations of mathematics has involved imagining that some particular axiom system is picked, and then mathematics is some kind of exploration of the implications of this axiom system. It’s the analog of saying: pick some particular rule for a potential model of the universe, then see what consequences it has.

But what we’ve realized is that at least when it comes to studying the universe, we don’t fundamentally have to pick a particular rule: instead, we can construct a rulial multiway system in which, in effect, all possible rules are simultaneously used. And we can imagine doing something similar for mathematics. Instead of picking a particular underlying axiom system, just consider the structure made from simultaneously working out the consequences of all possible axiom systems.
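To give a flavor of the construction, here is a minimal Python sketch of a multiway system that applies several string-rewrite rules simultaneously; the rules and the initial string are arbitrary toy choices:

```python
# Minimal multiway system: from each state, every applicable rewrite of
# every rule at every position produces a successor, so all "rule
# choices" are explored at once rather than one being picked.

def successors(state, rules):
    """All states reachable in one rewrite, under any rule, anywhere."""
    out = set()
    for lhs, rhs in rules:
        start = state.find(lhs)
        while start != -1:
            out.add(state[:start] + rhs + state[start + len(lhs):])
            start = state.find(lhs, start + 1)
    return out

def multiway(initial, rules, steps):
    """Layers of the multiway evolution, one set of states per step."""
    layers = [{initial}]
    for _ in range(steps):
        frontier = set()
        for s in layers[-1]:
            frontier |= successors(s, rules)
        layers.append(frontier)
    return layers

rules = [("A", "AB"), ("B", "A")]
layers = multiway("A", rules, 2)
for i, layer in enumerate(layers):
    print(i, sorted(layer))
```

The analog sketched in the text goes one step further: instead of fixing the list of rules, one in effect unions over all possible rules, which is what makes the limiting object rule-independent.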

The resulting object seems to be closely related to things like the ∞-groupoid that arises in higher category theory. But the important point here is that in a sense this object is a representation of all possible results in all possible forms of mathematics. But now the question is: how should we humans sample this? If we’re in a sense computationally bounded, we basically have to pick a certain “reference frame”.

There seems to be a close analogy here to physics. In the case of physics, basic features of our consciousness seem to constrain us to certain kinds of reference frames, from which we inevitably “parse” the whole rulial multiway system as following known laws of physics.

So perhaps something similar is going on in mathematics. Perhaps here too something very much like the basic features of consciousness constrain our sampling of the limiting rulial object. But what then are the analogs of the laws of physics? Presumably they will be some kind of as-yet-undiscovered general “laws of bulk metamathematics”. Maybe they correspond to overall structural principles of “mathematics as we sample it” (conceivably related to category theory). Or maybe—as in the case of space and time in physics—they actually inevitably lead to something akin to numbers.

In other words, maybe—just as in physics the appearance of numbers can be thought of as reflecting aspects of our characteristics as observers—so too this may be happening in mathematics. Maybe given even the barest outline of our human characteristics, it’s inevitable that we’ll perceive numbers to be central to mathematics.

But what about our aliens in their starship? In physics we’ve realized that our view of the universe—and the laws of physics we consider it to follow—is not the only possible one, and there are others completely incoherent with ours that other kinds of observers could have. And so it will be with mathematics. We have a particular view—that’s perhaps ultimately based on things like features of our consciousness—but it’s not the only possible one. There can be other ones that still describe the same limiting rulial object, but are completely incoherent with what we’re used to.

Needless to say, by the time we can even talk about “aliens arriving in a starship”, we’ve got to assume that their “view of the universe” (or, in effect, their location in rulial space) is not too far from our own. And perhaps this also implies a certain alignment in the “view of mathematics”, perhaps even making numbers inevitable.

But in the abstract, I think we can expect that there are “views of mathematics” that are incoherently different from our own, and that while in a sense they are “still mathematics”, they don’t have any of the familiar features of our typical view of mathematics, like numbers.

## So, Are Numbers Inevitable?

Numbers have been part of human civilization throughout recorded history. But here we’ve asked the fundamental question of why that’s been the case. And what we’ve seen is that there doesn’t appear to be anything ultimately fundamental about the universe—or, for example, about mathematics—that inevitably leads to numbers. Instead, numbers seem to arise through our human efforts to “parse” what’s going on.

But it’s not just that numbers were invented at some point in human history, and then used. There’s something more fundamental and essential about us that makes numbers inevitable for us.

Our general capability for sophisticated computation—which the Principle of Computational Equivalence implies is shared by many systems—isn’t what makes numbers inevitable. In fact, when there’s lots of sophisticated computation—and computational irreducibility—going on, numbers aren’t a particularly useful description.

Instead, it’s when there’s computational reducibility that numbers can appear. And the point is that there are fundamental things about us that lead us to pick out pockets of computational reducibility. In particular, what we view as consciousness seems to be fundamentally related to the fact that we sample things in a particular way that leverages computational reducibility.

Not all computational reducibility need be related to numbers, but some examples of it are. And it’s these that seem to lead to the widespread appearance of numbers in our experience of the universe.

Could things be different? If we were different, definitely. And, for example, there’s no reason to think that a distributed AI system would have to intrinsically make use of anything like numbers. Yes, in our attempts to understand or explain it, we might use numbers. But nothing in the system itself would “know about” numbers.

And indeed by operating like this, the system would be able to make richer use of the computational resources available in the computational universe of possible programs. Numbers have been widely used in science, engineering and many aspects of the organization of society. But as things become more computationally sophisticated, I think we can expect that the intrinsic use of numbers will progressively taper off.

But it’ll still be true that as long as we preserve core aspects of our experience as what we consider conscious observers, some version of numbers will in the end be inevitable for us. We can aspire to generalize from numbers, and, for example, sample other representations of computational reducibility. But for now, numbers seem to be inextricably connected to core aspects of our existence.

*Thanks to the organizers of **Numerous Numerosity** for the “essay prompt” that led to this piece, and to Jonathan Gorard for some very helpful input.*