The Wolfram Physics Project:
The First Two Weeks

First, Thank You!

We launched the Wolfram Physics Project two weeks ago, on April 14. And, in a word, wow! People might think that interest in fundamental science has waned. But the thousands of messages we’ve received tell a very different story. People really care! They’re excited. They’re enjoying understanding what we’ve figured out. They’re appreciating the elegance of it. They want to support the project. They want to get involved.

It’s tremendously encouraging—and motivating. I wanted this project to be something for the world—and something lots of people could participate in. And it’s working. Our livestreams—even very technical ones—have been exceptionally popular. We’ve had lots of physicists, mathematicians, computer scientists and others asking questions, making suggestions and offering help. We’ve had lots of students and others who tell us how eager they are to get into doing research on the project. And we’ve had lots of people who just want to tell us they appreciate what we’re doing. So, thank you!


Real-Time Science

Science is usually done behind closed doors. But not this project. This project is an open project where we’re sharing—in as real time as we can—what we’re doing and the tools we’re using. In the last two weeks, we’ve done more than 25 hours of livestreams about the project. We’ve given introductions to the project—both lecture style and Q&A. We’ve done detailed technical sessions. And we’ve started livestreaming our actual working research sessions. And in a couple of those sessions we’ve made the beginnings of some real discoveries—live and in public.

Wolfram Physics Livestream Archives

It’s pretty cool to see thousands of people joining us to experience real-time science. (Our peak so far was nearly 8000 simultaneous viewers, and a fairly technical 2-hour session ended up being watched for a total of more than three-quarters of a million minutes.) And we’re starting to see serious “public collaboration” happening, in real time. People are making technical suggestions, sending us links to relevant papers, even sending us pieces of Wolfram Language code to run—all in real time.

One of the great—and unexpected—things about the project is how well what we’ve discovered seems to dovetail with existing initiatives (like string theory, holographic principles, spin networks, higher categories, twistor theory, etc.). We’re keen to understand more about this, so one of the things we’ll be doing is having livestreamed discussions with experts in these various areas.

The Summer School Approaches

It’s only been two weeks since our project was launched—and there’ve already been some interesting things written about it that have helped sharpen my philosophical understanding. There hasn’t yet been time for serious scientific work around the project to be completed… but we know people are on that path.

We also know that there are lots of people who want to get to the point where they can make serious contributions to the project. And to help with that, we’ve got an educational program coming up: we’ve added a Fundamental Physics track to our annual Wolfram Summer School.

Wolfram Summer School

Our Summer School—which has been running since 2003—is a 3-week program, focused on every participant doing a unique, original project. For the Fundamental Physics track, we’re going to have a “week 0” (June 22–27) that will be lectures and workshops about the Physics Project, followed by a 3-week project-based program (June 28–July 17).

This year’s Summer School will (for the first time) be online (though synchronous), so it’s going to be easier for students from around the world to attend. Many of the students for the Fundamental Physics track will be graduate students or postdocs, but we also expect to have students who are more junior, as well as professors and professionals. Since announcing the program last week, we’ve already received many good applications… but we’re going to try to expand the program to accommodate everyone who makes sense. (So if you’re thinking of applying, please just apply… though do it as soon as you can!)

I’m very excited about what’s going to be achieved at the Summer School. I never expected our whole project to develop as well—or as quickly—as it has. But at this point I think we’ve developed an approach and a methodology that are going to make possible rapid progress in many directions. And I’m fully expecting that there’ll be projects at the Summer School that lead, for example, to academic papers that rapidly become classics.

This is one of those rare times when there’s a lot of exceptionally juicy low-hanging fruit—and I’m looking forward to helping outstanding students find and pick that scientific fruit at our Summer School.

New Science in the First Two Weeks

It’s not too surprising that much of our time in the first two weeks after launching the project has been spent on “interfacing with the world”—explaining what we’re doing, trying to respond to thousands of messages, and setting up internal and external systems that can make future interactions easier.

But we’ve been very keen to go on working on the science, and some of that has been happening too. We’ve so far done five livestreamed working sessions, three on spin and charge, one on the interplay with distributed computing, and one on combinators and physics. Of course, this is just what we’re directly working on ourselves. We’ve also already helped several people get started on projects that use their expertise—in physics, mathematics or computer science—and it’s wonderful to see the beginning of this kind of “scaling up”.

But let me talk a bit about things I think I’ve learned in the past two weeks. Some of this comes from the working sessions we’ve had; some is in response to questions at our Q&As; and some is just the result of my slowly growing understanding—particularly helped by my efforts in explaining the project to people.

What Is Angular Momentum?

OK, so here’s something concrete that came out of our working session last Thursday: I think we understand what angular momentum is. Here’s part of where we figured that out:

Physics working session

We already figured out a few months ago what linear momentum is. If you want to know the amount of linear momentum in a particular direction at a particular place in the hypergraph, you just have to see how much “activity” at that place in the hypergraph is being transferred in that “direction”.

Directions are defined by geodesics that give the shortest path between one point and another. Momentum in a particular direction then corresponds to the extent to which an update at one point leads to updates at nearby points along that direction. (More formally, the statement is that momentum is given by the flux of causal edges through timelike hypersurfaces.)
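To make the graph notion of a geodesic concrete, here is a small illustrative sketch (in Python rather than Wolfram Language, and not code from the project itself) that finds a shortest path on a toy grid graph by breadth-first search:

```python
from collections import deque

def geodesic(graph, a, b):
    # Breadth-first search finds a shortest path: the graph analog
    # of a geodesic, which is what defines a "direction".
    parents = {a: None}
    queue = deque([a])
    while queue:
        node = queue.popleft()
        if node == b:
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for nbr in graph[node]:
            if nbr not in parents:
                parents[nbr] = node
                queue.append(nbr)
    return None  # b is not reachable from a

# Toy stand-in for a spatial hypergraph: a 3x3 grid graph.
grid = {(x, y): [(x + dx, y + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= x + dx < 3 and 0 <= y + dy < 3]
        for x in range(3) for y in range(3)}

path = geodesic(grid, (0, 0), (2, 2))
assert len(path) == 5  # 4 edges is the shortest possible on this grid
```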

OK, so how about angular momentum? Well, it took us a total of nearly 6 hours, over three sessions, but here’s what we figured out. (And kudos to Jonathan Gorard for having had a crucial idea.)

So, first, what’s the usual concept of angular momentum in physics? It’s all about turning. It’s all about momentum that doesn’t add up to go in any particular direction but just circulates around. Here’s the picture we used on the livestream:

VectorPlot[{y, -x}, {x, -3, 3}, {y, -3, 3}]

Imagine this is a fluid, like water. The fluid isn’t flowing in a particular direction. Instead, it’s just circulating around, creating a vortex. And this vortex has angular momentum.

But what might the analog of this be in a hypergraph? To figure this out, we have to understand what rotation really is. It took us a little while to untangle this, but in the end it’s very simple. In any number of dimensions, a rotation is something that takes two vectors rooted at a particular point, and transforms one into the other. On the livestream, we used the simple example:

Graphics3D[{Thick, InfinitePlane[{{0, 0, 0}, {1, 0, 0}, {0, 1, 2}}], 
  Arrow[{{0, 0, 0}, {1, 0, 0}}], Arrow[{{0, 0, 0}, {0, 1, 2}}]}]

And in the act of transforming one of these vectors into the other we’re essentially sweeping out a plane. We imagined filling in the plane by making something like a string figure that joins points on the two vectors:

Graphics3D[Table[Line[{{i, 0, 0}, {0, j, 0}}], {i, 10}, {j, 10}]]

But now there’s an easy generalization to the hypergraph. A single geodesic defines a direction. Two geodesics—and the geodesics “strung” between them—define a plane. Here’s what we created to give an illustration of this:

Generalization to the hypergraph

So now we are beginning to have a picture of angular momentum: it is “activity” that “circulates around” in this little “patch of plane” defined by two geodesics from a particular point. We can get considerably more formal than this, talking about flux of causal edges in slices of tubes defined by pairs of geodesics. On the livestream, we started relating this to the tensor Jμν which defines relativistic angular momentum (the two indices of Jμν basically correspond to our two geodesics).

There are details to clean up, and further to go. (Rotating frames in general relativity? Rotating black holes? Black-hole “no hair” theorems? Etc.) But this was our first real “aha” moment in a public working session. And of course there’s an open archive both of the livestream itself, and the notebook created in it.

What about Quantum Angular Momentum and Spin?

One of the reasons I wanted to think about angular momentum was because of quantum mechanics. Unlike ordinary momentum, angular momentum is quantized, even in traditional physics. And, more than that, even supposedly point particles—like electrons—have nonzero quantized spin angular momentum.

We don’t yet know how this works in our models. (Stay tuned for future livestreamed working sessions!) But one point is clear: it has to involve not just the spatial hypergraph and the spacetime causal graph (as in our discussion of angular momentum above), but also the multiway causal graph.

And that means we’re dealing not just with a single rotation, but a whole collection of interwoven ones. I have a suspicion that the quantization is going to come from something essentially topological. If you’re looking at, say, fluid flow near a vortex, then when you go around a small circle adding up the flow at every point, you’ll get zero if the circle doesn’t include the center of the vortex, and some quantized value if it does (the value will be directly proportional to the number of times you wind around the vortex).
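As a small numerical check of that winding-number picture, here is an illustrative Python sketch, using the standard idealized point-vortex field (-y, x)/(x²+y²) rather than the rigid-rotation field plotted earlier:

```python
import math

def circulation(field, center, radius, steps=20000):
    # Line integral of the field around a circle, via the midpoint rule.
    total = 0.0
    for k in range(steps):
        t0 = 2 * math.pi * k / steps
        t1 = 2 * math.pi * (k + 1) / steps
        tm = (t0 + t1) / 2
        x = center[0] + radius * math.cos(tm)
        y = center[1] + radius * math.sin(tm)
        fx, fy = field(x, y)
        dx = radius * (math.cos(t1) - math.cos(t0))
        dy = radius * (math.sin(t1) - math.sin(t0))
        total += fx * dx + fy * dy
    return total

def vortex(x, y):
    # Idealized point vortex at the origin.
    r2 = x * x + y * y
    return -y / r2, x / r2

# Loops around the vortex give circulation 2*pi, regardless of radius...
assert abs(circulation(vortex, (0, 0), 1.0) - 2 * math.pi) < 1e-6
assert abs(circulation(vortex, (0, 0), 3.0) - 2 * math.pi) < 1e-6
# ...while a loop that misses the vortex gives exactly 0.
assert abs(circulation(vortex, (5, 0), 1.0)) < 1e-6
```

The answer depends only on whether (and how many times) the loop winds around the vortex, which is the topological flavor of quantization suggested above.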

Assuming we’ve got a causal-invariant system, one feature of the multiway causal graph is that it must consist of many copies of the same spacetime causal graph—in a sense laid out (albeit with complicated interweaving) in branchial space. And it’s also possible (as Jonathan suggested on the livestream) that somehow when one measures an angular momentum—or a spin—one is effectively picking up just a certain discrete number of “histories”, or a discrete number of identical copies of the spacetime causal graph.

But we’ll see. I won’t be surprised if both ideas somehow dovetail together. But maybe we’ll need some completely different idea. Either way, I suspect there’s going to be somewhat sophisticated math involved. We have a guess that the continuum limit of the multiway causal graph is something like a twistor space. So then we might be dealing with homotopy in twistor space—or, more likely, some generalization of that.

On the livestream, various people asked about spinors. We ordinarily think of a rotation through 360° as bringing everything back to where it started from. But in quantum mechanics that’s not how things work. Instead, for something like an electron, it takes a rotation through 720°. And mathematically, that means we’re dealing with so-called spinors, rather than vectors. We don’t yet know how this could come out in our models (though we have some possible ideas)—but this is something we’re planning to explore soon. (It’s again mathematically complicated, because we’re not intrinsically dealing with integer-dimensional space, so we’ve got to generalize the notion of rotation, rotation groups, etc.)

And as I write this, I have a new idea—of trying to see how relativistic wave equations (like the Klein–Gordon equation for spin-0 particles or the Dirac equation for spin-1/2 particles) might arise from thinking about bundles of geodesics in the multiway causal graph. The suspicion is that there would be a subtle relationship between effective spacetime dimension and symmetries associated with the bundle of geodesics, mirroring the way that in traditional relativistic quantum mechanics one can identify different spins with objects transforming according to different irreducible representations of the symmetry group of spacetime.

CPT Invariance?

Related to the whole story about spinors, there’s a fundamental result in quantum field theory called the spin-statistics theorem that says that particles with half-integer spins (like electrons) are fermions (and so obey the exclusion principle), while particles with integer spins (like photons) are bosons (and so can form condensates). And this in turn is related to what’s called CPT invariance.

And one of the things that came out of a livestream last week is that there’s potentially a very beautiful interpretation of CPT invariance in our models.

What is CPT invariance? C, P and T correspond to three potential transformations applied to physical systems. T is time reversal, i.e. having time run in reverse. P is parity, or space inversion: reversing the sign of all spatial coordinates. And C is charge conjugation: turning particles (like electrons) into antiparticles (like positrons). One might think that the laws of physics would be invariant under any of these transformations. But in fact, each of C, P and T invariance is violated somewhere in particle physics (and this fact was a favorite of mine back when I did particle physics for a living). However, the standard formalism of quantum field theory implies that there is still invariance under the combined CPT transformation—and, so far as one can tell, this is experimentally correct.

OK, so what do C, P and T correspond to in our models? Consider the multiway causal graph. Here’s a toy version of it, that we discussed in a livestream last week:

Graph3D[GridGraph[{6, 6, 6}]]

Edges in one direction (say, down) correspond to time. Edges in another direction correspond to space. And edges in the third direction correspond to branchial space (i.e. the space of quantum states).

T and P then have simple interpretations: they correspond to reversing time edges and space edges, respectively. C is a little less clear, but we suspect that it just corresponds to reversing branchial edges (and this very correspondence probably tells us something about the nature of antiparticles).

So then CPT is like a wholesale inversion of the multiway causal graph. But what can we say about this? Well, we’ve argued that (with certain assumptions) spacetime slices of the multiway causal graph must obey the Einstein equations. Similarly, we’ve argued that branchtime slices follow the Feynman path integral. But now there’s a generalization of both these things: in effect, a generalization of the Einstein equations that applies to the whole multiway causal graph. It’s mathematically complicated—because it must describe the combined geometry of physical and branchial space. But it looks as if CPT invariance must just correspond to a symmetry of this generalized equation. And to me this is something very beautiful—that I can hardly wait to investigate more.

What’s Going On in Quantum Computing?

One feature of our models is that they potentially make it a lot more concrete what’s going on in quantum computing. And over the past couple of weeks we’ve started to think about what this really means.

There are two basic points. First, the multiway graph provides a very explicit representation of “quantum indeterminacy”. And, second, thinking about branchial space (and quantum observation frames) gives more concreteness to the notion of quantum measurement.

A classical computer like an ordinary Turing machine is effectively just following one deterministic path of evolution. But the qualitative picture of a quantum computer is that instead it’s simultaneously following many paths of evolution, so that in effect it can do many Turing-machine-like computations in parallel.

But at the end, there’s always the issue of finding which path or paths have the answer you want: and in effect you have to arrange your measurement to just pick out these paths.

In ordinary Turing machines, there are problems (like multiplying numbers) that are in the class P, meaning that they can be done in a number of steps polynomial in the size of the problem (say, the number of digits in the numbers). There are also problems (like factoring numbers) that are in the class NP, meaning that if you were to “non-deterministically” guess the answer, you could check it in polynomial time.
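To make the “check a guess in polynomial time” idea concrete, here is a tiny illustrative Python sketch (the numbers are invented for the example): verifying a claimed factorization takes only a few multiplications, even though finding the factors may be much harder:

```python
def verify_factoring(n, factors):
    # Fast check of a guessed factorization: time polynomial in the
    # number of digits, even though *finding* factors may not be.
    if len(factors) < 2 or any(f <= 1 for f in factors):
        return False  # reject trivial or invalid "factorizations"
    product = 1
    for f in factors:
        product *= f
    return product == n

assert verify_factoring(8051, [83, 97])       # a correct guess checks out
assert not verify_factoring(8051, [7, 1150])  # a wrong guess is rejected
```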

A core question in theoretical computer science (which I have views on, but won’t discuss here) is whether P=NP, that is, whether all NP problems can actually be done in polynomial time.

One way to imagine doing an NP problem in polynomial time is not to use an ordinary Turing machine, but instead to use a “non-deterministic Turing machine” in which there is a tree of possible paths where one can pick any path to follow. Well, our multiway system representing quantum mechanics essentially gives that whole tree (though causal invariance implies that ultimately the branches always merge).
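Here is a minimal illustrative sketch of a multiway system over string-rewriting rules, in Python (a toy stand-in for the idea, not the project’s MultiwaySystem function itself), showing how branches can later merge:

```python
def step(state, rules):
    # Apply each rule at each position where it matches;
    # every single rewrite gives one successor state.
    successors = set()
    for lhs, rhs in rules:
        start = 0
        while (i := state.find(lhs, start)) != -1:
            successors.add(state[:i] + rhs + state[i + len(lhs):])
            start = i + 1
    return successors

def multiway(initial, rules, steps):
    # Breadth-first generations of the multiway evolution.
    levels = [{initial}]
    for _ in range(steps):
        levels.append({t for s in levels[-1] for t in step(s, rules)})
    return levels

levels = multiway("AA", [("A", "AB")], 2)
# One step gives two branches; after two steps the branches
# partially merge at the shared state "ABAB".
assert levels[1] == {"ABA", "AAB"}
assert levels[2] == {"ABBA", "ABAB", "AABB"}
```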

For the last several years, we’ve been developing a framework for quantum computing in the Wolfram Language (which we’re hoping to release soon). And in this framework we’re essentially describing two things: how quantum information is propagated with time through some series of quantum operations, and how the results of quantum processes are measured. More formally, we have time evolution operators, and we have measurement operators.

Well, here’s the first neat thing we’ve realized: we can immediately reformulate our quantum computing framework directly in terms of multiway systems. The quantum computing framework can in effect just be viewed as an application of our MultiwaySystem function that we put in the Wolfram Function Repository for the Physics Project.

But now that we’re thinking in terms of multiway systems—or the multiway causal graph—we realize that standard quantum operations are effectively associated with timelike causal edges, while measurement operations are associated with branchlike causal edges. And the extent to which one can get answers before decoherence takes over has to do with a competition between these kinds of edges.

This is all very much in progress right now, but in the next few weeks we’re expecting to be able to look at well-known quantum algorithms in this context, and see whether we can analyze them in a way that treats time evolution and measurement on a common footing. (I will say that ever since the work I did with Richard Feynman on quantum computing back in the early 1980s, I have always wanted to really understand the “cost of measurement”, and I’m hoping that we’ll finally be able to do that now.)

Numerical Relativity; Numerical Quantum Field Theory

Although the traditional view in physics is that space and time are continuous, when it comes to doing actual computer simulations they usually in the end have to be discretized. And in general relativity (say, for simulating a black hole merger) discretization is usually a very subtle business, in which the details are hard to keep track of, and hard to keep consistent.

In our models, of course, discretization is not something “imposed after the fact”, but rather something completely intrinsic to the model. So we started wondering whether somehow this could be used in practice to set up simulations.

It’s an idea very analogous to something I did rather successfully in the mid-1980s for fluid flow. In fluids, as in general relativity, there’s a traditional continuum description, and the most obvious way of doing simulations is by discretizing this. But what I did instead was to start with an idealized model of discrete molecules—and then to simulate lots of these molecules. My interest was to understand the fundamental origins of things like randomness in fluid turbulence, but variants of the method I invented have now become a standard approach to fluid simulation.

So can one do something similar with general relativity? The actual “hypergraph of the universe” would be on much too tiny a scale for it to be directly useful for simulations. But the point is that even on a much larger scale our models can still approximate general relativity—but unlike “imposed after the fact” discretization, they are guaranteed to have a certain internal consistency.

In usual approaches to “numerical relativity” one of the most difficult things is dealing with progressive “time evolution”, not least because of arbitrariness in what coordinates one should use for “space” and “time”. But in our models there’s a way of avoiding this and directly getting a discrete structure that can be used for simulation: just look at the spacetime causal graph.

There are lots of details, but—just like in the fluid flow case—I expect many of them won’t matter. For example, just like lots of rules for discrete molecules yield the same limiting thermodynamic behavior, I expect lots of rules for the updating events that give the causal graph will yield the same limiting spacetime structure. (Like in standard numerical analysis, though, different rules may have different efficiency and show different pathologies.)

It so happens that Jonathan Gorard’s “day job” has centered around numerical relativity, so he was particularly keen to give this a try. And even though we thought we’d only started talking about the idea in the last couple of weeks, Jonathan noticed that it was actually already there on page 1053 of A New Kind of Science—where it had been languishing for nearly 20 years!

Still, we immediately started thinking about going further. Beyond general relativity, what about quantum field theory? Things like lattice gauge theory typically involve replacing path integrals by “thermal averages”—or effectively operating in Euclidean rather than Minkowski spacetime. But in our models, we potentially get the actual path integral as a limit of the behavior of geodesics in a multiway graph. Usually it’s been difficult to get a consistent “after the fact” discretization of the path integral; but now it’s something that emerges from our models.

We haven’t tried it yet (and someone should!). But independent of nailing down precisely what’s ultimately underneath quantum field theory, it seems like the very structure of our models has a good chance of being very helpful in dealing in practice with quantum field theory as we already know it.

Surprise: It’s Not Just about Physics

One of the big surprises of the past two weeks has been our increasing realization that the formalism and framework we’re developing really aren’t just relevant to physics; they’re potentially very important elsewhere too.

In a sense this shouldn’t be too surprising. After all, our models were constructed to be as minimal and structureless as possible. They don’t have anything intrinsically about physics in them. So there’s no reason they can’t apply to other things too.

But there’s a critical point here: if a model is simple enough, one can expect that it could somehow be a foundation for many different kinds of things. Long ago I found that with the 1D cellular automata I studied. The 256 “elementary” cellular automata are in a sense the very simplest models of a completely discrete system with a definite arrangement of neighbors. And over the years essentially all of these 256 cellular automata found uses as models for bizarrely different things (pigmentation, catalysis, traffic, vision, etc.).

Well, our models now are in a sense the most minimal that describe systems with rules based on arbitrary relationships (as represented by collections of relations).

And the first big place where it seems the models can be applied is in distributed computing. What is distributed computing? Essentially it’s about having a whole collection of computing elements that are communicating with others to collectively perform a computation.

In the simplest setup, one just assumes that all the computing elements are operating in lockstep—like in a cellular automaton. But what if the computing elements are instead operating asynchronously, sending data to each other when it happens to be ready?

Well, this setup immediately seems a lot more like the situation we have in our models—or in physics—where different updates can happen in any order, subject only to following the causal relationships defined by the causal graph.

But now there start to be interesting analogies between the distributed computing case and physics. And indeed what’s got me excited is that I think there’s going to be a very fruitful interplay between these areas. Ideas in distributed computing are going to be useful for thinking about physics—and vice versa.

I’m guessing that phenomena and results in distributed computing are going to have direct analogs in general relativity and in quantum mechanics. (“A livelock is like a closed timelike curve”, etc.) And that ideas from physics in the context of our models are going to give one new ways to think about distributed computing. (Imagine “programming in a particular reference frame”, etc.)

In applying our models to physics, a central idea is causal invariance. And this has an immediate analog in distributed computing: it’s the idea of eventual consistency, or in other words that it doesn’t matter what order operations are done in; the final result is always the same.
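Here is the eventual-consistency idea in miniature (an illustrative Python sketch): when every operation commutes with every other, the final state is independent of the order in which the operations are applied:

```python
from itertools import permutations

# Each operation inserts one element into a set. Set insertion is
# commutative and idempotent, so any application order must agree.
ops = [lambda s, x=x: s | {x} for x in ("a", "b", "c")]

results = set()
for order in permutations(ops):
    state = frozenset()
    for op in order:
        state = op(state)
    results.add(state)

# All 6 orderings reach the same final state: "eventual consistency",
# the distributed-computing analog of causal invariance.
assert results == {frozenset({"a", "b", "c"})}
```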

But here’s something from physics: our universe (fortunately!) doesn’t seem like it’s going to halt with a definite “final result”. Instead, it’s just continually evolving, but with causal invariance implying various kinds of local equivalence and consistency. And indeed many modern distributed computing systems are again “just running” without getting to “final results” (think: the internet, or a blockchain).

Well, in our approach to physics the way we handle this is to think in terms of foliations and reference frames—which provide a way to organize and understand what’s going on. And I think it’s going to be possible to think about distributed computing in the same kind of way. We need some kind of “calculus of reference frames” in terms of which we can define good distributed computing primitives.

In physics, reference frames are most familiar in relativity. The most straightforward are inertial frames. But in general relativity there’s been slow but progressive understanding of other kinds of frames. And in our models we’re also led to think about “quantum observation frames”, which are essentially reference frames in the branchial space of quantum states.

Realistically, at least for me, it’s so far quite difficult to wrap one’s head around these various kinds of reference frames. But I think in many ways this is at its root a language design problem. Because if we had a good way to talk about working with reference frames we’d be able to use them in distributed computing and so we’d get familiar with them. And then we’d be able to import our understanding to physics.

One of the most notable features of our models for physics when it comes to distributed computing is the notion of multiway evolution. Usually in distributed computing one’s interested in looking at a few paths, and making sure that, for example, nothing bad can happen as a result of different orders of execution. But in multiway systems we’re not just looking at a few paths; we’re looking at all paths.

And in our models this isn’t just some kind of theoretical concept; it’s the whole basis for quantum mechanics. And given that we’re looking at all paths, we’re led to invent things like quantum observation frames, and branchial space. We can think of the branching of paths in the multiway system as corresponding to elementary pieces of ambiguity. And in a sense the handling of our model—and the features of physics that emerge—is about having ways to deal with “ambiguity in bulk”.

Is there an analog of the Feynman path integral in distributed computing? I expect so—and I wouldn’t be surprised if it’s very useful in giving us a way to organize our thinking and our programs.

In theoretical analyses of distributed computing, one usually ignores physical space—and the speed of light. But with our models, it’s going to be possible to account for such things, alongside branchial connections, which are more like “instantaneous network connections”. And, for example, there’ll be analogs of time dilation associated with motion in both physical space and branchial space. (Maybe such effects are already known in distributed computing; I’m not sure.)

I think the correspondence between distributed computing and physics in the context of our models is going to be incredibly fertile. We already did one livestreamed working session about it (with Taliesin Beynon as a guest); we’ll be doing more.

Distributed computing

In the working session we had, we started off discussing vector clocks in distributed computing, and realized that they’re the analog of geodesic normal coordinates in physics. Then we went on to discuss more of the translation dictionary between distributed computing and physics. We realized that race conditions correspond to branch pairs. The branchial graph defines sibling tasks. Reads and writes are just incoming and outgoing causal edges. We invented the idea of a “causal exclusion graph”, which is a kind of complement of a causal graph, saying not what events can follow a given event, but rather what events can’t follow a given event.
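For readers who haven’t met vector clocks, here is a minimal illustrative sketch of the standard construction (in Python; the three-process scenario is invented for the example):

```python
def vc_tick(clock, i):
    # Local event on process i: increment that process's own component.
    c = list(clock)
    c[i] += 1
    return tuple(c)

def vc_merge(a, b):
    # Component-wise max: the standard vector-clock merge on receive.
    return tuple(max(x, y) for x, y in zip(a, b))

def happened_before(a, b):
    # a causally precedes b iff a <= b component-wise and a != b.
    return all(x <= y for x, y in zip(a, b)) and a != b

# Three processes; process 0 sends a message to process 1.
p0 = vc_tick((0, 0, 0), 0)                 # (1, 0, 0): send event on p0
p1 = vc_tick(vc_merge((0, 0, 0), p0), 1)   # (1, 1, 0): receive on p1
p2 = vc_tick((0, 0, 0), 2)                 # (0, 0, 1): independent event

assert happened_before(p0, p1)  # causally ordered (a timelike relation)
# Neither precedes the other: "spacelike-separated" concurrent events.
assert not happened_before(p0, p2) and not happened_before(p2, p0)
```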

We started discussing applications. Like clustered databases, multiplayer games and trading in markets. We talked about things like Git, where merge conflicts are like violations of causal invariance. We talked a bit about blockchains—but it seemed like there were richer analogs in hashgraphs and things like NKN and IOTA. Consensus somehow seemed to be the analog of “classicality”, but then there’s the question of how much can be achieved in the “quantum regime”.

Although for me the notion of seriously using ideas from physics to think about distributed computing is basically less than two weeks old, I’ve personally been wondering about how to do programming for distributed computing for a very long time. Back in the mid-1980s, for example, when I was helping a company (Thinking Machines Corporation) that was building a 65536-processor computer (the Connection Machine), I thought the most plausible way to do programming on such a system would be through none other than graph rewriting.

But at the time I just couldn’t figure out how to organize such programming so that programmers could understand what was going on. But now—through thinking about physics—I’m pretty sure there’s going to be a way. We’re already used to the idea (at least in the Wolfram Language) that we can write a program functionally, procedurally, declaratively, etc. I think there are going to be ways to write distributed programs “in different reference frames”. It’s probably going to be more structured and more parametrized than these different traditional styles of programming. But basically it’ll be a framework for looking at a given program in different ways, and using different foliations to understand and describe what it’s supposed to do.

I have to mention one more issue that’s been bugging me since 1979. It has to do with recursive evaluation. Imagine we’ve defined a Fibonacci recursion:

f[n_] := f[n - 1] + f[n - 2]

f[1] = f[2] = 1

Now imagine you enter f[10]. How should you evaluate this? At the first step you get f[9]+f[8]. But after that, do you just keep “drilling down” the evaluation of f[9] in a “depth-first way”, until it gets to 1s, or do you for example notice that you get f[8]+f[7]+f[8], and then collect the f[8]s and evaluate them only once?
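To see the difference concretely, here is a Python sketch (standing in for the Wolfram Language definitions above) that counts rule applications under the two evaluation orders:

```python
# Count rule applications: depth-first re-evaluates shared subterms;
# the "shared" order collects each f[k] and evaluates it only once.

def fib_depth_first(n, counter):
    """Plain depth-first recursion: shared subterms are re-evaluated."""
    counter[0] += 1
    if n <= 2:
        return 1
    return fib_depth_first(n - 1, counter) + fib_depth_first(n - 2, counter)

def fib_shared(n, cache, counter):
    """Evaluate each distinct f[k] exactly once, reusing the result."""
    if n in cache:
        return cache[n]
    counter[0] += 1
    cache[n] = (1 if n <= 2
                else fib_shared(n - 1, cache, counter)
                + fib_shared(n - 2, cache, counter))
    return cache[n]

c1, c2 = [0], [0]
v1 = fib_depth_first(10, c1)
v2 = fib_shared(10, {}, c2)
print(v1, v2, c1[0], c2[0])  # same value, very different evaluation counts
```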

In my Mathematica-precursor system SMP, I tried to parametrize this behavior, but realistically nobody understood it. So my question now is: given the idea of reference frames, can we invent some kind of notion of “evaluation fronts” that can be described like foliations, and that define the order of recursive evaluation?

An extreme case of this arises in evaluating S, K combinators. Even though S, K combinators are 100 years old this year, they remain extremely hard to systematically wrap one’s head around. And part of the reason has to do with evaluation orders. It’s fine when one manages to get a combinator expression that can successfully be evaluated (through some path) to a fixed point. But what about one that just keeps “evolving” as you try to evaluate it? There doesn’t seem to be any good formalism for handling that. But I think our physics-based approach may finally deliver this.
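As a minimal illustration of the evaluation-order issue, here is a leftmost-outermost S, K reducer sketched in Python, with a step bound for terms that never reach a fixed point. The tuple encoding of applications is just one convenient choice, not a standard representation:

```python
# Terms are 'S', 'K', variables, or application pairs (f, x).
# Reduction is leftmost-outermost, with a step bound, since some
# combinator expressions just keep "evolving" forever.

def step(t):
    """One leftmost-outermost reduction step; returns (term, changed?)."""
    # K x y -> x
    if isinstance(t, tuple) and isinstance(t[0], tuple) and t[0][0] == 'K':
        return t[0][1], True
    # S f g x -> f x (g x)
    if (isinstance(t, tuple) and isinstance(t[0], tuple)
            and isinstance(t[0][0], tuple) and t[0][0][0] == 'S'):
        f, g, x = t[0][0][1], t[0][1], t[1]
        return ((f, x), (g, x)), True
    if isinstance(t, tuple):
        head, changed = step(t[0])
        if changed:
            return (head, t[1]), True
        arg, changed = step(t[1])
        if changed:
            return (t[0], arg), True
    return t, False

def reduce_term(t, max_steps=100):
    for _ in range(max_steps):
        t, changed = step(t)
        if not changed:
            return t
    return t  # step bound hit: possibly a term with no fixed point

# S K K x reduces to x (S K K acts as the identity combinator):
skkx = ((('S', 'K'), 'K'), 'x')
print(reduce_term(skkx))
```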

So, OK, the models that we invented for physics also seem highly relevant for distributed computing. But what about for other things? Already we’ve thought about two other—completely different—potential applications.

The first, that we actually discussed a bit even the week before the Physics Project was launched, has to do with digital contact tracing in the context of the current pandemic. The basic idea—that we discussed in a livestreamed brainstorming session—is that as people move around with their cellphones, Bluetooth or other transactions can say when two phones are nearby. But the graph of what phones were close to what phones can be thought of as being like a causal graph. And now the question of whether different people might have been close enough in space and time for contagion becomes one of reconstructing spatial graphs by making plausible foliations of the causal graph. There are bigger practical problems to solve in digital contact tracing, but assuming these are solved, the issues that can be informed by our models are likely to become important. (By the way, given a network of contacts, the spreading of a contagious disease on it can be thought of as directly analogous to the growth of a geodesic ball in it.)
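The geodesic-ball picture is easy to make concrete. Here is a Python sketch on a small hypothetical contact graph: the set of people potentially reachable by contagion within r transmission steps is just the radius-r geodesic ball in the graph metric.

```python
from collections import deque

def geodesic_ball(adjacency, source, r):
    """All nodes within graph distance r of source (breadth-first search)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if dist[node] == r:
            continue  # at the ball's boundary: don't expand further
        for nbr in adjacency.get(node, ()):
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return set(dist)

# Hypothetical contact graph: who was near whom.
contacts = {
    "alice": ["bob", "carol"],
    "bob": ["alice", "dave"],
    "carol": ["alice"],
    "dave": ["bob", "eve"],
    "eve": ["dave"],
}
print(geodesic_ball(contacts, "alice", 1))  # direct contacts
print(geodesic_ball(contacts, "alice", 2))  # two transmission steps out
```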

One last thing that’s still just a vague idea is to apply our models to develop a more abstract approach to biological evolution and natural selection (both for the overall tree of life, and for microorganisms and tumors). Why might there be a connection? The details aren’t yet clear. Perhaps something like the multiway graph (or rule-space multiway graph) can be used to represent the set of all possible sequences of genetic variations. Maybe there’s some way of thinking about the genotype-phenotype correspondence in terms of the correspondence between multiway graphs and causal graphs. Maybe different sequences of “environments” correspond to different foliations, sampling different parts of the possible sequence of genetic variations. Maybe speciation has some correspondence with event horizons. Most likely there’ll need to be some other layer or variation on the models to make them work. But I have a feeling that something is going to be possible.

It’s been possible for a long time to make “aggregated” models of biological evolution, where one’s looking at total numbers of organisms of some particular type (with essentially the direct analog of differential-equation-based aggregated epidemiological models). But at a more individual-organism level one’s typically been reduced to doing simulations, which tend to have messy issues like just how many “almost fittest” organisms should be kept at every “step” of natural selection. It could be that the whole problem is mired in computational irreducibility. But the robust way in which one seems to be able to reason in terms of natural selection suggests to me that—like in physics—there’s some layer of computational reducibility, and one just has to find the right concepts to be able to develop a more general theory on the basis of it. And maybe the models we’ve invented for physics give us the framework to do this.

Some Coming Attractions

We’re at a very exciting point—with an incredible number of “obvious directions” to go in. But here are a few that we’re planning on exploring in the next few days, in our livestreamed working sessions.

The Fine Structure of Black Holes

In traditional continuum general relativity it always seems a bit shocking when there’s some kind of discontinuity in the structure of spacetime. In our fundamentally discrete model it’s a bit less shocking, and in fact things like black holes (and other kinds of spacetime singularities) seem to arise very naturally in our models.

But what exactly are black holes like in our models? Do they have the same kind of “no hair” perfection as in general relativity—where only global properties like mass and angular momentum affect how they ultimately look from outside? And how do our black holes generate things like Hawking radiation?

In a livestream last week, we generated a very toy version of a black hole, with a causal graph of the form:

ResourceFunction["MultiwaySystem"][{"A" -> "AB", "XABABX" -> "XXXX",
   "XXXX" -> "XXXXX"}, {"XAAX"}, 8, "CausalGraphStructure"] // LayeredGraphPlot

This “black hole” has the feature that causal edges go into it, but none come out. In other words, things can affect the black hole, but the black hole can’t causally affect anything else. It’s the right basic idea, but there’s a lot missing from the toy version, which isn’t surprising, not least because it’s based on a simple string substitution system, and not even a hypergraph.
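The “edges in, none out” criterion can be checked directly on any causal graph given as a directed edge list. Here is a minimal Python sketch on a hypothetical toy graph; counting the inward edges gives a crude proxy for the mass falling into the “black hole”.

```python
# A "black hole" region of a causal graph: causal edges go in, none come out.

def region_edge_counts(edges, region):
    """(edges into region, edges out of region) for a directed edge list."""
    inward = sum(1 for a, b in edges if a not in region and b in region)
    outward = sum(1 for a, b in edges if a in region and b not in region)
    return inward, outward

# Hypothetical toy causal graph: event 5 absorbs causal edges but emits none.
edges = [(1, 2), (2, 3), (1, 4), (4, 5), (3, 5), (5, 5)]
hole = {5}
inward, outward = region_edge_counts(edges, hole)
print(inward, outward)  # inward > 0, outward == 0: a causal horizon
```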

What we now need to do is to find more realistic examples. Then what we’re expecting is that it’ll actually be fairly obvious that the black hole only has certain properties. The mass will presumably relate to the number of causal edges that go into the black hole. And now that we have an idea what angular momentum is, we should be able to identify how much of that is going in as well. And maybe we’ll be able to see that there’s a limit on the amount of angular momentum a black hole of a given mass can have (as there seems to be in general relativity).

Some features of black holes we should be able to see by looking at ordinary spacetime causal graphs. But to understand Hawking radiation we’re undoubtedly also going to have to look at multiway causal graphs. And we’re hoping that we’ll actually be able to explicitly see the presence of both the causal event horizon and the entanglement event horizon—so that we’ll be able to trace the fate of quantum information in the “life cycle” of the black hole.

All the Spookiness of Quantum Mechanics

Quantum mechanics is notorious for yielding strange phenomena that can be computed within its formalism, but which seem essentially impossible to account for in any other way. Our models, however, finally provide a definite suggestion for what is “underneath” quantum mechanics—and from our models we’ve already been able to derive many of the most prominent phenomena in quantum mechanics.

But there are plenty more phenomena to consider, and we’re planning to look at this in working sessions starting later this week. One notable phenomenon that we’ll be looking at is the violation of Bell’s inequality—which is often said to “prove” that no “deterministic” theory can reproduce the predictions of quantum mechanics. Of course, our theory isn’t “deterministic” in the usual sense. Yes, the whole multiway graph is entirely determined by the underlying rule. But what we observe depends on measurements that sample collections of branches determined by the quantum observation frames we choose.

But we’d still like to see explicitly how Bell’s inequality is violated—and in fact we suspect that in our multiway graph formalism it’ll be much more straightforward to see how this and its various generalizations work. But we’ll see.
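For reference, the quantum violation itself is a one-line computation. Here is a Python check that the singlet-state correlation E(a, b) = −cos(a − b), at the standard measurement angles, pushes the CHSH combination to 2√2, past the bound of 2 obeyed by local classical models:

```python
import math

# CHSH check with the quantum singlet-state correlation E(a, b) = -cos(a - b).

def E(a, b):
    return -math.cos(a - b)

a, a2 = 0.0, math.pi / 2               # Alice's two measurement angles
b, b2 = math.pi / 4, 3 * math.pi / 4   # Bob's two measurement angles

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)  # 2*sqrt(2) ~ 2.828, violating the classical CHSH bound of 2
```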

In Q&A sessions that we’ve done, and messages that we’ve received, there’ve been many requests to reproduce a classic quantum result: interference in the double-slit experiment. A few months ago, I would have been very pessimistic about being able to do this. I would have thought that first we’d have to understand exactly what particles are, and then we’d only slowly be able to build up something we could consider a realistic “double slit”.

But one of the many surprises has been that quantum phenomena seem much more robust than I expected—and it seems possible to reproduce their essential features without putting all the details in. So maybe we’ll be able, for example, just to look at a multiway system generated by a string substitution system, and already be able to see something like interference fringes in an idealized double-slit experiment. We’ll see.

When we’re talking about quantum mechanics, many important practical phenomena arise from looking at bound states where for example some particle is restricted to a limited region (like an electron in a hydrogen atom), and we’re interested in various time-repeating eigenstates. My first instinct—as in the case of the double-slit experiment—was to think that studying bound states in our models would be very complicated. After all, at some level, bound states are a limiting idealization, and even in quantum field theory (or with quantum mechanics formulated in terms of path integrals) they’re already a complicated concept.

But actually, it seems as if it may be possible to capture the essence of what’s going on in bound states with even very simple toy examples in our models—in which for instance there are just cycles in the multiway graph. But we need to see just how this works, and how far we can get, say in reproducing the features of the harmonic oscillator in quantum mechanics.
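Here is a minimal Python sketch of that idea: build the multiway state graph of a toy string substitution system and look for cycles. The rules here are purely illustrative, chosen only so that a cycle appears.

```python
# Build the multiway state graph of a string substitution system and
# check for cycles -- a toy analog of the "bound state" picture above.

def multiway_edges(initial, rules, steps):
    """All state-to-state transitions reachable from initial in <= steps."""
    frontier, seen, edges = {initial}, {initial}, set()
    for _ in range(steps):
        nxt = set()
        for s in frontier:
            for lhs, rhs in rules:
                i = s.find(lhs)
                while i != -1:
                    t = s[:i] + rhs + s[i + len(lhs):]
                    edges.add((s, t))
                    if t not in seen:
                        seen.add(t)
                        nxt.add(t)
                    i = s.find(lhs, i + 1)
        frontier = nxt
    return edges

rules = [("AB", "BA"), ("BA", "AB")]
edges = multiway_edges("AB", rules, 4)
has_cycle = any((b, a) in edges for a, b in edges)
print(sorted(edges), has_cycle)  # the two states form a 2-cycle
```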

In traditional treatments of quantum mechanics, the harmonic oscillator is the kind of thing one starts with. But in our models its properties have to be emergent, and it’ll be interesting to see just how “close to the foundations” or how generic their derivation will be able to be.

People Do Care about Thermodynamics

Understanding the Second Law of thermodynamics was one of the things that first got me interested in fundamental physics, nearly 50 years ago. And I was very pleased that by the 1990s I thought I finally understood how the Second Law works: basically it’s a consequence of computational irreducibility, and the fact that even if the underlying rules for a system are reversible, they can still so “encrypt” information about the initial conditions that no computationally limited observer can expect to recover it.

This phenomenon is ultimately crucial to the derivation of continuum behavior in our models—both for spacetime and for quantum mechanics. (It’s also critical to my old derivation of fluid behavior from idealized discrete underlying molecules.)

The Second Law was big news at the end of the 1800s and into the early 1900s. But I have to say that I thought people had (unfortunately) by now rather lost interest in it, and it had just become one of those things that everyone implicitly assumes is true, even though if pressed they’re not quite sure why. So in the last couple of weeks I’ve been surprised to see so many people asking us whether we’ve managed to understand the Second Law.

Well, the answer is “Yes!”. And in a sense the understanding is at an even more fundamental level than our models: it’s generic to the whole idea of computational models that follow the Principle of Computational Equivalence and exhibit computational irreducibility. Or, put another way, once everything is considered to be computational, including both systems and observers, the Second Law is basically inevitable.
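The mechanism is easy to demonstrate in a toy setting. Here is a Python sketch using a second-order reversible cellular automaton (the “rule 122R” construction discussed in A New Kind of Science; width, steps and initial condition here are illustrative): the evolution is exactly reversible, yet a coarse-grained block entropy still grows from a simple initial condition.

```python
import math

# "Rule 122R": a second-order reversible cellular automaton. The rule exactly
# preserves information (running it backward recovers the initial state), yet
# coarse-grained block entropy grows -- the Second Law mechanism in miniature.

WIDTH, STEPS = 101, 300

def rule122(l, c, r):
    return (122 >> (l * 4 + c * 2 + r)) & 1

def evolve(prev, cur):
    """Second-order step: new cell = rule(neighborhood) XOR cell two steps back."""
    n = len(cur)
    return [rule122(cur[(i - 1) % n], cur[i], cur[(i + 1) % n]) ^ prev[i]
            for i in range(n)]

def block_entropy(state, k=4):
    """Shannon entropy of the distribution of length-k blocks (circular)."""
    n = len(state)
    counts = {}
    for i in range(n):
        block = tuple(state[(i + j) % n] for j in range(k))
        counts[block] = counts.get(block, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

prev = [0] * WIDTH
cur = [0] * WIDTH
for i in range(WIDTH // 2 - 3, WIDTH // 2 + 4):
    cur[i] = 1  # simple initial condition: a small block of cells
history = [prev, cur]
for _ in range(STEPS):
    history.append(evolve(history[-2], history[-1]))

# Reversibility: swap the last two states and run the same rule backward.
b_prev, b_cur = history[-1], history[-2]
for _ in range(STEPS):
    b_prev, b_cur = b_cur, evolve(b_prev, b_cur)

print(b_cur == history[0])                                    # exactly reversible
print(block_entropy(history[1]), block_entropy(history[-1]))  # entropy grows
```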

But just where are its limits, and what are the precise mathematical conditions for its validity? And how, for example, does it relate in detail to gravity? (Presumably the reference frames that can be set up are limited by the computational capabilities of observers, which must be compared to the computations being done in the actual evolution of spacetime.) These are things I’ve long wanted to clarify, and I’m hoping we’ll look at these things soon.

What about Peer Review and All That?

There’s a lot in our Physics Project. New ideas. New methods. New conclusions. And it’s not easy to deliver such a thing to the world. We’ve worked hard the last few months to write the best expositions we can, and to make software tools that let anyone reproduce—and extend—everything we’ve done. But the fact remains that to seriously absorb what we just put into the world is going to take significant effort.

It isn’t the way science usually works. Most of the time, progress is slow, with new results trickling out, and consensus about them gradually forming. And in fact—until a few months ago—that’s exactly how I expected things would go with our Physics Project. But—as I explained in my announcement—that’s not how it worked out. Because, to my great surprise, once we started seriously working on the ideas I originally hatched 30 years ago we suddenly discovered that we could make dramatic progress.

And even though we were keen to open the project up, the things we discovered—together with their background ideas and methods—are a lot to explain, and, for example, fill well over 800 pages.

But how does that fit into the normal, academic way of doing science? It’s not a great fit. When we launched the project two weeks ago, I sent mail to a number of people. A historian of science I’ve known for a long time responded:

Please remember as you go forward that, many protestations to the contrary, most scientists hate originality, which feels strange, uncomfortable, and baffling. They like novelty well within the boundaries of what they’re doing and the approach that they’re taking, but originality is harder for them to grasp. Therefore expect opposition based on incomprehension rather than reasoned disagreement. Hold fast.

My knowledge of history, and my own past experiences, tell me that there’s a lot of truth to this. Although I’m happy to say that in the case of our project it seems like there are actually a very good number of scientists who are enthusiastically making the effort to understand what we’ve done.

Of course, there are people who think “This isn’t the way science usually works; something must be wrong”. And the biggest focus seems to be around “What about peer review?”. Well, that’s an interesting question.

What’s ultimately the point of peer review? Basically it’s that people want external certification that something is correct—before they go to the effort of understanding it themselves, or start building on it. And that’s a reasonable thing to want. But how should it actually work?

When I used to publish academic papers in the 1970s and early 1980s I quickly discovered something disappointing about actual peer review—that closely mirrors what my historian-of-science friend said. If a paper of mine was novel though not particularly original, it sailed right through peer review. But if it was actually original (and those are the papers that have had the most impact in the end) it essentially always ran into trouble with peer review.

I think there’s also always been skullduggery with anonymous peer review—often beyond my simplified “natural selection” model: “If paper cites reviewer, accept; otherwise reject”. But particularly for people outside of science, it’s convenient to at least imagine that there’s some perfect standard of academic validity out there.

I haven’t published an ordinary academic paper since 1986, but I was rather excited two weeks ago to upload my first-ever paper to arXiv. I was surprised it took about a week to get posted, and I was thinking it might have run into some filter that blocks any paper about a fundamental theory of physics—on the “Bayesian” grounds that there’s never been a meaningful paper with such a claim during the time arXiv has been operating. But my friend Paul Ginsparg (founder of arXiv) tells me there’s nothing like that in place; it’s just a question of deciding on categories and handling hundreds of megabytes of data.

OK, but is there a good way to achieve the objectives of peer review for our project? I was hoping I could submit my paper to some academic journal and just leave it to the journal to run the peer-review process. But on its own, that doesn’t seem like it could work. In particular, it’s hard to imagine that, in the normal course of things, serious traditional peer review of a 450-page document like this could get done in less than several years.

So over the past week we’ve been thinking about additional, faster things we can do (and, yes, we’ve also been talking to people to get “peer reviews” of possible peer-review processes, and even going to another meta level). Here’s what we’ve come up with. It’s based on the increasingly popular concept of “post-publication peer review”. The idea is to have an open process, where people comment on our papers, and all relevant comments and comments-on-comments, etc. are openly available on the web. We’re trying—albeit imperfectly—to get the best aspects of peer review, and to do it as quickly as possible.

Among other things, what we’re hoping is that people will say what they can “certify” and what they cannot: “I understand this, but don’t have anything to say about that”. We’re fully expecting people will sometimes say “I don’t understand this” or “I don’t think that is correct”. Then it’s up to us to answer, and hopefully before long consensus will be reached. No doubt people will point out errors and limitations (including “you should also refer to so-and-so”)—and we look forward to using this input to make everything as good as possible. (Thanks, by the way, to those who’ve already pointed out typos and other mistakes; much appreciated, and hopefully now all fixed.)

One challenge about open post-publication peer review is who will review the reviewers. Here’s what we’ve set up. First, every reviewer gives information about themselves, and we validate that the person posting is who they say they are. Then we ask the reviewer to fill out certain computable facts about themselves. (Academic affiliation? PhD in physics? Something else? Professor? Published on arXiv? ISI highly cited author? Etc.) Then when people look at the reviews, they can filter by these computable facts, essentially deciding for themselves how they want to “review the reviewers”.
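The filtering step is straightforward to sketch. Here is a Python illustration; the reviewer records and field names are hypothetical, not the project's actual schema.

```python
# "Review the reviewers": filter reviews by computable facts about reviewers.
# The records and field names below are hypothetical illustrations.

reviewers = [
    {"name": "A", "phd_physics": True,  "professor": True,  "arxiv_author": True},
    {"name": "B", "phd_physics": True,  "professor": False, "arxiv_author": True},
    {"name": "C", "phd_physics": False, "professor": False, "arxiv_author": False},
]

def filter_reviewers(records, **required):
    """Keep records whose computable facts match all required values."""
    return [r for r in records if all(r.get(k) == v for k, v in required.items())]

print([r["name"] for r in filter_reviewers(reviewers, phd_physics=True)])
print([r["name"] for r in filter_reviewers(reviewers, professor=True)])
```

Readers then choose their own filters, in effect deciding for themselves whose reviews to weight.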

I’m optimistic that this will work well, and will perhaps provide a model for review processes for other things. And as I write this, I can’t help noticing that it’s rather closely related to work we’ve done on validating facts for computable contracts, as well as to the ideas that came up in my testimony last summer for the US Senate about “ranking providers” for automated content selection on the internet.

Submit a Peer Review

Other Things

The Project’s Twitter

There’s a lot going on with the Wolfram Physics Project, and we’re expecting much more, particularly as an increasing number of other people get involved. I’m hoping I’ll be able to write “progress reports” like this one from time to time, but we’re planning on consistently using the new Twitter feed for the project to give specific, timely updates:

@wolframphysics

Please follow us! And send us updates about what you’re doing in connection with the Wolfram Physics Project, so we can post about it.

Wolfram Physics on Twitter

Can We Explain the Project to Kids?

The way I know I really understand something is when I can explain it absolutely from the ground up. So one of the things I was pleased to do a week or so ago was to try to explain our fundamental theory of physics on a livestream aimed at kids, assuming essentially no prior knowledge.

How does one explain discrete space? I decided to start by talking about pixels on a screen. How about networks? Who’s friends with whom. Dimension? Look at 2×2×2… grid graphs. Etc. I thought I managed to get decently far, talking about general relativity, and even quantum mechanics, all, I hope, without relying on more than extremely everyday knowledge.

And particularly since my livestream seemed to get good reviews from both kids and others, I’m planning in the next week or two to put together a written version of this as a kind of “very elementary” introduction to our project.

Length scales

Project Q&A

Thousands of people have been asking us questions about our project. But fortunately, many of the questions have been the same. And over the last couple of weeks we’ve been progressively expanding the Q&A section of the project website to try to address the most common of the questions:

Wolfram Physics Q&A

Visual Gallery

In addition to being (we hope) very interesting from a scientific point of view, our models also produce interesting visual forms. And we’ve started to assemble a “Visual Gallery” of these forms.

They can be screen backgrounds, or Zoom backgrounds. Or they can be turned into stickers or T-shirt designs (or put on mouse pads, if people other than me still use those).

Wolfram Physics Visual Gallery

We’ll be adding lots more items to the Visual Gallery. But it won’t just be pictures. We’ll also be adding 3D geometry for rendering of graphs and hypergraphs.

In principle, this 3D geometry should let one immediately 3D print “universes”. But so far we’ve had difficulty doing this. It seems as if unless we thicken up the connections to the point where they merge into each other, it’s not possible to get enough structural integrity to successfully make a 3D printout with existing technologies. But there’s undoubtedly a solution to this, and we’re hoping someone will figure it out, say using our Wolfram Language computational geometry capabilities.

VR

It’s pretty difficult (at least for me) to “understand” the structure of the graphs and hypergraphs we’re generating. And ever since I started thinking about network models for physics in the 1990s, I’ve wanted to try to use VR to do this. Well, we’re just starting to have a system that lets one interactively manipulate graphs in 3D in VR. We’ll be posting the code soon, and we hope other people will help add features. But it’s getting closer…

Wolfram Physics in VR

It’s an Exciting Time…

This piece is already quite long, but there’s much more I could say. It’s very exciting to be seeing all this activity around our Physics Project, after only two weeks.

There’s a lot to do in the project, and with the project. This is a time of great opportunity, where all sorts of discoveries are ripe to be made. And I’m certainly enjoying trying to figure out more with our models—and trying to understand all sorts of things I’ve wondered about for nearly half a century. But for me it’s been particularly wonderful to see so many other people engaging with the project. I personally think physics is great. And I really love the elegance of what’s emerging from our models. But right now what’s most important to me is what a tremendous pleasure it is to share all this with such a broad spectrum of people.

I’m looking forward to seeing what the next few weeks bring. We’re off to a really great start…

1 comment


    // One way to imagine doing an NP problem in polynomial time is not to use an ordinary Turing machine, but instead to use a “non-deterministic Turing machine” in which there is a tree of possible paths where one can pick any path to follow. Well, our multiway system representing quantum mechanics essentially gives that whole tree (though causal invariance implies that ultimately the branches always merge). //

    This is super interesting to me. I’ve always considered the P=NP problem in terms of infinities, though. P=NP is easily considerable as true for cases where the problem is NP, but the set of “possible answers” is finite – just run that probabilistic answer-checking routine in parallel on all the “possible answers”. For instance, factoring numbers is solvable in polynomial time if you have enough resources to check all possible sequences of “numbers that multiply to less than N” in parallel. It’s just that this is a lot of sequences. No (classical) computer system can actually do this for any N. So you would need an infinite system.

    But it is intriguing to consider that a quantum computer could actually handle “all the threads” simultaneously, and this could definitely explain why. Very interesting.