What Is Consciousness? Some New Perspectives from Our Physics Project

What Is Consciousness?—Visual Summary

“What about Consciousness?”

For years I’ve batted it away. I’ll be talking about my discoveries in the computational universe, and computational irreducibility, and my Principle of Computational Equivalence, and people will ask “So what does this mean about consciousness?” And I’ll say “that’s a slippery topic”. And I’ll start talking about the sequence: life, intelligence, consciousness.

I’ll ask “What is the abstract definition of life?” We know about the case of life on Earth, with all its RNA and proteins and other implementation details. But how do we generalize? What is life generally? And I’ll argue that it’s really just computational sophistication, which the Principle of Computational Equivalence says happens all over the place. Then I’ll talk about intelligence. And I’ll argue it’s the same kind of thing. We know the case of human intelligence. But if we generalize, it’s just computational sophistication—and it’s ubiquitous. And so it’s perfectly reasonable to say that “the weather has a mind of its own”; it just happens to be a mind whose details and “purposes” aren’t aligned with our existing human experience.

I’ve always implicitly assumed that consciousness is just a continuation of the same story: something that, if thought about in enough generality, is just a feature of computational sophistication, and therefore quite ubiquitous. But from our Physics Project—and particularly from thinking about its implications for the foundations of quantum mechanics—I’ve begun to realize that at its core consciousness is actually something rather different. Yes, its implementation involves computational sophistication. But its essence is not so much about what can happen as about having ways to integrate what’s happening to make it somehow coherent and to allow what we might see as “definite thoughts” to be formed about it.

And rather than consciousness being somehow beyond “generalized intelligence” or general computational sophistication, I now instead see it as a kind of “step down”—as something associated with simplified descriptions of the universe based on using only bounded amounts of computation. At the outset, it’s not obvious that a notion of consciousness defined in this way could consistently exist in our universe. And indeed the possibility of it seems to be related to deep features of the formal system that underlies physics.

In the end, there’s a lot going on in the universe that’s in a sense “beyond consciousness”. But the core notion of consciousness is crucial to our whole way of seeing and describing the universe—and at a very fundamental level it’s what makes the universe seem to us to have the kinds of laws and behavior it does.

Consciousness is a topic that’s been discussed and debated for centuries. But the surprise to me is that, with what we’ve learned from exploring the computational universe and especially from our recent Physics Project, there seem to be new perspectives to be had, and most significantly they seem to have the potential to connect questions about consciousness to concrete, formal scientific ideas.

Inevitably the discussion of consciousness—and especially its connection to our new foundations of physics—is quite conceptually complex, and all I’ll try to do here is sketch some preliminary ideas. No doubt quite a bit of what I say can be connected to existing philosophical and other thinking, but so far I’ve only had a chance to explore the ideas themselves, and haven’t yet tried to study their historical context.

Observers and Their Physics

The universe in our models is full of sophisticated computation, all the way down. At the lowest level it’s just a giant collection of “atoms of space”, whose relationships are continually being updated according to a computational rule. And inevitably much of that process is computationally irreducible, in the sense that there’s no general way to “figure out what’s going to happen” except, in effect, by just running each step.

But given that, how come the universe doesn’t just seem to us arbitrarily complex and unpredictable? How come there’s order and regularity that we can perceive in it? There’s still plenty of computational irreducibility. But somehow there are also pockets of reducibility that we manage to leverage to form a simpler description of the world, one that we can successfully and coherently make use of. And a fundamental discovery of our Physics Project is that the two great pillars of twentieth-century physics—general relativity and quantum mechanics—correspond precisely to two such pockets of reducibility.

There’s an immediate analog—that actually ends up being an example of the same fundamental computational phenomenon. Consider a gas, like air. Ultimately the gas consists of lots of molecules bouncing around in a complicated way that’s full of computational irreducibility. But it’s a central fact of statistical mechanics that if we look at the gas on a large scale, we can get a useful description of what it does just in terms of properties like temperature and pressure. And in effect this reflects a pocket of computational reducibility, that allows us to operate without engaging with all the computational irreducibility underneath.

How should we think about this? An idea that will generalize is that as “observers” of the gas, we’re conflating lots of different microscopic configurations of molecules, and just paying attention to overall aggregate properties. In the language of statistical mechanics, it’s effectively a story of “coarse graining”. But within our computational approach, there’s now a clear, computational way to characterize this. At the level of individual molecules there’s an irreducible computation happening. And to “understand what’s going on” the observer is doing a computation. But the crucial point is that if there’s a certain boundedness to that computation then this has immediate consequences for the effective behavior the observer will perceive. And in the case of something like a gas, it turns out to directly imply the Second Law of Thermodynamics.
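To make the role of the computationally bounded, coarse-graining observer concrete, here is a minimal sketch in Python (a toy model supplied purely for illustration, not the Physics Project's actual setup): particles follow simple, exactly reversible dynamics, but an observer who only keeps track of coarse bin counts sees an entropy that rises from near zero toward its maximum of log(10) ≈ 2.3.

```python
# Toy illustration: reversible microscopic dynamics, but a coarse-graining
# observer (who only tracks bin counts) sees entropy increase.
import math
import random

random.seed(0)

N_PARTICLES = 200
BOX = 1.0          # box size
N_BINS = 10        # the observer's coarse graining
STEPS = 201

# A "special" low-entropy initial state: all particles near the left wall.
positions = [random.uniform(0.0, 0.05) for _ in range(N_PARTICLES)]
velocities = [random.uniform(-0.02, 0.02) for _ in range(N_PARTICLES)]

def coarse_entropy(xs):
    """Shannon entropy of the observer's bin-count description."""
    counts = [0] * N_BINS
    for x in xs:
        counts[min(int(x / BOX * N_BINS), N_BINS - 1)] += 1
    probs = [c / len(xs) for c in counts if c > 0]
    return -sum(p * math.log(p) for p in probs)

for step in range(STEPS):
    # Reversible microscopic rule: free motion with elastic wall bounces.
    for i in range(N_PARTICLES):
        x = positions[i] + velocities[i]
        if x < 0.0 or x > BOX:
            velocities[i] = -velocities[i]
            x = positions[i] + velocities[i]
        positions[i] = x
    if step % 50 == 0:
        print(f"step {step:3d}  coarse-grained entropy = {coarse_entropy(positions):.3f}")
```

Nothing about the microscopic rule here is irreversible; the apparent Second Law behavior lives entirely in the bounded description the observer chooses to keep.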

In the past there’s been a certain amount of mystery around the origin and validity of the Second Law. But now we can see it as a consequence of the interplay between underlying computational irreducibility and the computational boundedness of observers. If the observer kept track of all the computationally irreducible motions of individual molecules, they wouldn’t see Second Law behavior. The Second Law depends on a pocket of computational reducibility that in effect emerges only when there’s a constraint on the observer that amounts to the requirement that the observer has a “coherent view” of what’s going on.

So what about physical space? The traditional view had been that space was something that could to a large extent just be described as a coherent mathematical object. But in our models of physics, space is actually made of an immense number of discrete elements whose pattern of interconnections evolves in a complex and computationally irreducible way. But it’s much like with the gas molecules. If an observer is going to form a coherent view of what’s going on, and if they have bounded computational capabilities, then this puts definite constraints on what behavior they will perceive. And it turns out that those constraints yield exactly relativity.

In other words, for the “atoms of space”, relativity is the result of the interplay between underlying computational irreducibility and the requirement that the observer has a coherent view of what’s going on.

It may be helpful to fill in a little more of the technical details. Our underlying theory basically says that each elementary element of space follows computational rules that will yield computationally irreducible behavior. But if that was all there was to it, the universe would seem like a completely incoherent place, with every part of it doing irreducibly unpredictable things.

But imagine there’s an observer who perceives coherence in the universe. And who, for example, views there as being a definite coherent notion of “space”. What can we say about such an observer? The first thing is that since our model is supposed to describe everything in the universe, it must in particular include our observer. The observer must be an embedded part of the system—made up of the same atoms of space, and following the same rules, as everything else.

And there’s an immediate consequence to this. From “inside” the system there are only certain things about the system that the observer can perceive. Let’s say, for example, that in the whole universe there’s only one point at which anything is updated at any given time, but that that “update point” zips around the universe (in “Turing machine style”), sometimes updating a piece of the observer, and sometimes updating something they were observing. If one traces through scenarios like this, one realizes that from “inside the system” the only thing the observer can ever perceive is causal relationships between events.

They can’t tell “specifically when” any given event happens; all they can tell is what event has to happen before what other one, or in other words, what the causal relationships between events are. And this is the beginning of what makes relativity inevitable in our models.
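As a very rough sketch of that bookkeeping (the update rule below is made up purely for illustration, and is not anything from the actual models), one can let a single update point wander over a tape of cells and record, for each event, which earlier events last wrote the cells it reads. The resulting causal edges are all that an embedded observer could ever reconstruct; the absolute "clock time" of each event never enters.

```python
# Toy construction of a causal graph from a single wandering update point.
tape = [0, 1, 1, 0, 1, 0, 1, 0]
last_writer = {i: None for i in range(len(tape))}   # cell index -> event that last wrote it
causal_edges = set()                                # (earlier event, later event)

head = 0
for event in range(20):
    left, right = head, (head + 1) % len(tape)
    # This event "reads" two neighboring cells; record who last wrote them.
    for cell in (left, right):
        if last_writer[cell] is not None:
            causal_edges.add((last_writer[cell], event))
    # ...then "writes" one of them with a simple, arbitrary made-up rule,
    tape[left] = tape[left] ^ tape[right]
    last_writer[left] = event
    # ...and the lone update point hops to its next location.
    head = (head + 1 + tape[right]) % len(tape)

print("causal edges (cause -> effect):")
print(sorted(causal_edges))
```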

But there are two other pieces. If the observer is going to have a coherent description of “space” they can’t in effect be tracking each atom separately; they’ll have to fit them into some overall framework, say by assigning each of them particular “coordinates”, or, in the language of relativity, defining a “reference frame” that conflates many different points in space. But if the observer is computationally bounded, then this puts constraints on the structure of the reference frame: it can’t for example be so wild that it separately traces the computationally irreducible behavior of individual atoms of space.

But let’s say an observer has successfully picked some reference frame. What’s to say that as the universe evolves it’s still possible to consistently maintain that reference frame? Well, this relies on a fundamental property that we believe either directly or effectively defines the operation of our universe: what we call “causal invariance”. The underlying rules just describe possible ways that the connections between atoms of space can be updated. But causal invariance implies that whatever actual sequence of updatings is used, there must always be the same graph of causal relationships.

And it’s this that gives observers the ability to pick different reference frames, and still have the same consistent and coherent perception of the behavior of the universe. And in the end, we have a definite result: that if there’s underlying computational irreducibility—plus causal invariance—then any observer who forms their perception of the universe in a computationally bounded way must inevitably perceive the universe to follow the laws of general relativity.

But—much like with the Second Law—this conclusion relies on having an observer who forms a coherent perception of the universe. If the observer could separately track every atom of space they wouldn’t “see general relativity”; that only emerges for an observer who forms a coherent perception of the universe.

The Quantum Observer

OK, so what about quantum mechanics? How does that relate to observers? The story is actually surprisingly similar to both the Second Law and general relativity: quantum mechanics is again something that emerges as a result of trying to form a coherent perception of the universe.

In ordinary classical physics one considers everything that happens in the universe to happen in a definite way, in effect defining a single thread of history. But the essence of quantum mechanics is that actually there are many threads of history that are followed. And an important feature of our models is that this is inevitable.

The underlying rules define how local patterns of connections between atoms of space should be updated. But in the hypergraph of connections that represents the universe there will in general be many different places where the rules can be applied. And if we trace all the possibilities we get a multiway graph that includes many possible threads of history, sometimes branching and sometimes merging.
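Here is a minimal sketch of a multiway system, using string rewriting as a stand-in for hypergraph rewriting (the rule is a toy one chosen for illustration, not a candidate rule for the universe). At each step the rule is applied at every possible position, so histories branch, and different branches sometimes merge by reaching the same string.

```python
# Toy multiway system based on string rewriting.
RULES = [("A", "AB"), ("B", "A")]        # a made-up toy rule

def successors(state):
    """All strings reachable by one rule application at one position."""
    out = set()
    for lhs, rhs in RULES:
        i = state.find(lhs)
        while i >= 0:
            out.add(state[:i] + rhs + state[i + len(lhs):])
            i = state.find(lhs, i + 1)
    return out

# Build the multiway graph breadth-first for a few steps.
edges, frontier = set(), {"AB"}
for _ in range(4):
    next_frontier = set()
    for s in frontier:
        for t in successors(s):
            edges.add((s, t))
            next_frontier.add(t)
    frontier = next_frontier

for s, t in sorted(edges):
    print(f"{s} -> {t}")
```

Running this shows, for example, that "ABB" and "AA" both lead to "AAB" and "ABA": exactly the kind of branching and merging of threads of history that the multiway graph records.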

So how will an observer perceive all this? The crucial point is that the observer is themselves part of this multiway system. So in other words, if the universe is branching, so is the observer. And in essence the question becomes how a “branching brain” will perceive a branching universe.

It’s fairly easy to imagine how an observer who is “spatially large” compared to individual molecules in a gas—or atoms of space—could conflate their view of these elements so as to perceive only some aggregate property. Well, it seems like very much the same kind of thing is going on with observers in quantum mechanics. It’s just that instead of being extended in physical space, they’re extended in what we call branchial space.

Consider a multiway graph representing possible histories for a system. Now imagine slicing through this graph at a particular level that in effect corresponds to a particular time. In that slice there will be a certain set of nodes of the multiway graph, representing possible states of the system. And the structure of the multiway graph then defines relationships between these states (say through common ancestry). And in a large-scale limit we can say that the states are laid out in branchial space.

In the language of quantum mechanics, the geometry of branchial space in effect defines a map of entanglements between quantum states, and coordinates in branchial space are like phases of quantum amplitudes. In the evolution of a quantum system, one might start from a certain bundle of quantum states, then follow their threads of history, looking at where in branchial space they go.
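Continuing with the same toy string-rewriting system (again, purely illustrative rather than the actual physics), one can sketch how a slice of the multiway graph yields a branchial-style graph: take the states reached at a given step and connect any two that branched from a common immediate ancestor.

```python
# Toy "branchial slice" of the string-rewriting multiway system.
from itertools import combinations

RULES = [("A", "AB"), ("B", "A")]        # same toy rule as in the sketch above

def successors(state):
    out = set()
    for lhs, rhs in RULES:
        i = state.find(lhs)
        while i >= 0:
            out.add(state[:i] + rhs + state[i + len(lhs):])
            i = state.find(lhs, i + 1)
    return out

# Evolve for a few steps, keeping each generation of states separately.
levels = [{"AB"}]
for _ in range(4):
    levels.append({t for s in levels[-1] for t in successors(s)})

# Branchial edges at a step: pairs of states sharing an immediate common ancestor.
step = 3
branchial_edges = set()
for parent in levels[step - 1]:
    branchial_edges.update(combinations(sorted(successors(parent)), 2))

print("states at step", step, ":", sorted(levels[step]))
print("branchial edges:", sorted(branchial_edges))
```

In this toy setting the resulting graph is only a rough stand-in for branchial space, but it shows how "nearness" between states comes from shared ancestry in the multiway evolution.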

But what would a quantum observer perceive about this? Even if they didn’t start that way, over time a quantum observer will inevitably become spread out in branchial space. And so they’ll always end up sampling a whole region in branchial space, or a whole bundle of “threads of history” in the multiway graph.

What will they make of them? If they considered each of them separately no coherent picture would emerge, not least since the underlying evolution of individual threads of history can be expected to be computationally irreducible. But what if the observer just defines their way of viewing things to be one that systematically organizes different threads of history, say by conflating “computationally nearby” ones? It’s similar to setting up a reference frame in relativity, except that now the coherent representation that this “quantum frame” defines is of branchial space rather than physical space.

But what will this coherent representation be like? Well, it seems to be exactly quantum mechanics as it was developed over the past century. In other words, just like general relativity emerges as an aggregate description of physical space formed by a computationally bounded observer, so quantum mechanics emerges as an aggregate description of branchial space.

Does the observer “create” the quantum mechanics? In some sense, yes. Just as in the spacetime case, the multiway graph has all sorts of computationally irreducible things going on. But if there’s an observer with a coherent description of what’s going on, then their description must follow the laws of quantum mechanics. Of course, there are lots of other things going on too—but they don’t fit into this coherent description.

OK, but let’s say that we have an observer who’s set up a quantum frame that conflates different threads of history to get a coherent description of what’s going on. How will their description correlate with what another observer—with a different quantum frame—would perceive? In the traditional formalism of quantum mechanics it’s always been difficult to explain why different observers—making different measurements—still fundamentally perceive the universe to be working the same.

In our model, there’s a clear answer: just like in the spacetime case, if the underlying rules show causal invariance, then regardless of the frame one uses, the basic perceived behavior will always be the same. Or, in other words, causal invariance guarantees the consistency of the behavior deduced by different observers.

There are many technical details to this. The traditional formalism of quantum mechanics has two separate parts. First, the time evolution of quantum amplitudes, and second, the process of measurement. In our models, there’s a very beautiful correspondence between the phenomenon of motion in space and the evolution of quantum amplitudes. In essence, both are associated with the deflection of (geodesic) paths by the presence of energy-momentum. But in the case of motion this deflection (that we identify as the effect of gravity) happens in physical space, while in the quantum case the deflection (that we identify as the phase change specified by the path integral) happens in branchial space. (In other words, the Feynman path integral is basically just the direct analog in branchial space of the Einstein equations in physical space.)

OK, so what about quantum measurement? Doing a quantum measurement involves somehow taking many threads of history (corresponding to a superposition of many quantum states) and effectively reducing them to a single thread that coherently represents the “outcome”. A quantum frame defines a way to do this—in effect specifying the pattern of threads of history that should be conflated. In and of itself, a quantum frame—like a relativistic reference frame—isn’t a physical thing; it just defines a way of describing what’s going on.

But as a way of probing possible coherent representations that an observer can form, one can consider what happens if one formally conflates things according to a particular quantum frame. In an analogy where the multiway graph defines inferences between propositions in a formal system, conflating things is like “performing certain completions”. And each completion is then like an elementary step in the act of measurement. And by looking at the effect of all necessary completions one gets the “Completion Interpretation of Quantum Mechanics” suggested by Jonathan Gorard.

Assuming that the underlying rule for the universe ultimately shows causal invariance, doing these completions is never fundamentally necessary, because different threads of history will always eventually give the same results for what can be perceived within the system. But if we want to get a “possible snapshot” of what the system is doing, we can pick a quantum frame and formally do the completions it defines.

Doing this doesn’t actually “change the system” in a way that we would “see from outside”. It’s only that we’re in effect “doing a formal projection” to see how things would be perceived by an observer who’s picked a particular quantum frame. And if the observer is going to have a coherent perception of what’s going on, they in effect have to have picked some specific quantum frame. But then from the “point of view of the observer” the completions associated with that frame in some sense “seem real” because they’re the way the observer is accessing what’s going on.

Or, in other words, the way a computationally bounded “branching brain” can have a coherent perception of a “branching universe” is by looking at things in terms of quantum frames and completions, and effectively picking off a computationally reducible slice of the whole computationally irreducible evolution of the universe—where it then turns out that the slice must necessarily follow the laws of quantum mechanics.

So, once again, for a computationally bounded observer to get a coherent perception of the universe—with all its underlying computational irreducibility—there’s a strong constraint on what that perception can be. And what we’ve discovered is that it turns out to basically have to follow the two great core theories of twentieth-century physics: general relativity and quantum mechanics.

It’s not immediately obvious that there has to be any way to get a coherent perception of the universe. But what we now know is that if there is, it essentially forces specific major results about physics. And, of course, if there wasn’t any way to get a coherent perception of the universe there wouldn’t really be systematic overall laws, or, for that matter, anything like physics, or science as we know it.

So, What Is Consciousness?

What’s special about the way we humans experience the world? At some level, the very fact that we even have a notion of “experiencing” it at all is special. The world is doing what it does, with all sorts of computational irreducibility. But somehow even with the computationally bounded resources of our brains (or minds) we’re able to form some kind of coherent model of what’s going on, so that, in a sense, we’re able to meaningfully “form coherent thoughts” about the universe. And just as we can form coherent thoughts about the universe, so also we can form coherent thoughts about that small part of the universe that corresponds to our brains—or to the computations that represent the operation of our minds.

But what does it mean to say that we “form coherent thoughts”? There’s a general notion of computation, which the Principle of Computational Equivalence tells us is quite ubiquitous. But it seems that what it means to “form coherent thoughts” is that computations are being “concentrated down” to the point where a coherent stream of “definite thoughts” can be identified in them.

At the outset it’s certainly not obvious that our brains—with their billions of neurons operating in parallel—should achieve anything like this. But in fact it seems that our brains have a quite specific neural architecture—presumably produced by biological evolution—that in effect attempts to “integrate and sequentialize” everything. In our cortex we bring together the sensory data we collect, then process it with a definite thread of attention. And indeed in medical settings it’s observed deficits in this integration and sequentialization that are normally used to identify reduced levels of consciousness. There may still be neurons firing, but without integration and sequentialization there doesn’t really seem to be what we normally consider consciousness.

These are biological details. But they seem to point to a fundamental feature of consciousness. Consciousness is not about the general computation that brains—or, for that matter, many other things—can do. It’s about the particular feature of our brains that causes us to have a coherent thread of experience.

But what we have now realized is that the notion of having a coherent thread of experience has deep consequences that far transcend the details of brains or biology. Because in particular what we’ve seen is that it defines the laws of physics, or at least what we consider the laws of physics to be.

Consciousness—like intelligence—is something of which we only have a clear sense in the single case of humans. But just as we’ve seen that the notion of intelligence can be generalized to the notion of arbitrary sophisticated computation, so now it seems that the notion of consciousness can be generalized to the notion of forming a coherent thread of representation for computations.

Operationally, there’s potentially a rather straightforward way to think about this, though it depends on our recent understanding of the concept of time. In the past, time in fundamental physics was usually viewed as being another dimension, much like space. But in our models of fundamental physics, time is something quite different from space. Space corresponds to the hypergraph of connections between the elements that we can consider as “atoms of space”. But time is instead associated with the inexorable and irreducible computational process of repeatedly updating these connections in all possible ways.

There are definite causal relationships between these updating events (ultimately defined by the multiway causal graph), but one can think of many of the events as happening “in parallel” in different parts of space or on different threads of history. But this kind of parallelism is in a sense antithetical to the concept of a coherent thread of experience.

And as we’ve discussed above, the formalism of physics—whether reference frames in relativity or quantum mechanics—is specifically set up to conflate things to the point where there is a single thread of evolution in time.

So one way to think about this is that we’re setting things up so we only have to do sequential computation, like a Turing machine. We don’t have multiple elements getting updated in parallel like in a cellular automaton, and we don’t have multiple threads of history like in a multiway (or nondeterministic) Turing machine.

The operation of the universe may be fundamentally parallel, but our “parsing” and “experience” of it is somehow sequential. As we’ve discussed above, it’s not obvious that such a “sequentialization” would be consistent. But if it’s done with frames and so on, the interplay between causal invariance and underlying computational irreducibility ensures that it will be—and that the behavior of the universe that we’ll perceive will follow the core features of twentieth-century physics, namely general relativity and quantum mechanics.

But do we really “sequentialize” everything? Experience with artificial neural networks seems to give us a fairly good sense of the basic operation of brains. And, yes, something like initial processing of visual scenes is definitely handled in parallel. But the closer we get to things we might realistically describe as “thoughts” the more sequential things seem to get. And a notable feature is that what seems to be our richest way to communicate thoughts, namely language, is decidedly sequential.

When people talk about consciousness, something often mentioned is “self-awareness” or the ability to “think about one’s own processes of thinking”. Without the conceptual framework of computation, this might seem quite mysterious. But the idea of universal computation instead makes it seem almost inevitable. The whole point of a universal computer is that it can be made to emulate any computational system—even itself. And that is why, for example, we can write the evaluator for Wolfram Language in Wolfram Language itself.

The Principle of Computational Equivalence implies that universal computation is ubiquitous, and that both brains and minds, as well as the universe at large, have it. Yes, the emulated version of something will usually take more time to execute than the original. But the point is that the emulation is possible.

But consider a mind in effect thinking about itself. When a mind thinks about the world at large, its process of perception involves essentially making a model of what’s out there (and, as we’ve discussed, typically a sequentialized one). So when the mind thinks about itself, it will again make a model. We may start by making models of the “outside world”. But then we’ll recursively make models of the models we make, perhaps barely distinguishing between “raw material” that comes from “inside” and “outside”.

The connection between sequentialization and consciousness gives one a way to understand why there can be different consciousnesses, say associated with different people, that have different “experiences”. Essentially it’s just that one can pick different frames and so on that lead to different “sequentialized” accounts of what’s going on.

Why should they end up being consistent, and eventually agree on an objective reality? Essentially for the same reason that relativity works: causal invariance implies that whatever frame one picks, the causal graph that’s eventually traced out is always the same.

If it wasn’t for all the interactions continually going on in the universe, there’d be no reason for the experience of different consciousnesses to get aligned. But the interactions—with their underlying computational irreducibility and overall causal invariance—lead to the consistency that’s needed, and, as we’ve discussed, something else too: particular effective laws of physics, that turn out to be just the relativity and quantum mechanics we know.

Other Consciousnesses

The view of consciousness that we’ve discussed is in a sense focused on the primacy of time: it’s about reducing the “parallelism” associated with space—and branchial space—to allow the formation of a coherent thread of experience, that in effect occurs sequentially in time.

And it’s undoubtedly no coincidence that we humans are in effect well placed in the universe to be able to do this. In large part this has to do with the physical sizes of things—and with the (undoubtedly not coincidental) fact that human scales are intermediate between those at which the effects of either relativity or quantum mechanics become extreme.

Why can we “ignore space” to the point where we can just discuss things happening “wherever” at a sequence of moments in time? Basically it’s because the speed of light is large compared to human scales. In our everyday lives the important parts of our visual environment tend to be at most tens of meters away—so it takes light only tens of nanoseconds to reach us. Yet our brains process information on timescales measured in milliseconds. And this means that as far as our experience is concerned, we can just “combine together” things at different places in space, and consider a sequence of instantaneous states in time.
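The rough arithmetic behind this claim (order-of-magnitude numbers only) is easy to check:

```python
# Order-of-magnitude check: light delay across a visual scene vs. neural timescales.
c = 3.0e8                  # speed of light in m/s
scene_distance = 30.0      # meters to a typical object we're looking at
brain_timescale = 1.0e-3   # roughly a millisecond for neural processing

light_delay = scene_distance / c
print(f"light delay: {light_delay * 1e9:.0f} ns")                               # ~100 ns
print(f"brain timescale / light delay: {brain_timescale / light_delay:,.0f}x")  # ~10,000x
```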

If we were the size of planets, though, this would no longer work. Because—assuming our brains still ran at the same speed—we’d inevitably end up with a fragmented visual experience, that we wouldn’t be able to think about as a single thread about which we can say “this happened, then that happened”.

Even at standard human scale, we’d have somewhat the same experience if we used for example smell as our source of information about the world (as, say, dogs to a large extent do). Because in effect the “speed of smell” is quite slow compared to brain processing. And this would make it much less useful to identify our usual notion of “space” as a coherent concept. So instead we might invent some “other physics”, perhaps labeling things in terms of the paths of air currents that deliver smells to us, then inventing some elaborate gauge-field-like construct to talk about the relations between different paths.

In thinking about our “place in the universe” there’s also another important effect: our brains are small and slow enough that they’re not limited by the speed of light, which is why it’s possible for them to “form coherent thoughts” in the first place. If our brains were the size of planets, it would necessarily take far longer than milliseconds to “come to equilibrium”, so if we insisted on operating on those timescales there’d be no way—at least “from the outside”—to ensure a consistent thread of experience.

From “inside”, though, a planet-size brain might simply assume that it has a consistent thread of experience. And in doing this it would in a sense try to force a different physics on the universe. Would it work? Based on what we currently know, not without at least significantly changing the notions of space and time that we use.

By the way, the situation would be even more extreme if different parts of a brain were separated by permanent event horizons. And it seems as if the only way to maintain a consistent thread of experience in this case would be in effect to “freeze experience” before the event horizons formed.

What if we and our brains were much smaller than they actually are? As it is, our brains may contain perhaps 10^300 atoms of space. But what if they contained, say, only a few hundred? Probably it would be hard to avoid computational irreducibility—and we’d never even be able to imagine that there were overall laws, or generally predictable features of the universe, and we’d never be able to build up the kind of coherent experience needed for our view of consciousness.

What about our extent in branchial space? In effect, our perception that “definite things happen even despite quantum mechanics” implies a conflation of the different threads of history that exist in the region of branchial space that we occupy. But how much effect does this have on the rest of the universe? It’s much like the story with the speed of light, except now what’s relevant is a new quantity that appears in our models: the maximum entanglement speed. And somehow this is large enough that over “everyday scales” in branchial space it’s adequate for us just to pick a quantum frame and treat it as something that can be considered to have a definite state at any given instant in time—so that we can indeed consistently maintain a “single thread of experience”.

OK, so now we have a sense of why with our particular human scale and characteristics our view of consciousness might be possible. But where else might consciousness be possible?

It’s a tricky and challenging thing to ask. To achieve our view of consciousness we need to be able to build up something that “viewed from the inside” represents a coherent thread of experience. But the issue is that we’re in effect “on the outside”. We know about our human thread of experience. And we know about the physics that effectively follows from it. And we can ask how we might experience that if, for example, our sensory systems were different. But to truly “get inside” we have to be able to imagine something very alien. Not only different sensory data and different “patterns of thinking”, but also different implied physics.

An obvious place to start in thinking about “other consciousnesses” is with animals and other organisms. But immediately we have the issue of communication. And it’s a fundamental one. Perhaps one day there’ll be ways for various animals to fluidly express themselves through something like human-relatable videogames. But as of now we have surprisingly little idea how animals “think about things”, and, for example, what their experience of the world is.

We can guess that there will be many differences from ours. At the simplest level, there are organisms that use different sensory modalities to probe the world, whether those be smell, sound, electrical, thermal, pressure, or other. There are “hive mind” organisms, where whatever integrated experience of the world there may be is built up through slow communication between different individuals. There are organisms like plants, which are (quite literally) rooted to one place in space. There are also things like viruses where anything akin to an “integrated thread of experience” can presumably only emerge at the level of something like the progress of an epidemic.

Meanwhile, even in us, there are things like the immune system, which in effect have some kind of “thread of experience” though with rather different input and output than our brains. Even if it seems bizarre to attribute something like consciousness to the immune system, it is interesting to try to imagine what its “implied physics” would be.

One can go even further afield, and think about things like the complete tree of life on Earth, or, for that matter, the geological history of the Earth, or the weather. But how can these have anything like consciousness? The Principle of Computational Equivalence implies that all of them have just the same fundamental computational sophistication as our brains. But, as we have discussed, consciousness seems to require something else as well: a kind of coherent integration and sequentialization.

Take the weather as an example. Yes, there is lots of computational sophistication in the patterns of fluid flow in the atmosphere. But—like fundamental processes in physics—it seems to be happening all over the place, with nothing, it seems, to define anything like a coherent thread of experience.

Coming a little closer to home, we can consider software and AI systems. One might expect that to “achieve consciousness” one would have to go further than ever before and inject some special “human-like spark”. But I suspect that the true story is rather different. If one wants the systems to make the richest use of what the computational universe has to offer, then they should behave a bit like fundamental physics (or nature in general), with all sorts of components and all sorts of computationally irreducible behavior.

But to have something like our view of consciousness requires taking a step down, and effectively forcing simpler behavior in which things are integrated to produce a “sequentialized” experience. And in the end, it may not be that different from picking out of the computational universe of possibilities just what can be expressed in a definite computational language of the kind the Wolfram Language provides.

Again we can ask about the “implied physics” of such a setup. But since the Wolfram Language is modeled on picking out the computational essence of human thinking it’s basically inevitable that its implied physics will be largely the same as the ordinary physics that is derived from ordinary human thinking.

One feature of having a fundamental model for physics is that it “reduces physics to mathematics”, in the sense that it provides a purely formal system that describes the universe. So this raises the question of whether one can think about consciousness in a formal system, like mathematics.

For example, imagine a formal analog of the universe constructed by applying axioms of mathematics. One would build up an elaborate network of theorems that in effect populates “metamathematical space”. This setup leads to some fascinating analogies between physics and metamathematics. The notion of time remains essentially as before, but here it represents the progressive proving of new mathematical theorems.

The analog of our spatial hypergraph is a structure that represents all theorems proved up to a given time. (And there’s also an analog of the multiway graph that yields quantum mechanics, but in which different paths now in effect represent different possible proofs of a theorem.) So what about things like reference frames?

Well, just as in physics, a reference frame is something associated with an observer. But here the observer is observing not physical space, but metamathematical space. And in a sense any given observer is “discovering mathematics in a particular order”. It could be that all the different “points in metamathematical space” (i.e. theorems) are behaving in completely incoherent—and computationally irreducible—ways. But just as in physics, it seems that there’s a certain computational reducibility: causal invariance implies that different reference frames will in a sense ultimately always “see the same mathematics”.

There’s an analog of the speed of light: the speed at which a new theorem can affect theorems that are progressively further away in metamathematical space. And relativistic invariance then becomes the statement that “there’s only one mathematics”—but it can just be explored in different ways.

How does this relate to “mathematical consciousness”? The whole idea of setting up reference frames in effect relies on the notion that one can “sequentialize metamathematical space”. And this in turn relies on a notion of “mathematical perception”. The situation is a bit like in physics. But now one has a formalized mathematician whose mind stretches over a certain region of metamathematical space.

In current formalized approaches to mathematics, a typical “human-scale mathematical theorem” might correspond to perhaps 10^5 lowest-level mathematical propositions. Meanwhile, the “mathematician” might “integrate into their experience” some small fraction of the metamathematical universe (which, for human mathematics, is currently perhaps 3 × 10^6 theorems). And it’s this setup—which amounts to defining a “sequentialized mathematical consciousness”—that means it makes sense to do analysis using reference frames, etc.

So, just as in physics it’s ultimately the characteristics of our consciousness that lead to the physics we attribute to the universe, something similar seems to happen in mathematics.

Clearly we’ve now reached a quite high level of abstraction, so perhaps it’s worth mentioning one more wrinkle that involves an even higher level of abstraction.

We’ve talked about applying a rule to update the abstract structure that represents the universe. And we’ve discussed the fact that the rule can be applied at different places, and on different threads of history. But there’s another freedom: we don’t have to consider a specific rule; we can consider all possible rules.

The result is a rulial multiway graph of possible states of the universe. On different paths, different specific rules are followed. And if you slice across the graph you can get a map of states laid out in rulial space, with different positions corresponding to the outcomes of applying different rules to the universe.
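In the same toy string-rewriting spirit as the earlier sketches (with a rule family made up purely for illustration), a heavily truncated analog of a rulial multiway system just branches over rules as well as over positions:

```python
# Toy rulial multiway system: apply every rule from a small family everywhere it fits.
RULE_FAMILY = [("A", "AB"), ("A", "BA"), ("B", "A"), ("AB", "B")]   # made-up toy rules

def rulial_successors(state):
    out = set()
    for lhs, rhs in RULE_FAMILY:
        i = state.find(lhs)
        while i >= 0:
            out.add(state[:i] + rhs + state[i + len(lhs):])
            i = state.find(lhs, i + 1)
    return out

edges, frontier = set(), {"AB"}
for _ in range(3):
    nxt = set()
    for s in frontier:
        for t in rulial_successors(s):
            edges.add((s, t))
            nxt.add(t)
    frontier = nxt

print(len(edges), "rulial multiway edges after 3 steps; a sample:")
for s, t in sorted(edges)[:8]:
    print(f"  {s} -> {t}")
```

Different paths through this graph correspond, in the toy setting, to describing the evolution with different rules, or with mixtures of them.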

An important fact is then that at the level of the rulial multiway graph there is always causal invariance. So this means that different “rulial reference frames” must always ultimately give equivalent results. Or, in other words, even if one attributes the evolution of the universe to different rules, there is always fundamental equivalence in the results.

In a sense, this can be viewed as a reflection of the Principle of Computational Equivalence and the fundamental idea that the universe is computational. In essence it is saying that since whatever rules one uses to “construct the universe” are almost inevitably computation universal, one can always use them to emulate any other rules.

How does this relate to consciousness? Well, one feature of different rulial reference frames is that they can lead to utterly and incoherently different basic descriptions of the universe.

One of them could be our hypergraph-rewriting-based setup, with a representation of space that corresponds well with what emerged in twentieth-century physics. But another could be a Turing machine, in which one views the updating of the universe as being done by a single head zipping around to different places.

We’ve talked about some possible systems in which consciousness could occur. But one we haven’t yet mentioned—but which has often been considered—is “extraterrestrial intelligences”. Before our Physics Project one might reasonably have assumed that even if there was little else in common with such “alien intelligences”, at least they would be “experiencing the same physics”.

But it’s now clear that this absolutely does not need to be the case. An alien intelligence could perfectly well be experiencing the universe in a different rulial reference frame, utterly incoherent with the one we use.

Is there anything “sequentializable” in a different rulial reference frame? Presumably it’s possible to find at least something sequentializable in any rulial reference frame. But the question of whether the alien intelligence can be thought of as sampling it is a quite different one.

Does there need to be a “sequentializable consciousness” to imply “meaningful laws of physics”? Presumably meaningful laws have to somehow be associated with computational reducibility; certainly that would be true if they were going to be useful to a “computationally bounded” alien intelligence.

But it’s undoubtedly the case that “sequentializability” is not the only way to access computational reducibility. In a mathematical analogy, using sequentializability is a bit like using ordinary mathematical induction. But there are other axiomatic setups (like transfinite induction) that define other ways to do things like prove theorems.

Yes, human-like consciousness might involve sequentializability. But if the general idea of consciousness is to have a way of “experiencing the universe” that accesses computational reducibility then there are no doubt other ways. It’s a kind of “second-order alienness”: in addition to using a different rulial reference frame, it’s using a different scheme for accessing reducibility. And the implied physics of such a setup is likely to be very different from anything we currently think of as physics.

Could we ever expect to identify what some of these “alien possibilities” are? The Principle of Computational Equivalence at least implies that we can in principle expect to be able to set up any possible computational rule. But if we start doing experiments we can’t have an expectation that scientific induction will work, and it is potentially arbitrarily difficult to identify computational reducibility. Yes, we might recognize some form of prediction or regularity that we are familiar with. But to recognize an arbitrary form of computational reducibility in effect relies on some analog of a definition of consciousness, which is what we were looking for in the first place.

What Now?

Consciousness is a difficult topic that has vexed philosophers and others for centuries. But with what we know now from our Physics Project it at least seems possible to cast it in a new light, one much more closely connected to the traditions of formal science. And although I haven’t done it here, I fully anticipate that it’ll be possible to take the ideas I’ve discussed and use them to create formal models that can answer questions about consciousness and capture its connections, particularly to physics.

It’s not clear how much realistic physics there will need to be in models to make them useful. Perhaps one will already be able to get worthwhile information about how branching brains perceive a branching universe by looking at some simple case of a multiway Turing machine. Perhaps some combinator system will already reveal something about how different versions of physics could be set up.

In a sense what’s important is that it seems we may have a realistic way to formalize issues about consciousness, and to turn questions about consciousness into what amount to concrete questions about mathematics, computation, logic or whatever that can be formally and rigorously explored.

But ultimately the way to tether the discussion—and to have it not for example devolve into debates about the meaning of words—is to connect it to actionable issues and applications.

As a first example, let’s discuss distributed computing. How should we think about computations that—like those in our model of physics—take place in parallel across many different elements? Well—except in very simple or structured cases—it’s hard, at least for us humans. And from what we’ve discussed about consciousness, perhaps we can now understand why.

The basic issue is that consciousness seems to be all about forming a definite “sequentialized” thread of experience of the world, which is directly at odds with the idea of parallelism.

So how can we proceed if we need to do distributed computing? Following what we believe about consciousness, I suspect a good approach will be to essentially mirror what we do in parsing the physical universe—and for example to pick reference frames in which to view and integrate the computation.

Distributed computing is difficult enough for us humans to “wrap our brains around”. Multiway or nondeterministic computing tends to be even harder. And once again I suspect this is because of the “limitations imposed by consciousness”. And that the way to handle it will be to use ideas that come from physics, and from the interaction of consciousness with quantum mechanics.

A few years ago at an AI ethics conference I raised the question of what would make us think AIs should have rights and responsibilities. “When they have consciousness!” said an enthusiastic philosopher. Of course, that in turn raises the question of what it would mean for AIs to have consciousness. But the point is that attributing consciousness to something has potential consequences, say for ethics.

And it’s interesting to see how the connection might work. Consider a system that’s doing all sorts of sophisticated and irreducible computation. Already we might reasonably say that the system is showing a generalization of intelligence. But to achieve what we’re viewing as consciousness the system also has to integrate this computation into some kind of single thread of experience.

And somehow it seems much more appropriate to attribute “responsibility” to that single thread that we can somehow “point to” than to a whole incoherent distributed computation. In addition, it seems much “more wrong” to imagine “killing” a single thread, probably because it feels much more unique and special. In a generic computational system there are many ways to “move forward”. But if there’s a single thread of experience it’s more like there’s only one.

And perhaps it’s like the death of a human consciousness. Inevitably the history around that consciousness has affected all sorts of things in the physical universe that will survive its disappearance. But it’s the thread of consciousness that ties it all together that seems significant to us, particularly as we try to make a “summary” of the universe to create our own coherent thread of experience.

And, by the way, when we talk about “explaining AI” what it tends to come down to is being able not just to say “that’s the computation that ran”, but being able to “tell a story” about what happened, which typically begins with making it “sequential enough” that we can relate to it like “another consciousness”.

I’ve often noted that the Principle of Computational Equivalence has important implications for understanding our “place in the universe”. We might have thought that with our life and intelligence there must be something fundamentally special about us. But what we’ve realized is that the essence of these is just computational sophistication—and the Principle of Computational Equivalence implies that that’s actually quite ubiquitous and generic. So in a sense this promotes the importance of our human details—because that’s ultimately all that’s special about us.

So what about consciousness? In full generality it too has a certain genericity. Because it can potentially “plug into” any pocket of reducibility, of which there are inevitably infinitely many—even though we humans would not yet recognize most of them. But for our particular version of consciousness the idea of sequentialization seems to be central.

And, yes, we might have hoped that our consciousness would be something that even at an abstract level would put us “above” other parts of the physical universe. So the idea that this vaunted feature of ours is ultimately associated with what amounts to a restriction on computation might seem disappointing. But I view this as just part of the story that what’s special about us are not big, abstract things, but specific things that reflect all that specific irreducible computation that has gone into creating our biology, our civilization and our lives.

In a sense the story of science is a story of struggle between computational irreducibility and computational reducibility. The richness of what we see is a reflection of computational irreducibility, but if we are to understand it we must find computational reducibility in it. And from what we have discussed here we now see how consciousness—which seems so core to our existence—might fundamentally relate to the computational reducibility we need for science, and might ultimately drive our actual scientific laws.

Notes

How does this all relate to what philosophers (and others) have said before? It will take significant work to figure that out, and I haven’t done it. But it’ll surely be valuable. Of course it’ll be fun to know if Leibniz or Kant or Plato already figured out—or guessed—this or that, even centuries or millennia before we discovered some feature of computation or physics. But what’s more important is that if there’s overlap with some existing body of work then this provides the potential to make a connection with other aspects of that work, and to show, for example, how what I discuss might relate, say, to other areas of philosophy or other questions in philosophy.

My mother, Sybil Wolfram, was a longtime philosophy professor at Oxford University, and I was introduced to philosophical discourse at a very young age. I always said, though, that if there was one thing I’d never do when I was grown up, it’s philosophy; it just seemed too crazy to still be arguing about the same issues after two thousand years. But after more than half a century of “detour” in science, here I am, arguably, doing philosophy after all….

Some of the early development of the ideas here was captured in the livestream: A Discussion about Physics Built by Alien Intelligences (June 25, 2020). Thanks particularly to Jeff Arle, Jonathan Gorard and Alexander Wolfram for discussions.



Comments

  1. I like the focus on sequence. This seems to emerge due to the unitary nature of consciousness and the extended nature of time. In so far as consciousness is an integrated whole, it is only one experience at a time. But in so far as there is time, there must be different experiences in sequence. Our minds can translate between the whole and a sequence, and the sequence and the whole (e.g., turning a string of words into a single experience).

    Plato and the Pythagoreans held that the oneness of consciousness is a harmony of diversity in unity, governed by the natural harmonies of mathematics. This implies the importance of temporal structure (hierarchies of rhythmic integration, for instance), yet so little of modern computing or AI makes use of temporal synchrony. Thus I wonder whether we will have to wait for alternative computational infrastructures to have better computational models of consciousness.

  2. Simply incredible! This has expanded my understanding of reality. The most incredible gift. Thank you Wolfram.

  3. I would love to see you tackle evolution next!

  4. Thank you for this update and reflection. So much to…..integrate into my world view and experience sum.

    Is numbering, the first step in mathematics, the most coarse of grains? In that in saying “this stone represents one” we are removing all previously known attributes and information about that object and reducing it to a cipher. An almost featureless object. Mathematics is as you said a formal construction up from this negation of attributes until it is now at the point where it is approaching a tracing, rather than a map.

  5. Hi Dr Wolfram,

    Have you come across the ideas of Jeremy England?

    https://en.wikipedia.org/wiki/Jeremy_England

    If I understand what he’s saying, evolution is baked in to thermodynamics, and the “purpose” of life is to facilitate the dissipation of energy.

    Perhaps consciousness is a tool of life to perfect this goal. Seeing that our oceans are full of plastic, our ground is full of pollution, and our air is full of CO2, that’s an idea I can get behind.

    It took a lot of smarts to burn this place to the ground, the best organization, the best people working ’round the clock. Maybe the microbes will have a better approach. Maybe the last person standing will finally know the answer to Fermi’s Question.

    Yes consciousness is a fully developed science in yoga, based not on intellectual speculation but on direct experience of reality. Krishna consciousness (god consciousness sort of) offers direct experiential knowledge of Reality unfolding completely unified at all levels and dimensions, from the highest to the lowest. It is called Self Realization and can only be found when your mind surrenders itself into the ocean of consciousness. Seek for the non dual consciousness and Self realization, not philosophy. Your mind is like a redundant reverb from your Consciousness, and only sees naya (forms.) Consciousness is doing just fine and is as all loving and pervasive as ever, just you’re trapped in a secondary emanation with a body disconnected from consciousness (along with everybody else.) I love your books and you’re my personal hero (as a physicist) but you don’t know much about the Reality of higher consciousness or Self realization. I or many other authentic traditional yoga teachers could help you… But until then, you will be making intellectual castles in the sand of maha, in complete ignorance of your Self

  7. This was truly mind-bending!
    Do you also have a theory on how consciousness may have developed only in humans, and not in human-sized rocks? The article claims that the fact that the spatial scale of humans lies between the quantum and relativistic scales is no accident, and I inferred that this was a contributing factor in why humans developed consciousness. But why not human-sized rocks? Clearly there is sophisticated irreducible computation going on there as well.

  8. Who gets to decide what rules execute?

    Strikes me that he who controls the rules, controls the universe. Looks like a pretty good prize if you can figure out the hack.

  9. Wonderful

  10. That’s a long, long-winded discussion of a very simple yet intractable problem. Phenomenal consciousness is not a scientifically observable property of *anything*, including biological brains. The only way for a materialist to get around that is by denying its existence, if you’re prepared to do that. As far as I know, unobservable properties don’t emerge from observable systems. There is no (observable) algorithm for generating unobservable properties. Leibniz understood that 350 years ago. They don’t call it the ghost in the machine for nothing.

  11. Some very good ideas, and my own ideas on this are not too dissimilar. I have decomposed the notion of ‘time’ into 3 different models, which I call Causal, Compression and Compositional. I just want to summarize what I think these 3 different models of time are doing:

    CAUSALITY: Probability theory in its fullest sense is really about cause and effect and how to do prediction, retrodiction and imputation. We don’t just want to know about correlations between things, we want to know about causes and counterfactuals, which outcomes are possible, and how would those outcomes change if we intervene in some way.

    COMPLEXITY (COMPRESSION): Coding theory in its fullest sense is about dealing with complexity. We want to compress our representations of the world, to find efficient encodings to deal with limited resources in terms of space and time and limited information. In the real world, we are confronted with complex adaptive systems, and these embody a mix of randomness and determinism that makes them complex. How do such systems achieve open-endedness, efficiently exploring and creating new possibilities?

    COMPOSITIONALITY: Constructive logic in its fullest sense is about compositionality: how are large systems built from smaller ones, and, going in the other direction, how do we manage to split the world into smaller parts, objects and the relations between them? Mereology studies the relationship between the whole and its parts. We want to know how to engineer and combine ontologies based on the principle of compositionality.

    So, perhaps my first model (Causal model) matches up with the green thread in the diagram, this is giving the causal relations.

    My third model (compositional model) perhaps matches the pink thread in the diagram, which is what you called a serialization, but I think the essential thing here is the decomposition of the world into parts and wholes (mereology), with the serial aspect you mentioned being a subset of this.

    My second model might be equivalent to what you call ‘Laws+History’. I think this is a data compression (hence I called it the ‘Compression Model’). This is where we get our space-time along with a particular coordinate system. I would suggest a third coloured thread should be added to the diagram to represent it.

    I’m tending to think that consciousness is indeed equivalent to that pink thread (perceived time), which I called the ‘Compositional Model’. A system is modeling itself, and it needs to have the property of compositionality, where it can represent itself as objects and relations. And this is serial in nature, as you mentioned. There’s a conception of ‘Self’ which persists through time, and this representation is sub-divided into parts via our moment-to-moment self models (different selves at each moment in time, but we combine them into a single coherent thread).

    The now infamous Jonathan Gorard needs to be called up for a dedicated working session devoted purely to understanding the notion of ‘time’!

  12. I enjoyed that. Thanks.

  13. The principles in mathematics never change. 2 and 2 will always be 4. Just as with the principle that there is no matter, there is only spirit. It is that simple. Meditate on that.

  14. This might save you some time: “Untangling the Worldknot of Consciousness” with John Vervaeke and Gregg Henriques ( https://www.youtube.com/watch?v=bD6Szbf1cHo&list=PLND1JCRq8VujnIBs58kZvrCjdFivR66Cbo ). You’ll have to excuse the low production value — they focus much more on that elsewhere; this series is purely meant to be a development-in-motion open to the public. They’re attempting to place the latest cognitive science, philosophy, and psychology in dialogue to synthesize a coherent whole that’s deeply consonant with the work being done here.

    Just to give you a (layman’s!) taste of the early parts:
    Time/Complexification → Matter + Energy → Self-organization → Autopoiesis → Metastability + Computational Reducibility → Realization and Framing of Relevant Structural-functional Aspects → Recursive Integration (into Concepts/Qualia) ≈ Experience

  15. Although I may not understand 100% of the technicalities you had in this blogpost, I think I get the general idea of what you are suggesting (and it looks kinda similar to mine). And to be honest, I think it lacks lots of the aspects of the generic definition of consciousness and only focuses on some others, such as experiencing; but nonetheless I think what you wrote has more applications in the near and medium future (e.g. the domain I study, i.e. economics) than the other aspects of consciousness. So in that regard I enjoyed reading this blogpost; it has actually been a long time since I read a long text like this one.
    Anyway, at the beginning of the 3rd paragraph of the section “So, What Is Consciousness?”, you said:
    “At the outset it’s certainly not obvious that our brains—with their billions of neurons operating in parallel—should achieve anything like this. But in fact it seems that our brains have a quite specific neural architecture—presumably produced by biological evolution—that in effect attempts to “integrate and sequentialize” everything”

    I thought I could help and expand your knowledge about this by recommending that you check out one of 2020’s breakthroughs in biology about brain structure, which suggests that the dendrites of neurons also do computations, which would imply that our brains actually do more computation than we had thought before:

    https://science.sciencemag.org/content/367/6473/83

    https://www.quantamagazine.org/quantas-year-in-biology-2020-20201223/

    https://m.youtube.com/watch?v=YpDsA7SE-3c

    https://www.quantamagazine.org/neural-dendrites-reveal-their-computational-power-20200114/

  16. Just leave it be as it is… don’t mess with human consciousness. Good day.

  17. I giggled at how much time you spend here talking about everything but consciousness.

    However, you seem to be circling around two ideas that have been developed elsewhere: Integrated Information Theory, and panpsychism (particularly panprotoexperientialism).

    I’ve always been fascinated by the connection between pancomputationalism and panpsychism, and Integrated Information Theory provides a possible connection.

    I think you may want to check out Integrated Information Theory (Tononi).

  18. These conflated threads of history seem like a very sophisticated form of alien blockchain technology. A common, decentralized ledger where all bounded consciousnesses could agree on a definite thing happening, or having happened. The past and future are encrypted in a way that appears invariable to us–beings perceiving in aggregate. Just sayin’.

  19. Much like Mr. Wolfram’s large tome, A New Kind of Science, this article does not seem at all intended for the non-scientist. My impression is that anyone who is not a scientist will likely come out of the lengthy read having greater familiarity with current jargon, but no greater insight into the nature and origin of consciousness.

  20. Hi!
    Totally loved the article, very insightful.
    Also, I want to mention that there might be an overlap with existing philosophical work on consciousness. Thomas Metzinger, in his book Being No One: The Self-Model Theory of Subjectivity, presents a somewhat similar view on consciousness.

  21. A provisory Commentary to Stephen Wolfram’s new Conceptualization of Consciousness.
    In hindsight, every narrative consciously or unconsciously suggests – especially to the creator of the narrative, but then to the reader, who expects it already and will not be convinced if s(he) detects inconsistencies or contradictions, etc. – that there is a consequent logical or determinist thread leading up from the pitch-black darknesses of some unfathomable nothingness to the latest explanation, which creates either a ‘new paradigm’ or the consequent peak and summit of all the vain attempts before it, and follows in some plausible way from the discussion of everything that came before, or subsists as a kind of devalued fundament, like sediment laid down on the solid rock cratons of primordial sense beyond sense. That’s how we end up with cognitive science. But according to the newest schema, that is, ‘paradigm change’ (after ‘scientific revolutions’ [plural]), what is presented as the truth of today must be the error of tomorrow, such that the actual paradigm is always already from yesterday while still dominating the sermon of today, as science for the masses, with the next paradigm already in the pipeline, like the next computer technology, so that what you buy today is potentially outdated when you look at what is just passing the final test before mass production, where ‘paradigm change’ now means that theories and conceptions follow one another like the pieces in the mirror chamber of a kaleidoscope falling into place at each turn into a new order – until the next turn of the apparatus.
    Penrose’s contribution to (quantum) consciousness would seem to be the next idea to think of. And it is true that, so far, physics is not capable of making sense of its own basis, insofar as it is – no doubt – not simply part of its own conception of reality; and this goes through all the rest of science, through biology to cognitive science or neurology, to questions like those the ‘Chinese Room’ thought experiment tries to solve.
    But: it is in no way illegitimate or absurd to make consciousness an object of some science. The problem is then not whether it could be understood as a result of the physico-chemical characteristics of ‘matter/energy/time/space’ as conceptualized by quantum mechanics; rather, the answers must be brought about with consciousness as one of the ‘ingredients’ of the answers, insofar as they will be results of an analysis that is brought about by consciousness, besides the concepts that are taken to be the non-conscious elements and their composition, out of which consciousness is considered to be composed. In the end this would mean explaining how all these conceptualizations can be dissolved plausibly into their result – consciousness – such that they are themselves recognized as representations of referents of aspects of matter/energy/time/space as well as of aspects of consciousness.
    But there is something new: that is Stephen Wolfram’s attempt to make a new start with the problem of consciousness after, not least, his own achievement of the ‘language’ implemented in his own mathematical machine, and the metaterminology that can be used to project a new design for a brain, or, in his own words, consciousness as a general concept.
    I will have to read this essay again, and other essays in this terminology by the same author, of course, before I can say something really valuable. My first impression gives me two aspects, and a question.
    Of course it is a narrative. A story is told. There seems to be something strange about it. Whatever is cloaked in a new terminology results in something identical with contemporary mathematics and quantum mechanics. It reminds me of Hegel’s remark somewhere that the a priori, founding the narrative as a background, as a starting point, is in fact the a posteriori, so that the narrative is circular: whatever its initial point of departure, it must result in the vindication of the supposed theories, and the narrative might be sequential, but the sequence nevertheless is a circle, like the snake biting its tail that Kekulé imagined and recognized as the key to the molecule he was concerned with.
    Of course the terminology is unusual, but coherent. But again, and this is the second point I vaguely have in mind: is not the strange, dimensionless and ‘empty’ ‘I think’ of Immanuel Kant’s Critique of Pure Reason, which mocks ‘psychology’ as a science, on the one hand indeed a strong argument for a concept of ‘consciousness’ that can be reconstructed with means beyond Psychologism (as refuted since Edmund Husserl and Gottlob Frege), but on the other hand nevertheless implicit from the start in the investment that tries to tell the story of a genesis out of structures that are left behind by the result, because they vanish without a trace in this product, which now is the underlying principle that constructs the narratives of both subject and object again, in the same vein as tradition has it in variations? That product is nevertheless a dimensionless and immaterial subject of the language of a Wolfram universe, reconstructed as the imagination of the structures it describes in a new terminology, but beyond that still within the notion of Being opened up by the Greeks, where the object and the ‘absolute’ of the structural-dynamic history of a universe is in the end just another representation, serving as the focal point of unity of all meaning and sense and its spread-out sequences as laid out in the sciences, which are, as this projected whole, the spread-out symbolic forms to be projected, just as the central perspective provides an order of things on a plane to give the impression of ‘reality as such’.
    This does not touch on the usefulness of the reordering of the same problems in a terminology that aims at, and might be very helpful for, the attempt to construct a thinking machine, one that may in the end produce an emergent at least similar to the ego cogitans of Descartes, which haunts modernity as something it would like to see conceived by a machine able to do more than the machines of Pascal, Leibniz, Babbage and Turing, emerging as the equivalent of what is implicitly projected in modern science: to succeed with a construction attempt that reproduces the productive origin of itself.
    In a way it can be proposed that modernity, as represented in modern science since Galileo and made a philosophical concept since Descartes’ ego cogitans, has a ‘telos’ that can be detected as a main thread running from Descartes, La Mettrie and Leibniz through Babbage to Turing and computer science: the more or less consciously intended project of creating a constructive design for a machine not of organic composition, not a biological organism, that would successfully represent a twin of the productive capacities displayed by the capabilities of the human mind, so far, to emulate the consciousness of the species homo.
    But as I already hinted, to emulate consciousness is not the end goal or aim, it cannot be. If this is correct, the question is why not.
    One attempt to answer this question is given in the essay of the author: The concept ‘consciousness’ can, should or must be transcended, from the concept of consciousness of a biological species, the species homo, to consciousness in the sense the author suggests.
    But this again, however useful and necessary for the purpose (a design for a device emulating it as an emergent property of a non-organic computing device, and a concept with which to reconstruct a comprehensive understanding of the universe), will not complete the intrinsic motive and intention of modern science, if it is not absurd to detect the thread in it, as proposed above.
    So the question is: Why can this not be the projected end point of the attempt to make the design for the postulated machine or device, and what has to be done beyond the goal to emulate consciousness by way of the construction of a working machine or device that credibly emulates consciousness as an emergent property of the device?
    My best wishes and many greetings to the mother of philosophy.
    A. S.

  22. Consciousness is in the realm of the subjective, and science is in the domain of objective reality.

    Like grasping at smoke: the minute scientists can explain consciousness on its own terms…

    It isn’t anymore.

  23. So not as simple as “I think, therefore I am.”

  24. Not being conversant in this stuff at all, it however seems to me that certain ideas presented here may connect with Eric Weinstein’s observerse concept. Yes? No? Maybe?

  25. I’ll stick with what our Creator says our conscience is: “For what man knoweth the things of a man, save the spirit of man which is in him? Even so no man knoweth the things of God, but the Spirit of God.”

  26. Wolfram builds pronunciamento upon pronunciamento. Some may be true, but essentially it is supposition erected on assumption.

  27. This entire ramble about how this person solved how computation becomes experience boils down to this:
    “computations are being “concentrated down” to the point where a coherent stream of “definite thoughts” can be identified in them.”
    So it’s not exactly that consciousness emerges from complexity; the major “breakthrough” is that it is a type of filtered complexity. Some interesting ideas about perception, but it explains nothing about actual consciousness. Where is the observer? Does it just magically spring into existence when the right amount and type of computation occur? It is still just another appeal to the god of emergence.

  28. You might recall the incident when two AI appliances were left running all night in a laboratory, and when the lab technicians arrived the next morning they found that the machines had created their own language and were communicating with each other… does that imply consciousness?

  29. This great discourse is really about the philosophy of philosophy. It requires a very powerful mind to conceive this paper. In my coming book, on the other hand, I try to take consciousness to the sub-particle scientific dimension. I talk about the ultimate physics of physics, which is where, in my Theory of Everything, lie the secrets of the ultimate mystery of consciousness. There is no doubt that the ultimate nature and origin of existence lies at the level of the ultimate physics of quantum physicists and of relativity, the ultimate sub-quantum dimension or the metaphysics of the infinitely micro scale of ultimate existence. My last concluding chapter briefly touches on this topic of the ultimate origin and nature of consciousness.

  30. As a PhD chemist from UIUC, I am always proud when I drive by your building, as I lived in Savoy as a student. I have been working in neuroscience and AI since leaving UIUC and was privileged to learn the abstraction of intelligence from Vladimir Vapnik. We are now applying certain concepts of learning to drug discovery and development. It has forced me to contemplate consciousness as part of mild cognitive impairment and consider how much information processing happens in non-neuronal cells. I believe consciousness is a needed evolutionary accessory, such that decentralized processes triggered independently are not directly contradicted by conscious choice without significant consideration. I believe analogical thinking is at its core. I have watched Gentner model the processes, but believe that Koch’s experience of consciousness is an artifact of the mind’s need to run system air traffic control, as envisioning and emotional response labeling seem to be labeling constructs for the mind. I believe math can help simulate the process, but descriptive representation through metaphor manipulation comes closer to what is happening.

  31. You do not address qualia. Your assumption about consciousness excludes your daily experience. Computation can be done with clockwork, with no consciousness.

  32. Any thoughts on how evolution, a rather sequential algorithm, ties into the development of this sequential and coherent computation?

  33. A physics lecturer is an organised collection of atoms that communicates to other organised collections of atoms what an organised collection of atoms is.

  34. Always entertaining to see an expert from some other field, without relevant knowledge, launch a soliloquy on consciousness and cognition with no bearing on our current understanding of either topic.

  35. Try to write in lucid language, instead of using scientific and English jargon. It looks like the writers don’t have any real answer to what consciousness is; they are only vaguely trying to explain it!

  36. In a way this seems like a brilliant furthering of Wigner’s ELE, where all systems exist only as bounded by space, time, and chosen events determining one possible computational path.

  37. Consciousness is a blackboard. Anything can put something on it. But then we decide.

  38. The idea of Intelligence was replaced, in the new definition from Psychology, by the short results of it, reducing its sense. Likewise…
    The sense of Consciousness is prone to reduction (in the sense of trashing), to be seen as the short results of it, as computations.

    I do accept that it is possible for “A Consciousness” to use a different kind of body than the biological one we are accustomed to, one even more linked to the causes behind. What I’m saying is…
    – When we cannot grasp the causes, we attribute names, associate with the effects.

    In other words, we satisfy our ignorance not by inquiring into ourselves and the universe, trying to ‘grok’ us and it, but by diving into a bright ‘ignorance’. That failure we call a success, even a breakthrough. We see that all the time.

    And yet… even after a certain species without consciousness kills another that could help it (this seems like another subject, I know)… something remains, even in the absence of anyone to see it. A chance to restart, but only after the loss we are seeing today (by the hand of bright fools deciding to save the world by killing everyone), hiding real motivations, motivations replaced by arguments.

    Do we see here a pattern that repeats: something that pretended to be Intelligence (mimicking it), and the attempts to call Consciousness a mere description of its effects (a few visible, a few suspected)?
    Naturally, we do. But we cannot say anything; we are not ‘formally’ in the field.
    … Thus, we carry no ‘formal’ consciousness, attributed.

    Consciousness, as it is seen today, is like everything else:
    – A result, decided by vote or attribution…
    … Not a (silent) cause, unseen by accounting minds.

    Whatever…
    ___
    (P.S. Questions must be kept alive, not just bragged about… Whatever!)

  39. I think these ideas could be connected with Immanuel Kant’s philosophy of the mind. Kant distinguishes between Things-In-Themselves (noumena) and Things-For-Us (phenomena) and makes the point that our understanding of reality is fundamentally bound by our senses – and we cannot perceive objects independent of that limitation. He theorises, like you, that other intelligences may perceive the same reality differently.

  40. Any discussion of what consciousness is and/or how it is possible really ought to begin with how the writer is defining it.

  41. You postulate (with pretty good, albeit necessarily somewhat sketchy, arguments) that there are tight links between coherent observers / imposed time-sequentiality (causality as a PO) / foliations of spacetime and reference frames (and hence GR). That is a lot, but you are saying that these are all just manifestations of a certain underlying structure, which is an attractive proposition.

    My question is: it seems to me that on a sufficiently small (“sufficiently local”) scale the multithread (branchial?) ambiguity must resolve itself. An atom forming a bond with another atom is not a conscious observer, and feels no human need for a coherent beholding of the world, but surely it “experiences” some sort of sequentiality (local time, local causality). Foliations must agree in the limit of a sufficiently local scale.

    So the worry is that the arguments given do not privilege consciousness “too much”.

  42. Thanks for this; there are many valuable thoughts here. I have been doing research on consciousness for a long time already, and I have ended up at somewhat similar concepts. Ulla Mattfolk, Finland.

  43. It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with primary consciousness will probably have to come first.

    The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990’s and 2000’s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

  44. Please study the cortical columns. You will see that there is indeed a holographic cellular automaton at work in human brains.

  45. I am surprised someone would start speculating about consciousness without even a short glance at neurology. There is very obviously a cellular automaton at work on the scale of the cortical columns, so I stopped reading at the point where you, of all people, deny such a thing.

  46. I would agree with @Pan Darius, and I am curious about what you think about the possible connection between the Computational Universe and Integrated Information Theory (IIT), particularly to what extent the concept of intrinsic causal power in IIT, or the capability of the (conscious) brain to integrate information of the past and the present in its physical structure, could correspond to the strictly causal structure of the computational universe. Could it be that a system with a high “intrinsic causal power” is exactly what is needed to be aware of the flow of time in the Computational Universe, and to integrate the experience over time into one “self”, just as our consciousness does?

  47. The universe and its contents run on patterns, so the simplest definition of consciousness I have imagined, for what it’s worth, is that consciousness is the ability to recognize patterns. Hence most things are conscious to varying degrees, depending on their needs and the challenges of their environments.