Looking to the Future of A New Kind of Science

(This is the third in a series of posts about A New Kind of Science. Previous posts have covered the original reaction to the book and what’s happened since it was published.)

Today ten years have passed since A New Kind of Science (“the NKS book”) was published. But in many ways the development that started with the book is still only just beginning. And over the next several decades I think its effects will inexorably become ever more obvious and important.

Indeed, even at an everyday level I expect that in time there will be all sorts of visible reminders of NKS all around us. Today we are continually exposed to technology and engineering that is directly descended from the development of the mathematical approach to science that began in earnest three centuries ago. Sometime hence I believe a large portion of our technology will instead come from NKS ideas. It will not be created incrementally from components whose behavior we can analyze with traditional mathematics and related methods. Rather it will in effect be “mined” by searching the abstract computational universe of possible simple programs.

And even at a visual level this will have obvious consequences. For today’s technological systems tend to be full of simple geometrical shapes (like beams and boxes) and simple patterns of behavior that we can readily understand and analyze. But when our technology comes from NKS and from mining the computational universe there will not be such obvious simplicity. Instead, even though the underlying rules will often be quite simple, the overall behavior that we see will often be in a sense irreducibly complex.

So as one small indication of what is to come—and as part of celebrating the first decade of A New Kind of Science—starting today, when Wolfram|Alpha is computing, it will no longer display a simple rotating geometric shape, but will instead run a simple program (currently, a 2D cellular automaton) from the computational universe found by searching for a system with the right kind of visually engaging behavior.

[Image: the 2D cellular automaton pattern now displayed while Wolfram|Alpha computes]

This doesn’t look like the typical output of an engineering design process. There’s something much more “organic” and “natural” about it. And in a sense this is a direct example of what launched my work on A New Kind of Science three decades ago. The traditional mathematical approach to science has had great success in letting us understand systems in nature and elsewhere whose behavior shows a certain regularity and simplicity. But I was interested in finding ways to model the many kinds of systems that we see throughout the natural world whose behavior is much more complex.

And my key realization was that the computational universe of simple programs (such as cellular automata) provides an immensely rich source for such modeling. Traditional intuition would have led us to think that simple programs would always somehow have simple behavior. But my first crucial discovery was that this is not the case, and that in fact even remarkably simple programs can produce extremely complex behavior—that reproduces all sorts of phenomena we see in nature.

[Image: the evolution of cellular automaton rule 1635]
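
To make this concrete, here is a minimal sketch in Python (an illustration only, not code from the book) of an elementary cellular automaton. Rule 30 is the book's classic example: each cell is updated from its three-cell neighborhood by a fixed lookup table, yet the pattern grown from a single black cell shows regular stripes on one side and seemingly random behavior elsewhere.

    def run_eca(rule, width=63, steps=31):
        # Bit i of the rule number gives the new cell value for the
        # neighborhood (left, center, right) read as i = 4*left + 2*center + right.
        table = [(rule >> i) & 1 for i in range(8)]
        cells = [0] * width
        cells[width // 2] = 1  # start from a single black cell
        for _ in range(steps):
            print("".join("#" if c else "." for c in cells))
            cells = [table[4 * cells[(i - 1) % width] + 2 * cells[i]
                           + cells[(i + 1) % width]]
                     for i in range(width)]

    run_eca(30)  # rule 30: a very simple rule with seemingly random output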

And it was from this beginning—over the course of nearly 20 years—that I developed the ideas and results in A New Kind of Science. The book focused on studying the abstract science of the computational universe—its phenomena and principles—and showing how this helps us make progress on a whole variety of problems in science. But from the foundations laid down in the book much else can be built—not least a new kind of technology.

This is already off to a good start, and over the next decade or two I expect dramatic progress in the application of NKS to all sorts of technology. In a typical case, one will start from some objective one wants to achieve. Then, either through knowledge of the basic science of the computational universe, or by some kind of explicit search, one will find a system that achieves this objective—often in ways no human would ever imagine or come up with. We have done this countless times over the years for algorithms used in Mathematica and Wolfram|Alpha. But the same approach applies not just to programs implemented in software, but also to all kinds of other structures and processes.
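
As a toy version of such a search (a sketch of the idea only; the actual searches behind Mathematica and Wolfram|Alpha algorithms are not described here), one can enumerate all 256 elementary cellular automaton rules and rank them by how poorly their space-time patterns compress, using compressed size as a crude proxy for the objective of complex behavior:

    import zlib

    def pattern(rule, width=201, steps=100):
        # Concatenated rows of the automaton's evolution from a single black cell.
        table = [(rule >> i) & 1 for i in range(8)]
        cells = [0] * width
        cells[width // 2] = 1
        rows = []
        for _ in range(steps):
            rows.append(bytes(cells))
            cells = [table[4 * cells[(i - 1) % width] + 2 * cells[i]
                           + cells[(i + 1) % width]]
                     for i in range(width)]
        return b"".join(rows)

    # Objective: complex-looking behavior; prefer rules whose output compresses badly.
    score = {rule: len(zlib.compress(pattern(rule))) for rule in range(256)}
    print(sorted(score, key=score.get, reverse=True)[:5])
    # chaotic rules such as 30 and 45 typically rank near the top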

Today our technological world is full of periodic patterns and other simple forms. But rarely will these ultimately be the best ways to achieve the objectives for which they are intended. And with NKS, by mining the computational universe, we have access to a much broader set of possibilities—which to us will typically look much more complex and perhaps random.

How does this relate to the kinds of patterns and forms that we see in nature? One of the discoveries of NKS is that nature samples a broader swath of the computational universe than we reach with typical methods of mathematics or engineering. But it too is limited, whether because natural selection tends to favor incremental change, or because some physical process just follows one particular rule. But when we create technology, we are free to sample the whole computational universe—so in a sense we can greatly generalize the mechanisms that nature uses.

Some of the consequences of this will be readily visible in the actual forms of technological objects we use. But many more will involve internal structures and processes. And here we will often see the consequences of a central discovery of NKS: the Principle of Computational Equivalence—which implies that even when the underlying rules or components of a system are simple, the behavior of the system can correspond to a computation that is essentially as sophisticated as anything. And one thing this means is that a huge range of systems are capable in effect not just of acting in one particular way, but of being programmed to act in almost arbitrary ways.

Today most mechanical systems we have are built for quite specific purposes. But in the future I have no doubt that with NKS approaches, it will for instance become common to see arbitrarily “programmable” mechanical systems. One example I expect will be modular robots consisting of large numbers of fairly simple and probably identical elements, in which almost any mechanical action can be achieved by an appropriate sequence of small-scale motions, typically combined in ways that were found by mining the computational universe.
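
A toy version of this kind of motion planning is easy to write down. The sketch below is entirely hypothetical and far simpler than any real modular robot: modules are cells on a grid that may slide into adjacent empty cells as long as the cluster stays connected, and a breadth-first search looks for a sequence of such moves that reaches a target shape.

    from collections import deque

    def neighbors(cell):
        x, y = cell
        return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

    def connected(cells):
        # Flood fill: do the modules form a single connected cluster?
        start = next(iter(cells))
        seen, frontier = {start}, [start]
        while frontier:
            for n in neighbors(frontier.pop()):
                if n in cells and n not in seen:
                    seen.add(n)
                    frontier.append(n)
        return len(seen) == len(cells)

    def moves(config):
        # A module may slide into an adjacent empty cell, provided the
        # cluster remains connected afterwards.
        for cell in config:
            for target in neighbors(cell):
                if target not in config:
                    new = frozenset(config - {cell} | {target})
                    if connected(new):
                        yield new

    def plan(start, goal):
        # Breadth-first search over configurations: a crude stand-in for
        # "mining the computational universe" of motion sequences.
        # (The grid is unbounded, so this only terminates for reachable goals.)
        start, goal = frozenset(start), frozenset(goal)
        queue, seen = deque([(start, [])]), {start}
        while queue:
            config, path = queue.popleft()
            if config == goal:
                return path
            for nxt in moves(config):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [nxt]))

    # Reshape an L-shaped cluster of three modules into its mirror image.
    print(plan({(0, 0), (0, 1), (1, 0)}, {(0, 0), (1, 0), (1, 1)}))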

Similar things will happen at a molecular level too. For example, today we tend to have bulk materials that are either perfect periodic crystals, or have atoms arranged in a random amorphous way. NKS implies that there can also be “computational materials” that are grown by simple underlying rules, but which end up with much more elaborate patterns of atoms—with all sorts of bizarre and potentially extremely useful properties.

When it comes to computing, we might think that to have a system at a molecular scale act as a computer we would need to find microscopic analogs of all the usual elements that exist in today’s electronic computers. But what NKS shows us is that in fact there can be much simpler elements—more readily achievable with molecules—that nevertheless support computation, and for which the effort of compiling from current traditional forms of computation is not too great.

An important application of these kinds of ideas is in medicine. Biology is essentially the only existing example where something akin to molecular-scale computation already occurs. But existing drugs tend to operate only in very simple ways, for example just binding to a fixed molecular target. With NKS methods one can expect instead to create “algorithmic drugs” that in effect do a computation to determine how they should act—and can also be programmable for different cases.

NKS will also no doubt be important in figuring out how to set up synthetic biological organisms. Many processes in existing organisms are probably best understood in terms of simple programs and NKS ideas. And when it comes to creating new biological mechanisms, NKS methods are the obvious way to take underlying molecular biology and find schemes for building sophisticated functionality on the basis of it.

Biology gives us ways to create particular kinds of molecular structures, like proteins. But I suspect that with NKS methods it will finally be possible to build an essentially universal constructor, that can in effect be programmed to make an almost arbitrary structure out of atoms. The form of this universal constructor will no doubt be found by searching the computational universe—and its operation will likely be nothing close to anything one would recognize from traditional engineering practice.

An important feature of NKS methods is that they dramatically change the economics of invention and creativity. In the past, to create or invent something new and original has always required explicit human effort. But now the computational universe in effect gives us an inexhaustible supply of new, original, material. And one consequence of this is that it makes all sorts of mass customization broadly feasible.

There are many immediate examples of this in art. WolframTones did it for simple musical pieces. One can also do it for all sorts of visual patterns—perhaps ever-changing ones, selected from the computational universe and then grown to fit particular spatial or other constraints. And then there is architecture, where one can expect to discover in the computational universe new forms that can be used to create all sorts of structures. And indeed in the future I would not be surprised if the first visually obvious everyday examples of NKS were things like buildings: their dynamics, decoration and structure.

Mass production and the legacy of the industrial revolution have led to a certain obvious orderliness in our world today—with many copies of identical products, precisely repeating processes, and so on. And while this is a convenient way to set things up if one must be guided by traditional mathematics and the like, NKS suggests that things could be much richer. Instead of just carrying out some process in a precisely repeating way, one computes what to do in each case. And when many such pieces of computation are put together, the behavior of the system as a whole can be highly complex. Finding the correct rules for each element—to achieve some set of overall objectives—is no doubt best done by studying and searching the computational universe of possibilities.

Viewed from the outside, some of the best evidence for the presence of our civilization on Earth comes from the regularities that we have created (straight roads, things happening at definite times, radio carrier signals, satellite orbits, and so on). But in the future, with the help of NKS methods, more and more of these regularities will be optimized out. Vehicles will move in optimized patterns, radio signals will be transferred in complicated sequences of local hops… and even though the underlying rules may be simple, the actual behavior that is seen will look highly complex—and much more like all sorts of systems in physics and elsewhere that we already see in nature.

There are other—more abstract—situations where computation and NKS ideas will no doubt become increasingly important. One example is in commerce. Already there is an increasing trend toward algorithmic pricing. Increasingly commercial terms and contracts of all kinds will be stated in computational terms. And then—a little like a market of algorithmic traders—there will be what amounts to an NKS issue of what the overall consequences of many separate transactions will be. And again, finding the appropriate rules for these underlying transactions will involve understanding and searching the computational universe—and presumably various kinds of mass customization, which may eventually make concepts like money as a simple numerical quantity quite obsolete.

Future schemes for such things as auctions and voting may also perhaps be mined from the computational universe, and as a result may be mass customized on demand. And, more speculatively, the same might be true for future corporate or political organizational structures. Or for example for mechanisms for social and other human networks.

In addition to using NKS in “technology mode” as a way to create things, one can also use NKS in “science mode” as a way to model and understand things. And typically the goal is to find in the computational universe some simple program whose behavior captures the essence of whatever system or phenomenon one is trying to analyze. This was an important focus of the NKS book, and has been a major theme in the past decade of NKS research. In general in science it has been difficult to come up with new models for things. But the computational universe is an unprecedentedly rich source—and I would expect that before long the rate of new models derived from it will come to far exceed all those from traditional mathematical and other sources.

An important trend in today’s world is the availability of more and more data, often collected with automated sensors, or in some otherwise automated way. Often—as we see in many areas of Wolfram|Alpha or in experiments on personal analytics—there are tantalizing regularities in the data. But the challenge that now exists is to find good models for the data. Sometimes these models are important for basic science; more often they are important for practical purposes of prediction, anomaly detection, pattern matching and so on.

In the past, one might find a model from data by using statistics or machine learning in effect to fit parameters of some formula or algorithm. But NKS suggests that instead one should try to find in the computational universe some set of underlying rules that can be run to simulate the essence of whatever generates the data. At present, the methods we have for finding models in the computational universe are still fairly ad hoc. But in time it will no doubt be possible to streamline this process, and to develop some kind of highly systematic methodology—a rough analog of the historical progression from calculus to statistics.
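
In the simplest setting this kind of model search can be done by brute force. The sketch below illustrates the idea (it is not a statement of any actual NKS methodology): some “observed” data is secretly generated by an elementary cellular automaton rule, and candidate models are recovered by checking which of the 256 rules reproduce the data exactly.

    def step(rule, cells):
        # One update of an elementary cellular automaton (cyclic boundary).
        table = [(rule >> i) & 1 for i in range(8)]
        n = len(cells)
        return [table[4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n]]
                for i in range(n)]

    def find_models(observed):
        # Which of the 256 elementary rules exactly reproduce the observed rows?
        return [rule for rule in range(256)
                if all(step(rule, observed[t]) == observed[t + 1]
                       for t in range(len(observed) - 1))]

    # Pretend these rows were measured from some natural or technological process...
    data = [[0, 0, 1, 0, 0]]
    for _ in range(4):
        data.append(step(90, data[-1]))  # ...secretly generated by rule 90

    print(find_models(data))  # includes 90, plus any rules this data cannot distinguish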

There are many areas where it is clear that NKS models will be important—perhaps because the phenomenon being modeled is too complex for traditional approaches, or perhaps because, as is becoming so common in practice, the underlying system has elements that are specifically set up to be computational.

One area where NKS models seem likely to be particularly important is medicine. In the past, most disorders that medicine successfully addressed were fundamentally either structural or chemical. But today’s most important challenge areas—like aging, cancer, immune response and brain functioning—all seem to be associated more with large-scale systems containing many interacting parts. And it certainly seems plausible that the best models for these systems will be based on simple programs that exist in the computational universe.

In recent times, medicine has slowly been becoming more quantitative. But somehow it is still always based on small collections of numbers, which lead to a small set of possible diagnoses. But between the coming wave of automated data acquisition, and the use of underlying NKS models, I suspect that the future of medicine will be more about dynamic computation than about specific discrete diagnoses. But even given a good predictive model of what is going on in a particular medical situation, it will still often be a challenge to figure out just what intervention to make—though the character of this problem will no doubt change when algorithmic drugs and computational materials exist.

What would be the most spectacular success for NKS models? Perhaps models that lead to an understanding of aging, or cancer. Perhaps more accurate models for social or economic processes. Or perhaps a final fundamental theory of physics.

In the NKS book, I started looking at what might be involved in finding the underlying rules for our physical universe out in the computational universe. I developed some network-based models that operate in a sense below space and time, and from which I was already able to derive some surprisingly interesting features of physics as we know it. Of course, we have no guarantee that our physical universe has rules that are simple enough to be found, say, by an explicit search in the computational universe. But over the past decade I have slowly been building up the rather large software and analysis capabilities necessary to mount a serious search. And if successful, this will certainly be an important piece of validation for the NKS approach—as well as being an important moment for science in general.
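
To give a flavor of what a model operating below space and time might look like, here is a toy network substitution system in Python. It is not one of the specific models from the book (the rewrite rule here is invented purely for illustration): at each update every edge of a graph is replaced by a small subgraph threaded through a freshly created node, so a large network grows from a tiny initial one.

    import itertools

    def rewrite(edges, fresh):
        # One update: replace each directed edge (a, b) with a small
        # subgraph passing through a newly created node m.
        new_edges = set()
        for a, b in edges:
            m = next(fresh)
            new_edges |= {(a, m), (m, b), (m, a)}
        return new_edges

    fresh = itertools.count(2)    # generator of fresh node names
    network = {(0, 1), (1, 0)}    # a tiny starting network
    for t in range(5):
        network = rewrite(network, fresh)
        nodes = {v for edge in network for v in edge}
        print(f"step {t + 1}: {len(nodes)} nodes, {len(network)} edges")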

Beyond science and technology, another important consequence of a new worldview like NKS is the effect that it can have on everyday thinking. And certainly the mathematical approach to science has had a profound effect on how we think about all kinds of issues and processes. For today, whether we’re talking about business or psychology or journalism, we end up using words and ideas—like “momentum” and “exponential”—that come directly from this approach. Already there are analogs from NKS that are increasingly used—like “computationally irreducible” and “intrinsically random”. And as such concepts become more widespread they will inform thinking about more and more things—whether it’s describing the operation of an organization, or working out what could conceivably be predictable for purposes of liability.

Beyond everyday thinking, the ideas and results of NKS will also no doubt have increasing influence on many areas of philosophical thinking. In the past, most of the understanding for what science could contribute to philosophy came from the mathematical approach to science. But now the new concepts and results in NKS in a sense provide a large number of new “raw facts” from which philosophy can operate.

The principles of NKS are important not only at an intellectual level, but also at a practical level. For they give us ideas about what might be possible, and what might not. For example, the Principle of Computational Equivalence in effect implies that there can be nothing general and abstract that is special about intelligence, and that in effect all its features must just be reflections of computation. And it is this that made me realize soon after the NKS book appeared that my long-term goal of making knowledge broadly computable might be achievable “just with computation”—which is what led me to embark on the Wolfram|Alpha project.

I have talked elsewhere about some of the consequences of the principles of NKS for the long-range future of the human condition. But suffice it to say here that we can expect an increasing delegation of human intellectual activities to computational systems—but with ultimate purposes still of necessity defined by humans and the history of human culture and civilization. And perhaps the place where NKS principles will enter most explicitly is in making future legal and other distinctions about what really constitutes responsibility, or a mind, or a memory as opposed to a computation.

As we look at the future of history, there are some inexorable trends, and then there are some wild cards. If we find the fundamental theory of physics, will we be able to hack it to achieve something like instantaneous travel? Will we find some key principle that lets us reverse aging? Will we be able to map memories directly from one brain to another, without the intermediate step of language? Will we find extraterrestrial intelligence? About all these questions, NKS has much to say.

If we look back at the mathematical approach to science, one of its societal consequences has been the injection of mathematics into education. To some extent, a knowledge of mathematical principles is necessary to interact with the world as it exists today. It is also an important foundation for understanding fields that have made serious use of the mathematical approach to science. And certainly learning mathematics to at least some level is a convenient way to teach precise structured thinking in general.

But I believe NKS also has much to contribute to education. At an elementary level, it can be viewed as a kind of “pre-computer science”, introducing fundamental notions of computation in a direct and often visual way. At a more sophisticated level, NKS provides a conceptual framework for understanding the foundations of many computational fields. And even from what I have seen over the past decade, education about NKS—a little like physics before it—seems to provide a powerful springboard for people entering all sorts of modern areas.

What about NKS research? There is much to be done in the many applications of NKS. But there is also much to be done in pure NKS—studying the basic science of the computational universe. The NKS book—and the decade of research that has followed it—has only just begun to scratch the surface in exploring and investigating the vast range of possible simple programs. The situation is in some ways a little like in chemistry, where there is an infinite variety of possible chemical compounds, each with its own features, which can be studied either for their own sake, or for the purpose of inferring general principles, or for diverse potential applications. And even after a century or more, only a small part of what is possible has been done.

In the computational universe it is quite remarkable how much can be said about almost any simple program with nontrivial behavior. And the more one knows about a given program, the more potential there is to find interesting applications of it, whether for modeling, technology, art or whatever. Sometimes there are features of programs that can be almost arbitrarily difficult to determine. But sometimes they can be important. And so, for example, it will be important to get more evidence for (or against) the Principle of Computational Equivalence by trying to establish computation universality for a variety of simple programs (rule 30 would be a particularly important achievement).
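
Rule 30 illustrates how much practical sophistication a single simple program can carry. Its center column is so hard to predict that it has long been used as a source of pseudorandomness (a rule 30 scheme was long used for random number generation in Mathematica). A minimal sketch:

    def rule30_center(n):
        # First n center-column cells of rule 30, grown from a single black
        # cell. The row is wide enough that the fixed boundary cells can
        # never influence the center within n steps.
        width = 2 * n + 1
        cells = [0] * width
        cells[n] = 1
        out = []
        for _ in range(n):
            out.append(cells[n])
            cells = [0 if i in (0, width - 1) else
                     cells[i - 1] ^ (cells[i] | cells[i + 1])  # rule 30
                     for i in range(width)]
        return out

    bits = rule30_center(80)
    print("".join(map(str, bits)))
    print("ones:", sum(bits), "of", len(bits))  # roughly balanced, no obvious period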

As more is done in pure NKS, so its methodologies will become more streamlined. And for example there will be ever clearer principles and conventions for what constitutes a good computer experiment, and how the results of investigations on simple programs should be communicated. There are fields other than NKS—notably mathematics—where computer experiments also make sense. But my guess is that the kind of exploratory computer experimentation that is a hallmark of pure NKS will always end up largely classified as pure NKS, even if its subject matter is quite mathematical.

If one looks at the future of NKS research, an important issue is how it is structured in the world. Some part of it—like for mathematics—may be driven by education. Some part may be driven by applications, and their commercial success. But in the long term just how the pure basic science of NKS should be conducted is not yet clear. Should there be prizes? Institutions? Socially oriented value systems? As a young field NKS has the potential to take some novel approaches.

For an intellectual framework of the magnitude of NKS, a decade is a very short time. And as I write this post, I realize anew just how great the potential of NKS is. I am proud of the part I played in launching NKS, and I look forward to watching and participating in its progress for many years to come.

Stephen Wolfram (2012), "Looking to the Future of 'A New Kind of Science'," Stephen Wolfram Writings, May 14, 2012. writings.stephenwolfram.com/2012/05/looking-to-the-future-of-a-new-kind-of-science.

12 comments

  1. “…modular robots consisting of large numbers of fairly simple and probably identical elements, in which almost any mechanical action can be achieved by an appropriate sequence of small-scale motions, typically combined in ways that were found by mining the computational universe.”

    Replicators from Stargate SG1, anyone?

  2. Congratulations on the NKS book’s 10th year of existence! I am amazed at the revolutionary ideas presented in this book. Thanks for sharing your ideas with us all!

  3. NKS for me has an autopoietic basis, where elements and relations are reprocessed and restructured, creating in the final sense Love and Freedom.

  4. “…the best evidence for the presence of our civilization on Earth comes from the regularities that we have created… in the future … these regularities will be optimized out.”

    Raises some obvious interesting questions for SETI!

  5. “… network-based models that operate in a sense below space and time …” Does the Wolframian paradigm require: first, the overthrow of Bell’s theorem and, second, the establishment of a physical interpretation of M-theory based upon the work of J. Christian of Oxford? Does the Wolframian paradigm basically consist of the replacement of Christian’s theory of local realism by a computational theory of local realism with the finite nature hypothesis and a network below the Planck scale?
    “In the frontier of science, sometimes there is no difference between science and religion. People have their beliefs, and they will tell you that their prophet is better than your prophet.” — Dan Shechtman
    http://www.youtube.com/watch?v=oa1GMwXuBwo “The Discovery of Quasicrystals – Dan Shechtman – YouTube”
    “The time will come, however, when it will be evident to everyone that I have been right.” — Joy Christian, 2012
    http://arxiv.org/find/all/1/au:+Christian_Joy/0/1/0/all/0/1
    What might Christian’s theory (with the infinite nature hypothesis) imply in terms of (1) astrophysics, (2) particle physics, and (3) condensed matter physics?
    (1) Rañada-Milgrom effect; black holes correctly modeled by M-theory;
    (2) no superpartners; Higgs field exists in some form;
    (3) 11-dimensional physical reality experimentally verified in superconductivity and superfluidity.
    J. Christian claims that the familiar quantum states with complex numbers (quantum SU(1) states) should be replaced by quantum SU(8) states based upon octonions. Are octonions needed for the physical interpretation of M-theory?
    http://en.wikipedia.org/wiki/M-theory
    Is the space roar the fundamental empirical proof of NKS Chapter 9?
    http://en.wikipedia.org/wiki/Space_roar
    Does NKS Chapter 9 eliminate the need for the Higgs field?

  6. Stephen Wolfram’s Mathematica is perhaps one of the great successes of augmentation – bridging new methods of mathematics into science.

    While science has accelerated apace, the mathematical artillery needed for modern science is daunting.

    David Brown, above, cites a few examples ripe with challenges.

    An evolution of cellular automata (NKS) might become a visualizer or gateway toward a common understanding of representation theory.

    In science the knowledge ‘gap’ between scientists can often be attributed to knowledge of mathematics – or the lack thereof.

    Stephen Wolfram’s NKS (cellular automata) is just the beginning of wave after wave of visualization tools for the big data and big science of this century.

  7. You might be interested in some recent findings published in our website http://www.aias.us. In a nutshell, NKS has been used (even though we did not know of its existence under that name until today, when I read your essay published on the kurzweil website) to develop a new universal law of gravitation. As you probably know already, the original version of this universal law of gravitation was that of Newton (although in reality it should be called Hooke/Newton, because it was originally an idea developed by Hooke and communicated to Newton, as historical records show), which states that the force of gravitational attraction between two celestial bodies is obtained by multiplying the masses of the two interacting bodies and dividing this product by the square of their mutual distance, all then multiplied by Newton´s constant. When this is graphed you obtain an ellipse, which is where the whole concept started, due to the discovery by Kepler that the orbit of Mars was an ellipse. Unfortunately, the trajectories of the planets around the Sun are not simply elliptical, but elliptical with precession, which means that the elliptical trajectory also moves around a focus point. In the 1920s, Einstein claimed to have solved the problem through the use of relativity theory, but if anybody graphs Einstein´s equation he/she will not obtain a precessing ellipse at all, so he was dead wrong. Through the use of Lagrangian mathematical methods a simple equation was recently discovered which in fact gives a perfect elliptical orbit with precession, with no need of relativity, and if you change slightly the value of the x parameter (the precession constant) and graph your results you can obtain orbits which explain, for example, the trajectories of stars in a spiral galaxy, which could not be explained until now and therefore gave origin to the bizarre “dark matter” concept in order to find some sort of gravitational explanation for such trajectories. But it gets even better. For further changes in the value of the precession constant x you start to obtain all sorts of fractal designs, which can explain all kinds of trajectories found so far by astronomers across the universe. And it gets better still, because the structure of the equation obtained is identical to that of Schroedinger for the atomic scale. What about that!!!! By the way, the http://www.aias.us website is the most visited physics website worldwide, with an average of more than 500 visits per day for the last 8 years, with regular visits from ALL the most important research centers worldwide, from over 100 countries, every month. If the subject mentioned above is of interest to you, I suggest you visit the website, go to the UFT papers section, and read the most recent papers published (UFT 210 ff).

  8. “making knowledge broadly computable”…..”but with ultimate purposes still of necessity defined by humans.” We have here the statement which will push buttons in most people, due to their religious beliefs or fear of technology. The set of knowledge can only be limited by human purposes, if defined by them. That limit will not hold. Personally I am not bothered by this, though perhaps I should be?

  9. “For an intellectual framework of the magnitude of NKS, a decade is a very short time.” Is the NKS framework what is needed for the foundations of physics? Are the Gravity Probe B results actually explained by NKS Chapter 9 and the Rañada-Milgrom effect?
    http://en.wikipedia.org/wiki/Gravity_Probe_B
    Is “A New Kind of Science” (NKS) Chapter 9 essential for understanding the foundations of physics?
    http://en.wikipedia.org/wiki/A_New_Kind_of_Science
    I claim that the answer is yes and that there are 2 decisive empirical tests for NKS Chapter 9: (1) the Rañada-Milgrom effect, as a precise change to general relativity theory and (2) the Space Roar Profile Prediction. Why is the Rañada-Milgrom effect approximately correct? The effect is at least approximately true because the work of Milgrom, McGaugh, and Kroupa verifies Milgrom’s acceleration law for galactic rotation curves and then an easy scaling argument shows that the Rañada-Milgrom effect is approximately equivalent to Milgrom’s acceleration law. There is no accepted explanation of the space roar within the current paradigm of physics, but if nature is finite and digital then there is a plausible explanation for the space roar.
    Is Bell’s theorem a potential stumbling block for NKS Chapter 9?
    http://en.wikipedia.org/wiki/Bell_theorem
    Consider some ideas related to Bell’s theorem and M-theory:
    http://vixra.org/pdf/1205.0070v1.pdf “Foundations of Physics: Edward Witten versus Joy Christian versus Stephen Wolfram”
    http://vixra.org/pdf/1204.0095v1.pdf “Seiberg-Witten M-theory as an Almost Successful Predictive Theory”
    http://vixra.org/pdf/1205.0067v2.pdf “Is Christian’s Parallelized 7-sphere Model Essential for the Physical Interpretation of M-theory?”
    http://vixra.org/pdf/1202.0092v1.pdf “Finite Nature Hypothesis and Space Roar Profile Prediction”
    NKS Chapter 9 advocates nonlocal realism with a finite automaton modeling nature. J. Christian advocates local realism with the infinite nature hypothesis. Is Christian’s model a useful way to overcome the Bell theorem problems? Think of nature with a smooth sequence of 7-spheres containing the information needed to explain Heisenberg’s Uncertainty Principle. At each point in time, nature looks at a 3-sphere of information, picks out a spatial point in the 3-sphere, and then searches the time-appropriate 7-sphere for a match to a 4-dimensional spacetime point. Nature retrieves a randomizing 3-dimensional vector with information regarding momentum in each of 3 different directions. Nature then uses the randomizing vector to simulate the uncertainty found in the Heisenberg Uncertainty Principle. Within this concept of local realism, nature performs implicit or explicit measurements by means of a randomizing vector apparatus. Nature’s randomization might be really the destiny of measurements in the physical universe. By explaining J. Christian’s local realism in terms of Wolfram’s nonlocal computational realism, we might find 4 or 5 simple axioms that characterize Wolfram’s mobile automaton.

  10. In a general sense all of numerical mathematics can be considered to be a form of NKS. One can see that very clearly by considering the finite difference equations used to solve partial differential equations.
    What solutions would we get if we modified the difference equations a bit? For instance, we could use not the arithmetical average of neighbouring cells but some other rule. Or we could also take more distant neighbours into account.
    We might then find that partial differential equations are not the best description for real-world problems.

  11. An excellent book marking the 10th anniversary of Wolfram’s A New Kind of Science has recently been published by Springer Verlag under the title Irreducibility and Computational Equivalence: Ten Years After Wolfram’s A New Kind of Science, edited by H. Zenil. It is available through Amazon (http://www.amazon.com/Irreducibility-Computational-Equivalence-Complexity-Computation/dp/3642354815) and other sellers, including Springer itself.

    Table of Contents:

    Foreword
    Gregory Chaitin

    Part I Mechanisms in Programs & Nature

    1. Cellular Automata: Models of the Physical World
    Herbert W. Franke

    2. On the Necessity of Complexity
    Joost J. Joosten

    3. A Lyapunov View on the Stability of Cellular Automata
    Jan M. Baetens & Bernard De Baets

    Part II Systems Based in Numbers & Simple Programs

    4. Hyperbolic Cellular Automata
    Maurice Margenstern

    5. Symmetry and Complexity of Cellular Automata: Towards an Analytical Theory of Dynamical Systems
    Karl Mainzer

    6. A New Kind of Science: Ten Years Later
    David H. Bailey

    Part III Social, Biological Systems & Technology

    7. A New Kind of Finance
    Philip Z. Maymin

    8. The Relevance of Computation Irreducibility and Computation Universality in Economics
    K. Vela Velupillai

    9. Exploring the Sources of and Nature of Computational Irreducibility
    Brian Beckage, Stuart Kauffman, Louis Gross, Asim Zia, Gabor Vattay and Chris Koliba

    10. Computational Technosphere and Cellular Engineering
    Mark Burgin

    Part IV Fundamental Physics

    11. The Principle of a Finite Density of Information
    Gilles Dowek and Pablo Arrighi

    12. Do Particles Evolve?
    Tommaso Bolognesi

    13. Artificial Cosmogenesis: A New Kind of Cosmology
    Clément Vidal

    Part V Behavior of Systems & the Notion of Computation

    14. An Incompleteness Theorem for the Natural World
    Rudy Rucker

    15. Pervasiveness of Universalities of Cellular Automata: Fascinating Life-like Behaviours
    Emmanuel Sapin

    16. A Spectral Portrait of the Elementary Cellular Automata Rule Space
    Eurico L.P. Ruivo and Pedro P.B. de Oliveira

    17. Wolfram’s Classification and Computation in Cellular Automata Classes III and IV
    Genaro J. Martinez, Juan Carlos Seck Tuoh Mora and Hector Zenil

    Part VI Irreducibility & Computational Equivalence

    18. Exploring the Computational Limits of Haugeland’s Game as a Two-Dimensional Cellular Automaton
    Drew Reisinger, Taylor Martin, Mason Blankenship, Christopher Harrison, Jesse Squires and Anthony Beavers

    19. Unpredictability and Computational Irreducibility
    Hervé Zwirn and Jean-Paul Delahaye

    20. Computational Equivalence and Classical Recursion Theory
    Klaus Sutner

    Part VII Reflections and Philosophical Implications

    21. Wolfram and the Computing Nature
    Gordana Dodig-Crnkovic

    22. A New Kind of Philosophy. Manifesto for a Digital Ontology
    Jacopo Tagliabue

    23. Free Will and A New Kind of Science
    Selmer Bringsjord

    Afterword
    Cristian Calude

  12. Is it sensible to ask whether, just as there may exist a simple rule that captures the essence and characteristics of our physical universe, there could also be a simple rule that would characterize the essence and characteristics of the human mind?