The Ruliology of Lambdas

What Are Lambdas?

It’s a story of pure, abstract computation. In fact, historically, one of the very first. But even though it’s something I for one have used in practice for nearly half a century, it’s not something that in all my years of exploring simple computational systems and ruliology I’ve ever specifically studied. And, yes, it involves some fiddly technical details. But it’ll turn out that lambdas—like so many systems I’ve explored—have a rich ruliology, made particularly significant by their connection to practical computing.

In Wolfram Language it’s the Function function. Back when Alonzo Church first discussed it in the 1930s he called it λ (lambda). The idea is to have something that serves as a “pure function”—which can be applied to an argument to give a value. For example, in the Wolfram Language one might have:
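
As a minimal sketch of the idea (an illustration here, not necessarily the example from the original post), a pure function that squares its argument can be written with Function and then applied to an argument:

    (* a pure function that squares its argument, applied to 5 *)
    Function[x, x^2][5]
    (* → 25 *)

    (* the equivalent shorthand “slot” form *)
    (#^2 &)[5]
    (* → 25 *)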

Continue reading

“I Have a Theory Too”: The Challenge and Opportunity of Avocational Science

Theories of the World

Several often arrive in a single day. Sometimes they’re marked “urgent”. Sometimes they’re long. Sometimes they’re short. Sometimes they’re humble. Sometimes they’re conspiratorial. And sometimes, these days, they’re written “in collaboration with” an AI. But there’s a common theme: they’re all emails that present some kind of fundamental theory invented by their authors (or perhaps their AI) about how our universe works.

At some level it’s encouraging to see how many people find it interesting to think about fundamental questions in science. But at another level it’s also, to me, very frustrating. All that effort being spent. And so much of it so wide of the mark. Most of the time it’s based on at best high-school physics—missing everything that was learned in twentieth-century physics. Sometimes it’s easy to tell that what’s being said just can’t be right; often things are too vague or tangled for one to be able to say much.

Most physicists term people who send such theories “crackpots”, and either discard their missives or send back derisive responses. I’ve never felt like that was the right thing to do. Somehow I’ve always felt as if there has to be a way to channel that interest and effort into something that would be constructive and fulfilling for all concerned. And maybe, just maybe, I now have at least one idea in that direction. Continue reading

New Features Everywhere: Launching Version 14.3 of Wolfram Language & Mathematica

This Is a Big Release

Version 14.2 launched on January 23 of this year. Now, today, just over six months later, we’re launching Version 14.3. And despite its modest .x designation, it’s a big release, with lots of important new and updated functionality, particularly in core areas of the system.

I’m particularly pleased to be able to report that in this release we’re delivering an unusually large number of long-requested features. Why didn’t they come sooner? Well, they were hard—at least to build to our standards. But now they’re here, ready for everyone to use.

Those who’ve been following our livestreamed software design reviews (42 hours of them since Version 14.2) may get some sense of the effort we put into getting the design of things right. And in fact we’ve been consistently putting in that kind of effort now for nearly four decades—ever since we started developing Version 1.0. And the result is something that I think is completely unique in the software world—a system that is consistent and coherent through and through, and that has maintained compatibility for 37 years. Continue reading

What If We Had Bigger Brains? Imagining Minds beyond Ours

Cats Don’t Talk

We humans have perhaps 100 billion neurons in our brains. But what if we had many more? Or what if the AIs we built effectively had many more? What kinds of things might then become possible? At 100 billion neurons, we know, for example, that compositional language of the kind we humans use is possible. At the 100 million or so neurons of a cat, it doesn’t seem to be. But what would become possible with 100 trillion neurons? And is it even something we could imagine understanding?

My purpose here is to start exploring such questions, informed by what we’ve seen in recent years in neural nets and LLMs, as well as by what we now know about the fundamental nature of computation, and about neuroscience and the operation of actual brains (like the one that’s writing this, imaged here):

Continue reading

What Can We Learn about Engineering and Innovation from Half a Century of the Game of Life Cellular Automaton?

Metaengineering and Laws of Innovation

Things are invented. Things are discovered. And somehow there’s an arc of progress that’s formed. But are there what amount to “laws of innovation” that govern that arc of progress?

There are some exponential and other laws that purport to at least measure overall quantitative aspects of progress (number of transistors on a chip; number of papers published in a year; etc.). But what about all the disparate innovations that make up the arc of progress? Do we have a systematic way to study those?

We can look at the plans for different kinds of bicycles or rockets or microprocessors. And over the course of years we’ll see the results of successive innovations. But most of the time those innovations won’t stay within one particular domain—say shapes of bicycle frames. Rather they’ll keep on pulling in innovations from other domains—say, new materials or new manufacturing techniques. But if we want to get closer to the study of the pure phenomenon of innovation we need a case where—preferably over a long period of time—everything that happens can be described in a uniform way within a single narrowly defined framework. Continue reading

Towards a Computational Formalization for Foundations of Medicine

A Theory of Medicine?

As it’s practiced today, medicine is almost always about particulars: “this has gone wrong; this is how to fix it”. But might it also be possible to talk about medicine in a more general, more abstract way—and perhaps to create a framework in which one can study its essential features without engaging with all of its details?

My goal here is to take the first steps towards such a framework. And in a sense my central result is that there are many broad phenomena in medicine that seem at their core to be fundamentally computational—and to be captured by remarkably simple computational models that are readily amenable to study by computer experiment.

I should make it clear at the outset that I’m not trying to set up a specific model for any particular aspect or component of biological systems. Rather, my goal is to “zoom out” and create what one can think of as a “metamodel” for studying and formalizing the abstract foundations of medicine. Continue reading

Launching Version 14.2 of Wolfram Language & Mathematica: Big Data Meets Computation & AI

The Drumbeat of Releases Continues…

Just under six months ago (176 days ago, to be precise) we released Version 14.1. Today I’m pleased to announce that we’re releasing Version 14.2, delivering the latest from our R&D pipeline.

This is an exciting time for our technology, both in terms of what we’re now able to implement, and in terms of how our technology is now being used in the world at large. A notable feature of these times is the increasing use of Wolfram Language not only by humans, but also by AIs. And it’s very nice to see that all the effort we’ve put into consistent language design, implementation and documentation over the years is now paying dividends in making Wolfram Language uniquely valuable as a tool for AIs—complementing their own intrinsic capabilities. Continue reading

Who Can Understand the Proof? A Window on Formalized Mathematics

Related writings:
“Logic, Explainability and the Future of Understanding” (2018) »
“The Physicalization of Metamathematics and Its Implications for the Foundations of Mathematics” (2022) »
“Computational Knowledge and the Future of Pure Mathematics” (2014) »

The Simplest Axiom for Logic

Theorem (Wolfram with Mathematica, 2000):
The single axiom ((a•b)•c)•(a•((a•c)•a)) == c is a complete axiom system for Boolean algebra (and is the simplest possible)

For more than a century people had wondered how simple the axioms of logic (Boolean algebra) could be. On January 29, 2000, I found the answer—and made the surprising discovery that they could be about twice as simple as anyone knew. (I also showed that what I found was the simplest possible.)
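
As a quick illustrative check (not part of the original result), one can verify by brute-force enumeration that, reading • as Nand, the axiom holds for all eight Boolean assignments of a, b and c:

    (* check the axiom over all Boolean values, with • read as Nand *)
    nand[x_, y_] := !(x && y);
    AllTrue[Tuples[{True, False}, 3],
      Apply[Function[{a, b, c},
        nand[nand[nand[a, b], c], nand[a, nand[nand[a, c], a]]] === c]]]
    (* → True *)

(Being a Nand tautology is only a necessary condition, of course; the substance of the theorem is that this single equation generates all of Boolean algebra.)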

It was an interesting result—that gave new intuition about just how simple the foundations of things can be, and for example helped inspire my efforts to find a simple underlying theory of physics.

But how did I get the result? Well, I used automated theorem proving (specifically, what’s now FindEquationalProof in Wolfram Language). Automated theorem proving is something that’s been around since at least the 1950s, and its core methods haven’t changed in a long time. But in the rare cases it’s been used in mathematics it’s typically been to confirm things that were already believed to be true. And in fact, to my knowledge, my Boolean algebra axiom is actually the only truly unexpected result that’s ever been found for the first time using automated theorem proving. Continue reading
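
As an illustrative sketch in present-day Wolfram Language (not the original 2000 computation, and with the operator • written as an ordinary symbol nand), one can ask FindEquationalProof to derive a standard Boolean law, such as commutativity, from the single axiom:

    (* sketch: derive commutativity of • (written here as nand) from the single axiom *)
    axiom = ForAll[{a, b, c},
       nand[nand[nand[a, b], c], nand[a, nand[nand[a, c], a]]] == c];
    FindEquationalProof[nand[p, q] == nand[q, p], axiom]
    (* returns a ProofObject if a proof is found *)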

Useful to the Point of Being Revolutionary: Introducing Wolfram Notebook Assistant

Note: As of today, copies of Wolfram Version 14.1 are being auto-updated to allow subscription access to the capabilities described here. [For additional installation information see here.]

Just Say What You Want! Turning Words into Computation

Nearly a year and a half ago—just a few months after ChatGPT burst on the scene—we introduced the first version of our Chat Notebook technology to integrate LLM-based chat into Wolfram Notebooks. For the past year and a half we’ve been building on those foundations. And today I’m excited to be able to announce that we’re releasing the fruits of those efforts: the first version of our Wolfram Notebook Assistant.

There are all sorts of gimmicky AI assistants out there. But Notebook Assistant isn’t one of them. It’s a serious, deep piece of new technology, and what’s more important, it’s really, really useful! In fact, I think it’s so useful as to be revolutionary. Personally, I thought I was a pretty efficient user of Wolfram Language—but Notebook Assistant has immediately made me not only significantly more efficient, but also more ambitious in what I try to do. I hadn’t imagined just how useful Notebook Assistant was going to be. But seeing it now I can say for sure that it’s going to raise the bar for what everyone can do. And perhaps most important of all, it’s going to open up computational language and computational thinking to a vast range of new people, who in the past assumed that those things just weren’t accessible to them.

Leveraging the decades of work we’ve done on the design and implementation of the Wolfram Language (and Wolfram|Alpha), Notebook Assistant lets people just say in their own words what they want to do; then it does its best to crispen it up and give a computational implementation. Sometimes it goes all the way and just delivers the answer. But even when there’s no immediate “answer” it does remarkably well at building up structures where things can be represented computationally and tackled concretely. People really don’t need to know anything about computational language—or computational thinking—to get started; Notebook Assistant will take their ideas, rough as they may be, and frame them in computational language terms. Continue reading

Foundations of Biological Evolution: More Results & More Surprises

This is a follow-on to Why Does Biological Evolution Work? A Minimal Model for Biological Evolution and Other Adaptive Processes [May 3, 2024].

Even More from an Extremely Simple Model

A few months ago I introduced an extremely simple “adaptive cellular automaton” model that seems to do remarkably well at capturing the essence of what’s happening in biological evolution. But over the past few months I’ve come to realize that the model is actually even richer and deeper than I’d imagined. And here I’m going to describe some of what I’ve now figured out about the model—and about the often-surprising things it implies for the foundations of biological evolution.

The starting point for the model is to view biological systems in abstract computational terms. We think of an organism as having a genotype that’s represented by a program, that’s then run to produce its phenotype. So, for example, the cellular automaton rule on the left corresponds to a genotype, which is then run to produce the phenotype on the right (starting from a “seed” of a single red cell):
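
As a minimal sketch of that genotype→phenotype setup (with an arbitrary elementary rule standing in for the ones pictured in the post), one can run a cellular automaton “genotype” from a single-cell seed and plot the resulting “phenotype”:

    (* run an illustrative cellular automaton rule from a single-cell seed *)
    ArrayPlot[CellularAutomaton[73, {{1}, 0}, 100]]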

Continue reading