The New World of LLM Functions: Integrating LLM Technology into the Wolfram Language

This is part of an ongoing series about our LLM-related technology:
ChatGPT Gets Its “Wolfram Superpowers”!
Instant Plugins for ChatGPT: Introducing the Wolfram ChatGPT Plugin Kit
The New World of LLM Functions: Integrating LLM Technology into the Wolfram Language
Prompts for Work & Play: Launching the Wolfram Prompt Repository
Introducing Chat Notebooks: Integrating LLMs into the Notebook Paradigm

Turning LLM Capabilities into Functions

So far, we mostly think of LLMs as things we interact directly with, say through chat interfaces. But what if we could take LLM functionality and “package it up” so that we can routinely use it as a component inside anything we’re doing? Well, that’s what our new LLMFunction is about.
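As a rough sketch of what this looks like in practice (the particular prompt here is just an illustrative example, and evaluating it assumes access to an LLM service has been configured):

(* A minimal sketch: package an LLM prompt as a reusable Wolfram Language function. *)
(* The prompt text is illustrative; evaluation requires configured LLM access. *)
story = LLMFunction["Write a one-sentence story about `1`."];
story["a curious armadillo"]

(* Because the result behaves like an ordinary function, it can be used as a *)
(* component in larger programs, e.g. mapped over a list: *)
story /@ {"a cat", "a teapot"}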

Instant Plugins for ChatGPT: Introducing the Wolfram ChatGPT Plugin Kit

Build a New Plugin in under a Minute…

A few weeks ago, in collaboration with OpenAI, we released the Wolfram plugin for ChatGPT, which lets ChatGPT use Wolfram Language and Wolfram|Alpha as tools, automatically called from within ChatGPT. One can think of this as adding broad “computational superpowers” to ChatGPT, giving access to all the general computational capabilities and computational knowledge in Wolfram Language and Wolfram|Alpha.

But what if you want to make your own special plugin, one that does specific computations, or has access to data or services that are, for example, available only on your own computer or computer system? Well, today we’re releasing a first version of a kit for doing that. And building on our whole Wolfram Language tech stack, we’ve managed to make the whole process extremely easy—to the point where it’s now realistic to deploy at least a basic custom ChatGPT plugin in under a minute.
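To give a flavor of what this builds on (the following is only a sketch of the underlying Wolfram Language cloud deployment, not the plugin kit’s own deployment function; the endpoint path and computation are illustrative), a ChatGPT plugin is ultimately a web API that ChatGPT can call, and a simple API endpoint can be stood up in Wolfram Language like this:

(* A sketch of the kind of cloud API endpoint a ChatGPT plugin talks to. *)
(* The path and computation here are illustrative examples. *)
api = APIFunction[{"n" -> "Integer"},
   <|"factorization" -> FactorInteger[#n]|> &,
   "JSON"];
CloudDeploy[api, "plugin-demo/factor", Permissions -> "Public"]

What the kit adds on top of this kind of deployment is the plugin-specific packaging (such as the manifest and endpoint descriptions that ChatGPT needs), which is what makes it realistic to go from a Wolfram Language function to an installed plugin in under a minute.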

ChatGPT Gets Its “Wolfram Superpowers”!

See also:
“What Is ChatGPT Doing … and Why Does It Work?” »

To enable the functionality described here, select and install the Wolfram plugin from within ChatGPT.

Note that this capability is so far available only to some ChatGPT Plus users; for more information, see OpenAI’s announcement.

In Just Two and a Half Months…

Early in January I wrote about the possibility of connecting ChatGPT to Wolfram|Alpha. And today—just two and a half months later—I’m excited to announce that it’s happened! Thanks to some heroic software engineering by our team and by OpenAI, ChatGPT can now call on Wolfram|Alpha—and Wolfram Language as well—to give it what we might think of as “computational superpowers”. It’s still very early days for all of this, but it’s already very impressive—and one can begin to see how amazingly powerful (and perhaps even revolutionary) what we can call “ChatGPT + Wolfram” can be.

Back in January, I made the point that, as an LLM neural net, ChatGPT—for all its remarkable prowess in textually generating material “like” what it’s read from the web, etc.—can’t itself be expected to do actual nontrivial computations, or to systematically produce correct (rather than just “looks roughly right”) data, etc. But when it’s connected to the Wolfram plugin it can do these things. So here’s my (very simple) first example from January, but now done by ChatGPT with “Wolfram superpowers” installed.

Will AIs Take All Our Jobs and End Human History—or Not? Well, It’s Complicated…

The Shock of ChatGPT

Just a few months ago writing an original essay seemed like something only a human could do. But then ChatGPT burst onto the scene. And suddenly we realized that an AI could write a passable human-like essay. So now it’s natural to wonder: How far will this go? What will AIs be able to do? And how will we humans fit in?

My goal here is to explore some of the science, technology—and philosophy—of what we can expect from AIs. I should say at the outset that this is a subject fraught with both intellectual and practical difficulty. And all I’ll be able to do here is give a snapshot of my current thinking—which will inevitably be incomplete—not least because, as I’ll discuss, trying to predict how history in an area like this will unfold is something that runs straight into an issue of basic science: the phenomenon of computational irreducibility.

What Is ChatGPT Doing … and Why Does It Work?

See also:
“LLM Tech Comes to Wolfram Language” »
A discussion about the history of neural nets »

It’s Just Adding One Word at a Time

That ChatGPT can automatically generate something that reads even superficially like human-written text is remarkable, and unexpected. But how does it do it? And why does it work? My purpose here is to give a rough outline of what’s going on inside ChatGPT—and then to explore why it is that it can do so well in producing what we might consider to be meaningful text. I should say at the outset that I’m going to focus on the big picture of what’s going on—and while I’ll mention some engineering details, I won’t get deeply into them. (And the essence of what I’ll say applies just as well to other current “large language models” [LLMs] as to ChatGPT.)

The first thing to explain is that what ChatGPT is always fundamentally trying to do is to produce a “reasonable continuation” of whatever text it’s got so far, where by “reasonable” we mean “what one might expect someone to write after seeing what people have written on billions of webpages, etc.”
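To make the “one word at a time” idea concrete, here is a toy Wolfram Language sketch (a deliberate caricature, nothing like a real LLM in scale or sophistication): it repeatedly picks a next word at random, weighted by how often that word follows the current one in a sample text:

(* A toy caricature of "reasonable continuation": pick each next word according *)
(* to how often it follows the current word in a sample text. *)
words = TextWords[ToLowerCase[ExampleData[{"Text", "AliceInWonderland"}]]];
pairs = Partition[words, 2, 1];
nextWord[w_] := With[{followers = Cases[pairs, {w, f_} :> f]},
   If[followers === {}, RandomChoice[words], RandomChoice[followers]]];
StringRiffle[NestList[nextWord, "alice", 15]]

A real LLM does something vastly more elaborate, using a trained neural net to estimate probabilities over possible continuations of the whole preceding text, but the basic loop of producing one token at a time based on what came before is the same.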

Computational Foundations for the Second Law of Thermodynamics

The Mystery of the Second Law

Entropy increases. Mechanical work irreversibly turns into heat. The Second Law of thermodynamics is considered one of the great general principles of physical science. But 150 years after it was first introduced, there’s still something deeply mysterious about the Second Law. It almost seems like it’s going to be “provably true”. But one never quite gets there; it always seems to need something extra. Sometimes textbooks will gloss over everything; sometimes they’ll give some kind of “common-sense-but-outside-of-physics argument”. But the mystery of the Second Law has never gone away.

Why does the Second Law work? And does it even in fact always work, or is it actually sometimes violated? What does it really depend on? What would be needed to “prove it”?

For me personally the quest to understand the Second Law has been no less than a 50-year story. But back in the 1980s, as I began to explore the computational universe of simple programs, I discovered a fundamental phenomenon that was immediately reminiscent of the Second Law. And in the 1990s I started to map out just how this phenomenon might finally be able to demystify the Second Law. But it is only now—with ideas that have emerged from our Physics Project—that I think I can pull all the pieces together and finally be able to construct a proper framework to explain why—and to what extent—the Second Law is true.
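As a small taste of the kind of “simple programs” being referred to (this is just an illustrative Wolfram Language example, not the framework developed in the piece itself), even a minimal cellular automaton rule, started from a single cell, rapidly produces behavior that looks random, the sort of seemingly irreversible “degradation into randomness” that is reminiscent of the Second Law:

(* Rule 30 cellular automaton from a single black cell: a very simple rule *)
(* whose behavior quickly looks random. *)
ArrayPlot[CellularAutomaton[30, {{1}, 0}, 100]]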

A 50-Year Quest: My Personal Journey with the Second Law of Thermodynamics

When I Was 12 Years Old…

I’ve been trying to understand the Second Law now for a bit more than 50 years.

It all started when I was 12 years old. Building on an earlier interest in space and spacecraft, I’d gotten very interested in physics, and was trying to read everything I could about it. There were several shelves of physics books at the local bookstore. But what I coveted most was the largest physics book collection there: a series of five plushly illustrated college textbooks. And as a kind of graduation gift when I finished (British) elementary school in June 1972 I arranged to get those books. And here they are, still on my bookshelf today, just a little faded, more than half a century later.

How Did We Get Here? The Tangled History of the Second Law of Thermodynamics

The Basic Arc of the Story

As I’ve explained elsewhere, I think I now finally understand the Second Law of thermodynamics. But it’s a new understanding, and to get to it I’ve had to overcome a certain amount of conventional wisdom about the Second Law that I at least have long taken for granted. And to check myself I’ve been keen to know just where this conventional wisdom came from, how it’s been validated, and what might have made it go astray.

And from this I’ve been led into a rather detailed examination of the origins and history of thermodynamics. All in all, it’s a fascinating story, one that both explains what’s been believed about thermodynamics and provides some powerful examples of the complicated dynamics of the development and acceptance of ideas.

Wolfram|Alpha as the Way to Bring Computational Knowledge Superpowers to ChatGPT

See also:
“What Is ChatGPT Doing … and Why Does It Work?” »

ChatGPT and Wolfram|Alpha

It’s always amazing when things suddenly “just work”. It happened to us with Wolfram|Alpha back in 2009. It happened with our Physics Project in 2020. And it’s happening now with OpenAI’s ChatGPT.

The Latest from Our R&D Pipeline: Version 13.2 of Wolfram Language & Mathematica

Exploring Wolfram Language 13.2 with Stephen Wolfram

Delivering from Our R&D Pipeline

In 2020 it was Versions 12.1 and 12.2; in 2021 Versions 12.3 and 13.0. In late June this year it was Version 13.1. And now we’re releasing Version 13.2. We continue to have a huge pipeline of R&D, some short term, some medium term, some long term (like decade-plus). Our goal is to deliver timely snapshots of where we’re at—so people can start using what we’ve built as quickly as possible.