The Ease of Wolfram|Alpha, the Power of Mathematica: Introducing Wolfram|Alpha Notebook Edition


The Next Big Step for Wolfram|Alpha

Wolfram|Alpha has been a huge hit with students. Whether they’re in college or high school, students have made Wolfram|Alpha a ubiquitous way to get answers. But it’s a one-shot process: a student enters the question they want to ask (say in math) and Wolfram|Alpha gives them the (usually richly contextualized) answer. It’s incredibly useful—especially when coupled with its step-by-step solution capabilities.

But what if one doesn’t want just a one-shot answer? What if one wants to build up (or work through) a whole computation? Well, that’s what we created Mathematica and its whole notebook interface to do. And for more than 30 years that’s how countless inventions and discoveries have been made around the world. It’s also how generations of higher-level students have been taught.

But what about students who aren’t ready to use Mathematica yet? What if we could take the power of Mathematica (and what’s now the Wolfram Language), but combine it with the ease of Wolfram|Alpha?

Well, that’s what we’ve done in Wolfram|Alpha Notebook Edition. Continue reading

A Book from Alan Turing… and a Mysterious Piece of Paper


How I Got the Book

In May 2017, I got an email from a former high-school teacher of mine named George Rutter: “I have a copy of Dirac’s big book in German (Die Prinzipien der Quantenmechanik) that was owned by Alan Turing, and following your book Idea Makers it seemed obvious that you were the right person to own this.” He explained that he’d got the book from another (by then deceased) former high-school teacher of mine, Norman Routledge, who I knew had been a friend of Alan Turing’s. George ended, “If you would like the book, I could give it to you the next time you are in England.”

A couple of years passed. But in March 2019 I was indeed in England, and arranged to meet George for breakfast at a small hotel in Oxford. We ate and chatted, and waited for the food to be cleared. Then the book moment arrived. George reached into his briefcase and pulled out a rather unassuming, typical mid-1900s academic volume. Continue reading

Fifty Years of Mentoring

I’ve been reflecting recently on things I like to do. Of course I like creating things, figuring things out, and so on. But something else I like—that I don’t believe I’ve ever written about before—is mentoring. I’ve been doing it for a shockingly long time: my first memories of it date from before I was 10 years old, 50 years ago. Somehow I always ended up being the one giving lots of advice—first to kids my own age, then also to ones somewhat younger or older, and later to all sorts of people.

I was in England recently, and ran into someone I’d known as a kid nearly 50 years ago—and hadn’t seen since. He’s had a fascinating and successful career, but was kind enough to say that my interactions with him, and the advice I gave him, nearly 50 years ago had really been important to him. Of course it’s nice to hear things like that—but as I reflect on it, I realize that mentoring is something I find fulfilling, whether or not I ever find out whether the seeds I’ve sown have germinated (though, to be clear, I do find it fascinating to see what happens). Continue reading

Mitchell Feigenbaum (1944–2019), 4.66920160910299067185320382…

Mitchell Feigenbaum
(Artwork by Gunilla Feigenbaum)

Behind the Feigenbaum Constant

It’s called the Feigenbaum constant, and it’s about 4.6692016. And it shows up, quite universally, in certain kinds of mathematical—and physical—systems that can exhibit chaotic behavior.

Mitchell Feigenbaum, who died on June 30 at the age of 74, was the person who discovered it—back in 1975, by doing experimental mathematics on a pocket calculator.
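
To get a sense of the kind of experimental mathematics involved, here is a minimal sketch (in Python with SciPy, rather than on a pocket calculator, and not Feigenbaum’s actual procedure): it locates the “superstable” parameter values of the logistic map x -> r x (1 - x), that is, the values of r at which the point x = 1/2 lies on a cycle of length 2^n, and the ratios of successive gaps between those values approach 4.6692….

    # A minimal sketch of the period-doubling calculation behind the Feigenbaum
    # constant: find the "superstable" parameters r_n of the logistic map
    # x -> r*x*(1-x), i.e. the r at which x = 1/2 lies on a cycle of length 2^n.
    # The gaps r_{n+1} - r_n shrink by a factor converging to delta ~ 4.6692.
    from scipy.optimize import brentq

    def orbit_offset(r, n):
        """f^(2^n)(1/2) - 1/2 for the logistic map with parameter r."""
        x = 0.5
        for _ in range(2 ** n):
            x = r * x * (1.0 - x)
        return x - 0.5

    # The first two superstable parameters are known exactly: r_0 = 2, r_1 = 1 + sqrt(5).
    rs = [2.0, 1.0 + 5 ** 0.5]

    for n in range(2, 12):
        gap = rs[-1] - rs[-2]
        guess = rs[-1] + gap / 4.669   # extrapolate where the next superstable point falls
        half = gap / 20                # a bracket tight enough to contain only that root
        rs.append(brentq(lambda r: orbit_offset(r, n), guess - half, guess + half))

    deltas = [(rs[i] - rs[i - 1]) / (rs[i + 1] - rs[i]) for i in range(1, len(rs) - 1)]
    print(deltas[-1])   # ~4.669, approaching 4.66920160910299...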

It became a defining discovery in the history of chaos theory. But when it was first discovered, it was a surprising, almost bizarre result that didn’t really connect with anything that had been studied before. Somehow, though, it’s fitting that it should have been Mitchell Feigenbaum—who I knew for nearly 40 years—who would discover it.

Trained in theoretical physics, and a connoisseur of its mathematical traditions, Mitchell always seemed to see himself as an outsider. He looked a bit like Beethoven—and projected a certain stylish sense of intellectual mystery. He would often make strong assertions, usually with a conspiratorial air, a twinkle in his eye, and a glass of wine or a cigarette in his hand. Continue reading

Testifying at the Senate about A.I.-Selected Content on the Internet

Optimizing for Engagement: Understanding the Use of Persuasive Technology on Internet Platforms

An Invitation to Washington

Three and a half weeks ago I got an email asking me if I’d testify at a hearing of the US Senate Commerce Committee’s Subcommittee on Communications, Technology, Innovation and the Internet. Given that the title of the hearing was “Optimizing for Engagement: Understanding the Use of Persuasive Technology on Internet Platforms”, I wasn’t sure why I’d be relevant.

But then the email went on: “The hearing is intended to examine, among other things, whether algorithmic transparency or algorithmic explanation are policy options Congress should be considering.” That piqued my interest, because, yes, I have thought about “algorithmic transparency” and “algorithmic explanation”, and their implications for the deployment of artificial intelligence.

Generally I stay far away from anything to do with politics. But figuring out how the world should interact with AI is really important. So I decided that—even though it was logistically a bit difficult—I should do my civic duty and go to Washington and testify. Continue reading

My Part in an Origin Story:
The Launching of the Santa Fe Institute

The first workshop to define what is now the Santa Fe Institute took place on October 5–6, 1984. I was recently asked to give some reminiscences of the event, for a republication of a collection of papers derived from this and subsequent workshops.

It was a slightly dark room, decorated with Native American artifacts. Around it were tables arranged in a large rectangle, at which sat a couple dozen men (yes, all men), mostly in their sixties. The afternoon was wearing on, with many different people giving their various views about how to organize what amounted to a putative great new interdisciplinary university.

Here’s the original seating chart, together with a current view of the meeting room. (I’m only “Steve” to Americans currently over the age of 60…):

Santa Fe seating chart
Continue reading

A Few Thoughts about Deep Fakes

Someone from the House Permanent Select Committee on Intelligence recently contacted me about a hearing they’re having on the subject of deep fakes. I can’t attend the hearing, but the conversation got me thinking about the subject of deep fakes, and I made a few quick notes….

What You See May Not Be What Happened

The idea of modifying images is as old as photography. At first, it had to be done by hand (sometimes with airbrushing). By the 1990s, it was routinely being done with image manipulation software such as Photoshop. But it’s something of an art to get a convincing result, say for a person inserted into a scene. And if, for example, the lighting or shadows don’t agree, it’s easy to tell that what one has isn’t real.

What about videos? If one does motion capture, and spends enough effort, it’s perfectly possible to get quite convincing results—say for animating aliens, or for putting dead actors into movies. The way this works, at least in a first approximation, is, for example, to painstakingly pick out the keypoints on one face, and map them onto another.
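
As a concrete, much-simplified illustration of that keypoint idea (not how any particular production system does it), here is a small sketch in Python using the dlib face-landmark model and OpenCV: it picks out 68 facial keypoints in a source and a target image, fits a similarity transform from one set to the other, and warps the source face into the target frame. The image filenames are placeholders, and the standard shape_predictor_68_face_landmarks.dat model file is assumed to be available.

    # A simplified sketch of keypoint-based face mapping (Python, with dlib + OpenCV).
    # It detects 68 facial landmarks in a source and a target image, fits a similarity
    # transform from one set of landmarks to the other, and warps the source face
    # into the target frame.  File names are placeholders.
    import cv2
    import dlib
    import numpy as np

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    def landmarks(image):
        """Return the 68 facial keypoints of the first detected face as a (68, 2) array."""
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        face = detector(gray)[0]                 # first detected face
        shape = predictor(gray, face)
        return np.array([[p.x, p.y] for p in shape.parts()], dtype=np.float64)

    source = cv2.imread("source_face.jpg")       # placeholder filenames
    target = cv2.imread("target_scene.jpg")

    # Map the source keypoints onto the target keypoints with a similarity transform,
    # then warp the whole source image accordingly (a real pipeline would also blend,
    # color-correct, and handle expression and lighting differences).
    transform, _ = cv2.estimateAffinePartial2D(landmarks(source), landmarks(target))
    warped = cv2.warpAffine(source, transform, (target.shape[1], target.shape[0]))
    cv2.imwrite("warped_face.jpg", warped)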

What’s new in the past couple of years is that this process can basically be automated using machine learning. And, for example, there are now neural nets that are simply trained to do “face swapping”:

Face swap
Continue reading

The Wolfram Function Repository: Launching an Open Platform for Extending the Wolfram Language

What the Wolfram Language Makes Possible

We’re on an exciting path these days with the Wolfram Language. Just three weeks ago we launched the Free Wolfram Engine for Developers to help people integrate the Wolfram Language into large-scale software projects. Now, today, we’re launching the Wolfram Function Repository to provide an organized platform for functions that are built to extend the Wolfram Language—and we’re opening up the Function Repository for anyone to contribute.

The Wolfram Function Repository is something that’s made possible by the unique nature of the Wolfram Language as not just a programming language, but a full-scale computational language. In a traditional programming language, adding significant new functionality typically involves building whole libraries, which may or may not work together. But in the Wolfram Language, there’s so much already built into the language that it’s possible to add significant functionality just by introducing individual new functions—which can immediately integrate into the coherent design of the whole language.

To get it started, we’ve already got 532 functions in the Wolfram Function Repository, in 26 categories:

The Wolfram Function Repository
Continue reading

Remembering Murray Gell-Mann
(1929–2019), Inventor of Quarks

First Encounters

In the mid-1970s, particle physics was hot. Quarks were in. Group theory was in. Field theory was in. And so much progress was being made that it seemed like the fundamental theory of physics might be close at hand.

Right in the middle of all this was Murray Gell-Mann—responsible for not one, but most of the leaps of intuition that had brought particle physics to where it was. There’d been other theories, but Murray’s—with their somewhat elaborate and abstract mathematics—were always the ones that seemed to carry the day.

It was the spring of 1978 and I was 18 years old. I’d been publishing papers on particle physics for a few years, and had gotten quite well known around the international particle physics community (and, yes, it took decades to live down my teenage-particle-physicist persona). I was in England, but planned to soon go to graduate school in the US, and was choosing between Caltech and Princeton. And one weekend afternoon when I was about to go out, the phone rang. In those days, it was obvious if it was an international call. “This is Murray Gell-Mann”, the caller said, then launched into a monologue about why Caltech was the center of the universe for particle physics at the time.
Continue reading

Launching Today: Free Wolfram Engine for Developers

Why Aren’t You Using Our Technology?

It happens far too often. I’ll be talking to a software developer, and they’ll be saying how great they think our technology is, and how it helped them so much in school, or in doing R&D. But then I’ll ask them, “So, are you using Wolfram Language and its computational intelligence in your production software system?” Sometimes the answer is yes. But too often, there’s an awkward silence, and then they’ll say, “Well, no. Could I?”

I want to make sure the answer to this can always be: “Yes, it’s easy!” And to help achieve that, we’re releasing today the Free Wolfram Engine for Developers. It’s a full engine for the Wolfram Language that can be deployed on any system—and called from programs, languages, web servers, or anything.

The Wolfram Engine is the heart of all our products. It’s what implements the Wolfram Language, with all its computational intelligence, algorithms, knowledgebase, and so on. It’s what powers our desktop products (including Mathematica), as well as our cloud platform. It’s what’s inside Wolfram|Alpha—as well as an increasing number of major production systems out in the world. And as of today, we’re making it available for anyone to download, for free, to use in their software development projects.
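
As a minimal sketch of what “called from programs” can look like, here is an example using the Wolfram Client Library for Python (wolframclient); it assumes a local Wolfram Engine installation that the library can locate and start.

    # A minimal sketch of calling a locally installed Wolfram Engine from Python
    # via the wolframclient library (assumes the engine is installed and activated).
    from wolframclient.evaluation import WolframLanguageSession
    from wolframclient.language import wl, wlexpr

    session = WolframLanguageSession()   # starts a local Wolfram Engine kernel

    # Evaluate a Wolfram Language expression written as a string (symbolic result)...
    print(session.evaluate(wlexpr('Integrate[Sin[x]^2, x]')))

    # ...or build expressions from Python objects with the wl factory.
    print(session.evaluate(wl.StringReverse("hello, world")))
    print(session.evaluate(wl.Prime(wl.Range(10))))

    session.terminate()                  # shut the kernel down when finished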

Continue reading