Remembering Doug Lenat (1950–2023) and His Quest to Capture the World with Logic

Logic, Math and AI

In many ways the great quest of Doug Lenat’s life was an attempt to follow on directly from the work of Aristotle and Leibniz. For what Doug was fundamentally trying to do over the forty years he spent developing his CYC system was to use the framework of logic—in more or less the same form that Aristotle and Leibniz had it—to capture what happens in the world. It was a noble effort and an impressive example of long-term intellectual tenacity. And while I never managed to actually use CYC myself, I consider it a magnificent experiment—that if nothing else ultimately served to demonstrate the importance of building frameworks beyond logic alone in usefully representing and reasoning about the world.

Doug Lenat started working on artificial intelligence at a time when nobody really knew what might be possible—or even easy—to do. Was AI (whatever that might mean) just a clever algorithm—or a new type of computer—away? Or was it all just an “engineering problem” that simply required pulling together a bigger and better “expert system”? There was all sorts of mystery—and quite a lot of hocus pocus—around AI. Did the demo one was seeing actually prove something, or was it really just a trivial (if perhaps unwitting) cheat?

I first met Doug Lenat at the beginning of the 1980s. I had just developed my SMP (“Symbolic Manipulation Program”) system, which was the forerunner of Mathematica and the modern Wolfram Language. And I had been quite exposed to commercial efforts to “do AI” (and indeed our VCs had even pushed my first company to take on the dubious name “Inference Corporation”, complete with a “=>” logo). And I have to say that when I first met Doug I was quite dismissive. He told me he had a program (that he called “AM”, for “Automated Mathematician”, and that had been the subject of his Stanford CS PhD thesis) that could discover—and in fact had discovered—nontrivial mathematical theorems.

“What theorems?” I asked. “What did you put in? What did you get out?” I suppose to many people the concept of searching for theorems would have seemed like something remarkable, and immediately exciting. But not only had I myself just built a system for systematically representing mathematics in computational form, I had also been enumerating large collections of simple programs like cellular automata. I poked at what Doug said he’d done, and came away unconvinced. Right around the same time I happened to be visiting a leading university AI group, who told me they had a system for translating stories from Spanish into English. “Can I try it?” I asked, suspending for a moment my feeling that this sounded like science fiction. “I don’t really know Spanish”, I said, “Can I start with just a few words?” “No”, they said, “the system works only with stories.” “How long does a story have to be?” I asked. “Actually it has to be a particular kind of story”, they said. “What kind?” I asked. There were a few more iterations, but eventually it came out: the “system” translated one particular story from Spanish into English! I’m not sure if my response included an expletive, but I wondered what kind of science, technology, or anything else this was supposed to be. And when Doug told me about his “Automated Mathematician”, this was the kind of thing I was afraid I was going to find.

Years later, I might say, I think there’s something AM could have been trying to do that’s valid, and interesting, if not obviously possible. Given a particular axiom system it’s easy to mechanically generate infinite collections of “true theorems”—that in effect fill metamathematical space. But now the question is: which of these theorems will human mathematicians find “interesting”? It’s not clear how much of the answer has to do with the “social history of mathematics”, and how much is more about “abstract principles”. I’ve been studying this quite a bit in recent years (not least because I think it could be useful in practice)—and have some rather deep conclusions about its relation to the nature of mathematics. But I now do wonder to what extent Doug’s work from all those years ago might (or might not) contain heuristics that would be worth trying to pursue even now.
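
The “mechanical generation” part of this really is easy. Here’s a tiny sketch of my own (purely illustrative, and nothing to do with AM’s actual internals) in which an “axiom” is a seed string, an “inference step” is the application of a rewrite rule somewhere in a string, and the derivable “theorems” are enumerated breadth-first:

```python
from collections import deque

def enumerate_theorems(axiom, rules, max_count=10):
    """Breadth-first enumeration of strings derivable from a seed
    'axiom' by applying rewrite rules at any position in the string."""
    seen = {axiom}
    queue = deque([axiom])
    results = []
    while queue and len(results) < max_count:
        s = queue.popleft()
        results.append(s)
        for lhs, rhs in rules:
            start = s.find(lhs)
            while start != -1:
                t = s[:start] + rhs + s[start + len(lhs):]
                if t not in seen:
                    seen.add(t)
                    queue.append(t)
                start = s.find(lhs, start + 1)
    return results

# Toy system: one axiom ("AB") and two rewrite rules
theorems = enumerate_theorems("AB", [("A", "AB"), ("B", "A")])
print(theorems)  # "AB", "ABB", "AA", "ABBB", ...
```

Which of the infinite stream of derivable statements is “interesting” is of course exactly the part this sketch leaves out—and the part AM’s heuristics were aimed at.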


I ran into Doug quite a few times in the early to mid-1980s, both around a company called Thinking Machines (to which I was a consultant) and at various events that somehow touched on AI. There was a fairly small and somewhat fragmented AI community in those days, with the academic part in the US concentrated around MIT, Stanford and CMU. I had the impression that Doug was never quite at the center of that community, but was somehow nevertheless a “notable member”, who—particularly with his work being connected to math—was seen as “doing upscale things” around AI.

In 1984 I wrote an article for a special issue of Scientific American on “computer software” (yes, software was trendy then). My article was entitled “Computer Software in Science and Mathematics”, and the very next article was by Doug, entitled “Computer Software for Intelligent Systems”. The summary at the top of my article read: “Computation offers a new means of describing and investigating scientific and mathematical systems. Simulation by computer may be the only way to predict how certain complicated systems evolve.” And the summary for Doug’s article read: “The key to intelligent problem solving lies in reducing the random search for solutions. To do so intelligent computer programs must tap the same underlying ‘sources of power’ as human beings”. And I suppose in many ways both of us spent most of our next four decades essentially trying to fill out the promise of these summaries.

A key point in Doug’s article—with which I wholeheartedly agree—is that to create something one can usefully identify as “AI”, it’s essential to somehow have lots of knowledge of the world built in. But how should that be done? How should the knowledge be encoded? And how should it be used?

Doug’s article in Scientific American illustrated his basic idea:


Encode knowledge about the world in the form of statements of logic. Then find ways to piece together these statements to derive conclusions. It was, in a sense, a very classic approach to formalizing the world—and one that would at least in concept be familiar to Aristotle and Leibniz. Of course it was now using computers—both as a way to store the logical statements, and as a way to find inferences from them.

At first, I think Doug felt the main problem was how to “search for correct inferences”. Given a whole collection of logical statements, he was asking how these could be knitted together to answer some particular question. In essence it was just like mathematical theorem proving: how could one knit together axioms to make a proof of a particular theorem? And especially with the computers and algorithms of the time, this seemed like a daunting problem in almost any realistic case.

But then how did humans ever manage to do it? What Doug imagined was that the critical element was heuristics: strategies for guessing how one might “jump ahead” and not have to do the kind of painstaking searches that systematic methods seemed to imply would be needed. Doug developed a system he called EURISKO that implemented a range of heuristics—that Doug expected could be used not only for math, but basically for anything, or at least anything where human-like thinking was effective. And, yes, EURISKO included not only heuristics, but also at least some kinds of heuristics for making new heuristics, etc.
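
One can sketch the difference a heuristic makes with a toy greedy best-first search (again my own illustration, not EURISKO’s actual mechanism): a scoring function decides which candidate to expand next, rather than exhaustively exploring everything:

```python
import heapq

def heuristic_search(start, target, moves, score):
    """Greedy best-first search: always expand the candidate that the
    heuristic scores as closest to the target, instead of searching
    the whole space systematically."""
    frontier = [(score(start, target), start, [start])]
    seen = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == target:
            return path
        for move in moves:
            nxt = move(state)
            if nxt not in seen and nxt <= 10 * target:  # crude bound
                seen.add(nxt)
                heapq.heappush(frontier,
                               (score(nxt, target), nxt, path + [nxt]))
    return None

# Reach 24 from 1 using "+1" and "*2", guided by distance to the target
path = heuristic_search(1, 24,
                        [lambda n: n + 1, lambda n: 2 * n],
                        lambda n, t: abs(t - n))
print(path)
```

A greedy heuristic like this can of course be led astray—which is why having heuristics for choosing (and revising) heuristics was such a central idea in EURISKO.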

But OK, so Doug imagined that EURISKO could be used to “reason about” anything. So if it had the kind of knowledge humans do, then—Doug believed—it should be able to reason just like humans. In other words, it should be able to deliver some kind of “genuine artificial intelligence” capable of matching human thinking.

There were all sorts of specific domains of knowledge to consider. But Doug particularly wanted to push in what seemed like the most broadly impactful direction—and tackle the problem of commonsense knowledge and commonsense reasoning. And so it was that Doug began what would become a lifelong project to encode as much knowledge as possible in the form of statements of logic.

In 1984 Doug’s project—now named CYC—became a flagship part of MCC (Microelectronics and Computer Technology Corporation) in Austin, TX—an industry-government consortium that had just been created to counter the perceived threat from the Japanese “Fifth Generation Computer Project”, that had shocked the US research establishment by putting immense resources into “solving AI” (and was actually emphasizing many of the same underlying rule-based techniques as Doug). And at MCC Doug had the resources to hire scores of people to embark on what was expected to be a few thousand person-years of effort.

I didn’t hear much about CYC for quite a while, though shortly after Mathematica was released in 1988 Marvin Minsky mused to me about how it seemed like we were doing for math-like knowledge what CYC was hoping to do for commonsense knowledge. I think Marvin wasn’t convinced that Doug had the technical parts of CYC right (and, yes, they weren’t using Marvin’s theories as much as they might). But in those years Marvin seemed to feel that CYC was one of the few AI projects going on that actually made any sense. And indeed in my archives I find a rather charming email from Marvin in 1992, attaching a draft of a science fiction novel (entitled The Turing Option) that he was writing with Harry Harrison, which contained mention of CYC:


When Brian and Ben reached the lab, the computer was running but the tree-robot was folded and motionless. “Robin,

“Robin will have to use different concepts of progress for different kinds of problems. And different kinds of subgoals for reducing those different kinds of differences.”

“Won’t that require enormous amounts of knowledge?”

“It will indeed—and that’s one reason human education takes so long. But Robin should already contain a massive amount of just that kind of information—as part of his CYC-9 knowledge-

“There now exists a procedural model for the behavior of a human individual, based on the prototype human described in section 6.001 of the CYC-9 knowledge base. Now customizing parameters on the basis of the example person Brian Delaney described in the employment, health, and security records of Megalobe Corporation.”

A brief silence ensued. Then the voice continued.

“The Delaney model is judged as incomplete as compared to those of other persons such as President Abraham Lincoln, who has 3596.6 megabytes of descriptive text, or Commander James Bond, who has 16.9 megabytes.”

Later, one of the novel’s characters observes: “Even if we started with nothing but the old Lenat–Haase representation-languages, we’d still be far ahead of what any animal ever evolved.” (Ken Haase was a student of Marvin’s who critiqued and extended Doug’s work on heuristics.)

I was exposed to CYC again in 1996 in connection with a book called HAL’s Legacy—to which both Doug and I contributed—published in honor of the fictional birthday of the AI in the movie 2001. But mostly AI as a whole was in the doldrums, and almost nobody seemed to be taking it seriously. Sometimes I would hear murmurs about CYC, mostly from government and military contacts. Among academics, Doug would occasionally come up, but rather cruelly he was most notable for his name being used for a unit of “bogosity”—the lenat—of which it was said that “Like the farad it is considered far too large a unit for practical use, so bogosity is usually expressed in microlenats”.

Doug Meets Wolfram|Alpha

Many years passed. I certainly hadn’t forgotten Doug, or CYC. And a few times people suggested connecting CYC in some way to our technology. But nothing ever happened. Then in the spring of 2009 we were nearing the first release of Wolfram|Alpha, and it seemed like I finally had something that I might meaningfully be able to talk to Doug about.

I sent a rather tentative email:

Doug quickly responded:

It was definitely a “you’re on my turf” kind of response. And I wasn’t sure what to expect from Doug. But a few days later we had a long call with Doug and some of the senior members of what was now the Cycorp team. And Doug did something that deeply impressed me. Rather than for example nitpicking that Wolfram|Alpha was “not AI” he basically just said “We’ve been trying to do something like this for years, and now you’ve succeeded”. It was a great—and even inspirational—show of intellectual integrity. And whatever I might think of CYC and Doug’s other work (and I’d never formed a terribly clear opinion), this for me put Doug firmly in the category of people to respect.

Doug wrote a blog post entitled “I was positively impressed with Wolfram Alpha”, and immediately started inviting us to various AI and industry-pooh-bah events to which he was connected.

Doug seemed genuinely pleased that we had made such progress in something so close to his longtime objectives. I talked to him about the comparison between our approaches. He was just working with “pure human-like reasoning”, I said, like one would have had to do in the Middle Ages. But, I said, “In a sense we cheated”. Because we used all the things that got invented in modern times in science and math and so on. If he wanted to work out how some mechanical system would behave, he would have to reason through it: “If you push this down, that pulls up, then this rolls”, etc. But with what we’re doing, we just have to turn everything into math (or something like it), then systematically solve it using equations and so on.

And there was something else too: we weren’t trying to use just logic to represent the world, we were using the full power and richness of computation. In talking about the Solar System, we didn’t just say that “Mars is a planet contained in the Solar System”; we had an algorithm for computing its detailed motion, and so on.

Doug and CYC had also emphasized the scraps of knowledge that seem to appear in our “common sense”. But we were interested in systematic, computable knowledge. We didn’t just want a few scattered “common facts” about animals. We wanted systematic tables of properties of millions of species. And we had very general computational ways to represent things: not just words or tags for things, but systematic ways to capture computational structures, whether they were entities, graphs, formulas, images, time series, or geometrical forms, or whatever.

I think Doug viewed CYC as some kind of formalized idealization of how he imagined human minds work: providing a framework into which a large collection of (fairly undifferentiated) knowledge about the world could be “poured”. At some level it was a very “pure AI” concept: set up a generic brain-like thing, then “it’ll just do the rest”. But Doug still felt that the thing had to operate according to logic, and that what was fed into it also had to consist of knowledge packaged up in the form of logic.

But while Doug’s starting points were AI and logic, mine were something different—in effect computation writ large. I always viewed logic as something not terribly special: a particular formal system that described certain kinds of things, but didn’t have any great generality. To me the truly general concept was computation. And that’s what I’ve always used as my foundation. And it’s what’s now led to the modern Wolfram Language, with its character as a full-scale computational language.

There is a principled foundation. But it’s not logic. It’s something much more general, and structural: arbitrary symbolic expressions and transformations of them. And I’ve spent much of the past forty years building up coherent computational representations of the whole range of concepts and constructs that we encounter in the world and in our thinking about it. The goal is to have a language—in effect, a notation—that can represent things in a precise, computational way. But then to actually have the built-in capability to compute with that representation. Not to figure out how to string together logical statements, but rather to do whatever computation might need to be done to get an answer.

But beyond their technical visions and architectures, there is a certain parallelism between CYC and the Wolfram Language. Both have been huge projects. Both have been in development for more than forty years. And both have been led by a single person all that time. Yes, the Wolfram Language is certainly the larger of the two. But in the spectrum of technical projects, CYC is still a highly exceptional example of longevity and persistence of vision—and a truly impressive achievement.

Later Years

After Wolfram|Alpha came on the scene I started interacting more with Doug, not least because I often came to the SXSW conference in Austin, and would usually make a point of reaching out to Doug when I did. Could CYC use Wolfram|Alpha and the Wolfram Language? Could we somehow usefully connect our technology to CYC?

When I talked to Doug he tended to downplay the commonsense aspects of CYC, instead talking about applications in defense, intelligence analysis, healthcare and so on. He’d enthusiastically tell me about particular kinds of knowledge that had been put into CYC. But time and time again I’d have to tell him that actually we already had systematic data and algorithms in those areas. Often I felt a bit bad about it. It was as if he’d been painstakingly planting crops one by one, and we’d come through with a giant industrial machine.

In 2010 we made a big “Timeline of Systematic Data and the Development of Computable Knowledge” poster—and CYC was on it as one of the six entries that began in the 1980s (alongside, for example, the web). Doug and I continued to talk about somehow working together, but nothing ever happened. One problem was the asymmetry: Doug could play with Wolfram|Alpha and Wolfram Language any time. But I’d never once actually been able to try CYC. Several times Doug had promised API keys, but none had ever materialized.

Eventually Doug said to me: “Look, I’m worried you’re going to think it’s bogus”. And particularly knowing Doug’s history with alleged “bogosity” I tried to assure him my goal wasn’t to judge. Or, as I put it in a 2014 email: “Please don’t worry that we’ll think it’s ‘bogus’. I’m interested in finding the good stuff in what you’ve done, not criticizing its flaws.”

But when I was at SXSW the next year Doug had something else he wanted to show me. It was a math education game. And Doug seemed incredibly excited about its videogame setup, complete with 3D spacecraft scenery. My son Christopher was there and politely asked if this was the default Unity scenery. I kept on saying, “Doug, I’ve seen videogames before; show me the AI!” But Doug didn’t seem interested in that anymore, eventually saying that the game wasn’t using CYC—though did still (somewhat) use “rule-based AI”.

I’d already been talking to Doug, though, about what I saw as being an obvious, powerful application of CYC in the context of Wolfram|Alpha: solving math word problems. Given a problem, say, in the form of equations, we could solve pretty much anything thrown at us. But with a word problem like “If Mary has 7 marbles and 3 fall down a drain, how many does she now have?” we didn’t stand a chance. Because to solve this requires commonsense knowledge of the world, which isn’t what Wolfram|Alpha is about. But it is what CYC is supposed to be about. Sadly, though, despite many reminders, we never got to try this out. (And, yes, we built various simple linguistic templates for this kind of thing into Wolfram|Alpha, and now there are LLMs.)
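
A minimal sketch of the “linguistic template” idea (purely illustrative; not the actual Wolfram|Alpha templates) shows both why it works on an example like this and why it is so brittle—the template handles exactly the one sentence shape it was written for, and nothing else:

```python
import re

def solve_word_problem(text):
    """Toy template matcher for one 'have/lose' word-problem shape.
    Real solutions need commonsense knowledge of the world; this
    only recognizes the single pattern it was written for."""
    m = re.search(
        r"(\w+) has (\d+) \w+ and (\d+) (?:fall|are lost|roll)", text)
    if not m:
        return None  # no template matched
    name, total, lost = m.group(1), int(m.group(2)), int(m.group(3))
    return f"{name} now has {total - lost}"

answer = solve_word_problem(
    "If Mary has 7 marbles and 3 fall down a drain, "
    "how many does she now have?")
print(answer)  # → Mary now has 4
```

Rephrase the problem even slightly (“three of them disappear”) and the template fails—which is exactly the gap that commonsense knowledge of the kind CYC aimed at (or, now, an LLM) is supposed to close.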

Independent of anything else, it was impressive that Doug had kept CYC and Cycorp running all those years. But when I saw him in 2015 he was enthusiastically telling me about what I told him seemed to me to be a too-good-to-be-true deal he was making around CYC. A little later there was a strange attempt to sell us the technology of CYC, and I don’t think our teams interacted again after that.

I personally continued to interact with Doug, though. I sent him things I wrote about the formalization of math. He responded pointing me to things he’d done on AM. On the tenth anniversary of Wolfram|Alpha Doug sent me a nice note, offering that “If you want to team up on, e.g., knocking the Winograd sentence pairs out of the park, let me know.” I have to say I wondered what a “Winograd sentence pair” was. It felt like some kind of challenge from an age of AI long past (apparently it has to do with identifying pronoun reference, which of course has become even more difficult in modern English usage).

And as I write this today, I realize a mistake I made back in 2016. I had for years been thinking about what I’ve come to call “symbolic discourse language”—an extension of computational language that can represent “everyday discourse”. And—stimulated by blockchain and the idea of computational contracts—I finally wrote something about this in 2016, and I now realize that I overlooked sending Doug a link to it. Which is a shame, because maybe it would have finally been the thing that got us to connect our systems.

And Now There Are LLMs

Doug was a person who believed in formalism, particularly logic. And I have the impression that he always considered approaches like neural nets not really to have a chance of “solving the problem of AI”. But now we have LLMs. So how do they fit in with things like the ideas of CYC?

One of the surprises of LLMs is that they often seem, in effect, to use logic, even though there’s nothing in their setup that explicitly involves logic. But (as I’ve described elsewhere) I’m pretty sure what’s happened is that LLMs have “discovered” logic much as Aristotle did—by looking at lots of examples of statements people make and identifying patterns in them. And in a similar way LLMs have “discovered” lots of commonsense knowledge, and reasoning. They’re just following patterns they’ve seen, but—probably in effect organized into what I’ve called a “semantic grammar” that determines “laws of semantic motion”—that’s enough to often achieve some fairly impressive commonsense-like results.

I suspect that a great many of the statements that were fed into CYC could now be generated fairly successfully with LLMs. And perhaps one day there’ll be good enough “LLM science” to be able to identify mechanisms behind what LLMs can do in the commonsense arena—and maybe they’ll even look a bit like what’s in CYC, and how it uses logic. But in a sense the very success of LLMs in the commonsense arena strongly suggests that you don’t fundamentally need deep “structured logic” for that. Though, yes, the LLM may be immensely less efficient—and perhaps less reliable—than a direct symbolic approach.

It’s a very different story, by the way, with computational language and computation. LLMs are through and through based on language and the patterns to be found in it. But computation—as it can be accessed through structured computational language—is something very different. It’s about processes that are in a sense thoroughly non-human, and that involve much deeper following of general formal rules, as well as much more structured kinds of data, etc. An LLM might be able to do basic logic, as humans have. But it doesn’t stand a chance on things where humans have had to systematically use formal tools that do serious computation. Insofar as LLMs represent “statistical AI”, CYC represents a certain level of “symbolic AI”. But computational language and computation go much further—to a place where LLMs can’t and shouldn’t follow, and where they should instead just call on these capabilities as tools.

Doug always seemed to have a very optimistic view of the promise of AI. In 2013 he wrote to me:

The last mail I received from Doug was on January 10, 2023—telling me that he thought it was great that I was talking about connecting our tech to ChatGPT. He said, though, that he found it “increasingly worrisome that these models train on CONVINCINGNESS rather than CORRECTNESS”, then gave an example of ChatGPT getting a math word problem wrong. His email ended:

Sadly we never did chat again. We now have a team actively working on symbolic discourse language, and just last week I mentioned CYC to them—and lamented that I’d never been able to try it. And then on Friday I heard that Doug had died. A remarkable pioneer of AI who steadfastly pursued his vision over the whole course of his career, and was taken far too soon.

Wolfram, Stephen. "Remembering Doug Lenat (1950–2023) and His Quest to Capture the World with Logic." Stephen Wolfram Writings. September 5, 2023.


