I’ve been curious about Gottfried Leibniz for years, not least because he seems to have wanted to build something like Mathematica and Wolfram|Alpha, and perhaps A New Kind of Science as well—though three centuries too early. So when I took a trip recently to Germany, I was excited to be able to visit his archive in Hanover.
Leafing through his yellowed (but still robust enough for me to touch) pages of notes, I felt a certain connection—as I tried to imagine what he was thinking when he wrote them, and tried to relate what I saw in them to what we now know after three more centuries:
Some things, especially in mathematics, are quite timeless. Like here’s Leibniz writing down an infinite series for √2 (the text is in Latin):
Or here’s Leibniz trying to calculate a continued fraction—though he got the arithmetic wrong, even though he wrote it all out (the Π was his early version of an equals sign):
Or here’s a little summary of calculus that could almost be in a modern textbook:
But what was everything else about? What was the larger story of his work and thinking?
I have always found Leibniz a somewhat confusing figure. He did many seemingly disparate and unrelated things—in philosophy, mathematics, theology, law, physics, history, and more. And he described what he was doing in what now seem to us strange 17th-century terms.
But as I’ve learned more, and gotten a better feeling for Leibniz as a person, I’ve realized that underneath much of what he did was a core intellectual direction that is curiously close to the modern computational one that I, for example, have followed.
Gottfried Leibniz was born in Leipzig in what’s now Germany in 1646 (four years after Galileo died, and four years after Newton was born). His father was a professor of philosophy; his mother’s family was in the book trade. Leibniz’s father died when Leibniz was 6—and after a 2-year deliberation on its suitability for one so young, Leibniz was allowed into his father’s library, and began to read his way through its diverse collection of books. He went to the local university at age 15, studying philosophy and law—and graduated in both of them at age 20.
Even as a teenager, Leibniz seems to have been interested in systematization and formalization of knowledge. There had been vague ideas for a long time—for example in the semi-mystical Ars Magna of Ramon Llull from the 1300s—that one might be able to set up some kind of universal system in which all knowledge could be derived from combinations of signs drawn from a suitable (as Descartes called it) “alphabet of human thought”. And for his philosophy graduation thesis, Leibniz tried to pursue this idea. He used some basic combinatorial mathematics to count possibilities. He talked about decomposing ideas into simple components on which a “logic of invention” could operate. And, for good measure, he put in an argument that purported to prove the existence of God.
As Leibniz himself said in later years, this thesis—written at age 20—was in many ways naive. But I think it began to define Leibniz’s lifelong way of thinking about all sorts of things. And so, for example, Leibniz’s law graduation thesis about “perplexing legal cases” was all about how such cases could potentially be resolved by reducing them to logic and combinatorics.
Leibniz was on a track to become a professor, but instead he decided to embark on a life working as an advisor for various courts and political rulers. Some of what he did for them was scholarship, tracking down abstruse—but politically important—genealogy and history. Some of it was organization and systematization—of legal codes, libraries and so on. Some of it was practical engineering—like trying to work out better ways to keep water out of silver mines. And some of it—particularly in earlier years—was “on the ground” intellectual support for political maneuvering.
One such activity in 1672 took Leibniz to Paris for four years—during which time he interacted with many leading intellectual lights. Before then, Leibniz’s knowledge of mathematics had been fairly basic. But in Paris he had the opportunity to learn all the latest ideas and methods. And for example he sought out Christiaan Huygens, who agreed to teach him mathematics—after Leibniz passed the test of finding the sum of the reciprocals of the triangular numbers.
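(For the record, Huygens’ test has a tidy answer. The nth triangular number is n(n+1)/2, so the sum of the reciprocals telescopes to 2, something one can now verify in an instant:)

```
(* Huygens' test: sum the reciprocals of the triangular numbers n(n+1)/2 *)
(* the sum telescopes, since 2/(n(n+1)) == 2/n - 2/(n+1) *)
Sum[1/(n (n + 1)/2), {n, 1, Infinity}]
(* --> 2 *)
```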
Over the years, Leibniz refined his ideas about the systematization and formalization of knowledge, imagining a whole architecture for how knowledge would—in modern terms—be made computational. He saw the first step as being the development of an ars characteristica—a methodology for assigning signs or symbolic representations to things, and in effect creating a uniform “alphabet of thought”. And he then imagined—in remarkable resonance with what we now know about computation—that from this uniform representation it would be possible to find “truths of reason in any field… through a calculus, as in arithmetic or algebra”.
He talked about his ideas under a variety of rather ambitious names like scientia generalis (“general method of knowledge”), lingua philosophica (“philosophical language”), mathématique universelle (“universal mathematics”), characteristica universalis (“universal system”) and calculus ratiocinator (“calculus of thought”). He imagined applications ultimately in all areas—science, law, medicine, engineering, theology and more. But the one area in which he had clear success quite quickly was mathematics.
To me it’s remarkable how rarely in the history of mathematics notation has been viewed as a central issue. It happened at the beginning of modern mathematical logic in the late 1800s with the work of people like Gottlob Frege and Giuseppe Peano. And in recent times it’s happened with me in my efforts to create Mathematica and the Wolfram Language. But it also happened three centuries ago with Leibniz. And I suspect that Leibniz’s successes in mathematics were in no small part due to the effort he put into notation, and the clarity of reasoning about mathematical structures and processes that it brought.
When one looks at Leibniz’s papers, it’s interesting to see his notation and its development. Many things look quite modern. Though there are charming dashes of the 17th century, like the occasional use of alchemical or planetary symbols for algebraic variables:
There’s Π as an equals sign instead of =, with the slightly hacky idea of having it be like a balance, with a longer leg on one side or the other indicating less than (“<”) or greater than (“>”):
There are overbars to indicate grouping of terms—arguably a better idea than parentheses, though harder to type and typeset:
We do use overbars for roots today. But Leibniz wanted to use them in integrals too. Along with the rather nice “tailed d”, which reminds me of the double-struck “differential d” that we invented for representing integrals in Mathematica.
Particularly in solving equations, it’s quite common to want to use ±, and it’s always confusing how the grouping is supposed to work, say in a±b±c. Well, Leibniz seems to have found it confusing too, but he invented a notation to handle it—which we actually should consider using today too:
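To see the problem concretely: if the two signs in a±b±c are allowed to vary independently, the expression has four readings, not two. A quick check, with made-up values chosen purely for illustration:

```
(* the four values a +/- b +/- c can denote if the signs vary independently *)
With[{a = 10, b = 3, c = 1},
 Table[a + s1 b + s2 c, {s1, {1, -1}}, {s2, {1, -1}}]]
(* --> {{14, 12}, {8, 6}} *)
```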
I’m not sure what some of Leibniz’s notation means. Though those overtildes are rather nice-looking:
As are these things with dots:
Or this interesting-looking diagrammatic form:
Of course, Leibniz’s most famous notations are his integral sign (long “s” for “summa”) and d, here summarized in the margin for the first time, on November 11th, 1675 (the “5” in “1675” was changed to a “3” after the fact, perhaps by Leibniz):
I find it interesting that despite all his notation for “calculational” operations, Leibniz apparently did not invent similar notation for logical operations. “Or” was just the Latin word vel, “and” was et, and so on. And when he came up with the idea of quantifiers (modern ∀ and ∃), he just represented them by the Latin abbreviations U.A. and P.A.:
It’s always struck me as a remarkable anomaly in the history of thought that it took until the 1930s for the idea of universal computation to emerge. And I’ve often wondered if lurking in the writings of Leibniz there might be an early version of universal computation—maybe even a diagram that we could now interpret as a system like a Turing machine. But with more exposure to Leibniz, it’s become clearer to me why that’s probably not the case.
One big piece, I suspect, is that he didn’t take discrete systems quite seriously enough. He referred to results in combinatorics as “self-evident”, presumably because he considered them directly verifiable by methods like arithmetic. And it was only “geometrical”, or continuous, mathematics that he felt needed to have a calculus developed for it. In describing things like properties of curves, Leibniz came up with something like continuous functions. But he never seems to have applied the idea of functions to discrete mathematics—which might for example have led him to think about universal elements for building up functions.
Leibniz recognized the success of his infinitesimal calculus, and was keen to come up with similar “calculi” for other things. And in another “near miss” with universal computation, Leibniz had the idea of encoding logical properties using numbers. He thought about associating every possible attribute of a thing with a different prime number, then characterizing the thing by the product of the primes for its attributes—and then representing logical inference by arithmetic operations. But he only considered static attributes—and never got to an idea like Gödel numbering where operations are also encoded in numbers.
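One can get a feel for the scheme with a little sketch (the attribute assignments and function names here are my own illustrative inventions, not Leibniz’s): give each primitive attribute a distinct prime, encode a concept as the product of its attributes’ primes, and then “every A is a B” becomes the arithmetic statement that B’s number divides A’s number:

```
(* hypothetical primes for primitive attributes *)
attribute = <|"animal" -> 2, "rational" -> 3, "mortal" -> 5|>;

(* a concept is encoded as the product of its attributes' primes *)
encode[attrs_List] := Times @@ Lookup[attribute, attrs]

(* "every A is a B" iff B's number divides A's number *)
impliesQ[a_List, b_List] := Divisible[encode[a], encode[b]]

impliesQ[{"animal", "rational", "mortal"}, {"rational"}]  (* --> True *)
impliesQ[{"animal", "mortal"}, {"rational"}]              (* --> False *)
```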
But even though Leibniz did not get to the idea of universal computation, he did understand the notion that computation is in a sense mechanical. And indeed quite early in life he seems to have resolved to build an actual mechanical calculator for doing arithmetic. Perhaps in part it was because he wanted to use it himself (always a good reason to build a piece of technology!). For despite his prowess at algebra and the like, his papers are charmingly full of basic (and sometimes incorrect) school-level arithmetic calculations written out in the margin—and now preserved for posterity:
There were scattered examples of mechanical calculators being built in Leibniz’s time, and when he was in Paris, Leibniz no doubt saw the addition calculator that had been built by Blaise Pascal in 1642. But Leibniz resolved to make a “universal” calculator that could for the first time do all four basic operations of arithmetic with a single machine. And he wanted to give it a simple “user interface”, where one would for example turn a handle one way for multiplication, and the opposite way for division.
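The principle of operation is easy to sketch in modern terms (this is a schematic of my own, with hypothetical names, not a detailed model of Leibniz’s stepped-drum mechanism): to multiply, the operator adds the multiplicand once per crank turn for each digit of the multiplier, shifting the carriage one decimal place between digits:

```
(* schematic multiplication by cranking: one addition per turn, with decimal shifts *)
machineMultiply[m_Integer, multiplierDigits_List] := Module[{acc = 0},
  Do[
   acc = 10 acc;                          (* shift the carriage one decimal place *)
   Do[acc += m, {multiplierDigits[[i]]}], (* one crank turn per unit of this digit *)
   {i, Length[multiplierDigits]}];
  acc]

machineMultiply[365, {2, 4}]  (* --> 8760, i.e. 365*24 *)
```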
In Leibniz’s papers there are all sorts of diagrams about how the machine should work:
Leibniz imagined that his calculator would be of great practical utility—and indeed he seems to have hoped that he would be able to turn it into a successful business. But in practice, Leibniz struggled to get the calculator to work at all reliably. For like other mechanical calculators of its time, it was basically a glorified odometer. And just like in Charles Babbage’s machines some 150 years later, it was mechanically difficult to make many wheels move at once when a cascade of carries occurred.
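A toy simulation of odometer-style wheels makes the difficulty clear (again a sketch of my own, not a model of Leibniz’s actual mechanism): adding 1 to a register reading 0999 forces three wheels to turn in a single step:

```
(* each wheel holds a digit 0-9; a wheel that passes 9 resets and pushes its neighbor *)
addOne[wheels_List] := Module[{w = wheels, i = Length[wheels]},
  w[[i]]++;
  While[i > 1 && w[[i]] == 10, w[[i]] = 0; w[[i - 1]]++; i--];
  If[w[[1]] == 10, w[[1]] = 0];  (* the leftmost wheel simply wraps on overflow *)
  w]

addOne[{0, 9, 9, 9}]  (* --> {1, 0, 0, 0}: a cascade of three carries at once *)
```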
Leibniz at first had a wooden prototype of his machine built, intended to handle just 3 or 4 digits. But when he demoed this to people like Robert Hooke during a visit to London in 1673, it didn’t go very well. Still, he kept on thinking he’d figured everything out—for example writing in 1679 (in French) of the “last correction to the arithmetic machine”:
Notes from 1682 suggest that there were more problems, however:
But Leibniz had plans drafted up from his notes—and contracted an engineer to build a brass version with more digits:
It’s fun to see Leibniz’s “marketing material” for the machine:
As well as parts of the “manual” (with 365×24, the number of hours in a year, as a “worked example”):
Complete with detailed usage diagrams:
But despite all this effort, problems with the calculator continued. And in fact, for more than 40 years, Leibniz kept on tweaking his calculator—probably altogether spending (in today’s currency) more than a million dollars on it.
So what actually happened to the physical calculator? When I visited Leibniz’s archive, I had to ask. “Well”, my hosts said, “we can show you”. And there in a vault, along with shelves of boxes, was Leibniz’s calculator, looking as good as new in a glass case—here captured by me in a strange juxtaposition of ancient and modern:
All the pieces are there. Including a convenient wooden carrying box. Complete with a cranking handle. And, if it worked right, the ability to do any basic arithmetic operation with a few minutes of cranking:
Leibniz clearly viewed his calculator as a practical project. But he still wanted to generalize from it, for example trying to make a general “logic” to describe geometries of mechanical linkages. And he also thought about the nature of numbers and arithmetic. And was particularly struck by binary numbers.
Bases other than 10 had been used in recreational mathematics for several centuries. But Leibniz latched on to base 2 as having particular significance—and perhaps being a key bridge between philosophy, theology and mathematics. And he was encouraged in this by his realization that binary numbers were at the core of the I Ching, which he’d heard about from missionaries to China, and viewed as related in spirit to his characteristica universalis.
Leibniz worked out that it would be possible to build a calculator based on binary. But he appears to have thought that only base 10 could actually be useful.
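It’s easy today to replay the sort of base-2 conversion and arithmetic that Leibniz wrote out by hand (here in Wolfram Language, of course an anachronism):

```
IntegerDigits[100, 2]                  (* --> {1, 1, 0, 0, 1, 0, 0} *)
FromDigits[{1, 1, 0, 0, 1, 0, 0}, 2]   (* --> 100 *)

(* binary addition is just the school algorithm, with carries in base 2 *)
IntegerDigits[FromDigits[{1, 0, 1, 1}, 2] + FromDigits[{1, 1, 0}, 2], 2]
(* --> {1, 0, 0, 0, 1}, i.e. 1011 + 110 == 10001 (11 + 6 == 17) *)
```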
It’s strange to read what Leibniz wrote about binary numbers. Some of it is clear and practical—and still seems perfectly modern. But some of it is very 17th century—talking for example about how binary proves that everything can be made from nothing, with 1 being identified with God, and 0 with nothing.
Almost nothing was done with binary for a couple of centuries after Leibniz: in fact, until the rise of digital computing in the last few decades. So when one looks at Leibniz’s papers, his calculations in binary are probably what seem most “out of his time”:
With binary, Leibniz was in a sense seeking the simplest possible underlying structure. And no doubt he was doing something similar when he talked about what he called “monads”. I have to say that I’ve never really understood monads. And usually when I think I almost have, there’s some mention of souls that just throws me completely off.
Still, I’ve always found it tantalizing that Leibniz seemed to conclude that the “best of all possible worlds” is the one “having the greatest variety of phenomena from the smallest number of principles”. And indeed, in the prehistory of my work on A New Kind of Science, when I first started formulating and studying one-dimensional cellular automata in 1981, I considered naming them “polymones”—but at the last minute got cold feet when I got confused again about monads.
There’s always been a certain mystique around Leibniz and his papers. Kurt Gödel—perhaps displaying his paranoia—seemed convinced that Leibniz had discovered great truths that had been suppressed for centuries. But while it is true that Leibniz’s papers were sealed when he died, it was his work on topics like history and genealogy—and the state secrets they might entail—that was the concern.
Leibniz’s papers were unsealed long ago, and after three centuries one might assume that every aspect of them would have been well studied. But the fact is that even after all this time, nobody has actually gone through all of the papers in full detail. It’s not that there are so many of them. Altogether there are only about 200,000 pages—filling perhaps a dozen shelving units (and only a little larger than my own personal archive from just the 1980s). But the problem is the diversity of material. Not only lots of subjects. But also lots of overlapping drafts, notes and letters, with unclear relationships between them.
Leibniz’s archive contains a bewildering array of documents. From the very large:
To the very small (Leibniz’s writing got smaller as he got older and more near-sighted):
Most of the documents in the archive seem very serious and studious. But despite the high cost of paper in Leibniz’s time, one still finds preserved for posterity the occasional doodle (is that Spinoza, by any chance?):
Leibniz exchanged mail with hundreds of people—famous and not-so-famous—all over Europe. So now, 300 years later, one can find in his archive “random letters” from the likes of Jacob Bernoulli:
What did Leibniz look like? Here he is, both in an official portrait, and without the rather oversized wig (mocked even in his time) that he presumably wore to cover up a large cyst on his head:
As a person, Leibniz seems to have been polite, courtly and even-tempered. In some ways, he may have come across as something of a nerd, expounding at great depth on all manner of topics. He seems to have taken great pains—as he did in his letters—to adapt to whoever he was talking to, emphasizing theology when he was talking to a theologian, and so on. Like quite a few intellectuals of his time, Leibniz never married, though he seems to have been something of a favorite with women at court.
In his career as a courtier, Leibniz was keen to climb the ladder. But not being into hunting or drinking, he never quite fit in with the inner circles of the rulers he worked for. Late in his life, when George I of Hanover became king of England, it would have been natural for Leibniz to join his court. But Leibniz was told that before he could go, he had to start writing up a history project he’d supposedly been working on for 30 years. Had he done so before he died, he might well have gone to England and had a very different kind of interaction with Newton.
At Leibniz’s archive, there are lots of papers, his mechanical calculator, and one more thing: a folding chair that he took with him when he traveled, and that he had suspended in carriages so he could continue to write as the carriage moved:
Leibniz was quite concerned about status (he often styled himself “Gottfried von Leibniz”, though nobody quite knew where the “von” came from). And as a form of recognition for his discoveries, he wanted to have a medal created to commemorate binary numbers. He came up with a detailed design, complete with the tag line omnibus ex nihilo ducendis; sufficit unum (“everything can be derived from nothing; all that is needed is 1”). But nobody ever made the medal for him.
In 2007, though, I wanted to come up with a 60th birthday gift for my friend Greg Chaitin, who has been a long-time Leibniz enthusiast. And so I thought: why not actually make Leibniz’s medal? So we did. Though on the back, instead of the picture of a duke that Leibniz proposed, we put a Latin inscription about Greg’s work.
And when I visited the Leibniz archive, I made sure to bring a copy of the medal, so I could finally put a real medal next to Leibniz’s design:
It would have been interesting to know what pithy statement Leibniz might have had on his grave. But as it was, when Leibniz died at the age of 70, his political fortunes were at a low ebb, and no elaborate memorial was constructed. Still, when I was in Hanover, I was keen to see his grave—which turns out to carry just the simple Latin inscription “bones of Leibniz”:
Across town, however, there’s another commemoration of a sort—an outlet store for cookies that carry the name “Leibniz” in his honor:
So what should we make of Leibniz in the end? Had history developed differently, there would probably be a direct line from Leibniz to modern computation. But as it is, much of what Leibniz tried to do stands isolated—to be understood mostly by projecting backward from modern computational thinking to the 17th century.
And with what we know now, it is fairly clear what Leibniz understood, and what he did not. He grasped the concept of having formal, symbolic, representations for a wide range of different kinds of things. And he suspected that there might be universal elements (maybe even just 0 and 1) from which these representations could be built. And he understood that from a formal symbolic representation of knowledge, it should be possible to compute its consequences in mechanical ways—and perhaps create new knowledge by an enumeration of possibilities.
Some of what Leibniz wrote was abstract and philosophical—sometimes maddeningly so. But at some level Leibniz was also quite practical. And he had sufficient technical prowess to often be able to make real progress. His typical approach seems to have been to start by trying to create a formal structure to clarify things—with formal notation if possible. And after that his goal was to create some kind of “calculus” from which conclusions could systematically be drawn.
Realistically he only had true success with this in one specific area: continuous “geometrical” mathematics. It’s a pity he never tried more seriously in discrete mathematics, because I think he might have been able to make progress, and might conceivably even have reached the idea of universal computation. He might well also have ended up enumerating possible systems in the kind of way I have done in the computational universe.
One area where he did try his approach was with law. But in this he was surely far too early, and it is only now—300 years later—that computational law is beginning to seem realistic.
Leibniz also tried thinking about physics. But while he made progress with some specific concepts (like kinetic energy), he never managed to come up with any sort of large-scale “system of the world”, of the kind that Newton in effect did in his Principia.
In some ways, I think Leibniz failed to make more progress because he was trying too hard to be practical, and—like Newton—to decode the operation of actual physics, rather than just looking at related formal structures. For had Leibniz tried to do at least the basic kinds of explorations that I did in A New Kind of Science, I don’t think he would have had any technical difficulty—but I think the history of science could have been very different.
And I have come to realize that when Newton won the PR war against Leibniz over the invention of calculus, it was not just credit that was at stake; it was a way of thinking about science. Newton was in a sense quintessentially practical: he invented tools then showed how these could be used to compute practical results about the physical world. But Leibniz had a broader and more philosophical view, and saw calculus not just as a specific tool in itself, but as an example that should inspire efforts at other kinds of formalization and other kinds of universal tools.
I have often thought that the modern computational way of thinking that I follow is somehow obvious—and somehow an inevitable feature of thinking about things in formal, structured, ways. But it has never been very clear to me whether this apparent obviousness is just the result of modern times, and of our experience with modern practical computer technology. But looking at Leibniz, we get some perspective. And indeed what we see is that some core of modern computational thinking was possible even long before modern times. But the ambient technology and understanding of past centuries put definite limits on how far the thinking could go.
And of course this leads to a sobering question for us today: how much are we failing to realize from the core computational way of thinking because we do not have the ambient technology of the distant future? For me, looking at Leibniz has put this question in sharper focus. And at least one thing seems fairly clear.
In Leibniz’s whole life, he basically saw less than a handful of computers, and all they did was basic arithmetic. Today there are billions of computers in the world, and they do all sorts of things. But in the future there will surely be far, far more computers (made easier to create by the Principle of Computational Equivalence). And no doubt we’ll get to the point where basically everything we make will explicitly be made of computers at every level. And the result is that absolutely everything will be programmable, down to atoms. Of course, biology has in a sense already achieved a restricted version of this. But we will be able to do it completely and everywhere.
At some level we can already see that this implies some merger of computational and physical processes. But just how may be as difficult for us to imagine as things like Mathematica and Wolfram|Alpha would have been for Leibniz.
Leibniz died on November 14, 1716. In 2016 it will have been 300 years. And that will be a good opportunity to make sure everything we have from Leibniz has finally been gone through—and to celebrate, after three centuries, how many aspects of Leibniz’s core vision are finally coming to fruition, albeit in ways he could never have imagined.