# When Exactly Will the Eclipse Happen? A Multimillenium Tale of Computation

## Preparing for August 21, 2017

On August 21, 2017, there’s going to be a total eclipse of the Sun visible on a line across the US. But when exactly will the eclipse occur at a given location? Being able to predict astronomical events has historically been one of the great triumphs of exact science. But in 2017, how well can it actually be done?

The answer, I think, is well enough that even though the edge of totality moves at just over 1000 miles per hour it should be possible to predict when it will arrive at a given location to within perhaps a second. And as a demonstration of this, we’ve created a website to let anyone enter their geo location (or address) and then immediately compute when the eclipse will reach them—as well as generate many pages of other information.

This essay also appears in WIRED.

These days it’s easy to find out when the next solar eclipse will be; indeed built right into the Wolfram Language there’s just a function that tells you (in this form the output is the “time of greatest eclipse”):

```
SolarEclipse[]
```

It’s also easy to find out, and plot, where the region of totality will be:

```
GeoListPlot[SolarEclipse["TotalPhasePolygon"]]
```

Or to determine that the whole area of totality will be about 16% of the area of the US:

```
GeoArea[SolarEclipse["TotalPhasePolygon"]]/GeoArea[Entity["Country", "UnitedStates"]]
```

But computing eclipses is not exactly a new business. In fact, the Antikythera device from 2000 years ago even tried to do it—using 37 metal gears to approximate the motion of the Sun and Moon (yes, with the Earth at the center). To me there’s something unsettling—and cautionary—about the fact that the Antikythera device stands as such a solitary piece of technology, forgotten but not surpassed for more than 1600 years.

But right there on the bottom of the device there’s an arm that moves around, and when it points to an Η or Σ marking, it indicates a possible Sun or Moon eclipse. The way of setting dates on the device is a bit funky (after all, the modern calendar wouldn’t be invented for another 1500 years), but if one takes the simulation on the Wolfram Demonstrations Project (which was calibrated back in 2012 when the Demonstration was created), and turns the crank to set the device for August 21, 2017, here’s what one gets:

And, yes, all those gears move so as to line the Moon indicator up with the Sun—and to make the arm on the bottom point right at an Η—just as it should for a solar eclipse. It’s amazing to see this computation successfully happen on a device designed 2000 years ago.

Of course the results are a lot more accurate today. Though, strangely, despite all the theoretical science that’s been done, the way we actually compute the position of the Sun and Moon is conceptually very much like the gears—and effectively epicycles—of the Antikythera device. It’s just that now we have the digital equivalent of hundreds of thousands of gears.

## Why Do Eclipses Happen?

A total solar eclipse occurs when the Moon gets in front of the Sun from the point of view of a particular location on the Earth. And it so happens that at this point in the Earth’s history the Moon can just block the Sun because it has almost exactly the same angular diameter in the sky as the Sun (about 0.5° or 30 arc-minutes).
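That near-coincidence of angular sizes is easy to check with a few lines of code. The diameters and distances below are rounded mean reference values (not figures from this essay), so treat the result as a back-of-the-envelope sketch:

```python
import math

def angular_diameter_deg(diameter_km, distance_km):
    """Full angular diameter, in degrees, of a sphere of the given
    diameter seen from the given distance."""
    return math.degrees(2 * math.atan((diameter_km / 2) / distance_km))

# Approximate mean values (rounded reference figures)
moon_deg = angular_diameter_deg(3474, 384400)        # Moon: diameter ~3474 km, distance ~384,400 km
sun_deg = angular_diameter_deg(1391000, 149600000)   # Sun: diameter ~1.39M km, distance ~149.6M km

print(round(moon_deg, 2), round(sun_deg, 2))  # → 0.52 0.53
```

Both come out at about half a degree, which is why the Moon can just barely cover the Sun.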

So when does the Moon get between the Sun and the Earth? Well, basically every time there’s a new moon (i.e. once every lunar month). But we know there isn’t an eclipse every month. So how come?

```
Graphics[{Style[Disk[{0, 0}, .3/5], Yellow], Style[Disk[{.8, 0}, .1/5], Gray],
  Style[Disk[{1, 0}, .15/5], Blue]}]
```

Well, actually, in the analogous situation of Ganymede and Jupiter, there is an eclipse every time Ganymede goes around Jupiter (which happens to be about once per week). Like the Earth, Jupiter’s orbit around the Sun lies in a particular plane (the “Plane of the Ecliptic”). And it turns out that Ganymede’s orbit around Jupiter also lies in essentially the same plane. So every time Ganymede reaches the “new moon” position (or, in official astronomy parlance, when it’s aligned “in syzygy”—pronounced sizz-ee-gee), it’s in the right place to cast its shadow onto Jupiter, and to eclipse the Sun wherever that shadow lands. (From Jupiter, Ganymede appears about 3 times the size of the Sun.)

But our moon is different. Its orbit doesn’t lie in the plane of the ecliptic. Instead, it’s inclined at about 5°. (How it got that way is unknown, but it’s presumably related to how the Moon was formed.) But that 5° is what makes eclipses so comparatively rare: they can only happen when there’s a “new moon configuration” (syzygy) right at a time when the Moon’s orbit passes through the Plane of the Ecliptic.

To show what’s going on, let’s draw an exaggerated version of everything. Here’s the Moon going around the Earth, colored red whenever it’s close to the Plane of the Ecliptic:

```
Graphics3D[{
  With[{dt = 0, \[Theta] = 20 Degree},
   Table[{
     With[{p = {Sin[2 Pi (t + dt)/27.3] Cos[\[Theta]],
                Cos[2 Pi (t + dt)/27.3] Cos[\[Theta]],
                Cos[2 Pi (t + dt)/27.3] Sin[\[Theta]]}},
      {Style[Line[{{0, 0, 0}, p}], Opacity[.1]],
       Style[Sphere[p, .05],
        Blend[{Red, GrayLevel[.8, .02]}, Sqrt[Abs[Cos[2 Pi t/27.2]]]]]}],
     Style[Sphere[{0, 0, 0}, .1], Blue]}, {t, 0, 26}]],
  EdgeForm[Red],
  Style[InfinitePlane[{0, 0, 0}, {{1, 0, 0}, {0, 1, 0}}],
   Directive[Red, Opacity[.02]]]},
 Lighting -> "Neutral", Boxed -> False]
```

Now let’s look at what happens over the course of about a year. We’re showing a dot for where the Moon is each day. And the dot is redder if the Moon is closer to the Plane of the Ecliptic that day. (Note that if this were drawn to scale, you’d barely be able to see the Moon’s orbit, and it wouldn’t ever seem to go backwards like it does here.)

```
With[{dt = 1},
 Graphics[{
   Style[Disk[{0, 0}, .1], Darker[Yellow]],
   Table[{
     With[{p = .2 {Sin[2 Pi t/27.3], Cos[2 Pi t/27.3]} +
               {Sin[2 Pi t/365.25], Cos[2 Pi t/365.25]}},
      {Style[Line[{{Sin[2 Pi t/365.25], Cos[2 Pi t/365.25]}, p}], Opacity[.3]],
       Style[Disk[p, .01],
        Blend[{Red, GrayLevel[.8]}, Sqrt[Abs[Cos[2 Pi (t + dt)/27.2]]]]]}],
     Style[Disk[{Sin[2 Pi t/365.25], Cos[2 Pi t/365.25]}, .005], Blue]},
    {t, 360}]}]]
```

Now we can start to see how eclipses work. The basic point is that there’s a solar eclipse whenever the Moon is both positioned between the Earth and the Sun, and it’s in the Plane of the Ecliptic. In the picture, those two conditions correspond to the Moon being as far as possible towards the center, and as red as possible. So far we’re only showing the position of the (exaggerated) moon once per day. But to make things clearer, let’s show it four times a day—and now prune out cases where the Moon isn’t at least roughly lined up with the Sun:

```
With[{dt = 1},
 Graphics[{
   Style[Disk[{0, 0}, .1], Darker[Yellow]],
   Table[{
     With[{p = .2 {Sin[2 Pi t/27.3], Cos[2 Pi t/27.3]} +
               {Sin[2 Pi t/365.25], Cos[2 Pi t/365.25]}},
      If[Norm[p] > .81, {},
       {Style[Line[{{Sin[2 Pi t/365.25], Cos[2 Pi t/365.25]}, p}], Opacity[.3]],
        Style[Disk[p, .01],
         Blend[{Red, GrayLevel[.8]}, Sqrt[Abs[Cos[2 Pi (t + dt)/27.2]]]]]}]],
     Style[Disk[{Sin[2 Pi t/365.25], Cos[2 Pi t/365.25]}, .005], Blue]},
    {t, 1, 360, .25}]}]]
```

And now we can see that, at least in this particular case, there are two points where the Moon is both lined up with the Sun and in the plane of the ecliptic (so shown red)—and these points will then correspond to solar eclipses.

In different years, the picture will look slightly different, essentially because the Moon is starting at a different place in its orbit at the beginning of the year. Here are schematic pictures for a few successive years:

```
GraphicsGrid[Partition[
  Table[
   With[{dt = 1},
    Graphics[{
      Table[{
        With[{p = .2 {Sin[2 Pi t/27.3], Cos[2 Pi t/27.3]} +
                  {Sin[2 Pi t/365.25], Cos[2 Pi t/365.25]}},
         If[Norm[p] > .81, {},
          {Style[Line[{{0, 0}, p}],
            Blend[{Red, GrayLevel[.8]}, Sqrt[Abs[Cos[2 Pi (t + dt)/27.2]]]]],
           Style[Line[{{Sin[2 Pi t/365.25], Cos[2 Pi t/365.25]}, p}], Opacity[.3]],
           Style[Disk[p, .01],
            Blend[{Red, GrayLevel[.8]}, Sqrt[Abs[Cos[2 Pi (t + dt)/27.2]]]]]}]],
        Style[Disk[{Sin[2 Pi t/365.25], Cos[2 Pi t/365.25]}, .005], Blue]},
       {t, 1 + n*365.25, 360 + n*365.25, .25}],
      Style[Disk[{0, 0}, .1], Darker[Yellow]]}]],
   {n, 0, 5}], 3]]
```

It’s not so easy to see exactly when eclipses occur here—and it’s also not possible to tell which are total eclipses where the Moon is exactly lined up, and which are only partial eclipses. But there’s at least an indication, for example, that there are “eclipse seasons” in different parts of the year where eclipses happen.
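The eclipse-season pattern can be sketched numerically with a toy model: step through successive new moons (one every synodic month) and keep only those that fall close to a node crossing (nodes come every half draconic month). This ignores the eccentricities and perturbations discussed later, and the 1.5-day cutoff is an illustrative stand-in for the real eclipse limits:

```python
SYNODIC = 29.530589    # days between new moons
DRACONIC = 27.212221   # days between successive same-direction node crossings

def near_node(t_days, tolerance_days=1.5):
    """True if time t falls within the tolerance of a node crossing.
    Node crossings come every half draconic month; the 1.5-day cutoff
    is an illustrative stand-in for the real eclipse limits."""
    half = DRACONIC / 2
    offset = t_days % half
    return min(offset, half - offset) < tolerance_days

# New moons over about three years, measured from an arbitrary epoch at
# which a new moon coincides exactly with a node crossing
new_moons = [n * SYNODIC for n in range(37)]
candidates = [round(t, 1) for t in new_moons if near_node(t)]
print(candidates)
```

The candidate new moons come 5 or 6 lunar months apart, clustering into “seasons” that recur roughly every 173 days (half of the 346.6-day eclipse year), and occasionally two candidates fall at consecutive new moons, which is how a single year can fit as many as five solar eclipses.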

OK, so what does the real data look like? Here’s a plot for 20 years in the past and 20 years in the future, showing the actual days in each year when total and partial solar eclipses occur (the small dots everywhere indicate new moons):

```
coord[date_] := {DateValue[date, "Year"],
  date - NextDate[DateObject[{DateValue[date, "Year"]}], "Instant"]}

ListPlot[
 coord /@ SolarEclipse[{Now - Quantity[20, "Years"], Now + Quantity[20, "Years"], All}],
 AspectRatio -> 1/3, Frame -> True]
```

The reason for the “drift” between successive years is just that the lunar month (29.53 days) doesn’t line up with the year: the Moon doesn’t go through a whole number of orbits in the course of a year, so at the beginning of a new year it’s in a different phase. But as the picture makes clear, there’s quite a lot of regularity in the general times at which eclipses occur—and for example there are usually 2 eclipses in a given year—though there can be more (and in 0.2% of years there can be as many as 5, as there last were in 1935).
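The size of that drift follows from simple arithmetic (the year length below is the standard mean calendar year, an added reference value):

```python
SYNODIC = 29.530589   # mean synodic month (new moon to new moon), days
YEAR = 365.2425       # mean calendar year, days

lunations_per_year = YEAR / SYNODIC
drift_days = YEAR % SYNODIC   # leftover beyond 12 whole lunar months

print(round(lunations_per_year, 3))   # ~12.368 lunar months per year
print(round(drift_days, 1))           # ~10.9 days of phase drift per year
```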

To see more detail about eclipses, let’s plot the time differences (in fractional years) between all successive solar eclipses for 100 years in the past and 100 years in the future:

```
ListLinePlot[
 Differences[SolarEclipse[{Now - Quantity[100, "Years"], Now + Quantity[100, "Years"], All}]]/
  Quantity[1, "Years"],
 Mesh -> All, PlotRange -> All, Frame -> True, AspectRatio -> 1/3,
 FrameTicks -> {None, Automatic}]
```

And now let’s plot the same time differences, but just for total solar eclipses:

```
ListLinePlot[
 Differences[SolarEclipse[{Now - Quantity[100, "Years"], Now + Quantity[100, "Years"], All},
   EclipseType -> "Total"]]/Quantity[1, "Years"],
 Mesh -> All, PlotRange -> All, Frame -> True, AspectRatio -> 1/3,
 FrameTicks -> {None, Automatic}]
```

There’s obviously a fair amount of overall regularity here, but there’s also lots of fine structure and plenty of small irregularities. And being able to correctly predict all these details has basically taken science the better part of a few thousand years.

## Ancient History

It’s hard not to notice an eclipse, and presumably even from the earliest times people did. But were eclipses just reflections—or omens—associated with random goings-on in the heavens, perhaps in some kind of soap opera among the gods? Or were they things that could somehow be predicted?

A few thousand years ago, it wouldn’t have been clear what people like astrologers could conceivably predict. When will the Moon be at a certain place in the sky? Will it rain tomorrow? What will the price of barley be? Who will win a battle? Even now, we’re not sure how predictable all of these are. But the one clear case where prediction and exact science have triumphed is astronomy.

At least as far as the Western tradition is concerned, it all seems to have started in ancient Babylon—where for many hundreds of years, careful observations were made, and, in keeping with the ways of that civilization, detailed records were kept. And even today we still have thousands of daily official diary entries written in what look like tiny chicken scratches preserved on little clay tablets (particularly from Nineveh, which happens now to be in Mosul, Iraq). “Night of the 14th: Cold north wind. Moon was in front of α Leonis. From 15th to 20th river rose 1/2 cubit. Barley was 1 kur 5 siit. 25th, last part of night, moon was 1 cubit 8 fingers behind ε Leonis. 28th, 74° after sunrise, solar eclipse…”

If one looks at what happens on a particular day, one probably can’t tell much. But by putting observations together over years or even hundreds of years it’s possible to see all sorts of repetitions and regularities. And back in Babylonian times the idea arose of using these to construct an ephemeris—a systematic table that said where a particular heavenly body such as the Moon was expected to be at any particular time.

(Needless to say, reconstructing Babylonian astronomy is a complicated exercise in decoding what’s by now basically an alien culture. A key figure in this effort was a certain Otto Neugebauer, who happened to work down the hall from me at the Institute for Advanced Study in Princeton in the early 1980s. I would see him almost every day—a quiet white-haired chap, with a twinkle in his eye—and just sometimes I’d glimpse his huge filing system of index cards which I now realize was at the center of understanding Babylonian astronomy.)

One thing the Babylonians did was to measure surprisingly accurately the repetition period for the phases of the Moon—the so-called synodic month (or “lunation period”) of about 29.53 days. And they noticed that 235 synodic months was very close to 19 years—so that about every 19 years dates and phases of the Moon repeat their alignment, forming a so-called Metonic cycle (named after Meton of Athens, who described it in 432 BC).
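The Metonic near-coincidence is quick to verify (the mean year length below is an added reference value):

```python
SYNODIC = 29.530589   # mean synodic month, days
YEAR = 365.2425       # mean calendar year, days

metonic_months = 235 * SYNODIC   # 235 synodic months
metonic_years = 19 * YEAR        # 19 calendar years

print(round(metonic_months, 2), round(metonic_years, 2))  # → 6939.69 6939.61
print(round(24 * (metonic_months - metonic_years), 1))    # mismatch in hours: ~1.9
```

So after 19 years, the phases of the Moon repeat against the calendar to within about two hours.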

It probably helps that the random constellations in the sky form a good pattern against which to measure the precise position of the Moon (it reminds me of the modern fashion of wearing fractals to make motion capture for movies easier). But the Babylonians noticed all sorts of details of the motion of the Moon. They knew about its “anomaly”: its periodic speeding up and slowing down in the sky (now known to be a consequence of its slightly elliptical orbit). And they measured the average period of this—the so-called anomalistic month—to be about 27.55 days. They also noticed that the Moon went above and below the Plane of the Ecliptic (now known to be because of the inclination of its orbit)—with an average period (the so-called draconic month) that they measured as about 27.21 days.

And by 400 BC they’d noticed that after every so-called saros of about 18 years 11 days, all these different periods essentially line up (223 synodic months, 239 anomalistic months and 242 draconic months)—with the result that the Moon ends up at about the same position relative to the Sun. And this means that if there was an eclipse at one saros, one can predict that there’s going to be an eclipse at the next saros too.

When one’s absolutely precise about it, there are all sorts of effects that prevent precise repetition at each saros. But over timescales of more than 1300 years, there are in fact still strings of eclipses separated from each other by one saros. (Over the course of such a saros series, the locations of the eclipses effectively scan across the Earth; the upcoming eclipse is number 22 in a series of 77 that began in 1639 AD with an eclipse near the North Pole and will end in 3009 AD with an eclipse near the South Pole.)
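The saros alignment, and the span of a saros series, both check out numerically (month lengths are the modern mean values quoted above):

```python
SYNODIC = 29.530589       # phase (new moon) cycle, days
ANOMALISTIC = 27.554550   # speeding-up/slowing-down cycle, days
DRACONIC = 27.212221      # above/below-the-ecliptic cycle, days

saros_s = 223 * SYNODIC      # ≈ 6585.32 days ≈ 18 years 11 days
saros_a = 239 * ANOMALISTIC
saros_d = 242 * DRACONIC

print(round(saros_s, 2), round(saros_a, 2), round(saros_d, 2))
# all three land within about a quarter-day of one another

# A series of 77 eclipses spans 76 saros intervals:
print(round(76 * saros_s / 365.2425))   # → 1370 years, matching 1639 AD to 3009 AD
```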

Any given moment in time will be in the middle of quite a few saros series (right now it’s 40)—and successive eclipses will always come from different series. But knowing about the saros cycle is a great first step in predicting eclipses—and it’s for example what the Antikythera device uses. In a sense, it’s a quintessential piece of science: take many observations, then synthesize a theory from them, or at least a scheme for computation.

It’s not clear what the Babylonians thought about abstract, formal systems. But the Greeks were definitely into them. And by 300 BC Euclid had defined his abstract system for geometry. So when someone like Ptolemy did astronomy, they did it a bit like Euclid—effectively taking things like the saros cycle as axioms, and then proving from them often surprisingly elaborate geometrical theorems, such as that there must be at least two solar eclipses in a given year.

Ptolemy’s Almagest from around 150 AD is an impressive piece of work, containing among many other things some quite elaborate procedures—and explicit tables—for predicting eclipses. (Yes, even in the later printed version, numbers are still represented confusingly by letters, as they always were in ancient Greek.)

In Ptolemy’s astronomy, Earth was assumed to be at the center of everything. But in modern terms that just meant he was choosing to use a different coordinate system—which didn’t affect most of the things he wanted to do, like working out the geometry of eclipses. And unlike the mainline Greek philosophers he wasn’t so much trying to make a fundamental theory of the world, but just wanted whatever epicycles and so on he needed to explain what he observed.

## The Dawn of Modern Science

For more than a thousand years Ptolemy’s theory of the Moon defined the state of the art. In the 1300s Ibn al-Shatir revised Ptolemy’s models, achieving somewhat better accuracy. In 1472 Regiomontanus (Johannes Müller), systematizer of trigonometry, published more complete tables as part of his launch of what was essentially the first-ever scientific publishing company. But even in 1543 when Nicolaus Copernicus introduced his Sun-centered model of the solar system, the results he got were basically the same as Ptolemy’s, even though his underlying description of what was going on was quite different.

It’s said that Tycho Brahe got interested in astronomy in 1560 at age 13 when he saw a solar eclipse that had been predicted—and over the next several decades his careful observations uncovered several effects in the motion of the Moon (such as its speeding up just before a full moon) that eventually resulted in perhaps a factor-of-5 improvement in the prediction of its position. To Tycho eclipses were key tests, and he measured them carefully, and worked hard to be able to predict their timing to better than a few hours. (He himself never saw a total solar eclipse, only partial ones.)

Armed with Tycho’s observations, Johannes Kepler developed his description of orbits as ellipses—introducing concepts like inclination and eccentric anomaly—and in 1627 finally produced his Rudolphine Tables, which got right a lot of things that had been got wrong before, and included all sorts of detailed tables of lunar positions, as well as vastly better predictions for eclipses.

Using Kepler’s Rudolphine Tables (and a couple of pages of calculations), the first known actual map of a solar eclipse was published in 1654. And while there are some charming inaccuracies in the overall geography, the geometry of the eclipse isn’t too bad.

Whether it was Ptolemy’s epicycles or Kepler’s ellipses, there were plenty of calculations to do in determining the motions of heavenly bodies (and indeed the first known mechanical calculator—excepting the Antikythera device—was developed by a friend of Kepler’s, presumably for the purpose). But there wasn’t really a coherent underlying theory; it was more a matter of describing effects in ways that could be used to make predictions.

So it was a big step forward in 1687 when Isaac Newton published his Principia, and claimed that with his laws for motion and gravity it should be possible—essentially from first principles—to calculate everything about the motion of the Moon. (Charmingly, in his “Theory of the World” section he simply asserts as his Proposition XXII “That all the motions of the Moon… follow from the principles which we have laid down.”)

Newton was proud of the fact that he could explain all sorts of known effects on the basis of his new theory. But when it came to actually calculating the detailed motion of the Moon he had a frustrating time. And even after several years he still couldn’t get the right answer—in later editions of the Principia adding the admission that actually “The apse of the Moon is about twice as swift” (i.e. his answer was wrong by a factor of 2).

Still, in 1702 Newton was happy enough with his results that he allowed them to be published, in the form of a 20-page booklet on the “Theory of the Moon”, which proclaimed that “By this Theory, what by all Astronomers was thought most difficult and almost impossible to be done, the Excellent Mr. Newton hath now effected, viz. to determine the Moon’s Place even in her Quadratures, and all other Parts of her Orbit, besides the Syzygys, so accurately by Calculation, that the Difference between that and her true Place in the Heavens shall scarce be two Minutes…”

Newton didn’t explain his methods (and actually it’s still not clear exactly what he did, or how mathematically rigorous it was or wasn’t). But his booklet effectively gave a step-by-step algorithm to compute the position of the Moon. He didn’t claim it worked “at the syzygys” (i.e. when the Sun, Moon and Earth are lined up for an eclipse)—though his advertised error of two arc-minutes was still much smaller than the angular size of the Moon in the sky.

But it wasn’t eclipses that were the focus then; it was a very practical problem of the day: knowing the location of a ship out in the open ocean. It’s possible to determine what latitude you’re at just by measuring how high the Sun gets in the sky. But to determine longitude you have to correct for the rotation of the Earth—and to do that you have to accurately keep track of time. But back in Newton’s day, the clocks that existed simply weren’t accurate enough, especially when they were being tossed around on a ship.

But particularly after various naval accidents, the problem of longitude was deemed important enough that the British government in 1714 established a “Board of Longitude” to offer prizes to help get it solved. One early suggestion was to use the regularity of the moons of Jupiter discovered by Galileo as a way to tell time. But it seemed that a simpler solution (not requiring a powerful telescope) might just be to measure the position of our moon, say relative to certain fixed stars—and then to back-compute the time from this.

But to do this one had to have an accurate way to predict the motion of the Moon—which is what Newton was trying to provide. In reality, though, it took until the 1760s before tables were produced that were accurate enough to be able to determine time to within a minute (and thus distance to within 15 miles or so). And it so happens that right around the same time a marine chronometer was invented that was directly able to keep good time.
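The arithmetic connecting clock error to position error is worth making explicit. The Earth turns 15° of longitude per hour, and a degree of longitude spans about 69 miles at the equator (shrinking with the cosine of latitude); the circumference figure below is an added round number:

```python
import math

DEG_PER_MIN = 360 / (24 * 60)   # Earth turns 0.25 degrees of longitude per minute
MILES_PER_DEG = 24901 / 360     # equatorial circumference / 360 ≈ 69.2 miles

def position_error_miles(clock_error_min, latitude_deg=0):
    """East-west position error caused by a clock error, at a given latitude."""
    return clock_error_min * DEG_PER_MIN * MILES_PER_DEG * math.cos(math.radians(latitude_deg))

print(round(position_error_miles(1), 1))      # → 17.3 miles at the equator
print(round(position_error_miles(1, 30), 1))  # → 15.0 miles at 30 degrees latitude
```

So a clock good to a minute pins down longitude to 15 miles or so at mid-latitudes.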

## The Three-Body Problem

One of Newton’s great achievements in the Principia was to solve the so-called two-body problem, and to show that with an inverse square law of gravity the orbit of one body around another must always be what Kepler had said: an ellipse.

In a first approximation, one can think of the Moon as just orbiting the Earth in a simple elliptical orbit. But what makes everything difficult is that that’s just an approximation, because in reality there’s also a gravitational pull on the Moon from the Sun. And because of this, the Moon’s orbit is no longer a simple fixed ellipse—and in fact it ends up being much more complicated. There are a few definite effects one can describe and reason about. The ellipse gets stretched when the Earth is closer to the Sun in its own orbit. The orientation of the ellipse precesses like a top as a result of the influence of the Sun. But there’s no way in the end to work out the orbit by pure reasoning—so there’s no choice but to go into the mathematics and start solving the equations of the three-body problem.

In many ways this represented a new situation for science. In the past, one hadn’t ever been able to go far without having to figure out new laws of nature. But here the underlying laws were supposedly known, courtesy of Newton. Yet even given these laws, there was difficult mathematics involved in working out the behavior they implied.

Over the course of the 1700s and 1800s the effort to try to solve the three-body problem and determine the orbit of the Moon was at the center of mathematical physics—and attracted a veritable who’s who of mathematicians and physicists.

An early entrant was Leonhard Euler, who developed methods based on trigonometric series (including much of our current notation for such things), and whose works contain many immediately recognizable formulas:

In the mid-1740s there was a brief flap—also involving Euler’s “competitors” Clairaut and d’Alembert—about the possibility that the inverse-square law for gravity might be wrong. But the problem turned out to be with the calculations, and by 1748 Euler was using sums of about 20 trigonometric terms and proudly proclaiming that the tables he’d produced for the three-body problem had predicted the time of a total solar eclipse to within minutes. (Actually, he had said there’d be 5 minutes of totality, whereas in reality there was only 1—but he blamed this error on incorrect coordinates he’d been given for Berlin.)

Mathematical physics moved rapidly over the next few decades, with all sorts of now-famous methods being developed, notably by people like Lagrange. And by the 1770s, for example, Lagrange’s work was looking just like it could have come from a modern calculus book (or from a Wolfram|Alpha step-by-step solution):

Particularly in the hands of Laplace there was increasingly obvious success in deriving the observed phenomena of what he called “celestial mechanics” from mathematics—and in establishing the idea that mathematics alone could indeed generate new results in science.

At a practical level, measurements of things like the position of the Moon had always been much more accurate than calculations. But now they were becoming more comparable—driving advances in both. Meanwhile, there was increasing systematization in the production of ephemeris tables. And in 1767 the annual publication began of what was for many years the standard: the British Nautical Almanac.

The almanac quoted the position of the Moon to the arc-second, and systematically achieved at least arc-minute accuracy. The primary use of the almanac was for navigation (and it was what started the convention of using Greenwich as the “prime meridian” for measuring time). But right at the front of each year’s edition were the predicted times of the eclipses for that year—in 1767 just two solar eclipses:

## The Math Gets More Serious

At a mathematical level, the three-body problem is about solving a system of three ordinary differential equations that give the positions of the three bodies as a function of time. If the positions are represented in standard 3D Cartesian coordinates rᵢ = {xᵢ, yᵢ, zᵢ}, the equations can be stated in the form:
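(Written out explicitly, this is the standard Newtonian form, given here for concreteness, with G the gravitational constant and mⱼ the masses:)

```latex
\ddot{\mathbf{r}}_i \,=\, \sum_{j \neq i} \frac{G\, m_j \,(\mathbf{r}_j - \mathbf{r}_i)}{\left|\mathbf{r}_j - \mathbf{r}_i\right|^{3}}, \qquad i = 1, 2, 3
```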

The {x,y,z} coordinates here aren’t, however, what traditionally show up in astronomy. For example, in describing the position of the Moon one might use longitude and latitude on a sphere around the Earth. Or, given that one knows the Moon has a roughly elliptical orbit, one might instead choose to describe its motions by variables that are based on deviations from such an orbit. In principle it’s just a matter of algebraic manipulation to restate the equations with any given choice of variables. But in practice what comes out is often long and complex—and can lead to formulas that fill many pages.

But, OK, so what are the best kind of variables to use for the three-body problem? Maybe they should involve relative positions of pairs of bodies. Or relative angles. Or maybe positions in various kinds of rotating coordinate systems. Or maybe quantities that would be constant in a pure two-body problem. Over the course of the 1700s and 1800s many treatises were written exploring different possibilities.

But in essentially all cases the ultimate approach to the three-body problem was the same. Set up the problem with the chosen variables. Identify parameters that, if set to zero, would make the problem collapse to some easy-to-solve form. Then do a series expansion in powers of these parameters, keeping just some number of terms.

By the 1860s Charles Delaunay had spent 20 years developing the most extensive theory of the Moon in this way. He’d identified five parameters with respect to which to do his series expansions (eccentricities, inclinations, and ratios of orbit sizes)—and in the end he generated about 1800 pages like this (yes, he really needed Mathematica!):

But the sad fact was that despite all this effort, he didn’t get terribly good answers. And eventually it became clear why. The basic problem was that Delaunay wanted to represent his results in terms of functions like sin and cos. But in his computations, he often wanted to do series expansions with respect to the frequencies of those functions. Here’s a minimal case:

```
Series[Sin[(ω + δ)*t], {δ, 0, 3}]
```

And here’s the problem. Take a look even at the second term. Yes, the δ parameter may be small. But how about the t parameter, standing for time? If you don’t want to make predictions very far out, that’ll stay small. But what if you want to figure out what will happen further in the future?

Well, eventually that term will get big. And higher-order terms will get even bigger. But unless the Moon is going to escape its orbit or something, the final mathematical expressions that represent its position can’t have values that are too big. So in these expressions the so-called secular terms that increase with t must somehow cancel out.

But the problem is that at any given order in the series expansion, there’s no guarantee that will happen in a numerically useful way. And in Delaunay’s case—even though with immense effort he often went to 7th order or beyond—it didn’t.
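The failure of secular terms at large t is easy to see numerically. Below, the exact sin((ω+δ)t) is compared with its first-order expansion sin(ωt) + δ t cos(ωt): fine at small t, useless once δt is of order 1. (The values of ω and δ are purely illustrative.)

```python
import math

omega, delta = 1.0, 0.01

def exact(t):
    return math.sin((omega + delta) * t)

def first_order(t):
    # First-order series in delta: the delta*t*cos term is "secular" -- it grows with t
    return math.sin(omega * t) + delta * t * math.cos(omega * t)

for t in (1, 10, 100, 1000):
    print(t, round(abs(exact(t) - first_order(t)), 4))
```

By t = 1000 the truncated series has an amplitude of about 10, even though the exact function never leaves the range ±1.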

One nice feature of Delaunay’s computation was that it was in a sense entirely algebraic: everything was done symbolically, and only at the very end were actual numerical values of parameters substituted in.

But even before Delaunay, Peter Hansen had taken a different approach—substituting numbers as soon as he could, and dropping terms based on their numerical size rather than their symbolic form. His presentations look less pure (notice things like all those t − 1800 terms, where t is the time in years), and it’s more difficult to tell what’s going on. But as a practical matter, his results were much better, and in fact were used for many national almanacs from about 1862 to 1922, achieving errors as small as 1 or 2 arc-seconds, at least over periods of a decade or so. (Over longer periods, the errors could rapidly increase because of terms that had been dropped as a result of what amounted to numerical accidents.)

Both Delaunay and Hansen tried to represent orbits as series of powers and trigonometric functions (so-called Poisson series). But in the 1870s, George Hill in the US Nautical Almanac Office proposed instead using as a basis numerically computed functions that came from solving an equation for two-body motion with a periodic driving force of roughly the kind the Sun exerts on the Moon’s orbit. A large-scale effort was mounted, and starting in 1892 Ernest W. Brown (who had moved to the US, but had been a student of George Darwin, Charles Darwin’s physicist son) took charge of the project and in 1918 produced what would stand for many years as the definitive “Tables of the Motion of the Moon”.

Brown’s tables consist of hundreds of pages like this—ultimately representing the position of the Moon as a combination of about 1400 terms with very precise coefficients:

He says right at the beginning that the tables aren’t particularly intended for unique events like eclipses, but then goes ahead and does a “worked example” of computing an eclipse from 381 BC, reported by Ptolemy:

It was an impressive indication of how far things had come. But ironically enough the final presentation of Brown’s tables had the same sum-of-trigonometric-functions form that one would get from having lots of epicycles. At some level it’s not surprising, because any function can ultimately be represented by epicycles, just as it can be represented by a Fourier or other series. But it’s a strange quirk of history that such similar forms were used.

## Can the Three-Body Problem Be Solved?

It’s all well and good that one can find approximations to the three-body problem, but what about just finding an outright solution—like as a mathematical formula? Even in the 1700s, there’d been some specific solutions found—like Euler’s collinear configuration, and Lagrange’s equilateral triangle. But a century later, no further solutions had been found—and finding a complete solution to the three-body problem was beginning to seem as hopeless as trisecting an angle, solving the quintic, or making a perpetual motion machine. (That sentiment was reflected for example in a letter Charles Babbage wrote Ada Lovelace in 1843 mentioning the “horrible problem [of] the three bodies”—even though this letter was later misinterpreted by Ada’s biographers to be about a romantic triangle, not the three-body problem of celestial mechanics.)

In contrast to the three-body problem, what seemed to make the two-body problem tractable was that its solutions could be completely characterized by “constants of the motion”—quantities that stay constant with time (in this case notably the direction of the axis of the ellipse). So for many years one of the big goals with the three-body problem was to find constants of the motion.

In 1887, though, Heinrich Bruns showed that there couldn’t be any such constants of the motion, at least expressible as algebraic functions of the standard {x,y,z} position and velocity coordinates of the three bodies. Then in the mid-1890s Henri Poincaré showed that actually there couldn’t be any constants of the motion that were expressible as any analytic functions of the positions, velocities and mass ratios.

One reason that was particularly disappointing at the time was that it had been hoped that somehow constants of the motion would be found in n-body problems that would lead to a mathematical proof of the long-term stability of the solar system. And as part of his work, Poincaré also saw something else: that at least in particular cases of the three-body problem, there was arbitrarily sensitive dependence on initial conditions—implying that even tiny errors in measurement could be amplified to arbitrarily large changes in predicted behavior (the classic “chaos theory” phenomenon).

But having discovered that particular solutions to the three-body problem could have this kind of instability, Poincaré took a different approach that would actually be characteristic of much of pure mathematics going forward: he decided to look not at individual solutions, but at the space of all possible solutions. And needless to say, he found that for the three-body problem, this was very complicated—though in his efforts to analyze it he invented the field of topology.

Poincaré’s work all but ended efforts to find complete solutions to the three-body problem. It also seemed to some to explain why the series expansions of Delaunay and others hadn’t worked out—though in 1912 Karl Sundman did show that at least in principle the three-body problem could be solved in terms of an infinite series, albeit one that converges outrageously slowly.

But what does it mean to say that there can’t be a solution to the three-body problem? Galois had shown that there couldn’t be a solution to the generic quintic equation, at least in terms of radicals. But actually it’s still perfectly possible to express the solution in terms of elliptic or hypergeometric functions. So why can’t there be some more sophisticated class of functions that can be used to just “solve the three-body problem”?

Here are some pictures of what can actually happen in the three-body problem, with various initial conditions:

 ✕ 
```
eqns = {Subscript[m, 1] Subscript[r, 1]''[t] == 
    -((Subscript[m, 1] Subscript[m, 2] (Subscript[r, 1][t] - Subscript[r, 2][t]))/
       Norm[Subscript[r, 1][t] - Subscript[r, 2][t]]^3) - 
     (Subscript[m, 1] Subscript[m, 3] (Subscript[r, 1][t] - Subscript[r, 3][t]))/
      Norm[Subscript[r, 1][t] - Subscript[r, 3][t]]^3, 
   Subscript[m, 2] Subscript[r, 2]''[t] == 
    -((Subscript[m, 1] Subscript[m, 2] (Subscript[r, 2][t] - Subscript[r, 1][t]))/
       Norm[Subscript[r, 2][t] - Subscript[r, 1][t]]^3) - 
     (Subscript[m, 2] Subscript[m, 3] (Subscript[r, 2][t] - Subscript[r, 3][t]))/
      Norm[Subscript[r, 2][t] - Subscript[r, 3][t]]^3, 
   Subscript[m, 3] Subscript[r, 3]''[t] == 
    -((Subscript[m, 1] Subscript[m, 3] (Subscript[r, 3][t] - Subscript[r, 1][t]))/
       Norm[Subscript[r, 3][t] - Subscript[r, 1][t]]^3) - 
     (Subscript[m, 2] Subscript[m, 3] (Subscript[r, 3][t] - Subscript[r, 2][t]))/
      Norm[Subscript[r, 3][t] - Subscript[r, 2][t]]^3};
(SeedRandom[#];
  {Subscript[m, 1], Subscript[m, 2], Subscript[m, 3]} = RandomReal[{0, 1}, 3];
  inits = Table[{Subscript[r, i][0] == RandomReal[{-1, 1}, 3], 
     Subscript[r, i]'[0] == RandomReal[{-1, 1}, 3]}, {i, 3}];
  sols = NDSolve[{eqns, inits}, 
    {Subscript[r, 1], Subscript[r, 2], Subscript[r, 3]}, {t, 0, 100}];
  ParametricPlot3D[{Subscript[r, 1][t], Subscript[r, 2][t], Subscript[r, 3][t]} /. sols, 
    {t, 0, 100}, Ticks -> None]) & /@ {776, 5742, 6711, 2300, 5281, 9225}
```

And looking at these immediately gives some indication of why it’s not easy to just “solve the three-body problem”. Yes, there are cases where what happens is fairly simple. But there are also cases where it’s not, and where the trajectories of the three bodies continue to be complicated and tangled for a long time.

So what’s fundamentally going on here? I don’t think traditional mathematics is the place to look. But I think what we’re seeing is actually an example of a general phenomenon I call computational irreducibility that I discovered in the 1980s in studying the computational universe of possible programs.

Many programs, like many instances of the three-body problem, behave in quite simple ways. But if you just start looking at all possible simple programs, it doesn’t take long before you start seeing behavior like this:

 ✕ ```ArrayPlot[ CellularAutomaton[{#, 3, 1}, {{2}, 0}, 100], ImageSize -> {Automatic, 100}] & /@ {5803305107286, 2119737824118, 5802718895085, 4023376322994, 6252890585925}```

How can one tell what’s going to happen? Well, one can just keep explicitly running each program and seeing what it does. But the question is: is there some systematic way to jump ahead, and to predict what will happen without tracing through all the steps?

The answer is that in general there isn’t. And what I call the Principle of Computational Equivalence suggests that pretty much whenever one sees complex behavior, there won’t be.

Here’s the way to think about this. The system one’s studying is effectively doing a computation to work out what its behavior will be. So to jump ahead we’d in a sense have to do a more sophisticated computation. But what the Principle of Computational Equivalence says is that actually we can’t—and that whether we’re using our brains or our mathematics or a Turing machine or anything else, we’re always stuck with computations of the same sophistication.

So what about the three-body problem? Well, I strongly suspect that it’s an example of computational irreducibility: that in effect the computations it’s doing are as sophisticated as any computations that we can do, so there’s no way we can ever expect to systematically jump ahead and solve the problem. (We also can’t expect to just define some new finite class of functions that can just be evaluated to give the solution.)

I’m hoping that one day someone will rigorously prove this. There’s some technical difficulty, because the three-body problem is usually formulated in terms of real numbers that immediately have an infinite number of digits—but to compare with ordinary computation one has to require finite processes to set up initial conditions. (Ultimately one wants to show for example that there’s a “compiler” that can go from any program, say for a Turing machine, and can generate instructions to set up initial conditions for a three-body problem so that the evolution of the three-body problem will give the same results as running that program—implying that the three-body problem is capable of universal computation.)

I have to say that I consider Newton in a sense very lucky. It could have been that it wouldn’t have been possible to work out anything interesting from his theory without encountering the kind of difficulties he had with the motion of the Moon—because one would always be running into computational irreducibility. But in fact, there was enough computational reducibility and enough that could be computed easily that one could see that the theory was useful in predicting features of the world (and not getting wrong answers, like with the apse of the Moon)—even if there were some parts that might take two centuries to work out, or never be possible at all.

Newton himself was certainly aware of the potential issue, saying that at least if one was dealing with gravitational interactions between many planets then “to define these motions by exact laws admitting of easy calculation exceeds, if I am not mistaken, the force of any human mind”. And even today it’s extremely difficult to know what the long-term evolution of the solar system will be.

It’s not particularly that there’s sensitive dependence on initial conditions: we actually have measurements that should be precise enough to determine what will happen for a long time. The problem is that we just have to do the computation—a bit like computing the digits of π—to work out the behavior of the n-body problem that is our solar system.

Existing simulations show that for perhaps a few tens of millions of years, nothing too dramatic can happen. But after that we don’t know. Planets could change their order. Maybe they could even collide, or be ejected from the solar system. Computational irreducibility implies that at least after an infinite time it’s actually formally undecidable (in the sense of Gödel’s Theorem or the Halting Problem) what can happen.

One of my children, when they were very young, asked me whether when dinosaurs existed the Earth could have had two moons. For years when I ran into celestial mechanics experts I would ask them that question—and it was notable how difficult they found it. Most now say that at least at the time of the dinosaurs we couldn’t have had an extra moon—though a billion years earlier it’s not clear.

We used to only have one system of planets to study. And the fact that there were (then) 9 of them used to be a classic philosopher’s example of a truth about the world that just happens to be the way it is, and isn’t “necessarily true” (like 2+2=4). But now of course we know about lots of exoplanets. And it’s beginning to look as if there might be a theory for things like how many planets a solar system is likely to have.

At some level there’s presumably a process like natural selection: some configurations of planets aren’t “fit enough” to be stable—and only those that are survive. In biology it’s traditionally been assumed that natural selection and adaptation is somehow what’s led to the complexity we see. But actually I suspect much of it is instead just a reflection of what generally happens in the computational universe—both in biology and in celestial mechanics. Now in celestial mechanics, we haven’t yet seen in the wild any particularly complex forms (beyond a few complicated gap structures in rings, and tumbling moons and asteroids). But perhaps elsewhere we’ll see things like those obviously tangled solutions to the three-body problem—that come closer to what we’re used to in biology.

It’s remarkable how similar the issues are across so many different fields. For example, the whole idea of using “perturbation theory” and series expansions that has existed since the 1700s in celestial mechanics is now also core to quantum field theory. But just like in celestial mechanics there’s trouble with convergence (maybe one should try renormalization or resummation in celestial mechanics). And in the end one begins to realize that there are phenomena—no doubt like turbulence or the three-body problem—that inevitably involve more sophisticated computations, and that need to be studied not with traditional mathematics of the kind that was so successful for Newton and his followers but with the kind of science that comes from exploring the computational universe.

## Approaching Modern Times

But let’s get back to the story of the motion of the Moon. Between Brown’s tables, and Poincaré’s theoretical work, by the beginning of the 1900s the general impression was that whatever could reasonably be computed about the motion of the Moon had been computed.

Occasionally there were tests. Like for example in 1925, when there was a total solar eclipse visible in New York City, and the New York Times perhaps overdramatically said that “scientists [were] tense… wondering whether they or moon is wrong as eclipse lags five seconds behind”. The fact is that a prediction accurate to 5 seconds was remarkably good, and we can’t do all that much better even today. (By the way, the actual article talks extensively about “Professor Brown”—as well as about how the eclipse might “disprove Einstein” and corroborate the existence of “coronium”—but doesn’t elaborate on the supposed prediction error.)

As a practical matter, Brown’s tables were not exactly easy to use: to find the position of the Moon from them required lots of mechanical desk calculator work, as well as careful transcription of numbers. And this led Leslie Comrie in 1932 to propose using a punch-card-based IBM Hollerith automatic tabulator—and with the help of Thomas Watson, CEO of IBM, what was probably the first “scientific computing laboratory” was established—to automate computations from Brown’s tables.

(When I was in elementary school in England in the late 1960s—before electronic calculators—I always carried around, along with my slide rule, a little book of “4-figure mathematical tables”. I think I found it odd that such a book would have an author—and perhaps for that reason I still remember the name: “L. J. Comrie”.)

By the 1950s, the calculations in Brown’s tables were slowly being rearranged and improved to make them more suitable for computers. But then with John F. Kennedy’s 1962 “We choose to go to the Moon”, there was suddenly urgent interest in getting the most accurate computations of the Moon’s position. As it turned out, though, it was basically just a tweaked version of Brown’s tables, running on a mainframe computer, that did the computations for the Apollo program.

At first, computers were used in celestial mechanics purely for numerical computation. But by the mid-1960s there were also experiments in using them for algebraic computation, and particularly to automate the generation of series expansions. Wallace Eckert at IBM started using FORMAC to redo Brown’s tables, while in Cambridge David Barton and Steve Bourne (later the creator of the “Bourne shell” (sh) in Unix) built their own CAMAL computer algebra system to try extending the kind of thing Delaunay had done. (And by 1970, Delaunay’s 7th-order calculations had been extended to 20th order.)

When I myself started to work on computer algebra in 1976 (primarily for computations in particle physics), I’d certainly heard about CAMAL, but I didn’t know what it had been used for (beyond vaguely “celestial mechanics”). And as a practicing theoretical physicist in the late 1970s, I have to say that the “problem of the Moon” that had been so prominent in the 1700s and 1800s had by then fallen into complete obscurity.

I remember for example in 1984 asking a certain Martin Gutzwiller, who was talking about quantum chaos, what his main interest actually was. And when he said “the problem of the Moon”, I was floored; I didn’t know there still was any problem with the Moon. As it turns out, in writing this post I found out that Gutzwiller was actually the person who took over from Eckert and spent nearly two decades working on trying to improve the computations of the position of the Moon.

## Why Not Just Solve It?

Traditional approaches to the three-body problem come very much from a mathematical way of thinking. But modern computational thinking immediately suggests a different approach. Given the differential equations for the three-body problem, why not just directly solve them? And indeed in the Wolfram Language there’s a built-in function NDSolve for numerically solving systems of differential equations.

So what happens if one just feeds in equations for a three-body problem? Well, here are the equations:

 ✕ 
```
eqns = {Subscript[m, 1] Subscript[r, 1]''[t] == 
    -((Subscript[m, 1] Subscript[m, 2] (Subscript[r, 1][t] - Subscript[r, 2][t]))/
       Norm[Subscript[r, 1][t] - Subscript[r, 2][t]]^3) - 
     (Subscript[m, 1] Subscript[m, 3] (Subscript[r, 1][t] - Subscript[r, 3][t]))/
      Norm[Subscript[r, 1][t] - Subscript[r, 3][t]]^3, 
   Subscript[m, 2] Subscript[r, 2]''[t] == 
    -((Subscript[m, 1] Subscript[m, 2] (Subscript[r, 2][t] - Subscript[r, 1][t]))/
       Norm[Subscript[r, 2][t] - Subscript[r, 1][t]]^3) - 
     (Subscript[m, 2] Subscript[m, 3] (Subscript[r, 2][t] - Subscript[r, 3][t]))/
      Norm[Subscript[r, 2][t] - Subscript[r, 3][t]]^3, 
   Subscript[m, 3] Subscript[r, 3]''[t] == 
    -((Subscript[m, 1] Subscript[m, 3] (Subscript[r, 3][t] - Subscript[r, 1][t]))/
       Norm[Subscript[r, 3][t] - Subscript[r, 1][t]]^3) - 
     (Subscript[m, 2] Subscript[m, 3] (Subscript[r, 3][t] - Subscript[r, 2][t]))/
      Norm[Subscript[r, 3][t] - Subscript[r, 2][t]]^3};
```

Now as an example let’s set the masses to random values:

 ✕ `{Subscript[m, 1], Subscript[m, 2], Subscript[m, 3]} = RandomReal[{0, 1}, 3]`

And let’s define the initial position and velocity for each body to be random as well:

 ✕ `inits = Table[{Subscript[r, i][0] == RandomReal[{-1, 1}, 3], Derivative[1][Subscript[r, i]][0] == RandomReal[{-1, 1}, 3]}, {i, 3}]`

Now we can just use NDSolve to get the solutions (it gives them as implicit approximate numerical functions of t):

 ✕ `sols = NDSolve[{eqns, inits}, {Subscript[r, 1], Subscript[r, 2], Subscript[r, 3]}, {t, 0, 100}]`

And now we can plot them. Just like that, we’ve got a solution to a three-body problem!

 ✕ `ParametricPlot3D[Evaluate[{Subscript[r, 1][t], Subscript[r, 2][t], Subscript[r, 3][t]} /. First[sols]], {t, 0, 100}]`

Well, obviously this is using the Wolfram Language and a huge tower of modern technology. But would it have been possible even right from the beginning for people to generate direct numerical solutions to the three-body problem, rather than doing all that algebra? Back in the 1700s, Euler already knew what’s now called Euler’s method for finding approximate numerical solutions to differential equations. So what if he’d just used that method to calculate the motion of the Moon?

The method relies on taking a sequence of discrete steps in time. And if he’d used, say, a step size of a minute, then he’d have had to take 40,000 steps to get results for a month, but he should have been able to successfully reproduce the position of the Moon to about a percent. If he’d tried to extend to 3 months, however, then he would already have had at least a 10% error.
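To get a feel for how the error accumulates in this kind of stepping, here’s a minimal sketch (in ordinary Python, with a toy circular orbit and made-up step sizes rather than anything historical) of forward-Euler integration of a two-body orbit, tracking how far the computed radius drifts from the true one:

```python
import math

# A forward-Euler integration of a toy two-body orbit (units with G*M = 1).
# This is a hypothetical stand-in for the kind of stepping Euler could have
# done by hand, not a reconstruction of any historical calculation.
def euler_orbit(steps, dt):
    # Start on an exact unit circular orbit: r = (1, 0), v = (0, 1).
    rx, ry, vx, vy = 1.0, 0.0, 0.0, 1.0
    for _ in range(steps):
        r3 = (rx * rx + ry * ry) ** 1.5
        ax, ay = -rx / r3, -ry / r3          # inverse-square acceleration
        rx, ry = rx + vx * dt, ry + vy * dt  # step position with old velocity
        vx, vy = vx + ax * dt, vy + ay * dt  # then step velocity
    return rx, ry

def radius_error(steps, dt):
    # The true orbit stays on the unit circle, so any drift in the
    # radius is pure integration error.
    rx, ry = euler_orbit(steps, dt)
    return abs(math.hypot(rx, ry) - 1.0)

period_steps = int(2 * math.pi / 0.001)       # about one "orbit" at dt = 0.001
print(radius_error(period_steps, 0.001))      # small error after one orbit
print(radius_error(3 * period_steps, 0.001))  # noticeably larger after three
```

The drift compounds with integration time, which is essentially why a simple fixed scheme that does fine for a month falls apart over several months.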

Any numerical scheme for solving differential equations in practice eventually builds up some kind of error—but the more one knows about the equations one’s solving, and their expected solutions, the more one’s able to preprocess and adapt things to minimize the error. NDSolve has enough automatic adaptivity built into it that it’ll do pretty well for a surprisingly long time on a typical three-body problem. (It helps that the Wolfram Language and NDSolve can handle numbers with arbitrary precision, not just machine precision.)

But if one looks, say, at the total energy of the three-body system—which one can prove from the equations should stay constant—then one will typically see an error slowly build up in it. One can avoid this if one effectively does a change of variables in the equations to “factor out” energy. And one can imagine doing a whole hierarchy of algebraic transformations that in a sense give the numerical scheme as much help as possible.
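As a toy illustration of how restructuring the scheme (rather than just shrinking the step size) tames the energy error, here’s a sketch in Python comparing plain forward Euler with semi-implicit (“symplectic”) Euler on the same orbit. This is only an analogy to, not a reconstruction of, the transformations used in real ephemeris work:

```python
import math

# Energy drift in two integrators on the same toy circular orbit (G*M = 1):
# plain forward Euler versus semi-implicit ("symplectic") Euler.

def energy(rx, ry, vx, vy):
    # Total energy: kinetic plus gravitational potential.
    return 0.5 * (vx * vx + vy * vy) - 1.0 / math.hypot(rx, ry)

def energy_drift(steps, dt, symplectic):
    rx, ry, vx, vy = 1.0, 0.0, 0.0, 1.0
    e0 = energy(rx, ry, vx, vy)
    for _ in range(steps):
        r3 = (rx * rx + ry * ry) ** 1.5
        ax, ay = -rx / r3, -ry / r3
        if symplectic:
            # Velocity first, then position using the NEW velocity.
            vx, vy = vx + ax * dt, vy + ay * dt
            rx, ry = rx + vx * dt, ry + vy * dt
        else:
            # Plain Euler: both updates use the old values.
            rx, ry = rx + vx * dt, ry + vy * dt
            vx, vy = vx + ax * dt, vy + ay * dt
    return abs(energy(rx, ry, vx, vy) - e0)

steps = 60000  # roughly ten orbits at dt = 0.001
print(energy_drift(steps, 0.001, False))  # steady, growing drift
print(energy_drift(steps, 0.001, True))   # bounded, much smaller error
```

The only difference between the two branches is the order of the updates, yet one scheme slowly gains energy forever while the other keeps the error bounded.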

And indeed since at least the 1980s that’s exactly what’s been done in practical work on the three-body problem, and the Earth-Moon-Sun system. So in effect it’s a mixture of the traditional algebraic approach from the 1700s and 1800s, together with modern numerical computation.

## The Real Earth-Moon-Sun Problem

OK, so what’s involved in solving the real problem of the Earth-Moon-Sun system? The standard three-body problem gives a remarkably good approximation to the physics of what’s happening. But it’s obviously not the whole story.

For a start, the Earth isn’t the only planet in the solar system. And if one’s trying to get sufficiently accurate answers, one’s going to have to take into account the gravitational effect of other planets. The most important is Jupiter, and its typical effect on the orbit of the Moon is at about the 10^-5 level—sufficiently large that for example Brown had to take it into account in his tables.

The next effect is that the Earth isn’t just a point mass, or even a precise sphere. Its rotation makes it bulge at the equator, and that affects the orbit of the Moon at the 10^-6 level.

Orbits around the Earth ultimately depend on the full mass distribution and gravitational field of the Earth (which is what Sputnik-1 was nominally launched to map)—and both this, and the reverse effect from the Moon, come in at the 10^-8 level. At the 10^-9 level there are then effects from tidal deformations (“solid tides”) on the Earth and Moon, as well as from gravitational redshift and other general relativistic phenomena.

To predict the position of the Moon as accurately as possible one ultimately has to have at least some model for these various effects.

But there’s a much more immediate issue to deal with: one has to know the initial conditions for the Earth, Sun and Moon, or in other words, one has to know as accurately as possible what their positions and velocities were at some particular time.

And conveniently enough, there’s now a really good way to do that, because Apollo 11, 14 and 15 all left laser retroreflectors on the Moon. And by precisely timing how long it takes a laser pulse from the Earth to round-trip to these retroreflectors, it’s now possible in effect to measure the position of the Moon to millimeter accuracy.

OK, so how do modern analogs of the Babylonian ephemerides actually work? Internally they’re dealing with the equations for all the significant bodies in the solar system. They do symbolic preprocessing to make their numerical work as easy as possible. And then they directly solve the differential equations for the system, appropriately inserting models for things like the mass distribution in the Earth.

They start from particular measured initial conditions, but then they repeatedly insert new measurements, trying to correct the parameters of the model so as to optimally reproduce all the measurements they have. It’s very much like a typical machine learning task—with the training data here being observations of the solar system (and typically fitting just being least squares).
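Here’s a tiny sketch of that fitting step, in Python, with a made-up one-term “lunar theory” y = a sin t + b cos t standing in for the real model: generate noisy “observations”, then recover the parameters by least squares via the normal equations:

```python
import math
import random

# Least-squares recovery of model parameters from noisy "observations":
# a toy version of how an ephemeris is fitted to measurements. The model
# y = a*sin(t) + b*cos(t) is hypothetical, standing in for a single
# periodic term of a lunar theory.
random.seed(0)
a_true, b_true = 1.3, -0.7
ts = [0.05 * k for k in range(400)]
obs = [a_true * math.sin(t) + b_true * math.cos(t) + random.gauss(0, 0.01)
       for t in ts]

# Normal equations for the two linear parameters.
sss = sum(math.sin(t) ** 2 for t in ts)
scc = sum(math.cos(t) ** 2 for t in ts)
ssc = sum(math.sin(t) * math.cos(t) for t in ts)
sy_s = sum(y * math.sin(t) for t, y in zip(ts, obs))
sy_c = sum(y * math.cos(t) for t, y in zip(ts, obs))
det = sss * scc - ssc * ssc
a_fit = (sy_s * scc - sy_c * ssc) / det
b_fit = (sy_c * sss - sy_s * ssc) / det
print(a_fit, b_fit)  # close to the true 1.3 and -0.7
```

The real problem has vastly more parameters and nonlinear dynamics in between, but the shape of the task—minimize the squared mismatch between model and observations—is the same.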

But, OK, so there’s a model one can run to figure out something like the position of the Moon. But one doesn’t want to have to explicitly do that every time one needs to get a result; instead one wants in effect just to store a big table of pre-computed results, and then to do something like interpolation to get any particular result one needs. And indeed that’s how it’s done today.

## How It’s Really Done

Back in the 1960s NASA started directly solving differential equations for the motion of planets. The Moon was more difficult to deal with, but by the 1980s that too was being handled in a similar way. Ongoing data from things like the lunar retroreflectors was added, and all available historical data was inserted as well.

The result of all this was the JPL Development Ephemeris (JPL DE). In addition to new observations being used, the underlying system gets updated every few years, typically to get what’s needed for some spacecraft going to some new place in the solar system. (The latest is DE432, built for going to Pluto.)

But how is the actual ephemeris delivered? Well, for every thousand years covered, the ephemeris has about 100 megabytes of results, given as coefficients for Chebyshev polynomials, which are convenient for interpolation. And for any given quantity in any given coordinate system over a particular period of time, one accesses the appropriate parts of these results.
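Here’s a sketch, in Python, of the basic storage-and-interpolation scheme: sample a smooth “position” function (a made-up stand-in for one coordinate of the Moon) at Chebyshev nodes, keep only the resulting coefficients, and reconstruct values with the Clenshaw recurrence:

```python
import math

# Storing a smooth "position" function as Chebyshev coefficients over a
# time window, then reconstructing values by interpolation -- the same
# general scheme the JPL ephemerides use. The function itself is a
# made-up stand-in for one coordinate of the Moon, on t in [-1, 1].

def position(t):
    return math.sin(2.0 * t) + 0.1 * t * t

N = 16  # number of coefficients kept for this window

# Coefficients from samples at the Chebyshev nodes (a discrete cosine
# transform); with c0 halved, f(t) ~ c0 + sum_j c_j T_j(t).
nodes = [math.cos(math.pi * (k + 0.5) / N) for k in range(N)]
vals = [position(x) for x in nodes]
coeffs = [(2.0 / N) * sum(v * math.cos(j * math.pi * (k + 0.5) / N)
                          for k, v in enumerate(vals)) for j in range(N)]
coeffs[0] /= 2.0

def cheb_eval(t):
    # Clenshaw recurrence: evaluate the stored polynomial at time t.
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = 2.0 * t * b1 - b2 + c, b1
    return t * b1 - b2 + coeffs[0]

err = max(abs(cheb_eval(t) - position(t))
          for t in [x / 50 for x in range(-50, 51)])
print(err)  # tiny: 16 stored numbers reproduce the whole window
```

Because Chebyshev coefficients of smooth functions decay very fast, a short list of numbers per window reproduces the function essentially to machine precision, which is exactly what makes this a good delivery format.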

OK, but so how does one find an eclipse? Well, it’s an iterative process. Start with an approximation, perhaps from the saros cycle. Then interpolate the ephemeris and look at the result. Then keep iterating until one finds out just when the Moon will be in the appropriate position.
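Schematically, the refinement loop looks like this (Python, with a toy unimodal “angular separation” function whose minimum is planted at t = 4.321; a real implementation would evaluate the interpolated ephemeris instead):

```python
# Iterative refinement of an alignment time: start from a rough bracket
# (like a saros-based guess) and home in on the minimum of the Sun-Moon
# angular separation. The separation function here is a toy model.

def separation(t):
    # Hypothetical angular separation (degrees) near an eclipse,
    # with its minimum planted at t = 4.321 "days".
    return abs(0.5 * (t - 4.321)) + 0.1

def refine(lo, hi, tol=1e-9):
    # Ternary search: valid because the separation is unimodal here.
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if separation(m1) < separation(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

print(refine(0.0, 10.0))  # converges to about 4.321
```

Each iteration narrows the bracket, so a handful of ephemeris evaluations pins down the alignment time to far better than a second.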

But actually there’s some more to do. Because what’s originally computed are the positions of the barycenters (centers of mass) of the various bodies. But now one has to figure out how the bodies are oriented.

The Earth rotates, and we know its rate quite precisely. But the Moon is basically locked with the same face pointing to the Earth, except that in practice there are small “librations” where the Moon wobbles a little back and forth—and these turn out to be particularly troublesome to predict.

## Computing the Eclipse

OK, so let’s say one knows where the Earth, Moon and Sun are. How does one then figure out where on the Earth the eclipse will actually hit? Well, there’s some further geometry to do. Basically, the Moon generates a cone of shadow in a direction defined by the location of the Sun, and what’s then needed is to figure out how the surface of the Earth intersects that cone.
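The basic similar-triangles geometry already says a lot. Using rounded standard values for the radii and the Sun-Moon distance (all assumptions here, not ephemeris output), one can compute how far the umbral cone extends behind the Moon, and hence whether it can reach the Earth at all:

```python
# How far does the Moon's umbral shadow cone extend? Similar triangles
# give the apex distance behind the Moon. The radii and the Sun-Moon
# distance are rounded standard values in km; only the geometry matters.
R_SUN = 696_000.0        # solar radius
R_MOON = 1_737.0         # lunar radius
SUN_MOON_DIST = 1.496e8  # roughly one astronomical unit

umbra_length = SUN_MOON_DIST * R_MOON / (R_SUN - R_MOON)
print(umbra_length)  # about 374,000 km

# Compare with the Moon-Earth distance at perigee (~356,500 km) and
# apogee (~406,700 km): near perigee the umbra reaches the Earth and the
# eclipse can be total; near apogee it falls short and is annular.
print(umbra_length > 356_500)  # True
print(umbra_length > 406_700)  # False
```

This is why some solar eclipses are total and others annular: the umbra is only just long enough to reach the Earth when the Moon is near perigee.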

In 1824 Friedrich Bessel suggested in effect inverting the problem by using the shadow cone to define a coordinate system in which to specify the positions of the Sun and Moon. The resulting so-called Besselian elements provide a convenient summary of the local geometry of an eclipse—with respect to which its path can be defined.

OK, but so how does one figure out at what time an eclipse will actually reach a given point on Earth? Well, first one has to be clear on one’s definition of time. And there’s an immediate issue with the speed of light and special relativity. What does it mean to say that the positions of the Earth and Sun are such-and-such at such-and-such a time? Because it takes light about 8 minutes to get to the Earth from the Sun, we only get to see where the Sun was 8 minutes ago, not where it is now.

And what we need is really a classic special relativity setup. We essentially imagine that the solar system is filled with a grid of clocks that have been synchronized by light pulses. And what a modern ephemeris does is to quote the results for positions of bodies in the solar system relative to the times on those clocks. (General relativity implies that in different gravitational fields the clocks will run at different rates, but for our purposes this is a tiny effect. But what isn’t a tiny effect is including retardation in the equations for the n-body problem—making them become delay differential equations.)

But now there’s another issue. If one’s observing the eclipse, one’s going to be using some timepiece (phone?) to figure out what time it is. And if it’s working properly that timepiece should show official “civil time” that’s based on UTC—which is what NTP internet time is synchronized to. But the issue is that UTC has a complicated relationship to the time used in the astronomical ephemeris.

The starting point is what’s called UT1: a definition of time in which one day is the average time it takes the Earth to rotate once relative to the Sun. But the point is that this average time isn’t constant, because the rotation of the Earth is gradually slowing down, primarily as a result of interactions with the Moon. But meanwhile, UTC is defined by an atomic clock whose timekeeping is independent of any issues about the rotation of the Earth.

There’s a convention for keeping UT1 aligned with UTC: if UT1 is going to get more than 0.9 seconds away from UTC, then a leap second is added to UTC. One might think this would be a tiny effect, but actually, since 1972, a total of 27 leap seconds have been added. Exactly when a new leap second will be needed is unpredictable; it depends on things like what earthquakes have occurred. But we need to account for leap seconds if we’re going to get the time of the eclipse correct to the second relative to UTC or internet time.
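In practice this is handled with a lookup table of when each leap second took effect. Here’s a sketch in Python, with only a few illustrative entries (the real table has an entry for every leap second since 1972):

```python
import datetime

# Cumulative offset between atomic time and UTC (TAI - UTC, in seconds)
# at a few effective dates. This is an abbreviated, illustrative excerpt;
# a complete table has an entry for every leap second since 1972.
LEAP_TABLE = [
    (datetime.date(1972, 1, 1), 10),
    (datetime.date(1999, 1, 1), 32),
    (datetime.date(2009, 1, 1), 34),
    (datetime.date(2017, 1, 1), 37),
]

def tai_minus_utc(d):
    # Seconds by which atomic time leads UTC on date d, taken from the
    # most recent table entry at or before d.
    offset = 0
    for start, value in LEAP_TABLE:
        if d >= start:
            offset = value
    return offset

print(tai_minus_utc(datetime.date(2017, 8, 21)))  # 37 on eclipse day
```

Because new leap seconds can’t be predicted in advance, any such table has to be updated as they’re announced.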

There are a few other effects that are also important in the precise observed timing of the eclipse. The most obvious is geo elevation. In doing astronomical computations, the Earth is assumed to be an ellipsoid. (There are many different definitions, corresponding to different geodetic “datums”—and that’s an issue in defining things like “sea level”, but it’s not relevant here.) But if you’re at a different height above the ellipsoid, the cone of shadow from the eclipse will reach you at a different time. And the size of this effect can be as much as 0.3 seconds for every 1000 feet of height.
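The correction itself is then simple arithmetic, as in this sketch (the 6000-foot elevation, and treating the 0.3 s figure as a flat rate, are both illustrative; the actual effect depends on the local geometry of the shadow cone):

```python
# Size of the elevation timing correction, treating the ~0.3 s per
# 1000 ft figure quoted above as a flat rate. The elevation value is
# illustrative; the true effect depends on the local shadow geometry.
SECONDS_PER_1000_FT = 0.3

def elevation_shift(elevation_ft):
    return SECONDS_PER_1000_FT * elevation_ft / 1000.0

print(elevation_shift(6000))  # up to about 1.8 seconds at 6000 ft
```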

All of the effects we’ve talked about we’re readily able to account for. But there is one remaining effect that’s a bit more difficult. Right at the beginning or end of totality one typically sees points of light on the rim of the Moon. Known as Baily’s beads, these are the result of rays of light that make it to us between mountains on the Moon. Figuring out exactly when all these rays are extinguished requires taking geo elevation data for the Moon, and effectively doing full 3D ray tracing. The effect can last as long as a second, and can cause the precise edge of totality to move by as much as a mile. (One can also imagine effects having to do with the corona of the Sun, which is constantly changing.)

But in the end, even though the shadow of the Moon on the Earth moves at more than 1000 mph, modern science successfully makes it possible to compute when the shadow will reach a particular point on Earth to an accuracy of perhaps a second. And that’s what our precisioneclipse.com website is set up to do.

## Eclipse Experiences

I saw my first partial solar eclipse more than 50 years ago. And I’ve seen one total solar eclipse before in my life—in 1991. It was the longest eclipse (6 minutes 53 seconds) that’ll happen for more than a century.

There was a certain irony to my experience, though, especially in view of our efforts now to predict the exact arrival time of next week’s eclipse. I’d chartered a plane and flown to a small airport in Mexico (yes, that’s me on the left with the silly hat)—and my friends and I had walked to a beautiful deserted beach, and were waiting under a cloudless sky for the total eclipse to begin.

I felt proud of how prepared I was—with maps marking to the minute when the eclipse should arrive. But then I realized: there we were, out on a beach with no obvious signs of modern civilization—and nobody had brought any properly set timekeeping device (and in those days my cellphone was just a phone, and didn’t even have signal there).

And so it was that I missed seeing a demonstration of an impressive achievement of science. And instead I got to experience the eclipse pretty much the way people throughout history have experienced eclipses—even if I did know that the Moon would continue gradually eating into the Sun and eventually cover it, and that it wouldn’t make the world end.

There’s always something sobering about astronomical events, and about realizing just how tiny human scales are compared to them. Billions of eclipses have happened over the course of the Earth’s history. Recorded history has covered only a few thousand of them. On average, there’s an eclipse at any given place on Earth roughly every 400 years; in Jackson, WY, where I’m planning to see next week’s eclipse, it turns out the next total eclipse will be 727 years from now—in 2744.

In earlier times, civilizations built giant monuments to celebrate the motions of the Sun and Moon. For the eclipse next week what we’re making is a website. But that website builds on one of the great epics of human intellectual history—stretching back to the earliest times of systematic science, and encompassing contributions from a remarkable cross-section of the most celebrated scientists and mathematicians from past centuries.

It’ll be about 9538 days since the eclipse I saw in 1991. The Moon will have traveled some 500 million miles around the Earth, and the Earth some 15 billion miles around the Sun. But now—in a remarkable triumph of science—we’re computing to the second when they’ll be lined up again.