Delivering from Our R&D Pipeline
In 2020 it was Versions 12.1 and 12.2; in 2021 Versions 12.3 and 13.0. In late June this year it was Version 13.1. And now we're releasing Version 13.2. We continue to have a huge pipeline of R&D, some short term, some medium term, some long term (like decade-plus). Our goal is to deliver timely snapshots of where we're at, so people can start using what we've built as quickly as possible.
Version 13.2 is, by our standards, a fairly small release that mostly concentrates on rounding out areas that have been under development for a long time, as well as adding "polish" to a range of existing capabilities. But it's also got some "surprise" new dramatic efficiency improvements, and it's got some first hints of major new areas that we have under development, particularly related to astronomy and celestial mechanics.
But even though I'm calling it a "small release", Version 13.2 still introduces completely new functions into the Wolfram Language, 41 of them, as well as substantially enhancing 64 existing functions. And, as usual, we've put a lot of effort into coherently designing those functions, so they fit into the tightly integrated framework we've been building for the past 35 years. For the past several years we've been following the principle of open code development (does anyone else do this yet?), opening up our core software design meetings as livestreams. During the Version 13.2 cycle we've done about 61 hours of design livestreams, getting all sorts of great real-time feedback from the community (thanks, everyone!). And, yes, we're holding steady at an overall average of one hour of livestreamed design time per new function, and a little less than half that per enhanced function.
Introducing Astro Computation
Astronomy has been a driving force for computation for more than 2000 years (from the Antikythera device on)… and in Version 13.2 it's coming to Wolfram Language in a big way. Yes, the Wolfram Language (and Wolfram|Alpha) have had astronomical data for well over a decade. But what's new now is astronomical computation fully integrated into the system. In many ways, our astro computation capabilities are modeled on our geo computation ones. But astro is substantially more complicated. Mountains don't move (at least perceptibly), but planets certainly do. Relativity also isn't important in geography, but it is in astronomy. And on the Earth, latitude and longitude are good standard ways to describe where things are. But in astronomy, especially with everything moving, describing where things are is much more complicated. Oh, and there's the question of where things "are", versus where things appear to be, because of effects ranging from light-propagation delays to refraction in the Earth's atmosphere.
The key function for representing where astronomical things are is AstroPosition. Here's where Mars is now:
✕

What does that output mean? It's very "here and now" oriented. By default, it's telling me the azimuth (angle from north) and altitude (angle above the horizon) for Mars from where Here says I am, at the time specified by Now. How can I get a less "personal" representation of "where Mars is"? Because even if I just reevaluate my previous input now, I'll get a slightly different answer, just because of the rotation of the Earth:
✕

One thing to do is to use equatorial coordinates, that are based on a frame centered at the center of the Earth but not rotating with the Earth. (One direction is defined by the rotation axis of the Earth, the other by where the Sun is at the time of the spring equinox.) The result is the "astronomer-friendly" right ascension/declination position of Mars:
✕

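As an aside on what these coordinate systems mean: converting equatorial (right ascension/declination) coordinates to horizon (altitude/azimuth) coordinates is classical spherical trigonometry. Here is a minimal Python sketch of the idea, ignoring refraction, precession and the other effects AstroPosition actually handles (the function name and inputs are illustrative):

```python
import math

def radec_to_altaz(ra_deg, dec_deg, lat_deg, lst_deg):
    """Convert right ascension/declination to altitude/azimuth.
    lst_deg is the local sidereal time expressed in degrees."""
    ha = math.radians(lst_deg - ra_deg)          # hour angle
    dec = math.radians(dec_deg)
    lat = math.radians(lat_deg)
    sin_alt = (math.sin(dec) * math.sin(lat)
               + math.cos(dec) * math.cos(lat) * math.cos(ha))
    alt = math.asin(sin_alt)
    # azimuth measured from north, increasing toward the east
    cos_az = (math.sin(dec) - math.sin(alt) * math.sin(lat)) / (
        math.cos(alt) * math.cos(lat))
    az = math.acos(max(-1.0, min(1.0, cos_az)))
    if math.sin(ha) > 0:
        az = 2 * math.pi - az
    return math.degrees(alt), math.degrees(az)
```

For example, an object on the celestial equator crossing the meridian, seen from latitude 45°, stands due south at altitude 45°.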
And maybe that's good enough for a terrestrial astronomer. But what if you want to specify the position of Mars in a way that doesn't refer to the Earth? Then you can use the now-standard ICRS frame, which is centered at the center of mass of the Solar System:
✕

Often in astronomy the question is basically "which direction should I point my telescope in?", and that's something one wants to specify in spherical coordinates. But particularly if one's "out and about in the Solar System" (say thinking about a spacecraft), it's more useful to be able to give actual Cartesian coordinates for where one is:
✕

And here are the raw coordinates (by default in astronomical units):
✕

AstroPosition is backed by lots of computation, and in particular by ephemeris data that covers all planets and their moons, together with other substantial bodies in the Solar System:
✕

By the way, particularly the first time you ask for the position of an obscure object, there may be some delay while the necessary ephemeris gets downloaded. The main ephemerides we use give data for the period 2000–2050. But we also have access to other ephemerides that cover much longer periods. So, for example, we can tell where Ganymede was when Galileo first observed it:
✕

We also have position data for more than 100,000 stars, galaxies, pulsars and other objects, with many more coming soon:
✕

Things get complicated very quickly. Here's the position of Venus seen from Mars, using a frame centered at the center of Mars:
✕

If we pick a particular point on Mars, then we can get the result in azimuth-altitude coordinates relative to the Martian horizon:
✕

Another complication is that if you're looking at something from the surface of the Earth, you're looking through the atmosphere, and the atmosphere refracts light, making the position of the object look different. By default, AstroPosition takes account of this when you use coordinates based on the horizon. But you can switch it off, and then the results will be different, and, for example, for the Sun at sunset, substantially different:
✕

✕

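The size of the refraction effect can be estimated with Bennett's empirical formula, which gives the refraction in arcminutes as cot(h + 7.31/(h + 4.4)) for apparent altitude h in degrees. A rough Python sketch of that estimate (not the model AstroPosition uses internally):

```python
import math

def refraction_arcmin(apparent_alt_deg):
    """Approximate atmospheric refraction in arcminutes for an apparent
    altitude in degrees (Bennett's empirical formula, assuming standard
    temperature and pressure)."""
    h = apparent_alt_deg
    return 1.0 / math.tan(math.radians(h + 7.31 / (h + 4.4)))
```

Right at the horizon this gives roughly half a degree, about the apparent diameter of the Sun itself, which is why the switched-off results differ so much at sunset.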
And then there's the speed of light, and relativity, to think about. Let's say we want to know where Neptune "is" now. Well, do we mean where Neptune "actually is", or do we mean "where we observe Neptune to be" based on light from Neptune coming to us? For frames referring to observations from Earth, we're normally concerned with the case where we include the "light time" effect, and, yes, it does make a difference:
✕

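Conceptually, the "light time" correction is a fixed-point problem: the light arriving now left the object at t − d/c, where d is the distance to where the object was then. A schematic Python sketch with a toy ephemeris function (all names and numbers here are illustrative):

```python
C = 299792.458  # speed of light, km/s

def light_time_corrected(position_at, observer, t, tol=1e-9):
    """Iterate to the retarded position: the place the object occupied
    when the light now arriving at the observer left it.
    `position_at(t)` returns the object's (x, y, z) in km at time t."""
    delay = 0.0
    while True:
        x, y, z = position_at(t - delay)
        d = ((x - observer[0]) ** 2 + (y - observer[1]) ** 2
             + (z - observer[2]) ** 2) ** 0.5
        new_delay = d / C
        if abs(new_delay - delay) < tol:
            return (x, y, z), new_delay
        delay = new_delay
```

The iteration converges very fast because the error shrinks by a factor of roughly v/c at each step. For a toy "Neptune" about 4.3 billion km away, the light time comes out around four hours.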
OK, so AstroPosition, which is the analog of GeoPosition, gives us a way to represent where things are, astronomically. The next important function to discuss is AstroDistance, the analog of GeoDistance.
This gives the current distance between Venus and Mars:
✕

This is the current distance from where we are (according to Here) to the position of the Viking 2 lander on Mars:
✕

This is the distance from Here to the star τ Ceti:
✕

To be more precise, AstroDistance really tells us the distance from a certain object to an observer, at a certain local time for the observer (and, yes, the fact that it's local time matters because of light delays):
✕

And, yes, things are quite precise. Here's the distance to the Apollo 11 landing site on the Moon, computed 5 times with a 1-second pause in between, and shown to 10-digit precision:
✕

This plots the distance to Mars for every day in the next 10 years:
✕

Another function is AstroAngularSeparation, which gives the angular separation between two objects as seen from a given position. Here's the result for Jupiter and Saturn (seen from the Earth) over a 20-year span:
✕

The Beginnings of Astro Graphics
In addition to being able to compute astronomical things, Version 13.2 includes first steps in visualizing astronomical things. There'll be more on this in subsequent versions. But Version 13.2 already has some powerful capabilities.
As a first example, here's a part of the sky around Betelgeuse as seen right now from where I am:
✕

Zooming out, one can see more of the sky:
✕

There are lots of options for how things should be rendered. Here we're seeing a realistic image of the sky, with grid lines superimposed, aligned with the equator of the Earth:
✕

And here we're seeing a more whimsical interpretation:
✕

Just like for maps of the Earth, projections matter. Here's a Lambert azimuthal projection of the whole sky:
✕

The blue line shows the orientation of the Earthâ€™s equator, the yellow line shows the plane of the ecliptic (which is basically the plane of the Solar System), and the red line shows the plane of our galaxy (which is where we see the Milky Way).
If we want to know what we actually "see in the sky" we need a stereographic projection (in this case centered on the south direction):
✕

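Both projections have simple closed forms. For instance, a stereographic projection centered on the zenith maps a point at a given altitude and azimuth as sketched below in Python (an illustration of the projection math, not the internals of the new astro graphics):

```python
import math

def stereographic_from_zenith(alt_deg, az_deg):
    """Project a sky position onto a plane tangent at the zenith, using
    the stereographic projection (which preserves angles, so
    constellation shapes stay locally undistorted)."""
    theta = math.radians(90.0 - alt_deg)   # angular distance from zenith
    r = 2.0 * math.tan(theta / 2.0)        # planar radius
    az = math.radians(az_deg)
    return r * math.sin(az), r * math.cos(az)   # x toward east, y toward north
```

The zenith lands at the origin, and a horizon point due north lands at (0, 2), since 2·tan(45°) = 2.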
There's a lot of detail in the astronomical data and computations we have (and even more will be coming soon). So, for example, if we zoom in on Jupiter we can see the positions of its moons (though their disks are too small to be rendered here):
✕

It's fun to see how this corresponds to Galileo's original observation of these moons more than 400 years ago. This is from Galileo:
The old typesetting does cause a little trouble:
✕

But the astronomical computation is more timeless. Here are the computed positions of the moons of Jupiter from when Galileo said he saw them, in Padua:
✕

And, yes, the results agree!
By the way, here's another computation that could be verified soon. This is the time of maximum eclipse for an upcoming solar eclipse:
✕

And here's what it will look like from a particular location right at that time:
✕

Dates, Times and Units: There’s Always More to Do
Dates are complicated. Even without any of the issues of relativity that we have to deal with for astronomy, it's surprisingly difficult to consistently "name" times. What time zone are you talking about? What calendar system will you use? And so on. Oh, and then what granularity of time are you talking about? A day? A week? A month (whatever that means)? A second? An instantaneous moment (or perhaps a single elementary time from our Physics Project)?
These issues arise in what one might imagine would be trivial functions: the new RandomDate and RandomTime in Version 13.2. If you don't say otherwise, RandomDate will give an instantaneous moment of time, in your current time zone, with your default calendar system, etc., randomly picked within the current year:
✕

But let's say you want a random date in June 1988. You can do that by giving the date object that represents that month:
✕

OK, but let's say you don't want an instant of time then, but instead you want a whole day. The new option DateGranularity allows this:
✕

You can ask for a random time in the next 6 hours:
✕

Or 10 random times:
✕

You can also ask for a random date within some interval, or collection of intervals, of dates:
✕

And, needless to say, we correctly sample uniformly over any collection of intervals:
✕

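Uniform sampling over a collection of intervals just means weighting each interval by its length before sampling within it. A Python sketch of that idea (illustrative, not the RandomDate implementation):

```python
import bisect
import random

def random_in_intervals(intervals, rng=random):
    """Sample a point uniformly over a union of disjoint intervals.
    Each interval is weighted by its length, which is exactly what
    makes the sampling uniform over the whole collection."""
    cum, total = [], 0.0
    for a, b in intervals:
        total += b - a
        cum.append(total)
    u = rng.random() * total                  # uniform over total length
    i = bisect.bisect_right(cum, u)           # which interval u falls in
    a, _ = intervals[i]
    offset = u - (cum[i - 1] if i > 0 else 0.0)
    return a + offset
```

With intervals of lengths 1 and 3, about a quarter of the samples should land in the first interval.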
Another area of almost arbitrary complexity is units. And over the course of many years we've systematically solved problem after problem in supporting basically every kind of unit that's in use (now more than 5000 base types). But one holdout has involved temperature. In physics textbooks, it's traditional to carefully distinguish absolute temperatures, measured in kelvins, from temperature scales, like degrees Celsius or Fahrenheit. And that's important, because while absolute temperatures can be added, subtracted, multiplied etc. just like other units, temperature scales on their own cannot. (Multiplying by 0 °C to get 0 for something like an amount of heat would be very wrong.) On the other hand, differences in temperature, even measured in Celsius, can be multiplied. How can all this be untangled?
In previous versions we had a whole different kind of unit (or, more precisely, a different physical quantity dimension) for temperature differences (much as mass and time have different dimensions). But now we've got a better solution. We've basically introduced new units, but still "temperature-dimensioned" ones, that represent temperature differences. And we've introduced a new notation (a little Δ subscript) to indicate them:
✕

If you take a difference between two temperatures, the result will have temperature-difference units:
✕

But if you convert this to an absolute temperature, it'll just be in ordinary temperature units:
✕

And with this unscrambled, it's actually possible to do arbitrary arithmetic even on temperatures measured on any temperature scale, though the results also come back as absolute temperatures:
✕

It's worth understanding that an absolute temperature can be converted either to a temperature scale value, or a temperature scale difference:
✕

All of this means that you can now use temperatures on any scale in formulas, and they'll just work:
✕

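The underlying distinction is that temperature scales are affine while temperature differences are linear: only differences can be freely scaled and added. A minimal Python sketch of that bookkeeping (the class names are mine, not the Wolfram implementation):

```python
class TemperatureDifference:
    """A linear quantity: differences may be scaled and added."""
    def __init__(self, delta_kelvins):
        self.k = delta_kelvins
    def __mul__(self, factor):
        return TemperatureDifference(self.k * factor)
    __rmul__ = __mul__

class Temperature:
    """A point on a temperature scale (affine). Subtracting two
    Temperatures yields a TemperatureDifference; adding a difference
    back yields a Temperature again."""
    def __init__(self, kelvins):
        self.k = kelvins
    @classmethod
    def celsius(cls, c):
        return cls(c + 273.15)
    def __sub__(self, other):
        return TemperatureDifference(self.k - other.k)
    def __add__(self, diff):
        return Temperature(self.k + diff.k)
```

Note that multiplying a bare Temperature is deliberately not defined, which is exactly the "multiplying by 0 °C" mistake the unit system now rules out.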
Dramatically Faster Polynomial Operations
Almost any algebraic computation ends up somehow involving polynomials. And polynomials have been a well-optimized part of Mathematica and the Wolfram Language since the beginning. And in fact, little has needed to be updated in the fundamental operations we do with them in more than a quarter of a century. But now in Version 13.2, thanks to new algorithms and new data structures, and new ways to use modern computer hardware, we're updating some core polynomial operations, and making them dramatically faster. And, by the way, we're getting some new polynomial functionality as well.
Here is a product of two polynomials, expanded out:
✕

Factoring polynomials like this is pretty much instantaneous, and has been ever since Version 1:
✕

But now let's make this bigger:
✕

There are 999 terms in the expanded polynomial:
✕

Factoring this isn't an easy computation, and in Version 13.1 takes about 19 seconds:
✕

But now, in Version 13.2, the same computation takes 0.3 seconds, more than 60 times faster:
✕

It's pretty rare that anything gets 60x faster. But this is one of those cases, and in fact for still larger polynomials, the ratio will steadily increase further. But is this just something that's only relevant for obscure, big polynomials? Well, no. Not least because it turns out that big polynomials show up "under the hood" in all sorts of important places. For example, the innocuous-seeming object
✕

can be manipulated as an algebraic number, but with minimal polynomial:
✕

In addition to factoring, Version 13.2 also dramatically increases the efficiency of polynomial resultants, GCDs, discriminants, etc. And all of this makes possible a transformative update to polynomial linear algebra, i.e. operations on matrices whose elements are (univariate) polynomials.
Here's a matrix of polynomials:
✕

And here's a power of the matrix:
✕

And the determinant of this:
✕

In Version 13.1 this didn't look nearly as nice; the result comes out unexpanded as:
✕

Both size and speed are dramatically improved in Version 13.2. Here's a larger case, where in 13.1 the computation takes more than an hour, and the result has a staggering leaf count of 178 billion:
✕

but in Version 13.2 it's 13,000 times faster, and 60 million times smaller:
✕

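Under the hood, polynomial linear algebra combines polynomial arithmetic with the usual matrix formulas, so intermediate expression swell is the enemy. A toy Python sketch of the basic ingredients, with dense coefficient lists and a 2×2 determinant:

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_sub(p, q):
    """Subtract coefficient lists, padding to equal length."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

def det2(m):
    """Determinant of a 2x2 matrix of polynomials: a*d - b*c."""
    (a, b), (c, d) = m
    return poly_sub(poly_mul(a, d), poly_mul(b, c))
```

For the matrix [[x, 1], [1, x]] this gives x² − 1; the hard part at scale is doing such arithmetic without the coefficients and degrees blowing up, which is what the new algorithms address.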
Polynomial linear algebra is used "under the hood" in a remarkable range of areas, particularly in handling linear differential equations, difference equations, and their symbolic solutions. And in Version 13.2, not only polynomial MatrixPower and Det, but also LinearSolve, Inverse, RowReduce, MatrixRank and NullSpace have been dramatically sped up.
In addition to the dramatic speed improvements, Version 13.2 also adds a polynomial feature for which I, for one, happen to have been waiting for more than 30 years: multivariate polynomial factoring over finite fields:
✕

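A full multivariate factorizer is a serious algorithm, but its simplest ingredient is easy to show: over a finite field GF(p) one can find all linear factors of a univariate polynomial by exhaustive root search. A Python sketch (illustrative only):

```python
def poly_eval_mod(coeffs, x, p):
    """Evaluate a polynomial (coefficients lowest degree first) at x mod p,
    using Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def linear_factors_mod(coeffs, p):
    """Each root r in GF(p) corresponds to a linear factor (x - r)."""
    return [r for r in range(p) if poly_eval_mod(coeffs, r, p) == 0]
```

For example, x² + 1 splits as (x − 2)(x − 3) over GF(5), but has no roots (and hence no linear factors) over GF(3).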
Indeed, looking in our archives I find many requests stretching back to at least 1990, from quite a range of people, for this capability, even though, charmingly, a 1991 internal note states:
✕

Yup, that was right. But 31 years later, in Version 13.2, it's done!
Integrating External Neural Nets
The Wolfram Language has had integrated neural net technology since 2015. Sometimes this is automatically used inside other Wolfram Language functions, like ImageIdentify, SpeechRecognize or Classify. But you can also build your own neural nets using the symbolic specification language with functions like NetChain and NetGraph, and the Wolfram Neural Net Repository provides a continually updated source of neural nets that you can immediately use, and modify, in the Wolfram Language.
But what if there's a neural net out there that you just want to run from within the Wolfram Language, but don't need to have represented in modifiable (or trainable) symbolic Wolfram Language form, like you might run an external program executable? In Version 13.2 there's a new construct NetExternalObject that allows you to run trained neural nets "from the wild" in the same integrated framework used for actual Wolfram-Language-specified neural nets.
NetExternalObject so far supports neural nets that have been defined in the ONNX neural net exchange format, which can easily be generated from frameworks like PyTorch, TensorFlow, Keras, etc. (as well as from Wolfram Language). One can get a NetExternalObject just by importing an .onnx file. Here's an example from the web:
✕

If we "open up" the summary for this object we see what basic tensor structure of input and output it deals with:
✕

But to actually use this network we have to set up encoders and decoders suitable for the actual operation of this particular network, with the particular encoding of images that it expects:
✕

✕

Now we just have to run the encoder, the external network and the decoder, to get (in this case) a cartoonized Mount Rushmore:
✕

Often the "wrapper code" for the NetExternalObject will be a bit more complicated than in this case. But the built-in NetEncoder and NetDecoder functions typically provide a very good start, and in general the symbolic structure of the Wolfram Language (and its integrated ability to represent images, video, audio, etc.) makes the process of importing typical neural nets "from the wild" surprisingly straightforward. And once imported, such neural nets can be used directly, or as components of other functions, anywhere in the Wolfram Language.
Displaying Large Trees, and Making More
We first introduced trees as a fundamental structure in Version 12.3, and we've been enhancing them ever since. In Version 13.1 we added many options for determining how trees are displayed, but in Version 13.2 we're adding another, very important one: the ability to elide large subtrees.
Here's a size-200 random tree with every branch shown:
✕

And here's the same tree with every node being told to display a maximum of 3 children:
✕

And, actually, tree elision is convenient enough that in Version 13.2 we're doing it by default for any node that has more than 10 children, and we've introduced the global $MaxDisplayedChildren to determine what that default limit should be.
Another new tree feature in Version 13.2 is the ability to create trees from your file system. Here's a tree that goes down 3 directory levels from my Wolfram Desktop installation directory:
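The elision logic itself is simple to state: render at most a fixed number of children per node, and summarize the rest. A Python sketch of the idea on nested-list trees (names are illustrative; this is not how Tree rendering is implemented):

```python
def render(tree, max_children=10, indent=0):
    """Return the lines of an indented rendering of a (label, children)
    tree, eliding children beyond max_children with a summary marker,
    in the spirit of $MaxDisplayedChildren."""
    label, children = tree
    lines = [" " * indent + str(label)]
    shown = children[:max_children]
    for child in shown:
        lines.extend(render(child, max_children, indent + 2))
    hidden = len(children) - len(shown)
    if hidden > 0:
        lines.append(" " * (indent + 2) + f"<<{hidden} more>>")
    return lines
```

A node with 5 children and a limit of 3 renders its first three children followed by a `<<2 more>>` marker.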
✕

Calculus & Its Generalizations
Is there still more to do in calculus? Yes! Sometimes the goal is, for example, to solve more differential equations. And sometimes it's to solve existing ones better. The point is that there may be many different possible forms that can be given for a symbolic solution. And often the forms that are easiest to generate aren't the ones that are most useful or convenient for subsequent computation, or the easiest for a human to understand.
In Version 13.2 we've made dramatic progress in improving the form of solutions that we give for many kinds of differential equations, and systems of differential equations.
Here's an example. In Version 13.1 this is an equation we could solve symbolically, but the solution we give is long and complicated:
✕

But now, in 13.2, we immediately give a much more compact and useful form of the solution:
✕

The simplification is often even more dramatic for systems of differential equations. And our new algorithms cover a full range of differential equations with constant coefficients, which are what go by the name LTI (linear time-invariant) systems in engineering, and are used quite universally to represent electrical, mechanical, chemical, etc. systems.
✕

In Version 13.1 we introduced symbolic solutions of fractional differential equations with constant coefficients; now in Version 13.2 we're extending this to asymptotic solutions of fractional differential equations with both constant and polynomial coefficients. Here's an Airy-like differential equation, but generalized to the fractional case with a Caputo fractional derivative:
✕

Analysis of Cluster Analysis
The Wolfram Language has had basic built-in support for cluster analysis since the mid-2000s. But in more recent times, with increased sophistication from machine learning, we've been adding more and more sophisticated forms of cluster analysis. But it's one thing to do cluster analysis; it's another to analyze the cluster analysis one's done, to try to better understand what it means, how to optimize it, etc. In Version 13.2 we're adding the function ClusteringMeasurements to do this, as well as adding more options for cluster analysis, and enhancing the automation we have for method and parameter selection.
Let's say we do cluster analysis on some data, asking for a sequence of different numbers of clusters:
✕

Which is the "best" number of clusters? One measure of this is to compute the "silhouette score" for each possible clustering, and that's something that ClusteringMeasurements can now do:
✕

As is fairly typical in statistics-related areas, there are lots of different scores and criteria one can use; ClusteringMeasurements supports a wide variety of them.
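For concreteness, the silhouette coefficient of a point compares a, its mean distance to its own cluster, with b, its smallest mean distance to any other cluster, as (b − a)/max(a, b). A pure-Python sketch of the mean silhouette score:

```python
def silhouette_score(points, labels):
    """Mean silhouette coefficient over all points: (b - a) / max(a, b),
    where a is the mean distance to the point's own cluster and b the
    smallest mean distance to any other cluster."""
    def dist(p, q):
        return sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
    clusters = {}
    for p, label in zip(points, labels):
        clusters.setdefault(label, []).append(p)
    scores = []
    for p, label in zip(points, labels):
        own = [q for q in clusters[label] if q is not p]
        if not own:                    # singleton cluster: score 0 by convention
            scores.append(0.0)
            continue
        a = sum(dist(p, q) for q in own) / len(own)
        b = min(sum(dist(p, q) for q in qs) / len(qs)
                for k, qs in clusters.items() if k != label)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)
```

Scores near 1 mean tight, well-separated clusters; scores near 0 mean the clustering is barely better than arbitrary.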
Chess as Computable Data
Our goal with the Wolfram Language is to make as much as possible computable. Version 13.2 adds yet another domain, chess, supporting import of the FEN and PGN chess formats:
✕

PGN files typically contain many games, each represented as a list of FEN strings. This counts the number of games in a particular PGN file:
✕

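For a sense of the file format: each game in a PGN file begins with a block of tag pairs such as [Event "..."], so counting games reduces to counting those headers. A Python sketch on an inline sample (the sample text is made up):

```python
import re

PGN_SAMPLE = '''[Event "Game 1"]
[Result "1-0"]

1. e4 e5 2. Nf3 1-0

[Event "Game 2"]
[Result "1/2-1/2"]

1. d4 d5 1/2-1/2
'''

def count_games(pgn_text):
    """Every game in a PGN file starts with an [Event "..."] tag pair."""
    return len(re.findall(r'^\[Event ', pgn_text, flags=re.MULTILINE))
```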
Here's the first game in the file:
✕

Given this, we can now use the Wolfram Language's video capabilities to make a video of the game:
✕

Controlling Runaway Computations
Back in 1979 when I started building SMP, the forerunner to the Wolfram Language, I did something that to some people seemed very bold, perhaps even reckless: I set up the system to fundamentally do "infinite evaluation", that is, to continue using whatever definitions had been given until nothing more could be done. In other words, the process of evaluation would always go on until a fixed point was reached. "But what happens if x doesn't have a value, and you say x = x + 1?" people would ask. "Won't the system blow up in that case?" Well, in some sense yes. But I took a calculated gamble that the benefits of infinite evaluation for ordinary computations that people actually want to do would vastly outweigh any possible issues with what seemed like "pointless corner cases" such as x = x + 1. Well, 43 years later I think I can say with some confidence that that gamble worked out. The concept of infinite evaluation, combined with the symbolic structure of the Wolfram Language, has been a source of tremendous power, and most users simply never run into, and never have to think about, the x = x + 1 "corner case".
However, if you type x = x + 1 the system clearly has to do something. And in a sense the purest thing to do would just be to continue computing forever. But 34 years ago that led to a rather disastrous problem on actual computers, and in fact it still does today. Because in general this kind of repeated evaluation is a recursive process, that ultimately has to be implemented using the call stack set up for every instance of a program by the operating system. But the way operating systems work (still!) is to allocate only a fixed amount of memory for the stack, and if this is overrun, the operating system will simply make your program crash (or, in earlier times, the operating system itself might crash). And this meant that ever since Version 1, we've needed to have a limit in place on infinite evaluation. In early versions we tried to give the "result of the computation so far", wrapped in Hold. Back in Version 10, we started just returning a held version of the original expression:
✕

But even this is in a sense not safe. Because with other infinite definitions in place, one can end up with a situation where even trying to return the held form triggers additional infinite computational processes.
In recent times, particularly with our exploration of multicomputation, we've decided to revisit the question of how to limit infinite computations. At some theoretical level, one might imagine explicitly representing infinite computations using things like transfinite numbers. But that's fraught with difficulty, and manifest undecidability ("Is this infinite computation output really the same as that one?", etc.). But in Version 13.2, as the beginning of a new, "purely symbolic" approach to runaway computation, we're introducing the construct TerminatedEvaluation, which just symbolically represents, as it says, a terminated computation.
So here's what now happens with x = x + 1:
✕

A notable feature of this is that it's "independently encapsulated": the termination of one part of a computation doesn't affect others, so that, for example, we get:
✕

There's a complicated relation between terminated evaluations and lazy evaluation, and we're working on some interesting and potentially powerful new capabilities in this area. But for now, TerminatedEvaluation is an important construct for improving the "safety" of the system in the corner case of runaway computations. And introducing it has allowed us to fix what seemed for many years like "theoretically unfixable" issues around complex runaway computations.
TerminatedEvaluation is what you run into if you hit system-wide "guard rails" like $RecursionLimit. But in Version 13.2 we've also tightened up the handling of explicitly requested aborts, by adding the new option PropagateAborts to CheckAbort. Once an abort has been generated, either directly by using Abort[ ], or as the result of something like TimeConstrained[ ] or MemoryConstrained[ ], there's a question of how far that abort should propagate. By default, it'll propagate all the way up, so your whole computation will end up being aborted. But ever since Version 2 (in 1991) we've had the function CheckAbort, which checks for aborts in the expression it's given, then stops further propagation of the abort.
But there was always a lot of trickiness around the question of things like TimeConstrained[ ]. Should aborts generated by these be propagated the same way as Abort[ ] aborts or not? In Version 13.2 we've now cleaned all of this up, with an explicit option PropagateAborts for CheckAbort. With PropagateAborts→True all aborts are propagated, whether initiated by Abort[ ] or TimeConstrained[ ] or whatever. PropagateAborts→False propagates no aborts. But there's also PropagateAborts→Automatic, which propagates aborts from TimeConstrained[ ] etc., but not from Abort[ ].
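The idea of evaluating to a fixed point, with a symbolic sentinel returned when a limit is hit, can be modeled in a few lines. A toy Python sketch (this is a model of the concept, not the Wolfram evaluator):

```python
class TerminatedEvaluation:
    """Symbolic stand-in for a computation cut off by a limit; it can be
    passed around and inspected like any other value."""
    def __init__(self, reason):
        self.reason = reason
    def __repr__(self):
        return f"TerminatedEvaluation[{self.reason!r}]"

def evaluate_to_fixed_point(step, value, max_steps=100):
    """Apply `step` repeatedly until a fixed point is reached, or return
    a TerminatedEvaluation sentinel instead of recursing forever."""
    for _ in range(max_steps):
        new = step(value)
        if new == value:
            return value
        value = new
    return TerminatedEvaluation("IterationLimit")
```

A computation that settles down returns its fixed point; one that never settles (the x = x + 1 case) returns the sentinel, and because the sentinel is just a value, terminating one subcomputation does not disturb the rest.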
Yet Another Little List Function
In our never-ending process of extending and polishing the Wolfram Language we're constantly on the lookout for "lumps of computational work" that people repeatedly want to do, and for which we can create functions with easy-to-understand names. These days we often prototype such functions in the Wolfram Function Repository, then further streamline their design, and eventually implement them in the permanent core Wolfram Language. In Version 13.2 just two new basic list-manipulation functions came out of this process: PositionLargest and PositionSmallest.
We've had the function Position since Version 1, as well as Max. But something I've often found myself needing to do over the years is to combine these to answer the question: "Where is the max of that list?" Of course it's not hard to do this in the Wolfram Language; Position[list, Max[list]] basically does it. But there are some edge cases and extensions to think about, and it's convenient just to have one function to do this. And, what's more, now that we have functions like TakeLargest, there's an obvious, consistent name for the function: PositionLargest. (And by "obvious", I mean obvious after you hear it; the archive of our livestreamed design review meetings will reveal that, as is so often the case, it actually took us quite a while to settle on the "obvious".)
Here's PositionLargest in action:
✕

And, yes, it has to return a list, to deal with "ties":
✕

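The semantics are easy to state precisely: return the (1-indexed) positions of all elements equal to the maximum. A Python sketch of that behavior (illustrative, not the built-in implementation):

```python
def position_largest(items):
    """Positions (1-indexed, Wolfram Language style) of the maximal
    elements. A list is returned because of possible ties, and an
    empty input yields an empty list."""
    if not items:
        return []
    biggest = max(items)
    return [i + 1 for i, x in enumerate(items) if x == biggest]
```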
Graphics, Image, Graph, …? Tell It from the Frame Color
Everything in the Wolfram Language is a symbolic expression. But different symbolic expressions are displayed differently, which is, of course, very useful. So, for example, a graph isn't displayed in the raw symbolic form
✕

but rather as a graph:
✕

But let's say you've got a whole collection of visual objects in a notebook. How can you tell what they "really are"? Well, you can click them, and then see what color their borders are. It's subtle, but I've found one quickly gets used to noticing at least the kinds of objects one commonly uses. And in Version 13.2 we've made some additional distinctions, notably between images and graphics.
So, yes, the object above is a Graph, and you can tell that because it has a purple border when you click it:
✕

This is a Graphics object, which you can tell because it's got an orange border:
✕

And here, now, is an Image object, with a light blue border:
✕

For some things, color hints just don't work, because people can't remember which color means what. But for some reason, adding color borders to visual objects seems to work very well; it provides the right level of hinting, and the fact that one often sees the color when it's obvious what the object is helps cement a memory of the color.
In case you're wondering, there are some others already in use for borders, and more to come. Trees are green (though, yes, ours by default grow down). Meshes are brown:
✕

Brighter, Better Syntax Coloring
How do we make it as easy as possible to type correct Wolfram Language code? This is a question we've been working on for years, gradually inventing more and more mechanisms and solutions. In Version 13.2 we've made some small tweaks to a mechanism that's actually been in the system for many years, but the changes we've made have a substantial effect on the experience of typing code.
One of the big challenges is that code is typed "linearly", essentially (apart from 2D constructs) from left to right. But (just like in natural languages like English) the meaning is defined by a more hierarchical tree structure. And one of the issues is to know how something you typed fits into the tree structure.
Sometimes the tree structure is visually obvious quite locally in the "linear" code you typed. But sometimes what defines the tree structure is quite far away. For example, you might have a function with several arguments that are each large expressions. And when you're looking at one of the arguments it may not be obvious what the overall function is. And part of what we're now emphasizing more strongly in Version 13.2 is dynamic highlighting that shows you "what function you're in".
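Finding "what function you're in" is essentially a bracket-matching walk from the cursor back to the nearest unmatched opening bracket, then reading the identifier before it. A simplified Python sketch (a conceptual model, not the front end's actual algorithm):

```python
def enclosing_function(code, cursor):
    """Walk left from the cursor, tracking bracket depth, to find the
    name of the innermost function call the cursor sits inside.
    Returns None at top level or inside an anonymous group."""
    depth = 0
    i = cursor - 1
    while i >= 0:
        ch = code[i]
        if ch in ")]":
            depth += 1
        elif ch in "([":
            if depth == 0:
                # found the unmatched opener; read the identifier before it
                j = i
                while j > 0 and (code[j - 1].isalnum() or code[j - 1] == "_"):
                    j -= 1
                return code[j:i] or None
            depth -= 1
        i -= 1
    return None
```

In "Plot[Sin[x], x]", a cursor on the second x is inside Plot, while a cursor on the first x is inside Sin.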
It's highlighting that appears when you click. So, for example, this is the highlighting you get clicking at several different positions in a simple expression:
Here's an example "from the wild" showing you that if you type at the position of the cursor, you'll be adding an argument to the ContourPlot function:
But now let's click in a different place:
Here's a smaller example:
User Interface Conveniences
We first introduced the notebook interface in Version 1 back in 1988. And already in that version we had many of the current features of notebooks, like cells and cell groups, cell styles, etc. But over the past 34 years we've been continuing to tweak and polish the notebook interface to make it ever smoother to use.
In Version 13.2 we have some minor but convenient additions. We've had the Divide Cell menu item (cmd-shift-D) for more than 30 years. And the way it's always worked is that you click where you want a cell to be divided. Meanwhile, we've always had the ability to put multiple Wolfram Language inputs into a single cell. And while sometimes it's convenient to type code that way, or import it from elsewhere like that, it makes better use of all our notebook and cell capabilities if each independent input is in its own cell. And now in Version 13.2 Divide Cell can make it like that, analyzing multiline inputs to divide them between complete inputs that occur on different lines:
Similarly, if you're dealing with text instead of code, Divide Cell will now divide at explicit line breaks, which might correspond to paragraphs.
In a completely different area, Version 13.1 added a new default toolbar for notebooks, and in Version 13.2 we're beginning the process of steadily adding features to this toolbar. The main obvious feature that's been added is a new interactive tool for changing frames in cells. It's part of the Cell Appearance item in the toolbar:
Just click a side of the frame style widget and you'll get a tool to edit that frame style, and you'll immediately see any changes reflected in the notebook:
If you want to edit all the sides, you can lock the settings together with:
Cell frames have always been a useful mechanism for delineating, highlighting or otherwise annotating cells in notebooks. But in the past it's been comparatively difficult to customize them beyond what's in the stylesheet you're using. With the new toolbar feature in Version 13.2 we've made it very easy to work with cell frames, making it realistic for custom cell frames to become a routine part of notebook content.
Mixing Compiled and Evaluated Code
We've worked hard to have code you write in the Wolfram Language immediately run efficiently. But by taking the extra one-time effort to invoke the Wolfram Language compiler, telling it more details about how you expect to use your code, you can often make your code run more efficiently, and sometimes dramatically so. In Version 13.2 we've been continuing the process of streamlining the workflow for using the compiler, and of unifying code that's set up for compilation with code that's not.
The primary work you have to do in order to make the best use of the Wolfram Language compiler is in specifying types. One of the important features of the Wolfram Language in general is that a symbol x can just as well be an integer, a list of complex numbers or a symbolic representation of a graph. But the main way the compiler adds efficiency is by being able to assume that x is, say, always going to be an integer that fits into a 64-bit computer word.
The Wolfram Language compiler has a sophisticated symbolic language for specifying types. There is, for example, a symbolic specification for the type of a function that takes two 64-bit integers as input, and returns a single one. TypeSpecifier[ ... ] is a symbolic construct that doesn't evaluate on its own, and can be used and manipulated symbolically. And it's the same story with Typed[ ... ], which allows you to annotate an expression to say what type it should be assumed to be.
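Concretely, such a specification might look like the following (the post's original cell isn't shown, so this exact form is an assumption):

```wolfram
(* A symbolic type: two machine (64-bit) integers in, one machine integer out *)
TypeSpecifier[{"MachineInteger", "MachineInteger"} -> "MachineInteger"]
```

Like other symbolic expressions in the Wolfram Language, this doesn't evaluate to anything on its own; it's just a structure the compiler knows how to interpret.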
But what if you want to write code which can either be evaluated in the ordinary way, or fed to the compiler? Constructs like Typed[ ... ] are for permanent annotation. In Version 13.2 we've added TypeHint, which allows you to give a hint that can be used by the compiler, but will be ignored in ordinary evaluation.
This compiles a function assuming that its argument x is an 8-bit integer:
By default, the 100 here is assumed to be represented as a 64-bit integer. But with a type hint, we can say that it too should be represented as an 8-bit integer:
150 doesn't fit in an 8-bit integer, so the compiled code can't be used:
But what's relevant here is that the function we compiled can be used not only for compilation, but also in ordinary evaluation, where the TypeHint effectively just "evaporates":
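Pulling the steps above together, the inputs were presumably along these lines (a sketch; the exact cells from the post are not reproduced here):

```wolfram
(* Compile, assuming the argument x is an 8-bit integer, and hinting that
   the literal 100 should also be treated as an 8-bit integer *)
f = FunctionCompile[
   Function[Typed[x, "Integer8"], x + TypeHint[100, "Integer8"]]];

f[20]   (* uses the compiled code *)
f[150]  (* 150 doesn't fit in 8 bits, so the compiled code can't be used *)

(* The same pure function also works in ordinary evaluation,
   where Typed and TypeHint just "evaporate" *)
Function[Typed[x, "Integer8"], x + TypeHint[100, "Integer8"]][150]
```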
As the compiler develops, it's going to be able to do more and more type inferencing on its own. But it'll always be able to get further if the user gives it some hints. For example, if x is a 64-bit integer, what type should be assumed for x^x? There are certainly values of x for which x^x won't fit in a 64-bit integer. But the user might know those won't show up. And so they can give a type hint that says that the x^x should be assumed to fit in a 64-bit integer, and this will allow the compiler to do much more with it.
It's worth pointing out that there are always going to be limitations to type inferencing, because, in a sense, inferring types requires proving theorems, and there can be theorems that have arbitrarily long proofs, or no proofs at all in a given axiomatic system. For example, imagine asking whether a zero of the Riemann zeta function has a certain imaginary part. To answer this, the type inferencer would have to resolve the Riemann hypothesis. But if the user just wanted to assume the Riemann hypothesis, they could, at least in principle, use TypeHint.
TypeHint is a wrapper that means something to the compiler, but "evaporates" in ordinary evaluation. Version 13.2 adds IfCompiled, which lets you explicitly delineate code that should be used with the compiler, and code that should be used in ordinary evaluation. This is useful when, for example, ordinary evaluation can use a sophisticated built-in Wolfram Language function, but compiled code will be more efficient if it effectively builds up similar functionality from lower-level primitives.
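As a minimal sketch of that pattern (the particular function here is an assumption, not an example from the post):

```wolfram
(* In compiled code, build the sum with a low-level loop; in ordinary
   evaluation, just use the built-in Total *)
sumTo = Function[Typed[n, "MachineInteger"],
   IfCompiled[
    Module[{s = 0}, Do[s = s + i, {i, n}]; s],  (* used by the compiler *)
    Total[Range[n]]                             (* used in ordinary evaluation *)
    ]];
csumTo = FunctionCompile[sumTo];
```

Both branches compute the same result; IfCompiled just picks whichever is appropriate for the execution context.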
In its simplest form FunctionCompile lets you take an explicit pure function and make a compiled version of it. But what if you have a function where you've already assigned downvalues to it, like:
Now in Version 13.2 you can use the new DownValuesFunction wrapper to give a function like this to FunctionCompile:
This is important because it lets you set up a whole network of definitions using := etc., then have them automatically be fed to the compiler. In general, you can use DownValuesFunction as a wrapper to tag any use of a function you've defined elsewhere. It's somewhat analogous to the KernelFunction wrapper that you can use to tag built-in functions, and specify what types you want to assume for them in code that you're feeding to the compiler.
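A sketch of the overall pattern (the actual definitions from the post aren't shown, so this factorial function, and this particular way of wrapping it, are assumptions):

```wolfram
(* A function defined through downvalues, in the usual := way *)
fac[n_] := If[n <= 1, 1, n fac[n - 1]]

(* One plausible way to hand it to the compiler: wrap it in
   DownValuesFunction inside a typed pure function *)
cfac = FunctionCompile[
   Function[Typed[n, "MachineInteger"], DownValuesFunction[fac][n]]];
```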
Packaging LargeScale Compiled Code
Let's say you're building a substantial piece of functionality that might include compiled Wolfram Language code, external libraries, etc. In Version 13.2 we've added capabilities to make it easy to "package up" such functionality, and for example deploy it as a distributable paclet.
As an example of what can be done, this installs a paclet called GEOSLink that includes the GEOS external library and compiler-based functionality to access it:
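The installation step presumably amounts to something like the following (the paclet's exact source location is an assumption):

```wolfram
(* Install the GEOSLink paclet, which bundles the GEOS library
   and compiled glue code *)
PacletInstall["GEOSLink"]
```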
Now that the paclet is installed, we can use a file from it to set up a whole collection of functions that are defined in the paclet:
Given the code in the paclet we can now just start calling functions that use the GEOS library:
It's quite nontrivial that this "just works". Because for it to work, the system has to have been told to load and initialize the GEOS library, as well as convert the Wolfram Language polygon geometry to a form suitable for GEOS. The returned result is also nontrivial: it's essentially a handle to data that's inside the GEOS library, but being memory-managed by the Wolfram Language system. Now we can take this result, and call a GEOS library function on it, using the Wolfram Language binding that's been defined for that function:
This gets the result "back from GEOS" into pure Wolfram Language form:
How does all this work? This goes to the directory for the installed GEOSLink paclet on my system:
There's a subdirectory called LibraryResources that contains dynamic libraries suitable for my computer system:
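Navigating there can be sketched with standard paclet functions (the exact calls used in the post aren't shown; these are plausible equivalents):

```wolfram
(* Locate the installed paclet and list its bundled dynamic libraries *)
dir = PacletObject["GEOSLink"]["Location"];
FileNames[All, FileNameJoin[{dir, "LibraryResources"}]]
```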
The libgeos libraries are the raw external GEOS libraries "from the wild". The GEOSLink library is a library that was built by the Wolfram Language compiler from Wolfram Language code that defines the "glue" for interfacing between the GEOS library and the Wolfram Language:
What is all this? It's all based on new functionality in Version 13.2. And ultimately what it's doing is to create a CompiledComponent construct (which is a new thing in Version 13.2). A CompiledComponent construct represents a bundle of compilable functionality with elements like "Declarations", "InstalledFunctions", "LibraryFunctions", "LoadingEpilogs" and "ExternalLibraries". And in a typical case, like the one shown here, one creates (or adds to) a CompiledComponent using DeclareCompiledComponent.
Here’s an example of part of what’s added by DeclareCompiledComponent:
First there's a declaration of an external (in this case GEOS) library function, giving its type signature. Then there's a declaration of a compilable Wolfram Language function GEOSUnion that directly calls the GEOSUnion function in the external library, defining it to take a certain memory-managed data structure as input, and return a similarly memory-managed object as output.
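In outline, the first of those declarations might look something like this; the names, types and signatures here are illustrative assumptions, not the actual GEOSLink source:

```wolfram
(* Sketch only: declare the external GEOS library function and its
   type signature as part of the GEOSLink compiled component *)
DeclareCompiledComponent["GEOSLink", "Declarations" -> {
   LibraryFunctionDeclaration["GEOSUnion", $GEOSLibrary,
    {"OpaqueRawPointer", "OpaqueRawPointer", "OpaqueRawPointer"} ->
     "OpaqueRawPointer"]
   }]
```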
From this source code, all you do to build an actual library is use BuildCompiledComponent. And given this library you can start calling external GEOS functions directly from top-level Wolfram Language code, as we did above.
But the CompiledComponent object does something else as well. It also sets up everything you need to write compilable code of your own that calls the same functions as are in the built library.
The bottom line is that with all the new functionality in Version 13.2 it's become dramatically easier to integrate compiled code, external libraries, etc. and to make them conveniently distributable. It's a fairly remarkable simplification of what was previously a time-consuming and complex software engineering challenge. And it's a good example of how powerful it can be to set up symbolic specifications in the Wolfram Language and then use our compiler technology to automatically create and deploy code defined by them.
And More…
In addition to all the things we've discussed, there are other updates and enhancements that have arrived in the six months since Version 13.1 was released. A notable example is that there have been no fewer than 241 new functions added to the Wolfram Function Repository during that time, providing specific add-on functionality in a whole range of areas.
But within the core Wolfram Language itself, Version 13.2 also adds lots of little new capabilities that polish and round out existing functionality. Here are some examples:
Parallelize now supports automatic parallelization of a variety of new functions, particularly related to associations.
Blurring now joins DropShadowing as a 2D graphics effect.
MeshRegion, etc. can now store vertex coloring and vertex normals to allow enhanced visualization of regions.
RandomInstance does much better at quickly finding nondegenerate examples of geometric scenes that satisfy specified constraints.
ImageStitch now supports stitching images onto spherical and cylindrical canvases.
Functions like Definition and Clear that operate on symbols now consistently handle lists and string patterns.
FindShortestTour has a direct way to return individual features of the result, rather than always packaging them together in a list.
PersistentSymbol and LocalSymbol now allow reassignment of parts using functions like AppendTo.
SystemModelMeasurements now gives diagnostics such as rise time and overshoot for SystemModel control systems.
Import of the OSM (OpenStreetMap) and GXF geo formats is now supported.