Launching a New Era in Large-Scale Systems Modeling

Over the past 25 years, we’ve been fortunate enough to make a mark in all sorts of areas of science and technology. Today I’m excited to announce that we’re in a position to tackle another major area: large-scale systems modeling.

It’s a huge and important area, long central to engineering, and increasingly central to fields like biomedicine. To do it right is also incredibly algorithmically demanding. But the exciting thing is that now we’ve finally assembled the technology stack that we need to do it—and we’re able to begin the process of making large-scale systems modeling an integrated core feature of Mathematica, accessible to a very broad range of users.

Lots of remarkable things will become possible. Using the methodology we’ve developed for Wolfram|Alpha, we’ll be curating not only data about systems and their components, but also complete dynamic models. Then we’ll have the tools to easily assemble models of almost arbitrary complexity—and to put them into algorithmic form so that they can be simulated, optimized, validated, visualized or manipulated by anything across the Mathematica system.

Making models computable

And then we’ll also be able to inject large-scale models into the Wolfram|Alpha system, and all its deployment channels.

So what does this mean? Here’s an example. Imagine that there’s a model for a new kind of car engine—probably involving thousands of individual components. The model is running in Mathematica, inside a Wolfram|Alpha server. Now imagine someone out in the field with a smartphone, wondering what will happen if they do a particular thing with an engine.

Well, with the technology we’re building, they should be able to just type (or say) into an app: “Compare the frequency spectrum for the crankshaft in gears 1 and 5”. Back on the server, Wolfram|Alpha technology will convert the natural language into a definite symbolic query. Then in Mathematica the model will be simulated and analyzed, and the results—quantitative, visual or otherwise—will be sent back to the user. Like a much more elaborate and customized version of what Wolfram|Alpha would do today with a question about a satellite position or a tide.
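There’s already a small-scale analogue of this pipeline in Mathematica today: the built-in WolframAlpha function sends free-form language to the Wolfram|Alpha servers and brings back computed results. A minimal sketch (the engine query above is of course hypothetical; a tide question is what works today):

    (* free-form language in; a computed result back from the servers *)
    WolframAlpha["tide in Seattle", "Result"]

In the deployed-model scenario, the same kind of linguistic front end would resolve the query into a definite symbolic expression, to be evaluated against the model running on the server.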

OK. So what needs to happen to make all this stuff possible? To begin with, how can Mathematica even represent something like a car—with all its various physical components, moving and running and acting on each other?

We know that Mathematica is good at handling complexity in algorithmic systems: just look at the 20+ million lines of Mathematica code that make up Wolfram|Alpha. And we also know that among the functions in the Mathematica language are ones that powerfully handle all sorts of computations needed in studying physical and other processes.

But what about a car? The gears and belts don’t work like functions that take input and give output. They connect, and interact, and act on one another. And in an ordinary computer language that’s based on having data structures and variables that get fed to functions, there wouldn’t be any good way to represent this.

But in Mathematica, building on the fact that it is a symbolic language, there is a way: equations. Because in addition to being a rich programming language, Mathematica—through its symbolic character—is also a rich mathematical language. And one element of it is a full representation of equations: algebraic, differential, differential-algebraic, discrete, whatever.
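As a glimpse of that range, here are an algebraic and a differential equation handled symbolically, in ordinary Mathematica input (the symbols are purely illustrative):

    (* algebraic: Ohm's law, solved for the current *)
    Solve[v == i r, i]

    (* differential: a damped harmonic oscillator, solved symbolically *)
    DSolve[m x''[t] + c x'[t] + k x[t] == 0, x[t], t]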

But, OK. So maybe there’s an equation—based on physics—for how one component acts on another in a car. But in a whole car there are perhaps tens of thousands of components. So what kind of thing does one need to represent that whole system?

It’s a little like a large program. But it’s not a program in the traditional input-output sense. Rather, it’s a different kind of thing: a system model.

The elements of the model are components. With certain equations—or algorithms—describing how the components behave, and how they interact with other components. And to set this up one needs not a programming language, but a modeling language.

We’ve talked for nearly 20 years about how Mathematica could be extended to handle modeling in a completely integrated way. And for years we’ve been adding more and more of the technology and capabilities that are needed. And now we’ve come to an exciting point: we’re finally ready to make modeling an integrated part of Mathematica.

And to accelerate that process, we announced today that we’ve acquired MathCore Engineering AB—a long-time developer of large-scale engineering software systems based on Mathematica and Modelica, and a supplier of solutions to such companies as Rolls-Royce, Siemens and Scania.

And as of today, we’re beginning the process of bringing together Mathematica and MathCore’s technology—as well as Wolfram|Alpha and CDF—to create a system that we think will launch a whole new era in design, modeling and systems engineering.

So what will it be like?

The basic approach is to think about large systems—whether engineering, biological, social or otherwise—as collections of components.

Each component has certain properties and behaviors. But the main idealization—central for example to most existing engineering—is that the components interact with each other only in very definite ways.

It doesn’t matter whether the components are electrical, hydraulic, thermodynamic, chemical or whatever. If one looks at the models that are used, there are typically just two kinds of variables: one representing some kind of effort, and the other some kind of flow. These might be voltage and current for an electrical circuit. Or pressure and volume flow for a hydraulic system. Or temperature and entropy flow for a thermodynamic system. Or chemical potential and molar flow for a chemical system.

And then to represent how the components interact with each other, one ends up with equations relating the values of these variables—much like a generalization of Kirchhoff’s laws for circuits from 1845. Typically, the individual components also satisfy equations—that may be algebraic (like Ohm’s law or Bernoulli’s principle), differential (like Newton’s laws for a point object), differential-algebraic (like rigid-body kinematics with constraints), difference (like for sampled motion), or discrete (like in a finite-state machine).
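As a tiny concrete version of that assembly process, here is a series RC circuit written out directly as component equations plus Kirchhoff’s voltage law, then handed to Mathematica to solve (the names vIn, r and cap are purely illustrative; DSolve can handle such mixed differential-algebraic systems):

    eqns = {vR[t] == r i[t],         (* Ohm's law for the resistor *)
            i[t] == cap vC'[t],      (* the capacitor law *)
            vIn == vR[t] + vC[t],    (* Kirchhoff's voltage law around the loop *)
            vC[0] == 0};             (* capacitor starts uncharged *)
    DSolve[eqns, {vR, vC, i}, t]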

When computers were first used for systems modeling back in the 1960s, all equations in effect had to be entered explicitly. By the 1980s, block diagrams had become popular as graphical ways to represent systems, and to generate the equations that correspond to them. But in their standard form, such diagrams were of necessity very restrictive: they were set up to work like flowcharts, with only one-way dependence of one component on another.

By restricting dependence in this way, one is limited to ordinary differential equations that can be solved with traditional numerical computation methods. Of course, in actual systems there is two-way dependence. So to model systems correctly, one has to be able to handle that—which means going beyond traditional “causal” block diagram methods.

For a long time, however, this just seemed too difficult. And it didn’t even help much that computers were getting so much faster. The systems of equations that appeared—typically differential-algebraic ones—just seemed to be fundamentally too complicated to handle in any automatic way.

But gradually cleaner and cleaner formulations were developed—particularly in connection with the Modelica modeling language. And it became clear that the real issue was appropriate manipulation of the underlying symbolic equations.

Now, of course, in Mathematica we’ve spent nearly 25 years building up the best possible ways to handle symbolic equations. And starting about a decade ago we began to use our capabilities to attack differential-algebraic equations. And meanwhile, our friends at MathCore had been integrating Mathematica with Modelica, and creating a sequence of increasingly sophisticated modeling systems.
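Today, for instance, NDSolve can integrate a differential equation coupled directly to an algebraic constraint, with no manual reduction to explicit one-way form. A minimal illustration:

    (* a coupled differential-algebraic system, integrated directly *)
    NDSolve[{x'[t] == -y[t],       (* differential part *)
             x[t] + y[t] == 1,     (* algebraic constraint *)
             x[0] == 0},
            {x, y}, {t, 0, 2}]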

So now we’re at an exciting point. Building on a whole tower of interlinked capabilities in Mathematica—symbolic, numerical, graph theoretic, etc.—together with technology that MathCore has been developing for a decade or so, we’re finally at the point where we can start to create a complete environment for systems modeling, with no compromises.

It’s a very high-tech thing. Inside are many extremely sophisticated algorithms, spanning a wide range of fields and methodologies. And the good news is that over the course of many years, these algorithms have progressively been strengthened, to the point where they can now deal with very large-scale industrial problems. So even if one wants to use a million variables to accurately model a whole car, it’ll actually be possible.

OK, but what’s ultimately the point of doing something like this?

First and foremost, it’s to be able to figure out what the car will do just by simulation—without actually building a physical version of the car. And that’s a huge win, because it lets one do vastly more experiments, more easily, than one ever could in physical form.

But beyond that, it lets one take things to a whole different level, by effectively doing “meta-experiments”. For example, one might want to optimize a design with respect to some set of parameters, effectively doing an infinite family of possible experiments. Or one might want to create a control system that one can guarantee will work robustly. Or one might want to identify a model from a whole family of possibilities by fitting its behavior to measured physical data.

And these are the kinds of places where things get really spectacular with Mathematica. Because these sorts of “meta” operations are already built into Mathematica in a very coherent and integrated way. And once one has the model in Mathematica, one can immediately apply Mathematica’s built-in capabilities for optimization, control theory, statistical analysis, or whatever.
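As a small sketch of the model-identification case, for instance, one can fit the parameters of a model to measured data with a single built-in call (the data here is made up purely for illustration):

    (* hypothetical measured {time, response} data *)
    data = {{0., 1.0}, {1., 0.61}, {2., 0.37}, {3., 0.22}, {4., 0.14}};

    (* identify the model parameters by fitting to the data *)
    FindFit[data, a Exp[-k t], {a, k}, t]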

It’s also “free” in Mathematica to do very high-quality visualization, interface building, scripting and other things. And perhaps particularly important is Mathematica’s ability to create interactive documents in its Computable Document Format (CDF).

So one can have a “live” description of a model to distribute, in which one mixes narrative text, formulas, images and so on with the actual working model. Already in the Wolfram Demonstrations Project there are lots of examples of simulating small systems. But when we’ve finished our large-scale systems modeling initiative, one will be able to use exactly the same technology for highly complex systems too.
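At small scale, such a “live” element is just a Manipulate, the same pattern behind many existing Demonstrations (the model here is a trivial damped oscillator, purely for illustration):

    Manipulate[
     Plot[Evaluate[x[t] /. First[NDSolve[
        {x''[t] + c x'[t] + x[t] == 0, x[0] == 1, x'[0] == 0},
        x, {t, 0, 20}]]],
      {t, 0, 20}, PlotRange -> {-1, 1}],
     {c, 0.05, 2}]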

Gone will be the distinction between “documentation” and “modeling software”. There’ll just be one integrated CDF that covers both.

Computable model deployment

So how does one actually set about creating a model? One has to build up from models of larger- and larger-scale components. And many of these components will have names. Perhaps generic ones—like springs or transformers—or perhaps specific ones, based on some standard, or the products of some manufacturer. The diversity of different objects, and different ways to refer to variants of them, might seem quite daunting.

But from Wolfram|Alpha, we have had the experience of curating all sorts of information like this—and linking underlying specific computable knowledge with the convenient free-form linguistics that get used by practitioners in any particular specialty. Of course it helps that we already have in Wolfram|Alpha huge amounts of computable knowledge about physical systems, material properties, thermodynamics and so on—as well as about environmental factors like climate history or electricity prices.

Today we are used to programmers who create sophisticated pieces of software. Increasingly, we will see modelers who create sophisticated models. Often they will start from free-form linguistic specifications of their components. Then they will gradually build up—using textual or graphical tools—precise representations of larger- and larger-scale models.

Once a model is constructed, then it’s a question of running it, analyzing it, and so on. And here both the precise Mathematica language and free-form Wolfram|Alpha-style linguistics are relevant.

Most of the modeling that is done today is done as part of the design process. But the technology stack we’re building will make it possible to deliver the results of models to users and consumers of devices as well. By using Wolfram|Alpha technology, we’ll be able to have models running on cloud servers, accessed with free-form linguistics, on mobile devices, or whatever. So that all sorts of people who know nothing about the actual structure and design of systems can get a whole range of practical questions immediately answered.

This kind of methodology will be important not only across engineering, but also in areas like biomedicine. Where it’ll become realistic to take complex models and make clinically relevant predictions from them in the field.

And when it comes to engineering, what’s most important about the direction we’re launching is that it promises to allow a significant increase in the complexity of systems that can cost-effectively be designed—a kind of higher-level form of engineering.

In addition, it will make it realistic to explore more broadly the space of possible engineering systems and components. It is remarkable that even today most engineering is still done with systems whose components were well known in the nineteenth century—or at least in the middle of the twentieth century. But the modeling technology we are building will make it straightforward to investigate the consequences of using quite different components or structures.

And for example it will become realistic to use elements found by the “artificial innovation” of A New Kind of Science methods—never constructed by human engineers, but just discovered by searching the computational universe of possibilities.

A great many of the engineering accomplishments of today have specifically been made possible by the level of systems modeling that can so far be done. So it will be exciting to see what qualitatively new accomplishments—and what new kinds of engineering systems—will become possible with the new kind of large-scale systems modeling that we have launched into building.


14 comments

  1. Forgive me as a layman if I miss anything here, but I find this fascinating. In automotive terms, have you talked to the Formula 1 teams about partnerships? It strikes me that the ability to model a problem / opportunity rapidly would appeal to them greatly.

  2. Very impressive. I’m fascinated by the philosophical implications, and have been ever since my physics days, before philosophy and logic got to me and when I first fell under the spell of Mathematica. I take it VLSI architectures can now morph from very large to . . . ? I’m also interested in what this advance implies for information theory in general, and AIT (algorithmic information theory) in particular. Shouldn’t we perhaps organize some kind of transdisciplinary conference to explore? (Would be interested in helping with some such; you can google me.)

  3. It is just great. A dream come true. A language for material tools and an e-link between “soul and body” or “form and matter”. It is a pity that Plato is not here to see it, but I am definitely amazed to be. How can someone learn more about it or use it? I am especially interested in biomedicine applications. Thank you and congratulations!

  4. A very important development. As an economist and former investment banker, I have always been unhappy with the lack of computational support for developing adequate business models that capture real-life relationships. With systems now properly modelled, we can move a step closer to developing financial models that properly capture the complexity of real-life data.

  5. Please focus on the natural-language-to-machine-code translation! I’m not impressed at all by Wolfram|Alpha. Maybe offer a coding/scripting query language too, so more tech people won’t struggle with the natural language 🙂

  6. How about beginning with the modelling of Wolfram Research? It would include models of:
    - Mathematica, Wolfram|Alpha…
    Brian Gilbert
    W|A Volunteer Curator

  7. I find that using 2-level (attribute) grammars allows me to model just about everything I come across: from bureaucratic systems to software to, I can imagine, modelling a car. Structure is found in the actual grammar while restrictions and relationships (equations) take place in the attributes. Because of the expressiveness of 2-level grammars, it’s both powerful and sometimes uncontrollable. A modelling tool which provides feedback while maintaining formalisms on the modelling process would be most welcome. It would also allow the model to be transformed (e.g. compiled) into different forms and integrate within and interoperate throughout other contexts. The car model would fit into the entire car-life-cycle model, from materials acquisition to recycling, with a car-driving model somewhere in between.

  8. I believe that it is also necessary to integrate algorithms for system identification. In real systems, describing the behavior of a component only from physical laws is impossible in many relevant situations. In such cases, it would be very practical if Mathematica could create a model of a component (with a given complexity specified by the user) simply from input-output data.

  9. The first challenge is not how to make large-scale systems modeling accessible to the general user. The first challenge is how to extract results from various models in a way that is appropriate for the problem to be solved.
    “All models are wrong, but some are useful” (quote attributed to George Box). In real-life applications, the results of models need to have some confidence attached about their applicability to the problem. There also needs to be consideration of the uncertainty of the inputs and the time allotted to get a result. The issue is not always connecting a set of models from smaller parts, but understanding the validity of the models and the links between them.
    There needs to be a focus on how to use and link models, or there is the risk of just providing easy access to garbage results.

  10. Can we use this engine for forecasting the future, for example in the field of economics or politics?
    If so, please explain.

  11. I am fascinated by the grand vision and the steps towards its realisation. But like alirazei I think it would be highly desirable to extend such a large-scale modeling effort towards economics, i.e. ultimately the world economy, especially given the inability of the economics profession to foresee or to handle the current crisis. Maybe it is something you have already envisioned, too…

  12. The most significant hardware advance for this purpose is access to multiple processors which can each model a component in parallel.

  13. This is exciting stuff, yet here we are a year later, and I can’t find signs that the systems modeling extensions to Mathematica are available. I look forward to this, and would appreciate news, or at least a placeholder feature name so it will become obvious when it has arrived.

  14. Free at last! My brain will be able to rest and I will just focus on the result and not worry about the whole process: just give the input and get the result at the end to see how the system will behave under different parameters.