A lot of stock these days gets put into simulations of various complex events. For most of these simulations there aren’t poor underfed graduate students doing the calculations by hand; of course, we use our computers to do them for us.

Computers approximate our answers. In some cases it’s for the sake of expediency: there’s an infinite number of numbers out there, and it’s faster and cheaper to just ignore some of them. Of course, this leads to some inaccuracy and a loss of precision in performing calculations. In a lot of other cases, though, they approximate because we just don’t know what the ‘actual’ solution is. I’m not sure how many of you remember your calculus, but it always seemed to kind of skip over certain integration questions, and for good reason: for almost all functions there’s no known way to get an exact integral, and a lot of those functions are used all over the place. The same goes for many differential equations, and for finding roots of polynomials of degree 5 or more there’s provably no general formula in radicals at all. So we approximate, then use those approximations to make further approximations, and so on.
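Both kinds of approximation are easy to see from a keyboard. Here’s a small Python sketch of my own (the function, the interval, and the choice of the trapezoidal rule are just illustrations, not anything from the text above): the first part shows the roundoff you get from ignoring some of those infinitely many numbers, and the second approximates the integral of e^(-x²), a function with no elementary antiderivative.

```python
import math

# Floating-point roundoff: 0.1 and 0.2 have no exact binary
# representation, so their sum isn't exactly 0.3.
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004

# e^(-x^2) has no elementary antiderivative, so we approximate its
# integral numerically, here with the composite trapezoidal rule.
def trapezoid(f, a, b, n=1000):
    h = (b - a) / n
    total = (f(a) + f(b)) / 2
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

approx = trapezoid(lambda x: math.exp(-x * x), 0.0, 1.0)
print(approx)  # close to the known value erf(1) * sqrt(pi) / 2 ≈ 0.746824
```

The answer is “wrong” in the strict sense, but it’s wrong by less than a millionth, which is exactly the trade the post is describing.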

Now, this works pretty well in practice for most applications. After all, we seem to be doing alright in space: using computers, we’ve managed to go to the moon and launch plenty of shiny objects. But for things which exhibit complex or chaotic behaviour, it turns out our computer models have quite a number of limitations. Want to solve the equations of gravity for 3 masses? We can do somewhat decent approximations. Throw in, say, 20 masses, and things tend to get out of control pretty quickly. The weather is another system we model: we’re able to gather quite a lot of data, and meteorologists have very powerful computers running very complex models, and for all that the best predictions we have reach at most a few days ahead, and are often wrong at that. Whoops!
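The gravity example can be sketched in a few lines. This is a toy, not a serious integrator: I’m assuming three unit masses, G = 1, made-up starting positions and velocities, and plain Euler stepping, all of which are my own choices for illustration. The point is just the chaotic sensitivity: nudge one starting coordinate by a billionth and the trajectories come out measurably different anyway.

```python
import math

# Toy 2D three-body simulation: three unit masses, G = 1, plain Euler
# integration (crude on purpose -- approximation error is part of the story).
def simulate(positions, velocities, dt=0.001, steps=5000):
    pos = [list(p) for p in positions]
    vel = [list(v) for v in velocities]
    for _ in range(steps):
        acc = [[0.0, 0.0] for _ in pos]
        for i in range(3):
            for j in range(3):
                if i == j:
                    continue
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                r3 = (dx * dx + dy * dy) ** 1.5
                acc[i][0] += dx / r3
                acc[i][1] += dy / r3
        for i in range(3):
            vel[i][0] += acc[i][0] * dt
            vel[i][1] += acc[i][1] * dt
            pos[i][0] += vel[i][0] * dt
            pos[i][1] += vel[i][1] * dt
    return pos

start = [[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]]
vels = [[0.0, -0.3], [0.0, 0.3], [0.3, 0.0]]
a = simulate(start, vels)

# Perturb one starting coordinate by one part in a billion...
nudged = [row[:] for row in start]
nudged[0][0] += 1e-9
b = simulate(nudged, vels)

# ...and measure how far apart the two runs end up.
drift = max(math.hypot(p[0] - q[0], p[1] - q[1]) for p, q in zip(a, b))
print(drift)
```

With a genuinely chaotic setup, that drift keeps growing the longer you run it, which is why adding more masses (or forecasting further ahead) gets out of control so quickly.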

So, the universe doesn’t give us exact answers most of the time, and sometimes even when it does, we prefer to take the quick and dirty route. Does this mean we throw our simulations of complex systems out? Of course not. Even if they’re never completely ‘right’, the approximations can still give us a lot of insight.

In a lot of the climate change simulations I’ve seen discussed, of course, the industry hacks like to bring this up as uncertainty about the actual effects of various inputs to the simulation. This is where people, and common sense, come in. Those simulations need people to analyze their results, and to run them with different data, more data, better data, and manual corrections, to see what our approximations are telling us about the underlying universe. The models themselves hold no fundamental truths. There’s still some art there.

Hey liam, I think your comment about simulations providing us with some extra insight is the most important thing to take away from the big world of computer models and uncertainty. A good example is the weather, as you brought up. Many of the computer models used in meteorology research are actually modelling past events, e.g. a certain hurricane or other weather system. The reason for this is so meteorologists can understand the weather better, or have greater insight into the workings of the lower atmosphere. As for prediction models, and I’d wager this goes for many other types of models, I don’t think it’s necessarily the model’s fault, for not being able to complete an integral, that we get wrong predictions; I think it’s the people creating the models who don’t know enough yet to create them correctly. That’s why historical modelling is essential. As that famous saying goes, “the key to the present is the past”.