It’s been an interesting time to be a modeller. The 2021 Nobel Prize in Physics was split among three modellers. Shortly thereafter Jordan Peterson haemorrhaged a string of words that roughly translated to “I don’t understand climate models and therefore they are wrong”. Meanwhile epidemiological models have spent the past two years being derided in the tabloids. Much of the anti-science rhetoric around such issues as the pandemic or global warming reduces to “this result must be wrong because it is a ‘model'”, without more specific detail. But this generalised attack is illogical. For example, we know full well that Newton’s laws are not a complete theory of gravity, and yet numerical approximations of Newton’s equations guided space exploration.
So let’s stand back a minute and clarify what a “model”, a “mathematical model”, and a “numerical model” actually are, so we can better understand how climate modelling works.
In academia, a “model” can mean any abstraction or representation of an object, concept, or system. This includes the mental models our brains create in response to sensory input, connecting observations into patterns and helping us navigate life. The expectation that the sun will rise tomorrow is a mental model we’ve all made of the world, and it’s a ridiculously accurate one (at least until the sun expands, or some other rare cataclysm interrupts normal events).
A “mathematical model” uses equations to relate observable variables. Newton’s laws, for example, are maths equations that can be solved to describe the motions of objects based on momenta and forces. And because these equations include time as a variable, you can even input values and then solve for the future to make predictions.
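To make this concrete, here is a minimal sketch (not from the article; the numbers are illustrative) of using Newton’s equations of motion under constant gravity to predict where a dropped object will be at a future time:

```python
# A minimal sketch: Newton's laws under constant gravity let us
# predict the future position of a falling object. All values here
# are illustrative.

g = 9.81    # gravitational acceleration, m/s^2
v0 = 0.0    # initial velocity, m/s (object released from rest)
h0 = 45.0   # initial height, m

def height(t):
    """Closed-form solution of Newton's equation for free fall."""
    return h0 + v0 * t - 0.5 * g * t ** 2

# Plug in a future time to make a prediction:
print(height(3.0))   # predicted height after 3 seconds
```

Time appears as an ordinary input variable, which is exactly what lets a mathematical model look forward.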
Many equations are profoundly difficult to solve, and so mathematicians and scientists have developed “numerical methods”: Tools for converting hard-to-solve equations into simplified steps that approximate the solution with a quantifiable degree of precision. Sometimes a finite number of steps yields an exact answer; sometimes even an infinite number only gives an approximation not quite equal to the true solution. The size of the error is often determined by how powerful your computer is and how much time you have on your hands. But it’s a quantifiable error. Numerical methods are a rigorous branch of mathematics, well studied for how they approximate and converge upon true solutions. No one is shooting in total darkness here.
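A simple example of the “simplified steps” idea is Euler’s method (a sketch, not from the article), which approximates a differential equation by taking many small steps. Taking more steps shrinks the error in a predictable, quantifiable way:

```python
# A minimal sketch of a numerical method: Euler's method applied to
# the equation dy/dt = y, whose true solution is y(t) = e^t.
import math

def euler(t_end, n_steps):
    """Approximate y(t_end) for dy/dt = y, y(0) = 1, in n_steps steps."""
    dt = t_end / n_steps
    y = 1.0
    for _ in range(n_steps):
        y += y * dt          # one small, easy step at a time
    return y

true_value = math.e          # exact value of y(1)
for n in (10, 100, 1000):
    approx = euler(1.0, n)
    print(n, abs(approx - true_value))  # error shrinks roughly like 1/n
```

More steps cost more computer time but buy a smaller, quantifiable error, which is the trade-off described above.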
As an aside, before modern computers existed, equations like Newton’s would be solved by doing these numerical steps by hand. There would be a team, and each member of the team would do just one step and pass their result to the next person. These days that’s exactly what a computer does. It is also essential to note that while the people doing these computations deserved a great deal of credit for scientific advances, they were often hidden from view. It was common for human computer labs to be composed of women, such as the Harvard computers, and Black women especially had their contributions suppressed for decades, including Katherine Johnson, Mary Jackson, and Dorothy Vaughan, whose stories were uncovered and told in the recent book Hidden Figures.
Figuring out what equations describe what aspects of reality can admittedly be tricky. But it is far from ad-hoc. Terms have to be defined, carefully. The behaviour of posited equations has to be understood rigorously. Measurements need to be made to guide hypotheses. When constructing a model, scientists will tinker with equations to try to get the outputs to match some set of data already available, known as “training data”. So by definition, a model fits training data quite well. But then the model’s outputs are compared with test data, which is kept separate from the training data. Test data is never, ever, ever, ever the same as training data. This is taken extremely seriously.
And so observations become models, models lead to new hypotheses, which become more observations, and everything gets packaged into theory.
Some systems are so well understood and have such precise measurement data as inputs that our models yield immensely precise predictions. It’s how we can tell the dates of full moons going forward millennia and beyond. There’s always some amount of uncertainty, but it’s quantifiable uncertainty that constrains expectations. And if we have uncertainty about the validity of the equations, models can test them. Again, there is value in a model we know to be incomplete or purely hypothetical. Sometimes running such a model is how we pinpoint precisely where it’s wrong, and this pushes new advances in theory, or guides new physical experiments.
In his now-infamous interview with Joe Rogan, Jordan Peterson got every single one of these ideas about modelling wrong. At the conceptual level, he says “there is no such thing as climate. Climate and everything are the same word”. No, scientists across many fields have a working definition of “climate”. It has fuzzy boundaries, the time span that differentiates “weather” from “climate” is not a fixed and rigid dividing line, and what physical processes are involved can vary depending on their relative importance to a question. But ultimately “climate” is the large scale (in time and space) variation of atmospheric variables and their feedback with the land and ocean.
At the mathematical level, he says “Your models are based on a set number of variables. That means you reduce the variables, which are everything, to that set.” Setting aside that climate is not “everything”, part of the point of modelling is specifically to figure out what variables are essential to our understanding of climate, to test what descriptions work and dissect why. It’s also worth noting that Peterson probably doesn’t object to sticking a meat thermometer in a turkey to gauge its temperature, when it would be more “accurate” to measure the kinetic energy of every single particle.
Peterson, like many others before (looking at you, Jonathan Franzen), also misses the history. Climate models began as simplistic, intentionally wrong tests of base principles (both of physics and numerical methods). Now-Nobel-laureate Syukuro Manabe was instrumental to the field of climate modelling with his and Kirk Bryan’s 1969 model of the Earth as just two chunks: A hemisphere of just land, and one of just sea. Of course it was wrong, but they had an idea what to expect from such a simple system, so they could test their ability to model it. Did the maths make sense? Did the code work? From there, Manabe and others began increasing the resolution and realism of the model, and more and more accurate data could be filled in.
The true scope of numerical modelling, be it in climatology or virology or ecology, encompasses layers and layers of scientific rigour from conceptual and philosophical definitions, to the laying out of the mathematical equations and analyses of observations, to constructing the numerical models, to the analysis of results, to cycling back and repeating the process with improvements.