Mathematical models have become something of a hot topic of late, with the UK government and its public health advisors referring frequently to the use of models to inform the national response to the COVID-19 pandemic. On the one (thoroughly washed) hand, it’s excellent that mathematical modelling and those who practise it expertly are in the spotlight; on the other (equally thoroughly washed) hand, we cannot expect an instantaneous elevation of the levels of statistical literacy required to interpret, interrogate and critique these models. This statistical literacy ‘gap’ has been exposed by the need to use models to inform risk assessment, decision making and both individual and collective behaviour. Should I travel? Should I visit a relative? Should I cancel an event? Should we close businesses and public services? These decisions, often small individually but writ large in aggregate, are both influenced by and in turn influence the models – demonstrating very powerfully the iterative nature of such modelling.
There is a common perception that mathematics is precise, neutral and one hundred percent objective. Many people will express either their love or hatred of the subject from their schooldays because ‘the answer is always right or wrong’. But in the messy real world, the way that mathematics plays out is far from objective. In cases such as the current public health emergency, we are reliant on data that, no matter how well collected, will inevitably be imperfect and incomplete, and will to some extent reflect the biases of the agency defining the particular metrics to record. (How do we define ‘cases’? Who do we test? How do we account for unknown and unreported cases? How do we attribute and record cause of death when there may be several?) This matters particularly here because the inconsistencies in how different countries and agencies report disease metrics are often not accounted for when comparisons are made; yet looking at the spread of the virus in other countries is one of the key ways to determine information about its likely progress at home.
The key message is this: models are not objective. And few who work closely with them would ever make this claim. Often however, the invocation of a model can be seen as a form of mathwashing: a way of lending credibility to an argument or justifying a course of action by appealing to mathematical authority (as if there were only one).
So what does this mean in practice? The oft-quoted ‘All models are wrong, but some are useful’, credited to the statistician George Box, is particularly relevant in the case of COVID-19. The model or models used by the government, disease control centres or the media will undoubtedly be imperfect oversimplifications of the complex interactions between virus, host and wider population; the power of statistics, however, is that in most cases the combination of millions of complex, unpredictable, seemingly random actions generally produces something predictable, stable and broadly reliable. The trick, such as it is, is to build a model whose level of wrongness matters less than its level of usefulness.
The key to a useful model is in the assumptions: making them reasonable and making them transparent. These models and their attendant assumptions are often highly complex, but our exposure to mathematical modelling in school is often structured as if all the key decisions have already been made for us. Take Newton’s laws of motion: students will encounter Newton’s second law, Force = Mass × Acceleration, and use it to calculate with those quantities in ‘real’ situations. What is less often considered is that this equation is a model which assumes, for example, that friction, air resistance and a whole host of other things are of negligible importance; in a sense the model is ‘wrong’, but little time and attention is given to exploring the significance of this (although many students may reflexively suggest that ‘take air resistance into account’ is an excellent way to improve almost any model thrown at them).
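We can make the ‘wrongness versus usefulness’ trade-off concrete with a small sketch (all the numbers here – the drop height, mass and drag coefficient – are assumed for illustration, not taken from any real measurement). It compares the time for an object to fall 50 m under the idealised constant-acceleration model with a version that adds a simple linear air-resistance term:

```python
import math

g = 9.81        # gravitational acceleration, m/s^2
height = 50.0   # assumed drop height, m
mass = 0.5      # assumed mass, kg
k = 0.1         # assumed linear drag coefficient, kg/s

# Model 1: Newton's second law with air resistance neglected,
# so distance fallen is s = (1/2) g t^2.
t_ideal = math.sqrt(2 * height / g)

# Model 2: linear drag force -k*v. The terminal velocity is
# v_t = m*g/k, and the distance fallen after time t is
# s(t) = v_t*t - (v_t^2/g) * (1 - exp(-g*t/v_t)).
v_t = mass * g / k

def fallen(t):
    """Distance fallen after t seconds under the linear-drag model."""
    return v_t * t - (v_t ** 2 / g) * (1 - math.exp(-g * t / v_t))

# Solve fallen(t) = height by bisection (fallen is increasing in t).
lo, hi = 0.0, 60.0
for _ in range(100):
    mid = (lo + hi) / 2
    if fallen(mid) < height:
        lo = mid
    else:
        hi = mid
t_drag = (lo + hi) / 2

print(f"no drag:   {t_ideal:.2f} s")
print(f"with drag: {t_drag:.2f} s")
```

For these assumed numbers the two models disagree by a few tenths of a second – a small enough error that the ‘wrong’ frictionless model remains useful, which is exactly the judgement a modeller has to make explicit.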
And so back to COVID-19. The decisions that are being made are based predominantly on assumptions, and those assumptions are made by humans with their own objectives, biases and naiveties. They attempt to merge the mathematics of exponential growth, transmission rates, mortality rates, NHS capacity and saturation with assumptions about human behaviour, acceptable economic and social costs, political calculus and the likelihood that people will tolerate extensive restrictions or a perceived lack of action. These are human decisions with human costs, and while no course of action will be consequence-free, we must all acknowledge that the models can only suggest a set of possibilities based on available information: the ultimate responsibility for action lies with the people who make decisions based on this information, in line with their own moral and ethical considerations. It is essential as this crisis develops that the humble mathematical model neither becomes a scapegoat in the event of tragedy nor has its essential contribution as a basis for decision-making weaponised by mathwashing.
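A tiny sketch shows why those assumptions dominate the mathematics of exponential growth (the starting figure and the three daily growth rates are invented for illustration only). Under unchecked exponential growth, cases multiply by (1 + r) each day, so a seemingly small disagreement about the assumed rate r compounds into a huge disagreement a month out:

```python
initial_cases = 100  # assumed starting point, for illustration

def projected_cases(daily_growth_rate, days):
    """Unchecked exponential growth: multiply by (1 + r) each day."""
    return initial_cases * (1 + daily_growth_rate) ** days

# Three plausible-looking assumed daily growth rates.
for r in (0.10, 0.15, 0.20):
    print(f"r = {r:.2f}: ~{projected_cases(r, 30):,.0f} cases after 30 days")
```

After 30 days the r = 0.20 assumption projects more than ten times as many cases as r = 0.10 – the same mathematics, wildly different conclusions, and the difference lies entirely in a human choice of assumption.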
Join the conversation: You can tweet us @CambridgeMaths or comment below.