The virus analogy for misinformation is the idea that misinformation spreads from person to person like a virus, and that people become infected by misinformation like they become infected by a disease. This epidemiological metaphor extends to proposed solutions for misinformation such as inoculation interventions, which are described as metaphorical vaccines that deliver mental antibodies to confer resistance against misinformation. Inoculation interventions have enjoyed widespread attention, particularly in the form of serious games.
In our previous article in The Skeptic, we argued that the virus analogy for misinformation is misleading and alarmist. We cautioned that the analogy implies that we are all to some extent vulnerable to misinformation and in need of a psychological vaccine to resist it. This idea of widespread vulnerability contradicts much of the current empirical evidence, which shows that people can typically distinguish between true and false news quite well, are generally sceptical rather than gullible when navigating news on social media, and report using various strategies to detect misinformation.
In this article, we want to address a common pushback to these criticisms, which is that mathematical models from epidemiology can be adapted to track the spread of misinformation across social networks. These models have been found to fit social network data well, which has been interpreted as evidence supporting the virus analogy for misinformation.
At a descriptive level, we do not question that it is useful to draw an analogy between the spread of misinformation and the spread of biological viruses. Given the extensive modelling work in epidemiology on how biological viruses spread, it is reasonable to apply these frameworks more broadly. In fact, the use of epidemiological models in the social sciences extends beyond misinformation research. As the philosopher Dan Williams wrote when facing the same pushback to his critique of the virus analogy:
There is nothing unique about misinformation that makes it amenable to such modelling … the models will apply equally to engaging truths, juicy gossip, funny jokes, new fashions, and so on.
Williams was right: essentially, any phenomenon with a positively accelerating growth rate is a potential candidate for this type of modelling. For example, song popularity, growth in church memberships, opinions, ideas, and a plethora of other social phenomena have all been mathematically modelled like the spread of a viral contagion. But this modelling does not necessarily mean that we should conceptualise and treat these social phenomena as if they were viruses.
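To make Williams's point concrete, consider the workhorse of this modelling tradition: the SIR (Susceptible–Infected–Recovered) compartmental model. The sketch below is purely illustrative, with made-up parameter values; nothing in it is specific to misinformation, which is exactly the point – relabel "infected" as "has heard the joke" or "likes the song" and the same equations apply.

```python
# A minimal discrete-time SIR sketch (illustrative parameters, not
# fitted to any data). S, I, R are population fractions summing to 1.

def sir_step(s, i, r, beta, gamma):
    """Advance the susceptible/infected/recovered fractions one step."""
    new_infections = beta * s * i   # contact-driven spread
    new_recoveries = gamma * i      # removal from the infected pool
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

def simulate(steps=100, beta=0.3, gamma=0.1, i0=0.001):
    """Run the model from a small initial 'infected' fraction i0."""
    s, i, r = 1.0 - i0, i0, 0.0
    trajectory = [(s, i, r)]
    for _ in range(steps):
        s, i, r = sir_step(s, i, r, beta, gamma)
        trajectory.append((s, i, r))
    return trajectory

trajectory = simulate()
peak_infected = max(i for _, i, _ in trajectory)
```

Because infections per step scale with the product of susceptible and infected fractions, the curve accelerates early and then burns out as susceptibles run low – the generic growth pattern that fits songs, gossip, and misinformation alike, which is why a good fit tells us little about the underlying mechanism.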
The problem is that there can be similarities between the spread of misinformation and the spread of biological viruses even though the underlying systems that cause their spread are fundamentally different. Indeed, in terms of the mechanisms of infection, there are substantial differences between the two systems. Viral infection can occur when virus particles bind to receptor proteins on cell membranes and deliver their genetic material, ribonucleic acid (RNA) or deoxyribonucleic acid (DNA), into the cells, thereby allowing the virus to replicate. Misinformation “infection”, on the other hand, does not take a physical form in the same way – there are certainly no cells, proteins, or genetic material, and if there are psychological counterparts, they remain unidentified.
Furthermore, people are not passively “infected” by misinformation the way they are by a virus. People have motivations, beliefs, and interests that shape the (mis)information they consume, share, and believe. Nor are super-spreaders of misinformation akin to super-spreaders of biological viruses: the former often spread misinformation for specific purposes, whether financial gain or popularity, and such motivations play no role in the transmission of a biological virus.
These differences in the mechanisms of infection may ultimately limit what can be learned by comparing how each phenomenon spreads throughout the population, but epidemiological comparisons are certainly a good starting point for understanding the potential exponential spread of misinformation.
At the end of the day, biological viruses are just a metaphor for misinformation and, like all metaphors, there are similarities and differences between the tenor and the vehicle. Using the extensive epidemiological modelling of virus spread to understand misinformation spread highlights the similarities. However, similarity in one dimension, such as spread, does not preclude differences in others. Depending on the number and severity of these differences, the usefulness of the metaphor could be completely undermined.
As noted above, one dimension where we believe the tenor and the vehicle diverge is vulnerability to infection. Oddly, Sander van der Linden, the main proponent of the virus analogy for misinformation, rebutted this claim by pointing out that misinformation and viruses spread similarly, and concluded that differences in vulnerability to each phenomenon were therefore not problematic. But similarity in spread is not relevant to our claim about differences in vulnerability, because they are distinct dimensions of comparison. Considering the ways in which the tenor and the vehicle differ (as well as the ways they are similar) is therefore important to avoid flawed reasoning and unwarranted conclusions.
As another case in point, van der Linden stated,
If misinformation does behave like a virus, then we can also create a vaccine.
This statement is a non sequitur – the premise that misinformation may behave similarly to a biological virus in no way leads to the conclusion that we can create a vaccine against it. A virus compromises our immune system; misinformation does not. Fundamentally, this is the issue we have with the virus analogy of misinformation: it has been overextended to the point that psychological vaccines are promoted as a cure for misinformation, despite the obvious differences between the immune system and the mind. In our view, this overextension is misleading and alarmist.
In conclusion, we do not deny that misinformation is shared among people and can therefore be mathematically modelled like a viral contagion. Moreover, we encourage research efforts of this sort. There may be better metaphors where the tenor and the vehicle are both in psychological space, such as comparing the spread of song popularity, or other social phenomena with a positively accelerating growth rate, with the spread of misinformation. Nonetheless, the extensive modelling work in epidemiology cannot be ignored.
However, encouraging such work does not speak to the somewhat dystopian assertions of the virus analogy that we are challenging: that people become “infected” with misinformation in the same way they do with a virus, and that there are “psychological vaccines” that can generate “mental antibodies” to combat it. There are fundamental differences between (mis)information transmission and viral contagions. Framing the former as the latter reduces complex cognitive and social processes involved in human communication to mere “infections” and “cures”, which ultimately distorts our understanding of the misinformation problem.