The researchers investigated the psychology behind mitigating online disinformation and tested the effectiveness of three “vaccines” against it.
The research, by Thomas Zerback (University of Zurich) and Florian Töpfl (Free University of Berlin), shows that forged comments can change the political opinions of online audiences. It also suggests that inoculation against manipulated messages only works when it debunks the exact fake argument. Unfortunately, the research shows the effect is short-lived and soon forgotten.
Astroturfing, a form of manipulation, is the craft of masking the sponsors of a message to make it appear as though it originates from and is supported by grassroots participants. It aims to give more credibility to statements or organizations.
This trick is probably as old as politics. In Shakespeare’s Julius Caesar, Cassius writes forged letters from “the public” to encourage Brutus to assassinate Caesar.
Like viruses, disinformation can be stopped through isolation. But that comes at a price open societies are reluctant to pay: economic decline and obstruction of the free flow of information. So it is hardly surprising that many hope a vaccine will be found that beats both COVID-19 and the infodemic.
How does online inoculation work then? According to the study, it aims to inhibit or even prevent the impact of persuasive attacks by administering a weakened version of “the virus” to the audience.
In general, effective vaccination consists of two elements. The first is called “the threat”: the individual receives a warning about a pending persuasive attack that will challenge their existing attitudes.
The second element is “refutational pre-emption”. Two variants exist: refutational-same pre-emptions raise and refute exactly the same arguments used in the subsequent attack message, while refutational-different pre-emptions use arguments that are not part of the subsequent attack message, the study explains.
The research focused on the effectiveness of these vaccination elements. It investigated three aspects of online astroturfing. First, the study tested the comments’ effects on political opinions and opinion certainty. Second, it tested the effectiveness of three vaccination strategies in preventing these effects and, third, the duration of the immunizing effects.
The researchers first created fictitious Facebook news teasers, attributed to a reputable media outlet, on three topics relevant to Russia’s disinformation campaigns: the poisoning of former Russian agent Sergei Skripal in Salisbury in 2018, the 2016 US presidential elections, and the use of toxic gas in Syria.
Each teaser was accompanied by two user comments representing typical astroturfing attack messages, doubting Russia’s (and in the last case the Syrian government’s) involvement in the events. The researchers used EUvsDisinfo content to help construct messages. Later on, the researchers tested three different inoculation strategies.
Results show that astroturfing comments can indeed alter recipients’ opinions, and increase uncertainty, even when recipients are inoculated before exposure.
Even worse, the effectiveness of threat-only and refutational-different pre-emptions seems very limited.
So the saying often attributed to Mark Twain holds true: “It’s easier to fool people than to convince them that they have been fooled.”
However, the research also provides some good news. Online vaccination can work, although only under specific conditions. Banners or warnings displayed in the immediate vicinity of commenting fields that refute the exact same argument seem to be the only effective strategy. In other words, to debunk effectively one needs to be specific.
Apparently, people do appreciate the logic. To some extent.
The researchers also found that even the immunizing effect of the refutational-same treatment was short-lived, vanishing after two weeks.
The experiment was limited to a specific form of disinformation. Further, the results were gathered in an experimental setting, not in real life. Still, the psychological mechanisms at work here may be similar to those at work when people are exposed to other forms of disinformation. That would suggest that only counter-narratives that debunk the exact argumentation of the disinformation are effective. And the counter-narrative has to be timely, probably delivered right before the disinformation reaches its targets. These notions, although they require further validation, could prove valuable in strategic thinking on disinformation.
A further question, however, is how specific a vaccine needs to be in order to be effective. Disinformation often uses recurring narratives and arguments. For example, pro-Kremlin outlets regularly deploy the concept of “cui bono” – who benefits? – virtually any time Russian state operatives are caught red-handed. This argument has been used regarding MH17 and extensively on almost anything going on in Syria. Even when the Dutch police arrested a group of GRU officers attempting to hack the computers of the OPCW in The Hague, pro-Kremlin narratives tried to convince audiences that it was all a set-up to justify aggression against Russia.
As is always the case with good research, it not only provides answers but also raises new questions. Delving deeper into the psychological mechanisms behind disinformation and its countermeasures looks promising.