Misinformation in the midst of a pandemic

Stephan Lewandowsky, Chair of Cognitive Psychology at the University of Bristol, examines the epidemic characteristics of fake news.
Stephan Lewandowsky

Chair of Cognitive Psychology

02 Jul 2021
Key Points
  • Misinformation spreads in a way analogous to disease transmission. As with a disease, people can be inoculated against misinformation.
  • Misinformation is prevalent across a range of topics. However, the creators of misinformation often rely on easily identifiable rhetorical techniques.
  • Scientists enjoy a large degree of public trust. As such, the scientific community is excellently positioned to help dispel misinformation.
  • The nature of social media is conducive to spreading misinformation. Social media platforms such as Facebook therefore have a significant role to play in mitigating its adverse effects.



It's an exciting opportunity to talk about misinformation in the middle of a pandemic, because you can draw an analogy between how misinformation and disease spread. Misinformation isn't something that occurs naturally and is then unstoppable. People acting on social media decide to spread false information, make it go viral, and thereby influence public discourse. In that sense, the transmission of information from one individual to another is very much like the transmission of a disease.

Moreover, as with disease, one of the countermeasures against misinformation is what we call inoculation. Inoculation consists of exposing people to a small dose of misinformation: in particular, we expose them to information about how misinformation is designed and what techniques those who want to mislead use to spread their message.

Numerous laboratory and online experiments have shown that inoculation is effective. If we expose people to misleading argumentation and warn them how it might mislead them, they become resistant to those messages. It works a little like a vaccination, keeping invading misinformation at bay, and that is how we can suppress its spread. The key challenge, then, is to disseminate these messages and inoculate people against false information. Researchers around the world are now working on precisely that problem.

Rhetorical techniques to mislead people

One of the things about misinformation is that it can cover a whole range of topics. People may mislead you about vaccinations, climate change, political issues, or immigration. Misinformation is everywhere, and it can concern any topic.

Interestingly, though, the rhetorical techniques used to mislead people online tend to be very similar across domains. For example, there is a lot of denial of scientific findings, be it on climate change or vaccinations. The Covid-19 pandemic has inspired conspiracy theories claiming that scientists are simply making up the disease and that it is all a big hoax.

Regardless of the topic, conspiracy theories share common characteristics. For example, conspiracy theories are incoherent: they are mutually contradictory. In the context of Covid-19, I have seen people on the Internet claiming that the virus has been around forever, that we got it as children through vaccinations, and that the only reason it is coming out now is that we have to wear masks — clearly nonsense.

It is interesting, though, that the same person will then claim something like: Covid-19 is a Chinese biological weapons experiment that went wrong and was mistakenly released on us — also nonsense. Yet that person will hold these two completely incoherent beliefs: the virus has been with us forever, and a Chinese weapons laboratory just released it by mistake.

Well, it can only be one or the other; it cannot be both. Yet people who believe in conspiracy theories tend to hold such incoherent, mutually contradictory beliefs. The same is true of climate denialism and other cases. Even with Brexit, for instance, many people who argued for leaving the European Union held contradictory views of it: on the one hand, they claimed it would fall apart soon, while on the other hand, they said it was a superstate that holds too much power. Those ideas don't sit well together; it's either one or the other.

Inoculating against misinformation

The point is that although misinformation arises on a variety of issues, how people are misled is often very similar across topics, which gives us a chance to inoculate people by educating them ahead of time. We can warn people to look out for incoherence. If something is incoherent, we can forewarn people that it is not going to be true, because the world isn't incoherent. The world is either flat or round; it can't be both. So you can protect yourself against misinformation the moment you recognise incoherence.

Of course, incoherence is just one marker; there are many others. Take Islam: there is both Islamophobic misleading information and radical Islamist misleading information. Research done by one of my PhD students has shown that both types of message rely on emotion. The content creators are trying to provoke people emotionally against either Muslims or non-Muslims, depending on their perspective.

We can teach this. We can tell people that if you get angry or outraged by something on Twitter or YouTube, the chances are that somebody is pulling your strings: somebody designed that message to be hyper-emotional in order to mislead you or to change your attitudes. A recent experiment we conducted showed that you can inoculate people simply by telling them that. Afterwards, people were significantly less susceptible to that misinformation.

Overall, there is a range of techniques used to spread misinformation, and for each of those techniques there are counter-messages we can use to inoculate people. The beauty of this is that we don't have to know precisely what the message will be. If we understand the techniques, we can protect people against misinformation regardless of the content they see.

The necessity of trust


Naturally, one of the issues with this approach is that it requires trust. Ultimately, people have to trust you before they will act on what you tell them, whether to protect themselves against misinformation or to correct their misconceptions. The issue then becomes: what determines trust?

As it turns out, the matter is fairly nuanced. On the one hand, people on average trust experts, despite what populist politicians may try to tell you about 'the elite'. Surveys across most European countries and the United States identify scientists as among the most trusted individuals. As such, scientists have a reservoir of trust we can draw on when communicating with the public.

During the last six months of the Covid-19 pandemic, there has been a dramatic increase in trust in scientists in several countries. In the United Kingdom, for example, two-thirds of respondents in a recent survey said that they now trust scientists more than they did at the beginning of the pandemic. In Germany, there is overwhelming support for scientists: the number of people who fully trust them tripled during 2020. In times of crisis, people suddenly recognise that empty slogans and social media fluff aren't going to solve the problem; people who know what they're talking about will. I think we have an opportunity to educate people on how to protect themselves against misinformation, and there is a lot of data to support that.

Nonetheless, I also recognise that some people have resorted to conspiracy theories throughout the pandemic; there is an abundance of them on the Internet. We have to be careful to differentiate between different groups of people. The vast majority of people are sensitive to evidence: they respect and recognise expertise and are responsive when corrected if misinformed. However, they coexist with a notable, though not huge, number of people who do not trust experts. These people are inclined to believe in conspiracy theories, and it is much more challenging to interact with them.

So, we have to recognise that these two groups exist along with a whole range of people in between. The critical thing to do is to understand who we are addressing in our communication efforts. Concerning inoculation, we should focus on the two-thirds or more of people who are willing to listen to experts because they are receptive, and they appreciate having the tools to understand when they are being misled.

That leaves a minority of challenging individuals who are not responsive to evidence, and it is far harder to communicate with them. Yet our lab has conducted recent studies looking at radical Islamist and Islamophobic extremism, and we have found that we can inoculate people against those radical messages. If we inoculate those who are inclined to believe in conspiracy theories or radical extremist messages before they become captured by them, even these evidence-resistant people can be protected.

Censorship or regulation?


So far, I've talked about inoculation: giving people the tools to deal with misinformation before they are exposed to it, so that they become resistant to it. We have demonstrated this in many experiments; it has always been successful and has never had any harmful consequences. In the spirit of educating people, I think we ought to roll out these messages on a large scale. For that, we need social media platforms to cooperate in making people resistant to misinformation.

Sooner or later, we must talk about the elephant in the room, which is the social media companies: Facebook, YouTube, and Google. We have to ask: what are these companies doing? What is their role in society? What do we, as citizens of a democracy, want that role to be? To solve the problem in the long run, we have to think about smart ways of regulating these platforms without resorting to censorship, which is the real challenge of the immediate future.

Moreover, there is another elephant in the room: our governments. Not all governments are always fully democratic. We can see threats to democracy coming from governments in many countries around the world, including in Europe. How do we deal with such a situation where the government may not be friendly in all instances?

Whatever we do, we must avoid censorship. Giving the government the power to regulate content on social media platforms is extremely risky, because what is to keep it from saying, 'Oh, you can't publish this because it's embarrassing to us as a government'? We have to be very careful about the power we give the government. However, governments can regulate social media platforms, and society more broadly, by addressing people's skills, their education levels, and the context in which misinformation unfolds.

I would never say, ‘Oh, the government has to be able to say what can and can't be published on Facebook.’ However, I think the government can require Facebook to disseminate information that gives people the ability to tell the difference between false information and high-quality information. The government could also mandate that social media platforms provide indicators of the quality of information posted on their platforms.

Technology to warn against misinformation

In this regard, there are many indicators we can examine. Misinformation does not typically rely on citations of reputable sources. On the contrary, it tends to be recycled information that people pass to each other within an echo chamber when they are promoting a myth or a meme. These are all things we can identify by technological means, without understanding the content of the messages. We can then provide people with warning signs. We can say, 'Whoa! What are you looking at here? It doesn't look like good information, because it comes out of an echo chamber.' Or we can warn people about information considered false by independent fact-checkers, which Facebook started doing during the Covid-19 pandemic.
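To make the idea of content-agnostic signals concrete, here is a minimal sketch in Python. It is purely illustrative: the domain list, threshold, function names, and data structures are my own assumptions, not any platform's actual system. It flags an item that cites no reputable source and circulates mainly within a tightly knit cluster of mutually connected sharers, without ever reading the message itself.

```python
# Purely illustrative sketch of content-agnostic warning signals.
# The allow-list, threshold, and data structures are hypothetical assumptions,
# not any platform's real system.

from urllib.parse import urlparse

REPUTABLE_DOMAINS = {"nature.com", "who.int", "bbc.co.uk"}  # hypothetical allow-list


def echo_chamber_score(sharers: list, follow_graph: dict) -> float:
    """Fraction of sharer pairs that follow one another; values near 1.0
    suggest the item is circulating inside a closed cluster."""
    if len(sharers) < 2:
        return 0.0
    pairs = mutual = 0
    for i, a in enumerate(sharers):
        for b in sharers[i + 1:]:
            pairs += 1
            if b in follow_graph.get(a, set()) or a in follow_graph.get(b, set()):
                mutual += 1
    return mutual / pairs


def should_warn(cited_urls: list, sharers: list, follow_graph: dict) -> bool:
    """Warn when an item cites no reputable source AND recirculates within
    an echo chamber -- note that the message content is never inspected."""
    cites_reputable = any(
        urlparse(url).netloc.removeprefix("www.") in REPUTABLE_DOMAINS
        for url in cited_urls
    )
    return not cites_reputable and echo_chamber_score(sharers, follow_graph) > 0.8


# Example: three users who all follow each other share a link with no citations.
graph = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
print(should_warn([], ["a", "b", "c"], graph))  # True -> overlay a warning
```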

Moreover, warning people can cut the spread of bad information by as much as 95%. Facebook ran that experiment earlier this year: Mark Zuckerberg wrote on his blog that a warning overlaid on clearly false information reduced sharing by 95%.

Thus, we have the technological tools to slow down the spread of misinformation without censoring. Pointing out that fact-checkers judged certain content to be false isn't censorship. People can still look at the information they want; however, we make it a little harder to do so by putting the warning on top.

Another example is what Twitter has recently done: it introduced a feature, first tested on Android phones, that warns users when they are about to share an article they haven't read. If Twitter detects that you haven't clicked on the link, it prompts you to read the article before you retweet it.
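Purely as a toy illustration of that kind of gentle friction (the function name and the click log below are hypothetical, not Twitter's actual implementation), the logic amounts to a single check:

```python
# Toy illustration of a "read before you share" prompt; the click log and
# function name are hypothetical, not Twitter's actual implementation.
from typing import Optional


def prompt_before_share(user: str, link: str, clicked: set) -> Optional[str]:
    """Return a nudge if the user tries to share a link they never opened."""
    if (user, link) not in clicked:
        return "Want to read the article before you share it?"
    return None  # the user has opened the link; sharing proceeds without friction
```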

I think that's a brilliant idea, because it reminds people of the importance of understanding what they're doing online. Recent research shows that reminding people to be careful about what they share makes them more discerning overall. Once people stop to think about what they are reading, they are pretty good at telling bad information from good information; the problem arises when they do not stop to think about the information they are ingesting. Therefore, I believe we can change the online environment and make it more conducive to the sharing and consumption of high-quality information without censorship.

Discover more about misinformation inoculation

Cook, J., Lewandowsky, S., & Ecker, U. (2017). Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence. PLoS ONE, 12(5).

Swire, B., Berinsky, A. J., Lewandowsky, S., et al. (2017). Processing political misinformation: Comprehending the Trump phenomenon. Royal Society Open Science, 4.
