
Disinformation research

Disinformation: People aren't as gullible as we think

Psychologist Sacha Altay says misinformation doesn't impact people's behavior as much as we think. People aren't that gullible, he says in an interview for DW Akademie's Tackling Disinformation: A Learning Guide.

People stand in the street and read Bangladesh's local newspapers pasted along a wall in the capital Dhaka in September 2023 ahead of elections.

Articles praising Bangladeshi government policies, apparently written by independent experts, appeared in national and international media ahead of Bangladesh's elections; however, the authors had questionable credentials, used fake photos and may not even exist

Sacha Altay is an experimental psychologist whose research focuses on misinformation, misperceptions, social media and trust. He believes that fears about misinformation are often reminiscent of moral panics about the effects of new technology, and that many concerns about misinformation rest on false assumptions, such as the assumption that people are gullible and believe anything they see online.

DW Akademie: You published a research paper that concluded: "Alarmist narratives about online misinformation continue to gain traction despite evidence that its prevalence and impact are overstated." How did you come to this conclusion?

There is certainly a 'fake news' and 'misinformation' hype that started after the 2016 US presidential election [which Donald Trump won] and Brexit [when the UK voted to leave the European Union in a referendum]. Most observers didn't see Trump or Brexit coming and were quick to blame both outcomes on misinformation and people's gullibility. To be fair, there was misinformation surrounding these events, but many jumped to the conclusion that because there was misinformation, it must be misinformation that caused these events. I think that's wrong; the data doesn't support it.

Many think that misinformation is very prevalent, that people fall for it because they are gullible, and that it has a strong influence on people's behaviors. Yet, since 2016, a lot of empirical work has shown that this is wrong. Misinformation represents a minute portion of people's media diets — notably because most people aren't very interested in the news and politics. Most people are savvy information consumers and tend to be skeptical of reliable information rather than gullible toward misinformation.

It is difficult to quantify the causal effect of misinformation. But we know from decades of work on media effects that mass media has, at best, a small effect on behaviors. We also know that people self-select into like-minded content; not everyone tunes in to Fox News, for instance. People are also suspicious of information they disagree with, are skeptical of the news, and very often use misinformation to justify preexisting attitudes, values or behaviors.

Facebook screenshot | Disinformation from Russia

This example of Russian disinformation was taken from a Russia-linked Facebook ad released by the US Congress in 2017. It depicts 2016 Democratic presidential candidate Hillary Clinton

Why are we so worried about misinformation then?

A lot of people, including journalists, aren't aware of this body of empirical work, and continue to overemphasize the threat of misinformation. I think it's very intuitive for people to think that misinformation is having all these terrible effects because misinformation always follows crises, such as the COVID-19 pandemic, the Ukraine-Russia war and the Israel-Hamas war. The more we research misinformation, disinformation and fake news, the more we realize that the problem is smaller than we thought, and that it may be less important than we assumed — at least when it comes to explaining complex sociopolitical events such as presidential elections.

Misinformation is the tip of the iceberg. Every time there is a problem, people see the misinformation, but it's much harder to see the factors that actually cause misinformation to take root, such as anti-democratic attitudes, perceptions of corruption, distrust of the government and partisanship.

Do you think we are suffering from increased "information pollution" in that our media ecosystem is flooded with huge amounts of toxic content and false information that has negative impacts on our societies?

I don't agree that there has been an increase in information pollution. People think that conspiracy theories and misinformation are on the rise, and that we live in the golden age of misinformation. But the best evidence suggests that conspiracy theories aren't on the rise, that we idealize the past, and that the past wasn't better than the present.

If anything, exposure to untrustworthy websites decreased between 2016 and 2020 in the US. In some countries, media ecosystems are very likely flooded with misinformation. However, it isn't clear to me that in most Western democracies, media ecosystems are indeed flooded with bad content. For instance, in most countries people mostly turn to trustworthy and established news outlets to learn about the world. And these news outlets often do a pretty decent job.

It's also important to keep in mind that people are exposed to a tiny, tiny fraction of all the information available online. The fact that most content on the internet is trash doesn't mean that people are actually exposed to it. People are very good at ignoring things they don't care about; they scroll or zap until they find what they want. We should remember that people have agency and tame new technologies and algorithms in unexpected ways.

Information pollution can have bad consequences for our societies, but the extent to which it has bad consequences is an empirical question. We shouldn't assume that, by default, information pollution has strong effects on societies.

It's often hard to disentangle causal factors. In China, for instance, the media ecosystem is polluted by state propaganda and censorship, but is the problem the media ecosystem, or the political system and the state of the country's democratic institutions more broadly?

What is the impact of overemphasizing the threat of misinformation?

My main worry is that if we frame complex sociopolitical problems as being caused by misinformation, policy makers may think that by getting rid of misinformation, the problems will go away. While it would be good to reduce the quantity of misinformation in the information ecosystem, problems like vaccine hesitancy, ethnic tensions or sexism won't go away if we remove misinformation. I also worry that policy makers or journalists may use misinformation and social media as scapegoats to avoid having to do the hard work of improving our democratic and media institutions.

Also, populist leaders have accused the press of being 'fake news' in order to delegitimize it. Alarmist narratives about misinformation may also have the unintended consequence of reducing trust in the news. When we tell people that there is a lot of false information out there, they become more skeptical even of reliable information. This is a detrimental outcome we should be mindful of.

In the discussion about disinformation, Russian propaganda is a major topic. Is Russia successful with its disinformation campaigns?

Mis- and disinformation are bad for our societies, and it would be better if there were less of them. But there is little evidence that disinformation campaigns like the ones Russia conducted in the US are very successful. Again, it's an empirical question, but persuasion is difficult. And while it's possible to influence people's beliefs, it's much harder to influence their attitudes or behaviors.

We should be mindful of influence operations and take them seriously. But we shouldn't assume by default that they are successful, nor cover them as if they were. The goal of many influence operations is to erode trust in democratic institutions. Covering these operations as if they were working, in the sense of having an impact, may do just that: it may lead people to lower their support for democracy because they think others have been brainwashed, or to doubt the legitimacy of election results because they believe foreign actors significantly influenced them.

What are the main research questions or challenges currently keeping researchers up at night?

The main challenges relate to data access and data analysis. We have little access to most social media data, and we still struggle to analyze non-text content, such as TikTok and YouTube videos and Instagram images.

We also know very little about misinformation on legacy media, such as TV and radio, in part because it is harder to analyze and classify.

The field is still trying to properly quantify the causal effects of misinformation, its prevalence outside of the US and the role that social media and algorithms play. Many computational scientists are still trying to develop automatic classifiers of misinformation, but it is very difficult given the pragmatic nature of human communication — what we mean when we write something or post a meme online largely depends on the context. And while large language models like GPT-4 are starting to pick up on subtle contextual cues, many cues will remain hidden because they are located in the sender's brain or in their past interactions with other members of the community.

Finally, more and more researchers are testing interventions to limit the acceptance and spread of misinformation and trying to come up with unifying ways to measure and compare the effectiveness of these interventions. There is no panacea, and we are still far from having found reliable and effective interventions that could be deployed at scale.

Is there a causal relationship between media freedom and how resilient the media ecosystem is to disinformation?

It's tricky to empirically demonstrate such causal relationships, but I think most researchers would agree that healthier media ecosystems are more resilient to disinformation. Anecdotal evidence also suggests that strong democratic institutions and good journalism help societies cope with disinformation. For instance, Russia launched a disinformation campaign during the 2017 French presidential election to discredit the candidate Emmanuel Macron in favor of the far-right candidate Marine Le Pen. But the campaign, known as #MacronLeaks, failed miserably.

One analysis found that the hackers failed, and I quote, because of "luck, as well as effective anticipation and reaction by the Macron campaign staff, government and civil society, and especially the mainstream media." The hackers also made several mistakes, such as writing in English to appeal to far-right French voters.

There is also some empirical evidence showing that, in countries with strong media ecosystems like the United Kingdom, following the news helps people be more informed and resist misinformation. For instance, a longitudinal study has shown that people in the UK who read more of the BBC and the Guardian online learn more about current affairs and politics, become more aware of misleading claims about COVID-19 and develop fewer false beliefs about COVID-19.

 A member of the public poses for a photo in front of Tower Bridge whilst wearing a protective mask in 2020 in London

A study found that in the UK, more frequent news users are more likely to be aware of the existence of false COVID-19 claims but are less likely to believe them

What are the relevant questions we should ask when we want to tackle the complex problem of disinformation?

First, we need a clear mental model of how misinformation contributes to the problem we are interested in, and at which stage. For instance, misinformation can influence behaviors by providing justifications to rationalize preexisting attitudes. In some cases, misinformation can be used to solve coordination problems: people with anti-Muslim attitudes spread hateful rumors about Muslims to gauge the level of support for anti-Muslim actions and ultimately decide on a target to attack.

Misinformation can also be used to undermine trust in reliable information or to confuse people. But in many cases, misinformation may play a very marginal role, and we may instead want to focus on other facets of the problem. For instance, people can be vaccine hesitant without being exposed to misinformation, simply because they are scared of needles or don't trust medical institutions. And the people who aren't vaccine hesitant don't believe anti-vaccine information when exposed to it — or they avoid it altogether.

I think that what people often want to fight isn't misinformation per se, that is, false or misleading information spread without necessarily intending to cause harm, but rather false beliefs that are associated with harmful attitudes or behaviors.

People can hold false beliefs not just because they have been exposed to misinformation and believed it but also because they haven't been exposed to reliable information or disbelieved it. Given that misinformation consumption is small and heavily concentrated among a small group of people, that trust in the news is low, and that a lot of people avoid the news, I think that many more hold false beliefs because of their skepticism and avoidance of reliable information rather than because of their acceptance and consumption of misinformation.

What needs to be done?

I think it is important to promote reliable information at least as much as we combat misinformation. They are two sides of the same coin, but for some reason in recent years we have focused a lot on misinformation. It would be too simplistic to say, "people should trust the news and the government more," as people's distrust is often justified and may reflect deeper problems within societies such as inequalities, precarity or the marginalization of minorities. It is thus paramount to work both at the individual level, to form informed citizens who can navigate today's complex information ecosystem, and at the systemic level, to improve our institutions and how they work and communicate with the public.

How can science help identify the best approach to fight disinformation?

There has been a lot of progress made on this front. For instance, there was this common worry that fact-checking backfires, that is, corrections exacerbate misperceptions instead of reducing them. Yet, efforts to replicate this finding across a wide range of topics have shown that these fears are totally unfounded and that, on average, corrections do work at reducing misperceptions.

However, fact-checking isn't a silver bullet. While fact-checks reduce misperceptions when people are exposed to them, a lot of people aren't exposed to fact-checks at all. And fact-checking fails to affect attitudes, such as how people feel toward a politician, or behaviors, such as whom to vote for or whether to get vaccinated. Beyond its direct effects, fact-checking could also be useful for holding politicians accountable.

There are also concerns that when news outlets fact-check misinformation, they may expose people to novel misinformation they would not otherwise have seen, and thereby contribute to its spread. While it is probably correct that by covering and fact-checking misinformation, mainstream media greatly increase its visibility, there is no evidence that in doing so they increase belief in misinformation. The best evidence suggests that in countries with strong media ecosystems, the news may increase awareness of misinformation while at the same time reducing belief in it.

Many other interventions have been deployed and tested to combat misinformation. These include media literacy classes, tips and prompts that remind people to think about accuracy, and interventions that help people detect manipulation techniques. While most of these interventions are useful on average, they start from two premises: first, that people are excessively gullible, so it would be good to make them more skeptical; and second, that people are exposed to a lot of misinformation, so we should help them resist and detect it.

I don't think these premises make much sense in most Western democracies, where people are exposed to little misinformation and are very skeptical of the news, which is normally trustworthy. However, these premises may be appropriate in some countries of the Global South, where the prevalence of misinformation may be higher and where people may not be able to rely on strong media ecosystems. But I don't think we can reduce the misinformation problems of the Global South to media and digital literacy skills or knowledge deficits. In my opinion, the problems are much deeper and require systemic interventions such as funding the free press and public broadcasting, improving democratic institutions, and fighting corruption and precarity.

Jeremias Langa, President of MISA Mozambique, stands on a street in Maputo surrounded by journalists holding microphones

Media persecution has a long history in Mozambique, as in many other nations in the Global South, which helps disinformation flourish

The concept of the inoculation theory, or psychological inoculation, has gained some traction lately. The idea is that people can be immunized against disinformation by exposing them to a weakened version of a misleading argument or fake news. This prepares them to recognize and resist false information when they encounter stronger versions of it in the real world. What is your take on this?

I'm not a fan of the biological metaphor. While at the macro level, the flow of information can be modeled using biological models, misinformation isn't a virus and there is no vaccine against it. At the psychological level, misinformation doesn't infect minds like viruses infect bodies. People have socio-cognitive systems that allow them to filter communicated information and figure out who to trust and what to believe.

While survey experiments have shown that various forms of inoculation can help people detect specific manipulation techniques, I'm skeptical that inoculation really helps people discern truth from falsehoods. It may help a little bit, but I think it works mostly by making people more skeptical of everything, which isn't great given that people are rarely exposed to misinformation outside of experimental settings.

I'm also worried that the manipulation techniques people are taught about are often used in non-misleading ways by trustworthy news outlets. For instance, the use of negative emotions is considered manipulative, as is the use of polarizing words. But negative emotions such as anger or fear can be important for people to grasp the horror of mass shootings or war crimes. Similarly, progressive movements challenging the status quo have been divisive and polarizing, but that was necessary to fight for things like women's rights or Black people's rights.

A woman holds a smartphone in her hand. On the screen is ChatGPT.

A big question is what AI means for misinformation

The new hot topic is AI. Will generative AI lead to deepfakes taking over the world?

Generative AI will certainly change how we consume, produce and share information online. Just like any technology, it will be used to do good and bad things. People will certainly use generative AI to create convincing fakes en masse. However, I don't think that generative AI will cause an epistemic apocalypse.

The bottleneck is demand, not supply. The extent to which misinformation spreads and is believed is mostly determined by demand, the number of people predisposed to believe it and looking for it, rather than by supply, the amount of misinformation available. Given that generative AI can only affect the supply of misinformation, not the demand, I think it has little room to operate. In the digital age, producers are fighting to capture people's attention, which is very limited. Given the huge amount of information already available online, generative AI content may just be another drop in the ocean.

What can we say today about what to expect tomorrow from AI?

The past can help us make educated guesses about the future. Convincing deepfakes have been around for a while, and people have mostly used them to create non-consensual pornographic videos starring famous actresses. Most misinformation relies on "cheapfakes," such as videos taken out of context or crudely edited in Photoshop, rather than on sophisticated deepfakes. We also faced similar challenges in the past with photography, Photoshop, video editing, voice changers and other technologies. More broadly, over the course of our biological evolution, humans have had to make the best of communication while avoiding being misled by others. People could lie and write nonsense long before deepfakes. But we have socio-cognitive mechanisms that help us know who to trust and what to believe.

Generative AI should be taken seriously, and news organizations, digital platforms and democratic institutions will certainly need to adapt, but I don't think we should panic — well not yet.

A postdoctoral researcher at the Digital Democracy Lab at the University of Zurich in Switzerland, Sacha Altay holds a Ph.D. in cognitive science from the École Normale Supérieure in Paris.

This article is part of Tackling Disinformation: A Learning Guide produced by DW Akademie.

The Learning Guide includes explainers, videos and articles aimed at helping those already working in the field or directly impacted by the issues, such as media professionals, civil society actors, DW Akademie partners and experts.

It offers insights for evaluating media development activities and rethinking approaches to disinformation, alongside practical solutions and expert advice, with a focus on the Global South and Eastern Europe.
