Facebook data reveal the devastating real-world harms caused by the spread of misinformation
By Andrea Carson, Professor of Political Communication, La Trobe University, and 2024 Oxford University visiting research fellow (RIJS); Justin Phillips, Senior Lecturer, University of Waikato
Copyright: The Conversation
Twenty-one years after Facebook’s launch, Australia’s top 25 news outlets now have a combined 27.6 million followers on the platform. They rely on Facebook’s reach more than ever, posting far more stories there than in the past.
Using Meta’s Content Library (Meta owns Facebook), our big data study analysed more than three million posts from 25 Australian news publishers. We wanted to understand how content is distributed, how audiences engage with news topics, and how misinformation spreads.
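To illustrate the scale of this kind of analysis (this is a minimal sketch, not the study’s actual pipeline), posts exported from the Content Library could be aggregated by outlet and month. The file name and column names (outlet, created_time, message) are assumptions for illustration, not Meta’s actual schema:

    # Minimal sketch: counting posts per outlet per month from a
    # hypothetical export. Column names are illustrative assumptions.
    import pandas as pd

    posts = pd.read_csv("posts.csv", parse_dates=["created_time"])

    # Posting volume per outlet per month, to track how reliance
    # on the platform changes over time.
    monthly = (
        posts.set_index("created_time")
             .groupby("outlet")
             .resample("M")["message"]
             .count()
             .rename("post_count")
             .reset_index()
    )
    print(monthly.head())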
The study enabled us to track de-identified Facebook comments and take a closer look at examples of how misinformation spreads. These included cases about election integrity, the environment (floods) and health misinformation such as hydroxychloroquine promotion during the COVID pandemic.
The data reveal misinformation’s real-world impact: it isn’t just a digital issue, it’s linked to poor health outcomes, falling public trust, and significant societal harm.
Misinformation and hydroxychloroquine … and floods
Take the example of the false claim that the antimalarial drug hydroxychloroquine was a viable COVID treatment.
In Australia, as in the United States, political figures and media played leading roles in the spread of this idea. Mining billionaire and then leader of the United Australia Party, Clive Palmer, actively promoted hydroxychloroquine as a COVID treatment. In March 2020 he announced he would fund trials, manufacture, and stockpile the drug.
He placed a two-page advertisement in The Australian. Federal Coalition MPs Craig Kelly and George Christensen also championed hydroxychloroquine, co-authoring an open letter advocating its use.
We examined 7,000 public comments responding to 100 hydroxychloroquine posts from the selected media outlets during the pandemic. Contrary to concerns that public debate is siloed in echo chambers, we found robust online exchanges about the drug’s effectiveness in combating COVID.
Yet despite fact-checking efforts, we found that facts alone failed to stop the spread of misinformation and conspiracy theories about hydroxychloroquine. This misinformation targeted not only the drug, but also the government, the media and “big pharma”.
To put the real-world harm in perspective, public health studies estimate hydroxychloroquine use was linked to at least 17,000 deaths worldwide, though the true toll is likely higher.
The topic modelling also highlighted the personal toll of this misinformation. The harms include the secondary effect of the drug’s unavailability (due to stockpiling) for legitimate treatment of non-COVID conditions such as rheumatoid arthritis and lupus, leading to distress, frustration and worsening symptoms.
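The article does not detail the topic-modelling method used, so the following is an illustrative sketch only: clustering de-identified comment text into topics with scikit-learn’s LDA. The sample comments, topic count and parameters are assumptions, not the study’s settings:

    # Illustrative sketch of topic modelling over de-identified
    # comments. Inputs and parameters are assumed for demonstration.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    comments = [
        "hydroxychloroquine saved lives the media lies",
        "trials show the drug does not work against covid",
        "my mother needs this drug for lupus and pharmacies are out of stock",
    ]

    vectorizer = CountVectorizer(stop_words="english", min_df=1)
    dtm = vectorizer.fit_transform(comments)

    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    lda.fit(dtm)

    # Print the top words for each discovered topic.
    terms = vectorizer.get_feature_names_out()
    for i, weights in enumerate(lda.components_):
        top = [terms[j] for j in weights.argsort()[-5:][::-1]]
        print(f"topic {i}: {', '.join(top)}")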
In other instances, we saw how misinformation can hurt public trust in institutions and non-government organisations. Following the 2022 floods in Queensland and New South Wales, we again saw that despite fact-checking efforts, misinformation about the Red Cross charity flourished online and was amplified by political commentary.
Without repeating the falsehoods here, the misinformation led to changes in some public donation behaviour such as buying gift cards for flood victims rather than trusting the Red Cross to distribute much-needed funds. This highlights the significant harm misinformation can inflict on public trust and disaster response efforts.
Misinformation ‘stickiness’
The data also reveal the cyclical nature of misinformation. We call this misinformation’s “stickiness”, because it reappears at regular intervals such as elections. In one example, electoral administrators were targeted with false accusations that polling officials rigged the election outcome by rubbing out votes marked with pencils.
While this is an old conspiracy theory about voter fraud that predates social media and is not unique to Australia, the data show misinformation’s persistence online during state and federal elections, including the 2023 Voice referendum.
Here, multiple debunking efforts by electoral commissioners, fact-checkers, media and social media platforms attracted limited public engagement compared with a noisy minority. When we examined 60,000 sentences on electoral topics from the past decade, we detected just 418 sentences from informed or official sources.
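The article does not say how official-source sentences were identified; purely as a sketch, one naive approach is keyword matching against a list of authoritative terms. The source list and example sentences below are illustrative assumptions, not the study’s classifier:

    # Naive sketch: flagging sentences that cite informed or official
    # sources via keyword matching. All terms here are assumptions.
    OFFICIAL_SOURCES = [
        "electoral commission", "aec", "fact check", "returning officer",
    ]

    def cites_official_source(sentence: str) -> bool:
        s = sentence.lower()
        return any(term in s for term in OFFICIAL_SOURCES)

    sentences = [
        "The AEC confirmed pencils are provided but voters may use pens.",
        "They rub out your vote after you leave, everyone knows it.",
    ]

    official = [s for s in sentences if cites_official_source(s)]
    print(f"{len(official)} of {len(sentences)} sentences cite official sources")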
Again, high-profile figures such as Palmer have played a central role in circulating this misinformation. The chart below demonstrates its stickiness.
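As an illustrative sketch only (not the study’s actual chart), recurrence of a claim can be visualised by resampling flagged posts by quarter; the dates below are a hypothetical stand-in for real data:

    # Sketch: plotting how often a recurring claim resurfaces over time,
    # to visualise "stickiness". The flagged dates are hypothetical.
    import pandas as pd
    import matplotlib.pyplot as plt

    flagged = pd.DataFrame({
        "created_time": pd.to_datetime(
            ["2019-05-18", "2019-05-20", "2022-05-21", "2023-10-14"]
        )
    })

    quarterly = flagged.set_index("created_time").resample("QS").size()
    quarterly.plot(kind="bar", title="Recurrence of the pencil-voting claim (illustrative)")
    plt.xlabel("Quarter")
    plt.ylabel("Flagged posts")
    plt.tight_layout()
    plt.show()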
Curbing misinformation
Our study has lessons for public figures and institutions. Public figures, especially politicians, must lead in curbing misinformation, because their misleading statements are quickly amplified by the public.
Social media and mainstream media also play an important role in limiting the circulation of misinformation. As Australians increasingly rely on social media for news, mainstream media can provide credible information and counter misinformation through their online story posts. Digital platforms can also curb algorithmic spread and remove dangerous content that leads to real-world harms.
The study offers evidence of a change over time in audiences’ news consumption patterns. Whether this is due to news avoidance or changes in algorithmic promotion is unclear. But it is clear that from 2016 to 2024, online audiences increasingly engaged with arts, lifestyle and celebrity news over politics, leading media outlets to prioritise posting stories that entertain rather than inform. This shift may pose a challenge to mitigating misinformation with hard news facts.
Finally, the study shows that fact-checking, while valuable, is not a silver bullet. Combating misinformation requires a multi-pronged approach, including counter-messaging by trusted civic leaders, media and digital literacy campaigns, and public restraint in sharing unverified content.