The battle against misinformation on social media has yet to prove whether the platforms can be helpful tools for avoiding more sickness and death in the midst of a pandemic.
Article by Jasmine Enberg | eMarketer
False recommendations about how to avoid contracting the virus or what measures infected people should take to avoid spreading it have the potential to cause more sickness and death from a pandemic that has already taken thousands of lives worldwide.
According to data from social media analytics platform Sprinklr, there were more than 19 million mentions related to COVID-19 across social media, blogs and online news sites worldwide on March 11. For context, mentions of US President Donald Trump on the same day came in at roughly 4 million. Many of the COVID-19 mentions likely came from legitimate sources, but given the novelty of the disease and the fast-changing nature of related news, it’s safe to assume that a large portion was inaccurate or outdated.
The current battle against misinformation on most social media platforms is primarily concentrated on so-called “bad actors” who deliberately spread lies and misleading information, sometimes for political gain. Facebook, for example, uses an automated system to surface potentially inaccurate content to third-party fact-checkers, who then identify, review and rate false stories so that their distribution can be reduced. It’s a resource-heavy and time-consuming process, and questions about its effectiveness were raised even before the coronavirus conversation exploded on social media.
Platforms like Twitter and Facebook were also among the earliest sources of accurate COVID-19 information. But since average citizens, celebrities, politicians and others use social platforms to share their coronavirus experiences, air grievances and simply kill time while self-isolating, important health and safety information easily gets drowned out. Many users may be well-meaning but uninformed, and they could be unintentionally spreading inaccurate information.
As a result, social media platforms have taken unprecedented steps to stop the spread of coronavirus-related misinformation. Facebook, for example, has given the World Health Organization (WHO) as many free ads as it needs and blocked ads from brands that may be exploiting the situation by claiming their products can cure the virus. That’s in addition to increased fact-checking and a pop-up that directs users who search for coronavirus to the WHO’s website or a local health authority. Twitter likewise points users to health authorities’ sites, such as that of the Centers for Disease Control and Prevention (CDC) in the US.
On Monday, the major social platforms—Facebook, LinkedIn, reddit, Twitter and YouTube—along with Google and Microsoft, issued a joint statement announcing that they had banded together to fight COVID-19-related misinformation. “We’re helping millions of people stay connected while also jointly combating fraud and misinformation about the virus, elevating authoritative content on our platforms, and sharing critical updates in coordination with government healthcare agencies around the world,” the statement read.
The swift and extensive action is to be applauded, but it also raises larger questions about the platforms’ ability to police themselves outside of a global health emergency. None of the tactics is necessarily groundbreaking: promoting facts, demoting lies and banning false information are all part of the platforms’ existing strategies against misinformation. But the concerted effort among the platforms shows just how much work it takes to significantly reduce the spread of false content.