The pandemic highlighted the importance of online safety, as many aspects of our lives, including work, education, and entertainment, became fully virtual. With more than 4.7 billion internet users globally, decisions about what content people should be able to create, see, and share online had (and continue to have) significant implications for people across the world. A new report by the World Economic Forum, Advancing Digital Safety: A Framework to Align Global Action, explores the fundamental issues that need to be addressed:
- How should the safety of digital platforms be assessed?
- What is the responsibility of the private and public sectors in governing safety online?
- How can industry-wide progress be measured?
While many parts of the world are now moving along a recovery path out of the COVID-19 pandemic, major barriers remain to emerging from this crisis with safer societies both online and offline. By analysing the following three urgent areas of harm, we can start to better understand the interaction between the goals of privacy, free expression, innovation, profitability, responsibility, and safety.
Health misinformation
One main challenge to online safety is the proliferation of health misinformation, particularly when it comes to vaccines. Research has shown that a small number of influential people are responsible for the bulk of anti-vaccination content on social platforms. This content seems to be reaching a wide audience. For example, research by King’s College London has found that one in three people in the UK (34%) say they’ve seen or heard messages discouraging the public from getting a coronavirus vaccine. The real-world impact of this is now becoming clearer.
Research has also shown that exposure to misinformation is associated with a decline in intent to be vaccinated; in fact, scientific-sounding misinformation is even more strongly associated with that decline. A recent study by the Economic and Social Research Institute's (ESRI) Behavioural Research Unit found that people who are less likely to follow news coverage about COVID-19 are more likely to be vaccine hesitant. Given these findings, it is clear that the media ecosystem has a large role to play both in tackling misinformation and in reaching audiences to increase knowledge about the vaccine.
This highlights one of the core challenges for many digital platforms: how far should they go in moderating content on their sites, including anti-vaccination narratives? While private companies have the right to moderate content on their platforms according to their own terms and policies, there is an ongoing tension between actioning too little content and actioning too much for platforms that operate globally.
This past year, Facebook and other platforms, including YouTube, placed an outright ban on misinformation about vaccines and have been racing to keep up with enforcing their policies. Cases like that of Robert F. Kennedy Jr, a prominent anti-vaccine campaigner who has been banned from Instagram but allowed to remain on Facebook and Twitter, highlight the continued issue. Particularly troubling for some critics is his targeting of ethnic minority communities to sow distrust in health authorities. Protection of vulnerable groups, including minorities and children, must be top of mind when considering how to balance free expression and safety.
Child exploitation and abuse
Other troubling activity online has soared during the pandemic: reports showed a jump in the consumption and distribution of child sexual exploitation and abuse material (CSEAM) across the web. With one in three children exposed to sexual content online, this is the largest risk children face when using the web.
Given the role of private messaging, streaming, and other digital channels in facilitating such activity, the tension between privacy and safety needs to be addressed to solve this issue. For example, encryption is a tool that is integral to protecting privacy; however, detecting illegal material by proactively scanning, monitoring, and filtering user content currently cannot work with encryption.
Recent changes under the European Commission's ePrivacy Directive, which imposed stricter protections on the privacy of message data, resulted in a 46% fall in referrals of child sexual abuse material from the EU in just the first three weeks after Facebook halted scanning. While this law has since been updated, it is clear that tools, laws, and policies designed for greater privacy can have both positive and negative implications for different user groups from a safety perspective. As internet usage grows, addressing this underlying tension between privacy and safety is more critical than ever.
Read the full article here.
By Cathy Li & Farah Lalani
About the authors: Cathy Li is Head of Media, Entertainment and Sport Industries, World Economic Forum; Farah Lalani is Community Curator, Media, Entertainment and Information Industries, World Economic Forum.