14th Jan 2022

The Harmful Content Challenge: Tackling Online Harms While Protecting Freedom of Speech


Chris Cooke
Founder and Managing Director
CMU Insights

For social media and digital platforms, two big talking points continued to demand attention throughout 2021. First, how to protect users from online harms. And second, how to safeguard freedom of speech amid the perceived rise in cancel culture. Both will remain key issues in 2022, with platforms set to face new legal responsibilities in multiple countries.

The two issues are related, of course. Politicians around the world have called on digital platforms to do more to stop the distribution and proliferation of harmful content. But then other politicians – and sometimes the same politicians – have also criticised the removal of some content from digital platforms, arguing such takedowns are an attack on free speech.

The tricky question is: how do you identify truly harmful content – and how do you ensure efforts to stop online harms don’t in turn infringe on freedom of expression?

As a starting point, it is useful to identify and organise the different kinds of harmful content. In the latest Building Trust white paper we propose four main categories:

Offensive content is content that simply offends certain people. It includes content of a sexual or violent nature; content containing words that some people consider offensive; controversial political opinions; and content that mocks or attacks specific people or groups of people, especially when that is based on things like race, nationality, religion, sexuality or gender.

Unlawful content is content that is unlawful to create in the first place. This category would definitely include any and all content linked to terrorist activity or child sexual exploitation and abuse. But it might also include extremely violent content and more extreme forms of pornography.

Abusive content sets out to attack, harass, bully or intimidate specific individuals, or small groups of individuals – behaviour commonly referred to as cyber-bullying, cyber-stalking or trolling. The creation of the content is often entirely legal in itself; it's the way the content is delivered or targeted that makes it abusive and causes concern.

Misleading content is disinformation and propaganda that is deliberately designed to mislead individuals and/or to manipulate public opinion. This category might be dubbed ‘fake news’, although that’s an ambiguous term that can refer to everything from biased journalism and political spin to outright lies and meritless conspiracy theories.

It’s useful to organise harmful content in this way, because each category raises slightly different concerns, and therefore requires slightly different solutions. And many would argue that some of these categories need more urgent attention than others.

In political terms, the debate is very much about the extent to which social media and digital platforms have a duty – or should have a duty – to stop, restrict or remove each category of harmful content. And also how such measures can be achieved in a practical way that doesn’t have too negative an impact on free speech.

Many people argue that social media and digital platforms have simply not gone far enough to deal with harmful content on their networks. In response, formal proposals have been made in both the UK and the EU to increase the legal liabilities of those platforms. Meanwhile, in the US, there have been various proposals in Washington to reform Section 230, the law that currently restricts the liabilities of digital companies in this domain.

Pretty much all of those proposals are proving controversial. Some argue they don’t go far enough and will have little tangible impact in stopping online harms. Others argue they go too far and will infringe on free speech – or limit the ability of platforms to protect the privacy and security of their users. Others still say politicians are looking for simple solutions to complex problems, basically blaming social media for what are wider social issues.

The Building Trust white paper runs through all the key proposals in the UK, EU and US, and explains the issues that have been raised by critics on all sides. It also considers how – for many platforms – commercial pressures have created an urgency to deal with harmful content even before the politicians decide what new rules to make law.

You can download your free copy of the white paper here.

Meanwhile, expect plenty more debate and plenty more controversy in 2022, as the proposed new laws continue to be refined, and social media and digital platforms continue to face calls to more proactively meet the harmful content challenge.


Check out the other articles in this series:

Why is Safe Harbour so Controversial?

Safe Harbour in the News
