Watch our fireside chat or download the full white paper – click the Download PDF button below
The obligations of internet companies and digital platforms in relation to so-called “harmful content” or “online harms” have become a major talking point within the wider media and political community in recent years, with headline-grabbing incidents on social media and other platforms regularly putting the spotlight back on this debate.
A key challenge within that debate, however, is defining what is even meant by harmful content. After all, the term can be applied to an assortment of different content types, including…
- Content that some people find offensive.
- Content that is illegal to create and/or distribute.
- Content that is abusive towards individuals or small groups of people.
- Content that sets out to misinform and mislead.
The legal responsibilities of internet companies and digital platforms in this domain vary from country to country, although their liabilities are usually limited to some extent, especially when compared to more traditional media.
However, lawmakers in multiple jurisdictions are now considering new rules to increase those legal responsibilities, sparking much debate as to how that can be done without negatively impacting privacy and free speech. This includes…
- The Online Safety Bill currently working its way through the UK Parliament.
- The Digital Services Act that is currently being considered in the European Union.
- The various debates in the US around the future of Section 230.
Beyond any actual legal responsibilities, it’s becoming ever more important that internet companies and digital platforms do more to tackle harmful content – and are seen to be doing more – in order to protect their corporate reputations. It also matters commercially, ensuring that advertisers don’t become nervous about the kinds of user-generated content their promotions and commercials might appear alongside.
To that end, many companies and platforms already have harmful content policies that go beyond their legal obligations. However, many critics argue that those policies still do not go far enough, while others claim such policies are applied inconsistently or threaten freedom of speech.
To help platforms, rights-holders, advertisers and responsibility facilitators stay on top of all of these various debates, this second white paper in the ‘Building Trust’ series sets out to define the different kinds of harmful content; reviews the legal debates in the UK, EU and US; and considers the commercial pressures that also shape the approach internet companies and digital platforms choose to take.