Online harm controls must be baked-in as technology advances


By Neil Brady

22nd March 2022

Ireland’s responsibility as the EU lead on countering online harm and defamation will be to implement solutions at scale, and the only way to do this is by taking a structural approach.


Software developers often refer to the “baking-in” of particular features or priorities when constructing technology. The idea is relevant here because, as governments around the world debate how to address the challenge of online harm and defamation, they must start to think along these lines if their responses are to be implemented effectively. This is especially the case given growing demands for increased accountability and safety measures.

In Ireland, this imperative was brought into sharp relief in recent weeks by two things. One was the testimony of Frances Haugen, the Facebook whistleblower, to the Oireachtas committee on media, and her call for regulators to focus on algorithmic design and methods of construction.

The other was the inclusion, in the recently published report of the review of the Defamation Act 2009, of an entire chapter dedicated to “online defamation” and the need for “special measures” to address it.

While the incorporation of such a chapter is to be welcomed, it is telling that none of the recommendations discusses the role technology must play to address the problem at scale. The principal reason why internet platforms have become core drivers of defamation and harm is that, unlike traditional publishers, they are insulated from liability by the e-Commerce Directive of 2000. This, as the review of the 2009 act notes, “[exempts] online service providers from legal liability for the content held on their websites”. A variant of that situation exists in most jurisdictions around the world.

How to make that liability framework fit for the realities of 2022, and strike a healthy balance between freedom of speech and moderation, is a difficult question. But the existing approach of intermediary platforms offers something of a guide.

Historically, most platforms have regulated speech through legally binding terms of service. In recent years, as awareness has grown of the ways such platforms can exacerbate societal polarisation and undermine democracy, many have responded in a two-pronged way. First, they have expanded their taxonomies, or categories, of defined harm and protected characteristics. Second, they have developed tech tools to help human moderators implement these policies.
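To make the two prongs concrete, here is a minimal sketch in Python of how a taxonomy of defined harms might drive a moderation tool. The category names, policy table and confidence thresholds are hypothetical illustrations, not any platform’s actual terms of service or system.

```python
from dataclasses import dataclass
from enum import Enum, auto


class HarmCategory(Enum):
    """Illustrative taxonomy of defined harms (hypothetical, for sketching only)."""
    HATE_SPEECH = auto()
    HARASSMENT = auto()
    DEFAMATION = auto()
    NONE = auto()


@dataclass
class ModerationDecision:
    category: HarmCategory
    action: str               # e.g. "remove", "restrict", "allow"
    needs_human_review: bool


# Hypothetical policy table: each category maps to an action and a
# confidence threshold below which a human moderator must decide.
POLICY = {
    HarmCategory.HATE_SPEECH: ("remove", 0.90),
    HarmCategory.HARASSMENT:  ("restrict", 0.85),
    HarmCategory.DEFAMATION:  ("restrict", 0.80),
    HarmCategory.NONE:        ("allow", 0.0),
}


def apply_policy(category: HarmCategory, confidence: float) -> ModerationDecision:
    """Apply the taxonomy-driven policy to one classifier output."""
    action, threshold = POLICY[category]
    # Low-confidence calls are escalated to a human rather than auto-enforced.
    return ModerationDecision(category, action, confidence < threshold)


if __name__ == "__main__":
    print(apply_policy(HarmCategory.DEFAMATION, 0.72))  # escalates to a human
```

The structural point is that the policy lives in data rather than in moderators’ heads: adding a category or tightening a threshold changes the system’s behaviour across the whole platform at once.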

Both Google and Facebook, for example, have invested immense amounts of time and money in the language models that underpin their moderation tools. This approach has had mixed results, however, for several reasons.

The first, compounded by the fact that different platforms have different terms of service, is that there is no universally accepted taxonomy of harm or hate speech. In addition, the development of these technologies has largely been overseen by individuals who are, to paraphrase the former Guardian editor Alan Rusbridger, typically not well equipped to think deeply about the complexities involved in free expression. As Haugen and others have also noted, these efforts invariably conflict with the business models of the architects of these tools, the platforms, which are built around “engagement”.

The review of the Defamation Act 2009, as well as a growing number of similar reports produced in other jurisdictions, is clear in its recommendations to government: a notice of complaint process to facilitate the “expeditious” taking down of defamatory content must be put in place.

Such individualised systems, whereby citizens can complain if they feel they have not been treated properly, are the norm in most other industries and sectors. Increasingly, they use machine learning and other artificial intelligence-based techniques as a first filter. Relevant information is elicited to assess and classify the complaint correctly, direct the complainant within the system and, where necessary, secure human intervention. This approach is, as demonstrated by technology companies’ reliance on it, a vital if imperfect part of any effort to handle digital speech at scale.
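As an illustration of what such a first filter might look like, the following is a minimal sketch in Python. The keyword classifier is a placeholder standing in for a trained model, and the labels, routes and threshold are assumptions made for the example rather than any regulator’s actual design.

```python
from dataclasses import dataclass


@dataclass
class Complaint:
    text: str
    url: str
    complainant: str
    label: str = "unclassified"
    route: str = "triage"
    escalate_to_human: bool = False


def classify(text: str) -> tuple[str, float]:
    """Placeholder for an ML classifier. A real system would call a
    trained model here and return a (label, confidence) pair."""
    lowered = text.lower()
    if "defam" in lowered or "false" in lowered:
        return "defamation", 0.65
    if "threat" in lowered or "hate" in lowered:
        return "hate_speech", 0.70
    return "other", 0.40


def triage(complaint: Complaint, human_threshold: float = 0.75) -> Complaint:
    """First-filter triage: classify the complaint, direct it within
    the system, and flag human intervention when the model is unsure."""
    label, confidence = classify(complaint.text)
    complaint.label = label
    complaint.route = {
        "defamation": "legal_review",
        "hate_speech": "trust_and_safety",
        "other": "general_queue",
    }[label]
    complaint.escalate_to_human = confidence < human_threshold
    return complaint


if __name__ == "__main__":
    c = triage(Complaint("This article is false and defamatory.",
                         "https://example.com/post/1", "j.doe"))
    print(c.route, c.escalate_to_human)  # legal_review True
```

The confidence threshold is the key design lever: lowering it automates more of the caseload, while raising it pushes more complaints to human reviewers.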

The Australian government has taken a strong lead here, recognising the potential of such technology to assist it in its efforts to regulate the digital space. To that end, the e-Safety Commissioner’s office in Canberra has established an industry affairs and engagement department to explore the potential for working with commercial and research companies on this issue.

As Ireland faces up to the onerous supervisory challenge of being the host country within the EU of the world’s major social media and internet platforms, and as it establishes the office of its own online safety commissioner, it must be understood that this task cannot be discharged using conventional processes.

This reality, taken together with the findings of the review of the 2009 Act, should inform the deliberations of the newly established expert group considering the viability of an individual complaints mechanism for harm and hate speech. To fail to do so would amount, as Senator Shane Cassells put it during Haugen’s testimony, to “fighting with one hand behind our back”.

This article was originally published in The Business Post on March 20th, 2022.

