CAMBRIDGE CATALYST ISSUE 04

AI SPECIAL

“It’s important to consider whether unelected corporations should be the self-appointed gatekeepers of censorship and free speech in our modern digital democracies”

offensive nature of the notification would be identified automatically, in real time, and the sender could receive a warning message:

“This may be homophobic hate speech. Are you sure you want to post it?” [No] [Yes]

An interface like this would also enable the recipient to see who sent the message, but the content would remain undisclosed initially. In other words, the potentially harmful post would be temporarily quarantined, and the recipient would be able to decide whether to read it, and whether it should appear. Even if the recipient allowed the message to appear, the same warning could still be shown to anyone else reading through the posts. This additional level of protection would be beneficial, since the sender and the recipient may share offensive views that are not shared by other users.

The basic approach summarised here is similar to quarantining methods that have been used since at least the 1980s to protect computers from malware. It is also similar to other protective measures that certain social media companies have introduced for images in recent months. As a method for countering the growing problem of online hate speech, it acknowledges the traditional tension between freedom of expression and appropriate censorship, and it tries to find an acceptable middle ground between the equally dubious extremes of entirely unregulated free speech and coercively authoritarian suppression. Quarantining, it seems, may need to become a more familiar part of our global digital ecosystem.
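The quarantining workflow described here can be sketched in a few lines of code. This is purely an illustration, not any platform's real implementation: the keyword-based `classify` function is a stand-in for a trained hate-speech detector, and all class and method names are invented for the example.

```python
from dataclasses import dataclass

def classify(text: str) -> bool:
    """Stand-in detector; a real system would use a trained classifier."""
    flagged_terms = {"slur1", "slur2"}  # placeholder vocabulary
    return any(term in text.lower() for term in flagged_terms)

@dataclass
class Post:
    sender: str
    content: str
    quarantined: bool = False

class QuarantineFeed:
    """Illustrative feed that warns senders and quarantines flagged posts."""

    def __init__(self) -> None:
        self.posts: list[Post] = []

    def submit(self, sender: str, content: str, sender_confirms: bool = False) -> str:
        # The flagged post is not blocked outright: the sender is warned
        # and may still choose to post it.
        if classify(content):
            if not sender_confirms:
                return "This may be hate speech. Are you sure you want to post it?"
            self.posts.append(Post(sender, content, quarantined=True))
        else:
            self.posts.append(Post(sender, content))
        return "posted"

    def view(self, post: Post, reader_reveals: bool = False) -> str:
        # A quarantined post shows who sent it, but withholds the content
        # until the reader explicitly opts in to seeing it.
        if post.quarantined and not reader_reveals:
            return f"Message from {post.sender} (content withheld: possible hate speech)"
        return f"{post.sender}: {post.content}"
```

The design choice this sketch captures is the article's middle ground: nothing is deleted by an authority, but both the sender (at posting time) and each reader (at viewing time) must make a deliberate choice before flagged content is displayed.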
