FEED Issue 15

FUTURE SHOCK: Content Moderation

PROTECT THE VULNERABLE: AI content moderation could ensure that human moderators don’t have to deal with distressing content

The AI is trained in order to identify and analyse the objects, actions and emotions that determine the potential impact of a piece of content. “We have a full data team that acquires and prepares data for the AI, and we have advanced machine learning engineers who monitor the learning process and make sure that the appropriate context is being learned,” says Valossa’s Rautiainen.

Kumar explains that it’s important to look out for what he refers to as “false positives”: cases where the AI detects a concept or an action that is not there. “We take our data set and we run our software on it to identify any false positives or false negatives (where a concept is present but goes undetected). Then we try to figure out what is missing in our training or in the design of the classification model, and go through the process of training and measuring the results again.”

Some video concepts are harder to train than others; the complexity depends on how much a concept’s representation can vary. “Making cognitive AI infer threatening situations from visual media scenes is more challenging than recognising weapons in view,” says Rautiainen. “The AI technology is getting better every month, though, so its inference capabilities are gradually increasing with the challenging concepts.”

“Actions are much more difficult to classify than objects,” explains Kumar. “For example, kissing is an action, but nudity is an object. Action recognition requires using two-dimensional and three-dimensional convolutional neural networks (CNNs) to track an action over multiple frames, which helps classify whether an action is happening.”

Video use cases also differ from typical AI classification challenges, because the forms and patterns of information in video content have no predetermined representation constraints, so machine learning models need to perform well with any kind of inbound data. This unconstrained setting is referred to as ‘in the wild’, and in the wild, models are tasked with minimising false or missed interpretations without knowing the full variety of content patterns they will meet. Rautiainen explains: “If you train a domain-constrained classifier, for example one that sorts out pictures containing only cats or dogs, it becomes easier to reach high classification accuracy.”
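The measure-and-retrain loop Kumar describes can be made concrete with a short sketch. The Python below is illustrative only (the function and variable names are ours, not any vendor’s actual tooling): it splits a concept classifier’s errors into false positives and false negatives, the two lists a data team would inspect before retraining.

```python
# A minimal sketch of the measure-and-retrain loop described above.
# All names are illustrative, not part of any vendor's real API.

def evaluate_concept(labels, predictions):
    """Compare ground truth to model output for one concept.

    labels, predictions: lists of booleans, one entry per clip.
    Returns the indices of false positives (concept flagged but
    absent) and false negatives (concept present but missed).
    """
    false_positives = [i for i, (y, p) in enumerate(zip(labels, predictions))
                       if p and not y]
    false_negatives = [i for i, (y, p) in enumerate(zip(labels, predictions))
                       if y and not p]
    return false_positives, false_negatives

# Example: a hypothetical 'weapon' classifier run over six annotated clips.
truth = [True, False, True, False, False, True]
preds = [True, True, False, False, False, True]
fps, fns = evaluate_concept(truth, preds)
print(f"false positives at clips {fps}, false negatives at clips {fns}")
# The flagged clips are inspected to find what is missing from the
# training data or the model design, then the cycle repeats.
```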
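Kumar’s point about tracking actions over multiple frames can also be sketched. The toy network below uses PyTorch (our choice; the article names no framework) and is far smaller than anything production-grade. The key detail is the 3D convolution, whose kernel spans several frames at once, so the model sees motion rather than a series of unrelated stills.

```python
# A toy 3D CNN for clip-level action classification: a hedged
# illustration of the idea, not any company's actual architecture.
import torch
import torch.nn as nn

class TinyActionNet(nn.Module):
    def __init__(self, num_actions: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            # kernel_size=(3, 7, 7): 3 frames deep in time, 7x7 in space,
            # so each filter responds to short motions, not single images.
            nn.Conv3d(3, 16, kernel_size=(3, 7, 7),
                      stride=(1, 2, 2), padding=(1, 3, 3)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # collapse time and space to one vector
        )
        self.classifier = nn.Linear(16, num_actions)

    def forward(self, clip):
        # clip shape: (batch, channels, frames, height, width)
        x = self.features(clip).flatten(1)
        return self.classifier(x)

# A 16-frame RGB clip at 112x112 pixels. A 2D CNN would score each
# frame in isolation; the temporal kernel lets this model judge
# whether an action is actually unfolding across frames.
model = TinyActionNet(num_actions=2)
clip = torch.randn(1, 3, 16, 112, 112)
scores = model(clip)  # one score per action class
```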

THIS IS A GREY AREA; THE STRUGGLE BETWEEN FREE SPEECH AND CENSORSHIP WILL STILL REQUIRE SOME HUMAN JUDGEMENT

“However,” he continues, “these domain-specific models would not survive ‘in the wild’ recognition tasks, since they would see cats or dogs in every data sample they inspect. This is because that’s all they have ever learnt to see.”
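Rautiainen’s cat-or-dog example comes down to the closed-set softmax: a model trained on exactly two classes must divide all of its probability between them, whatever it is shown. The plain-Python sketch below (with made-up scores) shows why such a model ‘sees cats or dogs’ in everything, and one common open-set mitigation: refusing to answer below a confidence threshold.

```python
import math

def softmax(scores):
    """Turn raw scores into probabilities that always sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# A closed-set cat/dog model emits two scores for ANY input, even a
# photo of a car; because the probabilities must sum to 1, the car
# is forced to become either 'cat' or 'dog'.
scores_for_car_photo = [0.3, 0.1]  # illustrative raw model outputs
print(dict(zip(["cat", "dog"], softmax(scores_for_car_photo))))
# -> roughly {'cat': 0.55, 'dog': 0.45}

def predict_open_set(scores, classes, threshold=0.8):
    """Return 'unknown' when the model is not confident enough."""
    probs = softmax(scores)
    best = max(range(len(probs)), key=probs.__getitem__)
    return classes[best] if probs[best] >= threshold else "unknown"

print(predict_open_set(scores_for_car_photo, ["cat", "dog"]))  # 'unknown'
```

Thresholding softmax confidence is only the simplest open-set tactic; the point is that an ‘in the wild’ model must be built to expect inputs outside everything it has seen.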

WHAT HAPPENS TO THE HUMANS?

Most believe that AI isn’t nuanced enough to take over human content moderation. In 2018, Forbes reporter Kalev Leetaru wrote that “we still need humans to vet decisions around censorship because of the context content appears in.” This has been true in the past: remember the Facebook censorship fiasco, where its algorithms removed the iconic Pulitzer-winning photograph of the Vietnam War? The photograph depicts a group of children, one of whom is nude, running away after an American napalm attack. It was posted by a Norwegian writer who said that the image had “changed the course of war”, but Facebook removed the post because it violated the platform’s terms and conditions against nudity.

This is a grey area that AI technology is still trying to get a grasp on; the struggle between free speech and censorship will still require some human judgement. But governments are taking more regulatory action, and publishers are being forced to react. It may be only a matter of time before content moderation is fully automated. “We disagree with the claims that the technology to automatically detect and flag potentially inappropriate and harmful content does not exist. It does, and publishers are beginning to adopt it,” says Valossa’s Rautiainen. Perhaps it is no longer a question of whether AI can replace humans to scrub the digital sphere of excrement, but of whether our free speech will be overseen by robots online.
