FUTURE SHOCK Content Moderation
“Humans cannot simply keep up with managing, versioning and curating content for the fast-paced world of modern video,” says Mika Rautiainen, CEO of Valossa, a Finnish company providing AI tools for assessing video content at scale. “Advertisers are more conscious of the content they advertise with, and online video platforms are in need of deep metadata to help identify the elements that make content hit the right audience, in order to go viral,” says Rautiainen. “Today, virtually anyone has access to dangerous online content, and wrongly placed video advertisements could cause negative publicity for advertisers.”

To keep distribution to the right audiences, brands need to be able to identify content and profile it by its potential to cause offense. They also need reassurance that their content will be moderated properly amid the surge of other content generated online today.

RECRUIT THE ROBOTS

Valossa’s solution is a state-of-the-art AI and machine-learning system built on computer vision and pattern recognition. The tool can handle the huge volumes of content uploaded and distributed across multiple platforms, automatically describing and tagging what appears on screen, as well as the context in which it appears.
Interra Systems, which specialises in systems for video quality control, is also working to simplify the toil of content moderation. The company provides software for classifying audio-visual concepts, using AI and machine-learning technologies to identify key elements in content according to the regulations of different countries, regions or organisations.

“The basic algorithm inside the software is focussed on identifying concepts, then there are filters outside of that which will try to map out which audience and geography the content is appropriate for. These filters can be adjusted and revised, because the software is constantly learning,” says Shailesh Kumar, Interra Systems’ associate director of engineering.

SCHOOLING

AI-driven content moderation tools require a substantial amount of sample data in order to learn.
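The layered approach Kumar describes – a core concept classifier with adjustable audience and geography filters on top – can be sketched roughly as follows. This is an illustrative toy, not Interra’s software: the concept labels, regions and threshold values are all invented for the example.

```python
# Illustrative sketch of regional filtering layered on top of concept
# classification. All labels, regions and limits below are invented.

# Maximum tolerated confidence score per concept, per region.
REGION_RULES = {
    "UK": {"violence": 0.8, "nudity": 0.6, "alcohol": 1.0},
    "US": {"violence": 0.9, "nudity": 0.5, "alcohol": 1.0},
}

def appropriate_regions(detected, rules=REGION_RULES):
    """Return regions whose thresholds every detected concept satisfies.

    `detected` maps a concept label to a confidence score in [0, 1];
    concepts with no rule in a region default to being allowed (limit 1.0).
    """
    ok = []
    for region, limits in rules.items():
        if all(score <= limits.get(concept, 1.0)
               for concept, score in detected.items()):
            ok.append(region)
    return ok

scene = {"violence": 0.85, "alcohol": 0.3}
print(appropriate_regions(scene))  # -> ['US']: violence 0.85 exceeds the UK limit of 0.8
```

Because the rules live outside the classifier, they can be “adjusted and revised” per region without retraining the underlying model – the property Kumar highlights.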
Rautiainen explains: “For example, a person on the street with a knife in hand is different from a chef who is using a knife to chop vegetables; the actions are different, though the identified knife concept is the same. With this combined metadata, the tool creates an emotional intelligence that can identify sentiments from human facial expressions. Facial expressions are used to evaluate negative or positive human sentiment in a scene.”

The AI identifies a huge variety of nuanced concepts around sexual behaviour, nudity, violence and the impact of violence, substance use, disasters and bad language – from partial or occluded nudity, cleavage, lingerie, manbulges and suggestive content to smoking and alcohol. You name it. The broad vocabulary of inappropriate content elements also means the engine can be customised for different regional preferences.
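Rautiainen’s knife example – the same object, flagged or cleared depending on the action around it and the sentiment in the scene – can be sketched like this. Again, this is a hypothetical illustration: a real system would infer the object, action and sentiment with computer vision, whereas here they arrive as ready-made labels and scores.

```python
# Minimal sketch of context-dependent flagging, after Rautiainen's knife
# example. Labels and rules are invented; a real system would derive them
# with computer vision and facial-expression analysis.

# (object, action) pairs treated as benign even though the object alone is risky
BENIGN_CONTEXTS = {("knife", "chopping vegetables"), ("knife", "cooking")}
RISKY_OBJECTS = {"knife", "gun"}

def flag_scene(obj, action, sentiment=0.0):
    """Flag a scene when a risky object appears outside a known benign
    context, or when facial sentiment is strongly negative (< -0.5)."""
    if obj in RISKY_OBJECTS and (obj, action) not in BENIGN_CONTEXTS:
        return True
    return sentiment < -0.5

print(flag_scene("knife", "chopping vegetables"))      # chef: not flagged
print(flag_scene("knife", "walking down the street"))  # flagged
```

The point is that the “knife” concept alone decides nothing; the verdict comes from combining it with action and sentiment metadata, which is what makes the broad vocabulary customisable per region.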
HIGH ALERT

Valossa’s content moderation tool uses AI to detect unwanted concepts for filmmakers, broadcasters or studios
feedzine feed.zine feedmagazine.tv