
of applications like speech-to-text, facial recognition and edit corrections. News organisations such as Bloomberg are already committing strongly to automation to cover so-called ‘commodity news’ – reports on financial markets, for example, which are essentially reports on stats – in the hope that it will free up journalists’ time for more involved feature stories. By some accounts, 90% of all articles in the world will be written by AI before the end of the next decade.

The potential for AI to transform news and content production is clear and, over the next few years, it will take on a prominent role in deciding what content we see and read in our daily lives. But what level of power and control should be given to AI? Whilst technology that ‘thinks’ is rapidly becoming more useful, it needs to adhere to some form of ethical principles. This is particularly important in the fight against fake news.

Right now, AI is being actively used to operate ever-more clever bots and fake social media accounts, some of which are hard to distinguish from real people or real information outlets. Machine learning – the science of computers learning from data, identifying patterns and making decisions with minimal human intervention – has become a tool for obfuscating the truth as well as for promoting it.

The hope is that machines will eventually improve their performance – indeed, their output – over time and become progressively autonomous. Before we get to that point, machine-learning algorithms must be trained and programmed by humans to improve their accuracy (a minimal sketch of what that looks like follows below). This is vital: without high-quality human input, machines lack the ability to put things into context, and struggle to accurately identify an element in a piece of content.

If news organisations left AI to run its own course without human input – that is, without context – it is unlikely to produce much of long-term use. But even if AI starts to positively support a real news organisation, there is a danger that the technology and economies of scale it provides will start to change what news ends up being reported.
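To make that dependence on human labelling concrete, here is a minimal sketch in Python, assuming scikit-learn is available. The headlines and the reliable/fabricated labels are invented for the example; a production classifier would need a far larger, carefully curated corpus.

```python
# Minimal sketch: a text classifier is only as good as the human labels it
# is trained on. The dataset and labels below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Human editors supply the 'context': each example is labelled by a person
# as reliable (0) or fabricated (1).
headlines = [
    "Central bank holds interest rates at 5%",
    "Quarterly results show revenue up 3% year on year",
    "Miracle pill cures all known diseases overnight",
    "Secret memo proves the moon landing was staged",
]
labels = [0, 0, 1, 1]

# Vectorise the text and fit a simple classifier on the human-made labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

# The model can now score unseen headlines - but only as well as the
# human-provided labels allow.
print(model.predict(["Shock cure reverses ageing, insiders claim"]))
```

The point is not the model but the labels: change what the humans mark as reliable, and the machine’s notion of truth changes with it.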

AI-driven journalism might produce and manage huge amounts of facts and data. But will it be able to describe the suffering in a country torn apart by war? Or will it just offer numbers on a scoreboard of the wounded and dead? Will it be able to select stories that help humans better handle climate change? Or will it just provide temperature and CO2 levels – even as it also provides users with oil company stock prices?

BURSTING THE BUBBLE

The personalisation of content can create higher-quality experiences for consumers, as we’ve seen from streaming services such as Netflix recommending shows based on personal viewing history. News organisations are no exception, and they are already using AI to meet the demand for personalisation. For example, a service called James, developed for News UK’s The Times and The Sunday Times, will learn about individual preferences and automatically personalise each edition (by format, time and frequency). Its algorithms will be programmed by humans, but will improve over time through machine learning to provide whatever experience The Times’ owners find best engages readers.
While algorithmic curation – the automated selection of what content should or shouldn’t be displayed to users, and how it is presented – meets consumer demand for personalisation, it can go too far. What if consumers only hear and read the news they want to, rather than what is actually taking place? This effect, commonly known as the ‘filter bubble’ and a by-product of platforms designed to keep users engaged, quickly leads to audiences only seeing content that reinforces their views – and makes opinions outside that bubble seem increasingly alien (the sketch below shows how easily this happens).

In the absence of legislation, the onus is on media organisations to get the balance right between providing tailored content and ensuring consumers are actually being informed rather than pandered to. We must ensure that the use of AI benefits everyone, in a way that is ethical and in line with journalistic principles. To do that, we have to hold media organisations to account and put ethics front and centre in AI’s deployment. Humans must take every step to ensure AI is used with the right controls in place, from unbiased training to transparent data collection. Otherwise, in the long term, the use of AI might create far more problems than it solves.
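To make the filter-bubble mechanism concrete, here is a toy sketch in Python – not any real product’s algorithm; the topic labels and the diversity_weight parameter are invented for the example. Ranking purely by affinity to past reading buries unfamiliar viewpoints; a small diversity term resurfaces them.

```python
# Toy illustration of a filter bubble in a naive curation algorithm.
from collections import Counter

# Hypothetical reading history and candidate stories, labelled by topic.
history = ["politics-left", "politics-left", "climate", "politics-left"]
candidates = ["politics-left", "politics-right", "climate", "finance"]

prefs = Counter(history)  # topics the reader already consumes

def rank(articles, diversity_weight=0.0):
    """Rank candidates by affinity to past reads; optionally boost unseen topics."""
    def score(topic):
        affinity = prefs[topic] / len(history)       # pure engagement signal
        novelty = 1.0 if prefs[topic] == 0 else 0.0  # topics the reader never sees
        return affinity + diversity_weight * novelty
    return sorted(articles, key=score, reverse=True)

# Pure engagement ranking: the dominant topic stays on top and unfamiliar
# viewpoints sink to the bottom - the bubble seals itself.
print(rank(candidates))
# ['politics-left', 'climate', 'politics-right', 'finance']

# A modest diversity term resurfaces topics the reader never encounters.
print(rank(candidates, diversity_weight=0.5))
# ['politics-left', 'politics-right', 'finance', 'climate']
```

The diversity_weight knob is exactly the kind of editorial control the article argues humans must keep hold of: set it to zero and only the familiar gets through.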

A FUTURE WITHOUT AI IS HIGHLY IMPROBABLE, SO IT IS ESSENTIAL THAT WE DEVELOP A SMART APPROACH TO THE TECHNOLOGY FROM THE VERY BEGINNING
