FEED Issue 14

YOUR TAKE: AI

cultural sensitivities (or legal guidelines) in different markets is all automated.

Another effective use case for AI and machine learning is intelligently searching archives to create schedules of relevant content for a particular channel or regional service. Broadcasters are already investigating how algorithms and machine learning can be used to find the archived content that will best connect with their target audiences, without human intervention. By combining AI with real-time search engine and scriptable production engine technology, content creators can focus on producing the right content for the right audience.

With downward pressure on budgets, it is critical that media organisations maximise the ROI of every piece of content they own. In the smart studio, AI and machine learning tools can handle re-editing for different audience segments while the main piece of content is still being scripted, vastly speeding up the process.

METADATA: AT THE HEART OF VIDEO

Rich metadata is central to realising this approach. The integration of TVU MediaMind with media asset management systems puts metadata at the heart of video workflows. The barriers to efficiency erected by multiple content production workflows are removed by creating one centralised search engine for all raw materials, which feeds all distribution channels, live or recorded.

In today's multiscreen environment, producers handcraft the many versions required for the delivery of a TV show. In the smart studio, they can focus their attention on the primary piece of content, which is then automatically re-versioned according to predetermined parameters. Today, around 95% of the video captured in live productions is never used at all, representing a vast untapped resource in a content-hungry market. The smart studio will change this by bringing a new level of efficiency and optimising content delivery for each target audience segment. All the cloud-based video and AI-powered voice and object recognition technologies needed to transition to the smart studio model are already available. Now, broadcasters and media organisations can begin revolutionising the way video is produced, distributed and consumed.

While AI prestige projects, such as the Lexus advert, gain public attention, what we at TVU Networks are most excited about is the future potential of AI to target, edit and distribute personalised programmes in a way that suits each individual viewer.

ENABLING MEDIA AND BEYOND

How AI is deployed in content production depends on the genre and the audience. The raw material of video, audio and graphics is stored and processed in real time by a machine learning/AI engine, which embeds fine-grained metadata into all content assets. This metadata, together with AI-powered editing tools, enables staff to work much more quickly by reducing the time taken to search for and access both new and archive footage. AI can also allow editors to efficiently produce multiple versions of a programme based on viewer profiles.

The ultimate aim in some content genres is to make personalised versions of a programme for highly granular subgroups of viewers. We aim to drive this through the development of AI-powered ‘enabled media’, in which AI-driven processes take indexed raw materials, as well as AI- or human-edited programme segments, and produce an almost infinite number of programmes whose distribution and final edit are themselves AI-powered.

All these developments ensure the AI revolution will be televised, although we may see different versions of this exciting revolution!
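As a rough illustration of the “one centralised search engine” idea, the sketch below builds an inverted index over machine-generated metadata tags and queries it by tag intersection. The `MetadataIndex` class, its methods and the tag names are all hypothetical assumptions for illustration, not TVU MediaMind’s actual interface.

```python
from collections import defaultdict

class MetadataIndex:
    """Hypothetical centralised index over raw materials (not a TVU API)."""

    def __init__(self):
        self._by_tag = defaultdict(set)   # inverted index: tag -> asset ids
        self._assets = {}                 # asset id -> its full tag set

    def ingest(self, asset_id, tags):
        """Register an asset along with its AI-embedded metadata tags."""
        self._assets[asset_id] = frozenset(tags)
        for tag in tags:
            self._by_tag[tag].add(asset_id)

    def search(self, *tags):
        """Return the ids of assets carrying ALL of the requested tags."""
        if not tags:
            return set()
        return set.intersection(*(self._by_tag[t] for t in tags))

index = MetadataIndex()
index.ingest("clip-001", {"news", "mayor", "press-conference"})
index.ingest("clip-002", {"sport", "goal"})
index.ingest("clip-003", {"news", "mayor", "archive"})
print(sorted(index.search("news", "mayor")))  # → ['clip-001', 'clip-003']
```

Because every distribution channel queries the same index, newly ingested live material and archive footage are found by exactly the same search, which is the efficiency gain the article describes.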

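Automatic re-versioning “according to predetermined parameters” could work along these lines: the master edit is a sequence of metadata-tagged segments, and each audience profile supplies the parameters (tags to require, tags to exclude for regional or legal reasons, and a slot length). The `Segment` structure, the profile fields and the `reversion` function are illustrative assumptions, not an actual smart studio API.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """One segment of the master edit, with ML-embedded metadata tags."""
    id: str
    duration: float   # seconds
    tags: frozenset   # e.g. {"goal", "crowd", "interview"}

def reversion(master, profile):
    """Re-edit the master programme for one audience profile (illustrative)."""
    version, total = [], 0.0
    for seg in master:
        if seg.tags & profile["exclude_tags"]:
            continue    # drop segments barred for this market
        if profile["require_tags"] and not (seg.tags & profile["require_tags"]):
            continue    # keep only material relevant to this audience
        if total + seg.duration > profile["max_duration"]:
            break       # respect the delivery slot length
        version.append(seg)
        total += seg.duration
    return version

master = [
    Segment("s1", 30, frozenset({"intro"})),
    Segment("s2", 60, frozenset({"goal", "crowd"})),
    Segment("s3", 45, frozenset({"interview"})),
    Segment("s4", 20, frozenset({"goal"})),
]
highlights = reversion(master, {"require_tags": frozenset({"goal"}),
                                "exclude_tags": frozenset(),
                                "max_duration": 90})
print([s.id for s in highlights])  # → ['s2', 's4']
```

Running the same master through many profiles yields one version per audience segment from a single primary edit, which is the workflow the smart studio section describes.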
