FEED Issue 04

TECH FEED Content Management

automatically create highlight clips and pitch them to social media. “This extends the abilities of current live production operators to manage hundreds of automatically created stories,” says Jay Batista, general manager at Tedial (US). “Applications are being developed, especially in reality television production, where auto-sensing cameras follow motion, and AI tools such as facial recognition augment the media logging function for faster edit decisions, as well as automatic social media deliveries.”

BROADCAST ARCHIVES ARE TRADITIONALLY MASSIVE IN SCALE, AND THE MORE CONTENT THAT EXISTS, THE MORE DIFFICULT IT IS TO MANUALLY TAG CONTENT

At the moment, AI is not accurate enough to completely remove the need for human intervention. In the case of iconik’s AI framework, the user can set rules so that any metadata tag with a confidence level below, say, 50% is discarded, anything above 75% is automatically approved, and anything else is sent for human approval.
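
As an illustration of that kind of rule, here is a minimal sketch in Python of confidence-based triage. The Tag record and the exact thresholds are hypothetical, not iconik’s actual API; it simply shows the discard/approve/review split described above.

```python
from dataclasses import dataclass

# Hypothetical tag record; a real MAM's data model will differ.
@dataclass
class Tag:
    name: str
    confidence: float  # 0.0-1.0, as reported by the AI service

DISCARD_BELOW = 0.50       # user-set rule: drop low-confidence tags
AUTO_APPROVE_ABOVE = 0.75  # user-set rule: approve high-confidence tags

def triage(tags):
    """Split AI-generated tags into approved, review and discarded buckets."""
    approved, review, discarded = [], [], []
    for tag in tags:
        if tag.confidence < DISCARD_BELOW:
            discarded.append(tag)
        elif tag.confidence > AUTO_APPROVE_ABOVE:
            approved.append(tag)
        else:
            review.append(tag)  # a human makes the final call
    return approved, review, discarded

approved, review, discarded = triage(
    [Tag("goal", 0.92), Tag("crowd", 0.63), Tag("zebra", 0.12)]
)
print(len(approved), len(review), len(discarded))  # 1 1 1
```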

CREATED BECOMES CREATOR

IBM suggests that AI is intended to be a resource rather than a replacement, and Amazon prefers the term ‘assisted intelligence’, emphasising that, at the end of the day, humans are still in control. “AI-based technology will make mistakes, but the best thing about it is that it will learn from them and become more accurate over time,” says Azimi. Eventually, in addition to technical discovery, curation and assembly, AI will create content, and sooner rather than later. IBM Watson’s automated selection of clips for the trailer of the film Morgan is one step on the road to automating the production of scripts, storyboards, video streams and soundtracks. At first, the machine might assist producers, directors and artists in producing content but, as in many industries, it could progressively assume an increasingly central role, forcing humans to redefine themselves and invent new roles.

“The risk and challenge is not in our ability to move certain types of programming to an automated process, but rather the loss of editorial judgement that can change based on external factors,” suggests Schleifer. “Systems that produce content in this manner will adhere to specific rules and, as a result, will produce consistent content that will never challenge us to get out of our comfort zone.

“The challenge will be to figure out how a system like this can continue to push the envelope and challenge us,” he says. “After all, media as a form of communication is focused on surprising, challenging and helping us grow.”

The risk in sectors like news is that growing polarisation in our culture could be exacerbated by automatically generated clips delivered to viewers based on their preferences and social media profiles. “While the process could be tuned to give more balanced content, the money will be in giving people what they want, which will lead to reinforcing an opinion rather than informing,” says Schleifer.

CAN CONTENT CREATE ITSELF?

If the answer to that question is yes, the follow-up should be: at what value? There are art installations where sound is generated as people walk through the exhibition. Based on pre-chosen notes, it can create a mood and it can entertain people, but does it approach the emotional impact of music that was created with purpose? “We have access to much more sophisticated methods to create content these days, but the ultimate question is likely to be ‘to what end?’” ponders Schleifer. “Is content just a babysitter for people who have too much leisure time? Or are we purposefully informing, challenging, entertaining and educating people?”

A GLOBAL TAGGING STANDARD

A global tagging standard, applicable across industries (sports, corporate, live, broadcast, post) and enforced by hardware such as cameras, encoders and archives, is probably an unachievable utopia. Take a simple example like cameras. The metadata that describes the camera, such as its serial number and lens data, is embedded in different ways across a range of camera output files. Similarly, asset status is a standard piece of metadata, but its usage can vary hugely across organisations. There is, however, a critical requirement for flexible tools that can read and share metadata from a variety of sources, and that are easily customised to suit a particular customer’s needs. Some MAMs can ingest content from a range of cameras and then map the metadata to a single place depending on its source. Going back to the camera example, the camera model and lens can then be shown consistently, irrespective of the camera itself.
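
As a sketch of that normalisation step, the snippet below maps vendor-specific field names from different camera files onto one canonical schema. The vendor keys and field names are invented for illustration; a real MAM’s ingest rules would be far richer.

```python
# Hypothetical per-vendor mappings from embedded field names to one
# canonical schema; real camera files vary far more than this.
FIELD_MAPS = {
    "vendor_a": {"CamSerial": "serial_number", "LensID": "lens"},
    "vendor_b": {"device.sn": "serial_number", "optics.model": "lens"},
}

def normalise(vendor, raw_metadata):
    """Return metadata under canonical keys, whatever the source camera."""
    mapping = FIELD_MAPS[vendor]
    return {canonical: raw_metadata[source]
            for source, canonical in mapping.items() if source in raw_metadata}

# Two cameras, two layouts, one consistent result:
print(normalise("vendor_a", {"CamSerial": "A1234", "LensID": "35mm f/1.4"}))
print(normalise("vendor_b", {"device.sn": "B9876", "optics.model": "24-70mm"}))
```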

“A flexible metadata interchange mechanism, rather than a predefined metadata definition, would allow systems to share data more easily,” explains Dave Clack. “This needs to include asset-based metadata (ie, creation date), as well as time-based metadata (ie, a ‘goal’ that happens at a certain time in the content).

“There are a number of open candidates for this. For instance, XML- or JSON-based self-describing and human-readable metadata that can be handled by a number of different tools.

“Final Cut Pro XML remains a very flexible exchange format that, due to its popularity, is close to an interchange standard,” he says. “It can handle asset-based metadata and time-based metadata, and it can include sequence/EDL metadata describing how content fits together in a movie.” CatDV itself supports various types of FCP XML, as well as its own CatDV XML, and the CatDV REST APIs support JSON instances of similar data.
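
To make the asset-based versus time-based distinction concrete, here is a small, hypothetical interchange document in the self-describing JSON spirit Clack describes. The schema is invented for illustration and is not CatDV’s or FCP XML’s actual format.

```python
import json

# Hypothetical clip record; field names are illustrative only.
clip = {
    "asset": {  # asset-based metadata: true of the clip as a whole
        "title": "Cup final, second half",
        "creation_date": "2018-05-26",
        "camera": {"serial_number": "A1234", "lens": "35mm f/1.4"},
    },
    "events": [  # time-based metadata: tied to a point in the content
        {"timecode": "00:12:41:10", "tag": "goal", "confidence": 0.92},
        {"timecode": "00:44:03:22", "tag": "red card", "confidence": 0.81},
    ],
}

# Once serialised, the document is self-describing and human-readable,
# so any JSON-aware tool can consume it without a predefined definition.
print(json.dumps(clip, indent=2))
```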
