FEED Issue 29

ROUND TABLE: Machine Learning

FEED: WHAT WILL MACHINE LEARNING LOOK LIKE IN TEN YEARS?

BEN DAVENPORT: Since the evolution of software capabilities is not a linear curve but more of a Moore's Law-type curve, we will see far more development in these areas than we saw in the ten years up to now – it's easy to forget that ten years is a very long time. It seems likely that we're going to see greater standardisation, so long as efforts to do so can keep pace, and that standardisation will allow more services to be integrated more easily. While the majority of applications we see in media today are around media processing, new services will touch every part of media workflows. For example, we are now using machine learning to generate highly accurate estimates of the time taken for an asset to move through content prep and/or production workflows – the next logical step is to extend that to automate 'red line' workflows and then to optimise resource utilisation. We can already see that machine learning will extend from 'simple' services executed as a step in the media supply chain to being applied across the whole value chain.
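The kind of estimate Davenport describes is, at heart, a supervised regression problem: predict how long an asset will take from what is known about it and from historical job records. The sketch below is purely illustrative and assumes scikit-learn, a hypothetical log export and made-up feature columns; it is not a description of any vendor's actual implementation.

import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Historical job records: asset properties plus the observed end-to-end time.
# The file name and feature columns are hypothetical.
jobs = pd.read_csv("historical_jobs.csv")
features = ["duration_s", "resolution_px", "num_workflow_steps", "file_size_gb"]
X, y = jobs[features], jobs["elapsed_minutes"]

# Hold out some jobs to check how accurate the estimates actually are.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("mean absolute error (minutes):", mean_absolute_error(y_test, model.predict(X_test)))

# Estimate the turnaround time for a newly ingested asset.
new_asset = pd.DataFrame([{"duration_s": 5400, "resolution_px": 2160,
                           "num_workflow_steps": 6, "file_size_gb": 48.0}])
print("estimated processing time:", round(model.predict(new_asset)[0], 1), "minutes")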

JON FINEGOLD: The good news is that our industry, media and entertainment, is driven by creative people, and I don't envision that changing – certainly not in ten years. That creativity will still come from humans, with AI and ML helping to automate tasks and allowing things to happen more quickly and efficiently.


TONY JONES: Over the last few years, machine learning has moved from being a niche technology found only on high-end computer platforms to one that is supported by dedicated silicon and readily available from multiple cloud providers as a service. More recently, ML architectures have become available in silicon in consumer devices, allowing trained algorithms to run on those devices – as long as the training has been performed for those specific devices – with the potential for local training as well. This is already being used to recognise objects in camera apps, and we can expect it to develop further to process other types of complex data. This means we will have widely distributed intelligence that can process images, sound and other data, which could enable many new applications. We might, for example, expect machine learning-based language translation to become near perfect. Machine learning will likely be as widely used and accepted as cloud technology is today, having largely dispelled its science-fiction image. It will be operating behind the scenes in almost every walk of life, making decisions about how to act on data more optimally and repeatably. These various applications will require different neural network topologies and sizes, appropriate to each task and data structure. It must be remembered, however, that ML is only as good as its training, and that is only as good as the data set. The biggest risk to ML is its use with poorly qualified or insufficient data for training. Additionally, the results of ML are inevitably subject to a level of uncertainty – so where a deterministic and accurate answer is required, it is not the correct tool. Data science will become the key skill, as that is the fundamental knowledge that enables ML to be used successfully.
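Jones's example of trained models recognising objects directly on consumer devices boils down to running inference locally with a compact, pre-trained network, with no cloud round trip. The following is a minimal sketch assuming Python with PyTorch and torchvision; a real phone app would typically use an optimised on-device runtime such as TensorFlow Lite or Core ML, but the principle is the same.

import torch
from PIL import Image
from torchvision import models, transforms

# Load a compact, mobile-oriented network that has already been trained elsewhere.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
model.eval()

# Standard ImageNet preprocessing for a single camera frame.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def recognise(frame_path: str) -> int:
    """Return the index of the most likely object class for one frame."""
    image = Image.open(frame_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)      # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)
    return int(logits.argmax(dim=1))

print(recognise("camera_frame.jpg"))            # hypothetical image file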

ARASH PENDARI: Let me just ask my Google Home. I believe very few people are qualified to answer this question. But the 2013 film Her, with Joaquin Phoenix and Scarlett Johansson, offers a great perspective on what voice services, together with machine learning, could achieve in the near future.
