FEED Issue 06

ROUND TABLE AI

FEED: What concerns should we have about AI? What are the dangers and how should we deal with them?

BHAVESH VAGHELA: One of the key concerns around AI is that it is seen as a silver bullet for an operator's problems – that embracing machine learning will magically solve the challenges operators face. For AI to be successful and to make tangible changes to your business model, it needs both time and training.

FEED: What jobs will AI do away with and what jobs will it create?

BHAVESH VAGHELA: We think that AI will create a new variety of jobs – for every technological advance, there is a need for human experience and creativity to complement it. As a company focused on supporting operators in providing the best possible customer experience for their subscribers, we understand the value of blending human intuition with next-generation technology – particularly when it comes to making sure subscribers feel valued and that their experience is personal.

STUART COLEMAN: Repetitive, administrative roles, but also parts of roles whose tasks can be augmented, automated or supported by machines. Paralegal teams will increasingly use machines to mine contract clauses, hunt down and identify legal precedents, and inform decisions about how to word critical clauses.

ANDREAS JACOBI: I think we'll see a change in jobs rather than a removal of jobs. AI will be an aid rather than a replacement. For example, it will help prepare highlights reels more quickly, but it won't take away the creative and editorial role of crafting great content that tells a brilliant story.

STUART COLEMAN: There should be more concern about the people! AI-based tools and techniques are only as good as the people who write the algorithms, train the models or make the decisions around them. Ethics and the moral compass of AI should be an essential pillar of any application. Increasingly, businesses of the future will run on algorithms – many already do.

There is also programming bias in algorithms. For example, using a machine-learning process to score an insurance applicant – and this has already happened – might discriminate against someone based on their ethnic origin because of wider data inputs. Platforms like Facebook have already shown weaknesses in the way they serve up news content through automated, machine learning-based services.

AI is not emotional. It can't empathise, understand or adapt to the illogical side of human nature, and as such is largely limited by the people behind it. While models and algorithms can learn and adapt, there need to be rules, guiding principles and a regulatory approach to businesses built purely on AI, as they will all impact our lives, whether as employees or customers. We are already seeing this in the gig economy – and that is just the start.

ANDREAS JACOBI: AI will allow broadcasters to do more, but it will need to be rolled out in a way that allows for learning. Human oversight, creative input and control will always be critical.

