ROUND TABLE
building a machine-learning AI system into DaVinci Resolve Studio – known as the Resolve Neural Engine – that offers a range of tools and features to solve complex or repetitive tasks effectively each time the operator uses it. For example, a new AI-based Voice Isolation tool can quickly analyse audio clips and distinguish between unwanted background noise and human voices. Then, with a simple UI palette, users can suppress the unwanted elements to clean up the audio and improve the clarity of voices. This task would take far longer with standard audio tools and require far more experience; instead, users can clean up interview or location audio almost instantly.

Elsewhere, for video, our IntelliTrack AI optimises tracking and stabilisation tasks. It recognises objects in the frame and can attach parameters to them, such as automatically panning audio to follow objects as they move through the frame to build AI-created immersive audio. It uses the same AI-based recognition for object removal, tracking an unwanted object through a clip and painting it out using background information from other frames. These tasks would be time-consuming when done manually, but can be processed quickly in DaVinci Resolve, allowing users to concentrate on the creative storytelling.
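Resolve's actual implementation isn't public, but the 'audio follows a tracked object' idea can be sketched with a standard constant-power pan law. In this minimal Python illustration, the function names (`pan_gains`, `pan_clip`) and the per-frame data layout are assumptions for the sketch, not Resolve's API:

```python
import math

def pan_gains(x_norm: float) -> tuple[float, float]:
    """Constant-power stereo gains for an object at normalised
    horizontal position x_norm (0.0 = frame left, 1.0 = frame right)."""
    pan = max(0.0, min(1.0, x_norm))          # clamp to the frame
    theta = pan * math.pi / 2                 # sweep 0..pi/2 across the stereo field
    return math.cos(theta), math.sin(theta)   # (left gain, right gain)

def pan_clip(sample_blocks, track_x):
    """Pan a mono clip to follow a tracked object, frame by frame.

    sample_blocks -- list of per-frame mono sample blocks (hypothetical layout)
    track_x       -- per-frame normalised x position from the tracker
    Returns a list of (left_block, right_block) pairs.
    """
    out = []
    for block, x in zip(sample_blocks, track_x):
        gl, gr = pan_gains(x)
        out.append(([s * gl for s in block], [s * gr for s in block]))
    return out
```

The constant-power law keeps cos² + sin² = 1, so perceived loudness stays steady as the object crosses the frame; a real implementation would also smooth the tracker output to avoid audible jumps.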
Our intention with AI is to empower creatives, not replace them, so we do not have any AI-generation tools or processes that replace humans. There are plenty of other AI options out there if that is what's wanted, of course, but we see AI as an assistant and a support to workflows, building efficiency and cost savings without compromising creative output.

QJ: At the moment, AI's impact on broadcast production seems limited, perhaps most evident in subtitling and translation. However, technologies like depth sensors are getting closer to being integrated into cameras or used externally, making live VFX integration and intelligent AF easier and more efficient. AI can automate camera movements, track subjects and adjust settings in real time, leading to smoother, more dynamic shots without constant manual input. In post-production, AI is helping to speed up tasks like colour grading, editing and VFX work. While it's hard to say exactly how much of this is in full use today, these tools are definitely starting to make their mark – and it's clear they're poised to play a larger role as the tech matures. Studios like Warner Bros. and Sony prohibit the use of GenAI in VFX due to the copyright issues inherent in the generation process. While the technology exists, there is still a way to go before GenAI is widely adopted in linear VFX content production.
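QJ's point about AI automating camera movements and tracking subjects can likewise be sketched. The proportional controller below keeps a tracked subject centred by turning its horizontal offset into a pan velocity; the detector and PTZ interface in the commented loop are hypothetical stand-ins, not any vendor's API:

```python
def frame_subject(bbox_cx: float, frame_w: int,
                  pan_speed_max: float = 1.0, k_p: float = 2.0) -> float:
    """Proportional pan correction to keep a tracked subject centred.

    bbox_cx -- x centre of the subject's bounding box, in pixels
    frame_w -- frame width in pixels
    Returns a pan velocity in [-pan_speed_max, pan_speed_max];
    negative pans left, positive pans right.
    """
    error = (bbox_cx / frame_w) - 0.5          # -0.5..0.5, 0 = centred
    v = k_p * error * pan_speed_max            # push harder the further off-centre
    return max(-pan_speed_max, min(pan_speed_max, v))

# Hypothetical control loop: detect_subject() and ptz.pan() stand in
# for a real detector and a real camera-control API.
# while live:
#     cx = detect_subject(grab_frame())
#     ptz.pan(frame_subject(cx, 1920))
```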
DEF: How do you foresee the convergence of traditional and virtual production impacting the creative process and technical challenges of live broadcasts?

CH: We've been involved in a number of productions recently – specifically in sports broadcasting – where virtual elements have been integrated into live studios. A crucial technical challenge is latency between the virtual elements and the real production space; ensuring all production tools are in sync, and that virtual elements remain locked in place without error or drift, is critical. Using a range of products to digitally glue the real studio and the virtual or rendered elements together means customers can create a consistent and convincing hybrid environment for audiences. Likewise, we have encountered challenges in using camera tracking systems to ensure 100% accuracy and avoid lag in the processing and delivery of blended virtual and live elements. This is critical, as the movement of hosts within the frame, and the camera's perspective of the frame, must be convincing to the audience when
LIVE & KICKING: The ARRI ALEXA 35 Live Multicam System brings the cinematic quality of its ALEXA 35 to live productions
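Returning to CH's point about tracker lag and drift: whatever the transport, the core alignment step is to interpolate tracking samples to each video frame's capture time while compensating a measured delay. A minimal sketch, assuming timestamped samples and an 8 ms tracker delay chosen purely for illustration (real systems rely on genlock and timecode):

```python
from bisect import bisect_left

def pose_at(samples, t_frame, tracker_delay=0.008):
    """Linearly interpolate a tracked value (e.g. camera pan angle) at a
    video frame's capture timestamp, compensating a known tracker delay.

    samples       -- sorted list of (timestamp_seconds, value) from the tracker
    t_frame       -- capture timestamp of the video frame
    tracker_delay -- measured tracker latency to subtract (assumed 8 ms here)
    """
    t = t_frame - tracker_delay
    i = bisect_left(samples, (t,))             # first sample at or after t
    if i == 0:
        return samples[0][1]                   # before first sample: hold first
    if i == len(samples):
        return samples[-1][1]                  # after last sample: hold last
    (t0, v0), (t1, v1) = samples[i - 1], samples[i]
    a = (t - t0) / (t1 - t0)                   # 0..1 between neighbouring samples
    return v0 + a * (v1 - v0)
```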