FIRESIDE CHAT INDUSTRY
OVER THE RAINBOW: CAS made Dorothy modular, allowing for rapid hardware upgrades
DEF: YOU WORK ACROSS FULL-BODY SCANNING, FACIAL CAPTURE, LIDAR, AERIAL SURVEYS AND PROPS. HOW DO THESE DISCIPLINES COME TOGETHER WHEN BUILDING COMPLETE DIGITAL ASSETS FOR PRODUCTIONS?

DR: The core of our work at Clear Angle Studios is supplying high-quality data that can be implemented seamlessly into production workflows. For example, LiDAR ensures that our scans integrate flawlessly into virtual environments, while aerial surveys offer macro-level context for real-world locations. Our goal is to eliminate the guesswork by capturing every detail – from a wrinkle on a forehead to the shadow cast by a mountain range. When all these elements come together cohesively, productions can move faster with fewer surprises in post.

DEF: CAS HAS TAKEN A FIRM STANCE ON PRIORITISING REAL CAPTURE OVER AI-DRIVEN GENERATION. WHY IS MAINTAINING A FOCUS ON REAL-WORLD DATA KEY, AND DO YOU SEE THIS OUTLOOK CHANGING?

DR: We believe that grounding digital assets in real-world data ensures authenticity, accuracy and, most importantly, creative integrity. AI tools are evolving rapidly and certainly have a place in the pipeline, but they’re only as good as the data they’re trained on. Our priority is to build a foundation of truth using reality: real light, real geometry and real motion deliver a more realistic end product, and replicating reality is at the core of what we do. Ultimately, we could be open to integrating AI in the future – if it enhances quality or efficiency – but never at the cost of realism. The physical world remains our gold standard.

DEF: FINALLY, WHAT IS ONE RECENT PROJECT OR TECHNICAL ACHIEVEMENT YOU ARE PARTICULARLY PROUD OF?

DR: Alongside our partners DI4D and Symbiote, we’ve developed a 4D performance-capture pipeline that produces per-frame texture maps, enabling us to capture and reproduce an actor’s performance with unparalleled fidelity. This technology, which we announced publicly at SIGGRAPH in 2024, was first used to capture Demi Moore’s performance for the climactic scenes of The Substance, where her face appears on part of the Monstro Elisasue character. To create the shots, the production combined full-body and prop scans of the various prosthetic elements with 4D performance data from a session Moore did in our Dorothy rig. This approach means that, even when the audience is looking at a full VFX asset, they are still watching an authentic performance from Moore – recreated exactly as she gave it, down to the minutest detail. That her performance in this film earned her a best actress nomination at the Oscars speaks to the potential of our technology for capturing physical scenes and subjects in a way that allows visual effects to supplement – rather than replace – real-world elements.

For studios pushing for photorealistic characters and next-generation VFX, this is a game changer, shortening the path from scan to screen and significantly elevating the quality of digital performance.

DEF: RADIANCE FIELDS AND GAUSSIAN SPLATTING ARE HOT TOPICS IN VFX. HOW DO YOU SEE THESE DEVELOPMENTS INFLUENCING THE FUTURE OF DIGITAL ASSET CREATION?

DR: These technologies are incredibly promising. Radiance fields and Gaussian splatting are redefining how we think about spatial representation and light capture, pointing toward a future where complex geometry and photorealism could be derived from minimal data input. This innovation could reduce overheads and unlock faster workflows for artists across a wide range of disciplines. At the moment, they don’t fit neatly into a traditional visual effects workflow, but we’re looking forward to seeing how they evolve – especially for environmental capture and look development.
DEFINITIONMAGS