ARTIFICIAL INTELLIGENCE INDUSTRY
couple of months for the last decade, a new piece of software has appeared. These developments wipe out huge parts of the VFX process, but after the initial shock of the new tools, artists instantly find fresh ideas for things they could never have done before."

TEETHING PROBLEMS
While the benefits of AI's use in animation are evident, there are some strings attached. "It still requires human intervention," Robinson adds. "Initially, voices were generated using PlayHT, yet we eventually reverted to actors. AI's proficiency in lip-sync dubbing doesn't match its capabilities in live action. Achieving accurate lip sync is notably easier with real actors. In the next year, I expect to see full lip-sync dubbing for animation via AI."

Darkin notes that video-to-video workflows employing Stable Diffusion currently struggle with making adjustments or maintaining a high level of control and stability in their output. "You can turn a video clip into anime or cyberpunk style, or add 'virtual makeup' to a character. However, you can't yet control the details, or rely on that character's look staying the same throughout a 15-minute show. It's a constant balance between stability, control and quality."

COMING UP
CEO and co-founder of Move.ai, Tino Millar, highlights advancements making mocap more accessible for content creators, overcoming hurdles like impractical suits and budget constraints. "We want to make the creation of 3D art more flexible and iterative," he explains. "We've invested years of research to create a toolset that turns 2D video into 3D character animation powered by leading AI, physics and biomechanics."

Lux Aeterna, synonymous with high-end VFX work, has been exploring AI through extensive internal R&D over the last two years, experimenting with tools such as Midjourney, Stable Diffusion and Runway's Gen-2, while building relationships with local universities. "One important trend we've identified is that, as these AI tools evolve, they're responding to the needs of users by providing more hands-on control over the output, not less," explains James Pollock, creative technologist at the company. "This means VFX artists can leverage their skills through tighter AI-assisted workflows and greater integration with existing industry-standard software, such as Houdini and Photoshop. We're also starting to see the question of commercial usability addressed, with models like Adobe's Firefly designed for commercial use."

For Darkin, the tools at the industry's disposal are 'such a moving target' that anything he says about AI in animation 'will be out of date' rather quickly. Recently, Alibaba Group's Institute for Intelligent Computing introduced Animate Anyone, a character animation technology that uses diffusion models to transform static images into animated character videos. He says text-to-video is going to take things to the next level. "It's at the stage where text-to-image was six months ago. But in the mid-term, we will be able to create high-quality video simply by typing prompts – or even having GPT-4 create them," Darkin adds. "At that stage, 'what film shall I watch?' will be replaced by 'what film do you want me to make for you?' – and the whole of Hollywood, Netflix and Disney will vanish in a puff of smoke. However, probably nobody will notice because other industries will be imploding all over the place."

As live action debates the various uses of AI, animation's pioneers are already reaping its rewards.

INFINITE HORIZONS: Midjourney (above) and Runway's Gen-2 (top right), and Kartoon Channel (top left), which has embraced AI