LIVE Mar/Apr 2025 – Web

ARTIFICIAL INTELLIGENCE


In Here, Metaphysic’s deepfake tech returns Tom Hanks and Robin Wright to youthful, Forrest Gump versions of themselves

studios and AI companies, testing the available technology for whether it could support Zemeckis’ vision. “At the end of the day, only one test – Metaphysic’s – was successful,” states Ulbrich, who was working elsewhere at the time. “I see the test; it’s Tom Hanks delivering a line from the movie. Then they show the same footage again, and he’s 20 years old, delivering the same line. Then they tell me they’re doing it live, in real time. I’m like ‘wait, what?’”

This instantaneous element has serious implications, creating avenues for live performances and productions on top of traditional, pre-recorded ones. Of course, the Elvis AGT moment had already happened, proving that live GenAI was possible in entertainment, but Here was the first time Ulbrich had witnessed it with his own eyes. “It blew my mind,” he admits. “I could take the best CG team and the best company on Earth – hundreds of people with limitless sums of money – and we couldn’t deliver the same thing. It’s a completely unique technology for production.”

A MODEL WITH A GOOD MEMORY

Besides saving time, live GenAI also boasts another benefit: “It’s always photoreal,” claims Ulbrich. Trained on ‘trillions of pixels’, the AI model ‘starts on the other side of the uncanny valley’, as opposed to CGI. “Things that are hard to do in CGI, like eyes, mouths, lip sync and emotionality, are easy for us. It’s all built on photography.

“We, as humans, have seen other humans since birth, so we’re all experts in the human face. When it’s weird and wrong,” Ulbrich claims, “we know it.”

When working with late talent, data procurement options are limited by what’s physically possible – that is, Frank Sinatra and Sammy Davis Jr can’t show up for a photoshoot. When training the model for Here, however, Hanks, Wright, Paul Bettany and Kelly Reilly (who play Hanks’ parents) were able to visit the studio, where Metaphysic could record them for ‘30 or 40 minutes’ while they moved around the room and spoke. “We are training large neural models, which are able to memorise lots of things about images at scale and then recreate them,” shares Ulbrich.

But these models have to be pretty well fed to believably recreate a person’s likeness – and that’s where additional footage comes in. With Hanks, it was relatively easy. “Tom Hanks is probably one of the most documented humans in the 20th century,” argues Ulbrich, “just by the sheer number of films he’s starred in and other appearances. He’s a very well-documented person – there is an incredible portfolio of him. That’s the source data.

“We need high-quality data, which means high-resolution footage, and we need a variety of lighting conditions,” Ulbrich continues. “We want to see that subject in as many different scenarios as possible, shot many different ways, and with different lenses. We’re not just training on their face, we’re training on light, how it behaves on their face and on how their face articulates.

“AI doesn’t see shots; it sees pixels,” Ulbrich clarifies. “We get it to analyse and memorise the trillions of pixels that make up Tom Hanks’ face in the case of Here. We can then point that AI at an input, and in this case our input is a live performance from actual actors.”

In filmmaking, digital de-aging doesn’t need to be instantaneous (it hasn’t been, up until this point), but it definitely doesn’t hurt. “During shooting, we can look up and see Tom and Robin in their 50s and 60s, while seeing them at 20 years old on the playback monitor. For me, that was jaw-dropping,” Ulbrich remembers. “I can’t forget this now that I’ve seen it for myself.”
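The training requirements Ulbrich lists – high-resolution footage, a variety of lighting conditions, different lenses – amount to a data-curation step that runs before any model sees a frame. The sketch below is purely illustrative: Metaphysic’s actual pipeline is proprietary, and every field name and threshold here is an assumption.

```python
# Illustrative only: scoring candidate source footage against the criteria
# Ulbrich lists -- resolution, lighting variety, lens variety -- before it
# is admitted to a training set. All field names and thresholds are assumed.
from dataclasses import dataclass

@dataclass
class Clip:
    height_px: int   # vertical resolution of the footage
    lighting: str    # e.g. "daylight", "tungsten", "low-key"
    lens_mm: int     # focal length the clip was shot with

def select_training_clips(clips, min_height=1080):
    """Keep high-resolution clips and report how much variety they span."""
    usable = [c for c in clips if c.height_px >= min_height]
    coverage = {
        "lighting_conditions": {c.lighting for c in usable},
        "lenses": {c.lens_mm for c in usable},
    }
    return usable, coverage

clips = [
    Clip(2160, "daylight", 35),
    Clip(1080, "tungsten", 85),
    Clip(480, "daylight", 50),   # too low-res to teach pixel-level detail
]
usable, coverage = select_training_clips(clips)
print(len(usable), sorted(coverage["lighting_conditions"]))
# 2 ['daylight', 'tungsten']
```

The point of the coverage report is the one Ulbrich makes: the model is trained not just on a face, but on how light behaves on that face, so the curated set has to span conditions, not merely accumulate frames.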

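The live playback Ulbrich describes – the crew watching the de-aged actors on the monitor while the scene is shot – can be pictured as a per-frame loop with a hard latency budget. A minimal sketch, with a stub transform standing in for the trained neural network; the frame rate and timing logic are assumptions, not Metaphysic’s implementation.

```python
# Illustrative only: a real-time face-swap loop of the kind described for
# Here, where de-aged output appears on the playback monitor during the
# take. deage_frame is a stub; the real system runs a trained neural
# network on every frame. The 24 fps budget is an assumption.
import time

FRAME_BUDGET_S = 1 / 24  # a 24 fps monitor leaves roughly 41 ms per frame

def deage_frame(frame):
    """Stub for the neural face swap; a real model maps the live actor's
    pixels to the younger likeness it memorised during training."""
    return [px ^ 0xFF for px in frame]  # placeholder pixel transform

def playback_loop(frames):
    """Process each captured frame and count any that miss the budget."""
    late = 0
    output = []
    for frame in frames:
        start = time.perf_counter()
        output.append(deage_frame(frame))
        if time.perf_counter() - start > FRAME_BUDGET_S:
            late += 1  # a real rig would have to drop or delay this frame
    return output, late

frames = [[10, 20, 30], [40, 50, 60]]  # toy stand-ins for pixel rows
swapped, late = playback_loop(frames)
print(swapped[0], late)
# [245, 235, 225] 0
```

The budget is what separates this from conventional de-aging: offline VFX can spend hours per frame, while a live pipeline must finish every frame inside the playback interval or fall visibly behind the performance.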