Definition June 2021 - Web


ABOVE Arrays are most commonly mounted to helicopters and drones, but can also be mounted to cars

Fast & Furious 9. This often involved mounting the array to a car, creating background plates that were later used to place the principal cast at the heart of action scenes. Often, a scene was shot several times, with the array car driven in the position of one of the picture cars. “The main unit would shoot and, while that was resetting (or that shot was over), we jumped on to the arrays and did all their runs. So, if there were five cars racing down the road, weaving and hitting one another, the camera vehicle would drive each car’s position. The stunt drivers are very accurate and match the positions very well.”

One thing that everyone involved seems to agree on is that the appetite for production work has, since the doldrums of 2020, become insatiable. “From October, it just went stratospheric and we’ve got so much,” Marzano enthuses. “I’ve only got five days off between now and the end of June!”

regulation. What’s more, big camera arrays are beginning to test the limits of the technology. Braben describes the concerns of size, space and weight, as well as the performance of the stabilised camera system. “The Shotover – or whatever it may be – has to be able to stabilise that payload. Going from a single camera and lens (albeit a fairly weighty zoom) to six cameras and six lenses is getting to the limits of what these systems were designed to stabilise.” However, helicopters are not the only platform for arrays. Oh was head operator for drones, and operated all of the array work on

STITCH IT IN POST

John Moffatt is a visual effects supervisor with a two-decade history of films, including many of the Harry Potter series, as well as movies such as Wonder Woman 1984, The Da Vinci Code and Atonement. Good results from arrays, he confirms, require planning. “Each of the lenses needs to be grid tested, which means we shoot a black & white grid, mounted on a flat board, for each of the lenses on each of the cameras. That allows us to see what distortion and barrelling the lens is creating. Once we’ve analysed that, we’re able to remove the distortion, allowing us to put the six elements together in Nuke or the 3D software.” As the end user of the data created by camera arrays, Moffatt is a big fan. “This technology allows you to acquire more information than you need, to make choices later on. There’s many a producer who says, ‘Woah, why do you want that?’, because there’s a significant upfront cost. But, ultimately, it allows for the creative choices downstream, which is the thing everyone remembers. Whether or not the director and cinematographer were happy with the results, it’s easy to find people who comment on visual effects work and it gets labelled as ‘bad CGI’. What you don’t see is the good CGI, because it doesn’t get noticed!”
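The grid test Moffatt describes feeds a mathematical model of each lens's distortion, which is then inverted before the six elements are stitched. As a minimal sketch of that idea, the snippet below applies and then removes radial (barrel) distortion using the common Brown-Conrady polynomial model; the coefficients K1 and K2 are hypothetical stand-ins for values that would, in practice, be estimated from the photographed grid chart.

```python
import numpy as np

# Hypothetical radial-distortion coefficients, of the kind estimated
# by analysing a flat black & white grid chart (Brown-Conrady model).
K1, K2 = -0.12, 0.018  # negative K1 gives barrel distortion


def distort(pts):
    """Apply radial distortion to normalized image coordinates, shape (N, 2)."""
    r2 = np.sum(pts**2, axis=1, keepdims=True)
    return pts * (1 + K1 * r2 + K2 * r2**2)


def undistort(pts, iters=20):
    """Invert the distortion by fixed-point iteration: start from the
    distorted position and repeatedly divide out the radial factor."""
    undist = pts.copy()
    for _ in range(iters):
        r2 = np.sum(undist**2, axis=1, keepdims=True)
        undist = pts / (1 + K1 * r2 + K2 * r2**2)
    return undist


# A 3x3 grid of corner points stands in for the calibration chart.
grid = np.array([[x, y] for x in (-0.5, 0.0, 0.5) for y in (-0.5, 0.0, 0.5)])
recovered = undistort(distort(grid))
residual = np.max(np.abs(recovered - grid))
```

Once every camera's distortion has been removed this way, the straightened elements share a common ideal-pinhole geometry and can be lined up and merged in Nuke or a 3D package, as the article describes.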


