Definition Apr/May 2026 - Web

DEFINITION THE VIRTUAL FRONTIER

THE COPYRIGHT WILD WEST

The legal framework surrounding AI-generated environments is uncertain; Paprocki likens it to a 'wild west'. When scanning real locations, the existing rules still apply: permissions may be required, particularly for recognisable buildings or protected spaces. AI-generated environments, however, occupy a grey area. "If we reproduce a location by scanning, we follow the rules," he says. "But when AI generates a space from a prompt – how do copyrights even exist in that situation?" For now, the company has taken a pragmatic stance: act first, respond later. "If I am able to do it, I do it. If somebody has concerns, then we deal with it. It's too early to define strict rules." It is a position which highlights broader industry ambiguity, as legal systems struggle to keep pace with generative technologies.

"Many brands are concerned about how AI models are trained, so we often 'sandbox' the system," Shaw explains. "Inputs and outputs stay private, so the model is only trained on material we provide, rather than scraping data from the internet. We can't just type prompts like: 'Use a Canon 25mm lens and make the background look like X location.' You have to be careful about training data, intellectual property and how the generated image might feed back into future training datasets."

TAKING A LOOK AT AI WORLDBUILDING AND PHYSICAL LIGHTING

When we talk about using AI-generated worlds with physical lighting, we're really talking about a pipeline where AI provides the structure and intent, and the lighting department then turns that intent into something physically and photographically correct. "The workflow starts much earlier than people think," Russ Shaw says. "We don't necessarily just jump straight into AI imagery by using prompts (perhaps for storyboards), but we begin the process with 3D blockouts."

One technique involves combining AI-generated textures with traditional 3D modelling: artists first create simple 'blockout' scenes – basic geometric shapes that define the layout of buildings, streets or landscapes. These are usually simple geometries with rough architecture, basic horizon lines, proxy props and even just cubes and planes standing in for major shapes. Blockouts deliver true spatial relationships – for example, where light would fall, where shadows land, where occlusion might occur.

"We take the 3D layout and feed it into an AI model that can 'skin' the blockout – essentially texturing it, detailing it, proposing materials, weathering, atmospheric effects, colour grades, vegetation patterns and surface detail," Shaw explains. "This keeps the structure grounded while letting the AI generate the aesthetic layer. Alternatively, a storyboard can provide a visual structure reference. AI then fills in the environmental detail based on the framing, angle and intended mood of each shot. Either way, the goal is the same: AI is generating a world, but the design must still come from us."

AI is used to generate visual surfaces for the shapes, like brickwork or vegetation. "The structure of the scene still comes from the creative team," Shaw says. "AI is essentially just skinning it. We also create mood boards and train models on specific references. If we're working on a car, for example,
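The blockout-then-skin workflow described above can be sketched in code. This is an illustrative toy, not the studio's actual pipeline: a blockout is modelled as a handful of axis-aligned boxes, and we rasterise an orthographic depth pass of the kind a depth-conditioned image model could use to 'skin' the scene while the authored structure stays fixed. All names and numbers here are hypothetical.

```python
# Toy blockout -> depth pass, as conditioning for an AI "skinning" step.
from dataclasses import dataclass

@dataclass
class Box:
    x: float; y: float; z: float   # min corner
    w: float; h: float; d: float   # extents

def depth_pass(boxes, width=8, height=8, extent=4.0):
    """Orthographic depth buffer looking straight down; larger = closer."""
    depth = [[0.0] * width for _ in range(height)]
    for j in range(height):
        for i in range(width):
            # pixel centre in world units
            px = (i + 0.5) / width * extent
            py = (j + 0.5) / height * extent
            for b in boxes:
                if b.x <= px <= b.x + b.w and b.y <= py <= b.y + b.h:
                    depth[j][i] = max(depth[j][i], b.z + b.d)  # top face
    return depth

blockout = [Box(0, 0, 0, 2, 2, 1),      # low building slab
            Box(2.5, 0.5, 0, 1, 1, 3)]  # taller tower proxy
d = depth_pass(blockout)
# This depth map (plus normals, masks, etc.) is what would condition the
# generative model: structure stays authored, AI supplies the aesthetic layer.
```

In a real pipeline the blockout would come from a DCC tool and the conditioning passes from its renderer; the point is only that spatial relationships (where shadows land, where occlusion occurs) are locked in before any generative step runs.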

POINT TO POINT Veles uses Gaussian splatting, which reconstructs 3D environments using dense point clouds derived from images or scans
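The core idea behind Gaussian splatting can be shown in a few lines. This is a deliberately simplified 2D, isotropic sketch, not the Veles pipeline: the scene is a cloud of Gaussians, each with a position, depth, spread, colour and opacity, and a pixel is shaded by compositing their footprints front to back. Real systems use anisotropic 3D Gaussians plus a camera projection step.

```python
# Toy 2D Gaussian splatting: front-to-back alpha compositing of splats.
import math

def splat_pixel(px, py, gaussians):
    """gaussians: list of (x, y, depth, sigma, colour, opacity), colour in [0,1]."""
    colour, transmittance = 0.0, 1.0
    for x, y, _, sigma, c, opacity in sorted(gaussians, key=lambda g: g[2]):
        # Gaussian falloff of this splat's footprint at the pixel
        falloff = math.exp(-((px - x) ** 2 + (py - y) ** 2) / (2 * sigma ** 2))
        alpha = opacity * falloff
        colour += transmittance * alpha * c
        transmittance *= 1.0 - alpha
    return colour

# Two overlapping splats: a bright one in front of a dark one.
scene = [(0.0, 0.0, 1.0, 1.0, 1.0, 0.8),   # near, bright
         (0.0, 0.0, 2.0, 1.0, 0.2, 0.8)]   # far, dark
centre = splat_pixel(0.0, 0.0, scene)      # dominated by the near splat
```

Because each splat is a smooth, differentiable blob rather than a hard point, millions of them can be fitted to photographs by gradient descent, which is what makes the technique attractive for reconstructing real locations from images or scans.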


DEFINITIONMAGAZINE.COM
