DEFINITION August 2018


HOLOGRAPHIC CAPTURE FEATURE

DEGREES OF FREEDOM
Visby provided the ‘engine’ for the data coming out of the 24 Sony cameras. Co-founder Ryan Damm, who calls himself a light-field evangelist, says: “The devices I’m interested in are 6DOF, which means you can translate in three dimensions and you can look in three directions. A 0DOF device is no different from putting a TV on your face: no matter where you look, you are seeing the same pixels. Then there are the 3DOF devices like Google Cardboard, with which you can look in any direction but have no freedom to move or walk around. They are convenient, but you couldn’t say they are fully holographic.

“The way of the future is really these 6DOF devices. Today that means the Oculus, the Vive, the PSVR and lots of other smaller players in the space. The difficulty when you are developing for these devices is that you have access to all six degrees of freedom: the X, Y, Z position of the viewer and the exact direction they’re looking. The display has to be 6DOF and the content has to be too, as the mobile VR market is moving towards 6DOF. By the time VR matters as an industry, the current 3DOF approaches like 360° video might be superseded.”

Ryan thinks there are really only two approaches to creating holographic content: a general class of volumetric video, and a light-field approach. “The polygon model, which is the volumetric approach, basically treats the world as a video game: whatever content you want to display, you represent as a series of polygon meshes that you then wrap with textures.

“The light-field model is totally different. When I talk about a light field I mean all of the light rays you can see in a scene. One of the problems with volumetric capture is that capturing the world as polygons is not yet a mature technology. There are a handful of techniques, but they are all defined by the need to generate your polygon meshes and then your texture maps. There are different ways of scanning the world: some people use LiDAR; some use a structured-light approach like the Kinect, where you cast an IR pattern over the scene; some just use lots and lots of cameras and try to infer the construction of the scene from multiple perspectives.

“In contrast to that are light fields. This is a relatively new way of representing the light in the world. When you see something, you are taking a series of point samples of the light field as it bounces off an object; all the light bounces off the thing you’re seeing and back into your eyes,” continues Ryan. “If you could capture that light and recreate it exactly as it was, you would effectively have a hologram. You’d see every possible perspective, and as you move around you would see the correct reflections. Fundamentally, if you have the light rays you don’t need the model; it is the more direct approach.

“The downside is that this is a colossal amount of data; that’s fundamentally the biggest problem. That, along with the aliasing that results in moiré, and interpolation, are the other big problems facing light fields and VR’s future.”

Visby is pursuing a pure light-field future: no polygons, just the encoding of light. Other companies, such as OTOY, are building volumetrically and layering light fields on top. In this hybrid system, the light field mostly represents the specular element. But this too is data-intensive, as it uses both polygons and light-field texture maps.
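Damm’s distinction between 3DOF and 6DOF tracking can be sketched as a pair of pose types. The field names and units below are illustrative only, not drawn from any actual headset SDK:

```python
from dataclasses import dataclass

@dataclass
class Pose3DOF:
    """Rotation only (e.g. Google Cardboard): yaw, pitch, roll in
    degrees. The viewer's position is fixed - turning your head works,
    walking does not."""
    yaw: float = 0.0
    pitch: float = 0.0
    roll: float = 0.0

@dataclass
class Pose6DOF(Pose3DOF):
    """Adds translation (e.g. Oculus, Vive, PSVR): the viewer can also
    move in x, y, z (metres), so the content must supply a correct view
    from any nearby position, not just any direction."""
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

# A 3DOF device can only report where you are looking...
look_left = Pose3DOF(yaw=90.0)
# ...a 6DOF device also reports where you have moved to.
step_forward = Pose6DOF(yaw=90.0, z=-0.5)
```

The extra three translation fields are exactly what 360° video cannot answer: it stores one view per direction, but no new views for a viewer who steps sideways.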
ABOVE No fewer than 24 Sony RX0 cameras make up the new Meridian light-field camera. RIGHT Lytro’s 95-camera Immerge light-field capture device.
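The “colossal amount of data” Damm describes can be made concrete with a back-of-envelope calculation for uncompressed multi-camera capture. The resolution, frame rate and bit depth below are assumptions for illustration, not the actual specifications of the Meridian or Immerge rigs:

```python
def raw_rate_gbps(cameras, width, height, fps, bytes_per_pixel=3):
    """Uncompressed data rate, in gigabits per second, for an array of
    identical cameras (bytes_per_pixel=3 assumes 8-bit RGB)."""
    bytes_per_second = cameras * width * height * fps * bytes_per_pixel
    return bytes_per_second * 8 / 1e9

# 24 cameras at an assumed 3840x2160, 30fps, 8-bit RGB:
meridian_like = raw_rate_gbps(24, 3840, 2160, 30)   # ~143 Gbps raw

# 95 cameras with the same assumed per-camera settings:
immerge_like = raw_rate_gbps(95, 3840, 2160, 30)    # ~567 Gbps raw
```

Even under these modest assumptions, the raw stream is far beyond what consumer links or storage can sustain, which is why efficient light-field encoding, Visby’s focus, is the crux of the problem.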


