FEED Issue 10

Exploring the future of media technology

HOUSES OF WORSHIP

INSIDE IRL STREAMING

ELECTRIC OB VEHICLES

FORTNITE SKIRMISHERS

BUILDING THE FUTURE

3 WELCOME

So that was 2018 – the year when machine learning became an essential part of everyone’s marketing pitch, the year when everyone realised that esports was not just big, but that it had been big for a long while, and most importantly, the year that launched FEED. We’re still only ten months into our journey, but we’re inspired by and grateful for all the support and enthusiasm we’ve received from all sides of the online moving image industry. Keep letting us know how we can help!

December is a time for lights and shows and delightful illusions, so we thought this issue would be a good place to explore the revolution kicked off by augmented and mixed reality. Incorporating realistic virtual sets and digitally originated content into a live video feed is becoming available to just about everyone. Live productions, like the opening ceremony for this year’s League Of Legends championship, are blowing audiences away by adding lifelike animated characters in a real-time multi-camera environment. Pokemon Go has made AR a household word, but the processing power each of us carries in our back pocket is about to unleash a whole new level of data and graphical interaction to our everyday lives, whether it’s seeing how Ikea furnishings might look in our living room, or how our street looked in the year 1775.

In this issue we also take a look at one of the fastest growing sectors in online video – houses of worship. We visit churches in Korea and the UK that are using digital tools to bring in supporters, and we visit an innovative TV station on the grounds of the Jamia Mosque in Nairobi that is becoming a major Islamic broadcaster for the continent of Africa.

EDITORIAL EDITOR Neal Romanek +44 (0) 1223 492246 nealromanek@bright-publishing.com

CONTRIBUTORS Ann-Marie Corvin Laura Jeacocke Adrian Pennington Philip Stevens CHIEF SUB EDITOR Beth Fletcher SENIOR SUB EDITOR Siobhan Godwood SUB EDITOR Felicity Evans

ADVERTISING SALES DIRECTOR Matt Snow +44 (0) 1223 499453 mattsnow@bright-publishing.com KEY ACCOUNTS Chris Jacobs +44 (0) 1223 499463 chrisjacobs@bright-publishing.com

DESIGN DESIGN DIRECTOR Andy Jennings DESIGN MANAGER Alan Gray SENIOR DESIGNER & PRODUCTION MANAGER Flo Thomas DESIGNER Lucy Woolcomb

PUBLISHING MANAGING DIRECTORS Andy Brogden & Matt Pluck

NEAL ROMANEK, EDITOR nealromanek@bright-publishing.com @rabbitandcrow @nromanek

Need to update or cancel your FEED subscription? Email us at feedsubs@bright-publishing.com BRIGHT PUBLISHING LTD, BRIGHT HOUSE, 82 HIGH STREET, SAWSTON, CAMBRIDGESHIRE CB22 3HJ UK

TO SUBSCRIBE TO FEED GO TO PAGE 62


06 NEWSFEED Dispatches from the world of online video

12 YOUR TAKE How to choose a content recommendation solution

15 STREAMPUNK Samsung’s CJ890 curved monitor is good for your content – and your eyes

20 AR, MR, ALL THE R’S We look at the explosion of virtual content in live broadcast

26 GENIUS INTERVIEW Imaginary Pictures’ Richard Mills talks about capturing content for AR

32 XTREME Broadcasting the Fortnite Summer Skirmish

36 HOUSES OF WORSHIP How churches are boosting their reach by streaming video

46 THE LIVE LIFE The pleasures and sorrows of IRL (In Real Life) streaming

50 ROUND TABLE What does our media tech future look like? And what should it look like?

58 HAPPENING FEED teams up with the SportsPro OTT Summit in Madrid

64 FUTURE SHOCK Megahertz rolls out the very first all-electric newsgathering van for zero-carbon broadcasts

68 START-UP ALLEY This month, we showcase new ways to share content with friends, a game show app and the science of advertising

72 OVER THE TOP We have the technology, but do we have the vision?


BREAKING NEWS FROM THE STREAMING SECTOR

WHITE HOUSE IN FAKE VIDEO FURORE

Last month, the White House revoked the press credentials of CNN journalist Jim Acosta after a heated exchange with President Donald Trump. The reason given for the revocation was “disrupting press proceedings”. This was followed by assertions that Acosta had “karate chopped” an intern after declining to give up a hand-held microphone to her. The White House press team pointed the public to a video purporting to show the ‘chop’ up close. The video was sourced from Paul Joseph Watson, a British YouTuber and correspondent for conspiracy-centric news site, InfoWars.

A furore erupted over whether the video had been digitally manipulated, with some claiming it had been sped up to make Acosta’s effort to hold on to the microphone seem violent. Watson’s video was a zoomed-in clip, in GIF format, of the original. Watson denied manipulating the video in any way, saying he was forwarding something that had originally appeared on The Daily Wire. In an article on Vice’s Motherboard (motherboard.vice.com), two experts in image analysis believed there had been no alteration to the original video, although one of them, Jeff Smith of the University of Colorado Denver, did note that there were “duplicate frames at the moment of contact”. Hany Farid, a UC Berkeley professor with a speciality in image analysis and human perception, said that after his analysis, he did not believe the video had been doctored.

BuzzFeed News did its own analysis of the video. BuzzFeed reporter, Charlie Warzel, concluded in his article, Welcome To The Dystopia: People Are Arguing About Whether This Trump Press Conference Video Is Doctored: “While it’s not technically ‘sped up’ by intent, it effectively is in practice. The video-to-GIF conversion removes frames from the source material by reducing the frame rate. The GIF-making tool, GIF Brewery, for example, typically reduces source video to ten frames per second. Raw, televised video typically has a frame rate of 29.97 frames per second. When frames drop out, the video appears jumpier. Acosta’s arm seems to move faster.”

Mock YouTube videos followed, one of which included Jim Acosta’s karate chop severing the intern’s arm at the shoulder, complete with sprays of blood. The debate has galvanised concerns about the impact digital manipulation of documentary video evidence can have on public discourse. It’s feared that the very question of whether or not an image or video has been manipulated begins to erode trust in the press and institutions.

The New York Times reporter, Adam Ellick, commented on NPR’s radio programme, Fresh Air: “A lot of people, a lot of analysts, even, still describe fake video as sort of the future frontier of fakes. But it’s the present… I just worry that we’re constantly lagging behind a lot of other countries when it comes to detecting this and sort of wrestling with the consequences of disinformation.”


CRITERION COLLECTION TO LAUNCH ITS OWN VOD SERVICE

The Criterion Collection is starting its own on-demand service in spring next year. The brand is renowned for its high-end releases of artistically and historically significant films. The Criterion Collection had been available on Turner’s classic film service FilmStruck, until FilmStruck abruptly shut down last month. The loss of an online Criterion Collection service was met with dismay by film-makers and film enthusiasts worldwide.

“We are incredibly touched and encouraged by the flood of support we’ve been receiving since the announcement that FilmStruck is shutting down,” announced the Criterion Collection on its official website. “Our thanks goes out to everyone who signed petitions, wrote letters and newspaper articles, and raised your voices to let the world know how much our mission and these movies matter to you.”

The new Criterion Channel will continue with the same types of programming and series that had been available on FilmStruck. The new service will be wholly owned and controlled by the Criterion Collection. The service aims to open in the US and Canada at launch, with additional territories rolled out later. The Criterion library will also be available through a new WarnerMedia consumer platform, which is due to launch in late 2019.

NHK LAUNCHES FIRST 8K TV CHANNEL

NHK began regular 8K broadcasting this month. On 1 December the Japanese public broadcaster launched two new channels, one broadcasting in 4K, the other in 8K. “I believe this will be the world’s first 8K TV channel,” said NHK senior producer, Mika Kanaya, in an interview earlier this year at MIPCOM.

The 8K channel will kick off with specially produced 8K content, including concerts by three major European orchestras – the Berlin Philharmonic, the Vienna Philharmonic and the Royal Concertgebouw Orchestra. The 8K channel will broadcast 12 hours and ten minutes of 8K content each day, while the new 4K channel will broadcast 18 hours of content.

8K, or Super Hi-Vision, offers a resolution 16 times that of HDTV (7680x4320 pixels) and higher frame rates, including 120, 60 and 59.94fps. NHK’s Super Hi-Vision format also features 22.2 channel immersive audio, which includes top, middle and bottom speakers, in addition to left and right and surround, as well as an LFE (low frequency effects) speaker.

NHK has been aggressively developing its 8K capture and broadcast infrastructure in preparation for the Tokyo 2020 Summer Olympics, including 8K coverage of the 2018 Winter Olympics in South Korea.
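Those headline numbers are easy to sanity-check. The quick sums below are illustrative only – the 10-bit, 4:2:2 sampling assumption is ours, not NHK’s published specification:

# Back-of-envelope figures for Super Hi-Vision (8K) versus HDTV.
# Assumption (not from the article): 10-bit samples with 4:2:2 chroma subsampling.
hd_pixels = 1920 * 1080
uhd8k_pixels = 7680 * 4320
print(uhd8k_pixels / hd_pixels)        # 16.0 – "16 times the resolution of HDTV"

bits_per_pixel = 20                    # 10-bit luma + 10 bits of chroma per pixel (4:2:2 average)
fps = 120                              # the highest frame rate in NHK's format
raw_gbps = uhd8k_pixels * bits_per_pixel * fps / 1e9
print(f"Uncompressed 8K at {fps}fps is roughly {raw_gbps:.0f} Gbit/s before compression")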


DAVINCI RESOLVE GETS MAJOR UPGRADE

Blackmagic Design has released DaVinci Resolve 15.2, a major update to its editing, colour correction, visual effects and audio post software, with more than 110 new features added.

In the new DaVinci Resolve, the editing timeline now draws at a higher frame rate, making the editing experience feel faster and more fluid. New animations have also been added so clips slide in and out of position, helping to quickly determine how edits affect other clips in the timeline. In addition, the new version 15.2 adds features that make editing of clips between multiple timelines easier. Timelines or compound clips can now be loaded into the source viewer and edited into the current timeline in their decomposed state. Editors can tap the X key to instantly mark a clip in the source timeline and edit that clip directly into the active program timeline.

Keyboard customisation has been completely redesigned. The new interface lets editors quickly see which keys are in use and assign shortcuts. Keyboard sets can be shared between systems and shortcuts can be assigned to different pages and user interface panels within the application.

It also includes new ResolveFX plug-ins. A Blanking Fill plug-in automatically fills black letterbox or pillarbox areas of the screen with defocused video, which makes it easier to use vertical video or photos in standard widescreen timelines without having to manually create a background. A new ‘Beauty’ plug-in is designed to smooth textures and blemishes on skin and other surfaces. There are also new plug-ins for ACES transformations and limiting gamut.

The upgrade also includes new FairlightFX audio plug-ins. A new Stereo Fix plug-in features presets for the common channel mapping operations, so clips can be fixed with a click. Users can fix problems such as dual-channel mono dialogue that was edited in stereo tracks. A new Multi-Band compressor features real-time spectral analysis and four different independently adjustable frequency bands.

HUAWEI DEVELOPING AR GLASSES

Huawei joins other consumer tech providers, including Apple, in looking toward practical and widespread use of augmented and mixed reality wearables and accessories within the next year or two.

Chinese technology company, Huawei, has revealed it’s developing augmented reality glasses. Richard Yu, CEO of Huawei Consumer Business Group, unveiled the news to CNBC, saying the company aims to release the AR headset in 2020. “With this, you can have AR glasses working with your phone, maybe watch more of a large area,” Yu said.

Huawei has aggressively entered the digital technology space in recent years, offering everything from sophisticated OTT platforms to a high-end range of smartphones. Its AI-powered Huawei P20 Pro has been billed as one of the top media mobile phones around; with its superior camera and AI, it is now an attractive alternative to Apple and Samsung.


VERIZON DIGITAL MEDIA SERVICES GETS UNREEL

Verizon Digital Media Services has partnered with Unreel Entertainment to help customers launch branded OTT video streaming apps and online destinations. Unreel Entertainment is a developer of direct-to-consumer turnkey OTT solutions and currently provides OTT technology powering several Verizon Digital Media Services customers, including Bonnier Corporation, Level One, The Enthusiast Network’s Adventure Sports Network, World Poker Tour and Zazoom Media Group.

The Verizon Digital Media Services platform offers a global content delivery network, with video streaming services, and compliance and monitoring services aimed at helping broadcasters, content owners and service providers deliver high-quality OTT content at scale, regardless of location. Verizon Digital Media Services’ Channel Schedule API can stitch VOD content together to form a 24/7 ad-supported live channel for use within a customer’s branded app or online destination provided by Unreel.

“We are committed to simplifying the OTT content delivery ecosystem for broadcasters and content providers,” said Chris Carey, chief strategy officer for Verizon Digital Media Services. “Our customers can leverage Unreel Entertainment to manage multiple apps and online destinations, syndicate more than seven million videos into content streams and distribute their own content to millions of viewers within Unreel’s OTT network.”

FACEBOOK MESSENGER TESTS ‘VIDEOS TOGETHER’

Facebook is reportedly testing a new ‘watch videos together’ feature for its Messenger app. It appears the feature would allow Facebook Messenger users to view and comment on videos playing simultaneously across devices in the Messenger chat window. The new feature was discovered in Messenger’s code by software developer Ananay Arora. The code included descriptions of the feature, such as ‘Tap to watch together now’, ‘Try restarting the video to notify friends you want to watch together’, and ‘Chat about the same videos at the same time’. A Facebook spokesman confirmed with technology news site, TechCrunch, that the code represented an internal test. Facebook’s Watch Party is a similar feature, which allows simultaneous viewing of video and commentary in Facebook Groups.


12 YOUR TAKE Content Discovery

REQUEST FOR CONTENT DISCOVERY

Choosing a content discovery engine for your online video business requires the same personalisation and care that you would want to pass on to your customers

Words by Arash Pendari, CEO & Founder, Vionlabs

When operators want to upgrade their systems they often turn to a process that we know as the RFP (Request For Proposal) process. There are a number of steps and versions: Request for Information, Request for Quote, Request for Purchase, and probably others less known, but they are all very similar in structure. The process generally consists of the operator sending out a long list of requirements, essentially functionality and user stories that the desired system should meet in order to be viable. The requirements might range from what standards to support to how a specific input or data point should behave within the system and how it will integrate into the current framework.

RFP is a great tool when there are clear standards and many equal suppliers, and when you are primarily looking for a low price. Say you are a pay TV service looking for a new set-top box to replace your old one. You know pretty much what you want the box to do, the formats it should support and the systems that it needs to integrate with. After some work, you will end up with two to three almost equal suppliers that you can have compete to become the lowest bidder. Job done! Now you will have the best product for the lowest price, or at least pretty close to it.

But there are also many times where the traditional RFP process will not get you a better deal, especially when subjective evaluation is involved. If you want to build a new marketing campaign, negotiate your channel lineup or create a strong, new asset for your SVOD service, you should probably avoid a traditional RFP process. It’s not the number of actors, the length of the script, or the number of explosions written into it that will determine how successful your content is. I have never heard of someone using an RFP to pick their new hit show. Stories, emotions and scripts are hard, if not impossible, to evaluate based on those types of metrics. The process has to be one in which you look at each individual script or idea and try to determine the value and strategic fit for each specific opportunity. Looking at the recommendation and video discovery field, this particular problem also surfaces for many operators.

HOW STRATEGICALLY IMPORTANT IS THE CONTENT DISCOVERY TO YOU AND THE CUSTOMER EXPERIENCE YOU WANT TO DELIVER?

EMOTIONS VS MEASUREMENTS
The discovery process is based on user experience and emotions. Some parts of the process can be measured and optimised, and systems need to adhere to the technical standards of your back end, but the real value between services can only be measured through interaction with consumers. Using an RFP process when looking for a discovery engine might have been a great choice back when they were essentially analytics and search engines based on a specific rule-set. Such engines produced a list of items according to criteria, like the date ingested into the system, popularity, genres and actors. These systems were basically an automated search engine based on the mindset of your editorial team. In order to enable more advanced features, such as “Weekly Discovery”, “Because you watched” or even simple personalised sorting and display, something more nuanced is needed.

The more advanced, often AI-driven, engines of today are more and more based on perception, behaviour and emotions. This provides you with a more accurate set of tools to hyper-personalise a service. When you are looking to upgrade your video discovery and personalisation, you need to take this into account. If you are looking to innovate and create a service that inspires trust and guides your users through what might be a huge library of assets, you cannot simply choose the lowest bidder. You must select the best possible option for the experience you want to create. So, instead of sticking to a standard RFP process, adapt it, change it. Or perhaps even kill it and try something new.

DISCOVERING YOUR DISCOVERY
First, you need to ask yourself the key question: how strategically important is content discovery to you and the customer experience you want to deliver? If you decide it is an area of critical importance, make room for a number of pilot projects. As long as you have a reasonably flexible back-end CMS, you can easily add a cloud-based system to start supplying you with the data you need to complete more advanced use-cases. If you are buying a new system, make sure it can handle multiple suppliers so you can A/B test the results and quickly shift between suppliers as the tech evolves.

If it’s not critical but still important, take some extra time to define the key performance indicators (KPIs) you care about rather than specific features. In the end, you are looking for results and you will never know exactly how you want to build the system in two, three or five years’ time. OTT, SVOD and on-demand have not even been around for ten years, so trying to determine in detail today how it will look in three years’ time is not a great strategy. Instead, try a pilot or a prototype, choose a specific use-case, build it, deploy it, and if it works keep building additional ones. Updating things in small steps and A/B testing the effects will give you a good way of determining the quality of the service and at the same time a better understanding of the behaviour of your customers.

Perceptions and behaviours are central to the user experience, so make sure you have a KPI set that matches up with the behaviour you want to create. For most recommendation engines, measuring two simple things will give you a fast indication of whether you are doing the right thing or not. The first is time to content – ie, the time spent from starting the interface to watching something. You want users to spend their time watching, not searching. The second is additions to “My Watchlist”. A good discovery experience will expose a more relevant selection of content to your users. A longer watchlist will make the customer stay longer and consume more.

In summary, video is art. Its value is in the emotional experience it produces. Your user experience should enable and strengthen that.

WHAT SHALL WE WATCH TODAY? When it comes to ‘because you watched’ lists, today’s engines use customer behaviour and emotions to provide the best customer experience possible
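Both of the KPIs Pendari highlights – time to content and watchlist additions – fall straight out of ordinary playback event logs. A minimal sketch, assuming a simple list of per-session figures rather than any particular analytics product:

from statistics import median

# Invented sample data: seconds from opening the app to pressing play,
# plus the number of watchlist additions in each session.
sessions = [
    {"seconds_to_play": 42, "watchlist_adds": 1},
    {"seconds_to_play": 185, "watchlist_adds": 0},
    {"seconds_to_play": 23, "watchlist_adds": 2},
    {"seconds_to_play": 95, "watchlist_adds": 0},
]

time_to_content = median(s["seconds_to_play"] for s in sessions)
adds_per_session = sum(s["watchlist_adds"] for s in sessions) / len(sessions)

print(f"Median time to content: {time_to_content:.0f}s")
print(f"Watchlist additions per session: {adds_per_session:.2f}")
# Compare these per supplier or per A/B variant: the better discovery
# experience is the one that pushes time to content down and additions up.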


15 STREAMPUNK Tool of the Month

SAMSUNG CJ890 CURVED MONITOR Samsung’s CJ890 monitor can make for a more flexible and immersive work and viewing experience – and a healthier one

Apart from looking very cool, there is some science behind the new curved screens from Samsung. A study by the Seoul National University Hospital suggested that Samsung’s curved monitors were more effective in reducing eyestrain than its flat monitors. The volunteer subjects rated their experience of a number of factors, including eye strain, eye pain, headache, double vision, blurred vision and tearing, when watching content on a flat monitor and on four different curved monitors. Sessions in front of the curved monitors were associated with lower ratings for each of these symptoms. Another clinical study, conducted by Dr. Gang Luo, Assistant Professor of Medicine, and Dr. Eli Peli, Professor of Medicine, both of Harvard Medical School, showed that, when performing nearly one hour of intensive visual tasks, fewer users reported eyestrain and blurred vision using a curved monitor than a similar widescreen flat monitor. Samsung’s CJ890 is a 43in LCD monitor with LED backlight, with an aspect ratio of 32:10 and a resolution of 3840x1200. It currently sells for £899/$1000. The LCD screen is a VA (vertically aligned) type as opposed to IPS (in-plane switching) – VA screens are optimised for looking at head-on, whereas IPS screens can still be seen at high angles of view. VA screens also have better contrast and black uniformity.

HOW WIDE IS WIDE?
The wide screen’s aspect ratio doesn’t naturally fit with any other common formats. The point of this monitor is to offer a vast canvas you can configure for whatever job is at hand. The CJ890 offers the multi-screen user a bevel-free life with Picture by Picture and Picture in Picture features, and you can choose between your device’s native aspect ratio or use the monitor’s entire 3840x1200 pixels. Multiple inputs encourage new kinds of workstation configurations and enable users to have workflows that would normally occur over multiple monitors operating side by side on the same screen. Inputs are dominated by USB Type-C connections, with one just for PC connection. These give you data plus power and, with the right adapters, access to DP and HDMI devices. Other inputs include HDMI, DP and audio.

CUSTOMISING YOUR CURVE
There are four small buttons on the front of the panel that will allow you to navigate your way around the monitor’s built-in options. Three buttons offer shortcuts to finding a signal source, initiating PIP and PBP, and switching between your USB inputs. The solitary button is more of a jog wheel with which you can access settings, PIP/PBP, source and the off button. The picture menu is your standard deep dive into brightness, contrast, sharpness and colour changes (with what Samsung calls “Magic Bright”), as well as Eye Saver Mode, which dampens blue light levels (studies show that long exposure to higher frequencies of light can cause sleeplessness and eye damage). Visual purists might like a flat rectangular screen for its ability to accurately represent a genuinely two-dimensional image, but the Samsung CJ890 curve might represent the future – something easier on the eyes, and more immersive for both work and fun.

CURVY AND PROUD OF IT The curvature of the Samsung monitor is not just a style feature; it has also been shown in tests to reduce eyestrain and blurred vision

THE POINT OF THIS MONITOR IS TO OFFER A VAST CANVAS YOU CAN CONFIGURE FOR WHATEVER JOB IS AT HAND


16 ADVERTISEMENT FEATURE AWS Elemental

GETTING THE DRIFT OF CLOUD

Milk VFX used cloud rendering services from AWS to create some awesome water effects for the survival thriller Adrift

It’s a truism that filmmaking and water don’t mix. The malfunctioning mechanical shark on Jaws, the aquatic heart of darkness that was James Cameron’s The Abyss, or the now legendary nightmare of Waterworld – if you’re shooting on or around water, you’re asking for it. Luckily, we have an amazing array of digital tools available to us today, and all those problems are solved! Right?

Alas, no. Digital water is almost as much a challenge as the real thing. Any visual effects (VFX) supervisor worth his sea salt will tell you that creating realistic digital water effects is always a challenge. London-based Milk VFX took up that challenge big-time when they embarked on the effects for Adrift, an American thriller about a young couple stranded in the middle of the ocean after a hurricane.

“For Adrift, the biggest technical challenge was simulating and rendering the ocean over very long shots,” says Benoit Leveau, head of Pipeline at Milk. “The ocean surface had to be animated by hand to follow the director’s vision, and only then could we first simulate it and then render it, which means there were a lot of iterations to get it right. And with two very long shots at 3000 and 7000 frames long, we knew from the beginning that our local infrastructure would not cope with the amount of rendering required. Thus, we planned to use the cloud for rendering.”

THE BIGGEST TECHNICAL CHALLENGE WAS SIMULATING AND RENDERING THE OCEAN OVER VERY LONG SHOTS

THINK ABOUT RENDERING
In 2017, AWS acquired Thinkbox Software. Thinkbox offered a number of top-rate tools, but what especially attracted AWS was AWS Thinkbox Deadline’s render management solution, originally used on-premises. Render workloads are some of the most compute-intensive for a VFX or animation studio and can require a notable amount of infrastructure, especially as VFX becomes more and more elaborate and complex.


There are actually very few movies made now without some kind of visual effects. Even documentaries are employing some kind of VFX work. As a result, VFX studios are being challenged to raise the bar with every production. It also means that any given studio might have multiple projects they are bidding on. If they happen to land a couple of them, chances are their on-premises infrastructure won’t be able to accomplish that job, and so they are forced to extend it. Their options: forklift a bunch of servers into the basement, or go to the cloud.

Milk needed a huge burst in rendering power to complete its oceanic effects for Adrift, and AWS Thinkbox Deadline provided the solution. “We needed to write some in-house tools to facilitate the process,” says Milk’s Leveau. “In the end, all of the rendering for this project was done on the cloud over a six-week period. On average, we used 80,000 cores, peaking at 132,000 cores for the last two weeks.”

Milk opened its doors in 2013, moving to cloud resources for production as early as 2015. AWS Thinkbox Deadline has allowed the company to manage its local render farm. Using AWS has meant that Milk was able to use EC2 Spot Instances – cheaper, interruptible spare capacity. It’s also enabled them to tap into different resources, with more cores and more RAM, and scale as needed for each shot and each project. AWS Thinkbox Deadline has also provided continuous support and collaboration in getting the best out of the cloud resources.
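For a sense of scale, those quoted figures translate into a striking amount of compute. The back-of-envelope sums below assume round-the-clock rendering across the whole six weeks:

# Rough compute consumed for the Adrift ocean renders, from the figures quoted above.
avg_cores = 80_000
hours = 6 * 7 * 24                      # six weeks of continuous rendering

core_hours = avg_cores * hours
print(f"Roughly {core_hours / 1e6:.0f} million core-hours")
print(f"Equivalent to one 32-core workstation running for about "
      f"{core_hours / 32 / 24 / 365:.0f} years")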

MILLIONS OF PARTICLES
Simulations such as those in Adrift are some of the most compute-intensive components of VFX work. It’s rare that a VFX company is going to render anything denser or more complex than the major water effects of a movie like Adrift. Companies need something that is art-directable and that also gets the physics right. Each frame is made up of millions of particles.

“Most companies in VFX are now using the cloud to some extent,” continues Leveau. “For large studios with a big local render farm, they will use the cloud for peaks of production but will still largely rely on their local infrastructure. But there’s also a trend, which we’ve been part of with our recent projects, including Adrift and our new YouTube Originals series Origin, where studios use the cloud to tackle projects that they couldn’t have before. It also allows studios to scale up and handle multiple projects simultaneously.

“The future will see even more reliance on the cloud, with existing studios moving their whole infrastructure – storage, compute and also artist workstations – to the cloud, while new virtual studios will open entirely in the cloud.”

MOST COMPANIES IN VFX ARE NOW USING THE CLOUD TO SOME EXTENT

OCEANS APART For Adrift, the ocean effects were first animated by hand, then simulated and rendered by Milk using AWS Thinkbox Deadline and up to 132,000 cores


18 ADVERTISEMENT FEATURE AWS Elemental

A VIDEO COMPRESSION PRIMER Video compression can be complex, but it doesn’t have to be complicated. We step back to get an overview of the most essential element in digital video distribution

What used to be a relatively simple proposition for broadcasters – delivering a single analogue or digital signal to one type of device – has become a highly complex series of technological tasks. Broadcasters, pay TV operators and OTT services must deliver video to every type of device now, from big HD and 4K televisions to mobile phones and tablets. The caveat: audiences expect video to look just as good on the smallest device as it does on the largest one.

Today’s TV shows are shot at the same quality as feature films, quality that translates into a data rate that could be as high as 5TB an hour or more. Those files are far too large for even the fastest Internet connection to deliver consistently, and most consumer devices aren’t designed to play back these video source formats. So how do you stream premium content to viewers on smart TVs, mobile phones, or tablets – whether those devices are connected to Gigabit Ethernet, a cellular network, or any connection speed in between the two?

The answer: compress video files into smaller sizes and formats. The dilemma: enabling high-quality playback of these files on every possible type of screen. Codecs are the key to addressing size, format and playback requirements. A codec allows you to take video input and create a compressed digital bit stream (encode) that is then played back (decoded) on the destination device.

There are two types of codecs: lossless and lossy. Lossless compression allows you to reduce file sizes while still maintaining the original or uncompressed video fidelity. Digital cinema files are generally encoded in a lossless format. The video files that you watch on your smart TVs and personal devices are all encoded with a lossy codec, which means taking the original file and reducing as much of the visual information as possible while still allowing the image to look great. All streaming video is encoded using lossy codecs.
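The gap between source and stream is worth spelling out. In the rough sums below, the 5TB-an-hour figure comes from the article; the 8 Mbit/s streaming bitrate is only an illustrative assumption:

# How far a lossy codec has to shrink premium content for streaming.
source_tb_per_hour = 5                      # high-end source material (figure from the article)
stream_mbps = 8                             # illustrative HD streaming bitrate, not a spec

source_bits = source_tb_per_hour * 8e12     # bits in one hour of source
stream_bits = stream_mbps * 1e6 * 3600      # bits in one hour of stream

print(f"One hour of stream: {stream_bits / 8e9:.1f} GB")
print(f"Compression needed: roughly {source_bits / stream_bits:,.0f}:1")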

There are many different video codecs available – including proprietary ones, algorithms that have been developed by a company and can only be played back on devices made by that company or by licensing its technology. More common are open-source or standards-based codecs. These are typically less expensive to use and don’t require content owners to be locked into a closed system. Today’s most popular standards-based codecs for video streaming are AVC/H.264 and HEVC/H.265. Broadcasters and pay TV operators sometimes still use MPEG-2, which is more than 20 years old.


ALL STREAMING VIDEO IS ENCODED USING LOSSY CODECS

[Chart: relative complexity and bitrate for MPEG-2, AVC and HEVC]

The crucial goal in codec development is to achieve lower bitrates and smaller file sizes while maintaining similar quality to the source. Until fairly recently, most video encoding was performed using specially built hardware chips. AWS Elemental pioneered software-based encoding, which runs on off-the-shelf hardware.

HOW VIDEO COMPRESSION WORKS
To reduce video size, encoders implement different types of compression schemes. Intra-frame compression algorithms reduce the file size of key frames (also called intra-coded frames or I-frames) using techniques similar to standard digital image compression. Other codec algorithms apply interframe compression schemes to process the frames between key frames. For standards-based codecs, AWS Elemental builds and maintains its own algorithms that it uses to analyse I-frames and the differences in subsequent frames, storing information on the changes between these frames in predicted frames (P-frames) and processing only the parts of the image that change. Bi-directional predicted frames (B-frames) can look backward and forward at other frames and store the changes.

Once compression has been applied to the pixels in all the frames of a video, the next parameter in video delivery is the bitrate or data rate – the amount of data required to store all the compressed data for each frame over time, usually measured in kilobits or megabits per second (Kbps or Mbps). Generally speaking, the higher the bitrate, the higher the video quality – and therefore, the more storage required to hold the file and the greater the bandwidth required to deliver the stream.

LIVE AND VOD ENCODING
AWS Elemental compression products are divided into two different categories – live encoders and video-on-demand encoders. Live encoders take in a live video signal and encode it for immediate delivery, in cases such as a live concert or sporting event, or for traditional linear broadcast channels. While live encoders are limited to analysing and encoding content in real time, VOD encoders that use file-based media sources are not restricted to processing the content in real time. VOD encoders may take multiple passes at the video, analysing it for motion and complexity. For example, explosions in an action movie might require more effort to encode than a talking-head interview.

Constant bitrate (CBR) encoding encodes the video at a more or less consistent bitrate for the duration of a live stream or entire video file. Variation occurs because different frame types will have different data rates, and because buffers on clients allow a small amount of variability in the bitrate during playback. Variable bitrate (VBR) encoding means the video bitrate can vary greatly over the video’s duration. A VBR encode spreads the bits around, using more bits to encode complex parts of content (like action or high-motion scenes) and fewer bits for simpler, more static shots. In either CBR or VBR encoding, the bitrate target is followed as closely as possible, resulting in outputs that are very close in size. Overall, especially at lower average bitrates, VBR-encoded video is going to look better than CBR-encoded video. Another factor with VBR is that the encoder needs to be configured so that the peak data rate isn’t set so high that the viewing device can’t process the stream or file.
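One way to picture the CBR/VBR distinction is as a bit-allocation problem: the same total budget, spent either evenly or in proportion to scene complexity. A toy sketch – the scene list and complexity scores are invented:

# Same total bit budget, two ways of spending it across scenes.
scenes = {"titles": 1.0, "dialogue": 2.0, "car chase": 8.0, "landscape": 3.0}
total_budget_mbit = 600            # e.g. ~100 seconds at an average of 6 Mbit/s

cbr = {name: total_budget_mbit / len(scenes) for name in scenes}

total_complexity = sum(scenes.values())
vbr = {name: total_budget_mbit * c / total_complexity for name, c in scenes.items()}

for name in scenes:
    print(f"{name:10s}  CBR {cbr[name]:6.1f} Mbit   VBR {vbr[name]:6.1f} Mbit")
# Both columns add up to the same file size; VBR simply moves bits towards the
# complex scenes, which is why it tends to look better at the same average bitrate.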

Quality-defined variable bitrate encoding (QVBR) – a new encoding mode from AWS Elemental – takes VBR one step further. For live or VOD content, QVBR brings sophisticated algorithms to consistently deliver the highest-quality image and the lowest bitrate required for each video. Simply set a quality target and maximum bitrate, and the algorithm determines the optimal results no matter what type of source content you have (high action, talking heads, and everything in between).

OPTIMISING QUALITY
For premium content, such as a television series or movie, AWS Elemental Server on-premises or AWS Elemental MediaConvert on the AWS Cloud encodes video files to the highest quality. On-demand services may employ a fleet of AWS Elemental Servers, or just use the auto-scaling MediaConvert service to process VOD assets.

The codec of choice for delivering video evolves every few years and each advance brings with it improved video quality at lower bitrates. At the same time, screen sizes and resolutions get bigger, so it takes considerable computing power to generate compressed files with high video quality. Until recently, H.264 (also known as AVC) was the best codec for optimising quality and reducing file sizes; many content publishers still use it today. But it’s quickly giving way to HEVC: High-Efficiency Video Coding. It’s also referred to as H.265; different standards groups employ different designations to refer to the same standard. HEVC requires more computing power than H.264, but it’s more efficient.

As AWS Elemental writes its encoding software from scratch, rather than relying on off-the-shelf encoding software, it can take the efficiency gains in each new generation of codec and adjust the software accordingly to get the best quality and/or smallest video file sizes, as well as the shortest encoding times.
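The QVBR idea – pick a quality target plus a bitrate ceiling and let the encoder find the cheapest bitrate that satisfies both – can be sketched as a simple search. This is a conceptual toy, not AWS Elemental’s actual algorithm or API, and the quality model is invented:

# Toy model of quality-defined variable bitrate: for each scene, use the lowest
# bitrate that reaches the quality target without exceeding the configured ceiling.
def predicted_quality(bitrate_mbps, complexity):
    """Invented stand-in for a real quality metric (think of a VMAF-like score)."""
    return min(100.0, 100.0 * bitrate_mbps / (bitrate_mbps + complexity))

def qvbr_bitrate(complexity, quality_target=90.0, max_bitrate_mbps=8.0, step=0.25):
    bitrate = step
    while bitrate < max_bitrate_mbps and predicted_quality(bitrate, complexity) < quality_target:
        bitrate += step
    return bitrate

for label, complexity in [("talking heads", 0.2), ("sports", 0.6), ("action film", 1.0)]:
    print(label, qvbr_bitrate(complexity), "Mbit/s")   # cheap for simple scenes, capped for hard ones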

EVOLUTION OF CODECS

BIT BY BIT Despite the evolution of codecs over the last 20 years, some broadcasters are still using MPEG-2 with its high bitrate

[Chart: relative bitrate falling from MPEG-2 (1994) to AVC (2003) to HEVC (2013), with a 50% bitrate saving target for each new generation]


20 TECHFEED Augmented Reality


THE IMPOSSIBLE When games engines meet live broadcast, the real and photoreal are interchangeable


Tools that enable broadcasters to create virtual objects that appear as if they’re really in the studio have been available for years, but improvements in fidelity, camera tracking and the fusion of game engine renders with live footage have seen augmented reality go mainstream.

Miguel Churruca, Marketing and Communications director at 3D graphics systems developer Brainstorm, explains: “AR is a very useful way of providing in-context information and enhancing live images while improving and simplifying the storytelling. Examples of this can be found in election nights and entertainment and sports events, where a huge amount of data must be shown in-context and in a format that is understandable and appealing to the audience.”

Virtual studios typically broadcast from a green screen set, but AR comes into play where there is a physically built set in the foreground, and augmented graphics and props placed in front of the camera. Some scenarios might have no physical props at all, with the presenter interacting solely with graphics.

“Apart from the quality of the graphics and backgrounds, the most important challenge is the integration and continuity of the whole scene,” says Churruca. “Having tracked cameras, remote locations and graphics moving with perfect integration, perspective matching and full broadcast continuity are essential to providing the audience with a perfect viewing experience.”

The introduction of games engines, such as Epic’s Unreal Engine or Unity, brought photorealism into the mix. Originally designed to quickly render polygons, textures and lighting in video games, these engines can seriously improve the graphics, animation and physics of conventional broadcast character generators and graphics packages.

K/DA IS A VIRTUAL GIRL GROUP CONSISTING OF SKINS OF FOUR POPULAR CHARACTERS IN LEAGUE OF LEGENDS

VIRTUAL POP/STARS
Last year a dragon made a virtual appearance as singer Jay Chou performed at the opening ceremony for the League of Legends final at Beijing’s famous Birds Nest Stadium. This year, the developer Riot Games wanted to go one better and unveil a virtual pop group singing live with their real world counterparts.

K/DA is a virtual girl group consisting of skins of four popular characters in League of Legends. Their vocals are provided by a cross-continental line-up of flesh and blood music stars: US-based Madison Beer and Jaira Burns, who both got their start on YouTube channels, and Miyeon and Soyeon from K-pop girl group (G)I-DLE. It’s a bit like what Gorillaz and Jamie Hewlett have been up to for years, except it’s happening live on a stage in front of thousands of fans.

Riot Games tapped Oslo-based The Future Group (TFG) to bring K/DA to life for the opening ceremony of the League Of Legends World Championship Finals at South Korea’s Munhak stadium. The show would feature the real life singers performing their K/DA song Pop/Stars onstage with the animated K/DA characters. Riot provided TFG with art direction and models of the K/DA characters. Los Angeles post house Digital Domain supplied the motion capture data for the group, with TFG completing K/DA’s facial expressions, hair, clothing, texturing and realistic lighting.

“We didn’t want to make the characters too photorealistic,” says Lawrence Jones, Executive Creative Director at TFG. “They needed to be stylised, yet still believable. That meant getting them to track to camera and having the reflections and shadows change realistically with the environment. It also meant their interaction with the real pop stars onstage had to look convincing.”

All the animation, camera choices and cuts were pre-planned, pre-visualised and entirely driven by timecode to sync with the music. “Frontier is our version of the Unreal Engine which we have made for broadcast and real-time compositing. It enables us to synchronise the graphics with the live signal frame accurately. It drove the big monitors in the stadium (for fans to view the virtual event live) and it drove the real world lighting and pyrotechnics.”

Three cameras were used, all with tracking data supplied by Stype, including a Steadicam, a PTZ cam and a camera on a 40ft jib.

“This methodology is fantastic for narrative-driven AR experiences and especially for elevating live music events,” he says. “The most challenging aspect of AR is executing it for broadcast. Broadcast has such a high-quality visual threshold that the technology has to be perfect. Any glitch in the video not correlating to the CG may be fine for Pokemon Go on a phone, but it will be a showstopper – and not in a good way – for broadcast.”

Over 200 million viewers have watched the event on Twitch and YouTube. “The energy that these visuals created among the crowd live in the stadium was amazing,” he adds. “Being able to see these characters in the real world is awesome.”

LAWRENCE JONES “We didn’t want to make the characters too photorealistic. They needed to be stylised yet believable”

PRO WRESTLING GETS REAL
The World Wrestling Entertainment (WWE) enhanced the live stream of its annual WrestleMania pro wrestling event with augmented reality content last April. The event was produced by WWE, using Brainstorm’s InfinitySet technology. The overall graphic design was intended to be indistinguishable from the live event staging at the Mercedes-Benz Superdome in New Orleans. The graphics package included player avatars, logos, virtual lighting with refractions and substantial amounts of glass and other challenging-to-reproduce, semi-transparent and reflective materials.

Using Brainstorm’s InfinitySet 3, the brand name for the company’s top-end AR package, WWE created a range of different content, from on-camera wrap-arounds to be inserted into long format shows, to short self-contained pieces. Especially useful was a depth-of-field/focus feature, and the ability to adjust virtual contact shadows and reflections to achieve realistic results.

INFINITY AND BEYOND: Brainstorm’s InfinitySet creates photorealistic scenes in real time, even adjusting lighting to match the virtual sets

Crucial to the Madrid-based firm’s technology is the integration of Unreal Engine with the Brainstorm eStudio render engine. This allows InfinitySet 3 to combine

the high-quality scene rendering of Unreal with the graphics, typography and external data management of eStudio and allows full control of parameters such as 3D motion graphics, lower-thirds, tickers, and CG. The virtual studio in use by the WWE included three cameras with an InfinitySet Player renderer per camera with Unreal Engine plug-ins, all controlled via a touchscreen. Chroma keying was by Blackmagic Ultimatte 12. For receiving the live video signal, InfinitySet was integrated with three Ross Furio robotics on curved rails, with two of them on the same track, equipped with collision detection. WWE also continue to use Brainstorm’s AR Studio, a compact version which relies on a single camera on a jib with Mo-Sys StarTracker. There’s a portable AR system


too, designed to be a plug and play option for on the road events.

The Brainstorm tech also played a role in creating the “hyper-realistic” 4K augmented reality elements that were broadcast as part of the opening ceremony of the 2018 Winter Olympic Games in PyeongChang. The AR components included a dome made of stars and virtual fireworks that were synchronised and matched with the real event footage and inserted into the live signal for broadcast. As with the WWE, Brainstorm combined the render engine graphics of its eStudio virtual studio product with content from Unreal Engine within InfinitySet. The set-up also included two Ncam-tracked cameras and a SpyderCam for tracked shots around and above the stadium. InfinitySet 3 also comes with a VirtualGate feature which allows for the integration of the presenter not only in the virtual set but also inside additional virtual content within it, so the talent in the virtual world can be ‘teletransported’ to any video with full broadcast continuity.

AUGMENTED REALITY FOR REAL SPORTS
ESPN has recently introduced AR elements to refresh the presentation of its long-running sports discussion show, Around the Horn. The format of the show is in the style of a panel game and involves sports pundits located all over the US talking with the host, Tony Reali, via a video conference link. The new virtual studio environment, created by the DCTI Technology Group using Vizrt graphics and Mo-Sys camera tracking, gives the illusion that the panellists are in the studio with Reali. Viz Virtual Studio software can manage the tracking data coming in for any tracking system and works in tandem with Viz Engine for rendering.

“Augmented reality is something we’ve wanted to try for years,” Reali told Forbes. “The technology of this studio will take the video game element of Around the Horn to the next level while also enhancing the debate and interplay of our panel.”

UPWARD TRAJECTORY AR elements have improved quickly, moving from looking quite artificial to being photorealistic

Since the beginning of this season’s English Premier League, Sky Sports has been using a mobile AR studio for football match presentation on its Super Sunday live double-header and Saturday lunchtime live matches. Sky Sports has worked with AR at its studio base in Osterley for some time but by moving it out onto the football pitch it hopes to up its game aesthetically, editorially and analytically. A green screen is rigged and de-rigged at each ground inside a standard match-day, 5mx5m presentation box with a real window open to the pitch.

Camera tracking for the AR studio is done using Stype’s RedSpy with keying on Blackmagic Design Ultimatte 12. Environment rendering is in Unreal 4 while editorial graphics are produced using Vizrt and an NCam plug-in. Sky is exploring the ability to show AR team formations using player avatars and displaying formations on the floor of the studio, appearing in front of the live action football pundits.

Sky Sports head of football Gary Hughes says the AR set initially looked “very CGI” and “not very real” but that it has improved a lot since its inception. “With the amount of CGI and video games out there, people can easily tell what is real and what is not. If there is any mystique to it, and people are asking if it is real or not, then I think you’ve done the right thing with AR.”

IBERIAN ITERATIONS
Spanish sports shows have taken to AR like a duck to water. Multiple shows use systems and designs from Lisbon’s wTVision, which is part of Spanish media group Mediapro. In a collaboration with Vàlencia Imagina Televisió and the TV channel À Punt, wTVision manages all virtual graphics for the live shows Tot Futbol and Tot Esport. The project combines wTVision’s Studio CG and R³ Space Engine (real-time 3D graphics engine). Augmented reality graphics are generated with camera tracking via Stype. For Movistar+ shows like Noche de
