ROUND TABLE: Storage

clients aren’t really going to be feeling the love, especially when they get stuttering playback in the online edit and it takes ten hours to export a master!

MARC RISBY: Not an easy question to answer without more information, but for most people it would be a tiered storage system with layers of storage for performance, general-purpose use and archive. These systems offer a good balance of performance and price. If you’re willing to sacrifice some sharing capability for performance, the cheapest option is usually application-specific or direct-attach storage.
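Risby’s tiered approach can be made concrete with a placement rule. Below is a minimal sketch in Python of an access-age policy; the tier names and day thresholds are illustrative assumptions for the sketch, not any vendor’s product logic.

```python
from datetime import datetime, timedelta
from pathlib import Path

# Illustrative thresholds -- assumptions for this sketch, not any
# vendor's defaults.
HOT_DAYS = 14    # active projects stay on the SSD/NVMe performance tier
WARM_DAYS = 90   # general-purpose work sits on spinning disk

def pick_tier(path: Path, now: datetime | None = None) -> str:
    """Classify a file into a storage tier by last-access age."""
    now = now or datetime.now()
    age = now - datetime.fromtimestamp(path.stat().st_atime)
    if age <= timedelta(days=HOT_DAYS):
        return "performance"  # SSD/NVMe
    if age <= timedelta(days=WARM_DAYS):
        return "general"      # HDD
    return "archive"          # tape, object or cloud
```

A real tiering system would weigh more than access age (project status, file type, delivery deadlines) and migrate data transparently, but the shape of the rule is the same.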

DBATC: If we don’t care about anything else but performance and speed, what are our best storage options?

DUNCAN BEATTIE: Some people opt for a SAN, and that’s fine, but SANs require a lot of expensive hardware around them as a starting point, and there have been alternatives to SAN for a long time now which in most cases do a better job with fewer complications. It’s not how much storage you have or how fast it is; it’s how its performance is delivered to a specific workflow. People seem to want SSDs for the price of spinning disks. That may come one day, so systems that deliver very close to this utopia via clever technologies meshing SSD and HDD together are among the best options available today.

JAI CAVE: If money is no object you could look at NVMe storage for every array. In reality, any solution is a balance between how much of your storage requires the fastest access and how much can sit a tier below. It’s crucial to have a policy-driven solution that allows you to incorporate a mix of technologies, giving you the speed you need without assuming you need to access every file you hold for the same purpose.

MEIR LEHRER: If the egress/delivery of assets is solely to make them accessible to collaborating team members, as well as occasionally delivering them to a service provider, then there’s no real difference in approaches to storage. However, there are other considerations if the central storage repository is also meant to serve B2C content (ie OTT streaming services direct to consumers), or any time-critical delivery in different geographic locations, and especially if the content will be delivered in ABR (adaptive bitrate) format.
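Lehrer’s ABR point has a direct storage consequence: every rendition in the bitrate ladder is stored, not just the best one. A rough Python sketch, using an illustrative ladder rather than any broadcaster’s spec:

```python
# Illustrative ABR ladder: typical HLS/DASH rungs, not a broadcaster spec.
LADDER_KBPS = {"360p": 800, "720p": 2500, "1080p": 6000, "2160p": 16000}

def abr_storage_gb(duration_s: float) -> float:
    """Storage for all renditions of one title, in gigabytes."""
    total_bits = sum(k * 1000 * duration_s for k in LADDER_KBPS.values())
    return total_bits / 8 / 1e9

# A 60-minute programme stores every rung of the ladder, so the footprint
# is the sum of all renditions.
print(f"{abr_storage_gb(3600):.1f} GB")  # ~11.4 GB
```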
MARC RISBY: 100% flash storage, huge performance. Best be sitting down when you get the quote, though.

DBATC: How are different genres of content going to affect the types of storage we need?

DUNCAN BEATTIE: If by genres we are talking about deliverables, then how a company plans to deliver content is very important, because that will affect the type of storage you need. Viral content, for example, in its early days didn’t require much storage because the quality was fairly pedestrian. But with today’s production values gravitating swiftly to 4K for delivery to multiple platforms, content creators need to edit in 4K. Frankly, such viral content creators are more like broadcasters than ever, and have similar, if not greater, storage performance requirements.

JAI CAVE: Your workflow pipeline will drive your storage requirements. Film, drama and high-end factual may demand a fully uncompressed 16-bit pipeline, whilst entertainment shows may work with a 10-bit compressed pipeline. Whilst the mix of genres you have will drive the workflow you settle on, other factors such as the studio/broadcaster spec, the systems you use, the clients you have and where you position yourself in the market all have an impact too.
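The gap between Cave’s two pipelines is easy to quantify. A back-of-envelope Python sketch, assuming UHD at 25fps and an illustrative mezzanine bitrate:

```python
# Back-of-envelope rates for the two pipelines described above. Frame size,
# frame rate and the compressed bitrate are illustrative assumptions.
def uncompressed_mb_per_s(width: int, height: int, bytes_per_pixel: float,
                          fps: float) -> float:
    """Sustained bandwidth in MB/s for one uncompressed video stream."""
    return width * height * bytes_per_pixel * fps / 1e6

# UHD, RGB at 16 bits per channel (3 channels x 2 bytes), 25fps
rate = uncompressed_mb_per_s(3840, 2160, 6, 25)
print(f"16-bit uncompressed UHD: {rate:.0f} MB/s")            # ~1244 MB/s
print(f"one hour of material:   {rate * 3600 / 1e6:.2f} TB")  # ~4.48 TB

# A 10-bit mezzanine codec at, say, 500 Mbit/s needs only ~62.5 MB/s,
# roughly a twentieth of the uncompressed bandwidth per stream.
print(f"10-bit compressed:      {500 / 8:.1f} MB/s")
```

That order-of-magnitude difference in sustained bandwidth per stream is why genre, via the pipeline it demands, drives the storage tier you have to buy.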
