Dr Emma Young
BBC R&D development producer
Tell us about how you ended up working in the audio space.
I moved away from home for university at 17 and immediately responded to an ad placed by a band looking for a lead singer. I spent several years on the venue circuit in Glasgow, writing and recording songs. I wanted to learn to record, mix and master music for myself, so I enrolled in a part-time course at the School of Audio Engineering (SAE) in Glasgow, attending night classes. I later set up an audio production company and spent eight years building experience across a broad range of production environments, including sound design, music composition and production for games and advertising, film sound recording, TV gallery sound operations, live music mixing and theatre sound, as well as picking up work in recording studios. A degree and a PhD later, I was accepted onto the BBC’s R&D grad scheme and, after completing it, joined BBC R&D’s Audio Team, which felt like the perfect fit.

My work at the BBC is very broad and highly creative. I’ve worked on innovative audio tech and been the creative lead on several experimental audio productions and public trials. My skills are occasionally called on for producing binaural mixes for BBC Sounds and BBC Radio 6 Music, which I love doing. Recently I’ve been doing editorial and development production for visual computing research as well as audio technology research, so I’m building skills in VR, AR, virtual production and metaverse-related technologies. Most projects I work on have a music or audio theme; I recently spent time at Maida Vale capturing 360° test content during radio sessions with bands. In another project, I worked with musicians on a live performance which made use of BBC R&D’s audio device orchestration technology, which allows producers to create immersive, interactive audio for synchronised delivery across multiple connected devices like phones and laptops.
We built a production tool to make it easy for others to employ this tech in their own work. My latest work is for the multi-partner, Horizon-funded MAX-R project, which aims to build a pipeline for making, processing and delivering maximum-quality XR content in real time.

What is one piece of advice you would give to anyone hoping to start out in audio?
Pick up experience anywhere and any way you can. Before my course at SAE started, I hassled the sound engineer at King Tut’s Wah Wah Hut in Glasgow into letting me help out. He eventually let me shadow him on weekends, and it was there I learnt how to set up mics and coil cables like a pro! I’d also advise getting practical experience using DAWs and exploring the effects of audio processing tools via the supplied plug-ins. I’ve used Pro Tools for most audio production work for over 20 years, having learnt on it at SAE, and I still find it does most things I need.

Toughest professional challenge you’ve overcome?
One of my first live sound jobs was for a theatre production; I’d been drafted in to cover for the sound engineer when he fell ill. In the second act, things fell apart. There was a scene with multiple sound effects in quick succession. I went to skip to the next track on the CD player, but hit the button twice. Instead of the ‘phone ringing’ sound effect, a very loud explosion played out, which caused the lead actor to look up at the sound booth in disbelief! I was horrified, and the heavy feeling of impostor syndrome set in. The director was nice about it and the cast found it amusing, but I felt terrible. I decided the best way to overcome my theatre-shaped impostor syndrome was to get back out there and face it head on. Two years later I was sound supervisor for one of the largest-capacity venues at the Edinburgh Festival.

What is your most essential piece of kit?
For hardware, it’s my trusty Zoom F8. It’s compact, robust and has eight inputs, which is usually sufficient for my needs. As for software, I’ve been using a Reaper extension for working with and creating audio files that adhere to the ADM (Audio Definition Model) specification; in broadcasting there is now a shift towards the production and delivery of next-generation audio (NGA).

What audio technologies are you excited about for the future?
I’m interested to see what AI and machine learning can achieve in the audio space, specifically in the recognition of environmental sound and the generation of new sounds beyond AI-generated music. AI-powered tools like Dall-E 2 and Stable Diffusion for image generation (and their use in animation), and ChatGPT for text generation, have advanced rapidly, democratising access to AI-generated content creation.