FEED Winter 2024 Web

If you’re watching a foreign film in the cinema, it’ll likely either have subtitles or be dubbed over with English dialogue. If you’re watching TV in public and can’t make out the audio, it might have closed captions describing in text whatever is being displayed. These are just two examples of when captions, subtitles or dubs might be used.

For some, captions are a personal preference; for others, they’re necessary for understanding – particularly for neurodivergent people or those who are hard of hearing (HOH). Increasingly, AI and machine learning are being used to generate these audio-visual aids, with speech-to-text technology (and, conversely, text-to-speech for dubbing) cutting the time and cost of doing so manually. With companies paying growing attention to accessibility and localisation, AI captioning, subtitling and dubbing platforms are becoming a mainstay in video production.

While big-budget studios like Netflix have the financial means to train voice actors and hire professional subtitlers (think of Stranger Things Season 4’s [Eleven pants] or [tentacles squelching wetly], which were written with a human sense of humour), AI’s appeal is in its efficiency. It doesn’t remove the human element altogether; instead, it supplements it, allowing employees to spend more time on quality control and less on the tedium of transcribing by hand. As with dubbing, this technology encourages localisation of content, making it appeal to global audiences who speak any number of languages – and catering to our increasingly connected world.

DOIN’ IT RIGHT

First, let’s explore the subtle yet significant differences between captions (both open and closed), subtitles and dubs, as these terms – primarily closed captions and subtitles – are often interchanged. Subtitles are transcriptions of dialogue only, and can be used to translate whatever’s being spoken into a language that audiences can read (for example, a Spanish telenovela might have English subtitles). Captions, on the other hand, describe all aspects of audio – dialogue, sound effects, background music and so on – with ‘open’ meaning always visible and ‘closed’ meaning they can be toggled on or off.

Lastly, dubs are in a league of their own. Added in post-production, dubbed audio replaces the original without altering the video – a common practice when, say, creating an English-language version of a Japanese film. Dubbing can also be used in other instances, such as when James Earl Jones dubbed the voice of Darth Vader.

When captioning, subtitling or dubbing, accuracy is critical to ensure the production’s original intent is preserved rather than lost in translation. “An accurate transcription means that every word, tone and context is conveyed precisely, enabling audiences to understand the full meaning without ambiguity or misunderstanding,” explains Sharon Biggar, head of marketing at Happy Scribe. “AI is rapidly advancing in language processing, but it still faces challenges when it comes to translating and localising subtitles with nuanced elements like idioms, sarcasm and cultural references – a process known as adaptation.

“While AI is making great strides,” Biggar continues, “achieving truly high-quality, culturally adapted subtitles requires a human touch. At Happy Scribe, we have our own team of skilled linguists who review and refine our AI-generated content. The Scribes, as we call them, adapt jokes, idiomatic expressions and cultural references so that the full text resonates authentically with the target audience and the final output is both accurate and relevant.”

“Besides maintaining accuracy for clarity’s sake,” notes Jane Sung, COO at Cinedeck, “if AI makes mistakes in captioning, subtitling and dubbing – and these are not picked up by a media operator before the content is broadcast – this can lead to viewers disengaging or being offended. Depending on the nature of the error, this could damage a broadcaster’s brand image and, at the far end of the scale, there could also be some kind of sanction imposed if they breach regulatory requirements.”

These requirements, which include the Americans with Disabilities Act (ADA) and the Web Content Accessibility Guidelines (WCAG), are designed to accommodate all viewers, particularly those who are deaf, HOH, autistic or have ADHD. Captions and subtitles can increase comprehension for these groups, ensuring they have the same opportunity to understand the content as anyone else. “Providing top-rate captions is a must for meeting regulatory compliance as well as the high expectations of today’s quality-conscious audience,” explains Sana Afsar, a member of the management staff at Interra Systems.

IMPERFECTIONS AND OPPORTUNITIES

Besides making the occasional contextual error, AI also falters under other conditions, particularly when transcribing noisy or unclear audio – or if it hasn’t adequately learnt a certain language. At Happy Scribe, “The most common languages such as English, Spanish and German have high AI accuracy – generally 90% or above,” states Biggar. “However, there are other, less common languages for which the AI is not well trained and might have lower accuracy. For these languages, the AI can
