Casualty: A&E Audio

Pilot ended 8th September 2019
We want to make it easier for people with a hearing impairment to enjoy our dramas and follow every twist. Watch this Casualty episode and try out a new feature where you can set the audio mix to suit your needs. Available on: LAPTOP / COMPUTER / ANDROID PHONE. An audio described version is below.

The Inside Story

We caught up with Lauren Ward, the brains (and ears) behind this project.

An audio described version is available here

Can you sum up the project?

In Casualty: A&E Audio, the A&E actually stands for ‘Accessible and Enhanced’ audio. In this project we are trialling a new feature that allows you to change the audio mix of the episode to best suit your own needs and preferences.

At the right-hand side of the slider, you get the same mix as heard on TV. At the left, the dialogue is enhanced and some of the other sounds are quieter. You can adjust between these two extremes to get the right balance of dialogue, important sounds and atmosphere for you.

Why have you made this?

Our hearing is very personal and we know that not everyone wants the same thing out of TV audio. The perfect cinematic mix for one viewer might be too loud, or too busy, for another. For many people, the level of background sound in a TV programme can make understanding the dialogue difficult, particularly for viewers who have a hearing impairment. We designed this feature to allow viewers to personalise their listening experience: enhancing some aspects of the audio mix and reducing others so that they can get the most out of TV dramas, tailored to their hearing needs.

What’s new?

Our technology adds two things to the process of making and watching a TV programme. The first occurs after filming, when the audio mixing takes place. At this point each sound, or group of sounds, has an importance level attached to it (stored in metadata) by the dubbing mixer or producer.
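The interview doesn't spell out the metadata format used in the pilot, but as a rough illustration, the importance labels attached to each group of sounds might look something like the sketch below. The `AudioObject` shape, the 0–3 importance scale and the example values are assumptions made for illustration, not the production schema.

```typescript
// Hypothetical sketch of per-object importance metadata, attached at the dubbing stage.
// The field names and the 0-3 importance scale are illustrative assumptions only.

interface AudioObject {
  id: string;                                            // a stem or group of sounds
  kind: "dialogue" | "effect" | "music" | "atmosphere";
  importance: number;                                     // 3 = essential to the narrative, 0 = purely atmospheric
}

// An imagined scene: the dubbing mixer or producer tags each group of sounds.
const sceneObjects: AudioObject[] = [
  { id: "dialogue",         kind: "dialogue",   importance: 3 },
  { id: "heart-monitor",    kind: "effect",     importance: 2 }, // carries story information
  { id: "incidental-music", kind: "music",      importance: 1 },
  { id: "ward-atmosphere",  kind: "atmosphere", importance: 0 },
];
```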

The other part is a new slider added to the online media player, called the Narrative Balancer. This is what's used to personalise the mix. At one end of the slider, all the objects are at the same level as in the original broadcast mix. At the other end is a simplified mix with louder speech and only the sounds most important to the narrative. The viewer can then adjust between these two mixes to find the balance of dialogue and other sounds that they prefer.

Behind the scenes, the player looks at where you have set the slider and, for every group of sounds, turns the volume up or down based on that group's importance. But for the user, it's as simple as changing the volume.
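As an illustrative sketch of that behaviour (not necessarily the pilot's actual algorithm), the player could interpolate each group's gain between the broadcast mix and a dialogue-forward mix according to the slider position and the group's importance. The dB figures here are assumptions, not the values used in the pilot:

```typescript
// Hypothetical Narrative Balancer logic. "balance" runs from 0 (original broadcast mix)
// to 1 (enhanced, dialogue-forward mix); the dB figures are illustrative assumptions.

function objectGainDb(isDialogue: boolean, importance: number, balance: number): number {
  // Gain this group of sounds would get at the fully enhanced end of the slider.
  const enhancedGainDb =
    isDialogue       ? +6 :       // speech is lifted
    importance >= 2  ?  0 :       // sounds essential to the narrative are kept as they are
    importance === 1 ? -9 : -18;  // less important sounds are pulled down

  // 0 dB at the broadcast end; interpolate towards the enhanced mix as the slider moves.
  return balance * enhancedGainDb;
}

// Convert decibels to the linear gain a media player would actually apply.
const dbToLinear = (db: number): number => Math.pow(10, db / 20);

// Example: halfway along the slider, a purely atmospheric bed sits 9 dB below broadcast level.
console.log(objectGainDb(false, 0, 0.5));             // -9
console.log(dbToLinear(objectGainDb(false, 0, 0.5))); // ~0.35
```

Interpolating in decibels rather than in linear gain keeps intermediate slider positions sounding smooth, which seems consistent with the "adjust between these two extremes" behaviour described above.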

Can you tell us a bit more about the technology?

At BBC R&D we’re researching the next generation of audio for broadcasting - we call it object-based audio, and that’s what this new feature is based on.

Currently, we transmit our programmes as a single pre-mixed stream. This includes vision, speech, sound effects, background music and all the other sounds. However, this makes trying to change the volume of the individual elements, like dialogue, nearly impossible. It'd be like trying to take the eggs out of an already baked cake.

Object-based audio sends all of the elements separately, with instructions on how to play the separate elements, known as ‘metadata’. In our cake analogy, this would be like sending you the ingredients and the recipe rather than the finished cake. This means that when your device reassembles the soundtrack, what you hear can be tailored to you.
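The pilot's player isn't described in implementation detail here, but in a web player one plausible way to reassemble separately delivered audio objects is the Web Audio API, with one gain node per object driven by the metadata and the Narrative Balancer slider. A minimal sketch follows, assuming each object arrives as its own stream; the element IDs and gain values are made up for illustration:

```typescript
// Minimal sketch of object-based playback in a browser, assuming each audio object
// (dialogue, effects, atmosphere, ...) is delivered as its own stream alongside metadata.
// The element IDs and gain figures are made up for illustration.

const ctx = new AudioContext();
const gains = new Map<string, GainNode>();

// One <audio> element per object; the metadata tells the player which streams exist.
for (const id of ["dialogue", "effects", "atmosphere"]) {
  const element = document.getElementById(id) as HTMLMediaElement;
  const source = ctx.createMediaElementSource(element);
  const gain = ctx.createGain();
  source.connect(gain).connect(ctx.destination);
  gains.set(id, gain);
}

// Called whenever the Narrative Balancer slider moves (0 = broadcast mix, 1 = enhanced mix).
function onSliderChange(balance: number): void {
  gains.get("dialogue")!.gain.value   = Math.pow(10, (balance * 6) / 20);   // lift the speech
  gains.get("effects")!.gain.value    = 1;                                  // keep important sounds
  gains.get("atmosphere")!.gain.value = Math.pow(10, (balance * -18) / 20); // quieten the atmosphere
}
```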

What were the challenges?

Not all sounds in a TV or film soundtrack are created equal. In almost all programmes, the dialogue conveys much of the story, so we knew any accessible audio solution we developed needed to enhance the dialogue. However, there is also some core information that's carried in non-speech sounds: consider the gunshot in a ‘whodunnit’ or the music in the film ‘Jaws’. Without these important sounds, the story wouldn’t make sense. From this we start to build a hierarchy of sounds. Speech is at the top, followed by non-speech sounds that convey core elements of the story, and finally atmospheric elements. Our technology uses this idea of how important a sound is to the story to make personalising audio simple, whilst ensuring that the story always makes sense.

Whether you're on a laptop, tablet, smartphone or a web browser on a smart TV, we’d love you to give it a try! Tell us what you think by rating the experience and help us bring personalised audio one step closer to your living room.

A&E Audio Project Team

The A&E Audio Project is a collaboration between BBC Research and Development, Casualty and the University of Salford. The project has been supported by the EPSRC S3A Future Spatial Audio Project.

Project Lead: Lauren Ward

BBC R&D Lead: Dr. Matt Paradis

University of Salford Lead: Dr. Ben Shirley

Casualty Series Producer: Lucy Raffety

Casualty Producer: Dafydd Llewelyn

Post-Production Supervisor: Rhys Davies

Engineering Support: Robin Moore

Casualty Dubbing Mixer: Laura Russon

Pre-roll Video: Gabriella Leon (as herself)

AV Production (Pre-roll): Joe Marshall
