
Elbow in 3D Sound


Anthony Churnside | 23:11 UK time, Friday, 11 November 2011

Here's a short post about a 3D sound experiment that BBC R&D's audio team conducted the other week in collaboration with BBC Radio 2.

As part of the BBC Audio Research Partnership, which was launched earlier in the summer, we are looking at potential next-generation audio formats. You may have read some of our work on ambisonics here and there. If you want more detailed information about what we've done, you can read this paper of ours, which is available from the AES. I think the headline from the paper was that first-order ambisonics is not as good as traditional channel-based audio, such as 5.1 surround, for (what can be considered) point sources, but that it works very well for reverberation and diffuse sounds. With this in mind, we spotted an opportunity to conduct an experiment using a hybrid of ambisonics and conventional channel-based audio.
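For readers who haven't met the format before, the sketch below shows how a mono point source is panned (encoded) into first-order B-format. It isn't taken from our paper or our tools, just an illustrative Python fragment using the traditional FuMa convention (W attenuated by 1/sqrt(2)); the function and parameter names are made up for the example.

```python
import numpy as np

def encode_b_format(mono, azimuth_deg, elevation_deg=0.0):
    """Encode a mono signal into first-order B-format (W, X, Y, Z).

    FuMa-style convention: W carries the omnidirectional component
    scaled by 1/sqrt(2); azimuth is measured anticlockwise from the
    front, elevation upwards from the horizontal plane.
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = mono / np.sqrt(2.0)                # omnidirectional component
    x = mono * np.cos(az) * np.cos(el)     # front-back figure-of-eight
    y = mono * np.sin(az) * np.cos(el)     # left-right figure-of-eight
    z = mono * np.sin(el)                  # up-down figure-of-eight
    return np.stack([w, x, y, z])

# Example: pan a 1 kHz tone 30 degrees to the left and 10 degrees up.
fs = 48000
t = np.arange(fs) / fs
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)
b_format = encode_b_format(tone, azimuth_deg=30, elevation_deg=10)
```

A single source panned like this is exactly the kind of point-source material the listening tests found weaker than 5.1, whereas the same four channels captured by a Soundfield microphone carry the diffuse reverberation that the format handles well.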

Elbow, the Manchester band, were planning a homecoming gig at Manchester Cathedral. After an email to the right person, it was agreed that the team could try some experimental recording. We thought this would provide an excellent opportunity to learn about capturing a full sound scene using a combination of ambisonics and close-microphone techniques. It would also allow us to improve our understanding of the challenges and compromises faced when integrating 3D audio capture and mixing into a real-world live production environment.

The Soundfield microphone position

While we suspected that the acoustic of the cathedral would sound great when captured using ambisonics, we didn't really want to capture the on-stage and PA sound with the ambisonics microphone, and it was a rather loud sound reinforcement system. We've recorded the BBC Philharmonic in ambisonics a few times before, but have never had to contend with a loud PA. Thankfully, there are tricks you can perform with ambisonics, such as attenuating the sound from the direction of the left and right speakers of the PA. Plus, Elbow were kind enough to put their drummer in an acoustic enclosure, so the direct sound of the drums (the loudest instrument on stage) would be attenuated too.
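To give a flavour of the kind of trick involved, here is a minimal sketch of one simple way to attenuate sound arriving from a known direction in first-order B-format: point a virtual cardioid at the offending direction, re-encode the estimated signal from that direction, and subtract a scaled copy. It is only an approximation and not necessarily the exact processing we used; the function names and the attenuation amount are illustrative.

```python
import numpy as np

def virtual_cardioid(b, azimuth_deg, elevation_deg=0.0):
    """Virtual cardioid pointing at (azimuth, elevation), derived from
    FuMa-normalised B-format channels b = [W, X, Y, Z]."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    w, x, y, z = b
    return 0.5 * (np.sqrt(2.0) * w
                  + x * np.cos(az) * np.cos(el)
                  + y * np.sin(az) * np.cos(el)
                  + z * np.sin(el))

def attenuate_direction(b, azimuth_deg, elevation_deg=0.0, amount=0.5):
    """Crudely reduce sound arriving from one direction: estimate it with
    a virtual cardioid, re-encode the estimate to B-format from the same
    direction and subtract a scaled copy from the original scene."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    est = virtual_cardioid(b, azimuth_deg, elevation_deg)
    re_encoded = np.stack([est / np.sqrt(2.0),
                           est * np.cos(az) * np.cos(el),
                           est * np.sin(az) * np.cos(el),
                           est * np.sin(el)])
    return b - amount * re_encoded

# e.g. knock back the left and right PA stacks at roughly +/-30 degrees:
# cleaned = attenuate_direction(attenuate_direction(b_format, +30), -30)
```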

Chris Pike setting up the recording equipment in the back of the outside broadcast truck

Due to the nature of the cathedral's structure, we couldn't get the ideal ambisonic microphone position by slinging it from above. A couple of years ago we recorded the Last Night of the Proms in ambisonics and were able to position the microphone right in the centre of the hall, above and behind the conductor. For this event, however, we had to compromise by placing the microphone slightly to stage left of the front-of-house mixing desk. This worked quite well because it was far enough back from the PA speakers that, from most directions, it was just capturing reflections from the walls and ceiling of the cathedral. This also helped us to attenuate the PA sound mentioned earlier. The microphone was also raised a few metres above the audience; while we wanted to hear the audience singing and clapping, we didn't want to hear specific members of the audience chatting.

Tony Churnside, pleased to have everything up and running before the band begin to play.

We could have just recorded the microphone signals without worrying too much about how it sounded, but we thought it would be nice to at least try to monitor the 3D recording. Space inside one of Radio 2's outside broadcast trucks is very limited, with nowhere near enough room for the 16-speaker set-up we have in our listening room. To get around this we decided to use binaural reproduction, using a system developed by Smyth Research called the Realiser. This box allows you to simulate surround sound over headphones. It's fairly complicated to set up, but it does a pretty good job of making it sound like you're in your multi-speaker listening room when you're actually listening over headphones. Normally the Realiser is used to monitor 5.1 surround sound, but we calibrated the system with a cube of eight speakers to allow us to monitor sound in 3D. It even has a head tracker to keep sounds stationary relative to small horizontal movements of your head.
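The Realiser does this with individually measured room and head responses, which is what makes it convincing, but the underlying idea of binaural monitoring is easy to sketch: feed each virtual loudspeaker through a left/right pair of head-related impulse responses and sum the results into two headphone channels. The code below is a generic illustration with made-up array shapes, not the Realiser's actual processing; a head tracker would rotate the scene (or switch response sets) before this step so that sources stay put when you turn your head.

```python
import numpy as np
from scipy.signal import fftconvolve

def binaural_render(speaker_feeds, hrirs):
    """Render virtual loudspeaker feeds to two headphone channels.

    speaker_feeds : (n_speakers, n_samples) signals for each virtual
                    speaker, e.g. the eight corners of a cube
    hrirs         : (n_speakers, 2, hrir_len) left/right head-related
                    impulse responses, one pair per virtual speaker
    """
    n_spk, n_samp = speaker_feeds.shape
    out = np.zeros((2, n_samp + hrirs.shape[2] - 1))
    for s in range(n_spk):
        for ear in (0, 1):
            # Convolve this speaker's feed with its HRIR for each ear
            # and accumulate into the headphone output.
            out[ear] += fftconvolve(speaker_feeds[s], hrirs[s, ear])
    return out
```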

BBC R&D's prototype 3D sound mixer

Along with the ambisonics microphone signals, we recorded all the individual sources (about 50 of them) to allow us to remix the recording in our listening room. We have developed our own 3D spatial audio panner that allows us to position each of the sources anywhere in the 3D soundscape, and over the next month or so we will experiment with a number of different spatial mixes of the recording to assess which is generally preferred.
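The internals of the prototype mixer aren't described here, but the general shape of a hybrid mix is simple enough to sketch: pan each close-miked source into B-format at its chosen position and add the result to the Soundfield room recording. Everything below (names, shapes, gains) is illustrative rather than a description of our actual tool.

```python
import numpy as np

def pan_source(mono, azimuth_deg, elevation_deg=0.0):
    """Pan one close-miked source into FuMa first-order B-format
    (same convention as the encoder sketched earlier in this post)."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    return np.stack([mono / np.sqrt(2.0),
                     mono * np.cos(az) * np.cos(el),
                     mono * np.sin(az) * np.cos(el),
                     mono * np.sin(el)])

def hybrid_mix(room_b_format, close_mics, positions, gains):
    """Combine the Soundfield room recording with individually panned
    close microphones.

    room_b_format : (4, n_samples) B-format recording from the hall
    close_mics    : list of (n_samples,) mono source signals
    positions     : list of (azimuth_deg, elevation_deg) per source
    gains         : list of linear gains per source (the mix decisions)
    """
    mix = room_b_format.copy()
    for sig, (az, el), g in zip(close_mics, positions, gains):
        mix += g * pan_source(sig, az, el)
    return mix
```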

We learnt (and are still learning) a lot from this experiment and will be publishing results and analysis over the coming months. In the meantime, here's a link to the Radio 2 show where you can watch some clips of Elbow's performance.

We would like to thank Rupert Brun, Sarah Gaston, Paul Young, Mike Walter, Mark Ward and all at Radio 2 for making this experiment possible.


Comments

  • Comment number 1.

    You refer to your AES paper, which I do not have available at the moment, and note that you found "first-order ambisonics is not as good as traditional channel-based audio, like 5.1 surround, for (what can be considered) point sources, but it does work very well for reverberation and diffuse sounds".

    I presume that, to come to this conclusion, you were not simply relying on a Soundfield microphone and that you were using the full gamut of mixing tools such as B-format panpots.

    As far as I personally am concerned, the SFM is the equivalent of the coincident pair in conventional stereo: simple, works well, sounds good, but is extremely limited. In a modern recording environment you need close-miking, you need multitrack, you need panpots and the gamut of processing on which contemporary production relies. With such material, in my own experience, I would only use an SFM for capturing ambience (as you might use a coincident pair in stereo).

    If, however, you actually did use proper B-format panpots and so on, I am rather surprised that you found conventional channel-speaker mapping systems superior, as this does not correlate with my own experience in this area; I have been working exclusively in first order for some years, including mixing a number of albums with it, and in my opinion localisation and immersion are significantly superior to conventional surround, with greatly reduced holes between the speakers and a larger listening area. I am looking forward to reading the paper and will be interested to learn more.

  • Comment number 2.

    Hi Richard,

    Thanks for the comment. The conclusions we came to with first-order B-format used a selection of test items: some were made of mono files encoded (panned) to B-format, some were recorded using just a Soundfield microphone, and some used a combination of the two.

    The results from our testing back up some of your observations; feedback from subjects often stated that they found B-format more immersive than 5.1. Tests also showed that subjects preferred B-format for diffuse sounds, like wind and rain sound effects, but preferred 5.1 for point sources, like speech. Most of our samples had the speech signals panned towards the front (to be representative of typical BBC content).

    I guess if you were to set out to do a localisation comparison of 5.1 and B-format, the former, as an irregular layout, would struggle with the gaps to the side and rear. The decoding method and the number and location of the speakers would also be key factors in any localisation test, and there are so many variables that it's very difficult to prove one format is superior to another.

  • Comment number 3.

    Hey Tony, love your research. I got directed to this by my lecturer, as I am currently working on using motion tracking to improve immersion in 3D audio environments. I was wondering if you would mind if I asked you a few questions, as I am really intrigued as to whether the technology I am developing could be integrated into similar systems to those you are working with. [Personal details removed by Moderator] Keep up the good work.

  • Comment number 4.

    Hi dmanuel, the moderation rules of all BBC blogs mean that your personal details can't be displayed, but if you have questions relating to this research and would like to contact the team as part of an academic course, please drop your questions through to the R&D contact email address on our web pages here: /rd/contact/general.shtml
    We can't promise that the team will have the capacity to give a complete response, but we will at least try and help you get in touch.
