Elbow in 3D Sound
Here's a short post about a 3D sound experiment that ´óÏó´«Ã½ R&D's audio team conducted the other week in collaboration with ´óÏó´«Ã½ Radio 2.
As part of the ´óÏó´«Ã½ Audio Research Partnership, which was launched earlier in the summer, we are looking at potential next-generation audio formats. You may have read some of our work on ambisonics here and there. If you want more detailed information about what we've done, you can read our paper on the subject, which is available from the AES. The headline from the paper was that first-order ambisonics is not as good as traditional channel-based audio, like 5.1 surround, for (what can be considered) point sources, but it does work very well for reverberation and diffuse sounds. With this in mind we spotted an opportunity to conduct an experiment using a hybrid of ambisonics and conventional channel-based audio.
Elbow, the Manchester band, were planning a homecoming gig at Manchester Cathedral. After an email to the right person it was agreed that the team could try some experimental recording. We thought this would provide an excellent opportunity to learn about capturing a full sound scene using a combination of ambisonics and close-microphone techniques. It would also allow us to improve our understanding of the challenges and compromises faced when integrating 3D audio capture and mixing into a real-world live production environment.
While we suspected that the acoustic of the cathedral would sound great when captured using ambisonics, we didn't really want to capture the on-stage and PA sound with the ambisonics microphone, and it was a rather loud sound reinforcement system. We've recorded the ´óÏó´«Ã½ Philharmonic in ambisonics a few times before but have never had to contend with a loud PA. Thankfully, there are tricks you can perform with ambisonics, such as attenuating the sound arriving from the direction of the left and right speakers of the PA. Plus, Elbow were kind enough to put their drummer in a screened-off booth, so the direct sound of the drums (the loudest instrument on stage) would be attenuated too.
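To give a flavour of the kind of directional trick mentioned above, here is a minimal sketch (not the processing we actually used) of attenuating sound arriving from one direction in first-order B-format: a virtual cardioid is steered at the unwanted direction, re-encoded as a plane wave from that direction, and partially subtracted. The FuMa channel convention and the attenuation amount are assumptions for the example.

```python
import numpy as np

def attenuate_direction(W, X, Y, Z, azimuth_deg, amount=0.8):
    """Reduce energy arriving from one horizontal direction in first-order
    B-format (FuMa convention assumed: W = s/sqrt(2), X = s*cos(az), Y = s*sin(az)).

    W, X, Y, Z : 1-D numpy arrays of equal length (the B-format channels).
    azimuth_deg: direction to attenuate, in degrees (0 = front, positive = left).
    amount     : 0 leaves the scene untouched, 1 nulls a source exactly on-axis.
    """
    az = np.radians(azimuth_deg)

    # Virtual cardioid aimed at the unwanted direction:
    # unity gain on-axis, zero gain from the opposite direction.
    beam = 0.5 * (np.sqrt(2.0) * W + np.cos(az) * X + np.sin(az) * Y)

    # Re-encode the beam as a plane wave from that direction and subtract
    # a fraction of it from the original channels.
    W_out = W - amount * beam / np.sqrt(2.0)
    X_out = X - amount * beam * np.cos(az)
    Y_out = Y - amount * beam * np.sin(az)
    return W_out, X_out, Y_out, Z  # Z left untouched for a horizontal null

# Example: knock roughly 80% off whatever arrives from 30 degrees to the right.
# W, X, Y, Z = attenuate_direction(W, X, Y, Z, azimuth_deg=-30, amount=0.8)
```

A null like this inevitably colours sources in nearby directions too, which is partly why the microphone placement described below mattered so much.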
Due to the nature of the cathedral's structure we couldn't get the ideal ambisonic microphone position by slinging it from above. A couple of years ago we recorded the Last Night of the Proms in ambisonics and were able to position the microphone right in the centre of the hall, above and behind the conductor. However, we had to compromise for this event by placing the microphone slightly to stage left of the front-of-house mixing desk. This worked quite well because it was far enough back from the PA speakers that from most directions it was just capturing reflections from the walls and ceiling of the cathedral. This also helped us to attenuate the PA sound mentioned earlier. The microphone was also raised a few metres above the audience; while we wanted to hear the audience singing and clapping, we didn't want to hear specific members of the audience chatting.
We could have just recorded the microphone signals without worrying too much about how they sounded, but we thought it would be nice to at least try to monitor the 3D recording. The space inside one of Radio 2's outside broadcast trucks is very limited, with nowhere near enough room for the 16-speaker set-up we have in our listening room. To get around this we used binaural reproduction, via a system developed by Smyth Research called the Realiser. This box allows you to simulate surround sound over headphones. It's fairly complicated to set up, but it does a pretty good job of making it sound like you're in your multi-speaker listening room when you're actually listening over headphones. Normally the Realiser is used to monitor 5.1 surround sound, but we calibrated the system with a cube of eight speakers to allow us to monitor sound in 3D. It even has a head tracker to keep sounds horizontally stationary relative to small lateral movements of your head.
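As a rough illustration of what binaural monitoring of virtual loudspeakers involves (this is a generic sketch, not how the Realiser works internally), each virtual speaker feed can be convolved with a pair of head-related impulse responses for that speaker's direction relative to the listener's head, then summed for the left and right ears; the head tracker simply changes which direction the HRIRs are chosen for. The hrir_for_direction function is a hypothetical lookup into whatever HRIR set is available.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(speaker_feeds, speaker_azimuths_deg, head_yaw_deg, hrir_for_direction):
    """Very rough sketch of binaural rendering of virtual loudspeakers.

    speaker_feeds        : list of 1-D numpy arrays (all the same length), one per virtual speaker.
    speaker_azimuths_deg : azimuth of each virtual speaker in the listening room.
    head_yaw_deg         : current head orientation reported by a head tracker.
    hrir_for_direction   : hypothetical callable returning (hrir_left, hrir_right),
                           both the same length, for an azimuth relative to the head.
    """
    left = None
    right = None
    for feed, az in zip(speaker_feeds, speaker_azimuths_deg):
        # Direction of this speaker relative to where the head is pointing.
        rel_az = az - head_yaw_deg
        h_l, h_r = hrir_for_direction(rel_az)

        # Convolve the speaker feed with each ear's impulse response.
        ear_l = fftconvolve(feed, h_l)
        ear_r = fftconvolve(feed, h_r)

        left = ear_l if left is None else left + ear_l
        right = ear_r if right is None else right + ear_r
    return left, right
```

A real-time system would do this block by block, crossfading as the head moves, and would also account for speaker elevation in a cube layout; the point is simply that headphone monitoring of a 3D speaker rig reduces to filtering and summing.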
Along with the ambisonics microphone signals we recorded all the individual sources (about 50 of them) to allow us to remix the recording in our listening room. We have developed our own spatial audio 3D panner that allows us to position each of the sources anywhere in the 3D soundscape and over the next month or so we will experiment with a number of different spatial mixes of the recording to assess which is generally preferred.
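For a flavour of what a 3D panner does (our own tool is rather more involved than this), here is a minimal first-order ambisonic encoder: a mono source is placed at an azimuth and elevation by weighting it into the four B-format channels. The FuMa weighting convention is an assumption; other conventions such as AmbiX differ in channel order and normalisation.

```python
import numpy as np

def encode_first_order(mono, azimuth_deg, elevation_deg):
    """Pan a mono signal to first-order B-format (FuMa convention assumed).

    mono          : 1-D numpy array containing the source signal.
    azimuth_deg   : 0 = front, positive = towards the listener's left.
    elevation_deg : 0 = ear height, positive = above.
    Returns the four B-format channels (W, X, Y, Z).
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)

    W = mono * (1.0 / np.sqrt(2.0))        # omnidirectional component
    X = mono * np.cos(az) * np.cos(el)     # front/back figure-of-eight
    Y = mono * np.sin(az) * np.cos(el)     # left/right figure-of-eight
    Z = mono * np.sin(el)                  # up/down figure-of-eight
    return W, X, Y, Z

# Example: place a close-miked vocal slightly left of centre and a little above.
# W, X, Y, Z = encode_first_order(vocal, azimuth_deg=15, elevation_deg=10)
```

Mixing is then a matter of summing the encoded channels of the individual sources with the ambisonic room microphone before decoding to whatever speaker layout is in use, such as the 16-speaker listening room mentioned above.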
We learnt (and are still learning) a lot from this experiment and will be publishing results and analysis over the coming months. In the meantime, here's a link to the Radio 2 show, where you can watch some clips of Elbow's performance.
We would like to thank Rupert Brun, Sarah Gaston, Paul Young, Mike Walter, Mark Ward and all at Radio 2 for making this experiment possible.
Comment number 1.
At 14th Nov 2011, Richard E wrote: You refer to your AES paper, which I do not have available at the moment, and note that you found "first-order ambisonics is not as good as traditional channel-based audio, like 5.1 surround, for (what can be considered) point sources, but it does work very well for reverberation and diffuse sounds".
I presume that, to come to this conclusion, you were not simply relying on a Soundfield microphone and that you were using the full gamut of mixing tools such as B-format panpots.
As far as I personally am concerned, the SFM is the equivalent of the coincident pair in conventional stereo: simple, works well, sounds good, but is extremely limited. In a modern recording environment you need close miking, you need multitrack, you need panpots and the gamut of processing on which contemporary production relies. For such material, in my own experience, I would only use an SFM for capturing ambience (as you might use a coincident pair in stereo).
If, however, you actually did use proper B-format panpots and so on, I am rather surprised that you found conventional channel-speaker mapping systems superior, as this does not correlate with my own experience in this area; I have been working exclusively in first order for some years, including mixing a number of albums with it, and in my opinion localisation and immersion are significantly superior to conventional surround, with greatly reduced holes between the speakers and a larger listening area. I am looking forward to reading the paper and will be interested to learn more.
Comment number 2.
At 15th Nov 2011, tony churnside wrote: Hi Richard,
Thanks for the comment. The conclusions we came to about first-order B-format were based on a selection of test items, some of which were made of mono files encoded (panned) to B-format, some were recorded using just a Soundfield microphone, and some used a combination of the two.
The results from our testing back up some of your observations; feedback from subjects often stated that they found B-format more immersive than 5.1. Tests also showed subjects preferred B-format for diffuse sounds, like wind and rain sound effects, but preferred 5.1 for point sources, like speech. Most of our samples had the speech signals panned towards the front (to be representative of typical ´óÏó´«Ã½ content).
I guess if you were to set out to do a localisation comparison of 5.1 and B-format, the former, as an irregular layout, would struggle with the gaps to the side and rear. The decoding method and the number and location of the speakers would all be key factors in any localisation test, and there are so many variables that it's very difficult to prove one format is superior to another.
Comment number 3.
At 24th Nov 2011, DavidManuel wrote: Hey Tony, love your research. I got directed to this by my lecturer, as I am currently working on using motion tracking to improve immersion in 3D audio environments. I was wondering if you would mind if I asked you a few questions, as I am really intrigued as to whether the technology I am developing could be integrated into similar systems to those you are working with. [Personal details removed by Moderator] Keep up the good work.
Comment number 4.
At 9th Dec 2011, Ant Miller wrote: Hi dmanuel, the moderation rules of all ´óÏó´«Ã½ blogs mean that your personal details can't be displayed, but if you have questions relating to this research and would like to contact the team as part of an academic course, please drop your questions through to the R&D contact email address on our web pages here: /rd/contact/general.shtml
We can't promise that the team will have the capacity to give a complete response, but we will at least try and help you get in touch.