
Vostok-K Incident - Immersive Spatial Sound Using Personal Audio Devices

Producing, delivering and evaluating our trial which orchestrates phones, laptops, tablets and other devices to create spatial sound - here's a summary of our research.

Published: 26 November 2019
  • Jon Francombe (BMus Ph.D.)

    Lead Research & Development Engineer
  • Kristian Hentschel

    Trainee Research Technologist

Just over a year ago, the Research & Development audio team, working with collaborators, launched The Vostok-K Incident, a short science-fiction audio drama. The piece works on its own as a stereo production but also lets users connect personal devices as additional loudspeakers. Parts of the sound scene (as well as additional content) are automatically routed to these extra devices. By doing this, we can flexibly create immersive spatial audio experiences without being "locked in" to particular loudspeaker layouts or technology providers.

We'd been investigating this idea of device orchestration for a while, publishing earlier work that explored the user experience in a demo system. We decided to commission and create The Vostok-K Incident to develop the production process and to see how the idea of using all available devices played out in the real world.

Last year, we summarised the production process for The Vostok-K Incident. Since then, we've published three papers describing the production, delivery, and evaluation in more detail. These are now all freely available as BBC R&D White Papers; below is a summary of each paper, along with a brief update on our next steps. Two more papers on research related to device orchestration (output from a PhD project that we support) have also been made available.

In October 2018, Jon presented an engineering brief at the Audio Engineering Society (AES) Convention in New York. Audio content that can adapt to an unknown number of connected devices spread out in unknown positions in the room is a significant departure from the norm of assuming a standard layout of stereo loudspeakers. Consequently, the production process was involved and time-consuming. In this paper, we outlined the bespoke production environment that was set up at BBC R&D for creating The Vostok-K Incident. We introduced the ruleset that was used to decide which parts of the audio scene should be sent to each connected device, and then reviewed some of the challenges of writing, recording, and producing this kind of content.
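The engineering brief itself defines the exact ruleset; purely as an illustration of the general idea, here is a minimal sketch of rule-based object-to-device routing. The interfaces, tags, and rules below are invented for this example and are not taken from the production system.

```typescript
// Hypothetical sketch of rule-based object-to-device routing.
// Object tags, device roles, and the rules themselves are invented for illustration.

interface AudioObject {
  id: string;
  tags: string[];          // e.g. "dialogue", "effect", "ambience"
  allowAuxiliary: boolean; // may this object leave the main stereo mix?
}

interface Device {
  id: string;
  role: "main" | "auxiliary"; // main = stereo pair, auxiliary = connected phone or tablet
}

// Assign each object to one or more devices using simple rules:
// dialogue stays on the main device; effects prefer auxiliary devices when
// any are connected; everything falls back to the main device otherwise.
function allocateObjects(objects: AudioObject[], devices: Device[]): Map<string, string[]> {
  const main = devices.filter(d => d.role === "main").map(d => d.id);
  const aux = devices.filter(d => d.role === "auxiliary").map(d => d.id);
  const allocation = new Map<string, string[]>();

  for (const obj of objects) {
    if (!obj.allowAuxiliary || aux.length === 0 || obj.tags.includes("dialogue")) {
      allocation.set(obj.id, main);
    } else {
      // Spread auxiliary-friendly objects across the connected devices round-robin style.
      const target = aux[allocation.size % aux.length];
      allocation.set(obj.id, [target]);
    }
  }
  return allocation;
}
```

In a real production, the rules would be far richer (distance from the listener, device capabilities, dramatic intent), but the principle of tagging objects and resolving them against whatever devices happen to be connected is the same.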

Also at the Audio Engineering Society Convention

In addition to the work on device orchestration happening in the audio team, we support a PhD project at the University of Surrey. Craig Cieciura's project also relates to device orchestration, and he presented two engineering briefs at the AES Convention in New York. The first (republished as BBC R&D White Paper 363) describes a survey looking into the types of loudspeaker devices that listeners have at home, as well as how they consume media. The results showed that there is significant ownership of wireless and smart loudspeakers, and that only a low proportion of listeners have surround sound systems. The second (republished as BBC R&D White Paper 364) describes a library of audio and audio-visual object-based content created for use in experiments investigating device orchestration.

Plenty more work by the R&D audio team was represented at the convention. Lauren Ward presented an engineering brief about using object-based audio to deliver accessible mixes. Her narrative importance control works by allowing the producer to tag different sounds with their importance level, then adjusting the object volumes according to whether the listener wants an immersive experience or a clear narrative. This work has since been implemented in a trial with BBC One's hospital drama series, Casualty. Additionally, Jon chaired a panel of industry and academic experts discussing best practice for recruiting and training participants for listening experiments, and took part in a second panel discussion.
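To make the narrative importance idea concrete, here is a small sketch of how an importance tag and a single listener control could be mapped to a per-object gain. The importance scale and gain curve are assumptions for illustration, not the published design.

```typescript
// Hypothetical sketch of a narrative importance control.
// Importance levels and the gain mapping are illustrative only.

type Importance = 1 | 2 | 3 | 4; // 4 = essential to the narrative, 1 = purely atmospheric

interface TaggedObject {
  id: string;
  importance: Importance;
}

// control = 0 gives the full immersive mix; control = 1 keeps the most
// important objects at full level and attenuates the rest progressively.
function objectGain(obj: TaggedObject, control: number): number {
  const attenuationDb = -20;               // maximum cut for the least important objects
  const weight = (4 - obj.importance) / 3; // 0 for essential objects, 1 for atmospheric ones
  const gainDb = attenuationDb * weight * control;
  return Math.pow(10, gainDb / 20);        // convert dB to a linear gain
}

// Example: a halfway setting leaves dialogue untouched but pulls background sounds down.
console.log(objectGain({ id: "dialogue", importance: 4 }, 0.5)); // 1.0
console.log(objectGain({ id: "rain", importance: 1 }, 0.5));     // ≈ 0.32 (-10 dB)
```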

Once The Vostok-K Incident had been produced, we needed a way to deliver it over the internet. Kristian's paper at TVX (closer to home, in the building next door to our Salford office) summarised the web delivery framework that he built. The short paper was written to accompany a demonstration and starts by placing our work in the context of previous orchestrated experiences. It goes on to introduce the framework, including audio processing, delivery, and routing; synchronisation of devices; and user interface considerations.
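The paper describes the framework's actual synchronisation approach; as a rough illustration of one common building block, the sketch below estimates the offset between a device's clock and a reference clock from round-trip measurements, so that every connected device could schedule audio against a shared timeline. The endpoint and message format are hypothetical.

```typescript
// Hypothetical sketch of client/server clock-offset estimation for synchronised playback.
// The endpoint and response shape are invented; the real framework is described in the paper.

interface SyncSample {
  offsetMs: number;    // estimated difference between server and client clocks
  roundTripMs: number; // how long the measurement took (lower = more trustworthy)
}

async function measureOffset(url: string): Promise<SyncSample> {
  const t0 = performance.now();      // client time when the request is sent
  const response = await fetch(url); // server replies with its current time
  const t1 = performance.now();      // client time when the response arrives
  const serverTimeMs = (await response.json()).serverTimeMs;

  const roundTripMs = t1 - t0;
  // Assume the reply arrived halfway through the round trip.
  const offsetMs = serverTimeMs - (t0 + roundTripMs / 2);
  return { offsetMs, roundTripMs };
}

// Take several samples and keep the one with the shortest round trip,
// which is least affected by network jitter.
async function estimateClockOffset(url: string, samples = 5): Promise<number> {
  const results: SyncSample[] = [];
  for (let i = 0; i < samples; i++) {
    results.push(await measureOffset(url));
  }
  results.sort((a, b) => a.roundTripMs - b.roundTripMs);
  return results[0].offsetMs;
}
```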

Other BBC R&D involvement at TVX included a workshop co-chaired by our audio team lead Chris Pike and a paper evaluating the Living Room of the Future.

Finally, after delivering The Vostok-K Incident on BBC Taster, we wanted to evaluate it. One way of doing this is to look at how people interacted with the trial. This was the subject of a paper that Jon presented in September 2019 at a conference in Aachen, Germany. We used interaction logs to look at the time that listeners spent with The Vostok-K Incident, the number of extra devices they connected, and how they used the different options on the interface. We also analysed the results of a short questionnaire that listeners could complete after the experience. The results suggested that there is value in this approach to delivering audio drama: 79% of respondents loved or liked using phones as speakers, and 85% would use the technology again. The interaction results gave helpful pointers for future experiences, suggesting that we should aim to get the greatest possible benefit out of a few connected devices, ensure that content is impactful right from the start, and explore different types of user interaction.
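As a toy illustration of this kind of analysis (not the trial's actual log format or pipeline), the sketch below aggregates hypothetical session records into the measures mentioned above: time spent, extra devices connected, and interface usage.

```typescript
// Hypothetical sketch of aggregating interaction logs; the real log schema differed.

interface SessionLog {
  sessionId: string;
  durationSeconds: number;     // time spent with the experience
  auxDevicesConnected: number; // extra phones/tablets joined during the session
  interfaceEvents: string[];   // e.g. "mute-aux", "replay", "show-credits"
}

function summarise(logs: SessionLog[]) {
  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const eventCounts = new Map<string, number>();
  for (const log of logs) {
    for (const event of log.interfaceEvents) {
      eventCounts.set(event, (eventCounts.get(event) ?? 0) + 1);
    }
  }
  return {
    sessions: logs.length,
    meanDurationSeconds: mean(logs.map(l => l.durationSeconds)),
    meanAuxDevices: mean(logs.map(l => l.auxDevicesConnected)),
    eventCounts,
  };
}
```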

The paper also discussed the pros and cons of using a large-scale public trial for evaluation, concluding that it is a useful technique that should be performed alongside more carefully controlled user testing. We have subsequently performed such a test, conducting a small number of telephone interviews with listeners. The early results look useful and will be published in due course.

What's next for our device orchestration research?

After evaluating The Vostok-K Incident, we designed and delivered several workshops aimed at better understanding audience and producer requirements for device orchestration technology. These workshops are ongoing, and we're aiming to write up our findings towards the end of 2019. Alongside this, we've been working on a software tool that makes it much easier to create prototype orchestrated experiences. Watch this space for more information! We're also involved in longer-term research projects through industrial supervision of PhD students. Craig's project, detailed above, is ongoing, and we're shortly embarking on another PhD project at the University of York looking into the creative affordances of device orchestration.

  • Immersive and Interactive Content section

    The Immersive and Interactive Content (IIC) section is a group of around 25 researchers investigating ways of capturing and creating new kinds of audio-visual content, with a particular focus on immersion and interactivity.
