In our first post on audio-led AR, R&D's Henry Cooke discussed why we're interested in the technology. After going on a soundwalk, we decided that an interesting next step would be to port our existing experiment in geolocated audio to our Bose Frames devkit. Here, its developer - until recently a Software Engineer at BBC R&D - describes the port and what he learned by making it.
Alluvial Sharawadji is a "crowdsourced soundwalk" work made for an arts festival in Catalonia in summer 2018. For the piece, Tim and I built a mobile-friendly web app allowing participants to record sounds, which were saved along with the participant's current geolocation. We then used the Web Audio API to present a "virtual soundwalk" around the town.
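As a rough sketch of that capture flow (not the original app's code), a browser could record a clip and tag it with the participant's position along these lines; the `upload` helper and the fixed ten-second clip length are our own assumptions:

```typescript
// Hypothetical sketch: record a short audio clip in the browser and save it
// together with the participant's current geolocation.
async function recordGeolocatedClip(
  upload: (clip: Blob, lat: number, lng: number) => void // stand-in for the real save step
): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks: Blob[] = [];

  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = () => {
    // Once the clip is complete, look up the position and store both together.
    navigator.geolocation.getCurrentPosition((pos) => {
      const clip = new Blob(chunks, { type: recorder.mimeType });
      upload(clip, pos.coords.latitude, pos.coords.longitude);
    });
  };

  recorder.start();
  setTimeout(() => recorder.stop(), 10_000); // e.g. ten-second clips
}
```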
The Sharawadji soundwalks are represented as remotely stored sound files with associated latitude and longitude coordinates. The coordinate mapping and the sound files are downloaded when the soundwalk starts.
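In code, each point on a walk boils down to a small record pairing a file URL with coordinates. A minimal TypeScript sketch of that representation and the up-front download (the field and function names are ours, not the project's):

```typescript
// Illustrative shape for one geolocated sound in a walk's coordinate mapping.
interface SoundPoint {
  url: string;       // remotely stored audio file
  latitude: number;  // degrees
  longitude: number; // degrees
}

// Download the coordinate mapping, then fetch and decode every sound file
// up front so the walk can start with everything in memory.
async function loadSoundwalk(ctx: AudioContext, manifestUrl: string) {
  const points: SoundPoint[] = await (await fetch(manifestUrl)).json();
  return Promise.all(
    points.map(async (p) => {
      const data = await (await fetch(p.url)).arrayBuffer();
      return { ...p, buffer: await ctx.decodeAudioData(data) };
    })
  );
}
```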
For this rebuild, we use an iPhone's GPS to position sounds around the listener, and calculate the volume level of each sound based on its distance from the listener. We then use the Resonance Audio (Google VR) library to render the sounds spatially in real time, giving us a dynamic soundscape that changes in response to the listener's position in the real world.
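A minimal sketch of that loop, assuming the resonance-audio Web SDK and helper names of our own: each GPS fix is converted into a metre-scale offset from the listener (haversine for distance, a local flat-earth approximation for direction) before the source is repositioned.

```typescript
import { ResonanceAudio } from 'resonance-audio'; // assumption: the Web SDK build

const EARTH_RADIUS_M = 6_371_000;
const toRad = (deg: number) => (deg * Math.PI) / 180;

// Great-circle (haversine) distance in metres between two lat/lng points.
// (Also used by the attenuation sketch later in this post.)
function distanceMetres(lat1: number, lng1: number, lat2: number, lng2: number): number {
  const dLat = toRad(lat2 - lat1);
  const dLng = toRad(lng2 - lng1);
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLng / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
}

type LatLng = { latitude: number; longitude: number };

// On each GPS fix, place a source relative to the listener in metres.
// We assume Resonance's right-handed frame (x = right, -z = forward),
// so east maps to +x and north to -z.
function placeSource(source: { setPosition(x: number, y: number, z: number): void },
                     listener: LatLng, point: LatLng): void {
  const mPerDegLat = 111_320; // approximate metres per degree of latitude
  const mPerDegLng = mPerDegLat * Math.cos(toRad(listener.latitude));
  const east = (point.longitude - listener.longitude) * mPerDegLng;
  const north = (point.latitude - listener.latitude) * mPerDegLat;
  source.setPosition(east, 0, -north);
}

// Typical setup: one scene, one source per sound, repositioned on every fix.
const ctx = new AudioContext();
const scene = new ResonanceAudio(ctx);
scene.output.connect(ctx.destination);
const source = scene.createSource();
navigator.geolocation.watchPosition((fix) =>
  placeSource(source, fix.coords, { latitude: 51.51, longitude: -0.226 }) // example point near White City
);
```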
The first soundwalk we tested was composed of sounds recorded by various members of our team with their phones around our office in White City, London. Most of these sounds were fairly quiet, ambient recordings with lots of traffic sounds, construction noises and passers-by. At this point, we were using a "rolloff" mechanism from the Resonance Audio library - "rolloff" being the gradual attenuation of a sound's volume as the listener moves away from it.
We found that ambient sounds, especially those recorded in the environment where they're being played back, tend to be drowned out by real-world noise, and so don't provide enough immersion. Furthermore, the built-in rolloff mechanism in Resonance Audio is unsuitable for our coordinate system: one degree of longitude in London is about 43 miles (roughly 69 km), so a 100-metre walk changes the longitude by only about 0.0014 degrees. With coordinate changes that small, there was no perceptible rolloff effect at all.
For the second test, we "placed" three pieces of music at approximately 150-metre intervals along Wood Lane. We also ported the inverse-square-law-based volume attenuation function from the Sharawadji web app into our demo, and switched off the Resonance Audio rolloff. The custom attenuation worked very well, resulting in well-localised sounds: at about 5-10 m from a sound's location we could hear a faint hint of the music, and by following the direction it seemed to come from, we arrived at the spot where it played the loudest.
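The ported attenuation amounts to an inverse-square falloff clamped to a useful range. A sketch with illustrative constants (not the web app's actual values), reusing distanceMetres from the earlier sketch; we also assume the Web SDK's setRolloff('none') for switching off the built-in model:

```typescript
// Inverse-square attenuation: full volume inside a reference radius,
// silence beyond a cutoff, gain ~ 1/d^2 in between. The constants are
// illustrative stand-ins, not the Sharawadji web app's values.
const REF_DISTANCE_M = 1;
const MAX_DISTANCE_M = 150;

function gainForDistance(d: number): number {
  if (d <= REF_DISTANCE_M) return 1;
  if (d >= MAX_DISTANCE_M) return 0;
  return (REF_DISTANCE_M / d) ** 2;
}

// Applied per sound on every GPS fix, with Resonance's own rolloff disabled:
//   source.setRolloff('none');
//   gainNode.gain.value = gainForDistance(
//     distanceMetres(fix.latitude, fix.longitude, point.latitude, point.longitude));
```

With these example constants, the gain at 10 m would be 0.01 (-40 dB), which is consistent with hearing only a faint hint of a sound at that range.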
We are very excited about the possibilities of audio AR technology, and we're keen to run more experiments with it. From our tests so far it seems that with the right sound material and adjustments to the placement and attenuation of sounds, it could prove an interesting platform for new localised sound experiences.
- BBC R&D - On Our Radar: Audio AR
- BBC R&D - Audio AR: Sound Walk Research - The Missing Voice
- BBC R&D - Virtual Reality Sound in The Turning Forest
- BBC R&D - Binaural Sound
- BBC R&D - Spatial Audio
- BBC Academy - Spatial Audio: Where Do I Start?
- BBC R&D - What Do Young People Want From a Radio Player?
- BBC R&D - Prototyping, Hacking and Evaluating New Radio Experiences
- BBC R&D - Better Radio Experiences: Generating Unexpected Ideas by Bringing User Research to Life
Internet Research and Future Services section
The Internet Research and Future Services section is an interdisciplinary team of researchers, technologists, designers, and data scientists who carry out original research to solve problems for the BBC. The section focuses on the intersection of audience needs and public service values, with digital media and machine learning. We develop research insights, prototypes and systems using experimental approaches and emerging technologies.