Archives for December 2011

Bringing audio to the web, and the web to audio

Olivier Thereaux | 11:45 UK time, Thursday, 22 December 2011

Some of the exciting research work we do on audio requires 50 microphones or a state-of-the-art listening room. Some needs just a web browser and our willingness to collaborate with the best in the industry.

For almost two years now, the ´óÏó´«Ã½ has been active in the audio effort of the W3C, the forum where new web standards are discussed and developed. Since we joined the effort, much work has been done, including early implementations by the likes of Mozilla and Google, but there had not been many official announcements of progress. Things changed dramatically in the past two weeks, however: just days after the W3C director and management gave the ´óÏó´«Ã½ a significant nod for our contribution and expertise by appointing a ´óÏó´«Ã½ representative (yours truly) as co-chair of its Audio Working Group, the working group resolved to publish not one but two draft specifications which promise to make audio a first-class citizen on the web.

As explained in the working group's announcement, the emerging technologies will allow developers to build rich experiences directly in the browser, as part of the increasingly exciting open web stack. Among other things, they make it possible to do the following (a short code sketch follows the list):

  • Loop sounds without gaps
  • Control parameters such as bass and treble, enhancing the clarity of audio
  • Pan sounds from left to right
  • Position sounds in 3D space for games
  • Easily process the raw data in an audio stream for scientific research
  • Add filters and effects to audio for music creation
  • Visualize audio signals for music and streaming applications
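
As a flavour of what this enables, here is a minimal sketch of gapless looping, parameter control and 3D positioning in the browser. The method and interface names here follow one of the proposals; the details are still in flux and the final standard may well differ.

```typescript
// A minimal sketch of the kind of audio code these drafts enable.
// Interface names follow the Web Audio API proposal; the final
// standard may differ.
const ctx = new AudioContext();

async function playLoopedAndPanned(url: string): Promise<void> {
  // Fetch a sound file and decode it into a raw sample buffer.
  const data = await (await fetch(url)).arrayBuffer();
  const buffer = await ctx.decodeAudioData(data);

  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.loop = true;            // sample-accurate, gapless looping

  const gain = ctx.createGain();
  gain.gain.value = 0.8;         // simple parameter control

  const panner = ctx.createPanner();
  panner.setPosition(1, 0, -1);  // position the sound in 3D space

  source.connect(gain);
  gain.connect(panner);
  panner.connect(ctx.destination);
  source.start();
}
```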

Such features will enable great applications, from gaming to collaboration on music and video/audio communication between friends, all in the browser, using technologies open to all. Work is not over for the group - indeed, one could argue that publishing two draft specifications is a sign that the serious work can begin. There are months of collaboration ahead of us: refining the specifications, building test suites to ensure the interoperability of implementations, and building consensus towards merging the two efforts into, ideally, a single interoperable technology which would work for all.

But the most exciting thing, perhaps, is knowing that this effort will not only bring great audio experiences to the web; it will also bring the web, and the many possibilities of open, decentralised collaboration, to the audio world, and will surely give birth to exciting, entirely new and as yet unheard-of applications.

To learn more about this work, read the draft specifications or visit the working group's home page.

Controlling 3D Sound Using Natural Gesture

Chris Pike | 11:00 UK time, Thursday, 22 December 2011

In ´óÏó´«Ã½ R&D’s audio team we are investigating immersive audio formats for the future of broadcasting. A key aim for this work is to create a more realistic sense of space in audio content, allowing listeners to perceive sounds in three dimensions by adding height and depth information.

This work was recently featured in an article introducing our experiments with gesture-based control, which allows sound engineers to position sounds in 3D space. Using a gesture-tracking sensor, we have developed a tool that lets the user move sounds with natural arm movements.
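
To give a flavour of the principle (an illustrative sketch only: the sensor interface, coordinate conventions and units here are hypothetical, and our actual tool drives a dedicated 3D audio renderer), a tracked hand position can be converted directly into the direction of a sound source:

```typescript
// Illustrative only: convert a tracked hand position into the direction
// of a sound source. The HandPosition interface and its coordinate
// conventions are hypothetical stand-ins for a real gesture sensor.
interface HandPosition { x: number; y: number; z: number } // metres, sensor space

function handToSourceDirection(hand: HandPosition): { azimuth: number; elevation: number } {
  // Azimuth: angle left/right of straight ahead (negative z is "forward").
  const azimuth = (Math.atan2(hand.x, -hand.z) * 180) / Math.PI;
  // Elevation: angle above or below the horizontal plane.
  const horizontal = Math.hypot(hand.x, hand.z);
  const elevation = (Math.atan2(hand.y, horizontal) * 180) / Math.PI;
  return { azimuth, elevation }; // degrees, fed to the 3D audio renderer
}
```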

We’ve made a short video about it.

Read the rest of this entry

iPlayer 3D update

Ant Miller | 16:18 UK time, Wednesday, 21 December 2011

The Digital Service Development group, led by Phil Layton in ´óÏó´«Ã½ R&D, was involved in this year's trial of 3D at the Wimbledon Tennis Championships and in the recently broadcast Strictly Come Dancing Grand Final. In this post, Dr Peter Cherriman and Paul Gorley outline the work they did to determine whether it was possible to put 3D content onto the Freeview and Freesat versions of TV iPlayer.

Generally, 3D requires a high bitrate to achieve good stereoscopy. If the video bitrate is too low, depth cues are lost and the 3D becomes tiring to watch. However, because viewers' Internet speeds vary, the higher the bitrate on iPlayer, the fewer people are able to watch it. So we had the challenging task of trying to produce high-quality 3D at as low a bitrate as possible.

Our 3D television broadcasts on Freeview, Freesat, Sky and Virgin all use a side-by-side frame-compatible format. This combines the left- and right-eye views into a single HD signal by anamorphically squashing each eye's view horizontally into half of the HD frame, so that they appear side by side. This HD signal was compressed at a resolution of 1920x1080i25, meaning 25 interlaced frames per second, each comprising 1920 pixels across and 1080 lines. Each interlaced frame consists of two fields; each field is 1920x540 pixels, and the fields are captured 1/50th of a second apart.
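
The geometry is easy to check with a little arithmetic (a back-of-the-envelope sketch, not production code):

```typescript
// Back-of-the-envelope arithmetic for the side-by-side frame-compatible format.
const frame = { width: 1920, height: 1080 };                     // full HD frame
const perEye = { width: frame.width / 2, height: frame.height }; // 960 x 1080
// Each eye keeps full vertical resolution but only half the horizontal
// samples; the display stretches them back out anamorphically.
const field = { width: frame.width, height: frame.height / 2 };  // 1920 x 540
console.log(perEye, field);
```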

In order to produce the best-quality 3D we decided to use a recording made in Blackpool, rather than the broadcast feed received via satellite. This had a number of advantages: it meant we weren't limited to the side-by-side 3D format, and the recording had fewer compression artefacts.

We did a number of experiments with different resolutions and determined that the best compromise between bitrate and quality was to convert the recorded 1920x1080i25 interlaced signal for each eye into a non-interlaced 1280x720p50 signal using a professional cross-converter. The resulting intermediate signal runs at 50 frames per second, where each frame is 1280x720 pixels for each eye.
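
As a rough sanity check on this trade-off (illustrative arithmetic only), the intermediate format carries almost as many pixels per second as the source, while replacing interlaced fields with full progressive frames every 1/50th of a second:

```typescript
// Rough pixel-rate comparison between the source and intermediate formats.
const source1080i25 = 1920 * 1080 * 25; // 51,840,000 pixels/s per eye (interlaced)
const inter720p50 = 1280 * 720 * 50;    // 46,080,000 pixels/s per eye (progressive)
console.log((inter720p50 / source1080i25).toFixed(2)); // "0.89"
```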

Encoding chain schematic for 3D on iPlayer

Read the rest of this entry

´óÏó´«Ã½ R&D 9 Lessons & Carols Experiment

Anthony Churnside | 15:33 UK time, Wednesday, 21 December 2011

This Christmas the ´óÏó´«Ã½ is conducting a surround sound experiment. You can find out more about what we are doing and why we are doing it by reading this blog post by Rupert Brun.

´óÏó´«Ã½ R&D is involved with one aspect of this experiment: we are investigating ways of improving the experience for people listening to our content on headphones.
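
As a rough illustration of the general idea (a sketch only, assuming the Web Audio API's HRTF panner; the processing actually used in the experiment may be quite different), each loudspeaker channel of a surround mix can be rendered binaurally as a virtual source placed around the listener's head:

```typescript
// Illustrative sketch: virtualise a 5-channel surround mix for headphones
// by placing each channel as a virtual loudspeaker around the listener.
// The processing used in the actual experiment may differ.
const ctx = new AudioContext();

// Approximate loudspeaker azimuths (degrees) for a 5.0 layout.
const speakers = [
  { name: "L", azimuth: -30 },
  { name: "R", azimuth: 30 },
  { name: "C", azimuth: 0 },
  { name: "Ls", azimuth: -110 },
  { name: "Rs", azimuth: 110 },
];

function virtualSpeaker(azimuthDeg: number): PannerNode {
  const p = ctx.createPanner();
  p.panningModel = "HRTF"; // binaural rendering for headphones
  const a = (azimuthDeg * Math.PI) / 180;
  // Place the virtual speaker on a unit circle around the listener,
  // who faces down the negative z axis by default.
  p.setPosition(Math.sin(a), 0, -Math.cos(a));
  p.connect(ctx.destination);
  return p;
}

// Each surround channel's source node would then connect to its panner.
const panners = speakers.map(s => virtualSpeaker(s.azimuth));
```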

Here is a short video where I explain how and why we are doing this.

We have created a number of different versions of 2007's A Festival of Nine Lessons and Carols. Each version has been processed in a different manner. At the beginning of each version of the service an audio test sequence plays, giving you time to decide whether you like that version. You do not need to listen to the entirety of each version in order to take part.

To take part, follow this link to hear our recordings (make sure you put on your headphones).

To fill in the survey and provide us with some feedback, please follow this link.

I hope you enjoy the experiment and have a happy Christmas.

Prototyping weeknotes #90

Chris Godbert | 12:28 UK time, Monday, 19 December 2011

Project-wise, there's a big push by the P2P-Next team to get their formal LIMO deliverable finished and submitted for the end-of-December deadline. At our weekly team meeting they showed their first end-to-end demo: real-time LIMO event (timed metadata) delivery, synchronised to a live WebM video stream. It's a fantastic achievement and really exciting to see it all working together.
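
As a hedged sketch of the browser side of such a system (the event shape and function names here are stand-ins, not the project's actual LIMO interfaces), timed metadata can be buffered alongside the stream and dispatched when the video clock reaches each event's timestamp:

```typescript
// Illustrative sketch of dispatching timed metadata in sync with a video
// element's playback clock. The TimedEvent shape is a hypothetical
// stand-in for the project's actual LIMO event format.
interface TimedEvent { time: number; payload: unknown } // time in seconds

function syncEvents(
  video: HTMLVideoElement,
  events: TimedEvent[],
  onEvent: (e: TimedEvent) => void,
): void {
  const pending = [...events].sort((a, b) => a.time - b.time);
  video.addEventListener("timeupdate", () => {
    // Fire every event whose timestamp the playback clock has passed.
    while (pending.length > 0 && pending[0].time <= video.currentTime) {
      onEvent(pending.shift()!);
    }
  });
}
```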

We are also really excited to have just started recruiting for two new roles.

Read the rest of this entry

Prototyping weeknotes #89

George Wright | 11:47 UK time, Tuesday, 13 December 2011

A lot of activity this week as we approach Christmas, deal with a lot of deadlines and sort out next year's priorities. For some of us, the week started with a great team branding workshop organised by Olivier. Very useful! There was good progress on FI-Content, with new user journeys coming along, and Vicky, Joanne and Pete have continued to focus their UX thinking by refining their scenarios and developing new wireframes. We are now looking at the Programme List as a use case for further development. Also, it was Theo's birthday at the weekend, so we ate some cake.

Read the rest of this entry

White Papers Published Lately

As the library here at R&D increases its white paper publication schedule, we're going to start posting a monthly round-up of the most recent additions to our collection. All white papers are available to download, subject to our Ts & Cs (as listed on the website).

´óÏó´«Ã½ R&D White papers

This month the papers come from a typically wide array of research areas, and include several that will provide useful introductions for interested parties new to the subjects under consideration. As ever, we also include a number of papers of more technical interest. Admittedly, some of these papers were actually published a little more than a month ago, but this feels like a good initial round-up - feedback welcome, of course.

´óÏó´«Ã½ R&D White Paper 205 - A Simple Model of the UHF cross-polar terrestrial channel for DVB-NGH

Peter Moss, Tuck Yeen Poon and John Boyer

September 2011

In this technical paper from our Distribution Core Technologies Section, we detail some aspects of our long-term research into platforms for next-generation handheld television broadcasting.

Read the rest of this entry

Prototyping Weeknotes #88

Chris Lowis | 14:56 UK time, Friday, 2 December 2011

Our weekly show-and-tell team meeting is rapidly becoming one of the things I most look forward to each week: a chance for our enlarged team to get together, share ideas and ask questions. This week was no different. Sean and Chris N. talked about the work they've been doing as part of the P2P-Next project. They've been looking at a GStreamer-powered pipeline which allows timecodes to be injected into the media stream and then accessed in the browser. Chris showed us a demo of a looping video with a variety of different embedded timecodes. It's a very powerful approach, and the guys are learning a lot about a complicated toolchain. Andrew N has also been busy refactoring the demonstrator LIMO app to integrate with Sean and Chris's live stream work.

Read the rest of this entry
