
Archives for December 2012

Pinocchio


Anthony Churnside | 14:22 UK time, Monday, 17 December 2012

This week has been quite a week for some of ´óÏó´«Ã½ R&D's current research projects getting airtime. A team of us from the North Lab helped The One Show draw a giant Christmas tree using light painting, and halfRF HD cameras were used for the live broadcast from Television Centre. This post is about a radio drama that was broadcast on Saturday on Radio 4: an adaptation of Carlo Collodi's Pinocchio. Linda, the scriptwriter, has written a blog post about adapting the story that you can read here.

Radio drama production workflows have developed over the last 60 years, but productions are almost always a mixture of microphone recordings of live actors' performances, recorded sound effects from libraries, and music either from a composer or a music library. These sources are mixed together to create a stereo (two-channel) file which is played out for broadcast. Recently a few radio dramas have been produced in 5.1; for example, earlier this year Private Peaceful was mixed in 5.1 and then rendered to binaural using virtual speakers. That involved creating two mixes, a stereo (two-channel) file and a 5.1 surround (six-channel) file, so the creative process of mixing had to be performed twice, once in stereo and once in surround.

Mixing Pinocchio in ´óÏó´«Ã½ R&D's Listening Room

For Pinocchio we did something a bit different. Steve, the sound designer, and I mixed the recordings treating the sources as "audio objects". Rather than making a stereo mix by panning sounds somewhere between the left and right speakers, we forgot about speaker locations and positioned audio objects at locations in space. The final mix, rather than being a stereo or surround sound file, is therefore a set of audio objects, each with accompanying metadata describing things like the source level, azimuth, elevation and distance. This data is then rendered to speaker channels, either before broadcast (in the case of this experiment) or at the listener's home or device (potentially the case in the future).
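To make the idea of an audio object a little more concrete, here's a rough sketch in Python of how one object and a small scene might be represented. The field names, units and values are invented for illustration; they aren't the actual metadata format used on the production.

```python
from dataclasses import dataclass

@dataclass
class AudioObject:
    """One source in the scene: a mono recording plus positional metadata.

    Field names and units are illustrative only, not the real format
    used on the Pinocchio production.
    """
    name: str            # e.g. "pinocchio_dialogue"
    audio_file: str      # path to the mono recording
    level_db: float      # source level relative to full scale
    azimuth_deg: float   # 0 = straight ahead, positive = to the listener's left
    elevation_deg: float # 0 = ear height, 90 = directly overhead
    distance_m: float    # distance from the listener in metres

# A tiny invented scene from the underwater shark sequence:
scene = [
    AudioObject("pinocchio_dialogue", "pinocchio.wav", -12.0,   0.0,  0.0, 1.0),
    AudioObject("bubbles_left",       "bubbles.wav",   -24.0,  90.0, 10.0, 3.0),
    AudioObject("bubbles_right",      "bubbles.wav",   -24.0, -90.0, 10.0, 3.0),
    AudioObject("creaks_overhead",    "creaks.wav",    -30.0,   0.0, 60.0, 8.0),
]
```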

There are a number of potential advantages to describing audio scenes in this way:

1. Speakers and listening devices become independent of the mix. Listeners can use as many or as few speakers as they want, wherever they want, or they can listen on headphones or a mono tablet speaker. The client system knows the listening environment and can render the scene in the way that provides the highest quality of experience.
2. An object-based scene representation can be rendered differently for different people. For example, people with different hearing abilities may want a different balance between foreground and background sounds (there is a sketch of this idea just after this list). There is also a lot of potential for applications like Perceptive Media when scenes are described using an object-based audio approach.
3. Interactivity can be enabled. When audio scenes are composed of separate objects, those objects can be interacted with individually to provide computer-game-like experiences. This opens up a lot of user experience research questions.
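As a rough illustration of the second point above, the sketch below turns background objects down for a listener who wants clearer dialogue. The scene and trim values are invented for the example; a real renderer would of course work on the full object metadata and audio.

```python
from copy import deepcopy

# A toy scene: each object is a dict of metadata (all values invented).
scene = [
    {"name": "narrator", "level_db": -12.0, "is_foreground": True},
    {"name": "sea_bed",  "level_db": -24.0, "is_foreground": False},
    {"name": "bubbles",  "level_db": -27.0, "is_foreground": False},
]

def personalise(scene, background_trim_db=-6.0):
    """Return a copy of the scene with background objects turned down.

    A listener who struggles to follow dialogue might ask for a stronger
    trim; someone who wants the full ambience would pass 0.0.
    """
    adjusted = deepcopy(scene)
    for obj in adjusted:
        if not obj["is_foreground"]:
            obj["level_db"] += background_trim_db
    return adjusted

clearer_dialogue = personalise(scene, background_trim_db=-9.0)
```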

The microphone set-up for some of the recording of Pinocchio.

This approach meant that we could mix Pinocchio in 3D, placing sounds wherever we wanted (above, below, in front, behind and so on). For example, there is a scene in which Pinocchio is swallowed by a shark; Steve and I were able to position underwater sound effects, as audio objects, all around the listener, creating a highly immersive experience.

We were able to monitor the mix in our listening room, which is equipped with 26 loudspeakers. This set-up has more loudspeakers than are likely to be available to the average listener, so we used a rendering system to create the stereo mix, which was broadcast on Saturday, a 5.1 mix, and the 24.2 mix that the production team used to monitor during post-production. So although you can't yet hear a full 3D surround version of Pinocchio, you can download a 5.1 version to hear it in surround sound. Instructions for how to do so are given here.
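As a rough sketch of how one set of object metadata can feed several different speaker layouts, the toy renderer below pans a single object's azimuth onto whichever layout is requested. It is a crude stand-in for a proper object renderer (and not the system we actually used), but it shows why the same object mix can yield stereo, 5.1 and 24.2 outputs.

```python
import math

# Nominal speaker azimuths in degrees (0 = front, positive = left).
# Illustrative only: real layouts also define elevation, an LFE channel etc.
LAYOUTS = {
    "stereo": [30.0, -30.0],
    "5.0":    [0.0, 30.0, -30.0, 110.0, -110.0],
}

def pan_gains(source_azimuth, speaker_azimuths):
    """Constant-power pan between the two speakers adjacent to the source.

    A crude stand-in for a real object renderer, but it shows the idea:
    the same object metadata can be rendered to any speaker layout.
    """
    n = len(speaker_azimuths)
    order = sorted(range(n), key=lambda i: speaker_azimuths[i])
    gains = [0.0] * n
    for k in range(n):
        a, b = order[k], order[(k + 1) % n]
        lo, hi = speaker_azimuths[a], speaker_azimuths[b]
        span = (hi - lo) % 360.0 or 360.0
        offset = (source_azimuth - lo) % 360.0
        if offset <= span:
            frac = offset / span
            gains[a] = math.cos(frac * math.pi / 2.0)
            gains[b] = math.sin(frac * math.pi / 2.0)
            break
    return gains

# The same hard-left object (azimuth 90 degrees) rendered for two layouts:
for name, layout in LAYOUTS.items():
    print(name, [round(g, 2) for g in pan_gains(90.0, layout)])
```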

    This is still very experimental and we'd love to see your feedback in the blog comments below.

halfRF on The One Show - that went well!


Ant Miller | 19:00 UK time, Thursday, 13 December 2012

A few notes from Ken Taylor of the halfRF team, who helped make the launch of the ´óÏó´«Ã½ Christmas season such a great evening on Tuesday:


    As you know, the halfRF Radio Camera was used for the first time for a live broadcast of The One Show yesterday evening. JohnB, Adrian, Tuck, Kevin and I spent a very cold day setting up the equipment in the area around the Stage Door of TVC and in the SIS Live outside broadcast studio OB7. There was a great deal of interest in the system from everyone who saw it – particularly the SIS Live crew and cameramen.

SIS Live cameraman using the halfRF rig in preparation for a live link at TVC on The One Show

    As expected, some of the cameramen said it was a bit big but were happy to understand that it is only a prototype and a properly engineered solution would be much smaller and ergonomically better. They did say however that the balance on the Sony camera was in fact quite good. Martyn, our cameraman for the evening, seemed to be quite happy with it.


    Read the rest of this entry

    Mood Classification for iPlayer


    Sam Davies | 14:40 UK time, Thursday, 13 December 2012

Recently we released the latest version of our experimental prototype, which showcases part of our research into obtaining metadata from content itself, such as video and audio. The aim of our research is to help users find content of interest from the archives, but here we have used the technology to demonstrate how content from iPlayer can be found in new ways. Rather than searching for a programme by title, actor or description, people can find programmes based on the mood of the programme they fancy watching. To use the system, follow this link.
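To illustrate what "finding programmes by mood" might look like under the hood, here's a toy sketch in Python. The mood axes, scores and programme titles are invented for illustration and aren't the model behind the actual prototype.

```python
import math

# Illustrative only: invented mood axes, scores and programme titles.
programmes = {
    "Gardening Hour": {"humorous": 0.2, "fast_paced": 0.1, "dark": 0.1},
    "Cop Drama":      {"humorous": 0.1, "fast_paced": 0.8, "dark": 0.9},
    "Panel Show":     {"humorous": 0.9, "fast_paced": 0.6, "dark": 0.2},
}

def closest_to_mood(target, catalogue, top_n=3):
    """Rank programmes by how close their mood scores sit to the request."""
    def distance(scores):
        return math.sqrt(sum((scores[axis] - target[axis]) ** 2 for axis in target))
    return sorted(catalogue, key=lambda title: distance(catalogue[title]))[:top_n]

# "I fancy something light and quick tonight":
print(closest_to_mood({"humorous": 0.8, "fast_paced": 0.7, "dark": 0.1}, programmes))
```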

    Read the rest of this entry

    Northern Lights


Chris Pike | 09:54 UK time, Thursday, 13 December 2012

Some people may feel that they have been hearing Jingle Bells on a constant loop since 2003, but at the ´óÏó´«Ã½ the festive season began on Tuesday night on The One Show with the switching on of the ´óÏó´«Ã½ Christmas lights. At Television Centre, Alex, Matt and their guests were having a merry old time at a traditional Christmas market and hanging out with Rudolph and his reindeer pals. Meanwhile, up here in Salford, ´óÏó´«Ã½ R&D got together with The One Show team and an army of volunteers to create an amazing 150ft-long Christmas tree light painting live on air.
The One Show Christmas tree light painting at MediaCityUK

    Video and more after the bump!

    Read the rest of this entry

    halfRF TV Camera Tech Goes Live: One Show Tonight


    John Boyer | 18:30 UK time, Tuesday, 11 December 2012

The project is buzzing with excitement this week. The ´óÏó´«Ã½ launches its Christmas schedule today and, as part of that, The One Show is putting on a Christmas spectacular at Television Centre (TVC), and we are going to provide a 'halfRF' MIMO HD Radiocamera to help present the event.

We have been itching to try out the 'halfRF' MIMO HD Radiocamera outdoors for a while, but bad weather and other commitments had so far been in the way. It is one thing to know that your system ought to work well outdoors, but it's much better to have tried it, and imminent programme use really concentrates the mind. So last Thursday several of us braved the cold and did some outdoor tests at TVC.

R&D has a rather elderly Transit van which was previously used by our Spectrum Planning Group to check transmitter coverage. The van is equipped with useful things like benches and mains power, making it an ideal base for outdoor radiocamera tests.

We initially set up just behind security at the front of the car park. The antennas were mounted on lighting stands about 8 feet off the ground and 20 feet apart. We walked the camera through the covered walkway next to Studio 1 and around the doughnut (the open circular area in the middle of TVC). We didn't expect this arrangement to work brilliantly, as there were a number of obstructions that we thought might cause problems. We were in fact very pleasantly surprised: there was only one small break-up with this arrangement. We then rearranged the receive antennas so that one was much higher up and had a clearer view of the area to be covered. We repeated the walk and this time the coverage was absolutely perfect, with no break-ups at all.

For the spectacular there will be a stage in the upper car park, and we thought we might be able to mount our antennas on some of the lighting rigs, so we moved the van to the upper car park and set our antennas up there. The performance was again perfect around the car parks and in the doughnut.

UPDATE: Ant here - I've just been over to see the team setting up as the last of the daylight fades. All going well: the radio camera tech is generating lots of interest among the SIS Live OB team. Signal strength is excellent in all the filming locations - this should be good!

    Pickin' up good vibrations


Chris Baume | 16:58 UK time, Monday, 10 December 2012

One of the universal appeals of music lies in its mysterious ability to manipulate and reflect our emotions. Even the simplest of tunes can evoke strong feelings of joy, fear, anger, sadness and anything in between. Music is a huge part of what the ´óÏó´«Ã½ does - in fact it broadcasts over 200,000 different tracks every week. With so much music to choose from, especially in the digital age, there is growing interest in navigating music collections in a more human way. Some of our colleagues are looking at ways of finding TV programmes by mood, but can we do something similar for music?

The alliteratively named 'Making Musical Moods Metadata' is a collaborative project between ´óÏó´«Ã½ R&D, Queen Mary University of London (QMUL) and I Like Music. Part of the project involves researching how information about the mood of music tracks can be added to large collections. I Like Music is a company that provides the ´óÏó´«Ã½ with an online music library called the 'Desktop Jukebox', which includes over a million songs. Labelling each of these by hand would take many years, so we are developing software that will do it automatically.
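To give a flavour of what such software might look like, here's a heavily simplified sketch of the usual shape of this kind of pipeline: summarise each track as a handful of audio features, learn from a small hand-labelled set, then predict mood scores for the rest of the library. The feature choices and libraries (librosa, scikit-learn) are illustrative assumptions, not the tools the project actually uses.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestRegressor

def mood_features(path):
    """Summarise a track as a small feature vector: tempo, brightness, timbre."""
    samples, rate = librosa.load(path, mono=True)
    tempo, _ = librosa.beat.beat_track(y=samples, sr=rate)
    brightness = librosa.feature.spectral_centroid(y=samples, sr=rate).mean()
    timbre = librosa.feature.mfcc(y=samples, sr=rate, n_mfcc=13).mean(axis=1)
    return np.hstack([tempo, brightness, timbre])

def train_and_label(labelled_tracks, unlabelled_tracks):
    """Fit on hand-labelled tracks, then predict mood scores for the rest.

    `labelled_tracks` is a list of (path, [valence, arousal]) pairs with
    scores in [0, 1]; `unlabelled_tracks` is a list of paths.
    """
    X = np.array([mood_features(path) for path, _ in labelled_tracks])
    y = np.array([scores for _, scores in labelled_tracks])
    model = RandomForestRegressor(n_estimators=200).fit(X, y)
    return {path: model.predict(mood_features(path).reshape(1, -1))[0]
            for path in unlabelled_tracks}
```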

    The Desktop Jukebox interface.

    Read the rest of this entry

    The Benefits of an Open Web: ´óÏó´«Ã½ Research & Development at Mozilla Festival


Brendan Crowther | 14:00 UK time, Monday, 10 December 2012

A few weekends back a team from ´óÏó´«Ã½ Research & Development and Future Media attended the Mozilla Festival at the Ravensbourne building in Greenwich, East London. Mozfest pulls together people passionate about the future of an open web and provides them with a platform to use their skills and expertise to help shape it.

On the ´óÏó´«Ã½ R&D stand we showed off some of the projects we've been working on recently. Rosie demonstrated some work she's been involved with which examines the possibilities of classifying music by mood. I was also able to let the public quite literally get to grips with some prototypes I helped to develop, which explore the possibilities around delivering broadcast haptics into the home - that is, to "feel" TV through various sensations such as vibration or pressure.

The rest of the weekend was taken up with sessions on subjects ranging from building open platforms and producing remixable content to understanding the legal obligations and ramifications of delivering in an open environment.

The main presentation area at Mozfest 2012 in Greenwich, London

Mozilla themselves launched their new online video editor, which aims to make video more native to the web by allowing you to superimpose related content such as text, images or hyperlinks in the video window. They were also showcasing a web creation tool that aims to help digital literacy by showing live effects in the code when editing a web page.

The ´óÏó´«Ã½ is committed to an open and free internet. We sponsor a programme that aims to put individuals with a high degree of technical web literacy into newsrooms around the world, to help content makers make sense of the vast reams of data available to them and communicate the results to audiences effectively.

    The video below explores the idea of the open web and features interviews with ´óÏó´«Ã½ Research & Development’s Ant Miller and Mark Surman of Mozilla. Be warned, there is some bad language towards the end of the film.


    Finding the easter egg in Breaking Out


Ian Forrester | 14:49 UK time, Tuesday, 4 December 2012

    It may be the wrong season for easter egg hunting?

A little while ago we mentioned that we would be shutting down Breaking Out, our first Perceptive Media experiment. Having collected enough feedback from you all, we can now reveal the easter egg and go into more technical depth about how it was built.

For this I would like to introduce the developers who wrote the code for the whole project. They have covered most of the detail, including how to enable the easter egg, in their own write-up.

    Here be dragons

To understand how everything was working, we needed a hidden control panel that gave us control over the vast majority of the variables throughout the audio play, including timing, depth staging, track volume, faders and filters.

Now you can have a play and get a better understanding of how it's all put together. Allow the whole play to load (wait for the lift scene to finish) and wait for the lift buttons, then click just under the last '2' of the 'copyright 2012' text at the bottom right.

    Easter egg for Breaking Out audioplay

    Once this is done correctly, you will see the above text. If you find it hard to find the little button, try highlighting the area with your mouse and you should see it clearly.

    After clicking the button, you can see the control panel at the very top right. Clicking on the text will reveal options which you can play with. I highly recommend changing one thing at a time and listening to the difference. Some of them can be applied in real time and others will have you reaching for the stop/reload button.

One of our favourites is the depth control between foreground and background sounds. You can easily imagine automatically adjusting the depth control if the listener is hard of hearing or in a noisy environment.
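As a toy example of the kind of rule that could drive such an adjustment, the sketch below maps an ambient noise estimate (and a hard-of-hearing preference) to how far the background sounds get turned down relative to dialogue. The thresholds and numbers are invented.

```python
def background_trim_db(ambient_noise_db, hard_of_hearing=False):
    """Suggest how far to turn background sounds down relative to dialogue.

    Toy rule of thumb: louder surroundings, or a hard-of-hearing flag,
    mean a stronger foreground emphasis. All thresholds are invented.
    """
    trim = 0.0
    if ambient_noise_db > 50.0:            # e.g. a busy kitchen or a train
        trim -= min((ambient_noise_db - 50.0) * 0.3, 9.0)
    if hard_of_hearing:
        trim -= 6.0
    return trim

print(background_trim_db(70.0))                        # noisy room -> -6.0 dB
print(background_trim_db(40.0, hard_of_hearing=True))  # quiet room -> -6.0 dB
```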

So what are you waiting for?
