´óÏó´«Ã½

Archives for November 2011

Prototyping weeknotes #87


Andrew Nicolaou | 12:25 UK time, Monday, 28 November 2011

It's my first time writing weeknotes and there's lots of interesting progress to report from around the team.


R&D North Lab Open Days

Hi

I'm Adrian Woolard, the Project Director, North Lab, ´óÏó´«Ã½ R&D.

Last week, we were pleased to open the doors of our recently completed Research & Development Lab in Dock House, MediaCityUK to a wide variety of visitors, to showcase our work and introduce ourselves to some of our new neighbours who have recently arrived in the North. We have produced a short film which captures some of the energy of the Open Days.





'TV-Controlled Daleks' - not coming to a living room near you any time soon...


Adrian Woolard | 13:22 UK time, Friday, 25 November 2011

We were somewhat surprised this morning to see reports suggesting that we're building toy Daleks that come to life whenever Doctor Who is on TV. We're sorry to break the news to Doctor Who fans out there that, sadly, this isn't going to happen any time soon.

Photo: Andrew Bonney holding the Dalek in front of a Universal Control-enabled set-top box.

Most of our work here at ´óÏó´«Ã½ R&D is behind the scenes, and benefits the licence-fee payer in ways they may not even be aware of. NICAM stereo and the technology that enabled Freeview HD are perfect examples of this. Occasionally though, one of our ideas really catches people's imaginations, and spreads far beyond the department or the ´óÏó´«Ã½.

That's what happened after we showed our "synchronised Dalek" demo at an open day for the new R&D Lab at MediaCityUK last week.

In short, the technology we've developed allows a network-connected device, whether that's a smartphone, a laptop or a modified toy Dalek, to talk via the home network to a set-top box that is running our software. By doing so it can obtain the information necessary to synchronise things to the TV, be they images, audio or motion. We call the API that our software implements "Universal Control". In this post, we'd like to share a bit more about the demo and how it came about; what we learned from it, and how we're using it to inspire programme makers and get them thinking about the future.
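To make that concrete, here's a minimal sketch (in Python) of the kind of client the API makes possible. The host, port, resource path and JSON fields below are illustrative assumptions rather than the published Universal Control specification; the point is simply that any device on the home network can poll the set-top box over HTTP and react when the programme reaches a particular moment.

# A minimal sketch of a Universal Control-style client. The resource path,
# port and JSON fields are illustrative assumptions, not the published API:
# the idea is that any device on the home network can poll the set-top box
# over HTTP and synchronise itself to what is currently playing.
import json
import time
import urllib.request

BOX_URL = "http://192.168.1.50:8080/uc/playhead"  # hypothetical endpoint


def fetch_playhead(url=BOX_URL):
    """Ask the set-top box what is playing and how far through it is."""
    with urllib.request.urlopen(url, timeout=2) as response:
        return json.load(response)  # e.g. {"programme_id": "...", "position": 123.4}


def sync_loop(trigger_times, poll_interval=1.0):
    """Fire an action (lights, audio, a toy Dalek) when the playhead passes a cue."""
    fired = set()
    while trigger_times - fired:
        playhead = fetch_playhead()
        for cue in sorted(trigger_times - fired):
            if playhead["position"] >= cue:
                print(f"Cue at {cue}s reached - trigger the synchronised device")
                fired.add(cue)
        time.sleep(poll_interval)


if __name__ == "__main__":
    sync_loop({30.0, 95.0, 240.0})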

We've posted here previously about the technology we've developed that could revolutionise remote control, with particular benefits for users with accessibility requirements. This was the primary reason for us developing the Dalek prototype: a short project to bring the technology to life in a playful way.

Over the last couple of years, we've developed a number of prototypes that use this technology, such as "single-switch" remote control interfaces for users with severe mobility impairments, and speech-based remote controls for the blind and visually impaired.

Conversations and collaborative work with disability rights groups, assistive technology manufacturers and television platform operators have convinced us that this approach to accessibility would deliver significant benefits to users with these requirements in a way that doesn't impose unaffordable costs on set-top box and television manufacturers.

At the same time, we have been mindful of the potential that a communication standard like Universal Control has for enabling "dual screen" experiences that go beyond what can be achieved with today's technology.

When University of Manchester graduate Andrew Bonney joined our team for six months of his one-year Industrial Trainee placement, we asked him to explore two research questions for us:

  • Can Universal Control clients be implemented on embedded devices?
  • What will emerging concepts in connected devices mean for Dual Screen experiences?

We set him the challenge of producing a demonstration that would answer these questions, and this video of "our TV-controlled Dalek" shows what he came up with:




Andrew's deft modification of an off-the-shelf Dalek toy achieved everything we wanted from the project. We gained valuable insights into the challenges of developing Universal Control clients for an embedded platform with just 32kB of RAM, while demonstrating their feasibility in a very striking way. It's also an entirely new take on the concept of "dual screen", demonstrating that the things you can synchronise to a TV programme go way beyond smartphone and tablet applications.

Developing toys is not part of ´óÏó´«Ã½ R&D's remit, but the Dalek helps us with a part of our mission that is far more important: helping ´óÏó´«Ã½ programme makers understand how changes in technology could affect their work. As computing power continues to become cheaper and as more and more devices gain wireless connections to each other and to the Internet, we expect to see an increasing number of everyday things gaining these capabilities.

Toys are a particularly interesting example of this. When toys can interact with and respond to television programmes, it becomes no exaggeration to say that programme makers can take advantage of actors that are right there in the living room with the viewer. While Daleks that take part in an episode of Doctor Who are a particularly eye-catching example of this, there are many other ways that broadcasters could take advantage of these advances in technology. We see some very real educational possibilities - particularly for our youngest viewers.

Of course, this is the world of R&D. Today's televisions and set-top boxes don't support the Universal Control API, or anything like it, so you won't see WiFi-enabled Daleks taking over your living room this Christmas. But we firmly believe that the experiments broadcasters are currently performing with Dual Screen experiences are just the tip of the iceberg, and that tomorrow's television programmes won't always be confined to a screen in the corner of the living room.

The skills Andrew demonstrated in the development of this prototype were equally apparent in last year's interviews for our Trainee Technologist scheme, and he has now joined us full-time. Applications are currently open for this year's scheme.

Prototyping Weeknotes #86


Libby Miller | 11:11 UK time, Monday, 21 November 2011

This is my first time compiling weeknotes, so it's been very interesting to get a closer understanding of the work going on in Prototyping as our two teams start to merge (my 'home' group is Audience Experience).


Prototyping Weeknotes #85 (2011-11-11)


Olivier Thereaux | 09:29 UK time, Monday, 14 November 2011

With a number of people in the team either travelling or in training, this week's edition of weeknotes is a little shorter than usual, but meaty and interesting nevertheless.

As I was slowly making my way back from the W3C Technical Plenary week in California, I missed a big meeting (and equally massive amounts of cake, I am told) between the R&D Audience Experience and Prototyping teams, which are due to slowly merge into one within the next few months. The two teams were already collaborating on occasion, and this was an opportunity to explore the many overlaps in our work. On the same day, Tristan also met with Nikki, a visiting academic from the US who wants to know about innovation and journalism at the ´óÏó´«Ã½.

Yves has been finishing off a paper submitted to the WWW'12 conference in Lyon, about automated DBpedia tagging of speech audio. He also had a catch-up with the World Service to show them where we've got to, and what kind of applications we can build on the data we are deriving. He also worked on trying to make the broadcast-related part of an external vocabulary a bit better than it currently is, by basing it on the Programmes Ontology created for www.bbc.co.uk/programmes four years ago.
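For anyone curious what "automated DBpedia tagging" looks like in practice, here is a rough sketch using the public DBpedia Spotlight annotation service. It illustrates the general technique rather than the pipeline used on the project, and assumes the current public Spotlight endpoint and its JSON response fields.

# A rough sketch of DBpedia tagging of a transcript using the public
# DBpedia Spotlight service. This illustrates the general technique, not the
# project's own pipeline.
import json
import urllib.parse
import urllib.request

SPOTLIGHT_URL = "https://api.dbpedia-spotlight.org/en/annotate"


def dbpedia_tags(transcript, confidence=0.5):
    """Return a set of DBpedia resource URIs mentioned in the transcript."""
    params = urllib.parse.urlencode({"text": transcript, "confidence": confidence})
    request = urllib.request.Request(
        f"{SPOTLIGHT_URL}?{params}", headers={"Accept": "application/json"}
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        annotations = json.load(response)
    return {resource["@URI"] for resource in annotations.get("Resources", [])}


if __name__ == "__main__":
    text = "The World Service broadcast a documentary about Nelson Mandela in Johannesburg."
    for uri in sorted(dbpedia_tags(text)):
        print(uri)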

Work on the new iteration of the Programme List is ongoing. Duncan and Dan have been working on notifications for it and have now nearly got end-to-end Twitter alerts that can be set on a per-broadcast basis. Oh, and they've been writing tests, tests and more tests. The team is not sure yet whether we'll release the new iteration in this state or just test it as a closed trial - we know it's got limitations. Release early, release often?

On the LIMO front, ChrisN and Sean have been digging into the internals of a number of streaming tools. Andrew N., still on a high after the Mozilla Festival where he participated in discussions about personalisation vs privacy and 3D robots, has been looking at ways to visualise the event timelines we're using in LIMO - in particular the timeline editor used in one of Mozilla's own projects, an impressive, responsive editing app built using web technologies. Speaking of popcorn, ChrisN also sent me a link to Popcorn.js. Popcorn and WebGL were used to build an impressive interactive documentary.

On Thursday, while Theo and Andrew were joining Vicky in Manchester for the "Roar to Explore" workshop, we were treated to some excellent show-and-tell: Yves and Roderick showed us the latest work on ABC-IP, including an API and interface used to queue, move and process the 50 terabytes of ´óÏó´«Ã½ World Service audio which the project aims to analyse and organise by topic. Andrew McP also showed us some of his team's work on captioning, including a cool tool alerting you when particular terms are mentioned in ´óÏó´«Ã½ programmes.

Speaking of captioning, I came back from the TPAC with some homework to do as part of our involvement in the W3C Text Tracks community group, and started a community effort to find out which features of the existing captioning formats are commonly used.

And finally, our planning effort is now reaching epic proportions. Led by ChrisG and Tristan, a number of us have started booking in a load of chats with people about our future direction of work. Tristan started first, and on Thursday had a good chat with Mark and Ian from the PSP team (they do user data, social and recommendations work) about their goals, problems and ideas.

Links of the week:

  • Duncan is excited by a site he found this week
  • Tristan sent a whole weekend's worth of reading, including a column which is especially interesting in the context of the Programme List
  • ... and my own reading list this week (plus last week) included the moderately non-ranty observations by Jan Chipchase, plus one entirely rant-free read too

Elbow in 3D Sound


Anthony Churnside | 23:11 UK time, Friday, 11 November 2011

Here's a short post about a 3D sound experiment that ´óÏó´«Ã½ R&D's audio team conducted the other week in collaboration with ´óÏó´«Ã½ Radio 2.

As part of the ´óÏó´«Ã½ Audio Research Partnership, which was launched earlier in the summer, we are looking at potential next-generation audio formats. You may have read about some of our work on ambisonics here and there. If you want some more detailed information about what we've done, you can read this paper of ours, which is available online. I think the headline from the paper was that first-order ambisonics is not as good as traditional channel-based audio, like 5.1 surround, for (what can be considered) point sources, but it does work very well for reverberation and diffuse sounds. With this in mind we spotted an opportunity to conduct an experiment using a hybrid of ambisonics and normal channel-based audio.
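For readers unfamiliar with ambisonics: a first-order (B-format) signal represents the whole sound field with just four channels, W, X, Y and Z, and a point source is encoded with simple directional gains. The sketch below is a simplified illustration using one common normalisation convention, with no decoder stage - it is not our production tooling.

# A simplified sketch of first-order ambisonic (B-format) encoding: a mono
# source at a given azimuth and elevation becomes four signals, W, X, Y and Z.
# Real systems also need a decoder for the target speaker layout.
import numpy as np


def encode_first_order(mono, azimuth_deg, elevation_deg):
    """Encode a mono signal into B-format (W, X, Y, Z) channels."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = mono / np.sqrt(2.0)             # omnidirectional component
    x = mono * np.cos(az) * np.cos(el)  # front-back figure-of-eight
    y = mono * np.sin(az) * np.cos(el)  # left-right figure-of-eight
    z = mono * np.sin(el)               # up-down figure-of-eight
    return np.stack([w, x, y, z])


if __name__ == "__main__":
    t = np.linspace(0, 1, 48000, endpoint=False)
    source = np.sin(2 * np.pi * 440 * t)  # one second of a 440 Hz tone
    bformat = encode_first_order(source, azimuth_deg=30, elevation_deg=10)
    print(bformat.shape)  # (4, 48000)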

Elbow, the Manchester band, were planning a homecoming gig in Manchester Cathedral. After an email to the right person it was agreed that the team could try some experimental recording. We thought this would provide an excellent opportunity to learn about capturing a full sound scene using a combination of ambisonics and close-microphone techniques. It would also allow us to improve our understanding of the challenges and compromises faced when integrating 3D audio capture and mixing into a real-world live production environment.

The Soundfield microphone position

While we suspected that the acoustic of the cathedral would sound great when captured using ambisonics, we didn't really want to capture the on-stage and PA sound with the ambisonics microphone - and it was a rather loud sound reinforcement system. We've recorded the ´óÏó´«Ã½ Philharmonic in ambisonics a few times before, but have never had to contend with a loud PA. Thankfully, there are tricks you can perform with ambisonics, such as attenuating the sound from the direction of the left and right speakers of the PA. Plus, Elbow were kind enough to put their drummer in a screened-off booth, so the direct sound of the drums (the loudest instrument on stage) would be attenuated too.
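To illustrate one of those tricks: from a B-format recording you can derive a virtual microphone pointing in any direction, and a virtual cardioid has a null directly behind it, so pointing that null at a PA stack attenuates sound arriving from that direction. The sketch below is a simplified illustration of the idea, not the exact processing we used on the recording.

# Deriving a virtual microphone from B-format (W, X, Y, Z). A cardioid
# pattern has a null at the opposite direction, so aiming that null at the
# PA attenuates sound arriving from the PA's direction.
import numpy as np


def virtual_mic(bformat, azimuth_deg, elevation_deg, pattern=0.5):
    """pattern: 1.0 = omni, 0.5 = cardioid, 0.0 = figure-of-eight."""
    w, x, y, z = bformat
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    directional = (x * np.cos(az) * np.cos(el)
                   + y * np.sin(az) * np.cos(el)
                   + z * np.sin(el))
    return pattern * w * np.sqrt(2.0) + (1.0 - pattern) * directional


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bformat = rng.standard_normal((4, 48000))  # stand-in for a real recording
    # If the PA's left stack sits at azimuth +45 degrees, point a virtual
    # cardioid the opposite way (-135 degrees) to put its null on the PA.
    cleaned = virtual_mic(bformat, azimuth_deg=-135, elevation_deg=0, pattern=0.5)
    print(cleaned.shape)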

Chris Pike setting up the recording equipment in the back of the outside broadcast truck

Due to the nature of the cathedral's structure, we couldn't get the ideal ambisonic microphone position by slinging it from above. A couple of years ago we recorded the Last Night of the Proms in ambisonics and were able to position the microphone right in the centre of the hall, above and behind the conductor. However, we had to compromise for this event by placing the microphone slightly to stage left of the front-of-house mixing desk. This worked quite well because it was far enough back from the PA speakers that, from most directions, it was just capturing reflections from the walls and ceiling of the cathedral. This also helped us to attenuate the PA sound mentioned earlier. The microphone was also raised above the audience by a few metres; while we wanted to hear the audience singing and clapping, we didn't want to hear specific members of the audience chatting.

Tony Churnside, pleased to have everything up and running before the band begin to play.

We could have just recorded the microphone signals without worrying too much about how they sounded, but we thought that it would be nice to at least try to monitor the 3D recording. The space inside one of Radio 2's outside broadcast trucks is very limited, with nowhere near enough room for the 16-speaker set-up we have in our listening room. To get around this we decided to use binaural reproduction, via a commercially available system called the Realiser. This box allows you to simulate surround sound over headphones. It's fairly complicated to set up, but it does a pretty good job of making it sound like you're in your multi-speaker listening room when you're actually listening over headphones. Normally the Realiser is used to monitor 5.1 surround sound, but we calibrated the system with a cube of eight speakers to allow us to monitor sound in 3D. It even has a head tracker to keep sounds horizontally stationary relative to small lateral movements of your head.
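Conceptually, this kind of binaural reproduction convolves each virtual loudspeaker feed with a pair of left-ear and right-ear impulse responses measured for that speaker position, then sums the results per ear. The Realiser does this with personalised measurements and head tracking; the sketch below uses placeholder impulse responses just to show the shape of the processing.

# A conceptual sketch of binaural rendering of a virtual speaker layout.
# Each speaker feed is convolved with that speaker's (left, right) impulse
# response pair and summed per ear. The impulse responses here are random
# placeholders, not real measurements.
import numpy as np
from scipy.signal import fftconvolve


def binaural_render(speaker_feeds, hrirs):
    """speaker_feeds: (n_speakers, n_samples); hrirs: (n_speakers, 2, ir_len)."""
    n_samples = speaker_feeds.shape[1] + hrirs.shape[2] - 1
    out = np.zeros((2, n_samples))
    for feed, (hrir_left, hrir_right) in zip(speaker_feeds, hrirs):
        out[0] += fftconvolve(feed, hrir_left)
        out[1] += fftconvolve(feed, hrir_right)
    return out


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    feeds = rng.standard_normal((8, 48000))   # a cube of 8 virtual speakers
    hrirs = rng.standard_normal((8, 2, 256))  # placeholder impulse responses
    stereo = binaural_render(feeds, hrirs)
    print(stereo.shape)  # (2, 48255)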

´óÏó´«Ã½ R&D's prototype 3D sound mixer

Along with the ambisonics microphone signals, we recorded all the individual sources (about 50 of them) to allow us to remix the recording in our listening room. We have developed our own 3D spatial audio panner that allows us to position each of the sources anywhere in the 3D soundscape, and over the next month or so we will experiment with a number of different spatial mixes of the recording to assess which is generally preferred.
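Putting that in code terms: a spatial mix is simply the sum of every source encoded at its own position, using the same first-order encoding as in the earlier sketch. The positions and number of sources below are arbitrary stand-ins for the roughly 50 sources we actually recorded.

# A toy spatial mixer: encode each source at its own (azimuth, elevation)
# and sum the resulting B-format signals. Positions here are invented.
import numpy as np


def encode_first_order(mono, azimuth_deg, elevation_deg):
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    return np.stack([
        mono / np.sqrt(2.0),
        mono * np.cos(az) * np.cos(el),
        mono * np.sin(az) * np.cos(el),
        mono * np.sin(el),
    ])


def spatial_mix(sources, positions):
    """sources: (n, samples); positions: list of (azimuth, elevation) pairs."""
    return sum(
        encode_first_order(src, az, el) for src, (az, el) in zip(sources, positions)
    )


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    sources = rng.standard_normal((3, 48000))
    mix = spatial_mix(sources, [(-30, 0), (30, 0), (0, 45)])
    print(mix.shape)  # (4, 48000)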

We learnt (and are still learning) a lot from this experiment and will be publishing results and analysis over the coming months. In the meantime, here's a link to the Radio 2 show, where you can watch some clips of Elbow's performance.

We would like to thank Rupert Brun, Sarah Gaston, Paul Young, Mike Walter, Mark Ward and all at Radio 2 for making this experiment possible.


Setting a DJ challenge at the Mozilla Festival


Ian Forrester | 12:26 UK time, Friday, 11 November 2011

Last weekend, staff from the ´óÏó´«Ã½ (including R&D) took part in the Mozilla Festival at Ravensbourne College of Design & Communication in London. The weekend event was aimed at exploring web and digital media freedom through a whole series of themed challenges, and this year the ´óÏó´«Ã½ got involved by setting challenges around the future of DJing.

´óÏó´«Ã½ R&D's challenge was framed like this:

DJing was once a cutting-edge expressive art form, but in recent years it has been overtaken by technological advances in making and generating music. With over 50 dance DJs on the ´óÏó´«Ã½'s airwaves, who better to take up the challenge and reinvent the DJ of the future?


It wasn't only the ´óÏó´«Ã½ involved in the challenge: we teamed up with FutureEverything, with a plan to take the best ideas and prototypes from the challenge into the FutureEverything Festival 2012 programme in Manchester. We were also joined by ex-Radio 1 & 1Xtra senior producer Hugh Garry and Finn Stamp, who helped give a real perspective on the current state of DJing.
The challenge started off with an open Q&A session, with themes ranging from the technicalities of DJing to how it compares with what a VJ does.


After the first day, the following strong themes were taken forward:

  • Can we create a music format which supports tracks/layers in songs, then build DJ software which takes advantage of this?
  • Can we build a club environment which uses sensors to feed back to the DJ and VJ in real time, through meaningful visualisations?

There were others, but these two are the ones we will be taking forward.

One team made progress on the first idea by reusing the Ogg container format, as it supports multichannel sound and can carry additional elements. Although players like VLC can play such files back without problems, the challenging part is being able to DJ with them - at the moment that tends to be possible only in custom systems.
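As a rough illustration of where that idea points, the sketch below treats each channel of a multichannel audio file as a separate stem (drums, bass, vocals and so on) and mixes them with independent gains - the kind of thing stem-aware DJ software would do. The file names and channel layout are invented for the example.

# A sketch of stem-aware mixing: each channel of a multichannel file is a
# stem, and the "DJ" controls a gain per stem. File names are placeholders.
import numpy as np
import soundfile as sf


def mix_stems(path, gains):
    """Load a multichannel file and return a mono mix with per-stem gains."""
    stems, samplerate = sf.read(path)  # shape: (frames, channels)
    if stems.shape[1] != len(gains):
        raise ValueError("need one gain per channel/stem")
    mix = stems @ np.asarray(gains)    # weighted sum across channels
    return mix, samplerate


if __name__ == "__main__":
    # e.g. fade the vocals down while keeping drums and bass up
    mix, sr = mix_stems("track_stems.ogg", gains=[1.0, 0.9, 0.2])
    sf.write("live_mix.wav", mix, sr)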

A second team, focusing on the challenge of sensors in club environments, began by trying to describe elements of the clubbing experience and how they could be captured using wearable devices, generating sensed data that could be aggregated and relayed to the DJ.
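A very rough sketch of that second idea: readings from wearable sensors in the crowd are aggregated into a single "energy" figure that could be relayed to the DJ and VJ in real time. The sensor format and scale here are invented purely for illustration.

# Aggregating invented wearable-sensor readings into one crowd-energy figure.
from statistics import mean


def crowd_energy(readings):
    """readings: list of dicts like {"id": "wristband-12", "movement": 0.0-1.0}."""
    if not readings:
        return 0.0
    return mean(r["movement"] for r in readings)


if __name__ == "__main__":
    sample = [
        {"id": "wristband-01", "movement": 0.82},
        {"id": "wristband-02", "movement": 0.64},
        {"id": "wristband-03", "movement": 0.91},
    ]
    print(f"Crowd energy: {crowd_energy(sample):.2f}")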

The Mozilla Festival was all about the start of ideas; now comes the real challenge of collaboration and innovation. Keep your eyes peeled at FutureEverything for further development of the DJ challenges...

ABC-IP and work on audio archiving research


Dominic Tinley | 09:09 UK time, Tuesday, 8 November 2011

We're a few months into a new collaboration where we're looking at how to unlock archive content by making more effective use of metadata. The Automatic Broadcast Content Interlinking Project (ABC-IP for short) is researching and developing advanced text-processing techniques to link together different sources of metadata around large video and audio collections. The project is part-funded by the Technology Strategy Board (TSB) under its 'Metadata: increasing the value of digital content' competition, which was awarded earlier this year. The idea is that by cross-matching the various sources of data we have access to - many of which are incomplete - we will be able to build a range of new services that help audiences find their way around content from the ´óÏó´«Ã½ and beyond.

Our starting point is the English component of the massive World Service audio archive. The World Service has been broadcasting since 1932, so deriving tags from this content gives us a hugely rich dataset of places, people, subjects and organisations within a wide range of genres, all mapped against the times they've been broadcast.

The distribution of programme genres in the World Service radio archive

One of the early innovations on the project has been to improve the way topic tags can be derived from automated speech-to-text transcriptions, which gives us a whole new set of metadata to work with for comparatively little effort. We've optimised various algorithms to cope with the high word error rates that speech recognition produces, and the results so far have been quite impressive.
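As a toy illustration of why word error rate matters for tagging, the sketch below fuzzy-matches words from a noisy transcript against a known vocabulary of tags instead of requiring exact matches. The project's actual algorithms are considerably more sophisticated; this just shows the shape of the problem.

# Fuzzy-matching tags against a noisy transcript. The vocabulary and the
# transcript are invented examples.
from difflib import get_close_matches

TAG_VOCABULARY = ["Nelson Mandela", "Johannesburg", "apartheid", "elections"]


def fuzzy_tags(transcript, vocabulary=TAG_VOCABULARY, cutoff=0.8):
    """Return vocabulary tags whose words roughly appear in the transcript."""
    words = transcript.lower().split()
    found = set()
    for tag in vocabulary:
        tag_words = tag.lower().split()
        if all(get_close_matches(tw, words, n=1, cutoff=cutoff) for tw in tag_words):
            found.add(tag)
    return found


if __name__ == "__main__":
    # "johannesberg" is a typical speech-to-text mistake
    noisy = "the programme was recorded in johannesberg during the elections"
    print(fuzzy_tags(noisy))  # expect Johannesburg and elections to be found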

Other sources of data include everything from ´óÏó´«Ã½ Programmes, including the topics manually added by ´óÏó´«Ã½ producers, and everything from ´óÏó´«Ã½ Redux, an internal playable archive of almost everything broadcast since mid-2007. In later stages of the project we'll also be adding data about what people watch and listen to. Blending all this together provides many different views of ´óÏó´«Ã½ programmes and related content including, for example, topics over time or mappings of where people and topics intersect. The end result is a far richer set of metadata for each unique programme than would be possible with either automatic or manual metadata generation alone.

Based on the work so far, our project partners have built the first user-facing prototype for the project, called Tellytopic, which lets users navigate between programmes using each of the new tags available.

The plan is that the work we're doing will eventually complement projects in other parts of the ´óÏó´«Ã½, such as Audiopedia, which was announced by Mark Thompson last week. We'll talk more about other ways we're going to use the data on this blog over the coming months.

Prototyping Weeknotes #84


Pete Warren | 11:47 UK time, Monday, 7 November 2011

It's been a week full of meaty goodness, as on Monday we won our first Technical Innovation Award from the Radio Academy for our work, along with Frontier Silicon and Global Radio, on RadioTAG. It was a team effort on our side by Sean O'Halpin, Chris Lowis, Kat Sommers, Theo Jones and Joanne Moore. George was justifiably proud.

Tristan continued connecting with colleagues across FM to discern their priorities, problems and points of interest in order to help focus thinking on future project possibilities. Alongside this, Tristan and Theo talked to the iPlayer team about what we've learnt from The Programme List so far. Duncan, Dan and Pete continue their octopus-wrestling to integrate the much-requested alerts/reminders features in the current round of Programme List work.


RadioTAG wins Innovation Award


George Wright | 17:42 UK time, Tuesday, 1 November 2011

(Some of) the RadioTAG application work group

The annual meeting of the radio industry, happening right now in Salford, has an associated conference called TechCon, where all sorts of interesting technical talks and demonstrations take place.

TechCon has an award for innovation around radio and audio - the shortlisted applicants show their research to the conference attendees before a panel of judges selects the winner.

This year the RadioTAG application group (consisting of members from ´óÏó´«Ã½ R&D, Global Radio and Frontier Silicon) submitted a joint entry for their work.

RadioTAG enables easy, cross-platform tagging and bookmarking of radio, using specifications developed as part of RadioDNS.
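To give a flavour of the idea, here's an illustrative sketch of what happens when a listener presses "tag" on a radio: the device sends a station identifier and the current time to a tag service, which records the moment and can return details of what was on air. The endpoint, host and fields below are placeholders, not quoted from the RadioTAG specification.

# An illustrative tag client. The service URL, parameters and station
# identifier are invented placeholders - see the RadioDNS/RadioTAG
# specifications for the real protocol.
import time
import urllib.parse
import urllib.request

TAG_SERVICE = "http://radiotag.example.org/tag"  # hypothetical service URL


def tag_now(station_id):
    """Record 'I liked what was on this station just now'."""
    data = urllib.parse.urlencode(
        {"station": station_id, "time": int(time.time())}
    ).encode()
    with urllib.request.urlopen(TAG_SERVICE, data=data, timeout=5) as response:
        return response.read().decode()


if __name__ == "__main__":
    print(tag_now("example-station-id"))  # placeholder station identifier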

In the ´óÏó´«Ã½, we've been blogging about some of the work we've done using RadioDNS - notably RadioVIS and the work on RadioTAG. One of the best things about RadioDNS is that it's a collaborative project - working with receiver vendors, UK commercial radio companies and worldwide broadcasters to make radio better for all. So a joint submission for work done together seemed natural.

We're pleased that the judges agreed with this approach, and that the RadioTAG work won the award for technical innovation. See above for a snap of Sean O'Halpin from my team in ´óÏó´«Ã½ R&D and Andy Buckingham from the Global Radio Creative Technologies team picking up the award. Not pictured: Robin Cooksey from Frontier Silicon and many of the other people who worked on this project. Congrats and thanks to all involved.

Photo credit: Nick Piggott (used with permission - CC licence, some rights reserved)
