
Archives for March 2010

Connection Factory Launches


Rowena Goldman | 10:00 UK time, Wednesday, 31 March 2010

The funded feasibility study into the project "Building Collaboration and Engagement for Media Professionals and Academic Researchers" has been underway for a little over two months now. ´óÏó´«Ã½ R&D is a main stakeholder, and overall the project will run for eight months, ending on 25 September 2010. The star of the whole thing is the Connection Factory network, and its inaugural event took place on Thursday 25th March in the august surroundings of the University of Westminster's Board Room at the Regent Street campus.

[Image: connectionfactory500.jpg]

There are 229 members of Connection Factory now and the number's growing quickly. Trends indicate that the network has become more international than we expected, with members from the US, Asia and Eastern Europe. It's attracting the usual suspects from the ´óÏó´«Ã½, with a strong presence from academia, social startups, media entrepreneurs and people running websites with a clear public service and social justice focus.

We're planning the next big event now, and we're also hoping to run an event in Manchester/Salford/MediaCityUK, probably in June, which will be called 'Doing Good with Social Media'.

The network's main attraction is that it's so diverse, both geographically and in terms of scope and members' professions, that there are amazing opportunities for collaboration on the site. Check it out, and spread the word so we can spread the net.


The Mythology Engine - representing stories on the web


Tristan Ferne | 14:49 UK time, Tuesday, 30 March 2010

The R&D Prototyping team has recently built an internal prototype for ´óÏó´«Ã½ Vision called the Mythology Engine. It's a proof-of-concept for a website that represents ´óÏó´«Ã½ drama on the web, letting you explore our dramas, catch up on story-lines, discover new characters and share what you find.

Most TV drama on the web is covered either by deep and detailed fan-produced sites or by visually rich but shallow sites from the broadcasters. We believe there is a middle way, and it seems like there's a space for something here. Something that expresses the richness and depth of the stories that the ´óÏó´«Ã½ creates. Somewhere that will be the default place to find out about our stories and somewhere that people will link to and share with their friends. So we built a prototype based around the stories of Doctor Who. Theo Jones, Creative Director for Prototyping, talks us through the prototype in this video:


[Embedded video]

That's a taste of what this prototype can do. Like I said, it's a proof-of-concept that we're using within the ´óÏó´«Ã½ and we're not planning on launching it. That said, do tell us in the comments if you like the idea. We are using Doctor Who as an example because it is a high-profile brand with a large archive and is particularly narratively complex in places - time travel is hard! The rest of this post will look a bit deeper into the project and talk about some of the thinking behind the prototype and the process used to build it.

What should it do?

Our objectives for this project were to build something that would demonstrate how you could express stories in a form tailored for the web, to show how this would allow people to explore ´óÏó´«Ã½ dramas and unlock the archive, and to create a reusable framework that could apply to all dramas and stories. The prototype should let you:

  • Catch up on stories you've missed
  • Explore stories and characters and help you understand plots and relationships
  • Find the stories you are looking for and share your favourite moments or characters

Luckily there is some previous work to look to in this area. Several years ago the ´óÏó´«Ã½ looked at representing stories on the web, my Radio Labs team built a prototype for a similar concept around The Archers a couple of years ago, and Paul Rissen, one of our information architects, has done a lot of work in this area, as have various academic projects.

Because there are always issues around the rights of distribution of programmes, we designed it to work both with and without short video clips, as these seemed relatively realistic to have. There is no long-form video in the prototype for this reason, and also because it's not designed as a replacement for iPlayer. It should be complementary to existing ´óÏó´«Ã½ sites.

One way of thinking about this that I've found helpful is to imagine the story existing in the writer's head before the scriptwriting and production creates the broadcast programme. The viewer then watches this, understands it and reconstructs the story. The Mythology Engine is designed to assist in this process; to let the audience explore complex plots or catch up on episodes they missed or stories they remember.

The Mythology circle

Modelling stories

By designing the Mythology Engine to take advantage of the architecture of the web, with unique pages per concept and interconnecting links everywhere, we increase the findability and shareability of our content. To do this we used a domain-driven modelling approach, and below is a simplified version of our data model.

Mythology Data Example

A story can actually be several things: a single episode (like most current Doctor Who), a multi-episode story (like classic Doctor Who) or a long-term, ongoing story arc. Stories are then collections of events, where an event is a specifically chosen, significant thing that happens in a story. This could be anything, but the important thing is that it is editorially chosen to tell the story. And then, pretty obviously, events occur in places and feature characters, who have relationships between each other and can belong to groups. And there are things, a catch-all term for everything else that might affect the plot - the murder weapon, a sonic screwdriver, things like that.
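
As a minimal sketch of that model in Python (the class and field names here are our own invention, not the prototype's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class Character:
    name: str
    groups: list[str] = field(default_factory=list)
    relationships: dict[str, str] = field(default_factory=dict)  # other character -> kind

@dataclass
class Event:
    """A significant, editorially chosen thing that happens in a story."""
    description: str
    place: str
    characters: list[Character] = field(default_factory=list)
    things: list[str] = field(default_factory=list)  # e.g. the murder weapon

@dataclass
class Story:
    """A single episode, a multi-episode story or a long-running arc."""
    title: str
    events: list[Event] = field(default_factory=list)  # ordered, editorially chosen
```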

Stories map onto programmes

This picture shows how the story concept in our model maps back onto a programme as it appears on TV or radio. The story, consisting of events, is represented as scenes in the programme. Often an event will correspond exactly with a single scene in a programme, or a scene will portray more than one event. And sometimes an event can be portrayed in several scenes, maybe to build tension or to show it from different characters' perspectives. Events might not even occur in the "correct" chronological order within the programme; it's all about telling the story and building suspense, and that's what the prototype needs to support.

Building it

The site was built in Rails, principally by Duncan Robertson, assisted by Chris Bowley for the Flash visualisations. It uses an existing admin tool, with some small customisations to the interface, to enter data. Our approach to building prototypes is agile and iterative, so we modelled the data, got running code as soon as possible and then did some ad-hoc user testing with colleagues. The main feedback from this was that we should make stories and time more obvious, bring more clarity to your current context in the page and the site, try to increase the interlinking, and focus on the quality of content.

Craft your data

We think this last one is particularly important. Having the model is not enough, you also need to bring to life the things and the connections between the things in a compelling way. So for this project we hired a freelance Doctor Who writer and he created all the data and relationships and wrote all the descriptions that are in the prototype; five whole stories (some classic and some contemporary), a couple of story arcs and about forty characters and thirty places.

Representing time

In the story Blink there is lots of time-travel. Whenever a character is touched by one of the statues they are thrown back in time. We model all of this as ordered events with timestamps, so you can imagine various timelines that we could present: how things happened in linear time (i.e. earliest first), how things happened from a character's perspective, or how the story was presented on screen. In the end we decided to show the timeline as it was presented on screen, which makes it relatively straightforward and is what the storytellers intended.
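
Once each event carries both its broadcast position and an in-story timestamp, each of those timelines is just a different sort order. A toy illustration, with invented event names and years:

```python
# Hypothetical events from a Blink-style story; names and years invented.
events = [
    {"title": "The message behind the wallpaper", "broadcast_order": 1, "story_year": 2007},
    {"title": "Kathy's letter from the past",     "broadcast_order": 2, "story_year": 1920},
    {"title": "Billy's deathbed warning",         "broadcast_order": 3, "story_year": 1967},
]

on_screen   = sorted(events, key=lambda e: e["broadcast_order"])  # as broadcast
linear_time = sorted(events, key=lambda e: e["story_year"])       # earliest first
```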

Blink timeline

You can see how it jumps between the present day, 1920 and 1967.

It's not just Doctor Who

Having completed the Who prototype we wanted to show that the framework was re-usable for another drama. So we re-deployed the code to a new server, wiped the database and set up an EastEnders Mythology Engine in a couple of days, reskinning it and creating a small number of stories and characters. There are some things we would have done differently if we'd started with EastEnders: we would have concentrated less on timelines and more on relationships and characters. But ultimately we think it works across the brands.

The EastEnders Mythology Engine

What next?

We think this is a really exciting concept, the prototype is done, and hopefully we've contributed some original thinking along the way. Having built the Mythology Engine there are several interesting research projects that we've been thinking about:

  • Investigating whether we could parse scripts, subtitles or video to automatically create the outline of the data for a story (a rough sketch follows this list).
  • Using this framework to tell the stories behind the news and sport and to further explore the archive.
  • Looking at how user-generated content would be fitted into this framework. Is it something that sits on top? Or is it more fundamental than that and could we harness the fans to create the mythology for us?
  • And should this model and framework make us think differently about how we write and produce stories? Could we start to create narratives that are tailored for the web?
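
As a flavour of that first idea, timed subtitle formats like SRT already carry cue boundaries that could seed a story outline. A rough sketch of the easy part (the editorial grouping of cues into events would be the hard part):

```python
import re
from dataclasses import dataclass

@dataclass
class Cue:
    start: str  # e.g. "00:01:04,200"
    end: str
    text: str

def parse_srt(srt: str) -> list[Cue]:
    """Parse an SRT subtitle file into timed cues - raw material
    from which a story's outline of events could be drafted."""
    cues = []
    for block in re.split(r"\n\s*\n", srt.strip()):
        lines = block.splitlines()
        if len(lines) < 3:
            continue
        match = re.match(r"(\S+) --> (\S+)", lines[1])
        if match:
            cues.append(Cue(match.group(1), match.group(2), " ".join(lines[2:])))
    return cues
```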

Ambisonics and Periphony [part 2]


Anthony Churnside | 13:00 UK time, Tuesday, 30 March 2010

In my last post I talked a bit about why ´óÏó´«Ã½ R&D is interested in Ambisonics. This week I am going to talk a bit about what I've done so far, and what we might do in the future.

As most academics will tell you, the first step when undertaking a new research project is a literature review. This is to find out what other people have done already (so you can avoid re-doing work) and to help you get an idea of what it is you need to find out. There is a fair amount of literature about Ambisonics, and there has been a small but dedicated research community surrounding it since it was developed some 40 years ago by Michael Gerzon. I spent the first month or so of the project reading.

I've talked a bit about working in the North Lab, and Rowan's talked about setting up a new research facility. When I arrived in Manchester to start the Ambisonics and Periphony project (before R&D North moved into the new interim lab facility) one thing that was missing was a listening room. Kingswood Warren had an excellent ITU-standard listening room, but there was no such facility in our north base. New Broadcasting House on Oxford Road was built in the 1970s and I think it would be fair to say parts of it are past their best. Taking a short cut back from the canteen I discovered a decommissioned radio studio in a dark corridor, which turned out to have perhaps not ideal, but good enough, acoustics to set up a listening room. I set up the room with 14 speakers for the Ambisonic array: eight in a cube (a square above and a square below) and six in a hexagon at listener head height. I also added two extra speakers in the horizontal plane so I could compare mono, stereo and 5.1 with Ambisonics.

Perhaps the most common way of making Ambisonic recordings is by using a Soundfield microphone. This is a microphone system that is capable of outputting a B-format signal, and there are a lot of high-quality Soundfield recordings available online if you want to explore them.

One of the things we wanted to look at was how Ambisonics might be integrated into typical ´óÏó´«Ã½ production workflows. The perfect opportunity for this came up with the recording of The Last Night of the Proms last September. The week before the beginning of the Proms season we slung a Soundfield microphone up above the conductor's position in the Royal Albert Hall. On the last night we came back with an external soundcard and a laptop to record the B-format signal from the Soundfield mic. The Proms is recorded for the ´óÏó´«Ã½ by a specialist outside broadcast company called SIS. I worked with SIS to get hold of all the close microphone audio tracks of the Proms (this filled a 300 gigabyte hard drive!). I converted these signals into B-format by mathematically placing them in a soundfield, and mixed them with the original signal that we recorded, to recreate the sound of the event.
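
For the curious, that first-order placement is just a set of gain laws, so each close mic becomes four B-format signals. A sketch in Python using the traditional B-format convention (the instrument and angles are invented examples, not details from the Proms mix):

```python
import numpy as np

def encode_bformat(mono, azimuth, elevation):
    """Pan a mono signal into first-order B-format (W, X, Y, Z).
    Angles in radians; W carries the traditional -3 dB (1/sqrt(2))
    weighting relative to the directional channels."""
    w = mono / np.sqrt(2.0)
    x = mono * np.cos(azimuth) * np.cos(elevation)
    y = mono * np.sin(azimuth) * np.cos(elevation)
    z = mono * np.sin(elevation)
    return np.stack([w, x, y, z])

# e.g. place a close mic 30 degrees to the left, slightly raised, then
# mix with the Soundfield recording by simple addition of B-format signals:
# mix = encode_bformat(violins, np.radians(30), np.radians(10)) + soundfield
```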

In addition to the recording of live events, the ´óÏó´«Ã½ produces a lot of radio drama, where the final programme is artificially created by mixing recordings of actors performing with recorded and archive sound effects. To assess how Ambisonics might work in this type of production I made contact with a producer of Radio 4 drama who was about to record The Wonderful Wizard of Oz. This drama was recorded in a radio drama studio (a fairly dead space) where actors would normally perform to a stereo microphone. We set up the Soundfield microphone in the studio and the actors performed around it. The sound effects were given to me afterwards as mono and stereo audio files. I produced this in a similar way to the Proms mix, but using a combination of B-format ambience and mono effects mathematically placed in the sound-field. Another thing I experimented with here was convolution of B-format signals, using B-format impulse responses of reverberant spaces to add ambience to the dry B-format recordings of the actors.
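
The post doesn't spell out the routing, but one common recipe is to drive each channel of a B-format impulse response from the dry recording's omni (W) channel. A sketch under that assumption, with `wet_gain` an invented parameter:

```python
import numpy as np
from scipy.signal import fftconvolve

def add_bformat_ambience(dry, ir, wet_gain=0.5):
    """dry: (4, n) B-format recording; ir: (4, m) B-format impulse
    response of a reverberant space. Returns the dry signal plus a
    spatial reverb tail synthesised by convolution."""
    wet = np.stack([fftconvolve(dry[0], ir[c]) for c in range(4)])
    out = wet * wet_gain
    out[:, : dry.shape[1]] += dry  # mix the dry signal back in
    return out
```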

We also conducted a number of listening tests to subjectively assess some of the work we did. Chris Baume, who had been working at Kingswood Warren on the project, did some listening tests investigating how important periphony is to the listening experience, while I focused on comparing the audience's enjoyment of Ambisonics with their enjoyment of stereo and 5.1.

That pretty much brings us up to the present. I'm currently analysing the results of the listening tests and writing up the whole project into an AES paper that Chris and I will be presenting. While this project has shown what first-order Ambisonics might do for the ´óÏó´«Ã½, it has also shown that there is more research to be done. To paraphrase Donald Rumsfeld, this project has turned quite a few unknown unknowns into known unknowns, and I look forward to turning them into known knowns in the future.

Thank you very much for all your comments on the previous Ambisonics research post. We've found them really useful in producing today's post so hopefully some of your questions have been answered. Please do leave comments below to continue the discussion.

Revisiting Audience Behaviour and Media


Adrian Woolard | 10:00 UK time, Monday, 29 March 2010

´óÏó´«Ã½ R&D are looking for researchers and professionals interested in updating a critical internal study by a great colleague of ours, Dr Guy Winter (now ex-´óÏó´«Ã½), who in 2004 reviewed a wealth of evidence to propose a model of people's behaviour with media and media technology.

Its purpose was to understand what people do with media and why.

The perspective was essentially psychological, and focussed on the individual embedded within the context of wider social life. Underlying the model was the assertion that under the turbulence we see in everyday surface behaviour lies a consistent and stable set of human goals and drives.

By identifying these deep behaviours, we could understand the range of observed media behaviours and predict how we might exploit them via future products and services.

We will republish the original study shortly in the "Publications" area of our web site. In summary, however, Dr Winter proposed five core cognitive goals and seven emotional drives:

Cognitive Goals

  • Relax. The need to switch off, mark a change from one activity to another, or simply escape from it all. Low levels of motivation and attention, and a common behaviour.
  • Stimulate. The desire to be interested and engaged, yet without much effort. Low levels of motivation and attention but a possibility to increase this. A common behaviour.
  • Challenge. An interest in media that questions personally held views and opinions, or offers an intellectual or physical challenge. High engagement and attention; highly motivated but quite rare.
  • Inform. The desire to be informed in order to make informed decisions. Refers to both specific and general information, but both are highly motivated and goal driven.
  • Discover. A basic hard-wired urge to discover and learn. Highly motivated and engaged activity, though less common.

Emotional Drives

  • Stability. The urge to add structure to complex everyday life and cope with normal life stresses: providing anchors and watersheds/transitions, and reinstating personal control over events. Refers mostly to small-scale day-to-day behaviour, but also to the bigger sense of global 'security'.
  • Isolation & Company. The drive to use media as a source of company when alone, but also as a mechanism to isolate and cocoon. The result of individuals' desire to control their interactions with the world and the people in it.
  • Aestheticism & Pleasure. The seeking of simple pleasure, happiness and a sense of well-being.
  • Self-realisation & Identity. The development of personality as a consequence of experience, refining understanding of oneself and one's thoughts and attitudes.
  • Contextualisation. Testing and distinguishing personality and identity by exposure to the wider world. Understanding key commonality and difference, and adding context to the individual.
  • Socialisation. Recognising the human as a social animal with an urge to be in company, share moments and feelings and be part of a group. Understanding the strong tendency to identify with and adapt personality to suit the group's key traits, as may be evident through media.
  • Self-actualisation. The desire to grow emotionally, cognitively and socially, and to develop self-esteem and a sense of worth. Commonly seen as contribution and creation activity.

It was a different way of understanding our audience, complementing the ´óÏó´«Ã½'s drive to work for its audience.

The ´óÏó´«Ã½ used this model for a couple of years as a method of analysing its Internet services and as a tool for understanding how behaviour is changing, why it is doing so, and what we can do to exploit it. But with many changes since, and with many of the people who worked with it having left, it is in need of an update.

We want to understand what has changed: are new behaviours relevant in and around mass media? Have YouTube, iPlayer, Facebook, Twitter and the Wii changed audiences dramatically, or have audiences just evolved their core behaviours?

For those interested, this challenge is also being discussed on the knowledge exchange network set up to support research between media professionals and academics.

We would welcome your thoughts and ideas on how to undertake this new study, so please drop us a line in the comments below, or get in touch via the 'Contact Us' area of the website.

cheers
Maxine Glancy, Adrian Woolard

Weeknotes #7 (26/03/10)


Chris Godbert | 11:50 UK time, Friday, 26 March 2010

Monday begins with some of the engineers and me going through the Microblogging ingest chain with Glen. The foundations are now in place but it's complex and there are still some big unknowns; storage is the main concern and needs further investigation.

Sam's back from a successful meeting in Brussels. He was presenting our work package update at the annual review meeting for P2P-Next, the EU research project we are working on. Any temporary breaks in the drilling next door are now seamlessly filled with hammering; it seems everyone in the building is suffering.

We've moved our stand-ups to the main project board, which works better and stops us getting bogged down in too much detail. Glen and I learn about the new hosting infrastructure in Centre House and decide it makes sense to try and move our servers across. Tris heads off to W12 gathering project ideas and selling some of our older projects.

Chris B has got music clips into Coventry using HTML5 audio; Chris N is just trying to work through some proxy issues to get the links live. Theo's working most of the day on designs for Coventry - he prints them out, sticks them on the wall and declares them done. We'll see...

Chris J and Dominic come in to present the Resolver project to the team and some of the software engineers from Audio & Music. Tris and I leave convinced that there are opportunities for wider Resolver applications. We've agreed a plan to get the latest version installed on our servers. This is good news as our instance has some bugs.

Thursday: Tris and Theo are working on a screencast for Mythology; Theo spends some time in the voice booth (well, a meeting room) recording the voiceover, followed by lots of editing in the afternoon. There should be a blog post sometime next week when all will be revealed. Duncan's making progress on his spike for Microblogging; he's got his scheduler working with Sean's application filter. The ingest chain has been running for 40 hours now and has consumed approximately 10 million messages. The engineers get an in-depth look at the Scala code driving it. We've got a few visitors next week so we run through Vicky's slides and make sure our demos are running OK. Chris N has a workaround for the music clips in Coventry so we don't need to fiddle with proxies. We've also decided on some project tracking software; we'll trial it first and maybe later write up how we get on with it.

In summary: This phase of Coventry is nearly done and Mythology and Resolver have been wrapped up, but there's quite a lot of planning work that needs to happen over the next couple of weeks. More people in the office this week, nearly up to full strength. Drilling has been replaced by hammering and sadly our bookcase is still nowhere to be seen.

A Touch Less Remote: Part 3 of 6


Vicky Spengler | 10:22 UK time, Friday, 26 March 2010

The ´óÏó´«Ã½ R&D Prototyping team has been investigating how multi-touch software could support television viewing in the future. This article, written by Dominic Tinley from R&D Prototyping, describes the two prototypes we built and how we decided on the features for each of them.

Creating an easy way for viewers to decide what to watch is a problem which designers have tried tackling in many different ways since the dawn of radio and television. In our first prototype we wanted to explore how a multi-touch device could enable multiple users to collaborate on creating an evening's television viewing schedule in a simple and intuitive way.

The decision processes that even a single viewer goes through when choosing what to watch are incredibly complex, and yet they are processes that millions of people follow every day without giving them a second thought.

Deciding what to watch can be as simple as pressing the number for your favourite channel on your remote control, liking what you see, and watching it. The challenge was therefore to create something as simple and intuitive as pressing a single button, but that would add complexity and richness for viewers who wanted to broaden their interests and try something different.

Our first iterations imagined that each user would bring to the table a 'bucket' of programmes they wanted to watch that would be populated automatically based on personal preferences and past viewing habits. Users would then have to make conscious choices of which programmes to drag on to a timeline, and the software would assist in the negotiations between multiple people.

Initial testing of this idea failed our first criterion: ease of use. While it would be helpful when all viewers were in an active planning mode, it wouldn't work in the more common situation of opportunistic viewing, where people switch on the television to see what's on.

So we introduced an algorithm that takes the contents of each user's 'bucket' of programmes and prioritises these to slot them into the available viewing time. A user can tweak these preferences at a macro or micro level with a single gesture. At a macro level a user can give more or less influence to a particular source of recommendations, and at a micro level a user can increase or decrease the weighting given to a particular show.
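
The post doesn't publish the algorithm, but the idea lends itself to a simple greedy sketch: a base score per programme, a macro weight per recommendation source, a micro weight per show, and a best-first fill of the available time. All names here are invented:

```python
from dataclasses import dataclass

@dataclass
class Programme:
    title: str
    minutes: int
    source: str        # which recommendation source suggested it
    base_score: float  # from preferences and past viewing habits

def build_schedule(candidates, source_weight, show_weight, available_minutes):
    """Greedily fill the available viewing time, best-scoring first."""
    def score(p):
        return (p.base_score
                * source_weight.get(p.source, 1.0)   # macro-level tweak
                * show_weight.get(p.title, 1.0))     # micro-level tweak

    schedule, remaining = [], available_minutes
    for prog in sorted(candidates, key=score, reverse=True):
        if prog.minutes <= remaining:
            schedule.append(prog)
            remaining -= prog.minutes
    return schedule
```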

[Image: schedule_demo.jpg]

The result is an interface that presents multiple users with a television schedule that should meet their combined needs with almost no intervention but where a series of small tweaks will optimise this for their respective and combined moods on that occasion.

[Image: schedule-demo-still.jpg]

Having created a system for users to choose what to watch, we turned our attention to how people might interact with programmes using multi-touch. We considered the growing trend of viewers chatting along with television using social networking tools such as Twitter, but rejected these on the basis of ergonomics. In short, it's uncomfortable and impractical to constantly shift your gaze between a vertical television screen and a horizontal multi-touch screen, and far more practical to chat along using a laptop or mobile phone.

[Image: apprentice_game.jpg]

The multi-touch table becomes far more effective for large gestural activities that don't involve constant change such as infrequent votes dotted throughout a programme. Using the ´óÏó´«Ã½'s Strictly Social web tools and Apprentice predictor game as inspiration we developed a model for multiple users to play along with a programme and compare scores at the end.

In the next blog post we'll be looking at the hardware and software behind the prototypes we've developed.

Links

Strictly Social: /strictlycomedancing/play/strictly_social/about.shtml

Apprentice Predictor: /apprentice/about/predictor.shtml

Weeknotes #6 (19/03/10)


Tristan Ferne | 15:24 UK time, Friday, 19 March 2010

It’s a lovely clear blue Monday morning in London. But we’re struggling a little with the Microblogging project and we think we could usefully spend some time getting a clear and common understanding of the requirements and data sources. There’s a man drilling into the office now, but then I think we turn a corner and agree that there are really two distinct projects within Microblogging; it separates neatly (as it should) into a reusable ingest chain and a user-facing application built on top.

We need to get better at managing our flow of work; there’s a little gap in April before some big projects kick off, so we’re working out how to fill it - possibly with some process and outreach work. I know that doesn’t sound that exciting but we have some ideas.

On Tuesday afternoon it’s the final presentation of Resolver to Audio & Music, everyone is agreed it’s a great piece of work and it seems to answer all our questions. Afterwards we think about how this might continue, some possible internal routes and some more open and wider routes. Lots to ponder.

Bookcase update: We’ve now received a floor plan showing the proposed location of said bookcase which requires our approval.

The Microblogging ingest chain ran overnight on Wednesday with no problems, just a couple of connections dropped by the proxy but all recoverable. It’s getting approximately 100 messages a second and we consumed 3 million messages overall. This is giving us some really interesting scaling and messaging work.

Thursday morning kicks off with a good discussion with everyone about our new project ideas; we cross some of the legacy ones off and Chris B adds a new one. We’ve set a deadline of Easter for new ideas. We’ll then spend a couple of weeks assessing and discussing them, iterating down to a prioritised list we’re happy with. We’re also thinking about our task board because it’s not really working. We decide we’re going to trial an online equivalent for task management (any suggestions in the comments please), probably a very simple system with a backlog, tasks in progress and tasks done. Cookies in the office today. And there’s an incident with a microwave.

Coventry now has the full project team of two engineers and one designer on it; we’ve got the code framework in place and we’re sorting out the page layout and design. I’m sorting terms and conditions with the data sources and I think we’re on track to make this one live in a couple of weeks. The only dip in this project right now is that we really want it to include some playable music but we’re not sure how. At the end of the day Chris G has a conference call with R&D North about an idea they’re proposing for the upcoming election. It’s really interesting but we’re too busy to help at such short notice.

So this week; a few people are out of the office using up their annual leave, sporadic drilling in the neighbouring lift shaft, more clarity around the Microblogging project, really starting to get somewhere with Coventry, a general trend to early starts in the morning and Sean has wrapped up the Presence project. Good progress.

A Touch Less Remote: Part 2 of 6


Vicky Spengler | 15:34 UK time, Thursday, 18 March 2010

The ´óÏó´«Ã½ R&D Prototyping team has been investigating how multi-touch software could support television viewing in the future. You can read an overview of the project in A Touch Less Remote: Part 1. This article, written by Maxine Glancy from ´óÏó´«Ã½ R&D Audience Experience, considers the existing role of remote control devices in the home and how these prototypes could change these scenarios.

There has been rapid development of new technology in recent years that has significantly changed the way people interact with media. Multichannel television and broadband internet access are now more common than not in UK homes and mobile internet access is becoming ever more popular.

But despite these advances remote technology has, in general, not changed all that much. This makes remote devices a fascinating area for research as they play such a major role in media interactions. In looking at how remote devices may change we have considered the more widespread use of multi-touch devices as one likely possibility.

Our particular frame of reference is how audiences access ´óÏó´«Ã½ content and services. If we can understand design trends and market trends in the area of remote control devices and systems then we can have a positive impact on their future development. We can also respond more quickly to creating media that supports any new modes of interaction.

There are many different types of households but there are some media-related features and activities that appear in most. These include quality time being spent congregated around a central media point, negotiations between members of the household over media content, the use of devices that support social activities and the desire or ability to blend media content from various sources.

What one might call 'traditional' remote control appliances are still very much found in homes. People may like them or loathe them but there is very little choice but to use them. In most cases people use the controls that came with each device: while universal controllers are available, very few people have them, as they are too complicated to use and, to a lesser extent, too costly.

Remote controls have remained remote. They have remained material objects with limited physicality due to their limited input and output methods. Meanwhile new devices like touch screen phones have introduced people to whole new ways of interacting with technology through new actions and gestures.

While the average number of remote controls in each home is increasing, as people have more devices that can be controlled, they are still most likely to be found at high-quality 'media points' where media is consumed, usually the living room in family homes. So with this project we were really keen to explore what would happen if there was a major shift in the way these devices worked. We were particularly interested in negotiated viewing and social game play. Multi-touch allows us to explore very different scenarios of remote control usage.

All remote control mechanisms can be classified according to a set of attributes that describe their varying characteristics. These include physicality (how physical you need to be with the device), distribution (are the control cues on the remote control or on the device itself?), proximity (how close it is to the device being controlled), materiality (how big, small, real or virtual it is) and people (how many users it supports at one time).

A basic television remote control is held in the hand, does not require much physical effort, will often require menus to be viewed on a separate device (the television screen) while buttons are pressed on the keypad, and will not support multiple users at one time.

Our prototype multi-touch device is quite different in that it's not held in the hand and does require some physical effort as you have to move around a bit to use it, does not require menus to be viewed on a separate device as you touch the menus directly on the screen, and will support multiple users at one time.

Using multi-touch can challenge current remote devices and show how they can be interactive in themselves as well as passive controllers of other technology. We feel it is important to try out new ideas to challenge the familiar, traditional approaches and these demos are an opportunity to provide evidence that different approaches might be better.

You only have to look at the rapid take-up of the Wii Remote, which has been so readily accepted by the public. Its barriers to use are low because it is simple, intuitive, and maps easily on to some activities and users' natural gestures in those situations.

In the next blog post Dominic Tinley from R&D Prototyping will outline the two prototypes we decided to develop.

R&D at Maker Faire UK 2010


Ant Miller | 17:39 UK time, Wednesday, 17 March 2010

We've probably mentioned this once or twice already, but the R&D team* went to Maker Faire UK at the Centre for Life in Newcastle this weekend just gone. The event brought together hackers, makers, engineering hobbyists and creators from across the UK and Europe, and the wider world, for a two-day festival of making stuff.

Makerfaire_2010_47_500.JPG

This is the second time we've been along: last year we took the ´óÏó´«Ã½ Weatherbot, a mash-up of remote control tanks, RFID technology and a giant map of the UK which pulled in local weather reports, plus a host of demos of existing R&D projects. This year we were even more ambitious, with rapid-prototyped remotes, light field cameras, and the first integrated demo of surround video with ambisonic sound (as featured in an earlier blog post).

We even took along our tapeless cameras and laptop based edit station, and produced this short film, outlining our efforts at the event.

[Embedded video]


If last year's event is anything to go by there will be plenty more coverage of the Faire around the internets in the coming weeks, but we hope this at least gives you a flavour of our weekend. UPDATE: ´óÏó´«Ã½ 5 live 'Pods & Blogs' are already live with their rundown.

Thanks to all our colleagues North and South who helped us prepare the demos this year, to the staff of the Centre for Life for hosting the event, our friends at O'Reilly UK for inviting us back, and to the good people of Newcastle, and from miles around, who came down to say hi and see what we're up to.

* The team this year were Tony Churnside, Ian Forrester, Matthew Shotton and James P Barrett from the North Lab, and Jigna Chandaria, Max Leonard and Ant Miller from the South Lab, plus Mo McRoberts helping out from the ´óÏó´«Ã½ Backstage community on Saturday. We're grateful to Salma Alam for the beautiful, specially commissioned music for our soundtrack - all rights reserved.

Weeknotes #5 (12/03/10)


Chris Godbert | 16:19 UK time, Friday, 12 March 2010

Monday

It's going to be a quiet week in the office. Lots of people are out at a mix of conferences and training courses so there's just four of us at the stand-up. Tris disappears at lunchtime to get the train up to Manchester; he's visiting R&D North Lab tomorrow.

I've finally written up my notes from the Mythology Retrospective - need to share these with the wider team and book a Signatures retro. We've been slack about doing these and need to get back into the habit.

Tuesday

Duncan's looking at scheduling stuff for Microblogging, but it's clear we need to understand the UX plans in more detail to make sure there's nothing too complex in there. Sam has extended the Tree of Life LIMO demo to include a prototype send-chapter-link feature.

Tris has a busy day in R&D North: some demos of subtitle searching, ambisonics and time-synced data, and a visit to the Red Button team to see some Wii prototypes.

We decide we would definitely like an office dashboard.

Wednesday

We've identified five technical spikes we need to complete on Microblogging and are clear on what the first iteration of use case 2 will be. Hopefully it all fits with Glen's underlying infrastructure.

Tristan and George have a good meeting with the Music team about Coventry; it looks like it could fit into their plans really well and we're pleased with this. We're slightly underwhelmed by our second big demo of the day - well, you can't win them all. We're really pleased that we've been given the go-ahead to share some of our recent projects via this blog - stay tuned.

Vicky puts up the first of a series of blog posts about multi-touch. Watch out for more.

Duncan is having a quick look at a tool to see if we could use it to visualise our Microblogging data. Chris B has been modifying the algorithm he's using to normalise music trending data, which should mean our charts make more sense.

Sam is continuing preparation for the P2P-Next meeting in Brussels next week; looks like this will take up most of his week.

Thursday

Michael S is in the office today because we were hoping to have a mini hackday around project POAF but unfortunately not enough people turn up, so instead we start to scope it out. It seems do-able but it's probably a bit bigger than a hackday.

Yet another person arrives in the office to measure up space for our new bookcase: that's the fifth and counting, how hard can it be?

We've got renewed impetus on Coventry and the tech spike has shown that most of it is possible. Next week we're going to work out exactly what we're going to deliver this month. We're thinking it will be a single-screen interface highlighting content that is trending, but there are some interesting issues in the design (in its broadest sense) to be addressed about how we represent and communicate it.

We finish the day off by sticking some existing project ideas up on the board, it's a bit sparse but we start to fill it up and in the process get some new ones too.

Friday

Chris N is back in today and is catching up on where we've got to on Coventry. He's been on a course at our Wood Norton training centre for a few days, learning about broadcast technologies.

Year end is looming so I've spent the morning making sure our finances are up to date; we've got a meeting about it with Caroline later.

This week has seen lots of people out of the office as they plan future work, or work with colleagues in other areas - all good stuff, but it's made it hard to make progress on our work. Fingers crossed we'll be back to normal next week.

Ambisonics and Periphony [part 1]


Anthony Churnside | 16:47 UK time, Thursday, 11 March 2010

I'm currently working for R&D in the North Lab, which I've blogged a bit about here. I'm working in Production Magic, Graham Thomas' section, on the future of surround sound, and I thought that it might be interesting to write a little about the project. In the media sector audio is often seen as the underdog to video. This is even true in some parts of the ´óÏó´«Ã½, where we produce much more audio than video content (we don't make silent TV!). R&D has a strong audio team, led by Andrew Mason, who's talked a bit about its work in a video here.

[Embedded video]

Two of my colleagues, Chris Pike and Chris Baume, and I decided that we would propose some areas of audio research into which we felt ´óÏó´«Ã½ R&D should be investing some more resources. One of our proposals was research into Ambisonics and periphony.

The majority of the audio that the ´óÏó´«Ã½ creates is stereo. The two exceptions to this are Radio 5 Live, which, for now, is broadcast in mono, and ´óÏó´«Ã½ HD, which has a mixture of stereo and 5.1 surround.

5.1 surround is one of the current multichannel surround standards. As a result of extensive testing in the 1990s, the ITU recommends a set of speaker positions for 5-channel surround, with three at the front and two at the back, at specific angles.

There are a number of disadvantages to this way of recording surround sound. One of the major issues is compatibility with formats with a different number of channels. The sound engineer has to check compatibility with mono, stereo and 5.1. In the future the engineer may also have to check against 7.1, 22.2 and whatever other discrete-channel surround system comes next. That would require a lot of time and a room with enough speakers to cover every possible set-up.

Another issue faced by an organisation like the ´óÏó´«Ã½ is how we archive our material. Theoretically, if we archived the stereo, 5.1 and 7.1 mixes of a piece of audio it would take eight times the space of the stereo recording alone (2 + 6 + 8 = 16 channels, against two). These ITU standards were born of a lot of research into which angles gave the best sound, and are essential when setting up a studio or listening room. However, I would be surprised if many of our audience had their own ITU 5.1 set-up, and the talks I've had with friends in the computer games industry suggest most of their customers who listen in 5.1 don't follow the ITU's recommendation, preferring to use a square, perhaps because that set-up fits best around their furniture. While games users may not be representative of the ´óÏó´«Ã½'s audiences, we shouldn't assume our 5.1 listeners are using an ITU-recommended set-up.

A possible alternative to these discrete channel formats is a system called Ambisonics. This system was developed in the 1970s and has had a cult following since but has yet to break into the mainstream, being of interest mainly to academics and select audio engineers. The fundamental idea behind Ambisonics is to attempt to represent a sound-field at a single point in space.

Without going into too much detail, it is an extension of coincident microphone techniques such as mid-side recording, capturing audio from three perpendicular figure-of-eight microphones all positioned at the same point in space. When combined with an omnidirectional microphone, these four signals are known as B-format. This signal represents the three-dimensional sound-field.

Since the 1970s, development of the system has led to Higher Order Ambisonics, which provides higher resolution in the localisation of sources within the sound-field, at the cost of needing more channels to represent the same recording.

So how might this technology help solve some of the problems described above? A major potential advantage of Ambisonics is its lack of dependency on speaker position. Unlike 5.1, the audio channels carried in an Ambisonic signal do not map directly onto speakers. The number of speakers and the way they've been set up by the listener is not as important, and the same signal can be decoded to any speaker array. This flexibility would allow one common set of signals to be sent to everyone, and they would be able to decode it to suit their listening environment, regardless of the way they've chosen to set up their sound system. This also has obvious advantages from an archival point of view: unlike keeping stereo, 5.1 and 7.1 mixes, keeping the Ambisonics recordings could potentially help future-proof the archive. In my next post I'll talk about what we've done so far, and what we might do with Ambisonics in the future.
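
To make the speaker-independence point concrete, here is a minimal, horizontal-only sketch of one standard decoding approach (mode-matching): build the matrix that would encode a plane wave from each of the listener's actual speaker directions, pseudo-invert it, and apply it to the same B-format signal. The layouts below are examples, not recommendations:

```python
import numpy as np

def decode_horizontal_bformat(w, x, y, speaker_azimuths_deg):
    """Decode horizontal B-format (W, X, Y) to an arbitrary 2D speaker
    layout by mode-matching. Assumes the traditional -3 dB weighting on W."""
    az = np.radians(speaker_azimuths_deg)
    enc = np.stack([np.full_like(az, 1 / np.sqrt(2)),  # W row
                    np.cos(az),                        # X row
                    np.sin(az)])                       # Y row; shape (3, n_speakers)
    dec = np.linalg.pinv(enc)                          # shape (n_speakers, 3)
    return dec @ np.stack([w, x, y])                   # one feed per speaker

# The same signal decodes to a square...
# feeds = decode_horizontal_bformat(w, x, y, [45, 135, 225, 315])
# ...or a hexagon, with no remixing required:
# feeds = decode_horizontal_bformat(w, x, y, [0, 60, 120, 180, 240, 300])
```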

A Touch Less Remote: Part 1 of 6


Vicky Spengler | 12:52 UK time, Wednesday, 10 March 2010

The ´óÏó´«Ã½ R&D Prototyping team has been developing its own multi-touch devices and applications to investigate how the technology could support television viewing in the future.

Multi-touch devices allow users to apply multiple finger gestures simultaneously to control the interface. Touch screen multi-touch devices allow users to directly manipulate the interface.

The project kicked off last year when ´óÏó´«Ã½ Children's approached us about building a multi-touch table for developing educational multi-player games. They had some specific requirements for their audience that couldn't be met by any of the devices already available on the market.

Testing the first prototype with children highlighted a number of hardware design issues, and as a result a more child friendly prototype is now being developed. This frees up the first prototype table to be used for some other experiments around remote control usage in the home.

The purpose of this work is to familiarise ourselves with this emerging technology, and to develop concepts and design interfaces to present users with new ways to access ´óÏó´«Ã½ content and services.

The strengths of the multi-touch technology we are using for this project are: Firstly, it supports many touch points, therefore multiple users can perform tasks simultaneously. Secondly, the screen size supports group activities that share the same interface. Finally, it allows users to directly manipulate the content displayed on the surface, making the technology less visible and enabling more natural interaction.

In recent projects we have explored second screen activity, using mobile as a companion to the TV. Where mobile is suited to personal tasks that require fine motor skills like text entry, the multi-touch surface supports bigger gestures and shared experiences like multi-player games.

Our aim with this project has been to use the multi-touch system for its strengths, imagining a future where multi-touch complements existing hardware in the home environment.

We're going to write a series of posts about the different areas of this project that will include overviews of the ideas, design and technology and evaluation of the finished product.

In the next blog post Maxine Glancy from R&D Audience Experience will explain the context for this remote technology research.

Weeknotes #4 (05/03/10)


Tristan Ferne | 16:05 UK time, Friday, 5 March 2010

Monday

Almost everyone is in the office this morning and we have a possibly over-long stand up to start the week. George hands out some lovely new pink R&D lanyards - we are now appropriately branded. The final report for Music Resolver is delivered in paper form and, in another tree-related delivery, I receive a copy of a journal for which I wrote an introduction a while ago and had completely forgotten about. That means I have several hours of reading ahead of me this week.

Vicky and Theo meet with Maxine from R&D’s audience experience team to evaluate the Multitouch work so far and talk about future collaborations. Chris G, Theo and I spend an hour shuffling and rewriting bits of paper for our Mythology presentation next week.

Dave is continuing work on the Microblogging project. We have three user interface approaches which he’s been developing; he is adding content to the visuals now, and by the end of the week we should have a presentation to demonstrate the key selling points.

Tuesday

We try to work out how to watch Mark Thompson’s announcement on the ´óÏó´«Ã½â€™s future in our office, give up and head to the 8th floor where they’re showing it. Later in the morning Sam gives us a sneak preview of his talk later in the week on P2P-Next, HTML5 and time-synchronisation. This week he’s mainly preparing for an EC review of the project in Brussels next week. And we plan out the first iteration of Coventry now that Chris N and Chris B are back in; we’ve got a good direction now and they start on a new revision of the data model.

Akua buys cakes for Chris’s birthday. We eat those while going through this week’s work and next quarter’s priorities, we’ve got lots of project ideas and we’re trying to identify some themes. Tony from the Archive project is in the office at the end of the day and Chris G shows off the Mythology demo.

Wednesday

Chris G and I revive some work we’ve been putting off, its aim is to describe what the team aspires to do - for both a common team understanding and for communicating to others. We’re having some problems with connectivity on our incubator servers which means we have to work round things and re-prioritise some of our tasks. Glen is having to play sysadmin a lot.

George has a useful meeting with strategy people from the World Service to talk about how one of our recent projects might be used on a new service they are planning. Later Peter B from R&D comes in to talk about a proposed recommendation service, I think we made some constructive suggestions that should help to shape it.

The microblogging chain from Glen and Duncan is now sucking in data and we’re going to leave it going for further analysis later. Duncan continues working on message queues, trying out AMQP, websockets and RabbitMQ. Work on phase 1 of this is done so we hand it over to Audio & Music in the afternoon, and it’s over to them to make suggestions for phase 2.

Thursday

Vicky puts our future project themes up on a wall so anyone can add project ideas to it. We want new ideas, suggestions for improvement in the themes and a sanity check for our suggestions. Theo is working late this evening putting the finishing touches to our Mythology presentation, which now looks pretty good, and Sean is finishing off his ejabberd module for Presence so that it will be fully compatible with various ´óÏó´«Ã½ systems.

Friday

I start the day writing speaker notes for our presentation on Mythology while Theo is still tweaking it. And Coventry is starting to look good. Chris N has finally managed to deploy it to our servers - we’re really struggling without a sysadmin, and he lost about two days to it this week. Chris B has already got it looking pretty nice and has found some issues with some of our data sources, and it feels like we’re learning something and getting somewhere. Right now I’ve just finished reading the Resolver report and I’m about to hit publish on this…

R&D (South Lab) - Video Update on the Move


Ant Miller | 14:00 UK time, Thursday, 4 March 2010

In late February 2010 the ´óÏó´«Ã½ R&D South Laboratory relocated from Kingswood Warren in Surrey to Centre House at the heart of the ´óÏó´«Ã½'s west London campus. As the second of three groups of staff moves across, we visit the new site to see how people are settling in, how the office spaces and labs are coming together, and to see the ever-growing data centre.

We return again as the last of the staff arrive and get together for the first all-hands meeting in the department's new HQ.

[Embedded video]


For the first time we are releasing this video on the website simultaneously with the blog. If you're a regular blog visitor but have had trouble accessing the videos from abroad, this platform should provide access for you. The video is on the "R&D South Lab, Move and Opening" page.
