BBC

Archives for December 2009

BBC Trust visit to R&D- 3D Demo


Graham Thomas | 15:00 UK time, Monday, 28 December 2009

Earlier this year the R&D department hosted the BBC Trust at our Kingswood Warren facility, and during their stay Dr Graham Thomas demonstrated some of the 3D work done in the production area. In the short film below, Graham shows some of the interesting innovations he and his team have introduced to broadcast production over the last few years.

In order to see this content you need to have both Javascript enabled and Flash installed. Visit BBC Webwise for full instructions. If you're reading via RSS, you'll need to visit the blog to access this content.

Making things Vanish- The Truematte Technology


Graham Thomas | 09:00 UK time, Saturday, 26 December 2009

In the second of the "Christmas Lectures" from R&D this year, Quentin Cooper meets Graham Thomas and is introduced to the incredible retro-reflective cloth invented at BBC R&D, which has transformed the use of chroma key both on set and on location.



Navigating Audio- An Experimental Spiral


Alia Sheikh | 14:21 UK time, Thursday, 24 December 2009

In the first of our "R&D Christmas Lectures", Quentin Cooper explores the amazing audio navigation tool described in the Collaborative Archive Navigation post last week. Andrew McParland, head of the Audience Experience section, explains the genesis of this experimental way of exploring sound, and how it could potentially be used in future. We have published a white paper that explains this work in more detail.



Jazz Shorts- New technology in an HD Music Show


Ant Miller | 10:00 UK time, Tuesday, 22 December 2009

[Editor- the following is an article by James Allan telling the tale of the production of Jazz Shorts, a special music show commissioned for the BBC HD channel. R&D provided the Ingex tapeless recording system and ambisonic sound recording technology, and the show was produced and directed by Nik Pinks, an R&D engineer.]

[Image: the Jazz Shorts team]
On a dreary Sunday afternoon in west London I'm sharing a small corner table in a pub with Nicholas Pinks, Robert Dunbar, and David Holland. They are the team behind a new pilot BBC HD programme called "Jazz Shorts", and along with engineers and researchers from the BBC R&D department, they are responsible for a landmark in the history of the BBC HD channel.



"Jazz Shorts" is of key significance for the BBC HD channel, for two reasons in particular. Up until now, programmes commissioned for the BBC HD channel have mostly been byproducts of other BBC SD channels. "Jazz Shorts", on the other hand, is one of the first programmes to be commissioned solely for BBC HD. It marks a clear new stage in the development of programme making at the BBC.



Approaching the project with the aim of making a dedicated HD programme allows a much greater focus on how the potential of the medium can be used. David Holland was keen to stress that thinking in HD meant the team's aim was to immerse the viewer in the surroundings, to put them amongst the audience, to feel the intricacies of the performance.

"Jazz music is wrought with delicacy"

"We want to bring the viewer the performance as it is live, HD allows for that intimacy."

The second reason "Jazz Shorts" is a landmark production is what lies underneath its skin. This is the first truly tapeless broadcast of its kind in the world. The department has been developing the technology (Ingex) for some time now. Nicholas Pinks, director and producer of Jazz Shorts, works as part of the R&D team that has been at the core of the project.

[Image: Ingex in use at Jazz Shorts]

Using Ingex, the change from a traditional workflow is dramatic. The whole concept is to integrate the separate parts of the recording and editorial process. The relevance of this to the future of the HD channel goes beyond simply saving time and money. It is the brainchild of BBC R&D, the heart of the technology on which the corporation bases itself, designed to create a much improved production workflow with HD at its core.

However, Ingex isn't the only groundbreaking technology involved in the production. The R&D team also used their ambisonic equipment - a form of audio recording that was explored back in the 1960s, but has been returned to in the BBC's labs because of its potential for new audio formats such as surround sound. You probably won't notice it in the final show, but part of the soundtrack was captured on ambisonic microphone equipment.

Speaking to Nik, I asked how he felt about "Jazz Shorts" being a test bed, a "trojan horse" of technology creeping inside the confines of BBC HD:

"We got the commission based on joint merits. I believe the HD channel wanted more dedicated music, and I also feel this programme worked very well in demonstrating future technology, which is obviously due to the efforts of BBC R&D. I think the pilot of Jazz Shorts achieved what it aimed to: a tapeless production, shot to BBC HD's stringent standards, with a very good story, and ultimately an enjoyable and informative show for the audience."

After all, the idea for the show was not conceived in the technology labs of the BBC but in that favoured location of inspiration for a TV producer - the local pub. Nik, Robert, and David came up with the idea after a live gig at the George IV in Chiswick. Watching the audience interact with the stage was a key influence. The question was whether they could replicate this atmosphere through the medium of television.

There is another anomaly in the production of Jazz Shorts. The three people I am sharing the table with on this rainy afternoon have all graduated from university within the last three years. In fact, much of the crew was made up of university friends. When you take into consideration the pioneering nature of the production, it strikes me as a little odd that such great risks would be taken by employing novice crew members. Robert Dunbar (co-founder of Sixth Stage Productions, and producer of Jazz Shorts) explains why they assembled the team this way.

"We wanted people who can work together as a team, most of the crew had background experience in the theatre or live performance. We needed musicians rather than sound technicians."

This makes perfect sense when considering the artistic merits of the programme, yet the underlying fact remains that if you cannot operate the technology, you will not have a broadcast. For a small company, the cost of an experienced crew is a burden that can dictate which programmes it can make. Teamwork and artistic flair are great skills to have on board, but only up to a point.

"There were obviously people involved who had much greater experience - we had some of the BBC R&D team, and Chris Price from DV Solutions - but the crew had to be based around people who were creative and had a passion for the work. Everyone working on the team was desperate for it to be a success."



If you were to peer into the world of the Jazz Shorts production team during the three days surrounding the programme, you would certainly find moments that were less than comfortable - some would hurt to watch. It was far from the stellar process envisaged by the team, but these things rarely are. If anything were to be proven by "Jazz Shorts", it would be that the BBC is committed to HD, but for Robert, Nik, and David it is much more personal. What they have set out to prove is that you do not have to be a big company to make big steps forward. And for BBC R&D, it's been a great opportunity to show that the latest technologies to emerge from the labs can be used by productions large and small to make compelling and engaging programmes.




Matthew Postgate in Conversation with Quentin Cooper, Part 3 of 3


Matthew Postgate | 12:00 UK time, Monday, 21 December 2009




[Editor: A few weeks ago we recorded a series of short films around R&D with Quentin Cooper, host of "Material World" and "Connect" on Radio 4, and science and engineering "journalist at large". We shot quite a few mini documentaries with Quentin, looking at various recent, and not so recent, elements of the work at R&D, interviewing key staff and playing with various peculiar bits of kit.
In this final part of an in-depth three-part interview with the controller of R&D, Matthew Postgate, Quentin and Matthew discuss the current and soon-to-be-departed base of R&D, Kingswood Warren. The reasons for the move and the future place of R&D in London and the BBC's future northern base at Media City are explored, and they also look at the culture and the people who make R&D tick. The discussion closes with an exploration of the future relevance of R&D, and the fundamental value of the department to the BBC and the wider industry.]

Collaborative Archive Navigation

So, imagine you're in the business of making and broadcasting content. You've been doing this for a while now and you're getting pretty good at it. People like your material, they get attached to it. As productions get more expensive and the relative cost of recording media drops, you start to hang on to them. Fast forward a few decades, those tapes are really piling up now, and you're still making content, and now you're sitting on top of something that can be reasonably called 'An Archive'. At the same time powerful computers are becoming cheaper and more ubiquitous, no longer just lab curios.

Accordingly, some bright spark insists that you should store content as a precise digital copy, exactly as broadcast. You can do a lot with 1s and 0s. However, give it a few more years and those 1s and 0s are beginning to pile up too; plus you have a copy of every broadcast rather than just every programme, repeats and all. Shows may have gone out with different trails or, worse, minor changes in the edit for obscure reasons. Worse yet, schedules are capricious beasts which aren't always adhered to, so you might not have recorded the start or end of the show. Anyone who had a VCR remembers this problem.

On top of that, your information about the programme comes from the broadcast stream itself. This is seriously minimal- the same stuff your Freeview box gets- broadcast time, channel, title and brief description. If you need to look something up, and you don't know exactly when it went out, you might be trawling through a lot of content. Pity those who are just looking for a clip; a snippet of a documentary, something you need to reuse from a radio broadcast, a news bite that has suddenly become relevant.

The logical extension is that you start to think about digitising all your piles of tapes too - you (probably) don't run the risk of chopping the end off your programmes, but now you have no automatic metadata at all.

Ok, busted. What we're actually talking about here is metadata. Don't worry, it's going to be OK.

The Audience Experience research section has been thinking about different types and sources of metadata and why some might be more useful than others. In the past, we investigated unobtrusively embedding metadata, without harming compatibility with the MP3 specification and players.


[Image: MP3 of the Mark Kermode radio show with embedded chapters, text and images]
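To make the "embedding without harming compatibility" point concrete, here is a hypothetical sketch (not the code we used) of how an ID3v2.3 tag can be prepended to an MP3. Because the tag sits before the audio stream and its sizes avoid bytes that look like MPEG sync words, ID3-unaware players simply skip or ignore it:

```python
import struct

def synchsafe(n):
    # ID3v2 tag sizes are stored as four 7-bit bytes so they never
    # contain 0xFF, which could be mistaken for an MPEG sync word.
    return bytes(((n >> shift) & 0x7F) for shift in (21, 14, 7, 0))

def make_id3v2_tag(title):
    # Build a single TIT2 (title) frame: 4-byte ID, 4-byte size,
    # 2 flag bytes, then a text-encoding byte (0 = ISO-8859-1) + text.
    text = b"\x00" + title.encode("latin-1")
    frame = b"TIT2" + struct.pack(">I", len(text)) + b"\x00\x00" + text
    # Tag header: "ID3", version 2.3.0, flags byte, synchsafe tag size.
    header = b"ID3" + bytes([3, 0, 0]) + synchsafe(len(frame))
    return header + frame

def tag_mp3(mp3_bytes, title):
    # Prepending the tag leaves the MPEG frames untouched, so
    # players that ignore ID3v2 still decode the audio unchanged.
    return make_id3v2_tag(title) + mp3_bytes
```

The real work (chapters, images) uses richer frame types, but the principle is the same: extra frames ride along in front of the audio without altering it.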


We have also investigated the possibility of inferring metadata from the content itself. Sticking with the audio example, traditionally a waveform display is used to help navigate around audio content - we've turned that audio data into insane striped multi-coloured visualisations:


[Image: Visualisation of the Jeremy Vine show]

It turned out that the colours provided some useful signposts. Speech separated clearly from music, we could often distinguish between male and female speakers, and telephone interviews showed up as low-bandwidth dark spots. We used these visualisations as a bootstrap for a human-controlled music annotation tool. While we couldn't always use the visualisation to tell what was happening in the audio at any given point, we could use it to tell when something different was afoot. We could look at all sorts of other audio features, possibly finding other novel ways to navigate through sound.
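As a rough illustration of the kind of features involved, here is a sketch (with made-up thresholds, not our actual analysis code) of computing per-frame energy and zero-crossing rate and mapping them to colour bands:

```python
import math

def frame_features(samples, frame_len=1024):
    # For each frame, compute RMS energy and zero-crossing rate:
    # two cheap features that separate silence, speech-like and
    # music-like audio well enough to paint a colour stripe per frame.
    feats = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        rms = math.sqrt(sum(s * s for s in frame) / frame_len)
        zcr = sum(
            1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0)
        ) / frame_len
        feats.append((rms, zcr))
    return feats

def colour(rms, zcr, loud=0.1, busy=0.2):
    # Hypothetical mapping: dark = quiet (phone lines, pauses),
    # warm = tonal/musical frames, cool = noisy/speech-like frames.
    if rms < loud:
        return "dark"
    return "cool" if zcr > busy else "warm"
```

Even features this crude produce stripes where "something different is afoot" stands out at a glance.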

Metadata can tell you what your content is. Metadata can tell you what the ingredients of your content are. If we drill down to a single unit of content - let's say a song - it can tell you a surprising amount about it. All of this seems irrelevant until you consider the problems it can solve: the thousands of hours of digital content, and the person searching for that specific clip.

When applying this to our giant digital Archive Of Content - two things become clear:
  • It would be useful to add metadata against the timeline of the content (thing X happened at time Y),
  • It would be a mammoth task to add this information from scratch, even if you knew at the outset exactly what kind of information was useful and what wasn't.
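To make the first point concrete, timeline metadata can be as simple as a sorted list of (time, label) pairs per programme - a hypothetical sketch:

```python
class Timeline:
    # Timeline metadata for one programme: "thing X happened at
    # time Y", queryable by time range. A sketch, not a real schema.
    def __init__(self):
        self.events = []  # sorted list of (seconds, label)

    def add(self, seconds, label):
        self.events.append((seconds, label))
        self.events.sort()

    def between(self, start, end):
        # Everything annotated in [start, end), in time order.
        return [(t, l) for t, l in self.events if start <= t < end]

tl = Timeline()
tl.add(754, "interview starts")
tl.add(1312, "song plays")
assert tl.between(700, 800) == [(754, "interview starts")]
```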

Luckily we don't have to create metadata from scratch if we can get it from somewhere else - especially from elsewhere in the production chain. We could collect different kinds of metadata, from information about locations and sets to full cast and crew lists- all sorts of useful stuff that could help you dig up that piece of content again.

There's no reason to stop there either. Once someone has retrieved the content from your archive, they could also add their own annotations. For example, a production assistant looking for a particular clip might leave some metadata behind, making it easier to find again. The next person seeking that clip would find a trail of breadcrumbs leading right to it. The archive should also apply any metadata belonging to a piece of content to its repeats. However, repeats aren't always identical to the original broadcast, so there's the possibility that the metadata for them is wrong.

Every metadata timestamp might be out by a few seconds if the programme was edited slightly, different trails accompanied it, or the channel was running late. Rather than just cursing "the system", the user could provide an effective fix with a few clicks, improving the quality of the archive overall.
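Such a fix might amount to shifting every timestamp by a constant offset once a user pins down one event - a hypothetical sketch, assuming the repeat is the same edit merely displaced in time:

```python
def realign(events, reference_time, observed_time):
    # A user spots that one annotated event actually occurs at
    # observed_time in this repeat; shift every timestamp by the
    # same constant offset, on the assumption that the repeat is
    # the same edit started slightly early or late.
    offset = observed_time - reference_time
    return [(t + offset, label) for t, label in events]

events = [(60, "headlines"), (754, "interview starts")]
# The interview really starts at 760 in tonight's repeat:
fixed = realign(events, 754, 760)
assert fixed == [(66, "headlines"), (760, "interview starts")]
```

A constant shift won't cover re-edited repeats, but it handles the common "channel running late" case with a couple of clicks.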

It's all about taking advantage of the resources that you have. Not only do you need to make intelligent and innovative use of the metadata itself; the trick is to think laterally and capture it whenever you can, too. In addition to the now accepted ways of filtering, searching and digging using metadata, you need to provide clear, easy and quick ways for people to improve it. Even something as seemingly mundane as a few clicks to mark the start of an interview in a programme enriches the value of the recording itself.

The best part? It's not anything that you wouldn't be doing anyway, if you were researching in the archive. You'd still need to make notes about where the things you were looking for start and end. Harnessing the power of a large network of users means that the more the system is used, the more usable it becomes.





Matthew Postgate in Conversation with Quentin Cooper, Part 2 of 3


Matthew Postgate | 09:00 UK time, Wednesday, 16 December 2009


[Editor: A few weeks ago we recorded a series of short films around R&D with Quentin Cooper, host of "Material World" and "Connect" on Radio 4, and science and engineering "journalist at large". We shot quite a few mini documentaries with Quentin, looking at various recent, and not so recent, elements of the work at R&D, interviewing key staff and playing with various peculiar bits of kit.
This is the second part of an in-depth three-part interview with the controller of R&D, Matthew Postgate. The discussion explores the range of projects that the department has worked on over the year - from focussed development to blue-sky research, and from the open, shared, collaborative domain to more closely guarded work.]

Archives Workshop- Monday 30th November


Richard Wright | 12:00 UK time, Friday, 11 December 2009

As mentioned in Ant Miller's post of 4 December, on Monday the 30th of November BBC R&D hosted an invite-only Archive Workshop, with presentations from academia and industry. The head of the Archive Research section, Richard Wright, sends this report:

Archives are changing to meet the demands of an on-line world. These changes have been enabled by the application of new technology. The first BBC Archives Workshop was held on Monday 30th November at Kingswood Warren to explore these changes and the enabling technologies. There were sixteen presentations from academia, industry and from within the BBC. Presentations covered both technology for archives and the applications of archives, with many of the presentations including on-line demonstrations. Technology presentations included three on various aspects of visual search and image recognition research at universities, one on the use of fingerprinting to identify video content, a presentation on quality checking of content, the MXF file format and open standards, and a presentation on the use of techniques such as video segmentation and audio recognition to identify events in programmes.

Applications of archive content in academic research were described in presentations on a project by Glasgow Caledonian University and others, plus the implementation of the BUFVC archive of broadcast content and of the on-line Pathé news film archive. A community application - a communication support system for older people with dementia using archive content - was also presented.

Daniel Teruggi described INA, the French National AV archive, and explained an EU collaborative project on digital preservation: it is not safe to assume that content is safe once it is digitised! There was also a presentation about experiences in digitising content and the gems of old programmes and clips waiting to be discovered.

BBC presentations included a description of the Proteus content workflow management system used in radio; important now that archives need to be integrated into such workflows for TV as well as radio. There was a presentation of an R&D project that links the BBC archive database, called Infax, with an on-line content store that R&D has developed for storing digital TV broadcasts and browse-quality copies of the BBC archive (as the archive is digitised), with the result that programme makers searching the BBC archive database can now see the content in their browsers.

The workshop ended with a look back at the Archives Hackday that took place at the start of November at Kingswood Warren, where BBC computer programmers explored new ideas on the use of archive content.

The workshop was a very busy day, attended by about sixty people from the BBC and other organisations with an interest in archives. It was a good opportunity for people normally busy with operational work in the archives to see and hear about new developments, and a chance to meet and discuss the future of archives with researchers and innovative organisations.


Mooso - a game with a purpose


Tristan Ferne | 12:47 UK time, Thursday, 10 December 2009

The R&D Prototyping team, in association with Radio Labs, recently launched a new public prototype called Mooso. Mooso is a game to play while listening to BBC 6 Music. Sign up, then listen to 6 Music and tag the songs that are played. If you match other players' tags then you score points. You can read more about it on the BBC Radio Labs blog. In this rather long post we explain some of the development process for Mooso and some of the technology behind it, and we look at the problems and choices we had to make.

The evolution of an idea

The core idea for Mooso came out of an ideas session a couple of years ago. We wanted to build a web application for discovering new music that was inspired by games and fun.

Mooso is a game with a purpose (GWAP). GWAPs are a concept first developed by Luis von Ahn of Carnegie Mellon University; the first mainstream example was the ESP Game, which was subsequently licensed by Google as the Google Image Labeler. The principle behind these games is to create enjoyable and fun experiences for people that also do useful work as a by-product of their design. You can play several examples of GWAPs online, including image-based, text-based and music-based games.

Typically these games match up a number of players over the internet and get them to independently describe some aspect of an image, text or music that is provided; if the players' descriptions match then they get points. Those bits of independently entered and matched data can then be collated and used to aid search and navigation of the things that were described. This concept inspired us and we spent some time thinking about how we could use it for music discovery, but came to a bit of a dead end. We wanted to build a music-based GWAP as a way of gathering tags and metadata, but at the time we didn't have any on-demand music on bbc.co.uk, not even clips, so we couldn't build a completely analogous game to the Image Labeler. What we do have, however, is live radio, and the eureka moment came when we realised we could develop a game of this type which ran synchronously with a radio station, using each song played on the radio as the basis for a round in the game.

So we took that idea and developed it, drafting some possible rules and then playing the game on paper to see if it would work - you just need a few people, a radio, a countdown timer and some paper and pens. When the radio starts playing a song everyone starts writing down tags, while keeping them hidden. After two minutes the round ends and you start the scoring. Go round the table in turn, each player reading out their tags; if anyone else wrote down your tag then you and they get a point, then cross it off, otherwise just cross it off. When everyone has either matched or not matched all their tags you can total up the scores for that round. Anyway, it was pretty fun to play and the idea looked good. We put together a small team and built a quick prototype over a couple of weeks. This worked OK and we played the game internally a bit, but it wasn't in any way scalable or robust enough to put on the web in public. So the idea was shelved temporarily while other things got developed and time passed.

The original Mooso design

Then earlier this year we picked the project up again and turned it into a public prototype.

How it works

Mooso is built in Ruby; it's one of the technologies our team uses for prototypes, in this case providing enough robustness and performance combined with speed of development. The main challenge in building the game is its real-time nature, which is an interesting trend on the web in itself. We decided to use XMPP (Jabber), a standardised Instant Messaging (IM) protocol, to power the game, as it gives us a scalable way to receive real-time submissions from players and send them instant updates as they play. We use the eJabberd server because it is very stable and we have some experience of using it in the BBC.

The Mooso play page

As Jabber is an IM protocol, we need some way of integrating it into the game's website. If you're playing on the website then the pop-up Play page (shown above) communicates with our eJabberd server using a Flash bridge, which allows a direct connection to be made over the standard Jabber ports. If you're behind a firewall (e.g. if you're playing at work, which we wouldn't condone, obviously) we provide a Javascript fallback which connects to the eJabberd server using HTTP binding, or long polling over the standard port 80. Around 8 out of 10 people connect using the Flash bridge. Using Jabber also means that we can let people play over instant messenger (if they have a Jabber or Google Mail account) with very little extra code.

The game is run by a number of Ruby scripts which communicate with each other internally over Jabber as well. One script manages player communication and the start and end of each round. Another script manages timing information and sends out timer messages, such as when there are 30 seconds left in the round. A third script listens for new tracks played from the 6 Music hard disk playout system; this is what triggers the start of a new round, although not all tracks that 6 Music plays are played out like this, so live sessions, for instance, do not trigger a new round. And another script handles scoring, which is the first point at which there is any communication between the game and the database - deliberately done in order to de-couple the real-time gameplay from the website serving. Once scoring is done, the database and site are updated to reflect the players' new scores and the round that was just played. This architecture means the game mechanics are modular, and should Mooso become very popular we can move processor-intensive functions like scoring onto dedicated hardware.
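The script-to-script messaging can be pictured as a tiny publish/subscribe bus. This stand-in (plain Python, not the actual Ruby/Jabber code) shows how the playout-listening, timing and scoring pieces stay decoupled: each only knows about message kinds, not about each other:

```python
class Bus:
    # Stand-in for the Jabber channel the Mooso scripts use to talk
    # to each other: each script subscribes to the message kinds it
    # cares about, so scoring, timing and playout detection remain
    # separate, individually replaceable components.
    def __init__(self):
        self.handlers = {}

    def on(self, kind, fn):
        self.handlers.setdefault(kind, []).append(fn)

    def send(self, kind, **payload):
        for fn in self.handlers.get(kind, []):
            fn(**payload)

bus = Bus()
rounds = []
# The "round manager" reacts to tracks the "playout listener" announces.
bus.on("track_started", lambda title: rounds.append(title))
bus.send("track_started", title="New Song")
assert rounds == ["New Song"]
```

Swapping the in-process dictionary for a Jabber connection changes the transport, not the shape of the design.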

Playing the game

Fairly obviously, the game requires several people to be playing simultaneously for it to be any fun, which could be a problem. We try to remedy this by generating "ghost" sessions when there are few players online. We can only do this when a round involves a song that has been played in a previous round. At the end of the round, before scoring, we select several previous sessions for that song when available (where a session is the tags from one player during a round) and add them to the pool of real players' sessions to be matched and scored. There's a caveat that these ghost sessions can't be from any of the real players in that round, so you can't end up playing yourself! A player also doesn't get any post-hoc points if they happen to be one of the ghost sessions, as that becomes too complex, so only the current "real" players in a round score any points.
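A sketch of the ghost-session selection, with hypothetical data structures (the real game stores sessions in its database):

```python
import random

def ghost_sessions(history, song, live_players, wanted=3, seed=None):
    # Pull a few stored sessions for this song to pad out a quiet
    # round, never reusing a session from someone playing right now
    # (so you can't end up matching against yourself).
    pool = [
        s for s in history
        if s["song"] == song and s["player"] not in live_players
    ]
    rng = random.Random(seed)
    return rng.sample(pool, min(wanted, len(pool)))

history = [
    {"player": "anna", "song": "Song A", "tags": {"indie", "guitar"}},
    {"player": "bob", "song": "Song A", "tags": {"indie"}},
    {"player": "anna", "song": "Song B", "tags": {"electro"}},
]
ghosts = ghost_sessions(history, "Song A", live_players={"bob"})
assert all(g["player"] != "bob" for g in ghosts)
```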

The rules we currently have in place mean that you get a single point if you enter any tags at all in a round, 5 points for every tag you match with other players and 10 points for every artist/band you match. We determine if a tag is an artist by matching it against our MusicBrainz-powered database of artists. Only by people playing the game will we discover if these rules work and we imagine that they may need to evolve over time to fine-tune the game for maximum enjoyment and usefulness. The tag matching is done with a varying threshold. Below a certain number of players you only have to match one other person but as the number of simultaneous players increases that threshold increases so you might have to match 2 or 3 or more players if there are many people playing.
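Those rules are straightforward to express in code. A sketch follows; the threshold breakpoints are illustrative guesses, not Mooso's actual values:

```python
def match_threshold(n_players):
    # With few players you only need one other person to agree; as
    # the room fills up, the bar rises so matches are harder to game.
    # These breakpoints are illustrative, not the real tuning.
    if n_players <= 5:
        return 1
    return 2 if n_players <= 15 else 3

def score_round(sessions, artists):
    # sessions: {player: set_of_tags}. A tag "matches" when at least
    # `threshold` OTHER players also entered it; matched tags that
    # are known artist names pay more.
    threshold = match_threshold(len(sessions))
    scores = {}
    for player, tags in sessions.items():
        pts = 1 if tags else 0  # a single point just for taking part
        for tag in tags:
            others = sum(
                1 for p, t in sessions.items() if p != player and tag in t
            )
            if others >= threshold:
                pts += 10 if tag in artists else 5
        scores[player] = pts
    return scores

scores = score_round(
    {"anna": {"indie", "blur"}, "bob": {"blur"}, "cat": set()},
    artists={"blur"},
)
assert scores == {"anna": 11, "bob": 11, "cat": 0}
```

In the example, "blur" is matched by two players and recognised as an artist, so each earns 10 plus the participation point; "indie" matches no one and scores nothing.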

This brings up some interesting questions about what kind of tags we will get from the game. By requiring players to match tags we are, hopefully, validating that those tags are in some way relevant or descriptive of the song. We also want to avoid people spamming or abusing the game, hence the varying threshold. But by setting this threshold, are we penalising players who have specialist knowledge and enter obscure but valuable tags? Are we just going to get the lowest common denominator of tags? We could start banning tags which have been entered too often (e.g. rock and indie) and we could also consider awarding points based on the rarity of each tag - i.e. you get more points for tags that are rarer and potentially more interesting.
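Rarity-based scoring could borrow the inverse-document-frequency idea from text retrieval - a hypothetical sketch, not something the game currently does:

```python
import math

def rarity_points(tag, tag_counts, total_tags, base=5):
    # Inverse-document-frequency style weighting: common tags
    # ("rock", "indie") earn close to the base score, while rare
    # but matched tags earn proportionally more.
    freq = tag_counts.get(tag, 1)
    return round(base * (1 + math.log(total_tags / freq)))

counts = {"rock": 5000, "shoegaze": 12}
# A matched niche tag would be worth several times a matched generic one.
assert rarity_points("shoegaze", counts, 10000) > rarity_points("rock", counts, 10000)
```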

It's also worth thinking about whether this kind of tagging is different to tagging on other music sites. I guess the motivations for people tagging music elsewhere are partly for their own use and partly for the community. Mooso is potentially different: it has a particular audience (the 6 Music listeners), the players aren't entering tags directly for their own benefit, and they don't even necessarily like the song that is being played. But they are, hopefully, having fun. Will these motivations make a difference to the qualities of the tags? We don't know, but we hope to investigate.

The tagging data that we do capture from the game is used to create links inside the Mooso site: matching tags and artists are used to create links between pages on the site, creating an interconnecting web of music, songs, tags and bands. Internally we are actually storing all the entered tags, whether they match or not, and we hope to analyse these in more depth at some point. We are also planning to release the dataset of matching tags and artists, which will allow other people and organisations to benefit from this data.

Matthew Postgate in Conversation with Quentin Cooper, Part 1 of 3


Matthew Postgate | 12:00 UK time, Wednesday, 9 December 2009




[Editor: A few weeks ago we recorded a series of short films around R&D with Quentin Cooper, host of "Material World" and "Connect" on Radio 4, and science and engineering "journalist at large". We shot quite a few mini documentaries with Quentin, looking at various recent, and not so recent, elements of the work at R&D, interviewing key staff and playing with various peculiar bits of kit.

We'll be slotting these films into the blog posts here over the next couple of months, and to start the ball rolling we have the first section of an in-depth three-part interview with the controller of R&D, Matthew Postgate. In this first section they discuss the general role of R&D in modern broadcasting, what the department does for the BBC, and what we've done over the years.]


R&D on Radio 5 Live


Ant Miller | 17:00 UK time, Tuesday, 8 December 2009

Last week Radio 5 Live's Pods and Blogs team called in at R&D to talk to some of our researchers. Jerry Kramskoy, who's leading our exploration of mobile devices, and Richard Salmon, our display technology expert, both spoke with Jamillah Knowles. The report went out in the wee small hours of last night on 5 Live, and is on their blog and available as a podcast in case you missed it. There are other reports, from the Webbies to a discussion of online privacy, but right there in the middle are our engineers!


R&D Round up- Friday 4th December


Ant Miller | 16:00 UK time, Friday, 4 December 2009

There have been a few events and developments in the last week that warrant a note on this blog, but we haven't had time to give them their own posts. So, by way of a catch-up, here are the highlights of this week and a bit:

We're Hiring!
R&D is currently in the process of recruiting graduates or equivalently experienced engineers for our graduate training programme. Details are available on our careers page, and applications will be accepted up to the 18th of January. It is a cracking opportunity to forge a career in the North West or London, right at the heart of media technology. Competition is strong, and you may be surprised at the backgrounds of previous successful candidates: we need skills in a wide range of disciplines, and a blend of rigorous analytical capability along with innovative creativity. Engineering is not the only fruit!

Archive Workshop
On Monday the 30th of November, Kingswood Warren hosted an invite-only Archive Workshop, with presentations from academia and industry. To name but a few of the contributors: Paul McConkey of Cambridge Imaging (they worked on the ITN/Pathe preservation project), Andrew Zisserman of Oxford University, Simon Factor, and Daniel Teruggi. We hope to get a more detailed post from the organisers shortly.


BeeBCamp3
R&D staff were active in last week's BeeBCamp3, the third (you guessed, didn't you) of a series of BarCamp-format meetings held at the BBC to bring people together to explore how we can work together better in the future. This one had quite a few 'externals' (the rather scary term we give to non-BBC people at BBC internal events) and was also a dual-site event, with sessions in London and Manchester. There's a run-down of the day available.


ArcHak video and prototype available

We've edited down a mini documentary on the archive hack day of a few weeks back, along with the raw footage of all the final hacks, and it's here for your viewing pleasure. The main doc goes to about the eight-minute mark; after that it's the full footage of the final presentations. Be warned: we had a nightmare getting projectors and display screens working at the end of the day (they'd behaved impeccably all morning for the lightning talks) so the visuals are patchy, and the lighting and sound are very much rough and ready for the last bit.

In order to see this content you need to have both Javascript enabled and Flash installed. Visit BBC Webwise for full instructions. If you're reading via RSS, you'll need to visit the blog to access this content.


And Brendan Quinn, our colleague here in R&D, has posted about his ArcHak entry. The details are in the video and on the blog, but in a nutshell he and Simon Delafond daisy-chained a load of data sets across the BBC internal catalogues, Yahoo, Wikipedia and DBpedia, and piped it all into the Memoryshare interface to produce a dark twist on navigating events by time!

Radio Labs Launch "Mooso"
Radio Labs is our sister blog, run by our colleagues in Audio and Music and also the Prototyping Team (OK, so it's complicated...), and they've just announced the launch of Mooso: an interactive game where your ability to tag musical styles in a popular way earns you points.


As Chris Bowley says: We've just launched the latest Radio Labs prototype: Mooso. It's a game you play while listening to 6 Music, in which you enter tags and suggest similar artists to describe the current track. If what you enter matches what other players enter, you get points. We give more points for matching similar artists than tags, and you also get a point just for playing.
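The scoring rule Chris describes could be sketched like this (a hedged illustration: the point values are invented for the example, as the post doesn't give the actual numbers):

```python
def score_round(my_tags, my_artists, others_tags, others_artists,
                tag_points=2, artist_points=5, play_point=1):
    """Score one round of a Mooso-style game: points for tags and
    similar artists that other players also entered, more points for
    artist matches than tag matches, plus one point just for playing.
    The point weights here are illustrative, not Mooso's real values."""
    matched_tags = set(my_tags) & set(others_tags)
    matched_artists = set(my_artists) & set(others_artists)
    return (play_point
            + tag_points * len(matched_tags)
            + artist_points * len(matched_artists))

# One matched tag and one matched artist, plus the point for playing
round_score = score_round(["jangly"], ["Pulp"], ["jangly", "loud"], ["Pulp"])
```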

Should be popular with the Children of the Zone! [Edit- The system runs weekdays from 7am to 7pm, so sadly we weren't able to try it out over the weekend, but will do when we get a chance in the week!]

Maggie Philbin Came to see us!

Yes, THAT Maggie Philbin! She came down this week with the Click team to look at our soon-to-be-vacated base. We'll post a link to the Click piece when it comes up, but in the meantime she's written some very nice things about us. Thank you Maggie, it was a delight to meet you at last.

Recent HD trial with P2P-Next project


George Wright | 09:10 UK time, Friday, 4 December 2009

Hi all. This is my first post on the new BBC Research and Development blog. My team, Prototyping, builds new prototypes and trials new products and services across all digital platforms. Since our move into R&D from our previous department, we've been busy with a number of interesting experiments, one of which I'd like to share here.

As part of BBC R&D's work on the P2P-Next project, we shared the most recent video released under a CC licence. We made this available in standard definition and HD, to test the most recent trial from the P2P-Next project. There's an overview of this available, but if you just want to cut to the chase: download the plugin, restart Firefox, and then go to the trial page and choose either 480p (standard definition) or, if you have a fast Internet connection and a very fast PC, the 1080p flavour.

This is clearly an early release, and is not an indication that the BBC (or any other partners) are jettisoning any existing, very successful consumer-facing propositions in favour of P2P-Next; it is part of BBC R&D's commitment to early experiments with new forms of distribution. This trial is Windows-only and Firefox-specific at the moment - sorry. Focusing on one platform means we can test some of the specifics of the peer-to-peer side without worrying about platform differences. Clearly, as part of our commitment to this project, to cross-platform development and to free/open source code, other platforms for the P2P-Next project will be supported - though as part of this limited trial, that's unlikely. This is a small and focused trial aimed at gathering statistics and finding performance issues in the first release of the HD P2P streaming experiment which our partners on the project have delivered.

There's more information about the project and the trial available.

We're keen to hear your comments - if they're related to your experiences with the trial, please post them below whilst we work out why the (non-BBC) email address for feedback is bouncing (thanks, Tom Fanning!).

Distribution Core Technologies Section and DVB-T2


Andrew Murphy | 18:00 UK time, Thursday, 3 December 2009

Hello. I'm Andrew Murphy, a Senior Research Engineer in the "Distribution Core Technologies" section here at BBC R&D. Our section carries out research into the technologies that underpin the distribution of the BBC's TV, radio and interactive services, both now and in the future.

Over this and subsequent posts I'll aim to tell you more about the work of the various teams in the section in areas such as high definition radio cameras, video coding, white spaces and the future of radio. I'm going to concentrate on DVB-T2 now, however, as yesterday marked the technical launch of Freeview HD which uses DVB-T2 as its physical layer.

You may have seen from Ant's post that our section's work on DVB-T2 was recognised by an RTS Innovation Award. As part of the T2 team, I was lucky enough to be at the awards ceremony and thought I'd give some background into the work that was recognised by the award.




DVB-T2 is a new transmission standard (so you will need a new T2-compatible receiver to decode it) that typically gives 50% more capacity than DVB-T, the current digital terrestrial television standard used in the UK. It achieves this through a combination of more advanced modulation techniques, improved error correction coding and time interleaving, as well as more efficient signalling and allocation of pilots.
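As a rough illustration of where some of that capacity comes from, here is a back-of-envelope comparison using UK-style parameters. This first-order sketch counts only constellation size, code rate and guard interval, so it deliberately under-counts the full gain; the remainder comes from the improved error correction, pilot allocation and signalling mentioned above:

```python
def raw_rate_factor(bits_per_symbol, code_rate, guard_interval):
    """Useful bits per channel symbol, discounted by the FEC code rate
    and the fraction of time spent in the guard interval."""
    return bits_per_symbol * code_rate * (1 / (1 + guard_interval))

# Illustrative parameter choices (not a full mode description):
dvb_t = raw_rate_factor(6, 2/3, 1/32)    # 64-QAM, rate 2/3, GI 1/32
dvb_t2 = raw_rate_factor(8, 2/3, 1/128)  # 256-QAM, rate 2/3, GI 1/128

gain = dvb_t2 / dvb_t - 1  # ~0.36 from these three factors alone
```

The jump from 64-QAM to 256-QAM is only usable because the stronger error correction in T2 holds the required carrier-to-noise ratio down, which is why the combined real-world gain ends up around the 50% figure quoted above.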

The timescale for the development of DVB-T2 has been incredibly tight. In just three years it has gone from an initial study mission and call for technologies through to a published specification, the manufacture of silicon and the imminent launch of consumer set-top boxes. This process has been backed up by countless simulations, verification work, testing and detailed field trials.

That short summary of course doesn't do justice to the huge amount of effort put in by us and numerous other companies from all around the world who have worked to make DVB-T2 a reality.

Our section at R&D is split into two teams overseen by section lead, Nick Wells, who is also chair of the T2 group within the DVB Technical Module.

The hardware development team is led by Justin Mitchell. Justin posted last year about the development of the world's first end-to-end DVB-T2 modulator/demodulator chain. Justin and his team are focussing on adding features to our modulator and demodulator both of which have been licensed to manufacturers as a way of encouraging the availability of T2 equipment and getting value back from work we would have needed to carry out anyway.

The T2 specification team (of which I'm a member) is led by Chris Nokes. This covers our contributions to the DVB-T2 working group, inputs to the T2-related specifications, T2 field trials and work within the Digital TV Group (DTG). Our team is also working closely with manufacturers to provide feedback on the performance of their DVB-T2 receivers, which we test in our labs.

Personally, I've been helping to develop realistic test streams for T2 Multiple Physical Layer Pipes (PLPs). Multiple PLPs are an advanced feature of DVB-T2 that enable service-specific robustness. So, for example, a single T2 transmission could contain a mixture of high definition services aimed at household TVs fed by roof-top aerials as well as some low-bit rate, more rugged, services aimed at portable receivers.

There are no plans to use multiple PLPs in the UK at present (the DVB-T2 channel capacity will all go towards high definition TV services) but we had to be sure that all the set-top boxes supported multiple PLPs at launch in case the UK had a requirement for them in the future. It's only by making test streams such as this available that it's possible for manufacturers to validate that their receivers would work with future multiple PLP services.
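The multiple-PLP idea can be sketched as a simple data structure: one transmission, several pipes, each with its own robustness. The field names and parameter values below are illustrative, not the real DVB-T2 signalling fields:

```python
from dataclasses import dataclass

@dataclass
class PLP:
    """One Physical Layer Pipe in a T2 transmission. Each pipe can
    carry its own services with its own modulation and code rate,
    which is what enables service-specific robustness."""
    plp_id: int
    modulation: str
    code_rate: str
    services: list

# A single hypothetical T2 multiplex mixing high-capacity and rugged pipes
t2_mux = [
    PLP(0, "256-QAM", "2/3", ["HD channel 1", "HD channel 2"]),  # roof-top aerials
    PLP(1, "QPSK",    "1/2", ["Mobile radio service"]),          # portable receivers
]

# The rugged, low-bit-rate pipe aimed at portable receivers
robust = [p for p in t2_mux if p.modulation == "QPSK"]
```

A receiver only has to demodulate the pipe carrying the service it wants, which is why rugged and high-capacity services can share one transmission without compromising each other.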

Alongside this I've also been chairing a sub-group within DVB-T2 to verify the T2 Modulator Interface standard (T2-MI). Whereas DVB-T transmissions are completely defined by the MPEG transport stream that they carry, the flexibility of T2 and the inclusion of multiple PLPs in the standard mean that a transport stream is not necessarily an unambiguous description of the on-air T2 signal. Why does this matter? Well, for synchronised Single Frequency Network (SFN) operation, every transmitter must output the same signal at a precisely defined time instant. The T2-MI allows this to happen by telling the modulator exactly what to transmit, and when. The interoperability of T2-MI was successfully demonstrated at a plug-fest at the end of October, where the first T2 SFN "on a wire" was created using equipment from different manufacturers fed by a single source of T2-MI.
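The SFN timing idea behind this can be sketched in a few lines: the feed carries an absolute emission time, and each modulator delays its locally received stream so that the same frame goes on air everywhere at that instant. The names and units below are illustrative, not the actual T2-MI packet format:

```python
def emission_delays(target_us, feed_arrival_us):
    """For each transmitter, compute how long to buffer its feed so
    that every site radiates the same frame at the same absolute
    instant (times in microseconds; illustrative, not real T2-MI)."""
    return {tx: target_us - arrival
            for tx, arrival in feed_arrival_us.items()}

# Two hypothetical transmitters receive the same feed at different times
# but still radiate at the same on-air instant.
delays = emission_delays(1_000_000, {"Tx A": 999_200, "Tx B": 998_700})
```

However different the distribution delays to each site, arrival time plus computed delay is identical at every transmitter, which is exactly the property an SFN needs.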

The RTS Innovation Award recognised the hard work of everyone in my section working on DVB-T2, but there are many more people in other sections at R&D, around the BBC and in industry working hard to make sure that the launch of Freeview HD goes as planned and that the new transmissions integrate correctly with the existing DVB-T transmissions.

Watch this space...

Loss is not where you find it


Richard Wright | 18:00 UK time, Wednesday, 2 December 2009

My laptop up and died two weeks ago: it came to a dead halt, every time, in the boot procedure. ROM diagnostics said "HDD read error". As I'm supposed to be a BBC R&D and archive expert in digital preservation, this situation is embarrassing. And as I've been using computers since 1965 (really), I should know what I'm doing, unless my approaching geriatric status has caused me to lose the plot.


Blue Screen of Death - the result of data loss? CC image from Flickr user Justin Marty

There is a plot: system complexity conspiring to make data inaccessible. It was no coincidence, I am sure, that my first complete disc failure in 17 years came within two months of the conversion of my laptop's hard drive to full encryption. Lost laptops and compromised personal details are a national problem. The contents of my own laptop would bore anyone else silly, but I'm sure there are all sorts of laptops carrying private and confidential details that deserve full protection.


I just hope that the new encryption systems that now sit on millions of UK hard drives do indeed give protection, because compromise of data is only one risk. Few of us have data whose loss would compromise national security or even embarrass our employers, but all of us have data that we would hate to lose - as I found out at Denver airport when I was about to start two weeks' work in the USA, and had nothing to work with.


I also hope the UK's major (or not so major) IT departments are collecting statistics on computer failures where encryption is implicated. I know in my case they did not, as my efforts to find a way to diagnose the problem while 8,000 miles from base led to various changes, and so my dead laptop is logged as 'system rebuild required', not as 'death by encryption'. It is only through statistics that we can understand the incidence of failures and their types, and thereby understand the real risks posed by digital technology. Without knowledge of the risks, we can only speculate about where to place our collective efforts - and budgets - that fall in the general area of 'digital preservation'.


Of course, my dead laptop is but one data point, as am I myself, come to that. But I have more: several times recently I've come across major examples of 'loss of data' - and as with my hard drive, it wasn't the data itself that was lost; it was the complexity above the data that got its knickers twisted and so ceased to function.
  1. An industry body has a panel that is looking at asset management systems. A senior engineer from one of the first such systems to specialise in broadcasting came to Geneva in July, to talk about how a broadcaster 'going tapeless' should go about moving into digital asset management. He mentioned entire collections of online content disappearing owing to corruption of the database - because an asset management system is 'just' a lot of files, plus a database with information about those files. He'd personally experienced that situation twice, and in each case there were backups to revert to, to rebuild the collection and get back into business - after up to two weeks of travail.
  2. Our BBC collaborative project had a workshop at the beginning of October, where we asked about examples of loss (because I'm trying to collect evidence in order to establish risk - that's what I do). Again, a fully competent IT company working with a major Spanish broadcaster described a database corruption of an asset management system; in that case 80% of the material was recovered from backups (taking several days) - and 20% had to be re-ingested, which took several weeks (part time).
  3. All of which should remind us of the BBC's online picture store Elvis, which crashed some years ago; again it was essentially database corruption, compounded by backups that largely failed to work. There was one very effective backup - Lisa, daughter of Elvis - but that held the video-quality scans and not the full-quality scans (as needed by Radio Times and all the other BBC print publications that Elvis also supports). Something like 100k high-resolution images were lost.


There is a common thread so far - the bits still exist, unaltered, on storage media - but the complexity sitting between the user and the bits has 'ceased to be' in some fashion, and so the whole thing is a dead parrot (and called a storage failure, though it is anything but).


This thread leads to an even greater problem: systems that haven't crashed, but still won't find things, because they are in some way inadequate. We're all now aware of metadata and its purposes, but - just as with data itself - there has to be effective technology using the metadata, or again the result is a 'digital dead parrot'.


You may not know that I'm a prize-winning poet. I was somewhat surprised to learn this myself, but indeed my entry into a competition in Ariel, the BBC's in-house paper, won second prize, and they were so pleased that they asked me to make a podcast. As with many print publications, this one also has an online version, where it sticks extras, like my podcast. The problem is, there is no search engine on the online version, or indeed ANY other search technology. There's a list of recent or popular pages, but once content falls off that list, it falls away completely, and becomes as inaccessible as the data on my dead hard drive. As with the hard drive, the data is still there, but inaccessible. The PDFs of the print version are indexed by a BBC search engine, but the online pages are not. The result is an inaccessible poem: even the person who posted my podcast can no longer find it!


An internal BBC publication is a tiny issue compared to bbc.co.uk itself, the BBC's world-class media website. BBC policy is to hold the text from bbc.co.uk in a sort-of archive, but reasons of space/budget/complexity mean that the audio and video content on bbc.co.uk is not archived. The justification is: all that audio and video goes out on radio and TV, and so gets archived separately. Has the validity of that statement been checked? How much audiovisual content is NOT also broadcast? I wish I knew! The business case to build a real archive (something with comprehensive capture, and access) was chopped and chopped until it was reduced entirely to a 90-day legal-requirements system, with just a couple of access points. Meanwhile, anybody who does want to see BBC content that has been taken down from bbc.co.uk has to go to the Internet Archive, where they do monthly (or thereabouts) scans of the entire internet, and make it available to all through their Wayback Machine.


So there are a half-dozen examples, ranging from my laptop to bbc.co.uk, where data can no longer be found because, essentially, of failure or inadequacy of the system sitting between the user and the data. The robust solution to failure is to simplify that technology layer - and unfortunately IT systems are moving in the opposite direction. I fully expect an epidemic of data loss as a direct consequence of the mass installation of encryption on company hard drives. I hope I'm wrong.
