Nicholas Humfrey has written a PHP library using the RDF data feeds provided by ´óÏó´«Ã½ Programmes. It is built on top of an existing library. Although it's not exactly a prototype, it's a useful library which could enable many more future prototypes.
In its current state, the library allows you to write this:

$episode = ´óÏó´«Ã½_Programmes_Programme::find('b00p4h42');
$broadcasts = $episode->broadcasts();

to get the broadcast times of Series 3, Episode 1 of Gavin and Stacey.
Simple but useful, just the way we like it.
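If you wanted to print those times, a minimal sketch building on the call above might look like this; note that the start() and service() accessors on the broadcast objects, and the bbc_programmes.php entry point, are assumptions for illustration rather than the library's documented API:

<?php
// find() and broadcasts() come from the example above; start(), service()
// and the require path below are hypothetical placeholders.
require_once 'bbc_programmes.php';

$episode = ´óÏó´«Ã½_Programmes_Programme::find('b00p4h42');

foreach ($episode->broadcasts() as $broadcast) {
    // One line per broadcast, e.g. "2009-11-26 21:00 ´óÏó´«Ã½ One"
    echo $broadcast->start(), ' ', $broadcast->service(), "\n";
}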
Ada Lovelace Day is an international day of blogging to celebrate the achievements of women in technology and science. Raising the profile of women already doing a stellar job in the industry will hopefully spur others to consider a career in technology.
The first Ada Lovelace Day was held on 24th March 2009 and was a huge
success. It attracted nearly 2000 signatories to the pledge and 2000
more people who signed up on Facebook. Over 1200 people added their post
URL to the Ada Lovelace Day 2009 mash-up. The day itself was covered by
´óÏó´«Ã½ News Channel, ´óÏó´«Ã½.co.uk, Radio 5 Live, The Guardian, The Telegraph,
The Metro, Computer Weekly, and VNUnet, as well as hundreds of blogs
worldwide.
The 2010 Ada Lovelace Day will again be held on 24th March and the
target is to get 3072 people to sign the pledge and blog about their
tech heroine. The new site has just gone live and you can now get in early on the action.
Data dumps are a concept we have thought about over the last few years through Backstage. We would take everything and anything which was licensed in a way that let us use it under the Backstage licence, zip it up and just dump it on a web server for you all to unzip and explore.
However, three problems cropped up: first, finding data which we could clearly put out as a dump; second, removing all reference to personal data and/or people (anonymising it); and third, putting it somewhere sensible.
For example, we had tried to get a selection of the web traffic logs out, but at 2+ GB per month (I believe it was), it would have been a small nightmare even hosting or moving them anywhere like archive.org. And that's after having to remove all the secret and private information, as one organisation learned last year when it gave away a dump of data for research. Obviously we would never risk our/your data in this way.
About this time last year, it was decided to try experimenting with raw data stacks via an XML database (eXist-db) using data which was already public. You can find them under the . The Tweetstore is a good example of what we're trying to achieve with data dumps. It archives all the tweets which the official ´óÏó´«Ã½ Twitter accounts create. By themselves they're not that interesting, but the value is in what patterns you can pull out over time. With good analysis it would be possible, for example, to find keywords which attract followers.
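To give a flavour of the sort of analysis a dump like the Tweetstore makes possible, here is a rough sketch that counts the most common words across an archive of tweets. It assumes the tweets have been exported to a local XML file with <tweet><text> elements; that file name and structure are illustrative, not the Tweetstore's real schema.

<?php
// Count word frequency across an archive of tweets. The file name and
// element names below are assumptions for the sake of the example.
$tweets = simplexml_load_file('bbc_tweets.xml');

$counts = array();
foreach ($tweets->tweet as $tweet) {
    // Split each tweet into lower-case words and tally them.
    $words = preg_split('/\W+/u', strtolower((string) $tweet->text), -1, PREG_SPLIT_NO_EMPTY);
    foreach ($words as $word) {
        $counts[$word] = isset($counts[$word]) ? $counts[$word] + 1 : 1;
    }
}

arsort($counts);
// The ten most common words: a crude starting point for spotting keywords
// that might correlate with follower growth.
print_r(array_slice($counts, 0, 10, true));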
We're interested in people's views on data dumps: are they useful, or is data not worth looking at unless it's behind a nice clean API? Also, what do people think of a hybrid model like we have done with the XML database? Is it still too abstract to use?
Graham Plumb recently wrote a piece for the Internet blog about the difficult area of content protection for HD Freeview. As you can imagine, there have been quite a few comments attached to the blog post, but there have been even more on .
From Kieran Kunhya,
I like the way Ofcom have totally missed the point about Linux/Open Source presuming it refers to STBs running Linux.
Mo McRoberts follows that up with a reality check,
The reality is, STB manufacturers don't really have the luxury of being able to:
a) ignore the licensing terms of the open source DVB stacks; b) reverse-engineer the decoding tables; c) obtain the tables from the ´óÏó´«Ã½ but breach the non-disclosure terms; or d) release a box which doesn't support FVHD
...even if they wanted to.
Ian Stirling suggests another way,
There is a third alternative. B) obtain the decoded tables from a third party in a country where this decryption is not illegal.
I am unsure of the legality of this. It would of course imply that the device would need an internet connection
Steffan Davies ponders generally,
Quite why HD is so intrinsically different from standard DVB-T that it needs to be encumbered in this way is beyond me.
Mo McRoberts replies,
Even if you were to, hypothetically, accept that it was somehow different, a lot of the content being talked about as "needing protection" is imported: premium US TV shows, films and the like. The copy-protection regime being talked about here doesn't exist in the US (the FCC specifically prohibited it), and so if they're going to circulate illicitly, chances are they'll come from the US--rendering the whole thing moot.
This bizarre view that programmes almost don't exist until they're aired in the UK (and that consumers won't be aware of them) is played out by the Ofcom consultation document, which talks specifically about content being aired for the first time _in this UK_ -- which will, in general, be at an absolute bare minimum a day or two after it was screened in the US. It's almost as though Ofcom (and the ´óÏó´«Ã½, and distributors) believe the illicit file-sharing is bound by geographical restrictions, though that's so crazy it can't possibly be true...
Frank Wales replies with,
Are you suggesting that these organizations don't fully understand the media landscape they're presiding over? Why, that's...inconceivable!
I'm sure there was a missing smiley after that last comment. Well, we certainly like to think we do a reasonably good job of learning and understanding the media landscape. Actually, things like the recent rumour that Digital Revolution (now called The Virtual Revolution) will go out on iPlayer internationally afterwards could, if true, be very interesting and show we have a better understanding of the media landscape than most imagine.
Frank also points out how this type of thing has played out previously in the music industry and, in that case, how much of a nightmare it was for their image.
I just wonder if the ´óÏó´«Ã½ realize how Freeview HD content restriction could become a PR Nightmare Construction Kit for their tabloid foes.
Once someone makes available code to defeat it, how could prosecutions ensue without risking raging headlines like:
 "´óÏó´«Ã½ prosecutes licence-fee payer for watching Doctor Who"
Moving away from ways around the problem, Christopher Woods writes this interesting paragraph about the transparency of delivery.
It's a weird mindset people have these days. I myself have downloaded a show if I missed it on the iPlayer. I wish there was an accurate means by which I could submit views of programming - think Nielsen, but for the benefit of rightsholders. The iPlayer could then just become a torrent tracker, and you download your vids from that in an open standard, and the little app which runs in your system tray notes when you open a video, how long you watch it for, etc. This could all be managed...

A typical internal thought process might go, "why should I be beholden to the 7 day / 2 month / xyz day availability period? I didn't want to watch it when it first aired, but I want to watch it now and it's still not available on DVD, so I'll download it. I will NOT be dictated to as to when I want to consume my media."
And it's that point - "MY media" - where I think this battle's already lost. I think the trend is to consider it as 'my [favourite] TV' - no more waiting for months for the boxsets, I want to watch this in HD on my little media streaming box under my TV in high def and I want to watch at my convenience... So out comes the torrent client, downloads the H.264 MKV file and down one sits in front of the box to enjoy the show. Why isn't this possible already without being forced to circumvent the usual channels of distribution?
I don't know how many people would be in favour of something which watches and reports back how much time they spent watching a piece of media, but it's an interesting idea because it actually cuts away at the root of the issue. Generally, one reason for DRM is that content producers need proof that people are buying/watching/paying attention to the content; if that were reversed, so that we gave away that data for free, would there be less need for locks on the content at all? I expect the likes of Boxee are sure to explore this in more depth. Who would have thought audioscrobbling/Last.fm, which tracks every piece of music you're listening to, would take off?
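As a thought experiment, a scrobble-style viewing report could be tiny. The sketch below POSTs a programme ID and the time watched to a stats endpoint; the URL and field names are invented for illustration, and no such ´óÏó´«Ã½ service exists.

<?php
// Hypothetical scrobble-style viewing report: tell a stats endpoint which
// programme was watched and for how long. Everything here is illustrative.
$report = array(
    'pid'             => 'b00p4h42',            // /programmes PID of the episode
    'seconds_watched' => 1740,                  // 29 of the 30 minutes
    'client'          => 'example-media-box/0.1',
);

$context = stream_context_create(array(
    'http' => array(
        'method'  => 'POST',
        'header'  => "Content-Type: application/x-www-form-urlencoded\r\n",
        'content' => http_build_query($report),
    ),
));

// Fire and forget; a real client would need error handling and, above all,
// the user's consent before sending anything.
file_get_contents('http://stats.example.org/view', false, $context);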
As usual, Backstage will be there to give you the voices you don't usually hear... You can also join the conversation, or follow us on Identi.ca and Twitter.
Believe it or not, it's been 5 years. 5 years and a few days ago, Backstage.bbc.co.uk, as it's officially called, was launched to ´óÏó´«Ã½ staff and a small pool of the early-adopting public. 5 months later, in May, it was officially launched to the public at by . At the time it was mainly access to the feeds under the Backstage licence which was the big issue. This also makes Backstage the longest-running public blog the ´óÏó´«Ã½ has to date.
So to celebrate, we have launched the new-look ´óÏó´«Ã½ Backstage blog, which fits better with the rest of the ´óÏó´«Ã½'s guidelines (which we had the pleasure of breaking for years). We have also launched a new directional page at , which includes details for people who happen to come across Backstage by chance. You will also notice a link to our learning projects, Open Lab (coming soon) and . Both are less focused on code developers, but they also have data and services which the community will enjoy using. The Data Art team also have a if you're interested in information visualisations.
We have aggregated the 2 different blogging systems from the last 5 years into this one. As you'd expect, we lost quite a bit of data when porting between systems, but fear not: it's all been archived, and we will fix links, author names, comments and meta tags over the course of time.
We still own *.welcomebackstage.com, which you may see referred to in some links such as the . These are our playground servers, which allow us to experiment beyond the comfortable home of the ´óÏó´«Ã½ servers. The services on them tend to be early public alphas and betas, and they do go down quite a bit due to the experimental nature of the prototypes/services.
Finally, Backstage is facing a new year all grown up but fresh-faced. There are a number of new things we would like to try this year, so subscribe to this blog.
After years of people pleading with the government to release its data for the people of the UK to use, comes data.gov.uk. Rory Cellan-Jones, in his post Free at Last, has a review of the impact of the site and is left wondering what wonderful mashups we will get from the data. Surprisingly, most people who have written about the site are optimistic that it will have a lot of impact on future data plans, like releasing Ordnance Survey mapping and geo data. But it's not all cheers; some people are saying there isn't enough data by a long way.
Not a sheep,
The are really excited by the news that "Web founder Tim Berners-Lee unveils a UK government website that aims to make public sector data freely available. "
I have visited the site a few times and the most common message that I see is "Your search returned no records". I get this if I look for individual data and even if I just click on ... Impressive? Not really, no.
While Andy G keeps it positive with ...
Unlocking innovation | data.gov.uk on your marks, get set, mash up!
Tim Trent wonders how many of the mashups will be of real public use...
Seriously, how often do we need a dentist urgently when we also happen to have an iphone? And how facile is the idea of a gps location finder.
"Mavis, my tooth hurts."
"Quick, Arthur, turn on your iphone."
"I don't have the app!"
"Arthur, how many times have I told you not to leave home without the UK NHS Dentist database firmly in your pocket?"
This is a conceit, not a useful application.
I loved reading this ...
I tuned into Radio 4's Today programme - normally a bastion of great radio - and was very disappointed to hear an odd piece which implied civil servants were battling to avoid releasing the data, and that the Ordnance Survey data might not get published. While I am sure that there are some who are, it's the very opposite of what I have seen.
Why does data.gov.uk matter? It matters because:
Open data encourages transparency in government. I see that as a very-good-thing.
The datasets will stimulate innovation in services - from mapping accident black spots to finding cross-service opportunities.
Data.gov.uk will be a nursery for a new generation of semantic-web software developers. If the community isn't where the next Google comes from (it might well be!), it will at least nurture a pool of developers who will bring great data processing and visualisation skills to business.
Supporting a digital Britain. The initiative provides a first step in helping the UK catch up with and overtake countries like Australia and others who are a long way down the track. Knowledge-based services are a big part of the future.
Wait a second, where have we heard this before? Oh yes, backstage.bbc.co.uk...
One of the best talks Backstage heard last year was by Matt Mason of The Pirate's Dilemma. Luckily the video has been posted to the Thinking Digital site for anyone to watch and enjoy. It's quite a long talk, but worth watching to the end. The examples he uses are specially crafted and will capture the imagination of most of the early-adopter community.
The Mayor of London, Boris Johnson, will on Thursday launch a website hosting hundreds of sets of data - including previously unreleased information - about the capital, as part of a new scheme intended to encourage people to create "mashups" of data to boost the city's transparency and accountability.
Channel 4 will also be offering up to £200,000 through its 4ip fund to help develop the most innovative uses of the data.
To announce the site, Johnson will take part in a live linkup on Thursday to the Consumer Electronics Show in Las Vegas with President Barack Obama's chief technology officer Aneesh Chopra, who has overseen the development of the US government's "data.gov" project, which aims to put all US government data onto the web for others to use.
Great news for the developers on Backstage, but we do wonder why this is only happening within the London area. Surely every city and town in the UK should be releasing this data for their citizens? Without a doubt, yes. Although, to be fair, there is a lot of effort going into giving cities such as Manchester a digital data store too.
The only fear people have now is whether the site will be open enough. Top-down projects tend to go the way of Excel spreadsheets and PDF documents instead of XML and JSON APIs.
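To show why the format matters, here is a minimal sketch of mashing up a JSON feed in PHP; the URL and the 'borough' field are made-up placeholders rather than real datastore endpoints. With a spreadsheet or PDF, each of these steps becomes a manual scrape-and-convert job.

<?php
// Minimal mashup sketch: fetch a JSON dataset and count records per borough.
// The URL and field name are placeholders, not real datastore endpoints.
$json = file_get_contents('http://data.example.gov.uk/transport/accidents.json');
$records = json_decode($json, true);

$totals = array();
foreach ($records as $record) {
    $borough = $record['borough'];   // assumed field name
    $totals[$borough] = isset($totals[$borough]) ? $totals[$borough] + 1 : 1;
}

arsort($totals);
print_r($totals);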
Some of the guys at the ´óÏó´«Ã½ Blueroom are covering what they see and find interesting. I'm not saying the coverage is going to be better than or even , but it's not bad. You can also follow the Blueroom guys at ping.fm/user/bbcblueroom.