Project Origin: Securing trust in a complex media landscape

In an age when we often wonder if we can believe what we see, how can we help our audiences ensure the content they consume is trustworthy?

Laura Ellis, Head of Technology Forecasting, ´óÏó´«Ã½
Published: 4 March 2020

Disinformation is proliferating, and advances in computing are providing opportunities to create new levels of deception. A piece of audio, video or imagery which suggests it is something it is not is dangerous enough; how much more so if it purports to be from a trusted source of news like the ´óÏó´«Ã½?

Traditionally our routes to broadcast or publication, a transmitter or a web server, have been secure. But once material goes beyond our own channels and is shared on social media, it is vulnerable to manipulation while still retaining the appearance of bona fide content.

Of course, provenance is not a problem unique to the media industry.

For decades, art dealers, whiskey producers and diamond brokers have sought technological ways to detect fakery and protect their valuable assets. What, we wondered, could we do to secure the provenance and demonstrate the authenticity (and therefore reliability) of our own most precious asset: our content?

A team drawn from the ´óÏó´«Ã½'s Technology Strategy and Architecture and Research & Development departments is now working, with a range of external technology and media partners, on a way of indelibly ‘marking’ content at the point it is published so that it can be identified wherever it ends up in that vast ecosystem we call the Internet.

Further detection techniques, which would show where ‘marked’ content has been manipulated, could then be added into the process. The idea is that these signals would be readable both by machines, so that automated actions can be taken to flag or even remove suspect content, and by humans: journalists and our audiences. We’ve called this work, which is still at an early stage, Project Origin.
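
To make the idea of a machine-readable signal concrete, here is a minimal sketch in Python. It is an illustration only: all names are hypothetical, it uses a simple HMAC over the content bytes as the ‘mark’, and a real scheme would use asymmetric signatures (so that anyone could verify a mark without being able to forge one) alongside the watermarking techniques described below.

```python
import hashlib
import hmac

# Hypothetical publisher key for this sketch only. A real system would use
# an asymmetric key pair so that verifiers cannot also forge marks.
PUBLISHER_KEY = b"demo-publisher-secret"

def mark(content: bytes) -> str:
    """Create a machine-readable mark over the content as published."""
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, claimed_mark: str) -> bool:
    """Check content against its mark; False means it no longer matches
    what the publisher originally marked."""
    return hmac.compare_digest(mark(content), claimed_mark)

original = b"...video bytes as published..."
tag = mark(original)

print(verify(original, tag))           # True: content is as published
print(verify(original + b"x", tag))    # False: content has been altered
```

A check like verify() is what would let a platform automatically flag or down-rank content whose mark no longer matches, while a human-readable rendering of the same result could be surfaced to journalists and audiences.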

The technology needed to make it work is complex and multi-faceted, drawing on techniques such as watermarking, hashing and fingerprinting. A key challenge is that any signal needs to be robust enough to survive the many non-malicious things that can happen to a piece of content such as compression, resizing and so on.
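
To see why that robustness matters, compare a byte-exact cryptographic hash with a simple perceptual hash. The sketch below is illustrative only (a basic ‘average hash’ built with the Pillow library, not a technique Project Origin has settled on): the cryptographic digest changes completely after any re-encode, while the perceptual hash changes little under resizing or recompression.

```python
import hashlib
from PIL import Image

def sha256_of(path: str) -> str:
    """Byte-exact hash: any recompression or resize yields a new digest."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def average_hash(path: str, size: int = 8) -> str:
    """Basic perceptual hash: shrink to a tiny greyscale image, then record
    which pixels are brighter than the mean. Non-malicious changes such as
    resizing or recompression barely change the resulting bit string."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

def hamming(a: str, b: str) -> int:
    """Bits that differ between two hashes; near zero suggests the same image."""
    return sum(x != y for x, y in zip(a, b))
```

Comparing a published image with a recompressed copy of itself, sha256_of() would disagree entirely, while the Hamming distance between the two average hashes would typically be at or near zero; a substantive manipulation, by contrast, tends to shift many bits at once.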

Other issues include editorial considerations such as deciding which content to mark:

  • If we only mark potentially ‘sensitive’ content, does this create problems when a ‘signal’ cannot be found in other content, leaving it less trusted because it is assumed not to be genuine?
  • What will marking content do to our workflows in terms of added effort and complexity?

The project the ´óÏó´«Ã½ is running in this area sits alongside other work we and others are doing in the disinformation space. Examples include:

  • the range of strong editorial content we are creating about the dangers of disinformation, such as the ‘Beyond Fake News’ strand
  • the work we are doing on media literacy and the partnerships we are building to collaborate with other media and technology organisations.

The ´óÏó´«Ã½ is a member of the global Partnership on AI’s media integrity steering group, which last year launched the Deepfake Detection Challenge with Facebook, AWS and others.

The Project Origin team currently aims to test its first solutions this summer, building on them as we develop and strengthen our partnerships in this area. The eventual ambition is a system that is simple to use, transparent and built on open standards that can be widely adopted for public good.