We investigate how applying artificial intelligence techniques, including machine learning and computer vision, to video processing can create new and innovative production tools.
Project from 2017 - present

What we are doing
We investigate and develop tools to process, analyse and understand video, normally in real time. We aim to take the latest academic research and industrial techniques and translate them across to solve problems in the world of broadcasting.
In the past, we've used various computer vision techniques to investigate camera and object tracking, scene geometry and image analysis. These tools were used in Piero, our sports graphics system, which won a Queen's Award for Enterprise. They have also featured as part of our Biomechanics project and other sports analysis tools we've developed. You'll see some of these tools at work whenever you watch the analysis on Match of the Day, and we continue to improve and support them with additional features and developments.
Over recent years, rapid improvements in the field of artificial intelligence, and machine learning in particular, have revolutionised computer vision, and much of our current work takes advantage of these developments. We have been experimenting with ML-based approaches, developing tools to recognise animals in images and to classify the type of activity taking place in a video. We have also been collaborating with our CloudFit Production team to see if we can use our tools to process and analyse media that they record and manage.
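One common pattern behind video activity classification, sketched here purely for illustration rather than as the team's actual implementation, is to run an image classifier on each frame and aggregate the per-frame labels across the clip, for instance by majority vote. In this minimal sketch, `classify_frame` is a hypothetical stand-in for a trained model:

```python
from collections import Counter

def classify_frame(frame):
    """Hypothetical per-frame classifier.

    In a real system this would be a trained model (e.g. a CNN) returning
    an activity label for one video frame; here we use a toy rule based on
    mean pixel intensity so the sketch is self-contained.
    """
    mean_intensity = sum(frame) / len(frame)
    return "running" if mean_intensity > 128 else "walking"

def classify_video(frames):
    """Label a whole clip by majority vote over per-frame predictions."""
    labels = [classify_frame(f) for f in frames]
    label, _count = Counter(labels).most_common(1)[0]
    return label
```

Majority voting is a deliberately simple aggregation choice; real systems often use temporal models instead, but the vote makes clip-level output robust to occasional per-frame misclassifications.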
We work closely with partners both inside and outside the BBC to develop our tools. These broadcast companies and production teams help us to crystallise technical innovations into practical tools that will be genuinely useful for them.
Why it matters
Many production teams are increasingly stretched by time and budget constraints. There is pressure to produce programmes that provide value for money for broadcasters, while at the same time there is demand to create ever more content for new digital platforms and more innovative content to meet audience expectations. Yet much of a production team's effort can be spent on relatively low-level, time-consuming tasks such as logging rushes and transcribing interviews, rather than on the more creative tasks needed to tell great stories.
Over the last few years, developments in computer vision, and now machine learning, have made it much quicker and easier to apply these techniques to media. BBC R&D investigates how we might take advantage of this to aid the production process. We look to help with current production processes, developing tools to speed them up and free staff for more high-level work, but we also seek to enhance existing workflows with new tools that enable new creative options.
There are opportunities to help production teams work faster and better, and to offer the audience more without requiring extra effort.

- BBC R&D - Using AI to Monitor Wildlife Cameras at Springwatch
- BBC Winterwatch - Where birdwatching and artificial intelligence collide
- TVB Europe - How BBC R&D used artificial intelligence to 'watch' Springwatch's cameras
- BBC World Service - Digital Planet: Springwatch machine learning systems
- BBC R&D - Piero
- BBC R&D - The Queen's Award winning Piero sports graphics system
- BBC Internet Blog - BBC R&D wins Queen's Award for Enterprise for Piero
- BBC R&D - Real-time camera tracking using sports pitch markings
- BBC R&D - Olympic Diving 'Splashometer'
- BBC R&D - Biomechanics
- BBC R&D - Use of 3-D techniques for virtual production
- BBC R&D - Augmented Reality Athletics
- BBC R&D - Image-based camera tracking for Athletics
- BBC R&D - Artificial Intelligence & Machine Learning
- BBC Springwatch
- BBC Autumnwatch
Project Team
Project updates
Immersive and Interactive Content section
The Immersive and Interactive Content (IIC) section is a group of around 25 researchers investigating ways of capturing and creating new kinds of audio-visual content, with a particular focus on immersion and interactivity.