3D modelling of live scenes for applications including sports analysis
Project from 2005 to 2008
What we are doing
The goal of the iview project was to develop a free-viewpoint system that allows the capture and interactive replay of live events using multiple cameras. The technology takes multi-camera images as input and is based on algorithms for 3D reconstruction, texture mapping and view synthesis at interactive refresh rates. Although the techniques could be used for many applications, the project focused on sports scenarios (football and rugby). It can be viewed as a more advanced version of the single-camera 3D reconstruction system developed for the Piero project.
The iview project was funded by the DTI Technology Programme. It started at the end of 2005 and ran for three years. It built on work we carried out in the EU-funded Origami project, which focused on 3D capture and reconstruction of indoor scenes. Subsequently we went on to look at live 3D reconstruction in the TSB-funded i3DLive project, and are currently working on an EU-funded project that extends these techniques towards capturing content that can be edited to create new animations for applications such as computer games.
How it works
The capture system uses a number of time-synchronised cameras. The minimum is about four, but more are needed for good-quality results. We considered different configurations, using both the broadcast cameras installed for normal match coverage and additional cameras.
The processing module computes a 3D model of the scene. The method used is known as ‘shape from silhouette’ and involves segmenting the players from the background and calculating the intersection of the resulting silhouettes from each camera. This requires the position, orientation and field-of-view of each camera to be known; these are derived by analysing the pitch lines in each image.
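As a rough sketch of that calibration step, the Python example below (using OpenCV, our choice for illustration rather than anything specified by the project) estimates one camera's pose and focal length from pitch-line intersections whose positions on the planar pitch are known. All point coordinates and parameter choices here are made-up illustrative values.

```python
import numpy as np
import cv2

# World coordinates (metres) of pitch-line intersections; z = 0 on the pitch.
# These and the pixel positions below are illustrative values only.
world_pts = np.array([[0.0, 0.0, 0.0], [0.0, 16.5, 0.0], [5.5, 16.5, 0.0],
                      [0.0, 40.3, 0.0], [16.5, 0.0, 0.0], [16.5, 40.3, 0.0]],
                     dtype=np.float32)
# Where those intersections were detected in the image (pixels).
image_pts = np.array([[412, 603], [388, 455], [501, 448],
                      [365, 300], [760, 590], [720, 295]],
                     dtype=np.float32)

image_size = (1920, 1080)
# Start from a rough focal-length guess with the principal point at the image
# centre, then refine focal length and pose while fixing the quantities that
# a single planar view cannot constrain well (distortion, principal point).
K0 = np.array([[1500.0, 0.0, 960.0],
               [0.0, 1500.0, 540.0],
               [0.0, 0.0, 1.0]])
flags = (cv2.CALIB_USE_INTRINSIC_GUESS | cv2.CALIB_FIX_PRINCIPAL_POINT |
         cv2.CALIB_ZERO_TANGENT_DIST | cv2.CALIB_FIX_K1 |
         cv2.CALIB_FIX_K2 | cv2.CALIB_FIX_K3)
err, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    [world_pts], [image_pts], image_size, K0, None, flags=flags)

R, _ = cv2.Rodrigues(rvecs[0])           # camera orientation
position = (-R.T @ tvecs[0]).ravel()     # camera centre in world coordinates
fov_x = 2 * np.degrees(np.arctan(image_size[0] / (2 * K[0, 0])))  # horizontal FOV
```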
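The silhouette intersection itself can be sketched as voxel carving: a voxel of the scene volume is kept only if it projects inside every camera's silhouette. This is a minimal illustration under assumed inputs, not the project's implementation; the function and parameter names are ours.

```python
import numpy as np

def carve_visual_hull(silhouettes, projections, grid_min, grid_max, resolution):
    """Keep the voxels whose projection falls inside every silhouette.

    silhouettes -- list of HxW boolean masks (player pixels = True)
    projections -- list of 3x4 camera projection matrices, e.g. built as
                   K [R | t] from the calibration sketched above
    grid_min, grid_max -- opposite corners of the volume of interest (metres)
    resolution  -- number of voxels along each axis
    """
    axes = [np.linspace(lo, hi, resolution)
            for lo, hi in zip(grid_min, grid_max)]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    # Homogeneous world coordinates of every voxel centre, shape (4, N).
    pts = np.stack([X.ravel(), Y.ravel(), Z.ravel(), np.ones(X.size)])
    occupied = np.ones(X.size, dtype=bool)
    for mask, P in zip(silhouettes, projections):
        uvw = P @ pts                       # project voxels into this camera
        in_front = uvw[2] > 0               # ignore points behind the camera
        u = np.zeros(X.size, dtype=int)
        v = np.zeros(X.size, dtype=int)
        u[in_front] = (uvw[0, in_front] / uvw[2, in_front]).round().astype(int)
        v[in_front] = (uvw[1, in_front] / uvw[2, in_front]).round().astype(int)
        h, w = mask.shape
        inside = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(X.size, dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]
        occupied &= hit                     # intersect with this silhouette
    return occupied.reshape(X.shape)
```

In practice the carved volume would typically be converted to a surface mesh (e.g. by marching cubes) before texturing and rendering.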
The replay module renders the captured scene in real time from the computed 3D model and the original camera images, using view-dependent texture mapping to wrap the appropriate parts of the camera images around the 3D model and blend them together to give the best appearance.
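One common view-dependent weighting scheme can be sketched as follows: cameras whose direction to a surface point is closest to the virtual camera's direction contribute most to its texture. The weighting actually used in the replay module may differ; the exponent and names below are illustrative.

```python
import numpy as np

def blend_weights(point, virtual_cam_pos, camera_positions, visible):
    """Per-camera texture weights for one surface point on the 3D model.

    point            -- (3,) surface point
    virtual_cam_pos  -- (3,) position of the requested virtual viewpoint
    camera_positions -- (K, 3) positions of the real cameras
    visible          -- (K,) bools from an occlusion test: does camera k
                        actually see this point?
    """
    def unit(v):
        return v / np.linalg.norm(v)

    point = np.asarray(point, dtype=float)
    view_dir = unit(np.asarray(virtual_cam_pos) - point)
    weights = np.zeros(len(camera_positions))
    for k, cam in enumerate(camera_positions):
        if not visible[k]:
            continue                      # occluded cameras contribute nothing
        cos_angle = np.dot(unit(np.asarray(cam) - point), view_dir)
        # Favour cameras aligned with the virtual view; ignore back-facing
        # ones. The exponent sharpens the falloff (an illustrative choice).
        weights[k] = max(cos_angle, 0.0) ** 4
    total = weights.sum()
    return weights / total if total > 0 else weights
```

The rendered colour of the point is then the weighted sum of the colours sampled from each visible camera image, which is what blends the views together smoothly as the virtual camera moves.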
The entire system can potentially operate in real time, although within the timescale of the project it was only possible to produce an off-line implementation, in which the images are captured first and the processing is run at a later stage. The replay module itself is designed to work at interactive rates.
The main publications that came out of the work were as follows:
J. Kilner, J. Starck, A. Hilton, O. Grau. Proceedings of the 6th International Conference on 3-D Digital Imaging and Modeling (3DIM’07), August 21–23, 2007, Montréal, Québec, Canada.
J.-Y. Guillemaut, A. Hilton, J. Starck, J. Kilner, O. Grau. Proceedings of the 6th International Conference on 3-D Digital Imaging and Modeling (3DIM’07), August 21–23, 2007, Montréal, Québec, Canada.
O. Grau et al. Proceedings of the 3DTV 2007 Conference, Kos, Greece, May 2007.
O. Grau, A. Hilton, J. Kilner, G. Miller, T. Sargeant, J. Starck. Proceedings of IBC 2006, 7–11 September 2006, Amsterdam, NL.
G.A. Thomas. Proceedings of the 3rd European Conference on Visual Media Production (CVMP), London, Nov. 2006.
J. Kilner, J. Starck, A. Hilton. A Comparative Study of Free Viewpoint Video Techniques for Sports Events. Proceedings of the 3rd European Conference on Visual Media Production (CVMP), London, Nov. 2006.
G. Miller, J. Starck, A. Hilton. Projective Surface Refinement for Free-Viewpoint Video. Proceedings of the 3rd European Conference on Visual Media Production (CVMP), London, Nov. 2006.
Project Partners
- Based at the University of Surrey
- Broadcast equipment manufacturer; previously called Snell & Wilcox
- Suppliers of ball tracking technology
- Immersive and Interactive Content section
The IIC section is a group of around 25 researchers investigating ways of capturing and creating new kinds of audio-visual content, with a particular focus on immersion and interactivity.