Multi-view video using unsynchronized cameras

Involved trainers: 
Name: 
Frédéric Devernay
Mail: 
Laboratory: 
INRIA
Summary: 

Multi-view video capture systems consist of several cameras (from two to dozens) capturing the same live 3D scene from different angles. Examples of such systems are the stereoscopic camera rigs used for shooting 3D movies, and multi-view capture systems such as the GRIMAGE platform at Inria. These systems usually require perfectly synchronized cameras, so that there is no time difference between the images taken from the various viewpoints. However, synchronizing cameras is expensive, difficult, and sometimes impossible, for example when using consumer cameras or mobile devices. For this reason, we propose to capture with an unsynchronized multi-view video setup [7,8], and to synchronize the cameras after capture, using sound [1] and/or images [2,3,4,5,6,9]. Once all videos have been synchronized with sub-frame precision, we can apply retiming techniques, which synthesize perfectly synchronized videos by interpolating between the original video frames.
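As a minimal sketch of the sound-based synchronization idea mentioned above (not part of the project description, and with illustrative names only): the time offset between two cameras can be estimated to frame accuracy by locating the peak of the cross-correlation between their soundtracks; sub-frame refinement would then come from interpolating around the correlation peak or from image cues. The function below assumes the two soundtracks are mono signals sampled at the same rate.

```python
import numpy as np

def estimate_offset(audio_a, audio_b, sample_rate):
    """Estimate how many seconds audio_b is delayed relative to audio_a
    by locating the peak of their cross-correlation.

    Illustrative sketch: assumes mono signals at a common sample rate.
    """
    # Length of the full (linear, not circular) cross-correlation
    n = len(audio_a) + len(audio_b) - 1
    # FFT-based cross-correlation, much faster than the direct sum
    # for long soundtracks
    fa = np.fft.rfft(audio_a, n)
    fb = np.fft.rfft(audio_b, n)
    xcorr = np.fft.irfft(np.conj(fa) * fb, n)
    # Move the wrapped-around negative lags to the front, so that
    # zero lag sits at index len(audio_a) - 1
    xcorr = np.roll(xcorr, len(audio_a) - 1)
    lag = np.argmax(xcorr) - (len(audio_a) - 1)
    return lag / sample_rate
```

A positive return value means the second soundtrack starts later than the first; the estimated offset can then be used to pick the corresponding frame pairs before retiming.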