In free-viewpoint video (FVV), multiple video streams are used to re-render a time-varying scene from arbitrary viewpoints. Free navigation in both time and space through streams of visual data may represent the next major step in interactivity, allowing for instant virtual replays and freeze-and-rotate effects. In our work on free-viewpoint video, we focus on representations and methods for capturing and processing dynamic scenes recorded by sparsely arranged video cameras.
In this talk I will give an overview of recent free-viewpoint video activities at ETH Zurich. As a fundamental primitive for FVV we employ dynamic point samples, generalizing regular 2D video pixels to irregular 3D point samples. A special focus will be on the real-time 3D video technology successfully employed in the blue-c collaborative virtual reality system. blue-c combines the advantages of a CAVE™-like projection environment with simultaneous, real-time 3D video capture and processing of the user. As a major technical achievement, users can become part of the visualized scene while maintaining visual contact. Furthermore, an MPEG-compliant FVV coding framework is presented that is capable of streaming and displaying pre-recorded multi-view video data from arbitrary viewpoints. In its full deployment, this coding framework provides multi-resolution, multi-rate, and view-dependent decoding.
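To make the notion of generalizing a 2D video pixel to a dynamic 3D point sample concrete, the following sketch shows one plausible shape such a primitive could take: a pixel with measured depth is back-projected through the inverse camera intrinsics into a point carrying position, normal, color, splat footprint, and a frame timestamp. All names, attributes, and the placeholder normal and radius are illustrative assumptions, not details of the blue-c implementation.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class DynamicPointSample:
    # Hypothetical primitive: illustrative only, not the blue-c data structure.
    position: Tuple[float, float, float]  # 3D point replaces the 2D grid coordinate
    normal: Tuple[float, float, float]    # surface normal for view-dependent splatting
    color: Tuple[int, int, int]           # RGB, as in an ordinary video pixel
    radius: float                         # irregular samples carry their own footprint
    frame: int                            # timestamp linking the sample to a video frame

def pixel_to_point_sample(u, v, depth, color, K_inv, frame):
    """Back-project pixel (u, v) with measured depth into a 3D point
    sample, given the inverse camera intrinsics K_inv as a 3x3
    row-major nested tuple. A sketch, not a reference implementation."""
    # Homogeneous pixel coordinate (u, v, 1) mapped through K_inv, scaled by depth
    x = depth * (K_inv[0][0] * u + K_inv[0][1] * v + K_inv[0][2])
    y = depth * (K_inv[1][0] * u + K_inv[1][1] * v + K_inv[1][2])
    z = depth * (K_inv[2][0] * u + K_inv[2][1] * v + K_inv[2][2])
    return DynamicPointSample(
        position=(x, y, z),
        normal=(0.0, 0.0, -1.0),  # placeholder; real normals come from neighboring samples
        color=color,
        radius=0.01,              # placeholder splat footprint
        frame=frame,
    )
```

Repeating this back-projection per camera and per frame yields the time-varying, irregular point cloud that a splatting renderer can then draw from any viewpoint.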