Abstract

The most significant problem in generating virtual views from a limited number of video camera views is handling areas that become dis-occluded when the virtual view is shifted away from the camera view. We propose using temporal information to address this problem, based on the observation that dis-occluded areas may have been seen by some camera in a previous frame. We formulate the problem as one of estimating the underlying state of the scene in a stochastic dynamical system, given a sequence of observations. We apply this formulation to improving the visual quality of virtual views generated from a single “color plus depth” camera, and show that our algorithm achieves better results than depth-image-based rendering with standard inpainting.