A Multi-View Approach to Motion and Stereo
This paper presents a new approach to computing dense depth and motion estimates from multiple images. Rather than computing a single depth or motion map from such a collection, we associate motion or depth estimates with multiple images in the collection. This has the advantage that the depth or motion of regions occluded in one image will still be represented in some other image. Thus, tasks such as novel view interpolation or motion-compensated prediction can be solved with greater fidelity. It also enables us to reason about occlusions by comparing estimates across multiple images. Furthermore, the natural variation in appearance between different images can be captured. To formulate motion and structure recovery, we cast the problem as a global optimization over the unknown motion or depth maps, and use robust smoothness constraints (i.e., regularization or Bayesian prior models) to constrain the space of possible solutions. We develop and evaluate several motion and depth estimation algorithms based on this framework.
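To make the optimization framework concrete, the following is a minimal sketch (not the paper's actual algorithm) of the kind of global optimization the abstract describes: a per-pixel data term pulling a depth map toward noisy observations, plus a robust (Huber) smoothness prior over neighboring pixels, minimized by gradient descent. All function names, the Huber penalty choice, and the occlusion handling via NaN markers are illustrative assumptions, not details from the paper.

```python
import numpy as np

def huber_grad(x, delta):
    """Derivative of the Huber penalty: quadratic near zero, linear in the
    tails, so large depth discontinuities are penalized less harshly."""
    return np.clip(x, -delta, delta)

def refine_depth(observed, lam=0.5, delta=0.5, steps=300, lr=0.1):
    """Gradient descent on: 0.5*sum((d - observed)^2 over valid pixels)
    + lam * sum(huber(neighbor differences)).
    `observed` is a 2D array of noisy per-pixel depth estimates;
    NaN marks pixels with no observation (e.g. occluded regions),
    which are filled in by the smoothness prior alone."""
    valid = ~np.isnan(observed)
    d = np.where(valid, observed, 0.0).astype(float)
    for _ in range(steps):
        # Data term gradient: pull toward observations where they exist.
        g = np.where(valid, d - observed, 0.0)
        # Robust smoothness gradient over horizontal/vertical neighbors.
        gx = huber_grad(np.diff(d, axis=1), delta)
        gy = huber_grad(np.diff(d, axis=0), delta)
        g[:, :-1] -= lam * gx
        g[:, 1:]  += lam * gx
        g[:-1, :] -= lam * gy
        g[1:, :]  += lam * gy
        d -= lr * g
    return d
```

In the paper's multi-view setting, an energy of this general shape would be minimized per image, with data terms derived from inter-image matching costs rather than direct depth observations; the sketch only illustrates the regularized-optimization structure.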