Re-rendering from a Sparse Set of Images
- Ko Nishino
- Katsushi Ikeuchi
- Zhengyou Zhang
DU-CS-05-12 | ACM Transactions on Graphics
We present a method for photorealistic rendering of real-world objects from sparsely sampled appearance variation. Using a 3D model and a small set of images of an object, we recover all the photometric information necessary for subsequent rendering from arbitrary viewpoints and under novel lighting conditions. We first extract the diffuse reflection component from the input images as a texture map, and then use the residual images to simultaneously recover the specular reflection parameters and the illumination distribution. This simultaneous estimation is achieved by formulating the specular reflection mechanism as a 2D convolution on the surface of a hemisphere, which we then deconvolve with an iterative algorithm. Rendering from novel viewpoints and under novel illumination distributions is accomplished using the three estimated components. Unlike previous approaches, our method requires fewer input images and assumes no prior knowledge of the three photometric attributes, namely the diffuse reflection parameters, the specular reflection parameters, and the lighting condition.
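The core idea of the abstract — modeling observed specular reflection as a convolution of the illumination distribution with a specular kernel, then recovering the illumination by iterative deconvolution — can be illustrated in a reduced setting. The sketch below is not the authors' algorithm: it collapses the hemisphere of incident directions to a 1D ring, assumes a hypothetical Gaussian specular kernel of known width, and recovers two synthetic point lights with Richardson–Lucy-style iterations.

```python
import numpy as np

def gaussian_kernel(n, sigma):
    """Centered, normalized Gaussian specular kernel of length n (assumed shape)."""
    x = np.arange(n) - n // 2
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def circular_convolve(signal, kernel):
    """Circular convolution via FFT; the kernel is centered at index n//2."""
    return np.real(np.fft.ifft(np.fft.fft(signal) *
                               np.fft.fft(np.fft.ifftshift(kernel))))

def iterative_deconvolve(observed, kernel, n_iter=200):
    """Richardson-Lucy iterations: recover illumination from observed = illum * kernel.

    For a symmetric kernel the adjoint blur equals the forward blur,
    so the same convolution is reused in the update step.
    """
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        reblurred = circular_convolve(estimate, kernel)
        ratio = observed / np.maximum(reblurred, 1e-12)
        estimate = estimate * circular_convolve(ratio, kernel)
    return estimate

# Synthetic example: two hypothetical point-light directions on the ring.
n = 64
illumination = np.zeros(n)
illumination[20] = 1.0
illumination[45] = 0.6
kernel = gaussian_kernel(n, sigma=3.0)

# Forward model: the "specular residual image" is the blurred illumination.
observed = circular_convolve(illumination, kernel)

# Inverse problem: deconvolve to recover the illumination distribution.
recovered = iterative_deconvolve(observed, kernel)
```

The recovered signal sharpens back toward the two point lights, while re-blurring it with the kernel reproduces the observation — the same consistency the paper exploits, on the hemisphere and jointly with the specular parameters.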