Previous work on image-based rendering suggests a tradeoff between the number of input images and the amount of geometry required for anti-aliased rendering. For instance, plenoptic sampling theory indicates that visually acceptable rendering can be achieved from undersampled input images if sufficient depth information is available for all pixels. In this paper, we propose a novel vision reconstruction approach, rendering-driven depth recovery, that recovers just the amount of geometry necessary for anti-aliased rendering. Our approach contrasts with conventional stereo reconstruction in that we do not attempt to accurately reconstruct the depth of every pixel, which leads to a very efficient reconstruction algorithm. Our algorithm uses a block-based multi-layer depth representation and searches the depth space using a causality criterion based on detecting double images. Experiments show that rendering systems using our rendering-driven depth recovery algorithm can efficiently synthesize satisfactory novel views using ‘just enough geometry’ recovered from undersampled input images.
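To make the idea concrete, the per-block depth search could be sketched as follows. This is a hypothetical minimal illustration, not the paper's implementation: it assumes rectified grayscale views from two nearby cameras, a purely horizontal disparity model (`disparity = baseline / depth`), and a simple absolute-difference residual as a stand-in for the double-image test; the function name, parameters, and threshold `tau` are all invented for the sketch.

```python
import numpy as np

def block_depth_search(ref, nbr, block, baseline, depth_layers, tau=10.0):
    """Sketch of rendering-driven depth recovery for one image block.

    ref, nbr     : grayscale images from two nearby cameras (H x W arrays).
    block        : (y, x, size) of a square block in the reference image.
    baseline     : camera separation times focal length, so that the
                   disparity induced by depth d is baseline / d.
    depth_layers : candidate depth layers to search.
    tau          : residual threshold below which the double image is
                   assumed invisible in the rendered view.

    Returns the first candidate depth whose warped block matches the
    neighbouring view well enough -- 'just enough geometry' for
    anti-aliased rendering, not the most accurate depth per pixel.
    """
    y, x, s = block
    patch = ref[y:y+s, x:x+s].astype(float)
    best, best_err = depth_layers[0], np.inf
    for d in depth_layers:
        disp = int(round(baseline / d))        # horizontal shift for depth d
        if x + disp < 0 or x + disp + s > nbr.shape[1]:
            continue                           # hypothesis falls outside the view
        warped = nbr[y:y+s, x+disp:x+disp+s].astype(float)
        err = np.mean(np.abs(patch - warped))  # residual misalignment ~ double image
        if err < best_err:
            best, best_err = d, err
        if err < tau:                          # good enough: stop searching early
            return d
    return best
```

The early return is the point of the sketch: the search stops as soon as the double-image residual drops below the visibility threshold, rather than refining the depth further, which is what makes the recovery cheap compared with per-pixel stereo.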