Large Scale Scene Matching for Graphics and Vision


February 10, 2009


James Hays


Carnegie Mellon University


The complexity of the visual world makes it difficult for computer vision to understand images and for computer graphics to synthesize visual content. The traditional computer vision or computer graphics pipeline mitigates this complexity with a bottom-up, divide-and-conquer strategy (e.g. segmenting then classifying, assembling part-based models, or using scanning-window detectors).
In this talk I will discuss research that is fundamentally different, enabled by the observation that while the space of images is infinite, the space of “scenes” might not be astronomically large. With access to imagery on an Internet scale, for most images there exist numerous semantically and structurally similar scenes. My thesis research is focused on exploiting and refining large scale scene matching to short-circuit the typical computer vision and graphics pipelines for tasks such as scene completion, image geolocation, object detection, and high-interest photo selection.
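At its core, scene matching amounts to computing a compact descriptor for every image in a large collection and retrieving the nearest neighbors of a query descriptor. The sketch below illustrates the idea with a deliberately crude "tiny image" descriptor (a downsampled, normalized grayscale patch) standing in for the richer scene descriptors (e.g. GIST) used in the actual work; the function names and parameters are illustrative, not from the talk.

```python
import numpy as np

def tiny_image_descriptor(img, size=16):
    """Downsample a grayscale image to size x size, zero-mean and
    unit-normalize it, and flatten. A crude stand-in for the scene
    descriptors (e.g. GIST) used in large scale scene matching."""
    h, w = img.shape
    ys = np.arange(size) * h // size   # row indices of the coarse grid
    xs = np.arange(size) * w // size   # column indices of the coarse grid
    small = img[np.ix_(ys, xs)].astype(np.float64)
    small -= small.mean()
    norm = np.linalg.norm(small)
    if norm > 0:
        small /= norm
    return small.ravel()

def nearest_scenes(query, database, k=3):
    """Return indices of the k database descriptors closest to the
    query under L2 distance (brute-force nearest-neighbor search)."""
    dists = np.linalg.norm(database - query, axis=1)
    return np.argsort(dists)[:k]
```

In practice the database holds millions of images, so the brute-force search above would be replaced by an approximate nearest-neighbor index, but the retrieval principle is the same.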


James Hays

James Hays received his B.S. in Computer Science from Georgia Institute of Technology in 2003. He is a Ph.D. student in Carnegie Mellon University’s Computer Science Department and is advised by Alexei A. Efros. His research interests are in computer vision and computer graphics, focusing on image understanding and manipulation leveraging massive amounts of data. His research has been supported by a National Science Foundation Graduate Research Fellowship. He will graduate this summer.