I am a principal researcher in the Cloud and AI group at Microsoft, where I work on Mixed Reality and AI. My core research interests are in computer vision, but I also work on topics in robotics, computer graphics, and machine learning. I received my M.S. and Ph.D. from UNC Chapel Hill in 2005 and 2009, respectively. I was an area chair for 3DV 2016, ICCV 2017, 3DV 2018, and 3DV 2019, a program co-chair for 3DV 2017, and I serve as an area editor for the Computer Vision and Image Understanding (CVIU) journal.
To see more, please visit my homepage.
We have developed a new 6-DoF camera localization technique that conceals the content of the query image when localization is performed by a cloud-based service. This is a follow-up to our previous work on privacy-preserving camera localization, where we devised a way to conceal the 3D point-cloud map used for localization.
How can we avoid disclosing confidential information about the captured 3D scene while still allowing reliable camera pose estimation? This paper proposes the first solution to what we call privacy-preserving image-based localization, using new ideas from geometry.
We show, for the first time, that SfM point clouds and features retain enough information to reveal scene appearance and compromise privacy. We present a privacy attack that reconstructs color images of the scene.
We propose SGM-Forest, an efficient extension of the SGM stereo matching algorithm that uses a random decision forest classifier to fuse multiple disparity-map proposals, each obtained by solving an independent 1D scanline optimization problem. SGM-Forest is consistently more accurate than SGM, and its performance generalizes well to new datasets.
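The fusion idea above can be sketched in a few lines: per pixel, a classifier looks at the set of disparity proposals and picks the one to trust. The sketch below is illustrative only (it is not the paper's code); the synthetic data, the median-centered features, and the scikit-learn forest are all stand-ins for the matching-cost features and training setup actually used in SGM-Forest.

```python
# Hypothetical sketch of per-pixel proposal fusion with a random forest,
# in the spirit of SGM-Forest. All names and features are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_pixels, n_proposals = 2000, 4  # e.g. 4 scanline directions

true_disp = rng.uniform(0, 64, n_pixels)
# Each 1D scanline optimization yields a noisy disparity proposal; one
# random proposal per pixel is made a gross outlier (streaking artifact).
proposals = true_disp[:, None] + rng.normal(0, 0.5, (n_pixels, n_proposals))
outlier = rng.integers(0, n_proposals, n_pixels)
proposals[np.arange(n_pixels), outlier] += rng.uniform(8, 20, n_pixels)

# Feature vector per pixel: proposals centered on their median, a cheap
# stand-in for the cost-based features used in the paper.
features = proposals - np.median(proposals, axis=1, keepdims=True)

# Training label: index of the proposal closest to ground truth.
labels = np.abs(proposals - true_disp[:, None]).argmin(axis=1)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(features[:1000], labels[:1000])

# Fused disparity: the proposal the forest selects for each test pixel.
pick = clf.predict(features[1000:])
fused = proposals[1000:][np.arange(1000), pick]

err_fused = np.abs(fused - true_disp[1000:]).mean()
print(f"fused mean abs error: {err_fused:.2f}")
```

The point of the classifier formulation is that choosing among proposals is much cheaper than re-optimizing, yet it suppresses the directional streaking artifacts that individual scanline solutions exhibit.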
We propose a single-shot CNN architecture for simultaneously detecting objects in an RGB image and predicting their 6D poses. Unlike other CNN approaches, our method does not require additional processing stages for coarse detection and pose refinement. It can also detect multiple objects in a single pass. Our method can be configured to run at 50–90 fps and is significantly faster than competing methods.
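One way to picture the single-shot formulation: the network outputs a grid in which each cell predicts a confidence and the 2D image projections of a set of 3D control points (e.g. bounding-box corners), and the pose follows from those 2D-3D correspondences. The decoding sketch below is a hypothetical illustration, not the paper's code; the grid size, the 9-point layout, and the random tensor standing in for network output are all assumptions.

```python
# Illustrative sketch (not the paper's code) of decoding a single-shot
# pose grid: each cell predicts a confidence plus the 2D projections of
# 9 control points (e.g. 3D bounding-box corners + centroid).
import numpy as np

S, C = 13, 1 + 9 * 2  # assumed 13x13 grid; confidence + 9 (x, y) per cell

rng = np.random.default_rng(1)
grid = rng.uniform(0, 1, (S, S, C))  # stand-in for network output

def decode(grid):
    """Pick the most confident cell and convert its predicted point
    offsets into image coordinates normalized to [0, 1]."""
    conf = grid[..., 0]
    i, j = np.unravel_index(conf.argmax(), conf.shape)
    offsets = grid[i, j, 1:].reshape(9, 2)
    # offsets are relative to the cell's top-left corner, in cell units
    points = (np.array([j, i]) + offsets) / S
    return conf[i, j], points

confidence, points_2d = decode(grid)
print(confidence, points_2d.shape)
# A 6D pose would then be recovered from these 2D points and the known
# 3D control points via PnP (e.g. cv2.solvePnP).
```

Because detection and point prediction come from one forward pass over the grid, no separate proposal or refinement stage is needed, which is what enables the reported frame rates.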