KinectFusion: real-time dynamic 3D surface reconstruction and interaction

  • Shahram Izadi,
  • Richard A. Newcombe,
  • David Kim,
  • Otmar Hilliges,
  • David Molyneaux,
  • Steve Hodges,
  • Pushmeet Kohli,
  • Jamie Shotton,
  • Andrew J. Davison,
  • Andrew Fitzgibbon

ACM SIGGRAPH 2011 Talks (SIGGRAPH '11)

Published by ACM

We present KinectFusion, a system that takes live depth data from a moving Kinect camera and creates high-quality, geometrically accurate 3D models in real time. Our system allows a user holding a Kinect camera to move quickly within any indoor space and rapidly scan and create a fused 3D model of the whole room and its contents within seconds. Even small motions, caused for example by camera shake, lead to new viewpoints of the scene and thus refinements of the 3D model, similar to the effect of image super-resolution. As the camera is moved closer to objects in the scene, more detail can be added to the acquired 3D model.
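The super-resolution-like refinement comes from the way every depth frame is averaged into a shared volumetric model: KinectFusion stores a truncated signed distance function (TSDF) per voxel and updates it with a running weighted average, so independent noisy measurements of the same surface converge toward a smoother, more accurate estimate. Below is a minimal single-step sketch of that update; the function name, the simple per-frame weight, and the truncation constants are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_tsdf(tsdf, weight, new_dist, trunc=0.03, max_weight=64.0, new_weight=1.0):
    """Fuse one frame's depth-derived signed distances into a TSDF volume.

    tsdf, weight : arrays holding the current truncated signed distance
                   and accumulated weight per voxel.
    new_dist     : signed distance (metres) of each voxel to the surface
                   seen in the current frame (NaN where unobserved).
    """
    seen = ~np.isnan(new_dist)
    # Truncate so only a narrow band around the surface is fused.
    d = np.where(seen, np.clip(new_dist, -trunc, trunc) / trunc, 0.0)

    # Running weighted average: per-frame sensor noise averages out,
    # which produces the refinement effect described above.
    w_new = np.where(seen, new_weight, 0.0)
    total = weight + w_new
    fused = np.where(total > 0,
                     (weight * tsdf + w_new * d) / np.maximum(total, 1e-9),
                     tsdf)

    # Cap the weight so the model can still adapt to scene changes.
    return fused, np.minimum(total, max_weight)

# Example: three noisy observations of a surface ~1 cm from a voxel.
tsdf, weight = np.zeros(1), np.zeros(1)
for obs in (0.012, 0.009, 0.010):
    tsdf, weight = fuse_tsdf(tsdf, weight, np.array([obs]))
print(tsdf * 0.03)  # fused distance in metres, close to 0.01
```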

KinectFusion: Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera

KinectFusion enables a user holding and moving a standard Kinect camera to rapidly create detailed 3D reconstructions of an indoor scene. Only the depth data from Kinect is used to track the 3D pose of the sensor and to reconstruct geometrically precise 3D models of the physical scene in real time. The capabilities of KinectFusion, as well as the novel GPU-based pipeline, are described in full. We show uses of the core system for low-cost handheld scanning, and geometry-aware augmented reality and physics-based interactions. Novel extensions to the core GPU pipeline demonstrate object segmentation and user interaction directly in front of the sensor, without degrading camera tracking or reconstruction. These extensions are used to enable real-time multi-touch interactions anywhere, allowing any planar or non-planar reconstructed physical surface to be appropriated for touch.
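The depth-only camera tracking mentioned above is, in the papers, a GPU point-plane ICP that aligns each incoming depth frame against a raycast of the current model surface. As a rough illustration of the core math only, here is one linearized Gauss-Newton step of point-to-plane alignment over small rotations; the function name and the synthetic test data are assumptions of this sketch, not the authors' code.

```python
import numpy as np

def point_plane_icp_step(src, dst, normals):
    """One linearized point-to-plane ICP step.

    src, dst : (N, 3) matched 3D points (incoming frame, model surface).
    normals  : (N, 3) unit surface normals at the model points.
    Returns a small rotation vector w and translation t that minimize
    sum(((I + [w]x) @ src + t - dst) . n)^2 in the least-squares sense.
    """
    # Row i of the Jacobian is [src_i x n_i, n_i]; the residual is the
    # signed distance of the source point from the model's tangent plane.
    A = np.hstack([np.cross(src, normals), normals])   # (N, 6)
    b = np.einsum('ij,ij->i', dst - src, normals)      # (N,)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]

# Tiny synthetic check: model points on the plane z = 0, observed by a
# camera whose frame is offset 5 mm along the plane normal.
rng = np.random.default_rng(0)
dst = rng.uniform(-1, 1, (100, 3)); dst[:, 2] = 0.0
normals = np.tile([0.0, 0.0, 1.0], (100, 1))
src = dst.copy(); src[:, 2] -= 0.005
w, t = point_plane_icp_step(src, dst, normals)
print(t)  # recovers approximately [0, 0, 0.005]
```

In the full system this step is iterated with fresh correspondences (the papers use projective data association) until the pose converges, and the resulting pose is what allows each frame to be fused into the volume at the correct location.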