Learning to be a Depth Camera for Close-Range Human Capture and Interaction

  • Sean Fanello
  • Cem Keskin
  • Shahram Izadi
  • Pushmeet Kohli
  • David Kim
  • Antonio Criminisi
  • Sing Bing Kang
  • Tim Paek

ACM Transactions on Graphics (TOG) - Proceedings of ACM SIGGRAPH 2014, Vol. 33

CVPR 2014 Best Demo Honorable Mention Award

View Publication

We present a machine learning technique for estimating absolute, per-pixel depth using any conventional monocular 2D camera, with minor hardware modifications. Our approach targets close-range human capture and interaction where dense 3D estimation of hands and faces is desired. We use hybrid classification-regression forests to learn how to map from near-infrared intensity images to absolute, metric depth in real time. We demonstrate a variety of human-computer interaction and capture scenarios. Experiments show an accuracy that outperforms a conventional light fall-off baseline and is comparable to high-quality consumer depth cameras, but with a dramatically reduced cost, power consumption, and form factor.
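To give a rough sense of the hybrid classification-regression forest idea described above, the sketch below trains a coarse depth-bin classifier followed by per-bin regressors on synthetic per-pixel near-infrared patch features. This is only an illustrative toy under assumed inputs (flattened 5x5 NIR patches, synthetic depth targets), not the authors' implementation, feature set, or training data.

```python
# Minimal sketch (not the paper's implementation): a per-pixel
# classification-then-regression forest pipeline in the spirit of
# "hybrid classification-regression forests".
# Assumptions: features are flattened 5x5 NIR intensity patches;
# depth is first quantized into coarse bins (classification), then a
# per-bin regressor refines it to metric depth. All data is synthetic.

import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic training data: N pixels, each with a flattened 5x5 NIR patch.
N, PATCH = 10_000, 25
X = rng.uniform(0.0, 1.0, size=(N, PATCH))        # NIR intensities in [0, 1]
depth = 0.2 + 0.6 / (0.1 + X.mean(axis=1))        # fake fall-off-like depth (metres)

# Stage 1: classify each pixel into a coarse depth bin.
bins = np.linspace(depth.min(), depth.max(), 8)
labels = np.digitize(depth, bins[1:-1])
clf = RandomForestClassifier(n_estimators=20, max_depth=12, random_state=0)
clf.fit(X, labels)

# Stage 2: one regressor per bin refines to absolute metric depth.
regs = {}
for b in np.unique(labels):
    idx = labels == b
    reg = RandomForestRegressor(n_estimators=20, max_depth=12, random_state=0)
    reg.fit(X[idx], depth[idx])
    regs[b] = reg

def predict_depth(patches: np.ndarray) -> np.ndarray:
    """Predict per-pixel metric depth: coarse bin first, then within-bin regression."""
    b_pred = clf.predict(patches)
    out = np.empty(len(patches))
    for b in np.unique(b_pred):
        mask = b_pred == b
        out[mask] = regs[b].predict(patches[mask])
    return out

print(predict_depth(X[:5]))   # predicted depth for 5 pixels
print(depth[:5])              # synthetic ground truth
```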
