Acquisition, reconstruction, transmission, and display of live 3D content

Established: June 1, 2012

We are developing a system for the acquisition, transmission, and display of real-time 3D digital content. Our goal is to enable live, immersive 3D communications and entertainment experiences. Our strategy is to acquire the 3D signal within a cubical volume. We represent this signal using high-resolution colored voxels, and have created algorithms for acquiring, encoding, decoding, streaming, and displaying this voxel data, designed specifically for modern massively data-parallel GPUs. This work evolved into project Holoportation.
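As a rough illustration of this representation, the sketch below allocates a dense cubical grid of colored voxels on the GPU and fills it with one thread per voxel, the natural mapping for a data-parallel architecture. Everything here is hypothetical: the struct layout, kernel, and the thin spherical shell standing in for a reconstructed surface are illustrative only, not the project's actual code.

    // Hypothetical sketch (not the project's actual code): a dense cubical
    // grid of colored voxels filled on the GPU, one thread per voxel. A thin
    // spherical shell stands in for the reconstructed surface.
    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    struct Voxel {
        uint8_t r, g, b;   // surface color
        uint8_t occupied;  // 1 if the voxel lies on the reconstructed surface
    };

    __global__ void carveSphere(Voxel* grid, int n, float radius) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        int z = blockIdx.z * blockDim.z + threadIdx.z;
        if (x >= n || y >= n || z >= n) return;

        float c = (n - 1) * 0.5f;               // center of the cubical volume
        float dx = x - c, dy = y - c, dz = z - c;
        float d = sqrtf(dx * dx + dy * dy + dz * dz);

        Voxel v = {0, 0, 0, 0};
        if (fabsf(d - radius) < 0.7f) {         // thin shell = stand-in surface
            v.occupied = 1;
            v.r = (uint8_t)(255.0f * x / n);    // placeholder coloring; a real
            v.g = (uint8_t)(255.0f * y / n);    // system would project camera
            v.b = (uint8_t)(255.0f * z / n);    // images onto the surface
        }
        grid[(size_t)z * n * n + (size_t)y * n + x] = v;
    }

    int main() {
        const int n = 256;                      // e.g. a 256^3 voxel volume
        const size_t count = (size_t)n * n * n;
        Voxel* grid = nullptr;
        cudaMalloc(&grid, count * sizeof(Voxel));

        dim3 block(8, 8, 8);
        dim3 blocks((n + 7) / 8, (n + 7) / 8, (n + 7) / 8);
        carveSphere<<<blocks, block>>>(grid, n, 0.4f * n);
        cudaDeviceSynchronize();

        // Copy back and count occupied voxels as a sanity check.
        Voxel* host = (Voxel*)malloc(count * sizeof(Voxel));
        cudaMemcpy(host, grid, count * sizeof(Voxel), cudaMemcpyDeviceToHost);
        size_t occupied = 0;
        for (size_t i = 0; i < count; ++i) occupied += host[i].occupied;
        printf("occupied voxels: %zu of %zu\n", occupied, count);

        free(host);
        cudaFree(grid);
        return 0;
    }

A flat dense grid like this is the simplest layout for one-thread-per-voxel kernels; at higher resolutions, sparse structures would typically be needed to keep memory and streaming bandwidth tractable.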

Project milestones

2012 Q3: Synthetic Data Analysis for Immersive Telepresence System (Acquisition -> Surface Reconstruction -> Rendering).

2012 Q4: Real Data for Immersive Telepresence System (collaboration with SBG Kent)

2013 Q1-2 (RIG 1): Real-Time System for 3D Teleportation (8 RGBs; collaboration with Cha Zhang and Zhengyou Zhang)

2013 Q4: RIG 1 + Voxel Streaming (3D content is remotely rendered)

2014 Q2 (RIG 2): 8 RGBs + 1 Kinect Depth (Video) (collaboration with Phil Chou on the derivation of “Bayesian Fusion”; see the note after this list)

  •     Mini-cave demo in Building 99.

2014 Q3 – 2015 Q2 (RIG 3): 4 Pods, (1 RGB + 2 Monos)/Pod (Video) (Sergio Orts-Escolano internship; Qin Cai joined the project)

2015 Q3 – Q4 (RIG 4): 8 Pods, (1 RGB + 2 Monos)/Pod. I3D group effort for full-body capture and “remote streaming” HoloLens rendering.

  •     DTR (Disruptive Technology Review) 2015 with the Microsoft Senior Leadership Team.
  •     First two-way voxel teleportation.
  •     Project is renamed “Holoportation”.
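
As background for the “Bayesian Fusion” milestone above: a standard way to fuse independent depth observations under a Gaussian noise model is inverse-variance weighting. The formulation below is the textbook form, offered only as a sketch; it is not necessarily the exact derivation developed in the project.

\[
\hat{d} = \frac{\sum_i d_i / \sigma_i^2}{\sum_i 1 / \sigma_i^2},
\qquad
\frac{1}{\hat{\sigma}^2} = \sum_i \frac{1}{\sigma_i^2},
\]

where \(d_i\) is the depth estimate from sensor \(i\) along a given ray, \(\sigma_i^2\) is its noise variance, and \(\hat{d}\) is the fused (posterior-mean) depth under a flat prior. Noisier sensors thus contribute less, and the fused variance \(\hat{\sigma}^2\) is smaller than any individual sensor's.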