Capture human holograms for a range of devices

Whether in immersive or holographic headsets, on mobile phones, or on 2D screens, the holograms we capture from real people and performances will engage your audiences in authentic, impactful ways.

See how mixed reality has come to life

Recent projects showcase holographic video in a variety of engaging experiences.


Hear what others are saying


How we do it

We are experts at capturing holographic video, and we have been advancing capture technology and pioneering its applications since 2010.

LEARN MORE ABOUT OUR RESEARCH

Let’s get started

Mixed Reality Capture Studios are located in San Francisco and at Microsoft headquarters in Redmond, Washington – just a short drive from Seattle. You can also work with us through our licensed partners Dimension Studios in London and Metastage in Los Angeles. Get in touch with us to start.

CONTACT US


We produce volumetric video, also known as holographic video. It looks like video from any given viewpoint but exists volumetrically in 3D space, so viewers can change their view of a performance at any time, or actually move around it, in mixed reality experiences.

We’re able to play back these holograms on a wide range of devices, ranging from traditional 2D platforms like desktops and phones to holographic headsets like HoloLens and Windows Mixed Reality immersive headsets.

We capture performances on our stage using 106 cameras, then use computer vision algorithms to create textured 3D surfaces of whatever is in view. We then process the data further to provide temporal consistency in the meshes and to compress the holographic video for easier transmission and viewing.
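
A highly simplified sketch of that data flow, with stand-in Python stubs for the proprietary reconstruction, tracking, and compression steps (illustrative only, not our production pipeline):

    # Illustrative data flow only: the actual reconstruction, tracking, and compression
    # steps are proprietary, so simple stubs stand in for each stage described above.
    def reconstruct_frame(camera_images):
        # Multi-view computer vision: 106 synchronized images in, one textured 3D surface out.
        return {"vertices": [], "triangles": [], "texture": None}

    def make_temporally_consistent(frames):
        # Keep mesh connectivity stable from frame to frame so the sequence compresses well.
        return frames

    def compress_take(frames):
        # Pack the mesh/texture sequence into a streamable container (e.g. an .mp4 file).
        return b""

    def process_take(per_frame_camera_images):
        frames = [reconstruct_frame(images) for images in per_frame_camera_images]
        return compress_take(make_temporally_consistent(frames))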

We usually shoot at 30 fps, though we are capable of 60 fps. We can also interpolate to generate higher frame rates, for example to match the refresh rates of immersive devices.
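
As an illustration of what interpolation means for mesh data, here is a minimal sketch that linearly blends vertex positions between two captured frames to synthesize an in-between frame (it assumes matching mesh topology and is not our production interpolation):

    # Minimal illustration of temporal interpolation between two captured frames.
    # Assumes the meshes share topology (same vertex count and order); a sketch of
    # the idea, not the studio's actual method.
    def interpolate_vertices(frame_a, frame_b, t):
        # Linearly blend vertex positions: t=0 gives frame_a, t=1 gives frame_b.
        return [(ax + t * (bx - ax), ay + t * (by - ay), az + t * (bz - az))
                for (ax, ay, az), (bx, by, bz) in zip(frame_a, frame_b)]

    # Example: synthesize the in-between frame needed to go from 30 fps to 60 fps.
    frame_a = [(0.0, 1.00, 0.0), (0.50, 1.2, 0.1)]
    frame_b = [(0.0, 1.05, 0.0), (0.55, 1.2, 0.1)]
    mid_frame = interpolate_vertices(frame_a, frame_b, 0.5)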

Our San Francisco and partner facilities can capture very long takes, on the order of an hour. Our Redmond facility uses a different recording approach that is limited to 3-4 minutes per shot, with downtime in between takes to store data.

We often use uniform lighting so it’s easier to re-light the capture in post, but it’s not a limitation of the system; we can capture uneven or colored lighting as well.

We usually have a stage supervisor, technician, creative director, and producer on set. We also recommend makeup/hair, wardrobe, set or prop designer, audio engineer, animal trainer, etc., based on the specific needs and complexity of the shoot.

Yes, we have eight channels of shotgun microphones placed equidistant around the performer. We can also support custom microphone configurations such as lavalier, boom, etc.

Outdoor shooting can work in some scenarios, but the highest quality comes from shooting indoors, where we can control the infrared lighting environment.

With our current 106-camera system, we output 10 GB/sec of raw footage.
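
For a sense of scale, a rough back-of-the-envelope calculation based on those figures:

    # Rough arithmetic from the figures above: 106 cameras, 10 GB/sec aggregate.
    cameras = 106
    raw_rate_gb_per_sec = 10
    per_camera_mb_per_sec = raw_rate_gb_per_sec * 1000 / cameras   # ~94 MB/sec per camera
    raw_data_per_minute_tb = raw_rate_gb_per_sec * 60 / 1000       # ~0.6 TB per minute of capture
    print(per_camera_mb_per_sec, raw_data_per_minute_tb)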

On shoot days, we usually turn around initial selects for review within 24 hours. Timeframes for final delivery depend on the content and duration of the take, as well as specific needs of the client and project.

Output is typically in the range of 40K triangles and a 2K texture per character for a VR device, down to 10K triangles and a 1K texture for mobile devices. We can deliver raw .obj/.png files, or a compressed format packaged as a streamable .mp4 file.
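
As a rough illustration, those targets could be expressed as delivery presets; the names and layout below are hypothetical, and only the figures come from the numbers above:

    # Illustrative delivery presets based on the figures above; names are hypothetical,
    # not an official format specification.
    DELIVERY_PRESETS = {
        "vr_headset": {"triangles": 40_000, "texture": "2048 x 2048", "container": "streamable .mp4"},
        "mobile":     {"triangles": 10_000, "texture": "1024 x 1024", "container": "streamable .mp4"},
        "raw":        {"triangles": None,   "texture": "per-frame .png", "container": ".obj sequence"},
    }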

.obj/.png files can be read by many digital content creation apps. Compressed .mp4 files can be read by a .dll and/or plugin we provide.
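
Because .obj is a plain-text format, a single delivered frame can be inspected with a few lines of code. A minimal sketch that reads vertex positions and triangle indices (the file name is hypothetical):

    # Minimal .obj reader for one frame: collects vertex positions and triangle faces.
    # The file name is hypothetical; a take is delivered as a sequence of per-frame files.
    def load_obj(path):
        vertices, faces = [], []
        with open(path) as f:
            for line in f:
                parts = line.split()
                if not parts:
                    continue
                if parts[0] == "v":        # vertex position: "v x y z"
                    vertices.append(tuple(float(x) for x in parts[1:4]))
                elif parts[0] == "f":      # triangle face: "f v1 v2 v3" (indices may be v/vt/vn)
                    faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:4]))
        return vertices, faces

    verts, tris = load_obj("frame_0001.obj")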

We can compress down to rates typical for HD video. We use h.264 for the .mp4 files and can compress to the client’s specifications. For example, demo captures released with HoloLens ran between 7 and 12 Mbps. Higher-resolution and/or uncompressed formats would be on the order of several hundred MB for a 30-second clip.
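
File size follows directly from bitrate and duration; a quick worked example using the figures above:

    # File size from bitrate: a 30-second clip at the HoloLens demo bitrates quoted above.
    def clip_size_mb(bitrate_mbps, seconds):
        return bitrate_mbps * seconds / 8    # megabits -> megabytes

    print(clip_size_mb(7, 30))    # ~26 MB at 7 Mbps
    print(clip_size_mb(12, 30))   # ~45 MB at 12 Mbps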

Apparent accuracy depends on several factors such as the number of polygons, texture resolution, viewing distance, and playback device. We can adjust resolution while preserving detail, and usually work with clients to balance performance with quality for their specific scenario and device.

Yes, we frequently capture, insert, swap, and/or remove props from the scene, depending on the needs of the performance and scenario.

This technology has been in development since 2010, and we’ve captured thousands of human and animal performances over a very wide range of action, costumes, and props. We have a good understanding of where challenges will be, and ways to minimize them to achieve creative goals.