Create holograms from real life
Immersive virtual reality
Interactive 2D experiences
WWE teams up with Littlstar
Blade Runner 2049: Memory Lab
Peyton Manning and Gatorade
Step 1: Planning your vision
Step 2: Your capture session
Step 3: Post-production and delivery
We produce video holograms, also known as volumetric video: they look like video from any given viewpoint but exist volumetrically in 3D space. Viewers can change their view of a performance at any time, or actually move around the video, in mixed reality experiences.
We’re able to play back these holograms on a wide range of devices, from traditional 2D platforms such as desktops and phones, to holographic headsets like HoloLens, to Windows Mixed Reality immersive headsets.
We capture performances on our stage using 106 cameras, then use computer vision algorithms to create textured 3D surfaces of whatever is in view. We further process the data to keep the meshes consistent over time, and to compress the holographic video for easier transmission and viewing.
We usually shoot at 30 fps, though we are capable of 60 fps. We can interpolate to generate higher frame rates, for example to match the refresh rates of immersive devices.
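As a rough illustration of the frame-rate matching mentioned above, here is a minimal sketch (the function name and the integer-ratio assumption are ours, not part of the studio's pipeline) of how many in-between frames must be synthesized to bring a capture up to a device's refresh rate:

```python
# Sketch: frames to synthesize between each pair of captured frames
# so a capture matches a device refresh rate. Assumes the refresh
# rate is an integer multiple of the capture rate, for simplicity.

def interpolated_frames_needed(capture_fps: int, device_hz: int) -> int:
    """Number of synthesized frames inserted between captured frames."""
    if device_hz % capture_fps != 0:
        raise ValueError("non-integer ratio; resampling would be needed")
    return device_hz // capture_fps - 1

# A 30 fps capture on a 90 Hz headset needs 2 in-between frames.
print(interpolated_frames_needed(30, 90))  # → 2
```

In practice the interpolation itself is a computer-vision problem; this only shows the frame-count arithmetic.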
Our San Francisco and partner facilities can capture very long takes, on the order of an hour. Our Redmond facility uses a different recording approach that is limited to 3-4 minutes per shot, with downtime between takes to store data.
We often use uniform lighting so it’s easier to re-light the capture in post, but that’s not a limitation of the system: we can capture uneven or colored lighting as well.
We usually have a stage supervisor, technician, creative director, and producer on set. We also recommend makeup/hair, wardrobe, set or prop designer, audio engineer, animal trainer, etc., based on the specific needs and complexity of the shoot.
Yes, we have 8 channels of shotgun microphones equidistant around the performer. We can also support custom microphone configurations such as lavalier and boom mics.
Outdoor shooting can work in some scenarios, but the highest quality comes indoors, where we can control the infrared lighting environment.
With our current 106 camera system, we output 10GB/sec of raw footage.
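For a sense of scale, a back-of-envelope calculation (our arithmetic, using only the two figures quoted above) gives the average raw bandwidth per camera:

```python
# Back-of-envelope: average raw capture bandwidth per camera,
# from the stated 106 cameras and 10 GB/sec aggregate rate.
cameras = 106
total_gb_per_sec = 10

per_camera_mb_per_sec = total_gb_per_sec * 1024 / cameras
print(f"{per_camera_mb_per_sec:.1f} MB/s per camera")  # → 96.6 MB/s per camera
```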
On shoot days, we usually turn around initial selects for review within 24 hours. Timeframes for final delivery depend on the content and duration of the take, as well as the specific needs of the client and project.
Output is typically in the range of 40K triangles and a 2K texture per character for a VR device, down to 10K triangles and a 1K texture for mobile devices. We can deliver raw .obj/.png files, or a compressed format delivered as a streamable .mp4 file.
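The per-device budgets above can be thought of as a simple lookup. The sketch below is purely illustrative (the table and function names are hypothetical), using the ranges quoted:

```python
# Hypothetical per-character asset budgets by target device,
# using the triangle/texture ranges quoted above.
BUDGETS = {
    "vr":     {"triangles": 40_000, "texture_px": 2048},
    "mobile": {"triangles": 10_000, "texture_px": 1024},
}

def budget_for(device: str) -> dict:
    """Return the triangle and texture budget for a target device."""
    return BUDGETS[device]

print(budget_for("mobile"))  # {'triangles': 10000, 'texture_px': 1024}
```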
.OBJ/PNGs can be read by many digital content creation apps. Compressed .mp4s can be read by a .dll and/or plugin we provide.
We can compress down to bitrates typical for HD video. We use H.264 for the .mp4 files and can compress to a client’s specifications. For example, demo captures released with HoloLens ran between 7 and 12 Mbps. Higher-resolution and/or uncompressed formats would be on the order of several hundred MB for a 30-second clip.
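The relationship between streaming bitrate and clip size is simple arithmetic; the sketch below (our own helper, not part of any delivery tooling) shows what the quoted HoloLens bitrates imply for a 30-second clip:

```python
# File size implied by a streaming bitrate: MB = Mbps * seconds / 8.
def clip_size_mb(bitrate_mbps: float, seconds: float) -> float:
    """Approximate compressed clip size in megabytes."""
    return bitrate_mbps * seconds / 8

# A 30-second clip at the 12 Mbps upper end of the HoloLens demo range:
print(clip_size_mb(12, 30))  # → 45.0 (MB)
```

That 45 MB compares with the several hundred MB quoted for higher-resolution or uncompressed formats at the same duration.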
Apparent accuracy depends on several factors such as the number of polygons, texture resolution, viewing distance, and playback device. We can adjust resolution while preserving detail, and usually work with clients to balance performance with quality for their specific scenario and device.
Yes, we frequently capture, insert, swap, and/or remove props from the scene, depending on the needs of the performance and scenario.
This technology has been in development since 2010, and we’ve captured thousands of human and animal performances over a very wide range of action, costumes, and props. We have a good understanding of where challenges will be, and ways to minimize them to achieve creative goals.