Hear what others are saying
Microsoft empowers creators with volumetric video technology
Microsoft Mixed Reality Capture Studios create holograms to educate and entertain. Creative Director Jason Waskey explains our mission.
This is an incredible use of volumetric videos in fashion
Epic explores Balenciaga's virtual fashion experience, Afterworld, starring digital models by Dimension Studio.
You can visit a cutting-edge volumetric video capture studio in Los Angeles
Metastage opens Microsoft-powered volumetric capture studio in Los Angeles.
Things you need to know about the versatility of volumetric videos
SK Telecom’s Jump Studio discusses how holograms are used in immersive media experiences.
You can now virtually visit locations with holographic tour guides
Center of Creative Arts (COCA) features holographic tour guides captured by Avatar Dimension.
You won’t believe how easy it is to create holograms
BBC's North America reporter films his own hologram.
Why brands are turning to augmented reality solutions for their campaigns
Dimension puts tiny Tinie on a Whopper in Burger King AR campaign.
Check out this amazing app that increases fan engagement
Jadu captures artists at Metastage and uses an augmented reality app for major social media hits.
Why holograms are effective even in traditional video formats
Holographic dancer performs in Jump Studio’s “Taepyeongmu, Dance of Great Peace”.
State-of-the-art volumetric studio near Washington DC captures holograms
Avatar Dimension becomes the first East Coast licensee of Microsoft Mixed Reality Capture Studios.
You can now make holograms in San Francisco using Microsoft technology
Variety Magazine visits a Microsoft Mixed Reality Capture Studio.
See how immersive technology can level up musical performances with virtual reality
VRScout highlights Dimension's digital humans at Sundance New Frontiers.
This is how easy it is to take holographic models beyond the runway
The New York Times hits the runway with Ashley Graham's hologram, captured by Metastage.
See where K-pop stars get their holograms created in Seoul
SK Telecom opens Jump Studio, Microsoft's first volumetric video partner in Asia.
Why businesses need to future-proof their content using volumetric videos
Tim Zenk of Avatar Dimension discusses the power of volumetric captures with the VR/AR Association.
Holograms on the go
Place holograms into real-world settings, blending the digital with the physical. From holographic headsets like Microsoft HoloLens to augmented reality on your favorite mobile device, you can experience performances in your physical space from every perspective using mobile-friendly holographic video.
Engage audiences with performers that feel more real. Integrate holograms with your immersive experiences on virtual reality headsets, phones, tablets, consoles, and desktops.
Step 1: Planning your vision
Before your session, our creative and technical experts will work with you to plan the details of your shoot and help you realize your vision. Good pre-production is critical to getting the best results from your captures, and our vast experience producing volumetric content will help you avoid pitfalls.
Step 2: Your capture session
If you’ve been on a video shoot, our capture process will feel familiar. We offer guidance when directing your sessions, capture post-production metadata, and generate preview footage to guide selects. We can connect you with hair, wardrobe, and audio professionals, all well versed in what it takes to produce a great hologram.
Step 3: Post-production and delivery
Volumetric studio in Berlin leveraging our processing pipeline in Azure.
Help build WebAR experiences for every mobile device.
End-to-end platform distribution of AR + VR content.
Our team produces raw content (holographic video), tools to work with that content in a post-production environment, and the means to play that content back on a wide variety of devices.
Holographic video, also known as volumetric video, looks like video from any given viewpoint but exists volumetrically in 3D space. Viewers can change their view of a performance at any time, or actually move around the video, in mixed reality experiences.
We capture performances on our stage using many cameras, then use computer vision algorithms to create a textured 3D mesh per frame. We further process that data to provide some consistency in the meshes over time, which we then compress into a file format that is playable on a wide variety of cross-platform devices.
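The three stages described above (per-frame reconstruction, temporal consistency, and compression to a device budget) can be sketched in code. This is a hypothetical illustration of the data flow only; the class and function names are invented, and the real pipeline involves computer vision and mesh processing far beyond this outline.

```python
# Illustrative sketch of the capture-to-playback pipeline described above.
# All names here are hypothetical, not the studio's actual tooling.
from dataclasses import dataclass

@dataclass
class MeshFrame:
    index: int
    triangles: int   # mesh complexity for this frame
    texture_px: int  # texture resolution (square, in pixels)

def reconstruct(num_frames: int) -> list[MeshFrame]:
    """Stage 1: computer vision turns multi-camera footage into one textured mesh per frame."""
    return [MeshFrame(i, triangles=20_000, texture_px=2048) for i in range(num_frames)]

def temporally_stabilize(frames: list[MeshFrame]) -> list[MeshFrame]:
    """Stage 2: enforce mesh consistency across frames (simplified to a pass-through here)."""
    return frames

def compress(frames: list[MeshFrame], target_triangles: int) -> list[MeshFrame]:
    """Stage 3: decimate each mesh toward a device-appropriate budget before packaging."""
    return [MeshFrame(f.index, min(f.triangles, target_triangles), f.texture_px)
            for f in frames]

# One second of capture at 30 fps, compressed for a mobile-class triangle budget.
clip = compress(temporally_stabilize(reconstruct(num_frames=30)), target_triangles=10_000)
```

The point of the sketch is the ordering: reconstruction happens per frame, consistency is enforced across frames, and compression targets the playback device last.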
We usually shoot at 30 fps, though we can capture at higher or lower frame rates if desired. For playback, it is possible to interpolate to higher frame rates, for example to match the refresh rates of immersive devices.
We can capture very long takes, on the order of an hour.
We've captured as many as 20 people in a single scene before, but don't recommend that as a best practice. We find that we can shoot two people at the same time comfortably, and up to four with careful staging (and some technical limitations with resolution). We're happy to discuss your specific needs during pre-production to arrive at a satisfactory solution for your project.
We adjust the size of our capture volume by moving camera towers closer in or further away. Moving the cameras closer to the subject yields higher resolution but a smaller capture volume; moving them further away enlarges the volume but reduces resolution. We generally shoot with 8ft as our maximum diameter and 4.5ft as our minimum diameter. Our max height is 10ft. Please discuss your project with us even if it appears that our 8ft diameter might be too small for your needs.
We often use uniform lighting so it’s easier to re-light the capture in post. We are able to support a much wider variety of lighting scenarios though, including very low levels of light and colored gels. Part of our pre-production process will help establish what will work best for your particular project.
Our system is very similar to a standard video shoot, with many of the same roles needed for a smooth production. We provide production support with camera operators, producers, and technical directors. To get the best from your shoot day we also recommend makeup/hair, wardrobe, set or prop designer, audio engineer, animal trainer, etc., based on the specific needs and complexity of the shoot.
Our system is set up to capture audio. We have 8 channels of shotgun microphones equidistant around the performer. We can also support custom mic configurations like lavalier, boom, etc. While we are capable of capturing basic audio, and will provide you with a synced scratch track, your team will be responsible for audio post-production and sweetening. We’ll walk you through the process as part of pre-production.
With our premium 106-camera system, we output 600GB/min of raw footage. Our end result compresses that data down to something comparable to HD video at 15-30Mbps.
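As a back-of-the-envelope check of the figures above (600GB/min of raw footage versus a 15-30Mbps deliverable), a short calculation shows the compression ratio involved. The arithmetic is illustrative only; it assumes decimal gigabytes (1 GB = 10^9 bytes).

```python
# Raw capture rate quoted above: 600 GB per minute.
raw_gb_per_min = 600
raw_bits_per_s = raw_gb_per_min * 1e9 * 8 / 60  # -> 80 Gbps of raw data

# Compare against the quoted 15-30 Mbps output range.
for mbps in (15, 30):
    ratio = raw_bits_per_s / (mbps * 1e6)
    print(f"{mbps} Mbps output is roughly a {ratio:,.0f}:1 reduction")
```

In other words, the delivered stream is on the order of several thousand times smaller than the raw multi-camera footage.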
On shoot days, we usually turn around initial selects for review within 24 hours. Timeframes for final delivery depend on the content and duration of the take, as well as specific needs of the client and project.
Output is typically in the range of 20K triangles and 2K texture per character for a VR device, down to 10K triangles and 1K texture for mobile devices, which we provide as a custom streamable .MP4 file.
We've created plug-in support for our compressed MP4 files in Unity and Unreal, plus native support for Windows, ARKit on iOS, and ARCore on Android, to name a few. We're truly cross-platform compatible; if you're not sure whether we support what you need, just reach out! There’s a good chance we’re working on it. We can also provide our OBJs and PNGs, which can be read by many digital content creation apps.
We can compress down to rates typical for HD video. We use H.264 for the MP4 files and can compress to a client's specifications. For example, demo captures released with HoloLens ran between 7 and 12 Mbps. Higher-resolution and/or uncompressed formats would be on the order of several hundred MB for a 30-second clip.
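To make the bitrate figures above concrete, here is the standard bits-to-bytes conversion applied to the 7-12 Mbps HoloLens demo range for a 30-second clip (assuming 1 MB = 10^6 bytes):

```python
def clip_size_mb(bitrate_mbps: float, seconds: float) -> float:
    """File size in megabytes for a stream at the given bitrate and duration."""
    return bitrate_mbps * 1e6 * seconds / 8 / 1e6  # bits -> bytes -> MB

for mbps in (7, 12):
    print(f"{mbps} Mbps x 30 s is about {clip_size_mb(mbps, 30):.1f} MB")
```

So a 30-second compressed clip lands in the tens of megabytes, versus the several hundred MB quoted for higher-resolution or uncompressed formats.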
Apparent accuracy depends on several factors such as the number of polygons, texture resolution, viewing distance, and playback device. We can adjust resolution while preserving detail, and usually work with clients to balance performance with quality for their specific scenario and device.
Yes, we frequently capture and/or remove props from the scene, depending on the needs of the performance and scenario. In addition, we provide some post-production tools, workflow support, and best practices for removing props and/or adding CG props as needed after capture.
This technology has been in development since 2010, and we’ve captured thousands of human and animal performances over a very wide range of action, costumes, and props. We have a good understanding of where challenges will be, and ways to minimize them to achieve creative goals.