Microsoft Research Blog

The Microsoft Research blog provides in-depth views and perspectives from our researchers, scientists and engineers, plus information about noteworthy events and conferences, scholarships, and fellowships designed for academic and scientific communities.

Giant steps and liberating spaces – Virtual Reality is making cool moves

March 22, 2019 | By Microsoft blog editor

Virtual reality researchers are making cool moves on two fronts: resolving the challenges of natural locomotion and how it relates to story space, and liberating users from today's small, object-free play spaces so that they can safely roam the real world.

Virtual Reality (VR) has become familiar to many people in recent years. Users can put on a head-mounted display (HMD) and be transported immediately to an imaginary world where everything they see and hear has been authored to engulf them in an immersive experience. Until recently, tracking technologies in VR required the user to remain within a room where sensors could track their location and movement. Furthermore, all physical obstacles had to be removed from this assigned space to ensure the safety of a user who is blind to the physical world.

These restrictions required every virtual world to be mapped to the same room. While non-immersive video games can simply move the player through the game world, doing the same in VR can induce motion sickness: when the visual motion users see disagrees with the lack of vestibular stimulation they feel, sickness follows. Designers of virtual experiences have therefore resorted to imaginative, and sometimes quite unnatural, ways to move the user virtually while they remain confined to the room, such as teleportation to new locations, walking in place, or moving via hand motions.

Compressing distances by enlarging the user

In new work to appear at CHI 2019, Microsoft researchers Mar Gonzalez-Franco and Eyal Ofek, in collaboration with Professor Anthony Steed of UCL and Parastoo Abtahi of Stanford University, examined different methods for improving natural locomotion in VR. One way they found to overcome the space limitations was to change the scale of space or motion. The researchers drew inspiration from famous fairy tales such as Alice in Wonderland and the Seven-League Boots, leading to innovative navigation techniques (see Figure 1) that compress a whole world, allowing users to visit an entire city by walking around their living room.

Figure 1: I’m a giant. By scaling down the city, the user can walk naturally while every step is 10x larger relative to the city. The sensation is as if the user is a giant Gulliver walking in a Lilliputian world.

To achieve super-speed gains when a user walks in the virtual world, steps are stretched following a known technique called Seven-League Boots. Users feel as if the virtual world flies past them while they move at a natural pace. However, the researchers found that at large speed gains (greater than 3x), users start to lose control of the motion and overshoot their targets, slowing down locomotion and increasing their frustration.
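The step-stretching idea can be sketched as a per-frame translation gain applied to the user's tracked head position. This is a minimal illustration rather than the paper's implementation; the function name and the choice to amplify only horizontal motion (keeping vertical head bob at 1:1 so the horizon stays stable) are assumptions:

```python
import numpy as np

def apply_translation_gain(prev_pos, curr_pos, gain=3.0):
    """Amplify the user's physical step by `gain` to produce virtual motion.

    prev_pos, curr_pos: physical head positions (x, y, z) in meters,
    with y pointing up. Returns the virtual displacement for this frame.
    """
    physical_step = np.asarray(curr_pos, dtype=float) - np.asarray(prev_pos, dtype=float)
    # Amplify only horizontal motion; vertical head bob stays 1:1
    # so the virtual horizon does not jitter with every step.
    gain_vec = np.array([gain, 1.0, gain])
    return physical_step * gain_vec

# A 0.5 m physical step forward becomes a 1.5 m virtual step at gain 3.
virtual_step = apply_translation_gain((0.0, 1.7, 0.0), (0.0, 1.7, 0.5), gain=3.0)
```

Each frame, the virtual camera advances by this amplified displacement, which is why gains above roughly 3x make fine positioning hard: small physical corrections become large virtual ones.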

Instead of speeding up the motion, the researchers also looked at scaling the user's size relative to the virtual world (see Figure 2, center panel). In contrast to the speed-gain method, this technique keeps a 1:1 mapping between the user's motion and the virtual avatar, enabling natural control even when the scale change is as large as 30x. Walking in a shrunken environment, users look down as though they were giants surveying a toy city. Such a point of view can be great for traveling between large landmarks, but the distance to the environment makes it hard for the user to pick out small details, such as a specific door down a road. A second method, eye-level scaling, shrinks the world around the user's head, so that the user's viewpoint sits at the height of a normal person in the virtual world. While visible motion looks fast relative to the small world, the technique maintains natural control and a natural, ground-level view of the world's details.

Figure 2: The virtual city as seen by a user walking in its main street at a regular scale (left). The user walking down the same street while scaled to a giant (center). When researchers maintained the user’s point of view at ground level, users achieved a natural fast motion with good control.
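Eye-level scaling can be sketched as a mapping from the physical head position to a virtual camera pose: horizontal motion is amplified by the inverse of the world scale, while eye height passes through unchanged so the street-level view looks the same as walking at 1:1. The function and parameter names here are illustrative assumptions, not the authors' code:

```python
def eye_level_camera(phys_head, origin, scale=0.1):
    """Map a physical head position to a virtual camera position under
    eye-level scaling.

    phys_head: physical head position (x, y, z) in meters, y up.
    origin:    physical point mapped 1:1 into the virtual world.
    scale:     size of the scaled-down world (0.1 = city at 1/10 scale),
               so each physical meter covers 1/scale virtual meters.
    """
    x, y, z = phys_head
    ox, oy, oz = origin
    inv = 1.0 / scale
    # Horizontal motion is amplified by the inverse of the world scale;
    # eye height (y) is passed through unchanged, keeping the viewpoint
    # at normal pedestrian height above the virtual ground.
    return ((x - ox) * inv + ox, y, (z - oz) * inv + oz)

# One physical meter of walking covers ten virtual meters at 1/10 scale.
cam = eye_level_camera((1.0, 1.7, 2.0), (0.0, 0.0, 0.0), scale=0.1)
```

Keeping the vertical coordinate untouched is what distinguishes this sketch from the giant view: the world rushes by, but the user never looks down on it from above.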

Once the user arrives at their destination and wants to step through the door of this VR wonderland, they can be scaled back to 1:1.

This form of locomotion for virtual reality experiences has many advantages: it is natural and requires no training, it increases immersion compared to teleportation, and it avoids motion sickness.


VRoamer – Setting virtual reality free across whole building spaces

Rather than confining players to smallish rooms cleared of objects such as chairs and other obstacles to ensure safety, wouldn’t it be cool if the player could leave the room completely and wander through the real world, safely, with the objects and space configurations they encountered incorporated cleverly into the game world and story?

Microsoft researchers Christian Holz, Andy Wilson, and Eyal Ofek very much thought so and set out to free VR experiences from the small settings of today. Together with former research intern Lung-Pan Cheng, the researchers built a VR system that operates in natural and uncontrolled environments, to be presented at IEEE VR 2019 in Osaka, Japan. Their system, VRoamer, enables the user to explore large virtual worlds while walking around an entire building—through rooms, office spaces, along the halls, or between the chairs and tables in a cafeteria—all without the system's prior knowledge of the physical building space.

VRoamer uses inside-out six-degree-of-freedom tracking through cameras inside the VR headset to follow the user's motion through the building, while analyzing the 3D structure of the user's surroundings in real time using a depth camera mounted on the headset. VRoamer's real-time tracking discovers walkable paths in the building, objects that occlude the user's path and pose potential hazards, and people moving around the user. This tracking is what allows VRoamer to construct a virtual reality in which the user can walk around enormous virtual spaces while avoiding obstacles (see Figure 3).

Figure 3: In current virtual reality systems, users wander around a large virtual world (left) while being physically confined to a small empty room (center). Microsoft Research has now developed VRoamer, a VR system that enables users to physically walk in a large natural building environment while keeping the user safe and maintaining story.
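Walkable-path discovery can be illustrated with a search over an occupancy grid built from the headset's depth data: floor cells are marked free, detected obstacles are marked blocked, and the system looks for a route between them. This is a hedged sketch of the idea, not VRoamer's actual algorithm; the grid representation and the breadth-first search are assumptions:

```python
from collections import deque

def walkable_path(grid, start, goal):
    """Breadth-first search over an occupancy grid (0 = free floor,
    1 = obstacle). Returns a list of (row, col) cells from start to
    goal, or None if no safe route exists."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the predecessor chain back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and nxt not in prev:
                prev[nxt] = cell
                queue.append(nxt)
    return None  # fully blocked: guide the user elsewhere

# A small floor map: 1s are a detected table and a wall.
floor = [[0, 0, 1],
         [1, 0, 1],
         [1, 0, 0]]
route = walkable_path(floor, (0, 0), (2, 2))
```

In a live system the grid would be refreshed every frame from the depth camera, so a person stepping into a free cell immediately turns it into an obstacle.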

One of the researchers’ biggest challenges was that VR users cannot remain ignorant of the world around them; otherwise they would be prone to hitting walls, doors, furniture, or the people and pets moving around them in the uncontrolled environment. To keep the user safe, every potential obstacle had to be represented in the virtual world in a way that allows it to be avoided.

However, the presentation of such obstacles must fit the narrative of the virtual world being explored. Therefore, the Microsoft researchers designed VRoamer to represent discovered obstacles such as dining tables or office plants as, for example, a control station or a large boulder, depending on whether the user is currently in a spaceship or a jungle experience. VRoamer creates a consistent and continuous virtual narrative for the user and maps game content to the surrounding physical environment on the fly, maintaining both the immersive nature of the story and the safety of the player and the people around them.

An experience or a game map is often composed of items and locations that are crucial to the progress of the story; for example, an object has to be collected in one place and used in another. Between these locations, the game map sprawls with content that gives the game its atmosphere, such as a dungeon, a forest, or a spaceship corridor, and lets the user wander toward the next objective. VRoamer strives to show the user an equivalent narrative regardless of the user's physical environment. In the developed system, the player walks through a dungeon composed of corridors and underground rooms. At any point in the game, the system tracks the geometry of the physical world in front of the user and constructs possible paths to take. When the user arrives at a clearing in the physical world, VRoamer constructs a narrative location that fits the clearing and displays it there. If the physical environment is cluttered, VRoamer instead guides the user to other places where the story action can progress (see Figure 4). While story locations are similar for all players of the game, their layout and the shape of the paths between them may change between users according to the architecture around them.

Figure 4: A VRoamer user walks within a virtual reality game while physically wandering in an uncontrolled populated office environment (left). The game design is composed of a set of locations such as the depicted throne room, which progress the story of the user’s experience. Between those locations, VRoamer constructs the virtual game environment on-the-fly, leading the user between real world obstacles and fitting the game style (right).
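Fitting a story location to a physical clearing can be sketched as a scan over the same kind of occupancy grid for a block of free floor cells large enough to host the next room. Again, this is an illustrative assumption about the approach, not VRoamer's implementation:

```python
def find_clearing(grid, size):
    """Scan an occupancy grid (0 = free floor, 1 = obstacle) for a
    size x size block of free cells -- a physical clearing large
    enough to host the next story room. Returns the top-left
    (row, col) of the first such block, or None."""
    rows, cols = len(grid), len(grid[0])
    for r in range(rows - size + 1):
        for c in range(cols - size + 1):
            if all(grid[r + i][c + j] == 0
                   for i in range(size) for j in range(size)):
                return (r, c)
    return None  # too cluttered: route the user toward another clearing

# A wall on the left column leaves a 2x2 clearing on the right.
floor = [[1, 0, 0],
         [1, 0, 0],
         [1, 1, 1]]
spot = find_clearing(floor, 2)
```

When no clearing is found, the system would keep the player in corridor content, which can bend to follow whatever free path the building offers, until a large enough space appears.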

VRoamer steers the user away from moving obstacles such as people or pets by positioning virtual objects in the game world—such as skeletons roaming the dungeon, or spikes and trap doors that spring from the floor—to divert the user from potential collisions (see Figure 5).

Figure 5: A person moving into the user’s way might be represented as spikes popping out of the floor to block the player’s path, or through other scripted events, such as a virtual character (here, a skeleton) that moves to protect the user by blocking their way.

The researchers hope this work will be one of many that will enable the use of general authored VR experiences in many different environments.
