Touching the Virtual: How Microsoft Research is Making Virtual Reality Tangible


“Ask yourself how you really want to interact with virtual objects? The simple answer is that we want to handle them as if they are real, just reach out a hand to grasp them, pick them up, feel what they’re made of, and do all that in a natural way that requires no learning,” said Mike Sinclair, principal researcher at Microsoft Research’s Redmond, Washington lab.

Sinclair is part of a team of talented researchers at Microsoft who each day strive to advance one of the most challenging areas of research and development in virtual reality: realizing and delivering truly immersive, convincing tactile experiences to users in VR.

They’re making some amazing progress.


While VR and augmented reality (AR) have progressed dramatically in the past 30 years, with head-mounted units delivering fantastic visual worlds and 3D audio, the illusion is shattered the moment we reach out a hand to touch a virtual object; our hand ends up grasping air.

Compared with the visual and audio rendering capabilities of consumer devices, today’s tactile offerings are mostly limited to buzz: vibrations generated by internal motors or actuators nested inside controllers. Researchers continue to strive for realistic rendering of different tactile sensations, and consumers wait in hopeful anticipation for these experiences to become available.

There are many reasons why haptics is such a hurdle. Anyone who’s been to the movies understands that the eye and the ear can be tricked; film, after all, at 24 frames per second, isn’t true motion. But haptics is different and represents a challenge many orders of magnitude larger in complexity. Some of the challenges lie in hardware. Laboratory prototypes such as exoskeletons and other hand-mounted devices tend to be cumbersome, both to fit to individual users and to don and remove. Many current prototype devices simulate only a specific sensation, such as texture, heat, or weight, and may not be versatile enough to attract users. Complex mechanics can render a device too expensive, too big, or too fragile to be viable as a consumer product.

The Microsoft Research team – Mike Sinclair, Christian Holz, Eyal Ofek, Hrvoje Benko, Ed Cutrell, and Meredith Ringel Morris – has been exploring ways that existing technology can generate a wide range of haptic sensations within hand-held VR controllers similar in look and feel to those currently used by consumers, enabling users to touch and grasp virtual objects, feel fingertips sliding along surfaces, and more. Their dream: today’s users interacting with the virtual digital world more naturally, and in more ways than ever before.

“What you really want is the impression of virtual shapes when you interact with objects in VR, not just the binary feedback you get from current devices. Our controllers render such shapes, giving your fingers and hands continuous and dynamic haptic feedback while you interact.” – Christian Holz

CLAW

The first of the new haptic controllers developed by the team is the CLAW. The CLAW extends the concept of a VR controller to a multifunctional haptic tool using a single motor. At first glance, it looks very similar to a standard VR controller. A closer look reveals a unique motorized arm that rotates the index finger relative to the palm to simulate force feedback. It was first realized with the aid of Inrak Choy, an intern from Stanford University.

Figure 1. (Left) The CLAW’s configuration and components. (Right) The CLAW as it grabs a virtual object and touches a virtual surface.

The CLAW acts as a multi-purpose controller that contains the expected functionality of VR controllers (thumb buttons and joysticks, 6DOF control, index finger trigger) while also enabling a variety of haptic renderings for the most commonly expected hand interactions: grasping objects, touching virtual surfaces, and receiving force feedback.

But a unique characteristic of the CLAW is its ability to adapt its haptic rendering by sensing differences in the user’s grasp and the situational context of the virtual scene. As a user holds a thumb against the tip of the index finger, the device simulates a grasping operation: the closing of the fingers around a virtual object is met with a resistive force, generating a sense that the object lies between the index finger and the thumb. A force sensor embedded in the index finger rest, combined with changes to the motor’s response profile, enables the rendering of objects of different materials, from a fully rigid wooden block to an elastic sponge.
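To make that idea concrete, here is a minimal sketch of how such grasp rendering might work, assuming a simple spring-damper model; the material values and function names are invented for illustration and are not the CLAW’s actual firmware.

```python
# Minimal sketch (not the actual CLAW firmware): rendering grasp resistance
# with a single motor using per-material stiffness profiles.

# Hypothetical material profiles; the stiffness (N/mm) and damping values
# are chosen only for illustration.
MATERIALS = {
    "wood":   {"stiffness": 8.0, "damping": 0.5},   # nearly rigid
    "sponge": {"stiffness": 0.6, "damping": 0.2},   # soft and elastic
}

def grasp_force(finger_pos_mm, finger_vel_mm_s, object_surface_mm, material):
    """Return the resistive force (N) the motor should apply to the index finger.

    finger_pos_mm: how far the finger has closed toward the palm
    object_surface_mm: closing distance at which the finger meets the virtual object
    """
    penetration = finger_pos_mm - object_surface_mm
    if penetration <= 0.0:
        return 0.0  # the finger has not yet reached the virtual object
    profile = MATERIALS[material]
    # Spring-damper model: a stiff profile feels like wood, a soft one like sponge.
    force = profile["stiffness"] * penetration + profile["damping"] * finger_vel_mm_s
    return max(force, 0.0)
```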

If the user holds the thumb away from a grasp pose, for example resting it on the handle and shaping the hand into a pointing gesture, the controller instead delivers touch sensations. Moving the tip of the finger toward the surface of a virtual object generates a resistance that pushes the finger back and prevents it from penetrating the virtual surface. Furthermore, a voice coil mounted under the tip of the index finger delivers small vibrations, generated by surface texture, as the finger slides along a virtual surface.
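Texture vibration of this kind is often driven by sliding speed and the texture’s spatial period; the sketch below shows one plausible mapping, with the function name and constants assumed for illustration rather than taken from the CLAW.

```python
import math

def texture_vibration(slide_speed_mm_s, texture_period_mm, roughness, t):
    """Return an instantaneous voice-coil drive level in [-1, 1].

    Sliding faster over a finer texture raises the vibration frequency;
    'roughness' (0..1) scales the amplitude. All parameters are illustrative.
    """
    if slide_speed_mm_s <= 0.0 or texture_period_mm <= 0.0:
        return 0.0  # no motion or no texture: the voice coil stays silent
    frequency_hz = slide_speed_mm_s / texture_period_mm
    return roughness * math.sin(2.0 * math.pi * frequency_hz * t)
```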

Sensing the force the user applies can also enrich interaction with virtual objects. When pushing a virtual slider, the rendered friction can signal preferred states, and applied pressure can change the attributes of a paintbrush or pen in a drawing program. The CLAW would represent a singular achievement all on its own, but it is only one of several innovations in the haptic controller space that the team is sharing with the world.

Haptic Wheel

To further explore the possibilities of rendering the tactile experience of friction and material properties under the index finger, another new haptic controller prototype, the Haptic Wheel, also known as the Haptic Revolver, was developed by the researchers working closely with Eric Whitmire, an intern from the University of Washington. The Haptic Wheel uses an actuated wheel that moves up and down to render touch contact with a virtual surface and spins to render shear forces and motion as the user slides along a virtual surface.

The controller is designed so that the index finger rests in a groove along the wheel’s axis. When a user touches a virtual surface, the wheel rises to contact the fingertip, and its rotation simulates the friction between the fingertip and the virtual surface, based on the speed of the hand’s motion along that surface.
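A rough sketch of that mapping, assuming the wheel’s rim speed should match the fingertip’s tangential speed, might look like the following; the wheel radius and function names are placeholders, not the device’s published parameters.

```python
WHEEL_RADIUS_MM = 15.0  # assumed wheel radius, for illustration only

def wheel_commands(finger_height_mm, hand_velocity_mm_s, surface_tangent):
    """Return (raise_wheel, angular_velocity_rad_s) for one rendering tick.

    hand_velocity_mm_s and surface_tangent are 3D vectors (x, y, z);
    the tangent points along the direction the wheel can render.
    """
    in_contact = finger_height_mm <= 0.0  # fingertip at or below the virtual surface
    if not in_contact:
        return False, 0.0  # lower the wheel; no spin needed
    # Project the hand's velocity onto the renderable surface direction.
    tangential_speed = sum(v * t for v, t in zip(hand_velocity_mm_s, surface_tangent))
    # Spin the wheel so its rim moves at the speed of the sliding fingertip.
    angular_velocity = tangential_speed / WHEEL_RADIUS_MM
    return True, angular_velocity
```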

Users who tested the Haptic Wheel reported a convincing haptic sensation corresponding to their visual hand motion. Even when they moved their hand freely around a virtual surface while the rendered wheel motion was limited to a single horizontal direction, users reported the haptic feedback to be quite realistic.

Figure 2: (Left) When a user hovers over the blue surface, the rendering engine places the appropriate wheel surface under the finger and begins to track the nearby edge of the black surface. (Center) As the user approaches the edge, the rendering engine positions the wheel so that the edge approaches the finger. (Right) While hovering over the smaller black surface, the rendering engine adjusts the gain of the wheel so that the two edges are rendered correctly.

The device’s wheel is interchangeable and can hold a variety of physical haptic elements including ridges, textures, or custom shapes that provide different sensations to the user as they explore a virtual environment. Because the device is spatially tracked, these haptic elements are spatially registered with the virtual environment. That is, as the user explores a virtual environment, the rendering engine delivers the appropriate haptic element underneath the finger. For example, in a virtual card game, when a user touches a card, a poker chip, or the table, the device rotates the wheel to render the appropriate texture underneath the fingertip. As the user slides along one of these surfaces, the wheel moves underneath the finger to render shear forces and motion.
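The spatial registration could be expressed along these lines; the wheel sectors and object-to-texture table below are invented for the card-game example and are not the actual rendering engine.

```python
# Angular position (degrees) of each physical texture on the wheel, and the
# texture each virtual object should feel like. Both tables are invented
# for this card-game example.
WHEEL_SECTORS = {
    "felt":    0.0,
    "plastic": 120.0,
    "paper":   240.0,
}

OBJECT_TEXTURES = {
    "card":       "paper",
    "poker_chip": "plastic",
    "table":      "felt",
}

def target_wheel_angle(touched_object):
    """Rotate the wheel so the matching physical texture sits under the fingertip."""
    texture = OBJECT_TEXTURES.get(touched_object)
    if texture is None:
        return None  # unknown object: leave the wheel where it is
    return WHEEL_SECTORS[texture]
```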

The Haptic Wheel can be generalized to fit many applications. Applications can also use custom wheels with the necessary haptic features. For example, a virtual petting zoo game might use a wheel containing various textures while a virtual cockpit environment might use a wheel with input elements such as buttons and switches. The Haptic Wheel demonstrates that convincing fingertip sensations can be generated by a simple device that is easy to grasp and use.

Figure 3: (Left) The controller’s wheel is used to render different textures in a card table game. The wheel used in this demo consists of two regions of soft felt, a hard plastic ridge, and a small section of paper. When the user touches an object in the scene, the appropriate texture is placed underneath the fingertip. (Center) A painting and sculpting demo that highlights the ability to render shapes and sense the force applied to the wheel. The wheel used in this demo consists of a raised nub and a ridge to simulate holding tools. The user presses on the wheel to activate the tool. The model can be explored by touch. (Right) A wheel containing several physical UI elements gives haptic feedback in a mixer board application. When users touch a virtual UI element, not only do they feel the shape of a similar physical element, they can physically interact with the widget.

Haptic Links

Another elusive milestone in haptics has been the use of both hands in a VR or AR application: carrying a box with both hands and sensing its size, using two-handed devices and experiencing the tension of an archer’s bow or the slide of a trombone, or even turning a wrench on a virtual pipe.

Figure 4. Objects explored in the user evaluation. Left to right, top to bottom: a power tool, a bow with changing string tension, a slide trombone, and maracas. Overlay added in post-production for visualization.

Haptic Links, developed at Microsoft Research with Evan Strasnick, an intern from Stanford University, consists of several types of connectors capable of rendering variable stiffness between two handheld commercial VR controllers. A Haptic Link can dynamically alter the forces perceived between the user’s hands to support rendering of a variety of two-handed objects and interactions. Haptic Links can rigidly lock the controllers in an arbitrary configuration, making them feel and behave like a two-handed tool (Figure 4). They can constrain specific degrees of freedom or directions of motion between the controllers, such as when turning a crank or pulling a lever. They can even set stiffness along a continuous range to render friction, viscosity, or tension. In these ways, Haptic Links augment existing handheld controllers with realistic mechanical constraints, making interaction and game play in VR scenarios more immersive and tangible.
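One way an application might describe the constraint it wants a Haptic Link to render is sketched below; the class and field names are assumptions, not an actual Haptic Links API.

```python
from dataclasses import dataclass

@dataclass
class LinkConstraint:
    """Hypothetical description of the constraint a Haptic Link should render."""
    hinge_stiffness: float       # 0.0 = free, 1.0 = rigidly locked
    ball_joint_stiffness: float  # friction applied to controller rotation
    lock_extension: bool         # block motion that increases hand separation
    lock_compression: bool       # block motion that decreases hand separation

# A rigid two-handed tool, such as the power tool in Figure 4.
POWER_TOOL = LinkConstraint(1.0, 1.0, True, True)

# An archer's bow: rotation stays free, while drawing meets continuous tension.
BOW_DRAW = LinkConstraint(0.4, 0.0, False, False)
```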

In envisioning Haptic Links, the Microsoft Research team began with numerous design considerations, including stiffness when actuated, flexibility when relaxed, resolution of stiffness, weight, bulk, moment of inertia, actuation speed, power consumption, noise, and range of motion. The ideal Haptic Link would be virtually undetectable to a user prior to actuation, but could stiffen quickly, strongly, and precisely as needed.

Exploration yielded three prototype Haptic Links, all capable of allowing and halting the 6-DOF motion of handheld VR controllers. Each design has tradeoffs and advantages over the others, making each best suited to specific applications. The Microsoft Research team sees designers of VR experiences choosing the Haptic Link that best meets their needs, allowing users to quickly attach the recommended Haptic Link to their controllers before entering a virtual world.

Figure 5. Haptic Links: Layer-Hinge (Left), Chain (Center) and Ratchet-Hinge (Right).

The Chain prototype (Figure 5, center) utilizes a highly articulated chain composed of ball-and-socket elements. A strong cable is threaded through the length of the chain and tethered to a linear actuator on each end. With the linear actuators extended, the chain is kept loose so that the user can arbitrarily move the controllers in 3D space. When the linear actuators retract the cable, the ball-and-socket elements are compressed into one another, increasing the friction at each joint in the chain. As a result, the entire chain stiffens, fixing the current spatial relationship between the two controllers.

The Layer-Hinge (Figure 5, left) uses ball joints to allow rotation of the controllers and a hinge that controls the distance between them. It has the advantage of selectively locking individual degrees of freedom in the motion of the controllers. For example, if the hinge is locked but the ball joints remain free, the controllers can rotate at a fixed distance apart, much like joysticks. Further, the friction of each joint can be controlled with relative precision, allowing the device to render a continuous range of stiffness values in both the hinge and ball joints. The hinge can be set to different resistive forces depending on the application.

The Ratchet-Hinge (Figure 5, right) uses similar ball joints beneath the controllers but replaces the hinge with a dual ratchet mechanism capable of independently braking inward or outward motion. With both ratchets engaged, the gear is fixed; with both disengaged, the gear can rotate freely. When one ratchet is disengaged, the gear can move freely in one direction but not in the opposite direction. These directionally selective capabilities enable unique force-feedback interactions, such as the rendering of an impassable midair surface.
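A minimal sketch of that directional braking idea, with invented names, might read as follows; engaging only the ratchet that opposes motion toward the virtual surface leaves the other direction free.

```python
def ratchet_state(opening_angle_deg, surface_angle_deg):
    """Return (block_opening, block_closing) for the dual ratchet mechanism.

    If an impassable virtual surface lies in the opening direction, engage only
    the ratchet that stops further opening; the hands can still pull back.
    Angle names and units are illustrative.
    """
    if opening_angle_deg >= surface_angle_deg:
        return True, False   # at or past the surface: block further opening
    return False, False      # away from the surface: both directions stay free
```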

Haptic Links demonstrate the potential to improve the haptic rendering of two-handed objects and interactions in VR using inter-controller variable stiffness feedback. The multiple implementations yield different capabilities and advantages for object rendering.

In short, Haptic Links can improve the perceived realism of two-handed objects without significantly detracting from the rendering of normal interactions requiring disjoint controllers.

Canetroller

In keeping with Microsoft’s decades-long devotion to accessibility, the research team identified early on one community that could particularly benefit from improved haptics: people who are blind or have low vision. “Conventional” virtual reality experiences are heavily visual in nature and not accessible to people with visual impairments. Working with interns Yuhang Zhao from Cornell University and Cindy Bennett from the University of Washington, Microsoft Research developed the Canetroller prototype to enable people who are skilled white cane users in the real world to transfer their navigation abilities into virtual settings.

Canetroller provides three types of feedback: (1) physical resistance generated by a wearable programmable brake mechanism that physically impedes the controller when the virtual cane comes in contact with a virtual object; (2) vibrotactile feedback that simulates the vibrations when a cane hits an object or touches and drags across various surfaces; and (3) spatial 3D auditory feedback simulating the sound of real-world cane interactions.
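A simplified sketch of how one cane-contact event could be routed to those three channels is shown below; the brake, voice coil, and audio interfaces are assumed placeholders rather than the real Canetroller API.

```python
def on_cane_contact(event, brake, voice_coil, audio):
    """Route one virtual cane-collision event to braking, vibration, and 3D sound.

    'brake', 'voice_coil', and 'audio' are assumed placeholder device objects.
    """
    if event.kind == "impact":
        brake.engage(strength=event.impact_force)        # physically stop the swing
        voice_coil.pulse(amplitude=event.impact_force)   # sharp contact tick
        audio.play("cane_tap", position=event.world_position)
    elif event.kind == "drag":
        brake.release()
        voice_coil.vibrate(amplitude=event.surface_roughness,
                           frequency=event.drag_speed * event.texture_density)
        audio.play_loop("cane_scrape", position=event.world_position)
```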

Canetroller has significant potential in areas such as entertainment, Orientation and Mobility training, and environment preparation. Microsoft Research hopes its work can inspire researchers and designers to create more effective tools that make VR more inclusive for differently abled people all over the world.

Figure 6. (A) A blind user wearing the gear for Microsoft Research’s VR evaluation, including a VR headset and Canetroller, our haptic VR controller. (B) The mechanical elements of Canetroller. (C) Overlays of the virtual scene atop the real scene show how the virtual cane extends past the tip of the Canetroller device and can interact with the virtual trash bin. (D) The use of Canetroller to navigate a virtual street crossing: the inset shows the physical environment, while the rendered image shows the corresponding virtual scene. Note that users did not have any visual feedback when using our VR system. The renderings are shown here for clarity.

Tests in specially designed indoor and outdoor VR scenes showed the Canetroller to be a promising tool, enabling people who are blind or have low vision to navigate different virtual spaces, identify the location and materials of objects in a virtual office, and cross the street at a virtual crosswalk. The Canetroller enables novel scenarios such as new types of Orientation and Mobility training, in which people can practice white cane navigation skills virtually in specific settings before travelling to a real-world location. This type of device can also make VR gaming and entertainment more accessible to people who cannot experience a simulation visually.

Taken individually and, especially, together, these innovations in haptic research by the Microsoft Research team offer inspiration to other researchers in the space, opportunities for immediate commercial application, and, for communities such as people with visual impairments, hope for new opportunities and improvements in quality of life. Additionally, the researchers hope that their innovations will encourage the adoption of more haptic rendering technology in consumer peripherals, and that tactile offerings will soon evolve to make VR and AR experiences even more realistic and immersive. As researcher Eyal Ofek expressed it, “I envision a future where the distinction between virtual content and real-world objects becomes fuzzy, one in which interaction with technology will be as natural as operating in the real world.”
