Much of the hype around Virtual Reality, or VR, has focused on immersive gaming and entertainment, but the technology also has profound implications for enhancing the way we work. Imagine the perfect office environment: multiple large displays, proper lighting, free from distractions and isolated from outside disturbances. VR can be used to generate the ideal virtual office—even from the confines of an airplane seat or a tiny touchdown space.
For most users, a traditional keyboard paired with a large, high-resolution monitor remains the preferred setup for editing long documents, working on spreadsheets, and browsing the internet. Yet many users do not touch-type, and so must constantly shift their gaze between the display and their hands on the keyboard. This way of working creates unique challenges in the virtual office environment, where users can see neither their physical hands nor the traditional or touch-screen keyboard on the desk in front of them.
In our study, we found that while a user’s typing skills transfer seamlessly from the real world to the virtual world, typing speed in a baseline virtual environment is markedly slower than typing in the physical environment: users typed at an average of 60% of their usual rate when working in VR. We attribute this loss of speed to the novelty of the situation as well as to the limitations of the VR display (specifically, lower resolution and latency).
Our team sought to address these challenges by exploring opportunities presented by VR. We found that we can indeed improve the efficiency of keyboard input by manipulating the scene presented to the user. Our results are presented in two papers at IEEE VR 2018, authored by Eyal Ofek and Michel Pahud of Microsoft Research; Jens Grubert, Professor of Human-Computer Interaction at Coburg University of Applied Sciences and Arts, Germany; Lukas Witzani and Professor Matthias Kranz of Passau University; and Per Ola Kristensson of the University of Cambridge.
The first paper examines the effects of repositioning the virtual keyboard. VR allows us to situate the keyboard wherever we want; for example, placing it closer to the document or object of interest and displaying a graphical representation of the user's hands in relation to the keyboard (in our tests, we used circles representing the fingertips). While this eliminates the need to constantly shift one's attention between the keyboard and the document, it may also require users to reposition their hands while typing. Will that feel unnatural? Will it affect the quality of our typing?
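The idea of decoupling the displayed keyboard from its physical location can be illustrated with a small sketch. This is not the papers' implementation; it assumes a hypothetical tracker that reports fingertip positions in the physical keyboard's coordinate frame, and it models the relocation as a simple rigid translation of that frame to wherever the virtual keyboard is drawn.

```python
# Sketch: re-anchoring tracked fingertips to a relocated virtual keyboard.
# The function name and the rigid-offset model are illustrative assumptions.

def reanchor_fingertips(fingertips, physical_keyboard_origin, virtual_keyboard_origin):
    """Map fingertip positions from the physical keyboard's frame to the
    relocated virtual keyboard's frame by applying the translation between
    the two origins. The returned positions can then be rendered as circles
    over the virtual keyboard, wherever it has been placed in the scene.

    fingertips: list of (x, y, z) positions tracked over the physical keyboard.
    """
    offset = tuple(v - p for v, p in zip(virtual_keyboard_origin, physical_keyboard_origin))
    return [tuple(f + o for f, o in zip(tip, offset)) for tip in fingertips]

# Example: physical keyboard on the desk at the origin; virtual keyboard
# floated up next to the document, 30 cm higher and 20 cm farther away.
tips = [(0.05, 0.0, 0.0), (0.10, 0.0, 0.0)]
relocated = reanchor_fingertips(tips, (0.0, 0.0, 0.0), (0.0, 0.3, -0.2))
```

Because the user's hands stay on the physical keyboard, only the rendering moves; the typing motion itself is unchanged, which is exactly why hand repositioning becomes a perceptual question rather than a motor one.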
The second paper explores the impact of changing the representation and display of the user’s hands in the virtual environment. For example, making our hands transparent in the virtual environment might provide an unobstructed view of the keyboard, but will it affect our ability to type?
While moving the location of the displayed keyboard and hands had little impact on typing efficiency with a traditional keyboard, varying the representation of the user’s hands in the virtual environment produced interesting results.
We presented users with four different representations of their hands as they typed in a virtual reality scene (see Figure 3). The first two methods were analogous to traditional input methods, while the third and fourth used manipulations possible only in VR:
- A video of their hands, which is closest to the natural situation of typing without VR.
- A 3D model of their hands, animated according to the tracking of the users' real hands.
- A 3D model in which most of each hand was transparent and only the fingertips were displayed, to maximize the visibility of the keyboard.
- Only the keys being pressed on the keyboard; that is, with hands that are completely transparent.
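The fingertip-only condition can be sketched as a filter over a tracked hand skeleton: discard every joint except the fingertips and emit small circles to draw. This is a minimal illustration, not an actual VR SDK API; the joint names and the circle-primitive format are assumptions.

```python
# Sketch: the fingertip-only hand representation (third condition).
# Assumes a hypothetical tracker that reports named joints as 3D positions.

FINGERTIP_JOINTS = ["thumb_tip", "index_tip", "middle_tip", "ring_tip", "little_tip"]

def fingertip_circles(hand_joints, radius=0.008):
    """Keep only the fingertip joints and return circle primitives to render,
    leaving the rest of the hand transparent so the keyboard stays visible."""
    return [
        {"center": hand_joints[name], "radius": radius}
        for name in FINGERTIP_JOINTS
        if name in hand_joints
    ]

# Example: a partial skeleton; only the tracked fingertips produce circles,
# and the wrist joint is dropped entirely.
joints = {
    "wrist": (0.0, 0.0, 0.0),
    "thumb_tip": (-0.03, 0.05, 0.02),
    "index_tip": (0.02, 0.09, 0.01),
}
circles = fingertip_circles(joints)
```

The appeal of this representation is that it conveys exactly the information a non-touch-typist needs (where the fingers are over the keys) while occluding as little of the keyboard as possible.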
Surprisingly, the minimalistic model of the transparent hand with only fingertips visible was as easy to use and as efficient as the blended video of the users' hands. Further, the full 3D model of the hand was not as useful: subtle differences in the model's motions, as well as differences between the model's appearance and the actual look of the user's hand, may have generated a dissonance between the user and the model hands. With the full 3D model, users could type, but not as fast or as accurately as with the two methods above. In fact, the results were comparable to not showing the hands at all.
In our study, we used a high-precision optical tracker and wearable markers on the users’ hands, but we foresee that future systems will track users’ hands from the headset and/or the keyboard. Further, employing inside-out tracking, as in the Microsoft HoloLens and Windows Mixed Reality, eliminates the need for external cameras, lighthouses, or markers, enabling mobile VR office environments that can be used anywhere.
The goal of our work is to inform and engage the research community and enable other researchers to build on our results. We can imagine many possibilities beyond the four we tested; studying new and varied conditions in combination with emerging VR technology will drive the innovations that empower people to achieve more.
Videos of our test protocols can be viewed at the links below: