In the effort to develop extremely flexible and versatile interaction techniques for virtual environments, research on manipulation in virtual reality has focused heavily on visual feedback techniques (such as highlighting an object when the selection cursor passes through it) and generic input devices (such as the glove).

Such virtual manipulations lack many qualities of real-world physical manipulation that users might expect or unconsciously depend upon. For example, when selecting a virtual object with a glove, the user must visually attend to the object (watch for it to become highlighted) before selecting it. But what if the user's attention is needed elsewhere? What if the user is monitoring an animation and is just trying to pick up a tool?

We believe that designers of virtual environments can take better advantage of human motor, proprioceptive, and haptic capabilities without necessarily giving up flexibility and versatility. In support of this claim, we present our experiences with two systems: the two-handed props interface for neurosurgical visualization [2] and the Worlds in Miniature (WIM) metaphor [6].