The first is a project aimed at surgeons. Rich, detailed 3-D medical imagery is an invaluable tool for surgeons attempting to navigate through a patient's body and to precisely locate organs, tumors and foreign bodies to be removed. However, making that imagery interactive in the surgical theater is a challenge: the surgical team must leave the sterile environment in order to directly manipulate a mouse and keyboard. With a Kinect-based gesture interface to the visual navigation system, a scrubbed-up surgeon can instead use gestures in the air to manipulate the images without contaminating his or her hands. This saves precious time during critical procedures, and allows the surgical team to more easily — and more frequently — use the best available imagery to guide their work. This project has been developed in close cooperation with researchers at Lancaster University and King's College London, as well as the vascular surgery department at Guy's & St. Thomas' Hospital in London, and the neurosurgery department at Addenbrooke's Hospital in Cambridge.
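To make the idea concrete, here is a minimal sketch of how mid-air hand motion might be mapped onto image-navigation commands. The `Frame` and `ImageViewer` types, the thresholds, and the simulated input are all illustrative assumptions for this sketch, not the project's actual components or the Kinect SDK; a real system would read joint positions from the Kinect skeletal stream.

```python
# Hypothetical sketch: map tracked hand motion to touchless image navigation.
# Frame and ImageViewer are stand-ins, not the actual system described above.

from dataclasses import dataclass

SWIPE_THRESHOLD_M = 0.25   # lateral hand travel (metres) that counts as a swipe
ZOOM_SCALE = 2.0           # zoom factor per metre of change in hand separation


@dataclass
class Frame:
    right_hand_x: float     # right-hand position relative to the sensor, metres
    hand_separation: float  # distance between left and right hands, metres


class ImageViewer:
    """Hypothetical viewer: simply reports the commands it receives."""
    def next_slice(self): print("viewer: next slice")
    def prev_slice(self): print("viewer: previous slice")
    def zoom(self, factor): print(f"viewer: zoom x{factor:.2f}")


def run(frames, viewer):
    """Compare successive frames and dispatch navigation commands."""
    prev = None
    for frame in frames:
        if prev is not None:
            dx = frame.right_hand_x - prev.right_hand_x
            if dx > SWIPE_THRESHOLD_M:
                viewer.next_slice()       # swipe right: step forward through slices
            elif dx < -SWIPE_THRESHOLD_M:
                viewer.prev_slice()       # swipe left: step back
            dsep = frame.hand_separation - prev.hand_separation
            if abs(dsep) > 0.05:
                viewer.zoom(1.0 + ZOOM_SCALE * dsep)  # spread/close hands: zoom in/out
        prev = frame


if __name__ == "__main__":
    # Simulated frames: a swipe to the right, then spreading the hands apart.
    frames = [
        Frame(right_hand_x=0.00, hand_separation=0.30),
        Frame(right_hand_x=0.30, hand_separation=0.30),
        Frame(right_hand_x=0.30, hand_separation=0.50),
    ]
    run(frames, ImageViewer())
```

The point of the sketch is simply that each gesture is reduced to a change in tracked joint positions between frames, which is then translated into the same commands a mouse or keyboard would otherwise issue.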
The second project, “Kinect in the Dark,” explores how humans can interact with computers when the lights are off — an environment where it is difficult, and often hazardous, to interact with physical objects. Imagine waking up in the middle of the night and using gestures to interact with your alarm clock or phone rather than fumbling around with buttons. Or in a more fanciful direction, what would it be like to play an electronic musical instrument without the benefit of visual feedback or physical manipulation?
These two projects are further examples of how the physical and virtual worlds are blending together in exciting ways that open up many new possibilities.