About
I am a principal researcher on the EPIC (Extended Perception Interaction and Cognition) team at the Microsoft Research lab in Redmond, WA. My research interests include augmented reality (AR) and virtual reality (VR), haptics, interactive projection mapping, and computer vision for human-computer interaction.
Prior to joining Microsoft Research, I obtained my Ph.D. at the Hebrew University of Jerusalem and founded a couple of companies in the area of computer graphics, including one behind a successful drawing and photo editing application (Photon-Paint) and one that developed the world's first time-of-flight video cameras (ZCam).
For more up-to-date information, please see my external homepage at http://eyalofek.org/
Professional activities
- ACM Senior Member
- Specialty Chief Editor of Frontiers in Virtual Reality, for the area of Haptics.
- Assoc. Editor of Frontiers in Virtual Reality.
- Assoc. Editor of IEEE Computer Graphics and Applications (CG&A).
- Paper Chair: ACM SIGSPATIAL 2011, Chicago
- AC: ACM CHI
- PC: CHI, CVPR, SIGSPATIAL, ISMAR, ISS, Pacific Graphics.
Research Highlights

Enhancing mobile work and productivity with virtual reality webinar
In this webinar, Microsoft Researcher Eyal Ofek presents a summary of research investigating opportunities and challenges for realizing a mobile VR office environment. In particular, you’ll learn how VR can be mixed with standard off-the-shelf equipment (such as tablets, laptops, or desktops) to enable effective, efficient, and ergonomic mobile knowledge work.

Inside AR and VR, a technical tour of the reality spectrum with Dr. Eyal Ofek
Dr. Eyal Ofek is a senior researcher at Microsoft Research and his work deals mainly with, well, reality. Augmented and virtual reality, to be precise. A serial entrepreneur before he came to MSR, Dr. Ofek knows a lot about the “long nose of innovation” and what it takes to bring a revolutionary new technology to a world that’s ready for it. On today’s podcast, Dr. Ofek talks about the unique challenges and opportunities of augmented and virtual reality from both a technical and social perspective; tells us why he believes AR and VR have the potential to be truly revolutionary, particularly for people with disabilities; explains why, while we’re doing pretty well in the virtual worlds of sight and sound, our sense of virtual touch remains a bit more elusive; and reveals how, if he and his colleagues are wildly successful, it won’t be that long before we’re living in a whole new world of extension, expansion, enhancement and equality.

MR for Productivity - MSR Webinar, Dec 2020
As people work from home, new opportunities and challenges arise around mobile office work. On one hand, people may have flexible work hours and may not need to deal with traffic or long commutes. On the other hand, they may need to work at makeshift spaces, with less-than-optimal working conditions while physically separated from co-workers. Virtual reality (VR) has the potential to change the way we work, whether from home or at the office, and help address some of these new challenges. We envision the future office worker to be able to work productively everywhere, solely using portable standard input devices and immersive head-mounted displays. VR has the potential to enable this by allowing users to create working environments of their choice and by relieving them of physical limitations, such as constrained space or noisy environments.

AR & VR in the wild - a talk at Global XR bootcamp, Nov 2020
Virtual reality (VR) and augmented reality (AR) pose challenges and opportunities from both a technical and a social perspective. Digital, rather than physical, objects can now change our understanding of the world around us, a unique opportunity to change reality as we sense it. Microsoft researchers are looking for new possibilities to extend our abilities when we are not bound by our physical limitations, enabling superhuman abilities on the one hand and leveling the playing field for people with physical limitations on the other. I described efforts to design VR and AR applications that adjust to the user's uncontrolled environment, enabling continuous use during work and leisure across a large variance of environments. I also reviewed efforts to extend rendering to new capabilities such as haptic rendering.

Accessibility in VR & AR
Virtual reality (VR) is an incredibly exciting way to experience computing, providing users with intuitive and immersive means of interacting with information that attempts to mirror the way we naturally experience the world around us. It is also an opportunity to level the playing field for people with physical limitations as they access the non-physical virtual world. We published the open source 'SeeingVR' toolkit for people with low vision, and at CHI 2020 we presented work giving blind people access to VR experiences.

Haptics in MR - A talk at Frontiers in VR, May 2020
Haptics is an important everyday sense that enables physical interaction with the world around us. Immersive environments such as MR offer no or very limited haptic rendering today. We are working to advance the state of the art in haptics, both theoretically and by building novel working devices.

Haptic Controllers: How Microsoft is making virtual reality tangible
Researchers at Microsoft strive to advance perhaps one of the most challenging areas of research and development in virtual reality. Mike Sinclair showcases four haptic controllers and discusses their goal to realize and deliver truly immersive and convincing tactile experiences…

Interactive Projection Mapping
The use of projectors enables large-area augmentation that can be shared by multiple users. We presented IllumiRoom at CES and released the open source 'RoomAlive' Toolkit, which has become a common tool for projection mapping research.
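To give a flavor of the core math behind projection mapping, here is a minimal sketch of how a point on the room geometry maps to a projector pixel under a pinhole projector model. The function name and calibration values are hypothetical placeholders for illustration, not the RoomAlive Toolkit's API.

```python
# Minimal sketch: mapping a 3D point on the room geometry to a projector
# pixel using a calibrated pinhole model. All values below are hypothetical.
import numpy as np

def project_to_projector(point_world, R, t, K):
    """Project a 3D world point into projector pixel coordinates (u, v)."""
    p_proj = R @ point_world + t      # world -> projector coordinate frame
    uvw = K @ p_proj                  # apply projector intrinsics
    return uvw[:2] / uvw[2]           # perspective divide -> pixel coords

# Hypothetical calibration for a 1920x1080 projector.
K = np.array([[1400.0,    0.0, 960.0],
              [   0.0, 1400.0, 540.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                         # projector aligned with world axes
t = np.zeros(3)

# A point on a wall 3 meters in front of the projector.
print(project_to_projector(np.array([0.5, 0.2, 3.0]), R, t, K))
```

In a full system such as RoomAlive, the rotation, translation, and intrinsics come from projector-camera calibration against depth-sensed room geometry; the sketch above only shows the final projection step.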

Computer Vision
(Pictured) Stroke Width Transform, a popular text detection feature; analysis by synthesis; 2D and 3D completion.
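As an illustration of the idea behind the Stroke Width Transform, the sketch below marches from each edge pixel along its gradient direction to an opposing edge and records the crossing distance as a stroke-width estimate. It is a simplified, hypothetical rendition in Python with OpenCV, not the published implementation.

```python
# Simplified Stroke Width Transform sketch: edge pixels are paired with the
# opposing edge found along their gradient, and the distance between them is
# taken as the local stroke width.
import numpy as np
import cv2

def stroke_width_transform(gray, max_width=50):
    """Assign to pixels along each edge-to-edge ray an estimated stroke width."""
    edges = cv2.Canny(gray, 100, 200)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy) + 1e-9
    dx, dy = gx / mag, gy / mag       # unit gradient direction per pixel

    h, w = gray.shape
    swt = np.full((h, w), np.inf)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        ray = [(y, x)]
        # March along the gradient until another edge pixel is reached.
        for step in range(1, max_width):
            ix = int(round(x + dx[y, x] * step))
            iy = int(round(y + dy[y, x] * step))
            if not (0 <= ix < w and 0 <= iy < h):
                break
            ray.append((iy, ix))
            if edges[iy, ix]:
                # Accept only if the opposite gradient is roughly anti-parallel,
                # i.e. the two edges face each other across a stroke.
                if dx[y, x] * dx[iy, ix] + dy[y, x] * dy[iy, ix] < -0.7:
                    width = np.hypot(ix - x, iy - y)
                    for ry, rx in ray:
                        swt[ry, rx] = min(swt[ry, rx], width)
                break
    return swt
```

Text regions tend to have locally consistent stroke widths, which is what makes this map a useful feature for detecting text in natural images.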

Camera Based Interaction
Cameras are cheap and powerful sensors. I am looking into ways that we can leverage camera capabilities to compensate for the physical limitations of existing displays and input techniques.