Microsoft Research Podcast

An ongoing series of conversations bringing you right up to the cutting edge of Microsoft Research.

Inside AR and VR, a technical tour of the reality spectrum with Dr. Eyal Ofek

September 25, 2019 | By Microsoft blog editor

Episode 91, September 25, 2019

Dr. Eyal Ofek is a senior researcher at Microsoft Research and his work deals mainly with, well, reality. Augmented and virtual reality, to be precise. A serial entrepreneur before he came to MSR, Dr. Ofek knows a lot about the “long nose of innovation” and what it takes to bring a revolutionary new technology to a world that’s ready for it.

On today’s podcast, Dr. Ofek talks about the unique challenges and opportunities of augmented and virtual reality from both a technical and social perspective; tells us why he believes AR and VR have the potential to be truly revolutionary, particularly for people with disabilities; explains why, while we’re doing pretty well in the virtual worlds of sight and sound, our sense of virtual touch remains a bit more elusive; and reveals how, if he and his colleagues are wildly successful, it won’t be that long before we’re living in a whole new world of extension, expansion, enhancement and equality.

Transcript

Eyal Ofek: I see an opportunity to generate reality which is not limited by the physical laws that our reality abides by. Nowadays, all our life, we are in the cage of our body. We see things from our point of view; we can go wherever our body can get us and do what our senses allow. And suddenly, we could see things from someone else’s point of view. We could extend our senses. And as much as it’s life changing for me, it could be even more life changing for people who currently have limitations. In a non-physical world, there’s less importance to what my physical ability is.

Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting-edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.

Host: Dr. Eyal Ofek is a senior researcher at Microsoft Research and his work deals mainly with, well, reality. Augmented and virtual reality, to be precise. A serial entrepreneur before he came to MSR, Dr. Ofek knows a lot about the “long nose of innovation” and what it takes to bring a revolutionary new technology to a world that’s ready for it.

On today’s podcast, Dr. Ofek talks about the unique challenges and opportunities of augmented and virtual reality from both a technical and social perspective; tells us why he believes AR and VR have the potential to be truly revolutionary, particularly for people with disabilities; explains why, while we’re doing pretty well in the virtual worlds of sight and sound, our sense of virtual touch remains a bit more elusive; and reveals how, if he and his colleagues are wildly successful, it won’t be that long before we’re living in a whole new world of extension, expansion, enhancement and equality. That and much more on this episode of the Microsoft Research Podcast.

Host: Eyal Ofek, welcome to the podcast!

Eyal Ofek: Hi, great to be here.

Host: You’re a senior researcher at Microsoft Research and your work is centered around “reality”… particularly, as we’ve talked about, augmented reality and virtual reality, or AR and VR, as I will refer to them for the rest of the podcast. And even this idea of mixed reality which we didn’t talk too much about… Before we get started, let’s operationalize those terms.

Eyal Ofek: Right.

Host: Obviously, there’s some overlap. How would you define the different kinds of realities that you’re working on, from a technical perspective, and how are they the same or different from each other?

Eyal Ofek: So, I would say it’s a continuum. There’s no one point. We sense the world through our senses: seeing, hearing, tasting, smelling and so on. Going to augmented or virtual reality is just different levels of intervening with those senses. Let’s say that, right now, I’m looking through my eyes and I see reality, but if I put something new in part of my field of view, then this is what people call augmented reality: you see reality, but there’s something, I don’t know, surreal that appears there. If I continue adding more and more content, I see less and less of reality, and eventually, when all of my field of view is filled with new content, this is what we call virtual reality. But those terms, again, are just points along a continuum, and the same goes for audio and other senses.

Host: So, if you’re not experiencing anything additional, then it’s real reality.

Eyal Ofek: Yes.

Host: As far as we can define that…!

Eyal Ofek: Exactly, exactly.

Host: That’s a whole other podcast! Let’s talk about grand aspirations for a second, as we’re getting started here. What is the big idea behind AR and VR? What are you trying to accomplish? What gets you up in the morning?

Eyal Ofek: First, philosophically, we understand the world around us through the senses. It’s a whole new world. We could see how digital, not physical, things could change our reality. This is a whole new concept. Up till now, when we use, say, digital programs, we use them in a device. We take out the phone, we look at the screen and we see something there. Changing reality around us? That sounds a bit weird, right? But it opens up so many possibilities. It’s a bit different than what people may experience right now, but I see that as an opportunity, because this could change reality in a way that makes it better for me. I could be a superhuman in there. I could understand what’s happening somewhere other than next to me. With the flick of a finger, I can go from this meeting to my workstation. Just opportunities that are beyond what we know in our physical world. We can reach things that we cannot reach, we can enable things that we cannot do, and for people who have limitations in the physical world, it might be an opportunity to level the playing field.

Host: Interesting.

Eyal Ofek: We looked at ways that it could affect conversation between people, because you suddenly have a lot of information with you all the time, and it doesn’t have the social effects of getting the phone out and looking at it while talking. For example, we found out that we can help introverts have better conversations.

Host: Great application for the software field! Well, I want to come back to that, and come back to it in depth a little later in the podcast, but even with what you’ve talked about right now: twenty-five years ago, I did not have the smartphone with which I could find out anything I wanted, any time I wanted, anywhere I wanted. And so, in a sense, I have a superpower already. It just happens to be in my pocket, not in my head, right now.

Eyal Ofek: You’re right in the sense that, when we got the internet, we immediately sort of upgraded our ability to access information. The augmented/virtual space here is about getting that information, without any visible device, when you need it. We’re looking at the different ways it can change people’s lives, from how we work, obviously, and how we can make things more efficient, to how we play, to how we communicate and interface. Because one thing that you can see today is that all those screens around us basically reduce communication between people. You can see a family sitting around the table and each one is looking at their own phone. What if the information was around them, so at least their line of sight would include other people? And what if the system could show you private information, but incorporate that other person in a way that you might stay immersed, but still be able to communicate with other people?

Host: You got your start in computer vision, and you were a bit of a serial entrepreneur before you got your PhD.

Eyal Ofek: Yes.

Host: Tell us about some of the projects you worked on, back in the day, and then talk about the importance of timing especially as it pertains to the little failures that eventually lead to big successes.

Eyal Ofek: Yeah. I had several startups even before doing my PhD, and it’s fun! You could do things which were totally new, show them to the world and enjoy the impact that you see around them. Like, we did a drawing application which also included 3D manipulation, way before Photoshop. Or we did a game engine that included a simulation of ray tracing, and that was in ’94… My enjoyment was when we did things which were really new and pushing the envelope, and that was, maybe, different than what’s needed from the business point of view!

Host: Yeah.

Eyal Ofek: At some point I decided, okay, let’s go all the way for novelty and I joined Microsoft Research.

Host: I want you to drill in on this idea of timing because, when we talked before, you mentioned that you were kind of on-point, but too early in the game, and you even mentioned a street view camera prior to Google’s…

Eyal Ofek: So, that actually was after I joined Microsoft, but you’re right. I mean, if I look at my startups, they could have been much more successful if they had come a few years later. It’s true for many things that you try to push, and you have to understand that even once you have a working prototype, there might still be a long way to go to get it into a product. So, for example, we actually did a project where we mapped two cities, San Francisco and Seattle, and again, this one actually wasn’t that early, but it still was a year and a half before the first commercial one.

Host: So, just speculating here, what do you think, then, contributes to the world not being ready for your great idea?

Eyal Ofek: Well, it’s not that the world is not ready… But basically, in general, I know you talked with Bill Buxton in the past, and he has this term of the “long nose of innovation”…

Host: Right.

Eyal Ofek: …where we see different new technologies come out and, many times, it’s twenty years before they actually, uh, reach, uh…

Host: Maturity.

Eyal Ofek: Yeah.

(music plays)

Host: You framed AR and VR as a revolution. If we look at this through the lens of other revolutions, why do you think AR and VR qualify as revolutionary?

Eyal Ofek: Okay. Let me explain that. I can come up with different directions to show the difference between what they enable, or what I expect them to do, and what we had before. It’s not about, oh, I can put on a headset and now I’m immersed in a game which is all around me… No, that’s not the revolution… I mean, it’s nice and, uh…

Host: Pretty cool.

Eyal Ofek: …yeah, but there are several things that I think are a major difference, most of which you don’t yet see in VR applications today, or AR for that matter. The first one, coming from technology, is: suppose I want to write an AR/VR application right now. How do I do that? Right now, what people do is very much like what they did in, say, writing a game for Xbox. I design a game. I develop the game, test it, clear all the bugs and so on. And once I have it, I ship it as a product. And everybody has to do exactly what I did, basically making sure that whatever platform they use to run it is exactly the platform I used to build it.

Host: So, it’s standardized.

Eyal Ofek: Yes. And you see that in all software. Now, what I see with AR/VR is the fact that the software has come out of the computer and now lives in our environment. So, it’s opened up sort of a Pandora’s box of, how do I write such software, because it should run differently for different people? So, we set that out as a goal, and we started working on it almost ten years ago: trying to see, how can we make sure that software will be, on one hand, similar for everybody, with the same playability, the same story, the same tasks that you need to do, and on the other hand, be dependent on, how is the room around you? Are there other people? Uh…

Host: Your particular environment.

Eyal Ofek: What time of day it is, and so on? So, uh…

Host: That’s a huge technical challenge!

Eyal Ofek: It is.

Host: So that’s a technical side of it.

Eyal Ofek: Yes.

Host: And it’s dependent on all the sensors that are mapping my environment? And so it’s personalized for me?

Eyal Ofek: So, it has, basically, two elements: one is an element of understanding what’s the world…

Host: Mmm-hmm.

Eyal Ofek: …so understanding what’s out there, what the dangers are, what opportunities we can use. That’s one thing. And on the other hand, understanding the application that we’re trying to run. Let’s say I want to tell a story. So, a story will have these three chapters that I need to visit. I need to first visit chapter one, then chapter two, and then chapter three, and I need to put them somewhere in your environment. So there’s this analysis of the environment on one side, and this breakup of the narrative that I need to give you on the other side, and then matching between them.
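The matching Dr. Ofek describes — a fixed narrative on one side, a variable room on the other — can be sketched in a few lines. This is a minimal, hypothetical illustration, not the actual MSR system: the class names, the requirement fields, and the best-fit assignment rule are all assumptions made up for the example.

```python
# Sketch: assign each story chapter to a surface in the user's room that
# satisfies its requirements. The story stays the same for everybody; only
# the placement adapts to the environment.
from dataclasses import dataclass

@dataclass
class Surface:
    name: str
    area_m2: float      # usable flat area detected in the room
    walkable: bool      # can the user stand/walk there?

@dataclass
class Chapter:
    title: str
    min_area_m2: float  # space the scene needs
    needs_walkable: bool

def place_story(chapters, surfaces):
    """Assign each chapter, in order, to an unused surface that meets its
    requirements, preferring the smallest sufficient surface (best fit)
    so large surfaces stay available for demanding chapters."""
    placement, used = {}, set()
    for ch in chapters:
        candidates = [
            s for s in surfaces
            if s.name not in used
            and s.area_m2 >= ch.min_area_m2
            and (s.walkable or not ch.needs_walkable)
        ]
        if not candidates:
            return None  # this room cannot host the story as authored
        best = min(candidates, key=lambda s: s.area_m2)
        placement[ch.title] = best.name
        used.add(best.name)
    return placement

room = [Surface("desk", 0.5, False),
        Surface("floor", 4.0, True),
        Surface("sofa", 1.2, False)]
story = [Chapter("One", 0.3, False),
         Chapter("Two", 2.0, True),
         Chapter("Three", 1.0, False)]
print(place_story(story, room))  # → {'One': 'desk', 'Two': 'floor', 'Three': 'sofa'}
```

Two different rooms run the same three chapters, but each user sees them placed on their own furniture — the “same story, different environment” property discussed above.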

Host: All right. So this is a complete shift from how people have written software and what inputs they’ve used to write the software.

Eyal Ofek: That’s right.

Host: The use of VR and AR has been described as sporadic and episodic.

Eyal Ofek: Yes.

Host: Why is that the case, right now, and what will it take, do you think, in terms of progress in these technologies to move us from sporadic and episodic to a more pervasive adoption?

Eyal Ofek: Yes. Eventually, what we want is… I want to be able to come in in the morning and control my workspace, regardless of how it looks, whether I’m sitting on a bus or I have a small office with no windows. I want to be able to do whatever I want. I’m not a touch typist. I want, as I type in Word, to see my fingers just beneath the line as I type, and not where they really are on the table. I want to be able to reach and fetch something which is really far from me. I want, right now, to jump to a conversation we had last week, and suddenly I’m in that meeting room and the whiteboard has the writing as we left it last week. And again, I didn’t drive to do that. I want to visit someone at a different site and be able to walk with him through the site while I’m still in my building. You know those people that walk around talking to the air because they have a Bluetooth earpiece? Think of someone who walks along the pavement, but now he’s also wearing glasses and he’s talking, and he’s actually walking, right now, on a different continent. All of these are possible. However, if you look at what’s happening today, there are several limitations. One is hardware. The headsets are still not that good: heavy, limited on battery power, connected to big computers. And tracking, up until a few years ago, was limited to a room; nowadays they have inside-out cameras that enable you to walk around. But even more fundamental are the applications. These have to fundamentally change to be more flexible. And again, that’s the direction that we try to push.

Host: When we talked about the revolutionary nature of AR and VR, you talked about the technological challenges.

Eyal Ofek: Right.

Host: And I know that there’s another angle to this, which is the social challenges. So, talk about that, unpack what the challenge is there and what we need to be thinking of here.

Eyal Ofek: Okay. So, let me first say, I mostly see things as an opportunity, right? I see an opportunity to generate reality which is not limited by the physical laws that our reality abides by. Nowadays, all our life, we are in the cage of our body. We see things from our point of view; we can go wherever our body can get us and do what our senses allow. And suddenly, we could see things from someone else’s point of view. We could extend our senses. Let’s say, right now, I’m in a car and there’s a traffic jam. What if my point of view could jump from driver to driver, up until the point of trouble, and I see what’s happening there? What if I could reach wherever I want without any effort? And as much as it’s life changing for me, it could be even more life changing for people who currently have limitations. In a non-physical world, there’s less importance to what my physical ability is. And it could level the playing field…

Host: Interesting…

Eyal Ofek: …for people. Not just physically, as I said; it could also help people just by guiding them, in the moment, to do things. So, just a lot of potential. For example, we did work in the past that tried to see if augmentation, in augmented reality, can help people make better connections. Suppose I’m sitting on the train and there’s someone next to me and we start talking, and we have our means of finding out if there’s any connection. Do you know this person? Where did you study? Where did you live? And so on. And sometimes we end the conversation and nothing happened. And sometimes we just hit on some commonality between us and the conversation starts flowing really fast. Wouldn’t it be nice if I could start the conversation that way? So, we did an experiment like that. We took people who had never met, and we asked them, what do you want people to know about you? What are things that you want to talk about? What are things that you don’t want to talk about? And then, when they met, we tried to enhance the conversation. We wanted to see several things. We wanted to see if people would be annoyed by getting information while they needed to talk.
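The core of the setup Dr. Ofek describes — each participant declares topics they want to discuss and topics they don’t, and the system whispers only safe common ground — reduces to simple set logic. The function and data below are a toy sketch with made-up names, not the experiment’s actual software.

```python
# Sketch: suggest conversation topics both people want to discuss,
# excluding anything either person asked to avoid.
def suggest_topics(person_a, person_b):
    shared = person_a["wants"] & person_b["wants"]   # common ground
    vetoed = person_a["avoid"] | person_b["avoid"]   # either side's no-go list
    return sorted(shared - vetoed)

alice = {"wants": {"hiking", "jazz", "robotics"}, "avoid": {"politics"}}
bob   = {"wants": {"jazz", "robotics", "politics"}, "avoid": {"work"}}
print(suggest_topics(alice, bob))  # → ['jazz', 'robotics']
```

Note that “politics” is dropped even though Bob wants it, because Alice vetoed it — the system only ever surfaces topics neither side has ruled out.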

Host: Right.

Eyal Ofek: And would the other person be annoyed by the fact that I’m reading something while he’s talking? And again, we wanted to see if this would enable the conversation. Would it flow better? Would you look for subjects? Would you use what we did?

Host: What did you find?

Eyal Ofek: So, interestingly enough, first, we are very blind to what the other person is doing, and we are very bad at discovering whether the other person has a whisper in their ear telling them things to do. Second, it depends on the person. There are people who are great talkers and don’t need any help, and those people, a) didn’t use our system, and, b) thought, what am I, in kindergarten? Why are you bothering me? While other people, who are less fortunate, actually used it, and it helped them.

Host: Interesting. So, what I’m hearing is individual preference…

Eyal Ofek: Yeah.

Host: …on how you like to interact with people, what you want people to know, and where you like to focus your reality.

Eyal Ofek: That’s right.

Host: Um… When we talked about the science of doing AR and VR, you mentioned three areas that are important to get right. We’ve talked a little bit about them, but I want you to drill in on what they are and why they’re super important.

Eyal Ofek: Okay, so the first one – I don’t know if I need even to dive too deep in that – is safety. Obviously, right now, we are changing how people perceive reality, but at the end of the day, they live in reality, and reality can come back and bite them. So…

Host: Literally.

Eyal Ofek: Yes. So, they need to be aware if there is any object that they might collide with, or anywhere they might fall. And sometimes they’re not aware, even without any device. So, first you want to make people safe. And second, now you have a story that you need to tell. And I’m saying story; it could be a game, it could be an application just as well. And you want the information you give about the world not to break that story. So, for example, if I’m in a spaceship and, say, my mother comes in with a plate of cookies to give me while I’m playing VR, then she may appear as a floating robot that comes in and says, there’s nourishment, and those cookies will appear somewhere on the desk of the spaceship.

Host: Will it be her voice, or will it be a robot voice?

Eyal Ofek: Uh, that’s a good question, right? It could be a different voice but at least when the voice is associated with something you see, you tend to connect that.

Host: So that’s that sort of mixed reality where mom’s actually bringing cookies but it’s a robot.

Eyal Ofek: Right. So, we want to keep the experience untouched by the fact that the whole world is trying to interfere with it.

Host: Okay, so that’s sort of safety and mapping reality to augmented reality or virtual reality.

Eyal Ofek: Right.

Host: And what’s the third one?

Eyal Ofek: A third one is privacy. We want to be able to give a lot of options but, on the other hand, keep all information at your place.

(music plays)

Host: Eyal, if I let my imagination run, I can see how AR and VR could, in essence, democratize our lives by extending, expanding, and equalizing everyone’s experience with reality. You know, sitting in coach class and maybe thinking it’s first class, hey, I have more virtual leg room even if my legs are still blocked. And also, that your older car could seem like a fancy car… Tell us some other areas where the concept of resources and abilities might be equalized by virtual and augmented reality becoming a part of our lives.

Eyal Ofek: Okay. Suddenly we’ve got freedom. We’ve disconnected ourselves from reality, so what can we gain as a result? On the other hand, we don’t want to totally disconnect from reality. We want to be contributing citizens, and be able to work and interact with people and so on. So, what we say is, what if we could filter reality? Things which are boring could become more interesting, or enlightening, and so on. We have a paper right now called DreamWalker. I walk, every day, from my home to work, and it’s the same path, basically. Instead of doing that path, what if I could, say, walk in Manhattan while I do it? So again, safety first, but then we can actually generate a new environment around us. Or, maybe I’m doing a routine job. I need to bring a parcel to some office. Here, we can transfer you to a fantasy world where I’m running up the hill with a chalice, fighting dragons and getting it to the castle! I mean, there are different opportunities. That’s not the direction we’re currently working on; we’re working more on keeping people in reality but with more content. But it’s a possibility.

Host: How does this play out in the Ability world? I know you have some research and some papers in this area.

Eyal Ofek: Yes, so one disability that we’re trying to look at is, maybe, the furthest from what you would associate with virtual or augmented reality, and that is a problem with vision, because you associate virtual reality with such a visual language.

Host: Absolutely!

Eyal Ofek: What if a blind person used it? So, there are several reasons for that. Let’s talk first about low vision. Low vision is some limitation of your vision that cannot be fixed with glasses. There could be people that have a limited field of view, or sensitivity to light, or low contrast, or maybe blurry vision and so on. Now, there are several opportunities here. When you look at applications that come out for virtual reality, they many times do things with a lot of atmosphere, a lot of small details and maybe darkened environments and so on. And we wanted to make that reachable for, uh, uh…

Host: Low vision people.

Eyal Ofek: …people with low vision. So, we can use the fact that this is a world that we know a lot about. We can apply filters that will make things look clearer to people, mark where objects begin and end, mark what could be graspable. And when we tried that with a set of people who had low vision, we could see how it improved their ability to do that. Now, if you go even further, to people who are totally blind, yes, they cannot see the display, but they can hear what’s around them, and we can even simulate sensations such as the feel of a cane hitting an object that does not exist there. As a result, they know that there is an object there, and they can navigate a virtual environment.
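One of the simplest filters in the family Dr. Ofek mentions is a contrast stretch: because the renderer knows the whole scene, it can remap a dim, low-contrast rendering to the full brightness range before display rather than guessing from camera pixels. The sketch below is a generic illustration of that idea, not MSR’s actual pipeline.

```python
# Sketch: stretch a grayscale image's value range to the full 0..255 span,
# so low-contrast scenes become easier to see for low-vision users.
def stretch_contrast(image):
    """image: 2D list of 0..255 grayscale values; returns a new 2D list."""
    flat = [v for row in image for v in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:
        return [row[:] for row in image]  # flat image, nothing to stretch
    scale = 255 / (hi - lo)
    return [[round((v - lo) * scale) for v in row] for row in image]

dim_scene = [[100, 110], [120, 130]]   # murky, low-contrast values
print(stretch_contrast(dim_scene))     # → [[0, 85], [170, 255]]
```

The same scene-knowledge advantage applies to the other filters mentioned (marking object boundaries, highlighting graspable items): the system can draw them from geometry it already has, instead of inferring them from the image.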

Host: Right, right, right. When you’re talking about the increasing quality of our virtual senses, mostly we’re talking about vision and hearing. But touch has remained more elusive in this arena. So, tell us why, from a technical perspective, touch has been so difficult to simulate and then talk about the progress you’re making in haptics and some of the hot papers and projects you’re working on.

Eyal Ofek: So, first, how did I get to touch? Because, as I say, we started with augmented reality and the fact that applications will look different in different environments, because we actually see the environment. Now, with virtual reality, I just put myself into a virtual world. I no longer see my environment around me, so what do I care that your environment is different than someone else’s environment?

Host: Right.

Eyal Ofek: And the answer to that is mostly touch. I can walk straight forward and, in your environment, I will be able to take five steps; in someone else’s environment, after two steps, I hit a wall. So, I need to take that into account. And this means both how to avoid things I don’t want to touch in the environment, as well as how I can touch things which are not there, right? If I go into current applications right now, I see very nice graphics of virtual worlds around me and I hear 3D audio; I can hear the birds coming from the right, and then I reach out my hand to touch the bird and there’s nothing there. So, we wanted to give the sense of touch. Now, this is hard. You can see people working on things such as exoskeletons, where we tie ourselves into some robotic suit… It’s very expensive, it never gets out of the lab, and what we wanted was something that we could see coming to consumers. So, we said, you know what? Let’s model the objects that we are actually going to hold. Instead of an exoskeleton, we went to controllers because, right now, we also use a controller for Xbox, and people that use VR right now have two controllers, both to track the hands and to give some buttons to press. What if those controllers could change their shape and be a stand-in for objects that we grab? And we had a whole set of different devices that enable you to grab objects of different sizes and feel that they’re rigid, or maybe compliant. I could press them, and they would inflate back when I release the pressure. Maybe I can connect the two controllers in a way that, at one moment, is rigid, as if I hold a box in my hands and cannot bring the two hands closer, and then, within a second, the two hands are free and can move as they want. So, we have a whole research direction that tries to see what more we can do in a way that will be simple and cheap and robust, so people can put it into a product.
And on the other hand, as realistic as possible for the users. We believe in people using it in real environments. A real environment has objects. For example, if I need to touch a virtual wall, maybe there’s some wall around that I can actually touch. The problem? That wall is not where my virtual wall is. So, if I can fool you, since you don’t see your real hands, we could bring your real hand to the real wall while you see your virtual hand touching the virtual wall, and that timed experience will convince you that the virtual world is real. And we did several works on that, which we called “haptic retargeting,” which try to, as much as possible, use what’s around you as a means for haptic, uh…

Host: Experience.

Eyal Ofek: …sensations.
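The core trick of haptic retargeting — drifting the rendered hand away from the real hand during a reach, so both arrive at their respective walls at the same moment — can be captured with a simple blend. The names and the linear interpolation below are illustrative assumptions, not the published method.

```python
# Sketch: offset the rendered hand from the real hand as a reach
# progresses, so the real hand lands on the physical wall exactly
# when the virtual hand lands on the virtual wall.
def lerp(a, b, t):
    """Linear interpolation between points a and b by factor t in [0, 1]."""
    return tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))

def rendered_hand(real_hand, start, physical_target, virtual_target):
    """Blend in the target offset proportionally to reach progress."""
    def dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5
    total = dist(start, physical_target)
    progress = 1.0 if total == 0 else min(1.0, 1 - dist(real_hand, physical_target) / total)
    offset = tuple(v - p for v, p in zip(virtual_target, physical_target))
    shifted = tuple(r + o for r, o in zip(real_hand, offset))
    return lerp(real_hand, shifted, progress)

# At the start of the reach, the rendered hand matches the real hand;
# by the end, it has drifted onto the virtual wall.
start, phys, virt = (0, 0, 0), (1, 0, 0), (1, 0.3, 0)
print(rendered_hand(start, start, phys, virt))   # → (0.0, 0.0, 0.0)
print(rendered_hand(phys,  start, phys, virt))   # → (1.0, 0.3, 0.0)
```

Because the warp is introduced gradually and the user never sees their real hand, the mismatch stays below what proprioception can detect — which is why the touch feels like it happened on the virtual wall.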

(music plays)

Host: Wow. Of all the research going on at MSR, AR and VR may provide the most Hollywood-ready script for a dystopian sci-fi movie, if I may say.

Eyal Ofek: Yes.

Host: So, as we’ve just said, the goal of your technology is to fool me with digital trompe-l’oeil, as it were…

Eyal Ofek: Right.

Host: …and empower me to believe I’m superhuman.

Eyal Ofek: Yes.

Host: Somewhere, on the spectrum of reality, virtual reality, augmented reality, some place in there, while I maintain a grasp of real reality, because that’s important for me not to kill myself or injure myself…

Eyal Ofek: That’s right.

Host: …what could possibly go wrong? So despite the fact that you say you want me to believe, but you don’t want me to believe too much…

Eyal Ofek: Right.

Host: …is there anything about this that keeps you up at night?

Eyal Ofek: So, there are several things that we’re thinking of, and I must say that most of them are not unique to AR and VR in this sense. First, how do I trust what the system gives me, right? How do I trust what I read on a social network? How do I trust what I read in a paper? In some sense, AR and VR are easier, because they deal with the world just around me. So, taking off the headset will reveal the real world to me…

Host: Immediately.

Eyal Ofek: …no matter how much I try to fool you. So, we are very limited in what we can actually convince you of. On the other hand, I will add the opposite problem: we don’t want you to believe us too much. What I mean by that is, we totally trust technology, right? We shouldn’t. Even if all the input is correct, it might not be complete input; there’s noise, and there are things it doesn’t know. So, if I, for example, have a machine that tells me, here’s a straight line, put the nails in along the straight line, I should always have in the back of my mind, maybe it’s not a straight line. And that’s another thing that we try to do, so people will not be just totally passive, totally trusting the system.

Host: How are you doing that?

Eyal Ofek: There are several ways you can do that. One is, you could involve the person in the decision making. Look around, tell me, and things like that. And also, use language like, I think you should put the nail there. What do you say? Even with the language, you can do things that will generate some kind of inner reflection, putting the responsibility on you, so you will wake up and say, oh, now I need to really understand what’s happening.

Host: You know something that you said earlier, I just wanted to bring up, because I was just in Florida and we wanted to go see some alligators.

Eyal Ofek: Yes.

Host: And the gators are mating and moving around in March, April, May and not in the middle of the summer, because guess why? It’s too hot!

Eyal Ofek: Right.

Host: So, they’re all underwater. So, we went out for this ride and saw, you know, the nose of one alligator. What if I had a virtual reality set of glasses and I could go gator watching in the wrong season?

Eyal Ofek: I have this thought of, what if I, say, go to Pompeii with my wife, and I put on a headset? Which sounds ridiculous! I’m already there. Why do I need to put on a headset, right? Or, if I put on a headset, why did I even bother coming to Pompeii? I could have…

Host: Right. Why did you buy the airfare?

Eyal Ofek: …done that from home. But the thing is, it’s the feeling that I have right now of the air flowing, the smell of the food, the people around me. But the people around me, instead of being tourists, will be wearing togas. And the cars around will be chariots, and the buildings will be whole, and my wife will be there next to me. And whenever I want, I switch back to full reality, right?

Host: I can totally see that because having been to Pompeii recently, it would be… I’m looking at the ruins and what if the world were built in front of me as it was, and then I could go back and forth… even experience the volcano eruption? I don’t know… Ahh! Maybe not that. All right. Let’s move on. You said we should do a series. I think you’re right. Um. You’re an interesting guy Eyal, in a lot of ways. Tell us your story, what got you into a life of high-tech invention and how did you ultimately end up in a life of research at MSR?

Eyal Ofek: I would say I started very early with computers, and I’m very visual in my nature. I like to draw. I actually did comics and book covers and things like that. So, that moved me to graphics, then to computer vision and understanding what’s around, and in some sense, VR and AR also include another love that I have, which is interaction. So, suddenly it’s not just nice graphics. We’re actually inside them, and that opens a lot of dreams of, what can you do? What can you change that you didn’t have before? Would that be good? Would that be confusing? I started at the beginning saying that we’re sort of living in the cage of our body. What if I could see from a different point of view? What if I could look behind me to see if someone is coming? What if I could look around the corner to see if someone is running and I’m about to collide with them? Would I find it confusing? Probably, because I’ve never experienced looking from a different point of view. Maybe I’d learn to enjoy it and use it more…

Host: All right, so how did you end up at MSR?

Eyal Ofek: So, after several start-ups, I thought that maybe the best thing is to go for research and try to think ahead for the world, and when I thought of research, Microsoft Research was a shining diamond that I immediately thought about. It had so many great names that I knew were working there, like Andy Wilson or Bill Buxton or Ken Hinckley. And when I joined, I actually saw how great it is that you have collaborations that you can do… and all the work that we’re talking about is basically a collaboration between people like Mar Gonzalez Franco, Mike Sinclair, who is our hardware genius, or Christian Holz and many others. And of course, a lot of amazing interns that come in and work on… So, I contacted Harry Shum. He was the head of vision in the Microsoft lab in Beijing. And I moved over there.

Host: Really?

Eyal Ofek: Yeah! So, I’ve been for two years in the Microsoft Research in Beijing, and then there was a new group that was forming at Microsoft called Virtual Earth…

Host: Yeah.

Eyal Ofek: …and I was lucky enough to be a part of that group and I moved to Redmond. After six years there, I felt like the work that we did was amazing, but the way people experienced it was on a 2D screen. It was obvious that the next step should be immersive!

Host: I love that. I’m waiting! I’m waiting! Well, as we close, I’d like you to give us a vision of the future from your perspective. So if, as you suggest, we’re on the cusp of a revolution, what’s next? How will you, along with emerging researchers, get us to the next level in AR and VR, and what does the world look like if you’re wildly successful?

Eyal Ofek: Oohhh! Good question! And probably a smart person should not talk about the future! But, in general, I see a lot of opportunity. We are changing the signals that we send to our senses, which are the way that we experience the world around us, our life. I’m not sure we fully understand everything in this field, so there are a lot of opportunities here, both in learning what’s happening around us so we can better represent and respond to it. There’s an effort on how to understand what people want with as little information as possible. When I use augmented reality, the most common mode is nothing. Right now, most of the time, I don’t want any information in my field of view. I like reality! But whenever I need it, I want it, so we do things that I call “contracts” with the system. For example, if I take a look at my phone or watch, it has a lot of social implications. You will say, oh, he might need to run somewhere, or he might find me boring, right? But what if I just gaze, and I see, in the corner of the walls and the ceiling, what time it is and how much time I have until my next appointment? You will not know about that, and I don’t see it all the time; only when I make this gesture that the system knows. So, the more we know about the user, the more we can just, you know, fit his or her wishes. And that’s a great challenge. As well as how we can use the fact that we’re not limited by distances, physical limitations, and so on, to, maybe, enable better communication, better work, and so on.

Host: So, it sounds really multi-disciplinary, what you’re talking about. So there’s all kinds of places for a variety of kinds of researchers to be engaging in this work.

Eyal Ofek: Yes. Maybe it’s not even seen like that right now, but I think it’s an exciting world that’s coming, and we only see the tip of the iceberg.

Host: And it’s a virtual iceberg, at that! Eyal Ofek, thank you so much for joining us today.

Eyal Ofek: Thank you. It was fun.

(music plays)

To learn more about Dr. Eyal Ofek and the latest in augmented and virtual reality research, visit Microsoft.com/research
