AI and Our Future With Machines with Dr. Eric Horvitz


Episode 2, December 4, 2017

When it comes to artificial intelligence, Dr. Eric Horvitz is as passionate as he is accomplished. His contributions to the field, and service on the boards of nearly every technical academy and association in the country, have earned him the respect – and awe – of his colleagues, along with the position of Technical Fellow and Managing Director of Microsoft Research. Dr. Horvitz talks about the goal of artificial intelligence, his vision for our collaborative future with machines, what we can learn from the Wright brothers, and how a short stint of “six months, maximum” became an illustrious and, in his words, joyful, 25-year career at Microsoft Research.


Podcast Transcript:

Eric Horvitz: For me it was a very – I remember being very young. It’s kind of funny because I remember exactly where I was when I said, “Yes. You’ll be doing science.” I used that word, Science. I just loved that stuff so much. And I stopped there. I said okay, I know I’ll be doing – I made a commitment. I think it must have been like fourth or fifth grade, and that was like a done deal.

You’re listening to the Microsoft Research podcast, a show that brings you closer to the cutting edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.

When it comes to artificial intelligence, Dr. Eric Horvitz is as passionate as he is accomplished. His contributions to the field, and service on the boards of nearly every technical academy and association in the country, have earned him the respect – and awe – of his colleagues, along with the position of Technical Fellow and Managing Director of Microsoft Research. Today, Dr. Horvitz talks about the goal of artificial intelligence, his vision for our collaborative future with machines, what we can learn from the Wright Brothers, and how a short stint of “six months, maximum” became an illustrious and, in his words, “joyful,” 25-year career at Microsoft Research.

That, and much more on this episode of the Microsoft Research podcast.

Host: Do you want to start off with a huge question?

Eric Horvitz: Let’s go huge.

Host: Let’s go huge and then we’ll go micro.

Eric Horvitz: Yeah.

Host: What would you say the ultimate goal of your work is? What’s the goal of AI?

Eric Horvitz: Let me start with my work, my goals. It’s to understand what the heck is going on with minds. How on earth do what we understand to be tangles of neurons, cells, sparking and popping and communicating with each other… how on earth does that lead to this fluid experience? And I think I have a pretty good handle on physics, space and time and force. But questions about mind are a real knockout. I decided as an undergraduate, as I faced my next step in life, whether it would be more education or some sort of research position as a scientist, that it would be about mastering some insights or knowledge, with effort no doubt, about how brains generate minds.

Host: So, talk about that in context of artificial intelligence and what you are actually doing here.

Eric Horvitz: Early on in graduate school, I moved from a focus on actual brains, and a PhD aimed at understanding how brains work, into computer science. It was pretty clear to me that the most promising way to understand intelligence and the principles of intelligence, no doubt at the foundations of my own conscious experience, would be through computation. And that brought me to computer science and computing and decision science and principles of decision-making: principles of how systems, agents, could observe, learn, perceive, and fuse information to make assessments and take action in the world, and even communicate with one another through natural language.

Host: Did you start in neuroscience, then?

Eric Horvitz: Yes. I did biophysics as an undergraduate, and that, I thought, was good preparation across the sciences for understanding what we knew about the natural sciences, which, in the context of all the knowledge there – not that it’s complete – set up a widening and gaping hole when it came to how nervous systems work. We’re still, you know, drowning in mysteries about that. That’s why it’s so exciting. And that’s why, when I do work in AI, artificial intelligence, machine intelligence – there are various phrases for this endeavor – even small pops, little insights, go a long way.

Host: Yeah. You said decision science. Is that actually a field?

Eric Horvitz: Yeah, it is. We often say the decision sciences. There’s a behavioral component. There are people who study how people make decisions. It’s the psychology of judgment and decision-making. People study this because they want to understand how it is that humans react to advertisements, how they make sub-optimal decisions in different settings, how they can be fooled, what a hallucination or an illusion of judgment is, analogous to the illusions we have in our visual systems. But then another part of decision science is looking at what are called formal or normative models: what’s the ideal action one should take under uncertainty, when you don’t know exactly what will happen and you don’t know exactly what the influence of your action will be? And maybe you don’t even have enough time to think about all possible actions, but have to make a decision, because if you wait, you might be in trouble. There’s an interesting idea, which we can actually compute, called the value of information in a decision context. We can actually think through, “Is it worth pausing and collecting more data before I take an action in real-time?”
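The value-of-information idea described here can be sketched in a few lines. This is an illustrative toy, not code from Dr. Horvitz’s work: the two hypothetical diagnoses, the two treatments, and all the payoff numbers are made-up assumptions. It computes the expected value of perfect information – how much better off, on average, a decision-maker would be if it could learn the true state before acting rather than acting on its current beliefs.

```python
# Hypothetical sketch of the value-of-information computation.
# All names and numbers below are illustrative assumptions.

def expected_value_of_perfect_info(prior, utilities):
    """prior: dict mapping state -> probability.
    utilities: dict mapping action -> {state -> utility}."""
    # Expected utility of the best action chosen right now,
    # under the current uncertainty about the state.
    eu_act_now = max(
        sum(prior[s] * u_s[s] for s in prior) for u_s in utilities.values()
    )
    # Expected utility if we first learn the true state, then pick
    # the best action for that state.
    eu_after_info = sum(
        prior[s] * max(u_s[s] for u_s in utilities.values()) for s in prior
    )
    return eu_after_info - eu_act_now

# Two candidate diagnoses (A or B) and two treatments, loosely echoing
# the gasping-patient example later in the episode; payoffs are made up.
prior = {"A": 0.7, "B": 0.3}
utilities = {
    "treat_for_A": {"A": 10.0, "B": -5.0},
    "treat_for_B": {"A": -2.0, "B": 8.0},
}

evpi = expected_value_of_perfect_info(prior, utilities)
cost_of_waiting = 1.5
# Pausing to gather more data is rational only if the information
# is worth more than the cost of the delay.
worth_waiting = evpi > cost_of_waiting
```

Under these made-up numbers, learning the true diagnosis before acting is worth about 3.9 utility units, so pausing is rational whenever the cost of waiting is below that.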

Host: And you know, as you say it I’m thinking of situations where you want to make a fast decision and it might cost you to wait, but it might cost you way more if you don’t wait and gather that information that you need.

Eric Horvitz: You’ve just written the abstract of my dissertation at Stanford. That’s exactly what I worked on. I think so much of intelligence is how well we do with limited resources. So, I studied this area and I’m still fascinated by it. Thinking and acting under bounded resources for thinking – bounded time, bounded computation, varying time. Sometimes you don’t know how much time you have, and you have to have a system with what we call an “anytime response,” or a flexible computing strategy, so it will think and think and think, but the patient might do something and you say, I’ve got to act now. And in fact, in my dissertation, we had this model of a patient gasping for breath, and the problem was either A or B, and you had very different strategies with different costs and benefits, and you are uncertain, and the reasoner is thinking away and huffing and puffing. And the probability of what it is, is changing over time. And here’s the cool part: the system had to know the value of thinking longer to come up with a better answer when it knew the patient was crashing.
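A minimal sketch of the “anytime” flavor of computation described above – everything here is a hypothetical illustration, not the dissertation system. The reasoner refines a running probability estimate one observation at a time, can be interrupted at a deadline with its best answer so far, and also stops itself early once, by a crude value-of-computation rule, thinking longer is unlikely to change the answer much.

```python
# Illustrative "anytime" reasoner: answer quality improves with each
# step of computation, and the process can be cut off at any point.
import math

def anytime_estimate(observations, deadline_steps, min_steps=5, tol=0.05):
    """Refine a running probability estimate one observation at a time.
    The deadline can force a decision; the reasoner also stops itself
    once more thinking is unlikely to shift the estimate much."""
    heads = 0
    estimate = 0.0
    for step, obs in enumerate(observations[:deadline_steps], start=1):
        heads += obs
        estimate = heads / step
        # Crude value-of-computation rule: the estimate's standard error
        # shrinks like 1/sqrt(n), so stop once the remaining uncertainty
        # drops below the tolerance -- further thinking buys little.
        std_err = math.sqrt(max(estimate * (1 - estimate), 1e-9) / step)
        if step >= min_steps and std_err < tol:
            return estimate, step  # confident enough to act early
    return estimate, deadline_steps  # the deadline forced a decision

# A stream of noisy binary observations, made up for illustration:
# roughly 70% of them are "evidence for diagnosis A".
data = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0] * 20
est, steps_used = anytime_estimate(data, deadline_steps=100)
```

The key design property is that interrupting the loop at any step still yields a usable (if rougher) answer, which is what lets a system trade answer quality against urgency when the patient is crashing.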

Host: Right. I mean that’s high stakes.

[Music plays]

Host: You are working toward a future of human AI collaboration where you said these words, “complement, assist, leverage and extend human capabilities.” Talk about more specifically what that future might look like.

Eric Horvitz: Yeah, it’s interesting. There’s a science to this, a very interesting research direction or set of directions, and a future. I think you can understand it best if I just start with a little bit of the science. So, what I imagine, in the kind of work that we’ve been doing on my team and some of our colleagues have been doing with their teams, is this notion of: how do we build computing systems, given our current technology, that can do perception and reasoning and decision-making, and that are designed to complement human minds, human thinking? And that brings us down this very interesting path of looking at all the incredible literature – studies done and studies to be done – that characterizes how limited human beings are. About 125 years or so of rich studies of cognition have revealed biases and blind spots and limitations in all human beings. That’s part of the natural substrate of our minds. Could we build systems that understand, in detail and in context, the things that you’ll forget? Your ability to do two things at once, and how to help you balance multi-tasking, for example? Understanding how you visualize, what your learning challenges might be in any setting, what you consider hard – why is that math concept hard to get? You can imagine systems that understand us so deeply someday that they beautifully complement us. They know when to come forward. They are almost invisible. But they extend us. Now, part of the trick there is not just complementarity, it’s coordination. When? How does a mix of initiatives work, where a machine will do one thing and then a human takes over? I love some of the work going on by colleagues at Johns Hopkins. You can imagine a computer science department at Johns Hopkins looking at, like, surgical robots.
But what’s interesting in this work, and it really captures work that we do as well in the intellectual space, is you see a human surgeon who is working to, let’s say, do a certain kind of stitch, and it’s a give-and-take where the robot system is watching in the same way that the surgeon is watching. They are coordinating with each other, and it’s a back-and-forth, a give-and-take, where, working together, faster progress can be made. The sum is greater than one. So, you see the ability to augment a human being with a system that is not as good, but that complements humans in a particular way by recognizing blind spots.

Host: One of your favorite books is David McCullough’s biography of the Wright Brothers.

Eric Horvitz: Yes, I loved it.

Host: You commented how amazing it was that in 50 summers, the aviation industry went from canvas flapping on a beach to the Boeing 707.

Eric Horvitz: What I love about that book is that a team – in this case, very close brothers, and actually it turned into the whole family – had this vision, and they dedicated every ounce of their creative effort to thinking it through. I read that book. I savored it, and I bought a copy for my larger team. I said, please, please, read every word of this book. I did that for several reasons, including the passion, the focus, the end-to-end thinking, and the analogy of understanding aerodynamics and its principles with the way we’re trying to understand AI – the way we can go from people, and what we understand in our current artifacts that we’re building, and really take it forward into a working system that captures the essence of intelligence. In my mind, that was the kind of pursuit the Wright Brothers were engaged in. And also – this is the second reason why I liked that book – the analogy of our mission today in AI to the long-term aspiration for humans to fly. Even in the late 19th century and early 20th century, heavier-than-air flight was considered a flaky, challenging endeavor that probably wouldn’t pay off. It seemed almost unreachable in a deep way. And in that context, we had folks who just knew better in their hearts that this was doable, and that it could be done through intellect and perseverance. And I was just talking today to a group of visitors who asked about where we were with AI. I said, look, it’s been a slog, it’s been slow. No matter what you hear about AI, we haven’t gotten systems up to the level of a toddler when it comes to understanding common sense, social relationships, the physics of gravity, containment of liquids like this water here in this glass in front of you. It’s beyond our systems today. And we still don’t know so much about the mysteries of how, for example, toddlers learn so quickly. We don’t understand how people are situated and make it through life and understand one another so well. Models of mind.
How do people go from one thing to the other with all their skills? How are all these competencies coordinated? And what is this thing we use the word “consciousness” to refer to? Where does that come from? What are these subjective states, these qualia? We have no idea. We have theories and reflections, but they are not really grounded in any scientific theories just yet. However, is it possible that in 50 summers we have a whole new world? We have big surprises? We understand how minds work?

Host: So let me switch over for a second – we’re still on the book. In one of the most interesting parts of the book to me, an observer of the early efforts of the Wright Brothers said, “We believe in a good God, a bad devil and a hot hell and more than anything else we believe that God did not intend that man should fly.” So what do you say to the modern-day technology naysayers? Particularly the brilliant, innovative, successful technologists who not only caution that we’re heading into dangerous territory with AI, but say things like, literally, “We’re summoning the demon”?

Eric Horvitz: You know, I think in many ways I resonate with comments like that, because I know where they are coming from. If we believe we could master the principles upon which our own minds operate, to build systems that have abilities like humans, we can then say things like, “Oh, but can we multiply that in a great way and have more powerful versions of these systems that could outwit humans?” You can imagine that line of reasoning raising important questions about the safety of the systems we design one day. It turns out that many things we want to do in the short term to ensure the safety of AI systems being used in high-stakes areas, keeping them robust and trustworthy, are aligned with where you’d go, and how you’d make the larger, more superintelligent systems safe as well. So, we’re on the right track, I think. I think people are thinking about these issues, which is a very, very good thing. I think it’s also important, since we don’t really know the future, to reflect a bit – maybe more than a bit – and to gather together experts and envision good outcomes and not-so-good outcomes. So recently, last February, I organized a meeting with a couple of colleagues, which we entitled “Envisioning and Disrupting Adverse AI Outcomes.” The initial title of that meeting was “Envisioning and Disrupting our Worst-Case AI Nightmares.” And what we did was we asked about 50 computer scientists and other people in affiliated fields to write down exactly what their fears were, these worst-case nightmares. And, you know, not just write them down – we had kind of a form you filled out. Is it a locked-in, permanent thing? Is it temporary? Does it cause a big disaster? Does it involve lives? Does it involve politics? We had a little form.
And then we also said, “Now make sure you fill out exactly the steps, the trajectory that we went on, that got us to that bad state, and make sure it’s bulletproof because we’re going to have a blue team and a red team. The red team is going to defend your case. The blue team is going to tear it apart and they are going to do it live in real time.” And we had six scenarios.

Host: What could possibly go wrong?

Eric Horvitz: That’s what it was all about. But my point is that people talk about worrying and anxiety. It feels better to get active. And if you do have a worry or anxiety, let’s get it crisp and let’s put our minds to it. And I tend to have an optimistic side, mostly. That doesn’t blind me to hearing about negative implications and costs and downsides for AI in the short-term or long-term. But I think we should study this. If there are some failure points and costly outcomes, we want to think through how to solve them in a proactive manner. Now if we can’t, let’s have a good time now and enjoy it while we can. But I’m optimistic that we can.

Host: Well, yeah. I mean, the philosophy I’m hearing is, “let’s approach it with a positive outlook,” instead of being Luddites who stick wrenches in the machines, or the Amish who say the button goes far enough, the zipper goes too far.

Eric Horvitz: No one will stop the march of our curiosity, the science. We’re going to learn more about minds. We’re going to learn more about who it is we are. We’re going to learn more about how to build intelligences. That’s just going to happen. The question is whether we can invest enough, and how much we invest as we go, doing it in a careful way, and addressing not just the rough edges that we can see, but really thinking broadly about what could go wrong along the way.

Host: So, let’s go from the specific to the general. You have said that you believe in technology’s ability to make human life more meaningful. What do you mean by that?

Eric Horvitz: Well, there’s been a lot of discussion afoot about the rise of competencies of machines doing intellectual work and robotic work. One of my visions for the future – I don’t know if we’re going to get there, but my optimistic side believes we will, and wants us to – is that with the rise of a ubiquity of machine intellect in our lives, both humanlike intelligences and automated robots helping us as coaches and as supportive assistants of various kinds, intelligences in the background, and lots of automation everywhere, against that background, the value of what’s uniquely human will rise to become even more valued than it is today. You know, I was recently listening to a song that was written by a machine – actually, we ascribe it to, attribute it to, Xiaoice. We have people in our China lab who showed how you can have an AI system use deep learning to learn how to write not just a beautiful song, but poetic lyrics to go with it. When I hear songs, I think I will always want them to be songs written by a human being who has experienced what they are singing about. When I hear a country-western song – you know, I was just listening – I put that Tesla on a country-western channel, and one song put a tear in my eye. I was thinking, would a machine ever put a tear in my eye? I’m not so sure. I want a human being at the other end. Humans want to connect with human beings – as teachers, as doctors, as caregivers, as songwriters, as artists. That will never go away. I see it being amplified in the world of automation.

Host: I love that. That in itself makes me so happy that you are actually running this joint…

Eric Horvitz: I guess my position gives me – my role here, I guess – gives me a sense for what’s coming, maybe earlier than some people see it, or the trend. So, I get to sit and listen to people celebrating Xiaoice singing this beautiful song. And it is beautiful. I didn’t understand the lyrics – they were in Chinese. But it really got me thinking that I want Waylon Jennings to have been through this experience and to sing to me about it, not the machine.

[Music plays]

Host: To use classic marketing language, what’s the “value proposition” of doing research, as opposed to another place that somebody who was you 25 years ago, right now, would look into? “What do I want to do? I’m among the best and brightest – to use a cliché – so where am I going to go and what am I going to do?” Why would they go into research?

Eric Horvitz: Well, you know, for me, I’ve always been the kind of person who never got to the end of my whys. Why this? Why that? But why? But that doesn’t answer my question. Why would that be the case? I’ve noticed that I got a lot of pleasure out of that. My mind is driven to ask questions. And when I come to an answer that I didn’t expect, I get such a burst of pleasure. So it’s just the way I live. I’m in the exact right spot for having a joyful experience creating. I think one of the deepest values at Microsoft Research, and at other labs, is creativity – creation, coming up with new ideas that have never been thought of before, combining two sort-of well-known ideas into a whole new innovative combination that leads to a whole new concept. If doing that also brings up answers to whys, it’s fabulous to learn more as you go. Being in a research lab puts you on a crashing wave of life-long learning. You are always kind of surfing this wave of the unknown. And you look to your left and right – it’s kind of funny, I’m visualizing this right now – I’m seeing people that I know. And you see your buddy surfing, too, once in a while wiping out and getting up again. It’s like a big surfing party on the edge of what’s unknown. It’s very, very joyful, exciting and interesting. You know, when I came to Microsoft Research, I said I’d come up here for six months, maximum. I didn’t really know what Microsoft was doing. They acquired a little start-up we had during my Stanford graduate school days. We were actually trying to take our PhD dissertation work – we thought it was such great stuff, this Bayesian reasoning – and get it out into the world on these new things called PCs, which seemed to be just as powerful as some of the Lisp machines we were using at Stanford. But I came up here and I thought, six months and I’m back on my path to a university, which is also research.
And, my comment is that here I am almost 25 years later and I feel like I’m just starting. Like just starting. I feel like okay, I just got here and we have so much more to do. It’s way more than 6 months.

Host: What would a computer science student, or somebody in university, whether they are in undergrad or graduate school, be thinking as they are planning their career?

Eric Horvitz: My sense has been – and this is what I tried to do as an undergraduate – to try to spend some time with yourself to understand, broadly, what brings you delight and happiness and what’s kind of exciting to you. I think we all have different sets of aesthetics, different notions about topics and possibilities, and different conceptions of jobs that turn us on and give us excitement. For me it was – I remember being very young. It’s kind of funny. I remember exactly where I was when I said, “Yes. You’ll be doing science.” I used that word, Science. I just loved that stuff so much. And I stopped there. I said, okay, I know I’ll be doing – I made a commitment. I think it must have been like fourth or fifth grade, and that was like a done deal. But then what do you do? I liked everything. I was excited about biology, chemistry, math, physics. I had incredible books from the Merrick library in New York. The house was always filled with books. God knows what I would be doing today if I’d had the internet. I would be having a blast. But my comment is that I felt it’s best as an undergraduate to take time to explore around the midpoint of what you think you would probably like. Don’t commit to being too narrow too soon. You heard my story. I came to Stanford – MD/PhD, neurobiology – and I was like, do I really want an MD? Well, it’s broadening, and maybe someday I could explore human minds, not just rats, and maybe there would be a breakthrough and I could actually understand brains, human brains. But I remember, as I moved into AI, starting out in neurobiology, I always had this sense of, I can’t get there all at once – just take another step closer, another step closer. A lab I find interesting, a mentor, a researcher I found interesting – go talk to them. Closer, closer. Then I remember finally saying – it was like 1984 and a half – I remember saying, “I’m doing what I want to be doing now and I’m going to go hard on it now.” And it was AI.

[Music plays]

Host: There’s nothing else to say.

Host: To learn more about Dr. Eric Horvitz and the Microsoft Research vision for the future of artificial intelligence, visit Microsoft.com/research.

[End of recording]
