“The next 25 years will be known as the period of time where we started to work with machines in a way that accelerates human thinking and capabilities,” Eric Horvitz says. For decades, Horvitz has been following AI in awe—from his neurobiology laboratory as an undergraduate student to today as Microsoft’s Chief Scientific Officer.
In this episode, Horvitz explains why generative AI is a tool most likely to contribute to human flourishing—that is, to achieving our deepest desires. Human-AI collaboration, he argues, will ultimately propel every critical industry, including business, economics, medicine, education, engineering, and law.
Horvitz is the latest guest on Microsoft’s WorkLab podcast, in which host Molly Wood has conversations with economists, technologists, and researchers who explore the data and insights about the work trends you need to know today—from how to use AI effectively to what it takes to thrive in our new world of work.
Three big takeaways from the conversation:
Horvitz believes that AI can supercharge human thinking along multiple dimensions. Instead of focusing on potential dangers, he says we should think about how right now may be the “early glimmers of a time where we take our machines to a whole new level that influences society in a deeply positive way.”
AI is perhaps the world’s best “ideas processor.” Think: a word processor but on a greater scale. “Human intellect is still one of the biggest mysteries of all time to our leading scientists who study human cognition,” Horvitz says. “In fact, our creativity and ability to synthesize new ideas from existing ideas is what really makes us human. The machines we’re building now can help us process ideas faster and in richer contexts to achieve the world that we desire.”
As founder and chair of Microsoft’s Aether Committee, which is devoted to the ethical and responsible development of generative AI, Horvitz and his colleagues worked with Satya Nadella and senior leaders of the company to develop Microsoft’s six AI principles: fairness, inclusiveness, accountability, privacy and security, reliability and safety, and transparency. “They’ve stood the test of time, and they will continue to stand the test of time,” he says.
WorkLab is a place for experts to share their insights and opinions. As students of the future of work, Microsoft values input from a diverse set of voices. That said, the opinions and findings of the experts we interview are their own and do not necessarily reflect Microsoft’s own research or opinions.
Follow the show on Apple Podcasts, Spotify, or wherever you get your podcasts.
Here’s a transcript of the episode 6 conversation.
MOLLY WOOD: This is WorkLab, the podcast from Microsoft. I’m your host, Molly Wood. On WorkLab, we hear from experts about the future of work, from how to use generative AI effectively to what it takes to thrive in our new world of work.
ERIC HORVITZ: I believe deeply that these machines can supercharge human thinking, and that where we are now with this technology will be recognizable 500 years from now.
MOLLY WOOD: Eric Horvitz has been at Microsoft for 30 years and is currently the company’s first Chief Scientific Officer, where he works on initiatives at the frontier of the sciences. Previously, he was director of Microsoft Research Worldwide. Eric believes in long-term thinking when it comes to generative AI’s enormous promise to enrich our lives. He first became awestruck by its possibilities as an undergraduate student in his neurobiology lab. Nowadays, he’s fascinated by AI’s potential impact on virtually every critical field: business, healthcare, and education just to name a few. Eric, thank you so much for joining me.
ERIC HORVITZ: It’s great to be here, Molly.
MOLLY WOOD: Alright, let’s start this technology conversation with people, because you’ve written a lot about putting humans at the center of generative AI development. How do you see humans flourishing alongside generative AI?
ERIC HORVITZ: It goes back a couple of decades. Early on in my career I—and I think this came from being excited by both human cognition and its foundations, and by the machines we’re building—became deeply interested in how machines and humans would collaborate, how they would work together. How could machines support human cognition? How could machines extend the powers of human cognition? By understanding how we think and where we’re going with our thinking, it really would be a human-AI collaboration—with humans at the center, celebrating the primacy of human agency and human creativity. And that’s grown into a field of people interested in that topic, with various methods being developed and various points of view. I believe deeply that these machines can supercharge human thinking along multiple dimensions, and that where we are now with this technology will be recognizable 500 years from now. The next 25 years will even be named something—I’m not sure what name we will ascribe to the period of time, but it’ll be a period of time where we started to work with machines in a very new way, a way that really accelerates how we think, how we design, and the things we can do in the world. And I think we can really aim this toward a new level of human flourishing. It’s interesting: when we think about AI and concerns, it’s often about the status quo, what we might lose, and what dangers we might face, at least in the popular literature and in the press. We don’t think deeply about the prospect that this might be the early glimmers of a time where we take our machines to a whole new level that really would influence society in such a deeply positive way.
MOLLY WOOD: I saw that you were among the people at the White House earlier this year to talk with President Biden about opportunities, and potentially risks. I know you can’t share specifics about that meeting, but can you give us a sense of how you felt coming out of it, how the vibe was, I guess, for lack of a better way to put that? [Laughs]
ERIC HORVITZ: The vibe at the White House is one of deep interest. Like, what does this all mean for people and society? What are the rough edges we need to worry about with new kinds of applications and new uses? Will there be an analogue of the digital divide, an AI divide, if we don’t get everybody on the same page? There is, of course, from the point of view of governments, the sense of protecting citizens from a disruptive technology that might be used in ways we don’t yet understand. At the same time, there’s an overriding sense that Americans—of course, the world, but we’re talking about the White House—should be benefiting from this technology, and a question of how we can promote the use of these technologies in ways that will enhance the lives of people throughout the world. When it comes to this country’s leadership, my sense is that there is a mature set of reflections on being cautious where caution is needed, and engagement about possible coordinated activities to make sure that things go well. At the same time, there’s an excitement, and a feeling that we can’t miss this wave. We have to be on it, we have to guide it. And it’s not like AI is doing its own thing. We’re in control; we can shape where this technology goes.
MOLLY WOOD: You’ve even talked about the idea of creating the world’s best “ideas processor” using AI. I assume this is a play on “word processor.” But this idea of this sort of collaboration and next level…
ERIC HORVITZ: Think about how our own intellect works—and still, of course, human intellect is still one of the biggest mysteries of all time to our leading scientists who study human cognition. But we take so much information in. We have the ability to do impressive synthesis across ideas to generate new ideas. We imagine possibilities that don’t exist. We think about desirable worlds that we can actually work towards. In fact, I think our creativity, our ability to synthesize new ideas from existing ideas and from precepts really makes us human, which makes us unique as animals on the earth. And I think that the machines we’re now building are starting to show certain kinds of abilities like that, that could complement us in our thinking. And in some ways, as I said, supercharge our human uniqueness to help us process ideas faster and in richer ways to achieve those worlds that we don’t have now, the worlds that we desire.
MOLLY WOOD: It’s like the thing the human brain does that feels like magic—and it feels like magic when you see GPT-4, or any program, do it—this pattern recognition that kind of continues to unlock more and more pattern recognition.
ERIC HORVITZ: It’s almost like learning to ride a bicycle or a horse—learning how to prompt, learning how to talk to these systems, learning how to trust or distrust what they’re saying, understanding how to engage them in what I would call a conversation around problem solving, and learning how to take their behaviors and outputs in a way that positively takes our ideas forward. It’s really early days, you know, we don’t realize sometimes that we’re in the future. But we’re also way in the past from a different point of view. And I do think from the point of view of where these technologies can go, we are in very early days of how they work, how we work with them. So again, we are riding a wave of innovation, at the same time learning how to surf on the wave as the waves change.
MOLLY WOOD: Correct me if I’m wrong, but even with how long you’ve been following AI, it’s my understanding that GPT-4, which powers Microsoft products like Copilot, still kind of blew your mind.
ERIC HORVITZ: [Laughs] Yeah, I mean, look, we’ve been working very hard, especially when it comes to the experience that humans have when they work with computers, to generate fluid, fluent, and valuable interactions over time—whether it be in medical diagnosis, transportation, aerospace, or consumer applications. The power that I saw when I started playing with GPT-4—and we got early access to that model as part of Microsoft’s ethics and safety team that I oversee—was striking. We were there working to make sure the system was safe, and we put the system through all sorts of interesting tests: reliability and accuracy, the possibility that it could cause various kinds of harms. But there I was, starting to explore how well the system could do at hard medical challenges and scientific reasoning, and the possibility that it could be used in education. And two words came to mind at the time. The first one was phase transition. There was almost a physics-style phase transition between what was called GPT-3.5 and GPT-4. Usually with a version change you get, you know, spit polish on the next version. This was a jump in qualitative capabilities. The second word was polymathic. I had never seen a system that had the ability to just jump across disciplines and weave together different ideas in the way that you’d need a room of people trained in different areas, with different degrees, and here was a system that was jumping around like a polymath. So it was pretty surprising to me and to colleagues—I would say, jaw-dropping.
MOLLY WOOD: So this idea of choosing a path forward that centers human uniqueness and human flourishing, that’s sort of the mindset that led you to develop the AI Anthology series, right? Can you tell us a little bit about what that is?
ERIC HORVITZ: Yeah, so as I was exploring GPT-4 in the fall, my first inclination was to share the excitement. I’ve always had this sense of democratizing the thinking, getting people involved, bringing multiple thinkers to the table. And GPT-4 then was what we call “tented.” It wasn’t public; only a few people had access to the system within OpenAI and Microsoft. And I was just bursting at the seams, wanting to share this technology with leaders in medicine, education, economics—have people play with the system, and then start providing the world with feedback and guidance. So I engaged with OpenAI and with my colleagues in Microsoft leadership to create an opportunity, a space to do this. And this led to what we now call the AI Anthology. Under special agreements, I provided access to GPT-4 to around 20 or 25 world-leading experts across fields, chosen for diversity of thinking and span across the disciplines. And I just said to everybody, look, I’m surprised by this technology and how capable it seems. And I asked folks to think through two questions. First, how might this technology be harnessed for human flourishing over the next several decades? And second, what would it take? What kind of guidance would be needed to maximize the prospect that this technology could be harnessed for human flourishing? You can go online to read the 20 essays from fabulous folks, each answering those questions from their own perspective, following their own early personal interaction with GPT-4.
MOLLY WOOD: There goes your weekend, everyone. [Laughter] And then finally, in addition to all of that, you also are the founder and chair of Microsoft’s Aether Committee, committed to making sure AI is developed responsibly. Talk about that effort and how important that is, because a lot of people have anxiety about what this means for their lives and their wellbeing, and, you know, we want them to flourish.
ERIC HORVITZ: So I engaged Brad Smith, our general counsel at Microsoft, now president, about the prospect of creating a committee and process that would provide advice and guidance on the influence of AI on people and society, and the implications for Microsoft. One of the earliest things we did with this committee—and we had leaders nominated from every division at Microsoft on the committee—was to think through Microsoft’s values and principles, and Satya Nadella himself weighed in on this and even led discussion on what have now become Microsoft’s AI principles. There are six, and they’ve stood the test of time, and they will continue to stand the test of time. Fairness: AI systems should treat all people fairly. Reliability and safety: we want AI systems to perform accurately, reliably, and safely. Privacy and security: the systems we rely on should be secure and should respect our privacy. Inclusiveness: really important for Microsoft’s leadership. AI systems should empower everyone and engage a diversity of people. Transparency: AI systems should be clear and understandable, including what they can do and what they can’t do well. And accountability: accountability for AI systems should always rest with people. People should be accountable for the systems that are fielded and used. And those six principles became central in the work of a committee named Aether, the Aether Committee, which stands for AI and Ethics in Engineering and Research.
MOLLY WOOD: With you of all people on the line, I do have to ask, do you think that artificial general intelligence is possible?
ERIC HORVITZ: The phrase AGI, artificial general intelligence, scares people, in that I think many people feel it refers to a powerful intelligence that would someday outsmart humans and take over, for example. I don’t think that kind of thing will ever happen. I believe that people will be the directors of this technology and will harness it in valuable ways. I do think that the pursuit of what’s called artificial general intelligence is an interesting intellectual activity. I think it’s a very promising and inspirational pursuit.
MOLLY WOOD: I want to ask you really specifically, how you imagine business leaders can get away from that fear state and refocus on a mindset of the real abundance that’s possible at work?
ERIC HORVITZ: What a challenging question. My sense is that people are experimenting with some of the pain points in their businesses and industries, and seeing whether this system could be a reliable tool for augmentation and flourishing—removing some of the drudgery of daily life, jobs, and tasks, and allowing people to work on the fun, creative aspects of their jobs, where you need the brilliance of humans.
MOLLY WOOD: So we’ve brought up this idea of human flourishing several times now. At a high level, can you just quickly explain what it means to you?
ERIC HORVITZ: There’s remarkably little written about human flourishing and what it means. It goes back to Aristotle’s writings on what it means to really achieve notions of human wellbeing: in the arts, in literature, and in understanding. In human contact and relationships, and the richness of the web of relationships we have as people. In our ability to contribute to society. In democratic processes. There’s a civil society component to what it means to flourish as a society, to have a resilient and robust society. There’s a biological or medical component: to be full of health and vitality and to live long, rich, vibrant lives. And there are notions of what it means to pursue the unique goals that people have. Of course, they differ from person to person, but we all want to be kind to others, we want to make contributions to society, we want to learn and understand. If you think about the things we pursue, we sometimes get off track and think about these proxies—what’s my salary, or how can I get ahead on this front or that front? But those kinds of things don’t really speak to the richness of our contentment and our happiness. It’s the deeper notion of achieving our deepest desires.
MOLLY WOOD: I mean, you were saying that 500 years from now, we’ll be talking about this period, this 25-year period, as some name for the “get to know you” period. [Laughter] But I really want to scoot right ahead to the age of flourishing.
ERIC HORVITZ: But look at how far we’ve come as a civilization. It’s really impressive.
MOLLY WOOD: Wonderful. Eric Horvitz, thank you so much for this time.
ERIC HORVITZ: It’s been great spending time with you, Molly. Thanks for all the great questions.
MOLLY WOOD: And that’s it for this episode of WorkLab. Please subscribe and check back for the next episode, where I’ll be chatting with Erica Keswin, a workplace strategist and bestselling author who’s worked with some of the world’s most iconic brands over the last 25 years. We get into how business leaders can create a human workplace, and her latest book, The Retention Revolution, which is about keeping top talent connected to your organization. If you’ve got a question or a comment, drop us an email at email@example.com. And check out Microsoft’s Work Trend Index and the WorkLab digital publication, where you’ll find all of our episodes, along with thoughtful stories that explore how business leaders are thriving in today’s new world of work. You can find all of that at microsoft.com/worklab. As for this podcast, please rate, review, and follow us wherever you listen. It helps us out a ton. The WorkLab podcast is a place for experts to share their insights and opinions. As students of the future of work, Microsoft values input from a diverse set of voices. That said, the opinions and findings of our guests are their own, and they may not necessarily reflect Microsoft’s own research or positions. WorkLab is produced by Microsoft with Godfrey Dadich Partners and Reasonable Volume. I’m your host, Molly Wood. Sharon Kallander and Matthew Duncan produced this podcast. Jessica Voelker is the WorkLab editor.