An interview with Microsoft President Brad Smith
Episode 113 | April 1, 2020
Brad Smith is the President of Microsoft and leads a team of more than 1400 employees in 56 countries. He plays a key role in spearheading the company’s work on critical issues involving the intersection of technology and society. In his spare time, he’s also an author!
We were fortunate to catch up with Brad who, late on a Friday afternoon, sat down with me in the booth to talk about his new book, Tools and Weapons: The Promise and the Peril of the Digital Age, and revealed the top ten tech policy issues he believes will shape our own century’s roaring 20s. He also gave us a peek inside the life of a person the New York Times has described as a “de facto ambassador for the technology industry at large” – himself!
- Research Collection: Research Supporting Responsible AI
- Keeping an Eye on AI with Dr. Kate Crawford
- Life at the Intersection of AI and Society with Dr. Ece Kamar
- Examining the social impacts of artificial intelligence with Dr. Fernando Diaz
Brad Smith: Fundamentally, what we are talking about, is endowing machines with the power to make decisions that previously could only be made by humanity and we have to ask ourselves what kind of decisions do we want machines to make? If we have any aspiration of these decisions reflecting the best of humanity, we better focus on responsibility and all of the pieces of it.
Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting-edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.
Host: Brad Smith is the President of Microsoft and leads a team of more than 1400 employees in 56 countries. He plays a key role in spearheading the company’s work on critical issues involving the intersection of technology and society. In his spare time, he’s also an author!
We were fortunate to catch up with Brad who, late on a Friday afternoon, sat down with me in the booth to talk about his new book, Tools and Weapons: The Promise and the Peril of the Digital Age, and revealed the top ten tech policy issues he believes will shape our own century’s roaring 20s. He also gave us a peek inside the life of a person the New York Times has described as a “de facto ambassador for the technology industry at large” – himself! That and much more on this episode of the Microsoft Research Podcast.
Host: Brad Smith, welcome to the podcast.
Brad Smith: Thank you, nice to be here.
Host: You’re an unusual guest for us in the booth. As President of Microsoft, you oversee a lot of stuff and you wear a lot of hats. So let’s kick things off by talking about what gets Brad Smith up in the morning. What does a day in the life of the President of Microsoft look like?
Brad Smith: I think what gets me up, frankly, is the opportunity to sit down and work hand-in-hand, or at least arm-in-arm, with, you know, researchers, with engineers, with people focused on computer science and data, and what it all means for the world because that’s really, in many ways, my job. It’s the intersection, if you will, between engineering and the impact of data and technology on the world today, the issues, the challenges that all this creates. I, you know, spend a lot of time representing Microsoft externally. I spend a lot of time working on our big initiatives internally. I like to say that if there’s an intersection, and there is, between engineering and the liberal arts, I’m the liberal arts side of the intersection, but I’m right smack in the middle of it every day.
Host: I want to go there for a second because we’re looking at universities around the country that have been responding to the uptick in STEM majors and the downtick in humanities majors and they’re responding financially. They’re closing some departments and they’re consolidating some. Speak for a second about the importance of the liberal arts and humanities road coming into this intersection.
Brad Smith: I think the thing that people are missing today is that, more than ever, technology is a multi-disciplinary sport. This is an industry that was largely built by engineers, researchers and developers and the like, and I grew up in it. I’ve been at Microsoft for more than twenty-six years. But if you look at where technology is going, I think everyone who majors in computer science or data science needs to take a dose of other courses in the liberal arts. I think everybody who studies in the liberal arts absolutely needs some exposure to computer science, to data science, to statistics and the like. But what we really need to recognize is the teams that are going to do the best work, who are going to solve the world’s greatest problems using technology, are almost always going to be multi-disciplinary teams, people who’ve come from different functions and different backgrounds.
Host: Well, a big chunk of what we’re going to talk about today is on the topic of artificial intelligence, or AI, and we have a lot of ground to cover, but before we get into the weeds, I want to start at a higher level and look at AI through the lens of responsibility. I think we all realize the power of AI and many have begun to talk about things like ethical AI and trusted AI, but you’ve chosen the word responsible. Why?
Brad Smith: I think it’s important to have a word that encompasses more of what we’re really talking about. Ethics play a fundamentally important role. There are things that I think go beyond ethics, to some degree, that are grounded in the rule of law, in the recognition of human rights, an element of societal responsibility. Fundamentally, what we are talking about, is endowing machines with the power to make decisions that previously could only be made by humanity and we have to ask ourselves what kind of decisions do we want machines to make? If we have any aspiration of these decisions reflecting the best of humanity, we better focus on responsibility and all of the pieces of it.
Host: Hmm. Well on that note, you and your colleague, Carol Ann Browne, who’s Microsoft’s Senior Director of External Relations and Executive Communications, have a new book out called Tools and Weapons. Just the title is fantastic, and it’s evocative of the idea that every new technology comes as a package deal. It’s both a blessing and a curse. So tell us what inspired you to write this book at this time?
Brad Smith: I think two things inspired us to write it. One is the ubiquitous nature of digital technology in the world today. It really has become the fabric of our lives, our homes, our communities, our societies. It is, in some ways, at the foundation of every opportunity to make progress. Technology is also part of every challenge that every community is facing. That really speaks to the tool and the weapon that technology has become. And we really felt that it was important to reach a broader audience to bring these issues to life. These issues are too important to be left to people who work in tech companies. Uh, by definition, they’re affecting everyone and I think it’s, to some degree, incumbent upon us who are closer to it to help make the issues, the facts if you will, more accessible to more people.
Host: In your work at Microsoft and in Tools and Weapons, you outline six core principles that you suggest will guide us into this next decade and they provide the underpinning of responsible AI, which we’ve just alluded to. So give us a brief overview of the principles and why they’re important, but also how you see them playing out in what I’ll call an AI, 5G, quantum computing, cloud scale era.
Brad Smith: Well, first we, at Microsoft, did develop and publish our six ethical principles in a way that’s sort of remarkable to me. This was only two years ago that we did it. This was a joint effort of, really, people in Microsoft Research led by Harry Shum and Eric Horvitz, and people in the part of the company that I lead, to work together. The six principles really cover, first, fairness or the avoidance of bias, the need to protect privacy and security, the need to ensure that artificial intelligence is safe and reliable, and the need to ensure that it’s inclusive, I will say, for all people, and perhaps with a special eye towards the billion people on the planet who have some sort of disability.
Brad Smith: That adds up to four. Those four principles really sit on two others that are foundational for all of them. One is transparency. The notion that people can’t understand or have confidence in the fulfillment of these principles unless there is a level of transparency. And then there is the principle that I think is the bedrock of them all: accountability. The notion that machines must remain accountable to people. The principle that the people who create this technology must remain accountable to society as a whole. That adds up to the six, and what I think is interesting, in part, is that this set of principles, or other principles like them, are really spreading around the world.
Brad Smith: I think to some degree, Microsoft’s principles influenced others. Certainly, to some degree, other people’s work influenced us. But mostly, and I think it’s encouraging, people are tending to think in fairly similar ways and you see a consensus emerging, more or less, almost organically. That’s encouraging.
Host: How do you think, how do you wrap your brain around the fact that, while you and others can say these are the things we’re aiming for, you’ve got all these other players and actors in the world that may or may not be as eager to follow those as you?
Brad Smith: Well, I think that really points to two very important dimensions. I’ll just call it the state of responsible AI in the world today.
Brad Smith: The first is even those of us who embrace these principles have to recognize that being able to articulate them is not sufficient to operationalize them. And so the biggest challenge, whether you’re talking about Microsoft, or any institution in the world today, is really to figure out how to take its commitment to principles and turn them into something that is real every day.
Brad Smith: And that requires going from principles to policies. You need to implement these policies in a series of standards, things like research or development guidelines. You need to put in place training programs for employees. You have to have the capability to measure and monitor whether they’re being pursued. You need compliance systems. You need to build all of that. And we need to do it, in a case like Microsoft, literally, at a global scale. And I don’t think anyone should underestimate just the magnitude of that challenge. And then, by the way, you have the second challenge. What do you do about people who say, that’s very nice. I don’t care. Um, I’m not going to be principled, or I’m not going to sign on for that principle. I’m going to use artificial intelligence in ways that are going to do societal damage. And I think this is where public policy and the law kicks in.
Brad Smith: Ultimately, the only way to ensure that everyone is ethical, or is accountable for some ethical standard, is to take the ethical principles that we want to apply universally and enact them into law.
Host: Every year you and your team – while we’re on the topic of lists – identify ten top tech issues that you predict will be important for the coming year. And when it’s a beautiful year like 2020, for the coming decade. As you’ve said in your book Tools and Weapons, technical innovation isn’t going to slow down so the pace of the work around it has to speed up. Give us an overview of the list you’ve got this year, for the decade of our own roaring 20s, as it were, and your thoughts on how people doing the technical work, as well as the people doing the other work, might help address them and do so at the speed of technology.
Brad Smith: We really found it helpful to create our top ten list this year. This is something that Carol Ann Browne and I have done for a few years in a row and yeah, having then written the book and been out talking to people about the book, we took the conversations and, frankly, everything we were hearing from other people, took a step back and said, well, it’s the 2020s, let’s just not focus on ten issues for the year, let’s think about ten issues for the decade.
Brad Smith: And they tended to fall into, I would say, you know, four buckets. The first, an issue all of its own, but a bucket completely on its own, is sustainability, just because we see climate as such an important issue and it’s going to reshape everything, including technology.
Brad Smith: Second, we have issues of fundamental importance, around trust, around privacy, security, digital safety, responsible AI. Third, we see huge issues around geo-politics, whether it’s the relationship between the United States and China, or the focus on digital sovereignty, especially in countries in Europe. And finally, there’s really the role of technology in inequality. We talk a lot about income inequality. You see technology playing into that, especially in the context of internet inequality.
Brad Smith: Some people have broadband, some don’t. Skills or educational inequality, especially access to digital skills. Housing inequality, in cities like Seattle or San Francisco where the tech sector is fueling a rise in housing prices. So when you take, you know, the future of the planet, our ability to trust technology, the geo-politics of technology, and, you know, technology-fueled inequality, it’s going to be quite a decade! The roaring 20s may be pretty roaring, I think is one way to think about it!
Host: You know, you’re a lawyer, and the thing that seems to be lagging the most in my mind, and I may not be alone, is that the law hasn’t caught up to technology. What kinds of things are happening in the, sort of, political and legal structures around – we’ve seen GDPR in Europe and some of the other sort of thinking forward – what’s happening elsewise in this arena?
Brad Smith: Well the basic thesis of our book is that tech companies need to step up and do more, and governments need to start moving faster.
Brad Smith: We are starting to see governments move faster, probably first and foremost in, I’ll say, Brussels and Beijing. Those are the two places where regulation tends to move the fastest. We’re seeing it in other places. I think it will be fascinating to see what unfolds in London, now that the United Kingdom is really its own regulatory power, if you will.
Brad Smith: We will see more momentum in Washington, D.C. Already we’re seeing it at the state level in the United States. We’re seeing California be a leader in the United States around privacy.
Brad Smith: So I think it’s very clear that, by the end of this decade, technology is going to be more regulated than it is today. And that will be good, and that will create challenges for all of us who work with it.
Host: Well and the fact that, it has to. I mean, you’ve got things that people would say, we don’t even know what to do with this in a court, right?
Brad Smith: One of the points we’ve made is that, in so many respects, digital technology has gone unregulated for probably a longer period of time than any important technology since, say, the 1850s.
Brad Smith: Compared to the automobile or airplanes, for example. Everything that resulted from the combustion engine. We saw more regulation. Or just think about the world in which we live: foods, drugs, you know, cars today… they’re all regulated by health and safety standards, and yet digital technology is not. And yeah, I think it’s overdue. It doesn’t mean that regulators should be thoughtless or uninformed or fail to think about balance, but we do need a regulatory floor and I think it’s right to recognize that.
Host: Right. And even the things you mention, these are all things that have imminent harm potential if something goes wrong, and I think we’re just starting to figure out there’s potential imminent harms with these technologies.
Brad Smith: I think that is true and I think that, you know, by 2030, in so many ways, an automobile is going to be a computer on wheels.
Brad Smith: An airplane is going to be a computer with wings. But fundamentally, computers, digital technology, AI, will raise many of these issues even if they’re in a box that’s standing still.
Host: Well, one of the biggest fears that people have about AI, aside from sensational predictions in the popular press, is a grouping of topics that you’ve mentioned, privacy, safety and security in an AI world. We’ve talked a bit about the “what” of these concerns, but I want you to talk a little bit about the “what now?”
Brad Smith: Well, I think the first question for anybody who works in the technology field, as a researcher or a developer or a designer, is actually to think hard about what these issues mean for the products that people want to create.
Brad Smith: What does it mean to have privacy by design, to have digital safety by design, to have responsible AI by design, to have cyber security by design? All of these are design fields that have started to really take off and, in many respects, they’re maturing rapidly. In many respects, I think those of us who are connected with the creation or the research advances in the technology are absolutely in the best position to bring innovation to the protection of people that will be essential. And then if you look beyond that, all of us are users of technology. We’re all consumers. Increasingly there are many features in popular products, consumer products, business services and the like, that do protect privacy. Certainly they protect security. And the question is whether, as consumers, we want to use them. And, you know, for all of us who care about these causes, I think there is some real benefit to using them and, frankly, helping to give a boost for the kind of usage that will help drive improvements.
Host: Right. Interestingly – and I had some other researchers in the booth who’ve talked about these privacy and security and safety issues – a lot of technology is binary. You either want to use the app and so you agree to everything, or you say no and sorry, you can’t use the app. So is there any move towards controls on the part of consumers and users in technology to say, hey, it’s not just binary. You can have this about me, but you can’t have that?
Brad Smith: I think the answer is no and yes. Um, no, I mean some services are binary, but increasingly, you look at an app on a phone and you think about something like the location service, there’s three choices: you can never use the location service, you can always have the location service on, even when the app itself is not running, or you can say, the location service can locate me, but only when I’m using the app.
Brad Smith: Um, and the first thing I would say is, if you want to protect your privacy, you can go to that middle…
Brad Smith: …level and only have the location service know where you are when you actually want the app to do something for you.
Host: Right. Right.
Brad Smith: But I would then actually step back and look much more broadly. There’s a lot to what you say in suggesting that we don’t have as much choice as consumers that we might like.
Brad Smith: So what do we do? I’ve had vibrant debates in Silicon Valley where some in the tech sector have said, look, the fact that people are not turning away from this app or another means that people fundamentally don’t care about privacy. I believe they do care, but people want to continue to use these services and where you see them manifesting their opinion is actually the public opinion that is increasingly shaping the views of government officials.
Brad Smith: The fact that California passed a sweeping privacy law after it had enough signatures to go on the ballot, after the polling showed it would be passed overwhelmingly, I believe says, people do care, they want to have their privacy protected and they want to be able to use the service.
Brad Smith: They want both.
Host: In your book and elsewhere, you also talk about the positive things that we’re seeing as a result of advances in technology and one of the best things about AI is its ability to democratize and improve areas like medicine and accessibility and the environment. So just in 2020 so far, it’s been a busy January for you, Brad, you’ve led two big announcements for the company. One is Microsoft’s Carbon Negative by 2030 initiative.
Brad Smith: Yes.
Host: To say it right. And another is the launch of AI for Health with Peter Lee from Microsoft Research here. Both are part of your AI for Good program, so tell us a little bit more about these announcements and why they’re important to Microsoft’s larger mission in developing technology.
Brad Smith: They were both really important and, in my view, exciting steps for Microsoft to take. Our carbon announcement I think is not just important to Microsoft, I hope it is something that can be part of an ongoing broader movement that we’re clearly seeing every day that is sweeping around the world, moving across the business community and really mobilizing companies to do more to address carbon and climate issues. It took a huge amount of work to bring together every part of Microsoft, to really make that announcement possible and it took a lot of iteration to sort of get to a point where we could have the ambition that was as high as I felt we needed, but also the rigor of a plan that would give us confidence that the goals could be met. It speaks powerfully to the role of digital technology in part, because we have these huge goals, as you mentioned, to be carbon negative by 2030. To, in effect, go back in time and remove, by 2050, all the carbon that Microsoft has emitted since its founding in 1975. And part of this goes to the heart of more renewable energy for our data centers, more efficiency for our data centers, a variety of other steps where digital technology, digital transformation, will just be fundamental to not just Microsoft’s own direct carbon reductions, but also across our supply chain, our value chain. So digital technology is, I think, a foundational tool for helping to address the world’s climate needs. And at the same time that we hopefully have a planet that is habitable in the right kind of way, we can also spread better health for the human population.
Brad Smith: And this is where the AI for Health initiative that, really, Peter Lee and then John Kahan from the data science side, have been at the heart of leading. And there are so many areas where it’s now clear that data and artificial intelligence can help lead to breakthroughs. Breakthroughs in helping us find cures for diseases, helping us understand the distribution of, if you will, health among different populations…
Brad Smith: …helping us bring better health to broader populations. AI is, in a sense, at the heart of everything in the world today, so it makes sense that as we’ve been expanding our own AI for Good efforts, we now have five pillars. We started with AI for Earth. We went to AI for Accessibility, AI for Humanitarian Action, AI for Cultural Heritage, and now, AI For Health. It is exciting to see how many different problems AI can help us address. I think what it really points to, and I think it’s an interesting aspect of all of this, is again, the multi-disciplinary nature of technology.
Brad Smith: So much, I believe, of the cutting edge of research is not just within a field, but, you know, the AI for Earth work is a great example of this. At Microsoft, we have a team that consists of computer scientists and data scientists and environmental scientists.
Brad Smith: And you can take the first two and add in a third discipline from a broad list of disciplines and if you can get people working together you can probably do some good for the world.
Host: Well, Microsoft isn’t the only one in the AI game. It’s at the forefront of every major tech company and, more importantly, the forefront of many nation states now. As President of this company, I’d like to know how you position Microsoft in this very large arena and how you view the company’s role in the AI world. What’s Microsoft’s vision in terms of leadership in AI, both inside the company and outside?
Brad Smith: There are two things that come together that I think are critically important. The first is Microsoft’s grounding for all of us who work here in our mission. You know, it really is a mission to empower other people, other organizations, all around the world to use technology, including AI, to achieve more. Now, what that means, put in that context, is a couple of key things. One is, our mission really is universal. I mean, we’re trying to create technology that people can use around the world to better themselves and their communities. One of the things that means is that we want to democratize technology. We want to democratize access to it. I don’t think that any of us should want a future where the secrets, or the wealth, of AI resides just in a couple of countries.
Host: Or companies.
Brad Smith: Or companies, absolutely. I think we should think of it more like electricity. Electricity has spread around the world and a country benefited from it mostly based on how quickly it adopted it…
Brad Smith: …and spread it to its rural communities and the like. That’s what we should want of AI. But there is a second dimension that is also, to some degree, at odds with the notion of providing this technology to anyone who wants it to do with it whatever they choose. It goes back to these principles. And I would argue that those principles are even implicit in our mission. You can’t empower people if you can’t protect them. If you can’t keep them safe. So there are certain use cases that we won’t allow for our technology. At times it means there are certain countries where we won’t be comfortable providing the full range of services. And this is a more complicated world. It is, in some ways, vastly more complicated than the world of producing Microsoft Word and letting anybody use it knowing that somebody would create a work that would get the Nobel Prize in literature, and someone else would write something truly horrible, but we created the tool and we were not responsible for whether somebody turned it into a weapon if you will.
Brad Smith: Because we couldn’t control that.
Brad Smith: But in a world where AI runs as a service in a data center from the cloud, you can impose more controls.
Host: Hmm. Interesting.
Brad Smith: And I think that’s one of the reasons that governments and the public are expecting more of tech companies. They expect us to do more because we can and should.
Host: So along those lines, you’ve said that Microsoft isn’t planning to deliver AI in a big box, but rather deliver the building blocks of AI so anyone can build AI systems. Obviously with some caveats there. Since we’re sitting here in the heart of Microsoft Research, I want to get your take on what those building blocks are and the role of research in delivering them.
Brad Smith: Well, I think it’s a really great question and I see it not just at a place like Microsoft Research, but I’ve also served as a trustee at Princeton University for a number of years. And I would say two things. One is, you see in computer science departments, or you see in other departments that are really, you know, at the foundation for data science, certain ongoing opportunities for advances at the basic research level. And these are, in many ways, fields that people here at MSR and elsewhere have been, you know, heavily involved for not just years, but decades.
Brad Smith: Things like, you know, computer vision. Things like speech recognition. Almost anything relating to machine learning. You know, so you have a lot of these fields that are just moving forward very quickly. But at the same time, I think so much of the most important work is actually very multi-disciplinary.
Brad Smith: Certainly, at a place like Princeton, you know, I have the opportunity to work and see, you know, some of the issues in the environmental field again, or microbiology. I see issues that we’re working on, Microsoft and Princeton together, around so-called programmable biology.
Brad Smith: And I think that is such a defining part of the future. It’s why I’m always excited about the fact that, at Microsoft, we have a lot of people who have PhDs in computer science or data science, and we have a growing number of people who have PhDs in other fields and then we work to bring them together, and the same thing is happening at universities.
Host: Well, Brad, we’ve reached the part of the podcast where I always ask the guests to get real and answer what could possibly go wrong. A good part of your professional career has been dealing with things that go wrong, in a court of law, and you’re a veteran at the “what keeps you up at night” question. So as a leader of one of the most well-known tech companies on the planet, you have to consider, every single day, the potential down sides of every technology that your company is putting out there. So what keeps Brad Smith up at night and how does he mobilize a company like Microsoft to help him sleep better?
Brad Smith: I think, fundamentally, the thing that I worry about the most is the weaponization of the technology that we create. It can be weaponized in very specific scenarios, say, something like facial recognition, to stop people from peacefully assembling in a city square. It can be weaponized because of the risks of bias by a police force that’s not well-trained. I worry that data, and data centers, can be accessed by governments to monitor people on a scale that, you know, has been well imagined. It was written about seventy years ago in the book 1984, but now it can become a reality. I think the most natural thing for any creative company to do is to just keep creating more products and keep selling them to anyone who will buy them. And yet, if you want to be principled, you want to do good, if you want to be responsible, you have to be able to say no. No, that is not something we want to create. No, that is not something we want to sell for that particular use to that particular user. And it takes an enormous amount of discipline, self-discipline and business process, to ensure that an organization, especially one operating at a global scale, will avoid falling into those traps. That’s one of the things that keeps me up at night, wanting to make sure that we, at this company, don’t fall prey to this kind of problem.
Host: You know, the researchers that answer this question can rarely go into those weeds. They’re making the things. A person like you can. Upstream is where the company and/or the leadership decides how we’re going to be as company.
Brad Smith: One of the things that gives me great hope and encouragement is that I find that our employees do care about it, and want us to do the right thing. And I’ve been so encouraged, even, typically, when I’ve run into an account team that might have been working for months to sell something and then they’re told they can’t, but they really do get it. But it does require that we all remember that we have to stay constantly focused on this. You can say you’re principled, but if, at the end of the day, you’ll do every deal that can be done, then the only principle you’re really upholding is a principle that you’ll do every deal that can be done and it ends up swallowing everything else.
Host: Brad, you have small town, mid-western roots and a decidedly non-technical background. Give us a brief history of Brad Smith. How did your early life shape who and what you are today, and how did you gravitate from history to high tech?
Brad Smith: Well, I was really fortunate. I like to joke that I grew up in a middle-income family, in the middle of the country, with the last name Smith, the most common name in the middle of the phone book, almost literally. But out of all of that, I came out of Wisconsin, was really lucky to go to Princeton and, you know, work my way and get scholarships on my way through college, and that was one of the places that introduced me to technology and technology policy issues. While I was a student, by my junior year, I had literally graduated from delivering newspapers in the morning and serving food in the cafeteria in the evening, to having a job working for the university’s Director of Government Affairs.
Brad Smith: I was just a student assistant. It was nothing terribly grand, but the issues that we got to work on were, fundamentally, science and technology policy issues. Things like federal support for basic research. Things like the federal government’s support for plasma physics fusion research, where Princeton did, and still does, have a national laboratory. So that really awakened my interest in this intersection between technology and policy. And then, a few years later, there was this new thing coming out on the market called a personal computer and, as somebody who was going through law school, somebody who had to do a lot of writing, I looked at this and I got quite excited both because of, sort of, the technical, technology gadget side, but also, I looked at it and said, I’ll bet I can write faster and better if I have this, and then play games as well, and it turned out that all that was true!
Host: So how did you end up working for the company that makes personal computers?
Brad Smith: Well, in a sense, it all was sort of a continuous journey. I bought that first personal computer. My wife and I were both law students. Loved it so much that then, my first job after law school was working in the federal courthouse for a federal judge in Manhattan. And so I literally took the equivalent of ten percent of my annual salary and bought a new, improved personal computer, took it into the courthouse where there had not been, and there were not, PCs, and then applied for a job in a law firm in Washington, D.C., and when I got the offer, I said I would only accept it if they would give me a PC on my desk. Happily, they said yes. It was such an unusual request for someone to make at that time that everybody in this large law firm of about two hundred and fifty lawyers said, there’s this weird kid on the eighth floor who seems to know something about computers. And so I had an opportunity arise to start to do legal work for Microsoft. I loved it so much that, when they asked me to join the company in 1993, I said yes. It was supposed to be a two-year leave of absence, I had just become a partner at the law firm, and that was more than twenty-six years ago.
Host: And here you are now, President of the Mothership.
Brad Smith: It’s something! Yes.
Host: Well, this has been fantastic, Brad. At the end of every podcast I ask my guests to share some insight or wisdom with our listeners and usually they’re seasoned researchers at MSR speaking to some version of their grad school self. But you’re in a unique position to offer advice from a different perspective. So what would you say to our audience, many of whom are the very people who will shape the technology that will shape our world for the decades to come?
Brad Smith: I would say three things. One, always push the edge of the envelope without quite busting the entire door down, because that’s when you end up, you know, fraying relationships and finding it more difficult to get things done. But push the edge of the envelope. Have confidence in yourself and take those creative ideas within you and pursue them. The second thing I would say is, balance that with a sense of humility. I actually think that the great superpower that we have in the Nadella years here at Microsoft, and something that I’m absolutely passionate about, is what I’ll call the power of humility. I like to joke across Microsoft, no one ever died of humility, but it really helps you stay curious. It helps you ask other people good questions. It encourages you to listen and not just talk, and stay focused on getting better. And finally, I would say, at the end of the day, it’s great to be smart, it’s great to be successful, but it’s better to be honest. To have a sense of integrity. To me, the favorite story, perhaps, that Carol Ann and I tell in the book is one that involved me personally. And it was a story where we had stated publicly to our customers that we would sue the federal government if the government came asking for their data without, in this case, organizations being allowed to know. And you know, when our litigators came and said we shouldn’t pursue this case because we were likely to lose, and it was likely to be expensive and painful, I said, look, I’d rather be a loser than a liar. It’s okay to lose. Everybody does sometimes, and then you bounce back, but if you lie, if you sacrifice your integrity, I do think you pay a price for that for a very long time. So, be ambitious, be humble, be honest. It’s a good recipe. It serves people well.
Host: I think that needs to be on a bumper sticker.
Brad Smith: I’ll work on shortening it even more!
Host: Yeah! Brad Smith, thank you so much for joining us today. It’s been a real treat.
Brad Smith: Thank you. Thanks for having me!
To learn more about the research behind the tools and the researchers who do it, visit Microsoft.com/research