Microsoft Research Podcast


An ongoing series of conversations bringing you right up to the cutting edge of Microsoft Research.

Machine teaching, LUIS and the democratization of custom AI with Dr. Riham Mansour

October 23, 2019 | By Microsoft blog editor


Episode 95, October 16, 2019

Machine learning is a powerful tool that enables conversational agents to provide general question-answer services. But in domains with more specific taxonomies – or simply for requests that are longer and more complicated than “Play Baby Shark” – custom conversational AI has long been the province of large enterprises with big budgets. But not for long, thanks to the work of Dr. Riham Mansour, a Principal Software Engineering Manager for Microsoft’s Language Understanding Service, or LUIS. She and her colleagues are using the emerging science of machine teaching to help domain experts build bespoke AI models with little data and no machine learning expertise.

On today’s podcast, Dr. Mansour gives us a brief history of conversational machines at Microsoft; tells us all about LUIS, one of the first Microsoft products to deploy machine teaching concepts in real world verticals; and explains how an unlikely combination of engineering skills, science skills, entrepreneurial skills – and not taking no for an answer – helped make automated customer engagement and business functions more powerful, more accessible and more intelligent!



Transcript

Riham Mansour: I don’t really care where I start in the research spectrum. I might start completely solving the problem from scratch, I might start by capitalizing on other people’s work, I might put pieces together to make it work… The important goal to me is to have something that works for the purpose of the customer. 

Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting-edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga. 

Host: Machine learning is a powerful tool that enables conversational agents to provide general question-answer services. But in domains with more specific taxonomies – or simply for requests that are longer and more complicated than “Play Baby Shark” – custom conversational AI has long been the province of large enterprises with big budgets. But not for long, thanks to the work of Dr. Riham Mansour, a Principal Software Engineering Manager for Microsoft’s Language Understanding Service, or LUIS. She and her colleagues are using the emerging science of machine teaching to help domain experts build bespoke AI models with little data and no machine learning expertise. 

On today’s podcast, Dr. Mansour gives us a brief history of conversational machines at Microsoft; tells us all about LUIS, one of the first Microsoft products to deploy machine teaching concepts in real world verticals; and explains how an unlikely combination of engineering skills, science skills, entrepreneurial skills – and not taking no for an answer – helped make automated customer engagement and business functions more powerful, more accessible and more intelligent! That and much more on this episode of the Microsoft Research Podcast. 

Host: Riham Mansour, welcome to the podcast. 

Riham Mansour: Thank you. 

Host: I love situating my guests at the beginning of every podcast. It’s such a research-y term. So let’s situate you: you’re a Principal Software Engineering Manager and you run the Language Understanding Service, aka LUIS. And I’m absolutely going to have you tell us about LUIS in a bit, but let’s start with the macro view of who you are and what you do. What big problems are you trying to address and what gets you up in the morning? 

Riham Mansour: Hmmm. I started that effort of the Language Understanding Service with the Machine Teaching Innovation Group almost five years ago. And we’re trying to solve a very interesting problem, which is, how can we reach people, domain experts, who don’t have a lot of machine learning expertise, or don’t have data, or don’t know what pieces they need to put together to build an AI model. So we’re trying to get those people on board to the AI world 

Host: Okay. 

Riham Mansour: …because those people have lots of problems that AI can help them with. So, we’re trying to meet those people where they are today, helping them with AI solutions, powerful solutions to their problems. In the past, AI only came from the giant tech companies that could afford the AI experts and had enough data. But now, for enterprises and developers who want to build intelligent systems, how can they unlock the power of AI? I think this is the exact problem we’re trying to solve. 

Host: Before we get technical, I want to talk a little more about you, because you bring what I would call hybrid vigor to the research party. You have roots across the tech spectrum from academic research to applied research to product and engineering… And not necessarily in that order! So tell us a bit about how all these parts make up your unified whole, and how it speaks to the work you do. 

Riham Mansour: Yeah, so, actually, I didn’t mean to be all over the place! I think the journey of my career pushed me into starting in engineering and, back in the time, I was very interested in doing software development, coding, and design, and all that. That was my passion back then, and working in the product brings a lot of rigor and you see all the real world problems and what can be done, what cannot be done and so forth. But all throughout my career in engineering, I always had the science appetite. I like to clarify things and play with the ambiguous. Being an engineer has part of that, because you need to figure things out, but there’s more to the dimension of science than just being an engineer, because you start with a bunch of hypotheses, where, like, we know nothing about how to solve that problem, and then we start tackling the problem through the scientific method. So I always knew, during my engineering career, that at one point I wanted to explore the science aspect of life and, when the right time came, I joined a PhD program. But then, when I was done with my PhD, the question came to my mind, what should I be doing? And then, I worked a little bit as a professor, because this is the thing I didn’t do before my PhD. I did the engineering part. So I tried to explore options. That was not a job for me! Like, the job of a professor is to make a delta change in the students’ lives, and I love working at a fast pace with the smartest people in the world. So, this is why, like, a professor’s job was not for me! 

Host: Move me to a faster lane. 

Riham Mansour: Exactly, get me to a faster lane! So I joined Microsoft Research and I started exploring that sweet spot for me and that’s exactly where I found my real passion. So yes, engineering’s great, academic research is awesome, but what’s more awesome for me was, how can I build something that solves a scientific problem but yet, it’s useful to people? 

Host: You joined Microsoft Research in 2012 and you came as a senior NLP applied researcher… 

Riham Mansour: Yes. 

Host: …in the Advanced Technology Lab in Cairo. 

Riham Mansour: Yes. 

Host: And you worked on several language innovations that ended up actually shipping in products. So tell us a bit, in turn, about the work you did in sentiment analysis, topic detection and key phrase extraction, because those are some of your big innovations. 

Riham Mansour: Yes. Back in the time, when I joined the Advanced Technology Lab in Cairo, my mission was to focus on making an impact out of whatever innovations we worked on. So, I always start with the customer. I don’t like to start with the problem that interests me; it has to be a problem that interests me and solves a real problem. So at first I went and, like, talked to multiple groups who were either doing NLP at the moment, or had plans to do NLP, like the Azure Machine Learning team. So, when we started talking, then I wanted to come up with my research agenda. And at that time, like, I figured out a bunch of problems that have to do with, I would say, the basic fundamental building blocks of NLP stacks. So I started abstracting the different problems I got from the different groups and tried to put them in one technology that could serve multiple groups so that we can, like, really make an impact on multiple ones. And back in the time, those were the three key pieces that we landed on as important across multiple groups, and they ended up landing in production for many of these groups. These are very classic NLP problems that many people in the company have kind of tackled. The other main activity I did was looking at who, in the company, is doing that, right? Who in MSR or non-MSR groups was working on that actively. And then I started a bunch of collaborations. So, for example, on sentiment analysis, I found that there were two main players in that area, and I found that they were already talking, but kind of each of them was working on a different approach. And my goal was always to solve what the product group wants. 

Host: Right. 

Riham Mansour: So I was the person among the three who played that role in the collaboration between the two groups and myself. There’s that example, but then there was another example, topic detection, which was something that, for example, Bing News wanted: to tag their news data with topics that might not necessarily be mentioned in the news articles. But by finding the semantic relatedness between the content of the article and other web data, we can get to the level of detecting a topic that is not mentioned. That work was not available in other groups, and that was something we started working on and moved forward. I don’t really care where I start in the research spectrum. I might start completely solving the problem from scratch, I might start by capitalizing on other people’s work, I might put pieces together to make it work… The important goal to me is to have something that works for the purpose of the customer. 

Host: Well, your work now falls under the big umbrella of Human Language Technologies, which many have called the “Crown Jewel of AI.” Give us a brief overview of the HLT landscape right now, specifically this movement toward Conversational AI. What’s state-of-the-art right now, and how’s your work situated in it? 

Riham Mansour: So, I think I’ve been, in general, interested in human languages and how to get machines to understand human languages, to maybe do more for humans in their day-to-day. So, part of that has been the Conversational AI space. This is one vertical, I would say, that emerged from the fact that we can make it possible for machines to understand human languages. That unlocks a bunch of opportunities. For example, how do we do customer support today? Customer support is all about agents talking to humans trying to solve their problems. If we can automate pieces, we can save those enterprises a lot of money, right? There are other verticals within Conversational AI, like finance and banking, where you need to do more task completion. So between, like, customer support, question answering, and task completion that is very specific to the business, I think that’s where HLT comes into play in the Conversational AI space. Because businesses have started paying attention to the fact that it can help them a lot with changing the way they do business today. There was a lot of traction in that space and that, right away, rang a bell for me. And I think this is the part I love to play at. Like, this is where I bring in my engineering skills, my science skills, and I would say, even, my entrepreneurial skills into, like, looking at, how can we take that technology and make it work for that specific vertical to solve that specific problem? 

(music plays) 

Host: I want to turn our attention specifically to machine teaching for a second because it’s the technical foundation of what we’ll spend most of our time talking about today. I recently had your colleague, Patrice Simard, on the podcast, and he gave us a fantastic overview of machine teaching. But for those people who didn’t hear that podcast (and they can go back and hear it if they want!) and even for those who might have heard it but can’t remember everything, let’s review. What is machine teaching and how is it different from traditional machine learning? 

Riham Mansour: The goal of machine teaching and traditional machine learning is to build an accurate model. Same goal, right? So a user who’s using either would have the goal in mind to build a model, a good model, right? But the what and how is what’s different. Usually, to build any model from data, you need to have some knowledge that exists somewhere. Machine teaching is about extracting the knowledge from the teacher, so it has the human-in-the-loop providing the necessary knowledge about the domain, so that we can build an AI model specific to that domain. Traditional machine learning is about extracting knowledge from data. So, using the compute power to extract the knowledge from huge amounts of data, and that’s where deep learning and other key words, transfer learning, come into play. So when and why machine teaching can be useful, I would say: in situations where there isn’t enough labeled data already available, and you want to build a custom AI model that’s specific to your domain, but you don’t have machine learning expertise. Between the three pillars that I just mentioned, this is when machine teaching shines as a good solution versus traditional machine learning. If a problem has lots of labeled data, just go on with traditional machine learning, because deep learning would shine way better, but machine teaching is for when you don’t have data, you don’t have machine learning expertise, and you want to build a custom model. 

Host: Mmm-hmm. 

Riham Mansour: So machine teaching is all about custom AI when you don’t have labels or when you don’t have machine learning expertise. This is exactly the problem we’re solving. And we’re providing the first programming language of AI. What we’re providing is a teaching language for humans to teach the machine so that we can build an AI model in the background. So the way we extract the knowledge from the teacher is basically by offering some language they can communicate to the tool with and we translate that language that humans provide into a model. And that’s exactly where the customization part comes into play. So, for example, you have a bunch of vocabulary that’s very specific to the domain. And, in machine teaching, we give you a venue to provide that specific vocabulary… 

Host: To feed those words in. 

Riham Mansour: Exactly. And it’s not only words, because we have a lot of analogy between machine teaching and programming, so we’re trying to learn from the six decades of programming how to build AI models. We look into programming and we see how developers build their programs. The first thing they do is design and decompose, right? It’s hard to solve a complex problem. We have to divide and conquer so that we can share pieces of code and re-use pieces of code. That’s exactly the core of machine teaching. And that’s why we’re building custom AI models in a different way, different from traditional machine learning. 
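The decision criteria Dr. Mansour lays out – little labeled data, no machine learning expertise, but a need for a custom model – can be sketched as a toy rule of thumb. The function name, threshold, and labels below are illustrative assumptions, not part of LUIS or any Microsoft API:

```python
def recommend_approach(labeled_examples: int, has_ml_expertise: bool,
                       needs_custom_model: bool) -> str:
    """Toy decision sketch for the machine teaching vs. machine learning trade-off."""
    # Plenty of labels plus ML expertise: traditional (deep) learning tends to win.
    if labeled_examples > 10_000 and has_ml_expertise:
        return "traditional machine learning"
    # Little data and no ML expertise, but a domain-specific model is needed:
    # the regime where machine teaching shines.
    if needs_custom_model:
        return "machine teaching"
    # Otherwise a generic pretrained model may be good enough.
    return "off-the-shelf pretrained model"

print(recommend_approach(200, False, True))
```

The point of the sketch is only that machine teaching targets the lower-left corner of this space: custom models without big data or ML experts.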

Host: Let’s talk about LUIS, Microsoft’s, what you call, entry point into the machine teaching market. 

Riham Mansour: Yes. 

Host: You founded it in collaboration with the Machine Teaching group in Microsoft Research in 2015… 

Riham Mansour: Yes. 

Host: …and then it moved from research to product landing in Office just this year. 

Riham Mansour: Yes. 

Host: So this is sort of a multilevel, progressive question (forgive me for that!) and I’ll circle back if necessary, but tell us the story of how LUIS was born, how it grew, where it is now, and what aspirations it has for the future. 

Riham Mansour: Mm-hmm. Mm-hmm. So back in 2014, the Advanced Technology Lab where I was working got reorged to Xuedong Huang, XD, who is running the speech services in the company. And XD was fascinated by solving real-life problems. And we had that expertise in natural language processing, but he wanted us to make a bigger impact by kind of taking the technology even further to reach millions of users. Back in the time, we were looking at Wit.ai, which Facebook acquired. And Microsoft was looking at the technology and started saying, oh, the virtual assistants space, that’s where we need the kind of Wit.ai technology. And then we looked around and we said, hey, why don’t we build it ourselves? And XD, back then, believed that we had the right talent to build that. So the task was basically, we need to put Microsoft in that space of virtual assistants. How can we build a bunch of tools that can enable different enterprises to build their own virtual assistants? What if an enterprise company or a bank wants to build their own assistant to help their customers or to serve their internal employees? Then we need a bunch of tools, right? So language understanding has been a key part of the speech stack: speech comes in as audio, gets translated into text using Automatic Speech Recognition (ASR), and then, out of that text, we need to apply language understanding, right? To extract what the user intention is and what the key entities or key words in the utterance are. Understanding the content of the utterance is very important, right? Translating the unstructured data, or human language, into a structure that the machine can understand and act upon is the stage we call language understanding. 
So, XD started tasking us with that and then we looked around in the company, like, with my practical mentality: should we just go ahead and, like, build something from scratch, because there isn’t any technology that we can leverage, or should we just leverage something? And, back in the time, I got to know Patrice Simard, who is, like, I would say, the godfather of machine teaching, and we figured out that that technology could, with more tweaking, work really well in building a third-party offering for language understanding. But when we started the journey, we didn’t really have in mind that we were building the product that I can see today. We were testing the water, I would say, because the whole market was new; there wasn’t any of the tech giants playing in that space. Back in 2014, even bots were not trendy, right? So we didn’t know what was possible, why that technology is useful. We knew that Microsoft needed to play a key role in the virtual assistant space, right? Or the Conversational AI space… 

Host: Yeah. 

Riham Mansour: …but we didn’t know what that would look like, or the shape of the market, or any of that. So we tested the water, we took some baby steps, I would say. We got the technology, catered it for the Conversational AI space, branded it LUIS, and it became one of the early cognitive services when cognitive services went out. 
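The text-to-structure step Dr. Mansour describes, mapping an utterance to an intent plus extracted entities, produces results shaped roughly like the following. This is a hand-written sample in the style of a LUIS prediction response; the query, intent name, and entity values here are invented for illustration:

```python
import json

# Hypothetical language understanding result for one utterance:
# the raw text becomes an intent plus typed entities the bot can act on.
response = json.loads("""
{
  "query": "book a flight to Cairo next Tuesday",
  "topScoringIntent": {"intent": "BookFlight", "score": 0.97},
  "entities": [
    {"entity": "cairo", "type": "Destination"},
    {"entity": "next tuesday", "type": "builtin.datetimeV2.date"}
  ]
}
""")

intent = response["topScoringIntent"]["intent"]
slots = {e["type"]: e["entity"] for e in response["entities"]}
# Downstream logic dispatches on structured data, not raw text.
print(intent, slots)
```

The structure is the whole point: once "book a flight to Cairo next Tuesday" becomes an intent and a set of typed slots, a bot or task-completion system can route and fulfill the request programmatically.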

Host: Just to clarify, LUIS stood for Language Understanding Intelligence Service and you still call it LUIS, but you’ve dropped the “intelligent.” It’s just Language Understanding Service? 

Riham Mansour: Yes, because, back in the time, we just launched the Language Understanding Intelligence Service, acronym LUIS, and then, later on, with the marketing team and the branding team, we dropped the word ‘intelligent’ and kept the Language Understanding Service, but the acronym LUIS had a lot of mindshare by that time and we kept it. So, LUIS, when we launched it in 2015, in private preview, we were only lining up, I would say, two hundred developers. We just wanted two hundred developers to come and use the system so that we could understand how they use it, why it’s useful and so forth. And then we got a queue of 10K developers! 

Host: Ten thousand… 

Riham Mansour: Yeah, ten thousand! Asking for the service, to access the service. And the problem back then was, we needed a lot of engineering work to make it scale and really work. But given the traction we got in the market, and the fact that other companies were investing in that, XD and Harry Shum said, okay, we need to invest more in making this a product. So I got a little bit more engineering resources and I worked closely with Patrice and his Machine Teaching Group to move that forward with more features, build it at scale, and make it work for enterprise. And then, at Ignite in 2015, it was the first time we supported a non-English language in LUIS. 

Host: Oh, wow. 

Riham Mansour: So we supported Chinese. XD and Harry announced the Chinese support for LUIS in China. And then we started landing more and more customers over time, and we’re, today, at hundreds of thousands of customers using our platform. And now Conversational AI is a thing, and all the giant companies are players in this space. And, interestingly enough, at first we wanted to enable speech interfaces to some devices. Later on, there were text-based chat bots. And then even, I would say, interactive gaming is coming into play. So there were scenarios we hadn’t thought through when we built the product, but then we found that some of the customers are using it just for that, right? So, see? This is exactly what gets me up in the morning, coming to the office. You feel like you’re doing something that is very leading in the market. We’re creating that new market, we’re drawing the features of that new market, we’re defining it, right? So that is, like, lots of fun! But the journey of LUIS didn’t stop there. We took it all the way to general availability on Azure, and it became, like, a generally available service with an SLA and all that, right? So it became, really, a more mature product, I would say, towards the end of 2017. At some point, we said, okay, here’s the machine teaching technology in a form that serves Conversational AI, but machine teaching is a way of doing custom AI models, and the input could be a signal, or image, or text. LUIS is very focused on text, but we wanted to look into how we can give LUIS an even bigger scope, to solve other problems in the text space, because that’s my area of expertise. 

Host: Right. 

Riham Mansour: Uh, right? So I’m kind of biased but then, in March 2018, we put together a proposal for growing the scope of LUIS to be able to use machine teaching, but to do processing of long documents. So basically, enabling customers to build custom document classifiers, entity extractors, which are very key to the document processing pipelines, but custom models, right? And that’s the reason behind the recent re-org, first unifying the Machine Teaching Science Group with the product, which is LUIS, and putting them together in one group in Office. So now we’re serving Conversational AI over Azure and Office and we’re serving document understanding in Office and Azure as well. 

(music plays) 

Host: This is the part of the podcast where I ask what keeps you up at night… 

Riham Mansour: Mmmm! 

Host: …and we’ve been talking about a couple of ideas that could have unintended consequences, like giving machines human language capabilities, and giving humans who have no expertise in ML the tools to build their own models. 

Riham Mansour: You know, when we’re creating a new market, or redefining the way people do business – and this is not specific to that area but, in general, on the edge between science and product – there exist a lot of unknowns, right? 

Host: Right. 

Riham Mansour: And a lot of questions and assumptions that we make on the way because of who we are, right? But then, how much that would resonate with real people when we meet them where they are is a different question. So what keeps me up at night is the general question of, would that product be successful or not? How can we make it useful? What problems is it trying to solve that it’s not solving today, and it’s supposed to be solving? These unanswered questions, and those assumptions we make in taking the technology and putting it in the product, are what keep me up at night. Because if we do it right, if we keep validating, and are humble enough to take feedback from customers, and do that, that is, like, kind of a solution to this anxiety that I might have. And I figured out that process of how to collect feedback, how to speak to customers, what language we are speaking to customers, and so forth, so you can translate that into a science problem that you communicate back to the science team to solve. I think this is exactly the dynamic I’m kind of good at, or the thing I’m doing day to day: speaking to the science people, speaking to my engineering team, and speaking to the customers. With those three key stakeholders, you need to do a lot of translation in the loop to get it right. That kind of loop, and these kinds of questions, and the input I get from customers, is what keeps me up at night: how to put it together, how to translate it into a problem that I give back to the science team and say, hey, this is what we need to solve. 

Host: You’ve an interesting, and somewhat peripatetic, story. I love that I could use that word in a podcast. How did you get started in computer science, where has your journey taken you, and how did you end up at Microsoft doing the work you’re doing today? 

Riham Mansour: It’s pretty interesting. So, uh, when I finished high school, I wasn’t really sure what I wanted to do. Yeah, I wasn’t sure. Today I know what I want really well, but back in the time, I didn’t know what I wanted. And computer science, in the 90s, was, like, a pretty new field. Like, students didn’t major in computer science a lot. It was pretty trendy, but nobody had figured out, like, what should we do with it. The only couple of companies that existed back then, out of the giants, were Microsoft and IBM, right? There wasn’t Facebook, there wasn’t Google, so we didn’t even know the size of the job market as a student in high school. So I grew up in Cairo, in Egypt, and, back in the time, there was Microsoft and there was IBM, and this is all I knew about the world back then, right? When I went to college, at first I decided to go to medical school because my parents wanted me to go to medical school. But this didn’t resonate well with me, because I had a lot of passion for science and physics and math and all that… and my dad is a mathematician, so I had a lot of passion for math since I was a kid, and problem-solving and all that. But you know, at that age, I didn’t know what I really loved, and it’s hard to recognize yourself at the time. So I got to college and then I decided not to do medicine and then I started exploring. I explored accounting, I explored economics, I explored marketing, I explored a bunch of things… and computer science. And, from the very first computer science class, it was awesome! This is the thing I want. Problem solving, right? It resonated a lot with me, but I liked the fact that I did the homework of exploring other options. So, when I went into computer science, lots of people said it’s a hard field, like, you will work day and night, you will only deal with machines, and that stuff, right? But I loved it. 
And when I graduated – actually, in my last semester, it’s interesting – Microsoft, back in the time, was sending recruiters to Cairo to recruit people from my college. 

Host: Which college was that? 

Riham Mansour: American University in Cairo. 

Host: Okay. 

Riham Mansour: And it was kind of interesting because the top people in class used to go to Microsoft, so it was good prestige if you got an offer from Microsoft. But, back in the time, a colleague of mine and I were the only people in our class who got an offer from Microsoft here in Redmond. But my parents didn’t want me to come here by myself and leave them. So I stayed back in Cairo and started my career at IBM instead, and that’s where I started as a full stack developer and, later, a dev manager. That’s how I started my journey in computer science. But at IBM, I learned lots of rigor, lots of discipline, the corporate world, and, like, it was good learning that prepared me a lot for where I am today. 

Host: Tell us something we don’t know about you, Riham. 

Riham Mansour: Okay. 

Host: Any interesting or defining moments in life, an epiphany, an experience, a characteristic, that have defined your direction, personally? 

Riham Mansour: There are two things I love to do. I love to build dreams and achieve them! So I love to live the dream first before I achieve it. That’s me, yeah, that’s why I’m into that business. I think the one thing that keeps me going is, I don’t take no as an answer. So I always try to find another path to my dream. And I have to build the dream, embrace it myself, so that I can get my team to execute on it. Those are the two key things that I would say characterize me or define who I am. 

Host: Have you been like that since you were little? 

Riham Mansour: Yes, I would say, yes! 

Host: How did that go with your mom and dad? 

Riham Mansour: You know it’s interesting, because I was not a no person to my parents. 

Host: Okay. 

Riham Mansour: Yeah, not at all. And I’m not a no person in general, but I don’t take no as an answer. So if I want something, I want it so much that you might tell me no in one door, but I will go try to open another door so that I get a yes. So, I keep trying and, you know, I’ve learned in life that there isn’t anybody who wanted something very much and they didn’t get it. You always get what you really want so much. 

Host: At the end of every podcast I give my guests the proverbial last word and our audience is a really eclectic mix of researchers and devs and other interested parties… So here’s your chance to give some parting advice or thoughts to anyone across the tech spectrum who might be interested in joining the effort to build natural language capabilities into talking machines. 

Riham Mansour: Mmm-hmmm. I would say the first thing, you have to have the passion for that because it’s not a straightforward domain. It’s not welldefined yet, so it will be a journey for you. And the other thing is, embrace the signs. You need to learn what’s going on in that space, learn about the state-of-the-art, get to know what other people did well and succeeded in and failed at because that’s very key right. And then, watch for real problems to solve. Don’t try to just go with the crowd, after a trend, a technology, or… no. You need to do a lot of thinking around, like, how can I leverage whatever I get as input from the science world into solving a problem? I would encourage everyone to kind of be very practical, look at real world problems and then work it backwards. So instead of starting by the excitement to the technology, start by the problem and see what technology would solve that problem. 

Host: All right. So let’s just extrapolate there as we finish up. What’s next for Riham and what’s next for LUIS? 

Riham Mansour: So the dream is to have millions of teachers using our AI programming language to build custom AI models! 

Host: And not taking no for an answer. 

Riham Mansour: And not taking no for an answer! 

Host: Riham Mansour, it’s been so great having you in the booth today. Thanks for coming on. 

Riham Mansour: Thank you. 

(music plays) 

Host: To learn more about Dr. Riham Mansour, and the world of bespoke AI powered by machine teaching, visit Microsoft.com/research. 
