Episode 116 | May 27, 2020
Dr. Siddhartha Sen is a Principal Researcher in MSR’s New York City lab, and his research interests are, if not impossible, at least impossible sounding: optimal decision making, universal data structures, and verifiably safe AI.
Today, he tells us how he’s using reinforcement learning and HAIbrid algorithms to tap the best of both human and machine intelligence and develop AI that’s minimally disruptive, synergistic with human solutions, and safe.
Sid Sen: I feel like we’re a little too quick to apply AI, especially when it comes to deep neural networks, just because of how effective they are when we throw a lot of data and computation at them. So much so, that we might even be overlooking what the best human baseline is. We might not even be comparing against the best human baseline. So, a lot of what this agenda is trying to do is trying to push the human limit as far as it can and then kind of integrate AI where it makes sense to do that.
Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting-edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.
Host: Dr. Siddhartha Sen is a Principal Researcher in MSR’s New York City lab, and his research interests are, if not impossible, at least impossible sounding: optimal decision making, universal data structures, and verifiably safe AI. Today, he tells us how he’s using reinforcement learning and HAIbrid algorithms to tap the best of both human and machine intelligence, and develop AI that’s minimally disruptive, synergistic with human solutions, and safe. That and much more on this episode of the Microsoft Research Podcast.
Host: Sid Sen, welcome to the podcast.
Sid Sen: Thank you.
Host: So fun to have you here today! You’re a Principal Researcher at the MSR lab in New York City, which is a lab with a really interesting history and mission. But since research is constantly moving and evolving, why don’t you tell us what’s new in the Big Apple from your perspective. What big questions are you asking these days and what big problems are you trying to solve as a unit there?
Sid Sen: Our lab is really unique. We have four main disciplines represented, in addition to what I represent, which is systems, and machine learning and systems. We have a machine learning group. We have a computational social science group. We have an economics and computation group, and we have a FATE group, which is this new group on fairness, accountability, transparency and ethics. And, you know, I would say that, if I had to sum up the kind of work we’re doing, and a lot of us work on decision-making, decision-making with opportunities and under certain kinds of constraints. And I think a lot of the work we’re talking about today is trying to understand where computation, and in particular artificial intelligence, can help us and where it’s not so strong, and where, you know, the more human approach is more appropriate, or is even better, and then trying to figure out a way for these things to work together. So that’s going to be a very core kind of a theme in what we’re going to talk about.
Host: The word hybrid is going to come up a lot in our conversation today, but in different ways with different spellings. So, let’s start with your own hybrid nature, Sid, and talk about what gets you up in the morning. Tell us about your personal research passions and how you bring what I would call hybrid vigor to the research gene pool.
Sid Sen: So, I think I’ve always been a bit interdisciplinary in everything I’ve done. And ever since I’ve, kind of, grown up, I’ve always been able to do like a lot of different things reasonably well. And so, you can imagine I always questioned myself in terms of how much depth I had in anything, right? I even questioned myself throughout my PhD. When I went to do my PhD, I was co-advised in more systems-oriented work as well as theoretical work. I had two different advisors and I essentially ended up doing two separate PhDs because I couldn’t quite get those two disciplines to gel well together. It’s very hard to do that. And you know, a lot of students have asked me, oh, I want to do what you did, and I said yes, that’s a nice idea in principle, but it’s not so easy to synergize two different areas. Since then, I think I’ve kind of figured out how to do that, so how to, kind of, leverage and appreciate the breadth I have and the appreciation I have for different disciplines and make these things work together. I figured out how to synergize ideas from different disciplines to solve bigger problems, problems like, you know, how do we use AI responsibly in our systems? How do we leverage what humans are good at, or how do we keep things safe? I mean, I think these kinds of problems are not problems you can solve with expertise in one area and one field.
Sid Sen: And so, a lot of what I figured out how to do is how to bring these different ideas together and work with different colleagues to bring these solutions out.
Host: Well your current research mission is to – and I quote – optimize cloud infrastructure decisions with AI in a way that’s, as you call it, minimally disruptive, synergistic with human solutions, and safe. And to do this, you’ve got three correlated research agendas which I’ll have you dig into shortly. But before we do that, I want to go upstream a bit and talk about the conceptual framework for your work which we might call reinforcement learning in real life or RL IRL. Give us a rationale for this framework and then we’ll get technical on how you are going about it.
Sid Sen: I think a lot of AI is going to be this kind of gradual integration into the background of our lives, into the systems that we use. So, I think it’s important to first, see how much we can learn and how much we can do without disrupting existing systems, just by kind of observing what they’re doing now. Without having to change them so much, what can we do to learn as much as we can from what they are doing and to kind of even reason about what would happen if we tried different things or if we used AI in different ways without actually changing what they’re doing? So that’s why I want to take this minimally disruptive approach.
Sid Sen: Then, at the same time, I think it’s important to understand where AI should fit in, because if you just come in barging in with an AI solution, that is not something that is so easily digestible by existing systems and, you know, processes that we have in place. And so understanding where the AI is good, where it should fit in, where the human solutions are good, and finding a way that they can complement each other is something that, to me, is critical to ensuring that you can gradually make this change. And then finally, when you do make the change, you know, how do you maintain people’s trust and how do you keep things safe? You know, I don’t think AI and humans are ever going to be held to the same standards. Even if an AI system statistically affects or harms fewer people than a human operator would, they’re still going to be held to a different kind of standard.
Sid Sen: And so, I think it’s ultra, ultra-important, once we make this kind of change, to try to keep things safe.
Host: Well, let’s talk in turn now about the three research agendas, or roadmaps we might call them, that you’re following on the road to optimal decisions online. So, the first one is something you call harvesting randomness. And this one takes on the minimally disruptive challenge. So why don’t you start by giving us a level set on the problem, and then tell us all about harvesting randomness and how the idea of counterfactual evaluation is gaining traction in reasoning about systems.
Sid Sen: This project kind of came out of a simple observation. One thing I observed was that, whenever we want to try something new, or figure out how something new is going to work, we often have to deploy it and try it out. This is what we call an A/B test. And people use it all over the place. They use it in medical trials, they use it online. At any given time, when you’re using, you know, Bing or Google or any service, you’re seeing something different than what I’m seeing because they’re continuously trying out different things by deploying these live A/B tests. And the reason that works is because they randomize what people see. And because they randomize what people see, they can evaluate how good different alternative solutions are, and avoid any kind of biases or confounding issues that might come into place. And I realized that randomness is all over the place. We randomize stuff all the time, like when we load balance requests, when we decide where to replicate or put data, when we decide what to do if too many people are trying to do the same thing at the same time, we make these randomized decisions all the time. And I thought, what if I could, you know, harvest all that randomness to do all of these experiments without actually having to do the experiments? But that’s, in essence, what counterfactual evaluation or counterfactual reasoning is: what if I had done this? Can I answer that question without actually running a big A/B test and exposing the world to it? Because that’s risky and that’s costly.
Sid Sen: And so, that was the theory behind this project: I’m going to go around to all these systems that are already doing all this randomized stuff and, using a slightly different set of statistical techniques, I can maybe answer questions about what would have happened if I tried this other policy. And that way, I could, you know, without disrupting or changing the system, reason about what would have happened if I had changed things. And I think it turns out that it’s not as easy to do that as we thought it would be. And the reason is, all these systems that are randomized are very complicated and stateful systems. We don’t really have good enough practical techniques to understand them. But what we have found is that, in the systems that surround us, there’s a lot of information that’s similar to the kind of information you would get from randomization that’s already just sitting there. And I call that implicit feedback, and this is something that we’re trying to leverage. It’s kind of a very simple concept in systems. But when you wait for something to happen, and we do that all the time in systems, you know, we’re waiting for some event to happen, if you wait for five minutes, it turns out, you find out what would have happened if you waited for four minutes or three minutes or two minutes. Or if you provide a resource to someone, you give them ten cores, or ten virtual machines, you might find out what would have happened if you gave them nine or eight or seven, because you gave them more than those numbers. And so because of this kind of implicit structure, we’re able to look at a lot of the decisions that existing systems are making now, and we’re able to say, oh look, here are situations where they’re actually making decisions where we could answer the counterfactual question of what would have happened if they made smaller decisions?
Can we leverage that feedback to learn more from that existing data and then, using that information, come up with better policies for making those decisions? So, it’s not the same as, like, the natural randomness that we started out initially envisioning we could harvest, but it’s similar because it gives us the same kind of information. And now that we have a technique for harvesting this extra feedback, we can basically take this feedback, you know, use probabilities to weight things in the right way so that we get what I call an unbiased view of the world, and, you know, this way we can kind of find a better policy to put in there without actually changing the system that much at all.
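The reweighting Sid describes – using probabilities to get an “unbiased view of the world” – is commonly done with an inverse propensity scoring (IPS) estimator. Here is a minimal sketch in Python; the log format and function names are illustrative assumptions, not the project’s actual code.

```python
def ips_estimate(logs, target_policy):
    """Estimate the average reward a target policy *would* have
    received, using only data logged under a randomized policy.

    Each log entry is (context, action, reward, logging_prob), where
    logging_prob is the probability the logging policy assigned to
    the action it actually took.
    """
    total = 0.0
    for context, action, reward, logging_prob in logs:
        # Count this sample only if the target policy would have taken
        # the same action, scaled up by 1/logging_prob to correct for
        # how often the logger happened to explore that action.
        if target_policy(context) == action:
            total += reward / logging_prob
    return total / len(logs)
```

Because matching samples are up-weighted by the inverse of their logging probability, the estimate is unbiased over the logging policy’s randomization, which is exactly why harvested randomness lets you evaluate a new policy without deploying it.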
Host: So, what kind of results are you getting with this? Are you using it or deploying it in places where it would matter?
Sid Sen: Yeah. So one of the teams that we worked with for a while is the team that deals with, you know, unhealthy machines in Azure. What happens when you can’t contact a machine? It’s unresponsive. What do you do? How long do you wait for it? And how long should I wait for that machine to come back alive, or should I just go ahead and reboot it, recycle it and put it back into the pipeline or the process of what we call our fabric controller that takes machines and recycles them and gets them ready for production use? So that’s a decision where we are able to, kind of, harvest existing data that they have to find a better policy for choosing how long to wait. Another example is, whenever you ask Azure for a certain number of virtual machines, you say, hey, give me a hundred machines, they’ll actually allocate more than a hundred because they’re not sure if the hundred will succeed and how long it will take for them to deliver those hundred to you. So, they might say, allocate a hundred and twenty, and they’ll just give you the first hundred that get started up the fastest. And so here’s an opportunity where we can harvest that information because of this over-allocation, that implicit information that’s in there, to do something that’s much more optimal and save on the amount of extra, unnecessary work that they have to do when they’re spinning up these large clusters of virtual machines.
Host: Let’s say you wait for ten minutes and something happens, but you also know what happened at nine minutes, at eight minutes, at seven minutes, right? So, this is how you were explaining it. Is it always going to happen then at ten minutes? I know that I’ll wait for nine… if I just wait one more minute, I’ll be fine, instead of having to go to the expense of rebooting?
Sid Sen: No. Yeah, I think that’s a great point. We might not know. And one thing that’s unique about the feedback we’re getting is, it kind of depends on what happens in the real world. I might wait for five minutes and, you know, nothing happened. In that situation, I only learn what would have happened if I waited less than that amount of time, but I don’t learn anything about what would have happened if I waited longer than that amount of time. Whereas, sometimes, if I wait for five minutes and the machine comes up in two minutes. And now I know everything. I know all the information that would have happened if I had waited for one minute or two minutes or ten minutes or nine minutes or whatever it is. So, there’s kind of an asymmetry in the kind of feedback you get back. And this is a place where existing counterfactual techniques fall short. They are not able to deal with this kind of – the different amount of feedback you get which depends on the outcome of what actually plays out in the real world. And so, this is where we have to do most of our innovation on the, you know, the statistical and reinforcement learning side to develop the counterfactual techniques that work for this scenario.
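The asymmetry Sid describes can be made concrete with a small sketch: one logged wait decision reveals the outcome of some counterfactual waits but not others, depending on what actually happened. The function and its arguments are assumptions for illustration only.

```python
def counterfactual_waits(chosen_wait, recovery_time, candidate_waits):
    """Return the candidate wait times whose outcome we can infer
    from a single logged decision.

    chosen_wait: how long the system actually waited (minutes).
    recovery_time: when the machine came back, or None if it never
    came back within the chosen wait.
    """
    outcomes = {}
    if recovery_time is not None and recovery_time <= chosen_wait:
        # The machine recovered while we were watching: we now know the
        # outcome for EVERY candidate wait, shorter or longer.
        for w in candidate_waits:
            outcomes[w] = "recovered" if recovery_time <= w else "timed out"
    else:
        # It never recovered within our wait: we only learn about waits
        # no longer than the one we actually tried.
        for w in candidate_waits:
            if w <= chosen_wait:
                outcomes[w] = "timed out"
    return outcomes
```

Standard counterfactual estimators assume a fixed amount of feedback per decision; here the feedback set itself depends on the real-world outcome, which is the gap the statistical innovation has to fill.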
Sid Sen: And you know, I should say that this work is something that we see come up in many different system decisions across our cloud infrastructure. Any time we’re allocating any resource, whether that resource is time, machines, memory, cores, any time we’re doing that, there’s this implicit information that we should be harvesting. It’s like a crop that we’re not harvesting and there’s an opportunity cost to not doing that. And so part of the goal of this work is to show people that there is that opportunity there, but also to kind of show them how we can leverage this in a principled way so that our system designers know how they can design their systems in a way that allows them to continuously evolve them and optimize them. And a lot of what this harvesting randomness work is showing you how to do is, how do I, in a statistically correct way, collect data and improve my system, collect more data and improve my system, and keep doing this? Because the way we’re doing it now is not actually, you know, fully correct.
Host: No one can ever see the grin that I have on my face. Also, on a podcast, people can’t see how things are spelled, and in the case of your second research agenda, it actually matters. You’re developing what you call HAIbrid algorithms, here we come on the HAIbrid. And that’s spelled H-A-I-b-r-i-d. So, what are HAIbrid algorithms, Sid, and how do they move the ball forward in the world of data structure research?
Sid Sen: So, I always get a snicker or two when people see the spelling of this, and I don’t know if that’s because it’s lame or if it’s because they like it, but, it’s spelled H-A-I because the H stands for human and the AI stands for AI, for artificial intelligence. And so, the idea of these algorithms is to basically find the right synergy between human and AI solutions, to find the right balance, the right way to use AI. And I do strongly believe in that because I feel like we’re a little too quick to apply AI, especially when it comes to deep neural networks, just because of how effective they are when we throw a lot of data and computation at them. So much so, that we might even be overlooking what the best human baseline is. We might not even be comparing against the best human baseline. So, a lot of what this agenda is trying to do is trying to push the human limit as far as it can and then kind of integrate AI where it makes sense to do that.
Sid Sen: And we’re starting very classical. We’re starting as simple as it gets. We’re looking at data structures. You know, there was a lot of buzz a couple of years ago when folks at Google and MIT were able to replace classical data structures with very simple neural networks. A lot of what data structures are used for is to organize and store data. And so, oftentimes you use them by giving them some kind of key and telling them, hey, can you look up this piece of information? Find it for me and give it back to me. And they kind of observed that the key, and its position in the data, look like training data for a machine learning model. So why don’t we just shove it into a machine learning model and make it answer the question for us? And you know, they showed that if the data set is fixed, if it doesn’t change, then you can train this kind of layer of neural network to answer that question for you and they can do it way faster, using way less memory than, let’s say, a classical structure like a B-tree. The problem with a lot of that was that the human baselines they were comparing against were not nearly the best baselines you would use. You wouldn’t use this big fat B-tree to find something in a sorted data set that doesn’t change. You would use something very simple, like maybe a binary search or some multi-way search. That’s something that is way more efficient and doesn’t require much space at all or much time. And so, when I saw this work, I thought about every human baseline they were comparing against and I felt bad for those human baselines. I said, oh, that’s not fair! These human baselines weren’t designed for that, they were designed for something much more. They were designed for these dynamic changing workloads and data sets and that’s why they are all bloated and, you know, inefficient just because they are keeping space so that they can accommodate future stuff, right? They are not designed for this particular thing you’re looking at. 
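The “best human baseline” Sid has in mind for a static, sorted data set is nothing fancier than binary search: logarithmic time and no extra space beyond the data itself, which is the lean comparison point a learned index should have to beat.

```python
def binary_search(sorted_keys, key):
    """Locate key in a static sorted array in O(log n) time with no
    auxiliary structure at all -- unlike a B-tree, which pays in space
    and indirection for the ability to handle inserts and deletes."""
    lo, hi = 0, len(sorted_keys)
    while lo < hi:
        mid = (lo + hi) // 2
        if sorted_keys[mid] < key:
            lo = mid + 1
        else:
            hi = mid
    # Return the key's index, or -1 if it is not present.
    return lo if lo < len(sorted_keys) and sorted_keys[lo] == key else -1
```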
And so, we set out to kind of design what we thought was the best human data structure for this problem. We call it, a little bit presumptuously, we call it a universal index structure. And the idea of this universal index structure is that it’s going to basically give us the best performance all the time. More concretely, what that means is that, I’m going to look at what the workload looks like now and then I’m going to try to metamorphose myself into the data structure that’s perfect for that workload and then I’m going to give that to you. So, it sounds like an impossible thing, but you know, Gretchen, I think a lot of the work I do is impossible-sounding at a high level. And I kind of like that because one of the reasons I’ve been able to appreciate kind of my hybrid view of things is that I come up with a crazier idea or approach than most people would. So, something like universal data structure to me sounds like an impossibility. But then what you can do is you can say okay suppose I had access to some kind of oracle, some supreme being that gave me, for free, the things that I wanted. Then maybe I could solve this problem. So that’s kind of how I plan my research. I say okay, if someone told me what the current workload was, and if someone told me, for this workload, what is the best structure to use, and if someone told me, how do I go from what I have now to this best structure, and I got all these answers for free, well then I would solve my problem and I would just use it.
Sid Sen: I’d have my universal data structure right there. So now we go about trying to solve these oracles, each of which is hard. But you know, when it’s a hard oracle, you can break that down into sub-oracles and make a roadmap for solving that sub-oracle, right? And then the problems slowly get easier and easier until there are things that are tractable enough for you to actually solve. And then you can put the pieces back together and you have a solution to the higher-level problem.
Host: All right… So, how are you doing that?
Sid Sen: Right. How are we doing that? So, what we’re doing now is we’re taking a black box approach. We’re taking existing data structures without changing any of them. All the classical stuff that people use: hash tables, trees, skip lists, radix trees, and we’re saying, how can I move efficiently from one of those structures to another? So, what we’ve developed is an efficient transition mechanism that allows us to take data from one of these structures and gradually move it to another structure if we feel like that other structure is going to be better at handling the current workload. So, we use some ideas from machine learning to profile these data structures, to try to understand what regimes they’re good at. Okay? So, we are using ML a little bit here and there when we think it makes sense. So, we’re trying to understand the space of when these data structures are good. Once we understand that space, then we’re trying to come up with an efficient way to move from one of these data structures to the other. And so, the big innovation of this work is in coming up with that transition mechanism. It’s a way of sorting the data that you are working with and gradually moving the data from one structure to another in a way that you can do piecemeal where you don’t lose your work, where you don’t affect the correctness of the queries that you’re trying to respond to – people are going to be continuously asking your system to answer questions while you are doing all of this – and in a way that it makes it worthwhile. Like we try to make sure that this transition mechanism is efficient enough that we’re actually getting a benefit from it. And if we find that we’re just flopping around all over the place, if we keep moving from data structure A to B back to A, back to C, well, we check for that and if that’s happening we kind of back off and say no, let’s not do that. This is not worth it right now.
And so, this transition mechanism is kind of where most of the innovation has happened. It’s a simple transition mechanism because it just intelligently uses the existing APIs provided by these data structures to do this kind of transitioning between them. And later on in the future, I hope that we can open up the black box a little bit. And by understanding these data structures from the inside, maybe we can actually do something even more intelligent and more efficient.
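A piecemeal, black-box transition of the kind Sid outlines can be sketched like this: migrate keys in small batches through the structures’ public APIs, while queries stay correct mid-transition because every key lives in exactly one of the two structures. The class and batch policy are illustrative assumptions, not the universal index project’s actual mechanism.

```python
class TransitioningIndex:
    """Gradually migrate data from one black-box structure to another
    using only their public dict-style APIs, answering queries
    correctly throughout the transition."""

    def __init__(self, source, target, batch_size=2):
        self.source = source          # old structure (dict-like)
        self.target = target          # new structure we migrate into
        self.batch_size = batch_size  # keys moved per step (piecemeal)

    def step(self):
        """Move one small batch so the transition never blocks queries.
        Returns True once the migration is complete."""
        moved = 0
        for key in list(self.source.keys()):
            if moved >= self.batch_size:
                break
            self.target[key] = self.source.pop(key)
            moved += 1
        return len(self.source) == 0

    def get(self, key, default=None):
        # Correctness during transition: a key is in exactly one of
        # the two structures, so check both.
        if key in self.target:
            return self.target[key]
        return self.source.get(key, default)
```

A real implementation would also track whether transitions are paying for themselves and back off when the workload keeps flip-flopping, as described above.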
Host: Let’s talk about a specific example of that while we’re on the topic of HAIbrid algorithms. One example of some very current work you are doing involves using the game of chess as a model system for bridging the gap between what you call superhuman AI and human behavior. So, this is really interesting to me on several levels. Tell us about it.
Sid Sen: Yeah, so whereas the universal index structure was trying to say, let’s take a pause and not use ML just yet, let’s see how far I can push the human design and find a way to just, you know, switch between them and use all of them in some fluid way so that I can do as good a job as I can. And then the idea would be, how can we then take that strong human baseline and improve on that with AI. And I think the verdict is still out there, it’s still unclear to me which one is a stronger force there and how they’ll work together. In chess, it’s a little different because we actually know, today, that chess engines and AI are far stronger than any kind of human player can be. So, chess is an example of a task where AI has surpassed human play. And yet, we still continue to play it because it’s fun, right? A lot of times you’ll find that when AI does better than humans, it will just take over. It just takes over that job from whoever was doing it before, whether it was a human or a human-designed algorithm or heuristic. But chess is a game where that hasn’t really happened because we still play the game, we enjoy it so much. And so that’s why I think it’s an interesting case study. The problem with chess is that the way humans play, and the way engines play are very, very different. So, they’ve kind of diverged. And it’s not really fun to play the engine, you just lose. You just, you know, lose all the time. So now, interestingly, people are using the engines now to train themselves which is kind of an interesting…
Sid Sen: …situation because it’s almost like this subtle overtaking of AI. It’s like our neural networks, our brains, are now being exposed to answers that the engine is giving us without explanation. The engine just tells you, this is the best move here, right? It’s figured it out. It’s explored the game tree. It knows enough to tell you this is the best move and you are like, oh, okay, and then you try to reason, why is that the best move? Oh, I think I kind of get it. Okay. So, a lot of the top chess players today, you know, they spend a lot of time using the engine to train their own neural nets inside their brains. So, I actually think that our brains, our neural nets, they’re a product of everything we put in there. And right now, one of the things that they are getting in as input is answers from the engine. So that’s the subtle way that AI is seeping into your brain and into your life. But going back to what we did, I mean, we basically realized that there’s this gap between how the engines play and how humans play, and when there’s this gap, maybe the AI engine should be helping us get better, right? But right now, we don’t have that kind of engine. These engines just tell us what the best move is. They don’t have what I would call a teaching ability. And so, what we’re trying to do in this project is trying to bridge that gap. We’re trying to come up with an engine that can teach humans at different levels, that can understand how a weak player plays, understand how a strong player plays, and try to suggest things that they could do at different levels. So, we looked at all the existing chess engines out there, ones that are not based on AI, and ones that are based on AI.
And we found that these engines, if you try to use them to predict what humans will do at different levels, they either predict kind of uniformly well across all of the humans, or their predictions get better as the humans get better, which means that none of them really are understanding what different humans at different levels play like.
Sid Sen: So what we did was we kind of took this neural network framework that’s underneath it, and we repurposed it to try to predict what humans would do and we were able to develop engines that are good at predicting human moves at every different level of play. So this is, I think, a small building block in getting at this idea of a real, like, AI teacher, someone who can sit with you, understand where you’re at, and then, from there, maybe suggest things that you might try, that are appropriate for your level.
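The evaluation Sid describes boils down to move-matching accuracy bucketed by player strength: an engine that “understands” players at every level should predict human moves well in every rating bucket, not just the strongest one. A minimal sketch, with the record format and names assumed for illustration:

```python
def move_match_accuracy(records, predictor):
    """Compute per-rating-bucket agreement between a move predictor
    and the moves humans actually played.

    Each record is (rating_bucket, position, human_move); predictor
    maps a position to a predicted move.
    """
    hits, totals = {}, {}
    for bucket, position, human_move in records:
        totals[bucket] = totals.get(bucket, 0) + 1
        if predictor(position) == human_move:
            hits[bucket] = hits.get(bucket, 0) + 1
    # Accuracy per bucket; a flat or strength-skewed profile is the
    # failure mode Sid describes in existing engines.
    return {b: hits.get(b, 0) / totals[b] for b in totals}
```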
Host: Let’s talk about your third research agenda and it’s all about safeguards. The metaphors abound: training wheels, bumpers on a bowling alley, a parent. Explain the problem this research addresses, in light of the reinforcement learning literature to date, and then tell us what you’re doing in this space to ensure that the RL policies that my AI proposes are actually safe and reliable.
Sid Sen: Yeah, so safety is something that has come up in all of the things we’ve talked about. It comes up repeatedly in everything we’ve done, in the harvesting randomness agenda. Um, you know, after harvesting information and coming up with a better policy, every time we try to deploy something, with a product team or group, they always have some safeguard in place, some guardrail in place. If this happens, page me, or if this happens, shut this off, or things like that. It comes up even in the universal data structure we’ve talked about. What happens when I keep flip-flopping around these different data structures and I’m wasting all this time and my performance is going down? Well, maybe I need to put a brake on things. In the chess work as well it happens because one of the things that the chess teacher is trying to do is prevent the human from walking into a trap or walking down a path of moves that they can’t handle because they are not strong enough yet as a player to handle it. And so, you can reason about safe moves versus unsafe moves. And so, because I just saw this cropping up all over the place, I realized that it’s time to take a step back and reason about this and formalize this. So, this is something that systems people do which, I think they are good at doing, is they see people doing all kinds of ad hoc stuff and they say, oh, this is an opportunity here to come up with a new abstraction. And so, safeguards are that new abstraction. I think the sign of a good abstraction is, it’s something that, like, everyone is already using it, but they don’t know it. They don’t give it a name. And so, if everyone is already doing something like this, in their own ad hoc ways, what we’re trying to do is extract this component – we’re calling it a safeguard – and we’re trying to say, what is its role? 
So, to me a safeguard is something that kind of sits alongside any artificially intelligent system that you have, treats it like a black box, and it protects it from making bad decisions by occasionally overriding its decisions when it thinks that they are about to violate some kind of safety specification. So, what is a safety specification? Well it can be anything. It can be a performance guarantee that you want. It can be a fairness guarantee. Maybe it can even be a privacy guarantee. And we haven’t explored all of the realms. Right now, we’ve been focusing on performance guarantees. And I think the point is that, if you have this component sitting outside the AI system, it doesn’t necessarily need to understand what that AI system is doing. It just needs to observe what its decisions are and as long as it has an idea, like the safety specification, it can say, oh, I think what you are going to do now is going to violate the safety spec, so I’m going to override it. But if I see in the future that you are doing a better job, or you are within the safety realm, then I’m going to back off and I’m not going to change your decisions. So, the safeguard is like this living, breathing component that itself might use artificial intelligence and it can use AI to adapt. It can say, oh, I see that the system is doing a good job, okay I’m going to back off. I don’t need to provide that much of a safety buffer. But the moment I see that the system is not doing a good job, then, maybe I need to put the clamp down and the safeguard will say, okay, I need a big safety buffer around you. And so that’s why I like to think of this analogy of parenting, because I think that that’s what we do with kids a lot. In the beginning, you know, you childproof your house, you put training wheels on bikes and you constrain and restrict a lot of what they can do and then slowly you observe that now they are getting bigger. 
They are falling down less, they’re able to handle corners and edges more and you start removing the bumpers. You might start raising the training wheels and things like that. But maybe you go to a new house or a new environment or you’re on vacation and then maybe you need to clamp down a little bit more, right? Because you realize that, oh, they’re in a new environment now, and they might start hurting themselves again. And so, that’s kind of the inspiration behind this idea of a safeguard that adapts to the actual system.
Host: Okay. So, I want to drill in again on the technical side of this because I get the metaphor, but how are you doing that, technically?
Sid Sen: Yeah. So, what we’re doing now is we’re taking some example systems from the previous work we talked about and we’re hand-coding safeguards based on the safety specs of those systems. Someone might say, well, how do you even know what a safety spec should be? What we found is that people usually know what the safety spec is. Most of the teams we work with, they usually know what things they check for, what things they monitor that trigger actions to happen, like guardrails that they have in place. So, most of the teams we’ve talked to, they check for, like, six different things, and if those six different things exceed these thresholds, they do some big action. So, we can usually derive the safety spec for those systems by talking to the product team and looking at what they do right now. Once we get that, we’re kind of hand-coding and designing a safeguard and we’re using tools from program verification to prove that the safeguard that we code up satisfies what we call an inductive safety invariant, which means that it satisfies some safety property and every action that that safeguard takes will always continue to be safe. So as long as you’re in the safe zone, no matter what you do, as long as the safeguard is active and in place, you will stay in the safe zone. And so, we can use tools from program verification to write these safeguards, prove that they satisfy the safety invariant, then what we do is we add a buffer to that safeguard. Think of the analogy of, like, adding more padding to the bumpers on the corners of tables. Or putting the training wheels a little lower. So, I can use the same kind of verification approach to verify the safety of the safeguard plus some kind of buffer. And then what I’ll do is, I’ll use AI to adapt that buffer depending on how well the system is doing. So now, I’ve guaranteed to you that everything is going to be safe, even if you have this kind of buffer. I’m like okay, great.
And now I can shrink and grow this buffer. You’re doing well? The buffer will be really small, which is good for a lot of systems because if you keep that small buffer, it allows systems to be aggressive. And when you’re aggressive, you can optimize better. You can use more resources. You can push the systems to their limits a little more, which is good for you. But if you’re in a situation where you’re not doing well or there’s some uncertainty or the environment has changed, then we kind of increase that safety buffer. And the whole time, you’re still guaranteed that everything is going to be correct. So that’s kind of what we have now. What I really want to do, again, because no project is worth it if it’s not impossible-sounding, is to automatically synthesize these safeguards. Like we’re coding them by hand now. I want to use fancy program synthesis magic to…
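To make the idea concrete for readers, here is a minimal Python sketch of the adaptive safeguard Sid describes. The class name, thresholds, and the specific clamping and adaptation rules are illustrative assumptions, not the team’s actual code: the safeguard treats the AI system as a black box, overrides any decision that would breach the safety spec plus a buffer, and grows or shrinks that buffer based on how the system is doing.

```python
class Safeguard:
    """Sits alongside a black-box AI system and overrides unsafe decisions.

    Illustrative sketch only. `safety_limit` stands in for the safety spec
    (e.g., a maximum resource level); `buffer` is the extra padding that
    adapts to observed behavior, like lowering or raising training wheels.
    """

    def __init__(self, safety_limit, buffer=10.0, min_buffer=1.0, max_buffer=20.0):
        self.safety_limit = safety_limit
        self.buffer = buffer
        self.min_buffer = min_buffer
        self.max_buffer = max_buffer

    def filter(self, proposed_decision):
        """Pass the AI's decision through unless it would breach the padded limit.

        Inductive safety invariant (informally): every value this method
        returns is <= safety_limit, so once the system is in the safe zone,
        it stays there as long as the safeguard is active.
        """
        padded_limit = self.safety_limit - self.buffer
        if proposed_decision > padded_limit:
            return padded_limit        # override: clamp back into the safe zone
        return proposed_decision       # within the safe zone: don't interfere

    def observe(self, violated_padding):
        """Adapt the buffer: shrink it while the system behaves well,
        grow it when the system keeps pushing into the padded region."""
        if violated_padding:
            self.buffer = min(self.buffer * 2.0, self.max_buffer)
        else:
            self.buffer = max(self.buffer * 0.9, self.min_buffer)
```

A small buffer lets the system be aggressive; a large one trades performance for caution, but either way the hard limit is never crossed, which mirrors the “verified safeguard plus adaptable buffer” split Sid describes.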
Host: I was just going to ask you if that was next.
Sid Sen: Yeah. Yeah. So that’s, that would be ideal. Like, I didn’t want to have to sit and do these things by hand, I want to get a safety spec and some structured information about the application and I want to automatically synthesize the safeguard and then show you that it is… and prove that it’s correct. So, we’re working with an amazing colleague in the program synthesis and program verification world who’s actually the director of our MSR India lab, Sriram Rajamani.
Host: I had him on the podcast! He’s amazing!
Sid Sen: Oh, not only is he amazing, he’s maybe the nicest person I know… Like, in the world! He’s working with us on this and it’s a lot of fun. I love working on projects where I love working with the collaborators. In fact, I think I find that these days I tend to pick projects more based on the collaborators than the topic sometimes. But this is one where it’s both the topic and the collaborators are just a complete thrill and pleasure to work with. And so, we’re hoping that combining our techniques, I do systems and AI stuff and reinforcement learning, he understands program synthesis and verification. We have someone who understands causal inference and statistics. And so, with these three disciplines, we’re hoping that we can come up with a way to automatically synthesize these safeguards so that any system that is using AI and that has an idea of what safety means for them, can leverage the safeguard to come up with one.
Host: Well, speaking of collaboration, that’s a big deal at Microsoft Research and you’re doing some really interesting work with a couple of universities. And I don’t want to let you go before we talk at least briefly about two projects: one with NYU related to what systems folks call provenance, and another with Harvard trying to figure out how to delete private data from storage systems in order to comply with GDPR. Both fall under this idea of responsible AI which is a huge deal at Microsoft right now. So, without getting super granular though, because we have a couple more questions I want to cover with you, give us a Snapchat Story version of these two projects as well as how they came about real quick.
Sid Sen: Okay. Yeah, so with NYU, we’re working with two professors there, Professor Jinyang Li and Professor Aurojit Panda, and two amazing students, on what we’re calling ML provenance. So, the idea is can we find and explain the reasons behind a particular decision made by an ML model. And so, there’s been a lot of work in trying to do this. And I think what we’ve done that’s different is we formulated the problem in a different way. We’ve said, what if there were different sources of data that go into training this model? So, if that was the case, and you have all these different sources, you can look at a particular decision made by the machine learning model and say, okay, what were the sources that contributed to that decision? So, it turns out you can formulate a new machine learning problem that uses those sources as features and, as its label, the decision. And we can use statistical techniques to try to narrow down and zoom in on which of the sources actually caused that decision. So, this is going to be useful for doing things like detecting poisoning attacks which is a very common security problem people look at in machine learning models. With our technique, you can kind of find the data source that caused you to do that. So, the idea there is we can find the data source that caused that decision or maybe even find the actual data point in some cases that caused that decision.
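The formulation Sid sketches, sources as features, the model’s decision as the label, can be illustrated with a toy example. Everything below is a hypothetical stand-in for the actual statistical techniques: it scores each training source by how much the decision rate changes when that source is present versus absent.

```python
def source_influence(runs):
    """Estimate which training source drives a model decision.

    `runs` is a list of (sources_used, decision) pairs, where `sources_used`
    is the set of data sources a model variant was trained on and `decision`
    is True/False for the decision of interest. Each source acts as a binary
    feature and the decision is the label; the score for a source is the
    difference in decision rate with vs. without it (illustrative only).
    """
    all_sources = set().union(*(s for s, _ in runs))

    def rate(decisions):
        return sum(decisions) / len(decisions) if decisions else 0.0

    scores = {}
    for src in all_sources:
        with_src = [d for s, d in runs if src in s]
        without_src = [d for s, d in runs if src not in s]
        scores[src] = rate(with_src) - rate(without_src)
    return scores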
Sid Sen: And we can do this without training too many models, and… The coolest thing about it, I think, is again, this idea of harvesting existing work, is that I think we found a way that we can do all of this by just leveraging the existing training work that’s already being done to train these models. So that’s the NYU work. Um, the work that we’re doing with Harvard is with Professor James Mickens and he has a student we’re working with and there, what we’re trying to understand is this question of, can we really delete a user’s private data from a system? Let’s take a simple classical storage system as an example. Suppose all the person does is go in and say, insert this data item into your storage system, like what happens? And what we found is kind of fascinating: for just one thing that you insert into a storage system, look at all the different things it touches. Like you have all this transient state created. You affect these global parameters and state, and then you put the actual data into a data structure. That’s the easy part, right?
Sid Sen: But there’s all these other kind of side effects that happen. Maybe some of those statistics that are used, are used to train some kind of machine learning model. I mean, there’s all kinds of things that can happen.
Sid Sen: So, we’re trying to basically track all that and, in systems we have this notion of taint tracking which is, like, imagine you put a color on the data, and you see where the color spreads. So, we have some techniques for doing that in systems already. But what we’re trying to understand is, how do I measure, like, how much I care about all the things I’ve touched? Right? Like these taint tracking things, if you touch something, they are like, oh, it’s tainted. But you know, if I add one number of mine that’s a private number to like a thousand other numbers, do I really care about what I’ve done there? What we’re asking here in this work is also like, how do I reason about how much of my privacy is being leaked, which has connections to differential privacy, actually. How do I reason about how sensitive my input is to whatever state it’s affected and once I figure that out, and I decide okay, I don’t care about this stuff because it really hasn’t affected things enough, but I care about this stuff because I’ve affected it in a meaningful way, then how do I go about deleting that data? It’s not so clear how we can, in a generic way, allow the entire system to fully and completely delete what it needs to delete when it comes to that user’s private data. So, another work in progress.
Host: Well it’s about now that I always ask what could possibly go wrong, Sid. And I do this because I want to know that you have an eye to the risks as well as the rewards, inherent in research. So while you frame a lot of your work in terms of AI playing nicely with humans or collaborating with humans or helping humans, is there anything you can see down the line – or maybe are afraid you can’t see down the line, so that’s even worse – that gives you cause for concern or keeps you up at night? And if so, what are you doing about it?
Sid Sen: I think that AI solutions will always be held to a different standard than human operators or human solutions. I do worry about what happens when even a system that we deploy that has, let’s say, this kind of safeguard in place, if something goes wrong, how do you react to that? What kind of policies do you put in place to determine what to do in those situations? You know, what happens when one of these AI systems actually hurts a person? And this has happened in the past, right? But when it does happen, it’s interpreted very differently than if that accident or incident was caused by a human. In a lot of the applications we’ve looked at, if the safeguard gets violated a little bit here and there, it’s actually okay, and we actually can leverage that, and in the work we do, we do indeed leverage that. But what if it’s not okay? Right? What if it’s never okay for even the slightest violation to happen? How do we, in those situations, still learn, still do what we need to do, without the risk? So that’s something that does worry me because it makes me realize that there are some things where you just don’t want to replace a human. But I do believe that this kind of hybrid approach we talked about, I think it will ensure that both sides have an important role to play. I think what’s happening is that AI is showing us that we’re not playing the right role. And so, what we need to do is just kind of adjust the role. I’m not worried about AI replacing creativity and elegance. I mean a lot of people are worried about that. I think there’s just too much, like, elegance and beauty in the kinds of solutions humans come up with and it’s one of the reasons why I spend a lot of time learning from and using human solutions in all of the AI work that I do.
Host: I happen to know that you didn’t begin your career doing hybrid systems reinforcement learning research in New York City for MSR, so what’s your story, Sid? How did you get where you are today doing cutting-edge research in the city that never sleeps?
Sid Sen: That’s a good question. I grew up in the Philippines, but I had a lot of exposure to the US and other countries. I grew up in an international environment and I went to college at MIT in Cambridge, Massachusetts, and I remember seeing a student solve a problem that we had been assigned in a way that he wasn’t told how to solve it. And it kind of blew my mind, which is a little sad if you think about it. I was like, why did you do it that way when they told us to do it this way? You know, you would have figured it out if you did it that way, why did you do it that way? But that resulted in a new idea that led to a paper that he submitted. And it was, I think, at that point that I realized my professors are not just giving me problem sets, that the stuff that they’re teaching me is stuff they invented and that we have this ability to kind of innovate new ways of doing things, even new ways of doing existing things and I had not really realized that or appreciated that until that point. All this time I’d been like, you know, thinking of my professors as just people who gave me problem sets, but now when I go back and look at them, I realize that I was taught by these amazing superstars that I kind of took for granted. And so, after I left college, I went to Microsoft to work in Windows Server actually for three years. I worked in a product group as a developer, but I always ended up going back to thinking about the algorithms and the ideas behind what we were doing. And, you know, I was fortunate enough to have a good boss that let me work on researchy-type things. So, I would actually visit MSR and talk to MSR people, as an engineer from a product team, and you know, ask them questions and I always had this, you know, respect for them and I always put them on a pedestal in my mind. I think this kind of inspired a more, kind of, creative pursuit and so that’s why I went back to grad school.
And I think the reason why I ended up at MSR after doing my PhD is because MSR was the one place where they really appreciated my interdisciplinary, and slightly weird, background. Right? The fact that I did some systems stuff, the fact that I did some theory stuff, the fact that I worked in the product group for three years, like they actually appreciated a lot of those things. And I felt like this is a place where all that stuff will be viewed as a strength rather than, oh, you’re kind of good at these things, but are you really good at one of these things more than the others? You know, that kind of thing, which is something that does come up as an issue in academia and even in our labs, we struggle with it when hiring people who are at the boundary of disciplines. Who are cross-cutting… Because it’s not so easy to evaluate them. How do you compare such a person to another candidate who is an expert in one area, right? How do we put a value on this kind of interdisciplinary style versus this deeper, more siloed style of research? So, it’s not an easy question. It’s been a recurring theme in my life, I would say.
Host: Sid, what’s one interesting thing that we don’t know about you that has maybe impacted your life or career? And even if it didn’t impact anything in your life or career, maybe it’s just something interesting about you that people might not suspect?
Sid Sen: So, I was not a very typical Indian kid growing up. I grew up in the Philippines and partly in India and I spent a lot of time doing, like, hip-hop and break-dancing, things that you know, drove my parents a little crazy I would say. They would let me do it as long as my grades didn’t suffer. When I came to MIT, I saw these people doing a style of dance where they didn’t have to prepare anything. It was just like… do you know how square-dancing works?
Sid Sen: Um, right? You just follow a caller who calls all of the moves and, you know, I joined that dance group and I eventually became the president of that group and that’s actually where I met my wife. She was a PhD student at the time.
Host: In a square-dancing group?
Sid Sen: No. It was a salsa version of that kind of dance. So, it’s called Casino Rueda. It’s like square-dancing, but it’s a caller-based dance where the caller calls all the moves. So, I would call these moves with hand signals and all that, and everyone does it and makes these beautiful formations and patterns, but it’s all salsa. So that was a very important, kind of, part of my time at MIT. And I do appreciate the university a lot because it gave me the ability to have a balanced life there. So that’s something kind of non-work related that people don’t usually expect. I guess something work-related that people may not know so much about me is that I don’t really like technology! I’m not a very good user or adopter of technology. I’m always kind of behind. In fact, I think I use it when it becomes embarrassing that I don’t know about it, and this has happened repeatedly in the past. So, I do a lot of things because I need to for my work, but I think this actually works to my advantage. I don’t like technology… I don’t like, like, writing thousands and thousands of lines of code. I like thinking more about how to do things minimally. Finding the simplest and most elegant route is really important to me. One thing that I have a big passion for is teaching. If I’m honest with myself, I probably should be in a university because I love working with students and mentoring them. I actually feel that might be my greatest strength compared to the other things that I do. And one of the things you need to be able to do well, if you want to teach a student, is explain something in the kind of simplest way. And of course, it’s a lot easier to explain something in a simple way if that thing is simple to begin with, right? Not only is it easier to explain, it’s easier to code up. It’s usually easier to program. And that means that it’s less likely to have bugs and other issues in it. So, there’s a lot of value to simplicity and it’s something I think about all the time.
It really has permeated everything that I do.
Host: Well, it’s time to predict the future, or at least dream about it. So, if you’re wildly successful, what will you be known for at the end of your career? In other words, how do you hope your research will have made an impact, and what will we be able to do at that point that we wouldn’t have been able to do before?
Sid Sen: Wow, that’s a good one. That’s a tough one to answer. Let me tell you two things that I think would make me happy if, at the end of my career, these things happened. If our world is run by AI systems that operate in a harmonious way with humans, whose safety we are so assured of that we take it for granted, if a part of that can be attributed to the work that I’ve done, that would make me happy. So that is one thing. The other thing, though, which may even be more important to me, is if I’m remembered as someone who was, you know, a good teacher, a good mentor. So if, for example, my ideas are being taught to undergraduates or even better, if they are being taught to high school students, I think the further down you go, the more fundamental and the more basic, the more simple the stuff is, to me that’s, you know, a sign of a greater impact on education. So, I’ve had a bit of a taste of that. When I was doing my PhD, I did some work on very classical data structures and came up with some new ones. And those are being taught to undergraduates now and they are in textbooks now. And that kind of thing makes me really happy. You know I hope that, by the end of my career, that big chunks of my work – or even parts of it – will be taught to undergraduates or even better, taught to high school students. To me, that would make me super happy.
Host: Sid Sen, thank you for joining us today on the podcast. It’s been terrific.
Sid Sen: Thank you, Gretchen. I really appreciate it.
To learn more about Dr. Siddhartha Sen, and how researchers are working to optimize decision making in real world settings, visit Microsoft.com/research