Episode 15, March 7, 2018
There’s a big gap between memory and storage, and Dr. Anirudh Badam, of the Systems Research Group at Microsoft Research, wants to close it. With projects like Navamem, which explores how systems can get faster and better by adopting new memory technologies, and HashCache, which brings with it the promise of storage for the next billion, he just might do it.
Today, Dr. Badam discusses the historic trade-offs between volatile and non-volatile memory, shares how software-defined batteries are changing the power-supply landscape, talks about how his research is aiming for the trifecta of speed, cost and capacity in new memory technologies, and reminds us, once again, how one good high school physics teacher can inspire the next generation of scientific discovery.
Anirudh Badam: I think that whole notion of being tech savvy has to go away. Nobody has to be tech savvy. Tech has to be people savvy. That’s how I want to think about it.
Host: You’re listening to the Microsoft Research podcast. A show that brings you closer to the cutting-edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.
Host: There’s a big gap between memory and storage, and Dr. Anirudh Badam, of the Systems Research Group at Microsoft Research, wants to close it. With projects like Navamem, which explores how systems can get faster and better by adopting new memory technologies, and HashCache, which brings with it the promise of storage for the next billion, he just might do it.
Today, Dr. Badam discusses the historic trade-offs between volatile and non-volatile memory, shares how software-defined batteries are changing the power-supply landscape, talks about how his research is aiming for the trifecta of speed, cost and capacity in new memory technologies, and reminds us, once again, how one good high school physics teacher can inspire the next generation of scientific discovery.
That and much more on this episode of the Microsoft Research podcast.
Host: Anirudh Badam, welcome to our podcast. Let’s start with a general 10,000-foot view of your group at Microsoft Research. What do Systems and Networking researchers do when they get up in the morning and what problems are you trying to solve?
Anirudh Badam: So, Systems and Networking research is about thinking of new primitives, all the way from hardware, you know, down to the operating system, maybe even at the application level, trying to help applications achieve what they want. At a high level, we have different components in application software that want to communicate with each other, trying to get access to the same set of resources. Resources such as memory are ultimately very limited. So, if you want to share these things – and we want to share these things – across multiple customers, multiple applications, multiple virtual machines, we need a mechanism to enable that. And what we are essentially doing here is trying to get out of the way of applications as much as possible while doing these things, so that the application is thinking, “Yes, I’m the sole user of this system. I get the performance that is promised by raw hardware capabilities. That’s what I’m getting.” And systems research is essentially about adding new capabilities, in terms of adding new hardware, but at the same time making sure that each of these entities is thinking, “Hey, I have full control over this hardware, I’m secure while I’m running, and I’m also secure while I’m sharing these resources with others.”
Host: So, I would say maybe an analogy is, you have four siblings, each one thinks they are the favorite of mom and dad.
Anirudh Badam: Yes. Kind of like that.
Host: They don’t have to know they are not the favorite, they can just continue to work.
Anirudh Badam: Yes.
Host: What’s the gap between memory and storage and what are we doing about it?
Anirudh Badam: So, traditionally computer systems have been using this technology called Random Access Memory.
Anirudh Badam: RAM. And RAM has been serving as the main form of memory for maybe three decades now. Maybe more…
Host: And it’s short-term memory, right?
Anirudh Badam: Exactly. And it’s short-term primarily because, for RAM to remember something, it requires power. You have to continuously remind it, okay, this is a one, this is a one. And the process of reminding it actually consumes energy. And that’s the difference between, traditionally, what we call volatile memory and non-volatile memory, or storage. More traditionally, non-volatile memory has been called storage. So, unfortunately, the main memory, which has been volatile, in the sense that you need to continuously remind it what it is storing, requires power, so if you lose power, there’s a chance that you can actually lose your data. So, what application developers typically do when they want to store data for the longer term is create a copy or a snapshot of the data that you have in memory, dump it onto a solid-state disk, and wait for the solid-state disk to acknowledge it. The solid-state disk goes and programs a few of what we call NAND flash cells, which essentially remember, “Okay, I’m going to remember that this is a one,” and it sends an acknowledgement back to the operating system saying, “Hey, I have actually remembered what you asked me to remember, and even if you unplug me from the power, I’m going to continue to remember that.” And because such devices, which remember data for the longer term, tend to be slower, there’s a huge performance gap between memory and storage. So, that’s one of the gaps, the performance one: it takes you longer to store something more permanently. And the other gap is that the amount of memory, volume-wise – how many gigabytes can I actually fit in a given volume – is significantly different for these two technologies. For D-RAM, in a laptop like this, I can have maybe 8 or 16 gigabytes of D-RAM.
Host: And D-RAM being Dynamic RAM. Dynamic Random-Access Memory.
Anirudh Badam: Yes. So, on the other hand, if you look at the amount of SSD storage that you have in these machines, the chips look roughly the same size. Most of these laptops these days use something called an M.2 SSD, and the D-RAM that they also use looks about the same size.
Anirudh Badam: M.2 SSD. It’s a form factor. The M.2 form factor SSD and the kind of D-RAM that most of these laptops tend to use, they are about roughly the same size. But their capacities are dramatically different. The SSD would store hundreds of gigabytes. I’ve even seen some laptops these days come with a terabyte of SSD. But the same size D-RAM is actually, you know, just 32 gigabytes, or maybe 64 if you’re lucky. And that’s another gap between these two. This sort of translates into cost because it’s roughly the same amount of silicon that goes into these things for manufacturing these. So, cost is roughly the same, but the capacities are drastically different.
Host: So, why wouldn’t you just get the one that’s got way bigger capacity but cost the same?
Anirudh Badam: Good question. So, you would want to do that, essentially, but it would slow all of your processing down to SSD speeds – in the sense that, okay, I’m going to edit one line of my Word document, and it’s going to take noticeably longer. Every time you type a word into a Word document, you’re going to see that spinning wheel.
Host: Okay. Is there a third gap between storage and memory that – you said one and then two.
Anirudh Badam: So, capacity is one. There’s performance and there’s cost. If you look at the cost of those two chips, you’re going to pay roughly the same. And that’s really the kicker. Per gigabyte, you would probably pay about 10 to 15 dollars for D-RAM, and maybe 20 or 30 cents for NAND flash. And this, as I said, is primarily because a similar amount of R&D has gone into those process geometry developments. It’s just that, technologically speaking, there are only so many of these cells I can fit in a given area without actually heating it up too much.
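As a quick aside for readers, the cost gap he describes can be worked out directly from the prices quoted in the conversation. The midpoints below are illustrative figures taken from his rough 2018-era numbers, not exact market data:

```python
# Cost-per-gigabyte gap between D-RAM and NAND flash, using the rough
# prices quoted in the conversation (midpoints; illustrative only).
dram_per_gb = 12.5   # ~$10-15 per GB of D-RAM
nand_per_gb = 0.25   # ~$0.20-0.30 per GB of NAND flash

ratio = dram_per_gb / nand_per_gb
print(f"D-RAM costs ~{ratio:.0f}x more per gigabyte than NAND flash")
```

So a chip of the same physical size and similar manufacturing cost ends up holding on the order of fifty times less data when it is D-RAM, which is the capacity gap he calls the kicker.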
Host: You talked about an arms race between networking and storage…
Anirudh Badam: Yes.
Host: What do you mean by that and who’s winning?
Anirudh Badam: Yeah, so what has gone on over the last 30 or 40 years is, between the power that your CPU has, the power that your memory and storage have, and how fast your networks are, they have evolved at different rates. Back in the 90s, or, you know, the late 80s, through maybe 7 years ago, Intel was making tremendous improvements in CPU speed, using all these cool techniques like, you know, more caching, out-of-order execution, speculation, and so on. And because of these, we were seeing a tremendous increase in CPU speeds. So, if you go back to the 90s, most of the research was about, “How do I hit my disk less often?” So, there was a lot of research on caching, trying to, you know, pack as much of your data in D-RAM as possible so that you hit your hard drive as infrequently as possible.
Host: Because of the speed?
Anirudh Badam: Exactly. The moment you start hitting your hard drive, that CPU is underutilized; it’s not running as fast as it actually can. So, if you go back to the 90s, the CPU completely dominated what our storage devices were actually capable of. Likewise, you know, networks were really slow – people were stuck behind dial-ups. So, it was a similar line of research even in networks, where they were spending a lot of R&D on, “How do I cache as much of the web as possible?” That suddenly started changing right around the time when networks got significantly faster. There were investments in broadband everywhere. And the other side of the networking innovation went on not just in consumer Internet, but also in the data centers. So, when you actually access a web page, what happens in the data center is there are all these servers talking to each other to put together a web page for you. There’s an immense amount of communication that goes on between the servers of a data center, and even that communication was a significant bottleneck back then. I want to say maybe, you know, the early part of this century is when significant R&D investments were made in intra-data center networking. And as a result of that, today we see that our data centers are capable of networks that are 40 gigabits per second – 40-gig Ethernet, as we call it, is commodity now. When I say commodity, I mean that the cost of these technologies is low enough that we no longer have to say, oh, you know what, only the high-end banks and all these trading shops can afford it.
Host: Everyone can have it.
Anirudh Badam: Everyone can have it. So, now networks started getting significantly faster, and storage was, you know, still lagging behind. So, this is what happened in the data centers. People were saying, you know what, instead of fetching the data from my local hard drive – I have only, you know, 32 gigabytes of D-RAM in my own server, but networks are fast enough that I can access a hundred different servers, each of which has 32 gigabytes of its own data. So now I have scale. Because networks were getting faster, you could think of remote memory as, somehow, local memory. That’s what is happening in our data centers today. That’s when significant improvements happened in the storage world, which was when NAND flash-based SSDs started maturing out of research, and people started doing research on scaling them up, saying, “Okay, I have maybe a thousand cells today. How do I make them into a million cells or, you know, a billion cells?” And once they figured out how to do that, they became cheap enough that, “Hey, I suddenly have something that sits between D-RAM and hard drives.” Hard drives are probably a million to ten million times slower than D-RAM, and SSDs were 10,000 times slower than D-RAM. That suddenly meant, you know, fresh air. Finally, I don’t have to worry so much about hitting the hard drive. Maybe I can put some of my really important information on an SSD and not worry about my D-RAM being too small. So, this sort of brought storage back into that arms race, where it could tell the computer networking folks, “Hey, you know what? We don’t have to be paranoid about buying more storage at the rate I’m thinking of using my networks to go to more servers.”
Host: So, your research interest is system support for new memory technologies.
Anirudh Badam: Yes.
Host: This is straight from the website.
Anirudh Badam: Yes.
Host: And you say the field is exciting and rapidly evolving right now.
Anirudh Badam: Um-hm.
Host: So, how have computers and memory technologies changed recently so that researchers are thinking in new ways about how to support the changes?
Anirudh Badam: So, you start thinking about, how do we make faster storage practical today? There have been a lot of efforts to identify new materials and material science technologies that would help you bridge the three gaps I mentioned: maybe a cheaper version of D-RAM, maybe a faster SSD, maybe something that is both – something that is as cheap as SSD and as fast as D-RAM. In terms of this research, maybe 6 or 7 years ago, we were seeing material scientists hit a lot of roadblocks in scaling these technologies. They would look really good in the lab, but for various reasons they weren’t able to scale them. An example of this technology was phase change memory – a lot of researchers are still looking into it. It had scalability problems; it wouldn’t scale beyond a certain size. But it had those properties of, you know, it is storage, in the sense that it can remember things for the longer term. It was not quite as fast as D-RAM – maybe 10 times slower, 20 times slower in some instances, and maybe even 100 times slower for writes versus reads. So, we were getting there. But they weren’t, you know, increasing in capacity, and some prototypes were as expensive as D-RAM, if not more expensive. So, they were not practical. So, we were thinking, are there practical ways of getting faster memory, faster storage, or larger amounts of memory?
Host: Were you thinking technically, or in line with physics and chemistry?
Anirudh Badam: We were thinking more from systems approaches, thinking, you know, better architecture of the existing hardware components that we have, and laying them out better inside the software that we have, so that it feels more like what you want, rather than thinking more in terms of fundamental material science improvements.
Host: So, still using the same materials that you had, but making the technological use of the materials better.
Anirudh Badam: Yes.
Host: Okay. How is that going?
Anirudh Badam: Yeah, so this was 5 years ago, when I started work at Microsoft Research. I was thinking of this problem of, how do we get faster storage today without waiting for the material science breakthroughs? Right around the same time, I started working with a few colleagues here on this project of improving battery life for mobile devices. This project was being led by Ranveer Chandra; he was leading this group called Battery Life Improvements for Mobile Devices. It was fascinating for me. When I started looking at this field of improving battery life, I was reading papers about how researchers were getting 5% or 10% improvement in battery life, and that was considered significant, primarily because batteries – lithium-ion batteries – didn’t scale as fast as Moore’s Law would. They were scaling along a curve that was determined by improvements in chemical engineering: improvements in converting raw materials like lithium and packing them as densely as possible while, you know, not catching on fire, for instance.
Host: That’s important!
Anirudh Badam: Exactly. And those curves were really slow. I mean, they were making improvements of like 10% every year. So, just by modifying some piece of hardware or software that you have in a mobile phone, if you can get that additional 10%? It’s like you’ve beaten Moore’s Law. You know, you didn’t have to wait for those two years, and you have this competitive advantage over your competitor, saying, “Hey, I have the same size battery, but I can give you 10% extra battery life – instead of 8 hours, I can give you 10 hours, right?” So, this was the game. But this project had set a vision for itself which I found quite fascinating: they wanted to move from the paradigm of charging mobile devices once every day to once every week. So, I was like, man, that seems very challenging, you know. I want to help these guys get there if I can. It was almost like doing another PhD in battery technologies, understanding, you know, mobile energy issues and how to overcome them. And that’s when I got really interested in batteries. We had an intern coming from Berkeley who was doing a PhD in batteries. He was probably more my mentor than I was his, in terms of teaching me how batteries work, what the state of the art in batteries is, and so on. That’s when we saw that the cost of lithium-ion batteries had been falling significantly for the last 25 years. What we realized then was, back in the early 90s, there was this talk about using batteries to ensure that your D-RAM – which needs to be constantly reminded, “Hey, this is a one, this is a one” – can survive power outages from the grid, with the battery continuing to help D-RAM remember what it’s trying to remember.
So, I sort of married ideas from these two things that I was working on, saying, “Okay, why don’t we revisit that problem now that batteries are cheap, and convert some of the D-RAM that you have in your systems – maybe laptops, maybe mobile phones, maybe even in the data center – into faster storage?” In the sense that, for all practical purposes, the application is thinking, “Hey, this portion of D-RAM somehow magically remembers whatever I’m storing, in spite of power failures, in spite of the operating system crashing.” That’s how we started working on this idea of battery-backed D-RAM, and that’s a line of research we have continued, using more systems techniques to shrink the size of the battery that you require for ensuring the non-volatility of a gigabyte of data. So, that’s the line of research that we’ve been doing for the last five years.
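For readers who want a feel for why battery-backed D-RAM is practical, here is a back-of-the-envelope sketch of the battery sizing involved. The key observation is that the battery only has to power the system long enough to flush D-RAM to flash after a power failure. All the numbers below (flush bandwidth, power draw, phone-battery energy) are illustrative assumptions, not figures from the research:

```python
# Battery-backed D-RAM: on power failure, the battery keeps the system
# alive just long enough to flush the D-RAM contents to flash. So the
# required battery energy scales with capacity / flush bandwidth.
# The numbers used here are illustrative assumptions only.

def flush_energy_joules(dram_gb, flush_bw_gb_per_s, flush_power_watts):
    """Energy needed to flush dram_gb of D-RAM to flash on power loss."""
    flush_seconds = dram_gb / flush_bw_gb_per_s
    return flush_seconds * flush_power_watts

# Assume 1 GB of battery-backed D-RAM, a 1 GB/s flush path to an SSD,
# and 10 W drawn by the memory/storage subsystem during the flush:
energy = flush_energy_joules(dram_gb=1, flush_bw_gb_per_s=1.0, flush_power_watts=10.0)
print(f"{energy:.0f} J")  # 10 J -- a tiny sliver of a typical ~40,000 J phone battery
```

Under these assumptions, a few joules of reserve energy per gigabyte suffice, which is why shrinking the battery per gigabyte, as the research aims to do, makes the approach cheap.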
Host: And where are you with that? Because my 19-year-old daughter has to charge her phone like 4 times a day, not just once a day. A week would be like, WHAT?
Anirudh Badam: So, yeah, on that problem, we’ve made a significant amount of R&D progress in the last few years, especially with this technology called software-defined batteries. So, in the battery world, we read these articles, right? You know, hey, a laboratory in Europe has this battery that charges in 5 minutes. And then, you know, by the time we get to the end of the article, it would be something like, hey, it will be several years before it becomes practical.
Anirudh Badam: And what they are essentially trying to say is, the battery that we have today – the traditional lithium-ion battery that we use in our laptops – you can think of it as being reasonably good in all the metrics you care about, but not stellar in any of them. I can explain that further. The things that we care about in a battery are: how much am I paying for it, how much time does it take to charge, how many years does it last?
Host: And how long does it hold a charge?
Anirudh Badam: Correct. What is the capacity? How much energy can it actually hold? Right? So, we were thinking, you know, can we take all of these different kinds of batteries that are coming out and somehow combine them, using new software and hardware, to essentially get the benefits of all of these new batteries? Instead of having a 100% lithium-ion battery, maybe I can have 30% of this new battery that charges in 5 minutes, which means that you can get 30% of your charge in just a few minutes. But the moment you put batteries that look very different from each other into the same device, existing circuitry doesn’t know how to handle it.
Anirudh Badam: If you use any electronic or electrical device, you always see that it’s the same kind of battery that goes in. It’s four AA, or three AAA. It’s never, you know, hey, one AA battery, two AAA batteries and one D battery. That almost never happens.
Host: That is true. I’ve never seen that.
Anirudh Badam: Circuits can’t handle that, and neither can software, because it doesn’t know. If it’s suddenly getting power from these batteries, it doesn’t have enough voltage, and it doesn’t know how to, you know, pick between these batteries for different things. And that’s exactly what we did in software-defined batteries. We tied the batteries that you have to specific tasks that you want to achieve, and somehow masked, using novel hardware, the fact that these are actually different batteries.
Host: Are they smart batteries?
Anirudh Badam: The batteries themselves are traditional batteries, in the sense that there’s no circuitry in them. They’re dumb.
Host: But the system is smart.
Anirudh Badam: But the system is smart enough to identify that, “Hey, the power is coming from this battery right now, so I know exactly what to do to not confuse the guy upstream, who is expecting exactly this amount of power.”
Anirudh Badam: That’s what software-defined batteries is. And that’s helping us get you faster charging and more capacity, while not compromising on what you essentially think of as a mobile device.
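The routing idea he describes can be sketched in a few lines of code. The battery names, their traits, and the policy below are hypothetical illustrations of the concept, not the shipped Windows design:

```python
# Toy sketch of software-defined batteries: the batteries themselves
# are "dumb", but the system knows each pack's traits and routes each
# task to the best-suited one. Names, traits, and policy here are
# hypothetical examples of the concept.

BATTERIES = {
    "fast_charge":  {"charges_fast": True,  "capacity_wh": 10},
    "high_density": {"charges_fast": False, "capacity_wh": 30},
}

def battery_for(task):
    """Pick which battery serves a given task."""
    if task == "quick_top_up":        # user plugs in for just 5 minutes
        return "fast_charge"          # absorb charge as fast as possible
    return "high_density"             # steady drain comes from the dense pack

print(battery_for("quick_top_up"))    # fast_charge
print(battery_for("video_playback"))  # high_density
```

The point of the design is that this selection happens below the applications: hardware masks the heterogeneity, so everything upstream still sees what looks like one well-behaved battery.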
Host: How nascent is this thinking? I mean is it…
Anirudh Badam: So, we shipped some of the features of software-defined batteries in Windows RS2.
Anirudh Badam: So, that the operating system is itself ready to deal with multiple heterogeneous kinds of batteries that you have in your hardware. And the hardware itself, we’re helping several different OEMs and several different chip manufacturers to adopt our designs. That’s where we’re at right now.
Host: So is it kind of in a Beta stage or…
Anirudh Badam: I would say so. Yes.
Anirudh Badam: The software is in a stable state and ready. The hardware is evolving.
Host: You know, when I was in sales, I used to say, “Cheap, fast, good. Pick two. You can’t have all 3.”
Anirudh Badam: Exactly.
Anirudh Badam: That’s kind of exactly what happens with batteries. In this case, good is, you know, how much charge do you want it to hold? Fast is how fast you want it to charge. Cheap is, I don’t want to spend half of my mobile phone’s cost on the battery. That doesn’t make sense.
Host: But that’s great. So, the battery technology, that I don’t think I would have thought, hey, they’re working on batteries at Microsoft Research. I would think of it as a, you know, software – although it is, software, but you have limitations with physics and chemistry and all the things…
Anirudh Badam: It’s a great point that you bring up. On the software-defined battery project, we had two or three people with PhDs in chemistry, two or three with PhDs in electrical engineering and circuitry, and, you know, I was in operating systems. We had one person in mobile technologies, Ranveer Chandra. We had people across the entire stack, end-to-end. And it was made possible primarily because of the expertise in all these fields that we had, to think end-to-end about this problem.
Host: I’m hearing that over and over and over on these podcasts that people are not just software people here.
Anirudh Badam: Yes.
Host: It’s every discipline and they’re thinking broadly about problem-solving as opposed to just, hey, what’s the next technical thing we can do? So that’s really cool.
Anirudh Badam: Yes.
Host: Let’s talk about another project that you’ve been involved in. It’s called HashCache. And it received MIT Tech Review’s Top 10 Emerging Tech awards about 5 years ago.
Anirudh Badam: Yes.
Host: And, it’s been dubbed “storage for the next billion.”
Anirudh Badam: Right. So, this essentially goes back to the storage problems that we had in the 90s, before SSDs showed up – or just when SSDs were showing up and were really expensive. They were made for, you know, banks and trading houses, not really for commodity computing. And the problem that we had back then was, when we go to places where there isn’t high-speed networking available…
Host: So, places that lack infrastructure.
Anirudh Badam: Exactly. Places that lack infrastructure, places that lack a stable power supply, for instance. So, even if you have an internet connection, if you don’t have power coming in from the grid, then, you know, how do you provide what I want to call continuous access to information? This is necessary for both the education sector and other sectors in the developing world that are increasingly reliant on technology. So, the question that we had back then was, “How do we pack as much information as possible into laptops?” Don’t focus so much on enterprise workloads, but more on education workloads – heavier on browsing workloads and so on. So, what we wanted to do was, can we somehow solve this problem of intermittent internet connectivity or intermittent power availability by putting as much information as possible in the netbook itself, that students and teachers and educators have access to? One of the things that we were looking at was, you know, why not just take as much information as possible, dump it onto a hard drive, plug it into a netbook, and, you know, that’s your information retrieval system now? And we realized, yes, browsers were good at rendering content – they were very efficient – but they were not fast enough to process all that information from a hard drive. By the time you retrieve all this content from the hard drive, it’s going to be several tenths of seconds. And people would lose focus, lose attention.
Host: Over several tenths of seconds?
Anirudh Badam: Yes. Yes.
Host: How we have changed in our patience level…
Anirudh Badam: Exactly. So, we were thinking, you know what? We wanted to give a better experience, and we tried to understand why it was taking so much time. One of the reasons was that all of this information you wanted to retrieve was spread entirely across the hard drive. So, one of the things we wanted to do was, can we bring content that is similar as close together as possible? That is one piece of the research we did in HashCache. The other main part of the research was, if you have a billion things to remember, trying to put a finger on any one of those things is going to be very hard. You need to build an index. Traditionally, indices were not a huge hindrance to performance, primarily because they were small enough to fit in your main memory. But the moment you start trying to remember too many things, the index itself is large enough that you start going to the hard drive. So, the problem in HashCache was, I want to retrieve this piece of information, but the index that will help me retrieve it is itself on the hard drive. In HashCache, we were trying to solve this very problem: how do you reduce the size of indexes without reducing the performance of rendering these pages? So, what we did in HashCache was, instead of trying to precisely remember how a name maps onto a document, we remember it approximately. Instead of remembering all 100 bytes of the name, we would remember maybe only the first 10 bytes or so. And essentially, that allowed us to index two to three orders of magnitude more information for the same amount of D-RAM devoted to the index. Typical indexes back then would require maybe 100 gigabytes of D-RAM to index a terabyte of hard drive. We were doing it in a gigabyte of D-RAM.
So, it was two orders of magnitude improvement in how much information you could retrieve. And with this, we made a lot of deployments in Africa, and we made a lot of tie-ups with people in the education sector, especially the OLPC – One Laptop Per Child – project. We made it available for people to use inside the OLPC project. And that’s how, you know, it got the award.
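To make the approximate-indexing idea concrete, here is a minimal sketch in Python. The 4-byte hash width and the in-memory list standing in for the disk log are illustrative choices, not HashCache’s actual parameters; the key property shown is that a rare hash collision costs at most one wasted disk read, never a wrong answer, because the full name is verified on disk after the seek:

```python
import hashlib

# Sketch of HashCache-style approximate indexing: instead of keeping
# full URLs in D-RAM, keep only a short hash of each URL plus a disk
# offset. The full URL lives on disk next to the cached object and is
# verified after the (single) seek. Sizes here are illustrative.

class ApproxIndex:
    def __init__(self):
        self.table = {}   # 4-byte hash -> offset into the "disk" log
        self.disk = []    # stand-in for the on-disk log: (url, content)

    def _short_hash(self, url):
        # Keep only 4 bytes of the digest in memory, not the full URL.
        return hashlib.sha1(url.encode()).digest()[:4]

    def put(self, url, content):
        self.disk.append((url, content))
        self.table[self._short_hash(url)] = len(self.disk) - 1

    def get(self, url):
        offset = self.table.get(self._short_hash(url))
        if offset is None:
            return None                         # definitely not cached
        stored_url, content = self.disk[offset] # one disk "seek"
        # Verify on disk, so a hash collision can never return wrong data.
        return content if stored_url == url else None

idx = ApproxIndex()
idx.put("http://example.org/page", "<html>...</html>")
print(idx.get("http://example.org/page"))  # <html>...</html>
```

Storing 4 bytes instead of a full 100-byte name per entry is what buys the roughly two-orders-of-magnitude reduction in index memory he describes.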
Host: So, it’s been deployed in a couple of areas to see how it plays?
Anirudh Badam: Yes.
Host: And the hope is that you’ll be able to deploy it further in places and sectors that don’t have, maybe, the material resources to invest in these bigger, more expensive systems?
Anirudh Badam: Right.
Host: Tell us about Navamem, your memory and storage “cloning” project that attempts to bridge the gap between D-RAM and SSD.
Anirudh Badam: So, that’s the project which is about system support for memory and storage, like we’ve been doing. Storage and memory research in the data center is all about trying not to lose the data that you’re storing online. The photos that you’re storing online in Microsoft Azure or other cloud services – their goal is to make sure that they never lose your photographs. And the way they do this is by creating copies of your photographs and storing them on as many servers as possible. So, storage systems research is about creating copies as fast as you can. So, I thought, okay, you know what, I’m going to try that.
Host: Just unpack a little bit more about that.
Anirudh Badam: Yeah, so I was talking about the functionality gap between memory and storage, which is that memory is temporary. The power is gone, and you sort of lose the data that is there. You either need a battery or some other way of helping ensure that D-RAM continues to remember that it’s a one, it’s a one, it’s a one… Storage, on the other hand – the reason you’re storing this is because this is data that you actually care about. Right? And when you care about something, you want to make as many backups as possible, because what happens if there’s a permanent failure in this hard drive? So, what the data centers have done to deal with these problems of, you know, hard drive failures is say, “You know what? I’m going to put your data in three different hard drives and hope that not all three hard drives will go down at the same time. The probability of that happening is astronomically small.” It’s not just about creating one copy; it’s about this promise that you want to give to the customer: “I will never lose your data, regardless of which technology I’m storing it in.” So, how do I make this problem of storing something durably – this promise of not losing something – not be the bottleneck in the system? And especially in light of this new technology that we’re seeing, which is sort of a non-volatile memory, as we’re calling it. It’s sort of bridging these gaps between D-RAM and SSD. But at the same time, it’s not as fast as D-RAM and not as cheap as SSDs. So, it’s still a hierarchy. I still need to understand, who are the customers who really care about speed? I want to put them in D-RAM and battery-backed D-RAM. Who are the customers who are okay with a little bit of slowdown? I’m going to put them in this new technology. Who are the customers who really don’t care about speed? Maybe they’re just uploading photographs in the background on their phone. So, maybe I’m going to put them on the SSD.
Things like that.
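[The tiering idea Dr. Badam describes can be sketched in a few lines of code. This is only an illustrative sketch, not code from any Microsoft project; the tier names and the `choose_tier` function are made up for this example.]

```python
# Hypothetical sketch of the tiering decision described above: place data
# in battery-backed DRAM, new non-volatile memory (NVM), or SSD depending
# on how latency-sensitive the workload is.
from enum import Enum

class Tier(Enum):
    DRAM = "battery-backed DRAM"   # fastest, most expensive per byte
    NVM = "non-volatile memory"    # middle ground, e.g. 3D XPoint
    SSD = "flash SSD"              # cheapest, slowest of the three

def choose_tier(latency_critical: bool, tolerates_slowdown: bool) -> Tier:
    """Mirror the three customer classes from the interview: speed-critical,
    somewhat tolerant of slowdown, and background work like photo uploads."""
    if latency_critical:
        return Tier.DRAM
    if tolerates_slowdown:
        return Tier.NVM
    return Tier.SSD

print(choose_tier(True, False))    # speed-critical workload
print(choose_tier(False, True))    # tolerates a little slowdown
print(choose_tier(False, False))   # background photo upload
```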
Host: So, you’re really customizing your thinking for the kinds of people that have different kinds of needs. Because you can’t do all three, cheap, fast, good… Yet.
Anirudh Badam: Yes. Yet.
Host: You’re working on it.
Anirudh Badam: Yes. And when that “yet” happens, storage probably would’ve surpassed network and, you know, compute. And if it has not, there’s still more work for all of us to do.
Host: Tell me a little bit more about 3D XPoint.
Anirudh Badam: So, 3D XPoint is this new technology that Intel and Micron have come up with. It’s kind of a solid-state material that can actually remember information for longer periods of time without being reminded again and again. It still has that problem. You have to remind it maybe once every three months, once every six months, that it is storing a one or a zero. Once every three or six months is a lot less frequent than reminding D-RAM, which needs the reminder probably tens of times a second, this is a one, this is a one, and so on. So, the amount of energy and power it consumes to actually represent a bit is significantly lower. What this means is that you can pack more bits in, because you’re not continuously pumping power into it, so it can dissipate that energy faster. But at the same time, it’s still nowhere close to SSDs in terms of capacity. As I said, SSDs can have maybe, you know, like 2 or 3 or 4 terabytes. A 2-terabyte M.2 drive is about the size of a Snickers bar, but not as thick. It’s probably as thin as a Fruit Roll-Up. So, that’s how big these SSDs are these days in servers and laptops. And in that same size you can probably pack 128 gigabytes of 3D XPoint and maybe 32 gigabytes of D-RAM. So, that’s kind of the gap that we have in terms of how much you can put in a given volume, and the gap is roughly, uh, logarithmic, in the sense that if D-RAM has a speed X, this guy has X by 10, and this guy has, maybe, X by 10,000. So, you can think of it as a spectrum, saying that, “OK, I’m getting slightly more memory, but it’s still not as fast as D-RAM. But fortunately, it’s not as slow as SSDs.”
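[The spectrum Dr. Badam sketches can be put in numbers. These are his illustrative ratios and capacities from the conversation, not measured benchmarks; the variable names are made up for this sketch.]

```python
# Relative speeds from the interview: if D-RAM runs at speed X,
# 3D XPoint is about X/10 and SSD about X/10,000.
DRAM_SPEED = 1.0                      # normalized: X
XPOINT_SPEED = DRAM_SPEED / 10        # roughly X / 10
SSD_SPEED = DRAM_SPEED / 10_000       # roughly X / 10,000

# Capacities he quotes for roughly the same M.2-sized volume.
capacity_gb = {"D-RAM": 32, "3D XPoint": 128, "SSD": 2048}

for tech, speed in [("D-RAM", DRAM_SPEED),
                    ("3D XPoint", XPOINT_SPEED),
                    ("SSD", SSD_SPEED)]:
    print(f"{tech:10s} relative speed {speed:.4f}, "
          f"{capacity_gb[tech]} GB in the same form factor")
```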
Host: Let’s talk about you for a minute. You’re young, and some people around here say you’re one to watch. What’s your academic background, and how did you end up doing research, and doing research at Microsoft Research?
Anirudh Badam: Yeah, so. I think my interest in science sort of goes back to this really good physics teacher that I had in my, I want to say 11th grade? This was the time when I was actually preparing for what we call the Joint Entrance Examination in India. The Joint Entrance Examination is an examination that tens of thousands of kids write every year to get into these top engineering schools in India, called the Indian Institutes of Technology.
Anirudh Badam: Yes. So, this lecturer, he had a really inspiring way of telling us what was going on in the physics world. That’s how I got interested in science, or, you know, cutting-edge science. And the transition from physics to computer science happened primarily because, towards the end of the physics course, he would show us more and more demos of how atoms behave, and of dynamic systems such as, you know, maybe a car moving, a cat chasing a mouse, or even, you know, black hole simulations. He would show us these in really complex computer programs and animations. That’s how I got interested in what he was using to make those animations. He would actually program these things using Java applets. These were programs that he was writing himself. I got really interested in that, you know? What is this programming?
Host: He was writing them just to demonstrate to you these concepts?
Anirudh Badam: Yes, yes, yeah.
Anirudh Badam: So, that’s when, you know, my love for physics morphed into a love for computer science, a fascination with computer science. I didn’t know what computer science was, you know, back then. We didn’t have a computer at our home. My first computer was when I was in my sophomore year. In the first year at IIT, I shared a computer with a lot of other students in a shared laboratory. The first personal computer that I had in my own dorm room was in my junior year.
Host: What year was that?
Anirudh Badam: That was, uh, 2003.
Anirudh Badam: And that was, you know, when, you know, I started discovering so many things about what he did back then using the programming languages for demonstrating all these cool things for us in physics. And I realized you know, programming is so cool.
Host: And the two kind of came together?
Anirudh Badam: Exactly. And that’s how, you know…
Host: The rest is history.
Anirudh Badam: Yes, and after, as I said, you know, the third and fourth year is when they gave us the choice of choosing. That’s when I was, oh, wow, now I can finally figure out what is it that I like in computer science, what is it that I want to do? And something that we did in operating systems really stuck with me. It was this thing about systems research, which, as I said, is to make the system that you’re building as transparent as possible. In the sense that you want multiple customers and multiple applications and multiple processes to use the same hardware. And somehow, while doing this, you want to vanish. You want to disappear. You don’t want to get in their way. Because yes, you’re helping them share. But in doing so, if you hinder their experience, it’s not a good thing, right? So, there are so many cool new algorithms for doing this, you know, adding value while, you know, not hindering their experience. It’s almost as if the destiny of systems is to vanish. It’s like, here’s this thing that wants to destroy itself. I have an operating system, but I don’t want to get in your way. A very amusing thing. There is something, and its whole purpose is to vanish. Computing should be something that is natural. It should not be a learning curve. It should be like interacting with other humans, interacting with other things in the real world. Just get out of the way. Yes, there’s a lot of technology behind all of this working; just get out of the way. Let people interact naturally with whatever innovation you have. We are seeing that in terms of natural user interfaces, all the wonderful research, you know, our colleagues are doing in Eric Horvitz’s team, around natural user interfaces, using voice for achieving complex tasks, using gestures for achieving complex tasks — people are used to using gestures. Like, hey, next photo, for instance. As simple as that.
Host: So, kind of Tony Stark in his downstairs lab, and he just waves his hands and things happen.
Anirudh Badam: Exactly. It has to be more natural. And so, I think that whole notion of being tech savvy has to go away. Nobody has to be tech savvy. Tech has to be people savvy. That’s how I want to think about it.
Host: Anirudh Badam. You are one to watch.
Anirudh Badam: Thank you.
Host: Thanks for coming in and talking to us today.
Anirudh Badam: Thanks for having me here. It was such a pleasure talking to you.
Host: To learn more about Dr. Anirudh Badam, and how Microsoft Research is working to close the gap between memory and storage, visit Microsoft.com/research.