Remarks by Scott Charney, Corporate Vice President, Trustworthy Computing
San Francisco, Calif.
Feb. 15, 2011
ANNOUNCER: Ladies and gentlemen, please welcome Corporate Vice President, Trustworthy Computing, Microsoft Corporation, Scott Charney. (Applause.)
SCOTT CHARNEY: Good morning.
As I think about what I heard Art Coviello say Public Key Infrastructure, that '95 was the year, '96 was the year, '97 was the year, it made me think about this slide, and particularly the ring around the slide, what I call SEPITA, Social, Economic, Political, and IT Alignment.
Very often society needs something that IT is not ready to deliver. Sometimes IT has the capability, but there's no economic model to make it work. Sometimes politicians want to do things, like protect children online, but the IT doesn't support age verification.
So often this issue of SEPITA blocks our progress, and one of the reasons it's so exciting to be here today is that that is changing. We are starting to see alignment.
Now, hopefully, people who have been at RSA before recognize this slide, because I've used it before. It's about our vision for how to get security right on the Internet. We have to do the fundamentals right, we have to build trusted stacks that start at the hardware level and move all the way up to people, we need a good identity management system, and then, of course, we need this alignment.
And if you look at the activities going on around the world now, you see a lot of this happening. You see us continue to revise the SDL, for example, the Security Development Lifecycle. On February 11th, NIST released a draft publication on security guidelines for the BIOS. We're seeing a lot of activity in both the government and the private sector, and a lot of consumer concern about the cloud, privacy, and security.
But when we think about these things, we need to think about the threat model. And one of the interesting things to me, as I've gone around the world and seen countries try to grapple with a security strategy, is that we have been in somewhat of a state of paralysis.
So, I started to think about why, and what really occurred to me is we don't need a cyber-security strategy, we kind of need four. And the reason for that is you have to go back and think about what the threat is.
There are many malicious actors on the Internet, and they have many different motives. The problem is their techniques are often the same. And what that means is when you see a denial of service attack coming or some other activity, you can't always identify or infer the identity of the actor or their motive.
These attacks are occurring on a shared and integrated domain. I mean that in two respects. First of all, it's shared by citizens, organizations, and governments. And that sharing is intertwined in a way that makes it hard to tease the parties apart.
So, if you think about warfare in the physical space, the troops are over here, the hospital is over there. You can bomb the troops but you can't bomb the hospital. If you put troops on the hospital rooftop, we have rules for that. You can shoot at the troops if the collateral damage to innocent civilians doesn't outweigh the military objective.
But on the Internet you can't tease these things apart. The parties are all commingled.
And it's not just the parties that are commingled, their activities are commingled. One packet could have malicious payload, the next packet free speech. Why does that matter? Because governments might say we want the military to look for that malicious payload, but do you want the military looking at free speech?
So, this shared and integrated domain creates a huge problem.
The speed of attack exceeds our ability to respond. Political processes, international processes, human processes are slow; attacks, nanoseconds. So, now we can't respond fast enough.
When you think about the consequences of an attack, they're hard to predict. If you go back to the very earliest Internet attack, the Morris worm in 1988, the consequence was not the intent. And to this day, of course, malware sometimes gets into the wild with an objective and then spreads in unpredictable ways.
And the worst case scenarios are alarming. And so you see a lot of very hot rhetoric, E-911, electronic Pearl Harbors, digital Armageddon.
And when you tell people, here's your problem: there are lots of actors, lots of motives, you don't know who's doing what, it's all shared and integrated, it's going to happen really fast, bad things are going to happen, it could be terrible. Well, what are you going to do about it? It's no wonder we've been paralyzed.
So, the first thing we have to realize is there's not great attribution on the Internet, of course, so we need to think about what do we do in cases where there's no attribution. And in some of those cases we might say we need a different model, like looking at consequence. You know, if someone is attacking a critical system, does it matter who's attacking it or do you need to take some action?
But the other thing we have to do is increase attribution, and we have to do that for two reasons. First, in a certain class of cases, like cybercrime, it will be a huge help. Second, attribution won't make the really sophisticated actors get caught, but it will change the signal-to-noise ratio on the Internet: more things will be known, and therefore you can deal with the residual risk of the unknown.
So, when I started thinking about this, I thought that, you know, all the cases we have and all the things that are happening really fall into four buckets.
The first is cybercrime. There we actually know what to do. You harmonize national laws, you build up capability and capacity of law enforcement, you have to speed up international assistance. Tactics are hard, but the strategy we know.
The second class of cases are things like economic espionage where we don't have normative behavior on the Internet. Some countries say there should be a level playing field for business, and some countries look the other way for economic espionage. We actually know what to do when countries disagree on normative behavior. We've done it with money laundering, we've done it with weapons of mass destruction. Countries have to start talking, establish normative behaviors, start imposing sanctions, and work through the problems.
The next category is military espionage. You've probably seen a lot of reports that countries are exfiltrating data from other countries. This is a serious problem; I suggest we get over it. The reason we have to get over it is that espionage has been going on for thousands of years, and complaining about it will not make it stop.
The Internet does change the equation. It used to be that you had to put spies in country, and it was risky, and we'd catch their spies and they'd catch our spies, we'd go to a bridge in Berlin, we'd trade spies, we'd start it all over again.
Now in the Internet, of course, you can stay in your home country and actually trade terabytes of data with no risk. So, it does change the game, but espionage is not going to stop.
And then there's cyber warfare, and that's the most complicated of all. Why? Because we don't even know how to define it yet. We don't know when it's started, we don't know when it's over. It's a shared, integrated domain, so we don't know how to tease apart the battlefield from the civilian field. And policymakers haven't yet grappled with the question of would we order the destruction of people in response to the destruction of data.
If someone dropped a bomb on a telecommunication station in country, a kinetic attack, there might be a kinetic response. But if you can cause that kinetic damage through IT, would there be a kinetic response or just an IT response?
So, these are very challenging things to think through as a society that has become so dependent on the Internet in so many ways. And at the same time, of course, this whole environment is changing. There's a massive proliferation of devices, and I don't just mean computers and cell phones; there are Internet-connected appliances, and there are going to be Internet sensors in everything.
And the result of that, of course, is a ton of data-centricity, knowing what different devices are doing, where they are, where you are, where your car is. Everything is going to be data-centric, and that's going to have huge implications for both protecting that data from a security perspective, and from a privacy perspective. Why? Because computers don't forget things, and search makes everything available forever.
So, anything that you've ever done in your life might be recorded and findable. That scares the Dickens out of me in a way, and I've lived a pretty mild life. But it's going to be a different world, and we're going to have to sort through that.
That's going to make the importance of identity just skyrocket. You heard a little about that in Art's presentation, too. Why? Because 20 years ago, if you lost your username and password, you lost your mail. Today, you could lose your health records, you could lose your mail, you can lose your tax records, you can lose the photos you stored in the cloud, your sensitive documents.
Gating access, being able to know who is accessing this data-centric world where everything is stored and nothing is forgotten, is all going to be about identity management.
And then, of course, relatedly, governments are back. I remember the time in the '90s when I was in the government, and there were some people in the technology community who said, you know what, the Internet is going to overrun government; there's going to be no way to apply sovereign law to a sovereignty-agnostic Internet.
Well, you know, to paraphrase Mark Twain, reports of the death of governments were greatly exaggerated. They're back on many levels, everything from standards for security, to law enforcement access through CALEA reform and data retention legislation, to the militarization of the Internet as it becomes something that's critical to defense infrastructures all around the world.
So, as we think about the threats we face, we also have to realize that our world is changing out from under our feet, and we have to think about how we look at this new world and look at the role of the security and privacy professionals, and figure out how to find a new path forward. And so it's going to be really challenging.
Now, last year I showed a demo involving Erica using an EID process that we have been very high on, and then more recently, in Berlin, I showed a different scenario involving Erica at work. For those of you who have seen it, great. For those who haven't seen this video, it's really important, because I think in large part this is the way the future is evolving. So, I'd ask for the start of the video, please.
DEMO: As the Internet and cloud have grown to be part of the fabric of society, societal expectations for security, reliability and privacy are intrinsic.
Within Germany, Fraunhofer FOKUS and Microsoft, with the support of Bundesdruckerei, have been working together to design privacy-enabled identity systems to support a new generation of online services.
For everyday use these systems support the ability to minimize the unnecessary collection, sharing and disclosure of identity information, thus preserving privacy.
Erica has an appointment with her doctor, and learns that she is at risk for hypertension. Her doctor discusses preventive measures she can take, and offers Erica the option to participate in a pilot program using Microsoft's HealthVault to monitor her risk factors online.
Microsoft HealthVault is an online platform providing a privacy- and security-enhanced repository for patients like Erica to store health information, and enables them to share that information with those they trust.
HealthVault offers Erica a variety of applications to manage her personal health and wellness information.
Erica decides to sign up for a HealthVault account using her German electronic identity card. Based on her previous experience, Erica is confident that her German EID, in combination with U-Prove technology, will enable her to control the disclosure of her information.
Erica is asked for reliable authentication using her German EID card, and for explicit consent to share the information.
The verified claims provided are used by HealthVault to create an account for Erica in a seamless manner.
Erica then selects the My Health Info application, and in a few moments she is able to set up the permissions to upload data from her blood pressure monitor, and to view and monitor her health indicators.
Managing her health and wellness goals online with HealthVault, Erica has immediate access to her information wherever she is, plus the option of providing her information to the clinical systems used by her doctor.
The German EID system, in conjunction with Microsoft U-Prove technology, helps to preserve privacy while enabling consumers to interact confidently with online applications.
(End video segment.)
SCOTT CHARNEY: Now, when I showed this last year as a proof of concept, it was forward-leaning, of course, and it still is a little. But look what's changed in the world, and this is the point I made when I came out. The U.S. government has announced the creation of a program office in the Commerce Department to start looking at how to catalyze identity management systems. At the same time, on November 1st, when I was in Berlin, it just happened to be the time that the German government was rolling out its EID cards to German citizens.

So, what you're starting to see is this alignment between IT forces, economic models that drive value to consumers, like making healthcare easier and driving down the cost of healthcare, and government engagement in making sure that the public-private partnership is actually action-oriented and starts to deliver real value to the IT ecosystem.
And so these changes are coming, and one of the interesting things about these models, of course, is that you'll notice the user always retains control of the data that they choose to pass, and when they choose to pass it. It's always interesting. We heard about FUD, and there is a lot of FUD in the security space, of course. There's FUD in the privacy space, too. There's FUD when industry does things. There's FUD when governments do things.
So, you see, for example, the government announced the creation of a program office, and some people report that there's going to be a national ID card. Where did that come from? It's not true. Just think about how you use your wallet today. You have maybe a corporate ID that you pull out when you're in your company building. You have a credit card that you pull out when you're at the store. You have a driver's license you pull out when you want to fly. You have a passport that you might pull out when you want to travel internationally. You have multiple IDs that serve different purposes, and you get to choose which ID you want to pull out for what occasion. And it's going to work the same way in the ID ecosystem in the IT world.
People will have multiple IDs, and the key to making this work, to getting the SEPITA alignment, to getting social acceptance, is that it has to be governed by user choice and a concern for privacy.
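The multiple-IDs idea can be sketched in code. This is a hypothetical illustration, not how U-Prove or the German EID actually work: all class and claim names here are invented, and real systems release cryptographically verifiable tokens rather than plain values. The point it demonstrates is minimal disclosure, where the user, not the relying party, decides which claims leave the wallet.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """A single assertion about the user, e.g. 'age_over_18'."""
    name: str
    value: str

class Wallet:
    """Holds the user's claims; the user chooses what to present, and when."""
    def __init__(self):
        self.claims = {}

    def add(self, claim: Claim):
        self.claims[claim.name] = claim

    def present(self, requested, consented):
        # Release only claims the relying party asked for AND the user
        # explicitly consented to disclose -- minimal disclosure by design.
        return {n: self.claims[n].value
                for n in requested
                if n in consented and n in self.claims}

wallet = Wallet()
wallet.add(Claim("age_over_18", "true"))
wallet.add(Claim("name", "Erica"))
wallet.add(Claim("address", "Berlin"))

# A service asks for three claims, but the user consents to only one.
disclosed = wallet.present(
    requested=["age_over_18", "name", "address"],
    consented=["age_over_18"])
print(disclosed)  # {'age_over_18': 'true'}
```

The service learns the one fact it needs (the user is over 18) and nothing else; which ID, and which claims, to "pull out of the wallet" remains the user's choice.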
I also want to talk about how we're changing our thinking about defending this IT ecosystem, because there are changes afoot there, too. You know, when the cyber-threat first started emerging in the '80s and '90s, a lot of it was about individual defense. Companies would raise firewalls, run intrusion detection and anti-virus, start managing machine configurations, and the like. Individual defense is helpful, but of course it's not enough. And then, when the government-industry partnership started in the '90s, there was a lot of talk about information sharing, that we needed to share information for our collective defense. And the problem was, of course, that sharing information is not an objective; it's a tool. You share information if there is something to act on.
But there was a lot of talk about how we could build better collective defense, and now in certain quarters you hear discussions about active defense, which is can we drop packets higher up in the network. You think of the model, if a plane is coming at your territory, you shoot it down on the way. And then, of course, there are offensive capabilities that countries are increasingly looking at and building up capability for.
But one of the things we know is collective defense is better than individual defense. So, we started thinking about how we could apply public health models to the Internet. Using public health models is very interesting, both for the similarities, and for the differences. So, we started looking at, how does the public health model work? Well, first of all, we educate people on health risks. We tell them to wash their hands, and cough and sneeze into their sleeves, and the like. We have efforts to detect disease. We have vaccinations to prevent disease. Sometimes people get sick, and we treat them, of course, and then we actually build international structures so that we could respond to diseases when they occur and do so quickly.
So, countries have national health organizations. In the U.S. we have the Centers for Disease Control. And at the international level there's the WHO, the World Health Organization. And if you traveled around the world during SARS or H1N1, which I did, you might have had that experience where you get off a plane and someone is pointing a little device at you to take your temperature. And if they think you have a temperature or are infected, then you're going to be quarantined and treated, because the good of the many is better than the good of the one. And so we've built this model to deal with human disease.
And it turns out, when you look at our history in IT prevention, it's actually very similar. We've educated people on IT risks the same way we educate them on health risks. We've told them: run a firewall, run anti-virus, back up your data. We have efforts to detect malware; we tell people to run anti-virus. And we actually give them the programs in advance to prevent infection, much like a vaccine.
Of course, once they get sick anyway, maybe because the anti-virus signatures weren't up to date, it's a zero-day vulnerability, or they've clicked on a bad attachment, we then treat people. Microsoft, for example, uses the Malicious Software Removal Tool so that when people come to Automatic Update, if we see known malware, we can remove it. And then, at the international level, of course, there is searching and information sharing.
So, we started looking at this model and saying, why can't we do this differently, and a little more aggressively, on the Internet? Because most of the model we have had to date is reactive, and we could be more proactive about machine health. That is, while we're going to continue to look for badness, can we actually help enforce goodness? It's an interesting model.
I talked about this very briefly at RSA last year, and my thinking has evolved a great deal, in part because of my experience with identity management, interestingly enough. Last year at RSA, I said, you know, we need to think about ISPs as being the CIO for the public sector, and we need to think about them scanning consumer machines, making sure they're clean, and maybe quarantining them from the Internet.
But in the course of the last year I've thought a lot more about this, and I realized there are many flaws with that model, and it could be improved significantly. There are two primary flaws with the model, well, probably three, actually. One is, consumers may not want their machines scanned, right? They have a privacy interest in their machines. They may not feel comfortable with that.
The second problem, of course, is that it puts a lot of burden on the ISPs, because they're the ones who are granting access to the Internet. And that could be a problem. And the third problem is with the notion of quarantine at all. Although we do do it in health cases, the problem with quarantining on the Internet is this issue of convergence: my Internet PC may have VoIP, and it may be the way I access 911 for emergency services. So you see the scenario, right? I'm having a heart attack, I run to my computer, and it says you need to install four patches and reboot before you can access the Internet. That's not the user experience we strive for.
I started thinking more about this, and thinking about claims-based identity. The thing that makes claims-based identity socially acceptable and workable is that the user retains control of their data and gets to decide what to pass, and when. Why don't we do the same thing for the health of machines? The beauty of this model is a few things. First of all, the user remains in control. The user can say, I don't want to pass a health certificate. Now, there may be consequences for that decision: if you're pulled over and a police officer thinks you've been driving drunk, you can refuse a breathalyzer. There may be consequences for that decision, but you can do it.
As long as we're transparent and people can make choices, that's fine. So, the user remains in control. The second great thing about this model is that it's not all up to the ISP; any organization can say, we want to look at a health certificate. So a bank, for example, could say: we know there's a piece of malware that's targeting customers of our bank, and we know the latest anti-virus signatures cover it. We just want to make sure you have that signature.
The third thing about this model is that quarantining is far too binary. It doesn't have to be access or no access. It can be some other sort of risk management that's tailored to the problem. So, a bank can say, OK, because you don't want to pass a health certificate, you can still access your account, and you can still move your money around, but we're going to limit transactions to $2,000. That way, if it turns out you have been compromised, no one can empty your whole account.
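The risk-tiered alternative to binary quarantine could look something like this sketch. Everything here is invented for illustration (the certificate fields, the policy function, and the $2,000 figure taken from the example above); a real deployment would verify a signed attestation from the trusted stack rather than inspect a plain dictionary.

```python
# Hypothetical sketch: a bank tailors what a session may do based on an
# optional machine "health certificate", instead of blocking access outright.

TRANSACTION_LIMIT_UNVERIFIED = 2000  # dollars, per the example above

def session_policy(health_cert):
    """Return a risk-management policy for a banking session.

    health_cert is None when the user declines to present one; that stays
    the user's choice, but the choice carries consequences.
    """
    if health_cert is None:
        # No attestation: still allow access, but cap transaction size
        # so a compromised machine can't empty the whole account.
        return {"access": True,
                "transaction_limit": TRANSACTION_LIMIT_UNVERIFIED}
    if health_cert.get("av_signatures_current") and health_cert.get("patched"):
        # Healthy, attested machine: full access, no cap.
        return {"access": True, "transaction_limit": None}
    # Certificate presented but machine health is stale: same cap applies.
    return {"access": True,
            "transaction_limit": TRANSACTION_LIMIT_UNVERIFIED}

print(session_policy(None))
print(session_policy({"av_signatures_current": True, "patched": True}))
```

Note that every branch returns `"access": True`; the policy degrades gracefully instead of quarantining, which is exactly the difference from the binary model.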
We'll take other kinds of risk management measures, like seeing if you're actually coming from a recognized machine. So, this is a very interesting model that really allows us to think differently about promoting health. Let me give you an example of how this would work. Let's show the video, please.
Now, when you start talking about collective defense and health certificates on machines, especially when you're in a group of security experts, there are some things you have to think about. One comes back to the trusted stack: how is that health certificate generated? Every time you implement security, bad guys will try to figure out a way around it. So, you have to think about how you generate those certificates and prevent them from being spoofed, and give the user an experience where they understand what's happening and can see irregularities.
The other thing that I've often heard when we think about this issue is social acceptance by consumers. Will consumers even accept the idea that they should have to attest to anything before they get access to the Internet? Interestingly enough, in many parts of the world now, and you see this in surveys of citizens, people are saying that access to the Internet is a fundamental right.
And that has huge implications. There aren't that many fundamental rights that rise to that level of prestige in the world. And so I started thinking about how you explain to consumers why they may need health certificates, and why this is a good model. I thought about two different things. One is vaccines. I have young children who go to school, and if they don't get vaccinated, they can't go. Why? The good of the many outweighs the good of the one, and we want to make sure we don't spread dangerous diseases.
But an even better example relates to smoking. We've known for a long time that smoking causes cancer, and it causes a lot of other diseases as well. But we allow people to smoke. It's a question of risk management: you get to manage your own risk. You want to smoke? You want to get lung cancer? You want to die? That's your choice. And even though there are incidental costs to society, healthcare costs and other things, days off from work from emphysema or other diseases, we permit it.
Then, of course, the EPA came out with its second-hand smoke analysis. It turned out that when you smoked, you weren't just hurting yourself; you were hurting your neighbors. Suddenly, smoking was banned everywhere: public places, airplanes, restaurants, this room. Why? You have a right to smoke and make a risk management decision for yourself. You don't have the right to kill your neighbor.
When we started attaching all these people to the Internet, what we said is, you'd better run a firewall, you'd better run anti-virus, you'd better back up your data, because if you don't do that stuff, you may get wiped out. But we also said, you have a right to be wiped out. If you don't want to do those things, that's risk management for you. And if you want to lose your family photos and have keystroke loggers, that's your choice about how much security you deploy.
But, it turns out, not unlike smoking, that when you connect to the Internet, you're entering a place that's used by many. And if your machine becomes infested, and botnetted and used to spew out spam, or launch a denial of service attack, it's not just about your risk that you're accepting. You're accepting risk for the whole ecosystem. So, we need to think about this collective defense matter differently. We need to make sure that people understand that this is a shared and integrated domain that has a lot of new and emerging threats, and we have to be smarter about how we address those threats.
The other thing I'll say about this model is, I've heard people say, well, that's fine if you're looking for basic things and the malware you know, but what about zero days and new viruses? The goal isn't to catch everything a priori. That, we know we can't do. But it does do two things. One, it raises the basic level of hygiene; and two, as new things come out, you've already built the infrastructure to respond quickly.
So, every year there's a new version of the flu. There was a time before SARS. There was a time before H1N1. The point is, when those things appeared, the world didn't scramble to figure out what to do. It already had the mechanisms in place. So, collective defense really gives us a great opportunity to learn from the public health model, address the botnet risk, and at the same time protect users' privacy, and do it in a way that aligns the forces of IT, market, social, and political acceptance.
I did say, by the way, that there are interesting comparisons to the human health model, and interesting differences. Two of the interesting differences: in the human health model, although diseases may mutate, they're not malicious; you don't have an adversary on the other side doing something at light speed. The second interesting difference is that when people die, they're dead, but when machines die, we can sometimes bring them back. So, there are interesting learnings about what we can do, and also things we have to think about differently.
So, what are the next steps? Well, we need to continue focusing on trusted stacks, all the way from the firmware up through the identity management system. We need to deploy robust claims-based identity solutions, and then use that same model to think about how we do collective defense by applying public health models to the Internet.
If you want to see some of these things in action that I showed you today in these videos, I would just invite you to go visit the Microsoft booth. You'll see, for example, bootable USB sticks that are authenticated from the bottom to the top. You'll see examples of health certificates in action, and ID management systems. I think we're finally seeing that alignment, we're finally getting traction, and things are moving. This is a really exciting time to be in security.
Thank you very much.