Craig Mundie: Windows Hardware Engineering Conference (WinHEC) 2007
May 15, 2007
A transcript of remarks by Microsoft Chief Research & Strategy Officer Craig Mundie at the 2007 Microsoft Windows Hardware Engineering Conference (WinHEC) in Los Angeles, Calif. on May 15, 2007.


CRAIG MUNDIE: As Bill [Gates] said, my job has gotten more fun in the last year as I've taken on responsibility for Microsoft Research on a global basis. And one of the things that I try to combine as we work in that area is thinking not just about how the technology itself evolves, but ultimately how our uses of the technology evolve. And in the next half hour or so I'd like to help you think about how these things are going to continue to expand not only the range of capabilities that we'll build into all of our computers, but the kinds of applications and, in fact, the kinds of users that we're going to have for all of this incredible technology.

Microsoft Chief Research & Strategy Officer Craig Mundie keynotes the WinHEC 2007 conference. Los Angeles, May 15, 2007.

Computing arguably has already transformed the global society. It certainly had a big impact on business and the economy. But it continues to transform our society. As the availability of networks and telephony has expanded around the world, we find ourselves insinuating computing into virtually every aspect of our daily lives. It's not just used by us in our business activities; it's increasingly at the heart of our entertainment facilities. And as Bill just talked about, we're seeing a new generation of technology transform the way in which we all communicate, even extending back to the traditional notion of just what a telephone is and how it interacts with everything else.

But more and more, society's biggest challenges in healthcare, education, even energy management may, in fact, be some of the targets where we apply these technologies next. Many countries are increasingly recognizing that their national security, whether on the defense side or just the economic side, is more and more tied up in their ability to harness these technologies for all kinds of interesting applications. And in fact there's no field of science or engineering today that can really advance in any material way without an aggressive use of these information technologies.

And so, as we look to the future, I wanted to talk a little bit about the kinds of things that are going to change in the way we have to build these devices, and about how the people who will access them will be different from the people who have used them in the past.

An example of this is how healthcare is going to be changed on a global basis by the incorporation of these technologies. If you look at this simple graph, it kind of shows two worlds: the world of the developed countries, which have already invested heavily in healthcare in one form or another, and the emerging markets, where, in fact, the bulk of the world's population exists. And it's pretty clear that even in the developed environment we are struggling to pay for the capabilities that technology brings us in the ability to remediate health problems. And the question is: how are we going to be able to bring these things to another 4.5 or 5 billion people around the world, where we don't have the kind of capability and the ability to pay that we have within the more developed countries?

So, I'd like to give you a little bit of a demonstration about how healthcare might actually evolve in the next few years, and I'll give you both a rich world example and then perhaps a developing market example.

The other thing that Bill implied in his comments is that the way in which we interface with computers is changing. And this demonstration is another way where we're going to see people changing the way in which computing is something that they can interact with.

What we actually have here is a traditional table, nothing magic about the table, but here we have a projection capability with cameras attached to it. This is built from the same technology that is being made in huge volumes today for Web cameras or video cameras on the input side, and TV displays on the output side. But by combining them together and putting them into a PC for processing, we're able to build an intelligent input/output system.

So, here this game of checkers is one that perhaps an elderly person would play, who's confined to their home, and she plays checkers with a friend of hers. And so here's a game in progress. I can just take a checker and move it. And I've got physical pieces and the computer is essentially reflecting the moves of the person on the other end by representing them in video. So, they can talk to each other. It's essentially a videoconferencing system, and it's a game at the same time. But the table can be used for many purposes.

So, here it recognizes that at the end of this game it's time for Nellie to take some of her medications. One of the big issues is getting elderly people to take them at the right time and take the right ones. So, here we can actually use optical recognition: she puts the pills down, and because they're all optically unique, it can recognize them and says, OK, these are the right ones in the right numbers, so you should take these pills and go ahead.

And so this allows us to also collect information that can be useful in the way that we can deal with medication or ultimately over time deal with interactions of drugs that may be prescribed by multiple people. It might ask if she wants to participate in a survey, and I can just use touch on the table that's optically sensed as a way to control it.

Another thing older people have a tough time with is reading things. So, here it says you've got some mail in your mailbox, and if you go get it and put it down here, I can essentially look at this postcard and I can make it big for you, I can make it easier for you to read.

So, here we're using things that are natural extensions to the kind of technology we have that's in the personal computer environment, and yet it gives us the ability to do things that would otherwise be quite challenging for people to deal with.

So, let me leave the developed world behind a little bit and go on now to a demonstration of what is going to happen in the emerging markets, and how the use of computing and other types of communications technologies is going to allow us to change that environment.

So, here we might expect that a lot of these people are going to live in an environment, maybe a rural village. The one thing that we do know today is that those people, they're buying computers. They happen to call them cell phones. Cell phones today and in the next few years will have microprocessors that rival the performance capabilities of the things that we all designed for and used as desktops not that many years ago. And the ability to use these not just for the traditional telephony activity but for other applications is going to become increasingly important.

One of the things we've been doing at Microsoft Research is looking for ways to use these types of screen and voice-based systems to allow people to interact with computer activities even if they're illiterate. And so we've been learning that by using speech and voice and video and symbology, we're able to get people who have no previous training in computer-based systems, and who, in fact, can't read or write to any significant degree, to perform some significant tasks.

In this case this woman is a young mother. She's got an infant that's sick. And we want to have an interface that would allow her to get some medical care using this capability. So, she uses the phone. She may get some instructions through video. So, the video basically guides her in simple ways to how she should interface with the system, and it can offer her different choices as a function of whether she can read and write or not. So, in this case she says she's illiterate and we go on.

So, it gives her a set of icons. It says, OK, if you have a medical problem, which member of your family is it, and so she selects her infant. Then it says, what are the symptoms? And so she selects coughing, a fever, and vomiting by just touching the symbols on the number pad that correspond on the phone.

At this point the phone essentially transmits the data, the description, and her identity to the medical facility in the village that actually shows her a picture and maybe a map and an indication with a video prompt that these symptoms could be serious and she should take her infant over to the infirmary.

And so, of course, in this environment we won't have the kind of medical care that we all enjoy in the more developed environment. So, in fact, the infirmary increasingly might be something like this where I now have a computer, it's a kiosk of sorts, and it's been interfaced to another set of USB peripherals, if you will, like blood pressure cuffs and stethoscopes, because we want to not only enable untrained people to capture this information but potentially to allow it to be used with remote diagnosis facilities if there's a healthcare professional that could be available in a networked environment.

And so she can come up to this and get some more video prompts. She might take a smart card that has been provided by the government, put it on the system, and it recognizes her and gives her further information. It summarizes the issues that she had presented through her cell phone to verify them with the mother. It then asks her to input, perhaps by just touching the screen, on what days or nights the symptoms have been present, so we get some history on the problem. And it says, OK, we're going to need to do some lab work, whether that's done by the young mother herself or by somebody who's trained.

One of the things that's also happening is we're getting computers applied to many new types of devices. This is quite an interesting device itself. It's made by Micronics. This is a PCR, polymerase chain reaction, medical lab in a box; it has an embedded microprocessor. You're able to take one of these devices, take a drop of blood, put it in here, and in less than 15 minutes it does a complete molecular DNA test for many of the diseases that could be at the root of the symptoms that were indicated. This kind of thing used to be incredibly difficult to do; it would take days or weeks to get it done by experts. But this kind of device was actually one of the results of the Bill and Melinda Gates [Foundation's] Grand Challenges in Global Health, where they funded people to look for unique ways to solve these kinds of problems, and this device is actually coming to the market in the relatively near future.

And so now labs with what are really rich-world-class medical diagnostic capabilities can be used even in these kinds of remote environments, and they can make a tremendous difference in the ability to deal with these diseases.

So, here she might get some more information or, in fact, an introduction to a doctor. In this case the doctor may be a recorded piece of video on a server that is essentially going to be played to her in order to tell her, OK, we looked at the symptoms, we've looked at the lab, your baby has a particular problem, and here's how you could care for her. In fact, the material can then also be put on the smart phone, and when she leaves the clinic and goes home she's able to continue to call up that information and use it in the continuing care of her infant or the ability to continue to input information about the diagnostics.

Linking all these systems together also turns out to be important for the people who own the problem of managing healthcare on a national or global basis. You can hook them up to call centers. You can provide various forms of emergency medical treatment. You can monitor the spread of some of these diseases. So, as we see outbreaks of avian flu or SARS or other things, our ability to know accurately what they are, thanks to the molecular DNA tests or other types of tests that can be done in these local environments, plus the ability to connect all these things together through the Internet, really goes a long way toward changing the way people will think about getting healthcare.

So, I think these things are going to be incredibly important in the world that we live in, in the future, and I want to talk now a bit about how the computing environment that we're all going to use is going to be used to make many of these changes.

One of the things that is going on -- I've changed the slide just to finish the thought -- is that we at Microsoft have started to think about this marketplace in three different tiers. There are the billion richest people on the planet, whom we've already been able to provide a lot of capability to. They have high disposable incomes.

Increasingly there are another 2 billion people that are sort of coming online and are being able to acquire these technologies, because they have disposable income, and increasingly want things like cell phones and personal computers.

And the kind of demonstration I just gave is critically important in terms of ultimately being able to reach out to perhaps another 3 billion people over time, where some involvement of governments or NGOs or other philanthropic activities is going to be required in order to bring technology to them. But I think it's the only way we're going to see the kind of scaling that will be required.

One of the things that we have been developing in Microsoft Research is the idea of using the cell phone as a computer, not just in the sense that you can do the kinds of things I showed in the healthcare demo, but in a project we've dubbed Fone+, the idea of being able to affiliate the phone with auxiliary displays, in this case a standard television.

Years ago, we developed the WebTV technology, and it taught us many things about simplified 10-foot UIs that are in the Media Center today, but that can also be brought down to computers of the class of a cell phone. So, by hooking up a USB keyboard and a mouse to a cell phone when you're at home or in a school environment, the phone itself can become an entry-level computer.

So, as Bill said, we are really fanning out in terms of the range of devices that people want to be able to use as a computer, and we need to think about how we make the hardware ecosystem move in this direction. Many of the cell phone companies have been thinking about allowing you to watch television or short video clips on your phone. There's no reason that if you had these things connected to a larger display you couldn't watch whole TV programs in that environment. And you may be able to bootstrap a lot of people into an Internet-based experience with music and video and some type of productivity or creativity application even before they can afford a traditional device like the ones that Bill demonstrated running Vista today.

So, increasingly the form factors of computing are evolving. The personal computer as we knew it started as a fixed device, a desktop form factor, and very early on people became very interested in making it portable. And what we largely know today as laptops are really in this class I'd call portable.

When Bill talked earlier and showed some of these ultra-mobile PCs running [Windows] Vista, I think one of the things you see is the beginning of another trend: whether it's up from the phone or down from the laptop, the idea of having a more and more potent computing experience in a mobile environment. The distinction is that mobile is something you actually use while you're moving, as opposed to a computing facility you can move someplace else and then use. The lines may blur among these things, but we're very focused on what it's going to take to create this mobile computing environment.

One of the things that is fascinating in the quest for mobility is the requirement to deal with lower power. Increasingly, the heaviest component of these portable devices and phones is the battery. The thing that limits people's sense of utility and freedom to make these things an integral part of their life on a consistent basis is: can I trust that it will not run out of juice when I need it?

And so the silicon industry has done a huge amount of engineering to focus on consuming power more sparingly, and that is leading to another interesting thing, which is the evolution of the microprocessor itself.

One of the things that has happened is that our industry, and certainly our company, grew up over 20 years or more in an environment where the microprocessor people just continued to improve the clock speed of our processors. Moore's Law, which as Gordon [Moore] originally stated it spoke not about the speed of computers but about transistor density, has continued to function, and we certainly have larger memories and more transistors to build these powerful computers. But along the way we were also increasing clock rate, and so we were getting a two-dimensional increase in capability: we got performance through clock rate and we got capacity through expansion of memory.

And as an industry, as a community, we have really enjoyed this very, very steady progression. But a couple of years ago one dimension of it began to stall out, largely because of the inability to eliminate heat from the packages of the microprocessors. And so while many people would have presumed that clock rates would just keep climbing as the dies got bigger, in fact that isn't going to happen, and the free lunch to some extent is over. We're not going to get ever-increasing clock rates. We may get more transistors, but we're not going to get one processor that just goes faster and faster.

And so for the very first time the bulk of the software development community and the hardware community is going to have to think about how to build machines that have a lot more parallelism in them, that have two cores, eight cores, and ultimately many, many cores, and, in fact, the cores will become heterogeneous from an architectural standpoint.

Each of these brings with it a number of interesting challenges for the computer software industry, and it's an area for the last five years that has been a big focus within Microsoft from a research standpoint.

In fact, the cores that these will be built from will largely be relatively simple, low-power cores that are increasingly important in phones and ultra-mobile devices as well. And so if you think about how the computer chip itself will evolve, the first thing that will happen is we'll use all of its transistor capacity and an appropriate die size to get together the number of cores that we need for the application at hand. So, we'll use small dies with a modest number of cores, and perhaps some special types of acceleration, in order to optimize for the cell phone. We'll put more cores on one of these things, along with some other accelerators, and we'll integrate more and more things on a single die, and that will become the basis of the desktop, the laptop, and the highly mobile computing environment. And we'll put very large arrays of these things together and use them as the chips in environments where we have more robust cooling and power facilities, to build the very large serving capabilities that will be not only at the heart of enterprise servers and home servers, but ultimately the basis of the mega-datacenters through which the Live services will be provided on the Internet.

One of the things that is a challenge for us, though, is that many, many of the things that have been standard fare within the computing industry are going to be very problematic here. Traditional programming languages do not naturally expose all the underlying concurrency. As we gain all this capability, the complexity of the systems that we're trying to build will continue to grow larger and larger. And our ability to deliver them with full features in a highly secure environment will be more and more challenged if we don't find ways to create more formal composition in the way that we construct these systems.

And I think, as Bill mentioned, and as all of you are aware, and the Rally demos indicate, the system itself is becoming increasingly distributed with intelligence embedded in many, many of the outlying components of the system. One of the challenges of these distributed systems is just getting them to work reliably and correctly, because they are highly parallel and asynchronous.

And so the challenges of concurrency and complexity I think are the ones that are going to have to be a huge focus for not only Microsoft as a primary provider of the tools that people use, but for all of you who consume those tools and are going to be involved in building this very, very broad ecosystem of devices, not just for the traditional enterprise or home entertainment environments, but increasingly for many, many of these other scenarios where very, very large volumes of equipment or devices are going to be involved, and yet the way that they come together will be increasingly sophisticated.

And, in fact, if you look at these two challenges, and you look at the attributes that we would like to have in these applications of the future, they're almost exactly the inverse of what we teach people about in school and provide tools, including programming languages and debuggers and other things for.

These applications of the future are going to have to be loosely coupled in their construction, intrinsically asynchronous in operation. They'll be highly concurrent in execution, both within each component itself and certainly across the distributed system.
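As a purely illustrative sketch of those attributes, consider a hypothetical Python snippet (the names and scenario are invented, not from the talk) in which two components are loosely coupled, asynchronous, and concurrent: neither calls the other directly; they only exchange messages through a queue.

```python
import asyncio

async def sensor(queue, readings):
    # Loosely coupled producer: knows nothing about its consumers.
    for r in readings:
        await queue.put(r)
        await asyncio.sleep(0)   # yield control; timing is not guaranteed in general
    await queue.put(None)        # sentinel: no more data

async def monitor(queue, alerts):
    # Independent consumer: reacts to whatever arrives, whenever it arrives.
    while True:
        reading = await queue.get()
        if reading is None:
            break
        if reading > 100:
            alerts.append(reading)

async def main():
    queue = asyncio.Queue()
    alerts = []
    # Both components run concurrently; neither blocks the other.
    await asyncio.gather(sensor(queue, [42, 150, 7, 210]), monitor(queue, alerts))
    return alerts

print(asyncio.run(main()))  # → [150, 210]
```

Because the only contract between the two components is the message format on the queue, either side can be replaced, duplicated, or moved to another machine without the other one knowing, which is the composability property the talk is pointing at.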

I think that composability is going to be a real requirement, both to improve the productivity of construction, but ultimately to allow more formal reasoning around the construction of the software itself and the integration of these systems.

As these things become mission-critical in every facet of life, society is going to depend on them every single day in every single way, and people are going to demand of us as an industry that these things be like utilities: they just don't fail. And so many new techniques are going to have to be brought forward to create more failsafe construction of these things. They will have to be decentralized so that there's no single point of failure, and yet, as we saw in the demos earlier, these complex systems will have to be administered by people who are less and less sophisticated, as the technology seeps farther and farther into our homes and, in particular, small businesses. This will be augmented by the Live services environment, but still the complexity is going to be quite daunting.

And the systems are going to have to be resilient. They will have to tolerate many, many types of failures, whether those failures come from the arbitrary addition and subtraction of devices or from movement around within a home or a mobile environment. Each of these things creates a level of variability in the environment that historically we didn't tolerate. It used to be that once we got our system together it was fairly stable; we'd boot it, and it would live in that model. But that world is pretty much gone, and I think a great deal more thinking is going to have to be done about how we construct these systems in the future.

So, we do face a number of technical challenges: One, how to construct these highly parallel programs. This has been a task you could say that was reserved for two very specialized groups in the past. The people who did the very lowest level work in operating systems and device drivers had to deal with the parallelism that existed at the hardware level, and there was a special community in supercomputing and technical computing who had no way forward for the last 10 or 15 years other than to harness parallelism to solve bigger problems, and so they put a lot of energy into the tools and techniques necessary to do that. But if we're going to get the benefit out of these new devices that in a sense are supercomputers on a die, then we too are going to have to adopt the tools of the trade necessary to build these highly concurrent environments.
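To make the shift concrete, here is a minimal, hypothetical Python sketch of the kind of explicit parallelism being described: a CPU-bound task that gains nothing from a single faster clock it doesn't have, but can be split across several cores. The function names and workload are invented for illustration.

```python
from concurrent.futures import ProcessPoolExecutor

def count_primes(lo, hi):
    """Count primes in [lo, hi) -- a CPU-bound job that scales with cores."""
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(lo, hi) if is_prime(n))

def count_primes_parallel(limit, workers=4):
    # Split the range into one chunk per worker; each chunk runs on its own core.
    step = limit // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], limit)  # absorb rounding in the last chunk
    los, his = zip(*chunks)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_primes, los, his))

if __name__ == "__main__":
    print(count_primes_parallel(100))  # → 25 (there are 25 primes below 100)
```

The point of the sketch is the part the old sequential world never forced anyone to write: deciding how to partition the work, distributing it, and merging the results, which is exactly the burden that is now moving from the specialists to the mainstream developer.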

Coordinating all the resources and services that will exist in these systems will be increasingly challenging, as will finding models for executing fine-grained programs across a variable structure. People aren't really going to want to write an application in a fundamentally different way for it to run on a cell phone, an ultra-mobile device, a laptop, a desktop, or even a cloud-based hosted environment. We're going to have to have, to some extent, common building blocks from which these things are all constructed. And in a perfect world, if you move up the scale you'll just get more performance, to the degree that the programs are intrinsically parallel.

But if you look at the machines that we already see emerging, the Intel Core Duo and other dual-core and quad-core machines from Intel and AMD, we get only limited utilization of these capabilities. Increasingly, the machines have outpaced the models with which we tend to employ their capabilities. And I think that building the applications, and thinking in a broader way about how to harness this incredible capability, is going to be increasingly important to us.

So, the quest that I think we need to be on as an industry is one that I might call fully productive computing. Today, if you look at all of the aggregate computing facilities that exist in our pockets, in our briefcases, in our desktops, in our laptops and server farms, to some extent they're mostly highly underutilized. The bar on the left here kind of shows what we really see in most people's personal computers: it does a relatively modest amount of work, usually in a highly interactive type of activity, and then it sits idle, waiting for you to come back to your mouse or keyboard and ask it to do something again. And we've really optimized these systems in a way where response time to the finest-grained interaction with the person is really, really important; we've recognized that and optimized for it. But in doing so we haven't really thought about how to harness the rest of the compute facility. People have used a variety of things, from screen savers to some types of grid computing, to try to harvest these cycles, but there's never been any concerted attempt, largely because of the programming complexity and many of the intrinsic security issues in our existing systems, to put those cycles to use.

So, now if we speculate that the world is going to change dramatically to these increasingly powerful machines, where, in fact, what we used to know literally as supercomputers may be in every laptop or desktop, then if we don't do something, this graph will look even more lopsided. Instead of 3 to 5 percent average utilization of a personal computer, it might be 0.03 percent in 10 years.

So, clearly something is going to give and the question is what. And I think the tools are going to evolve, people are going to get more creative, as they always have, in trying to figure out what to do with this capability.

And so here's my own personal list of the kinds of attributes I would love to see in a fully productive utilization of these computing facilities. One, I think that the results of the computation, and all the modalities of interaction, will become more predictable. I also think they'll become a bit more humanistic. We need to find different ways for people to interact with computers beyond point and click. And voice, as Bill mentioned, is becoming more and more prevalent. The use of vision, whether it's computer vision for that type of interaction, or the ability for the machine to see and recognize the people that are interacting with it or around it so that it can do things in an appropriate way, I think all these things will become important. And, in fact, as you move into these other types of human interactions, the computational loads go up exponentially. And so the ability to use this local facility will be increasingly important.

I think another important capability will be what I call context awareness. Here the machine and the software that's running on it needs to be sensitive to where it is, what you're doing, and what the people around you might be doing, and it should become more and more like a great personal assistant. It knows when to bring something to your attention and when not to, when to interrupt you and when not to interrupt you, and it brings forward things that you might find interesting that you didn't really expect.

And so one way in which this is going to happen is more model-based construction of applications, and we're doing a lot of work now to think about how we take all this contextual information that the machine has and find ways of representing it uniformly, providing APIs, if you will, so people can depend on it. And just as in the past where we provided APIs for graphics or storage or communications, we're increasingly going to provide APIs for access to the models that the machine develops of the behaviors and interactions of the people and applications that use it.

I think this will become quite interesting in terms of making our applications be more personalized. In order to do that, they'll have to become more adaptive, and the way in which they present information to you will also take advantage of the increasing sophistication of display technologies, and the whole thing will become a bit more immersive.

In fact, one of the things I think will be resurgent as we have this incredible capability -- and as we've seen before, it will enter at the desktop and then increasingly go portable and then mobile -- is the ability to have the machine do more speculative execution, and in conjunction with these immersive display environments have the machine do things that you might find useful, in anticipation of what you have already done and what it thinks you might do in the relatively near future.

It's interesting: one of the things we did as we were designing [Windows] Vista was take some technology out of MSR. We recognized, going back to responsiveness, what it is that people really like: even as applications have gotten bigger, they'd still like to be able to click on an application and have it start almost instantly. And so we built a model of the applications that people use, built it into the operating system, and built a facility called SuperFetch that tries to use unused cycles and unused memory capacity to preload components of applications, guessing what the next most likely application is that you're going to run.

So, I saw some statistics recently, now that there's some field use of [Windows] Vista. It maintains a list, internal to the machine, and we've monitored some of these things in test environments, and it gets it right about 90 percent of the time. Ninety percent of the time the machine predicts, as a function of the day of the week, the time of day, what you're doing, and what the last few apps you used were, what the next app is going to be, within a list of four. And so it's able to turn some of these predictions into preloading that uses otherwise idle cycles in the machine and the disk subsystem, and the system then appears more responsive.
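As an illustration only (the real SuperFetch is far more elaborate, weighing day of week, time of day, and memory pressure, as described above), a toy next-app predictor along these lines might look like this hypothetical Python sketch:

```python
from collections import Counter, defaultdict

class NextAppPredictor:
    """Predict the next application from simple first-order usage history:
    for each app, count which app the user launched after it."""
    def __init__(self):
        self.follows = defaultdict(Counter)
        self.last = None

    def record(self, app):
        # Learn the transition from the previously launched app to this one.
        if self.last is not None:
            self.follows[self.last][app] += 1
        self.last = app

    def top4(self):
        # The four apps most often launched after the current one --
        # analogous to the 4-deep list that hits about 90 percent of the time.
        return [app for app, _ in self.follows[self.last].most_common(4)]

p = NextAppPredictor()
for app in ["mail", "browser", "mail", "word", "mail", "browser"]:
    p.record(app)
print(p.top4())  # → ['mail'] (after "browser", this user usually opens "mail")
```

A preloader built on such a model would speculatively page in the binaries of the apps in `top4()` during idle time; a wrong guess costs only cycles and memory that were going unused anyway, which is why this kind of speculation is safe to attempt.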

What if we raised that up, put it on steroids, so that when you came to work in the morning the thing said, hmm, I looked at all the stuff you normally do, and these are the things you're most likely to do when you get here, so I did some of them for you, or I at least gathered some of the data that you might want, or gathered news clippings? There are lots of people who have done some of this kind of work. I predict that this type of speculative execution will become increasingly likely as we have more and more sophisticated software.

So this brings us to sort of an interesting question. Right now, as the Internet has evolved, broadband has become more highly penetrated, and to some extent computers seem not to be fully utilized, we're in the middle of one of these natural pendulum-like swings between centralized computing and computing at the edge. It started with the mainframe, and then we added terminals, and then we moved to departmental, and then we moved to personal; it just kind of moves back and forth. And there are a lot of people today who say, oh, you know, I think that in the future we'll just have dumb presentation devices again, and we'll do all the computing in the cloud.

But if you look at this kind of capability and you think about these modalities of human interaction, and you decide to have the computer doing a lot more of these things in a contextually sensitive environment, I contend that since the cloud is ultimately made from the same microprocessors, as the utilization becomes higher, it becomes impractical for a whole variety of cost and latency reasons to think you can just push everything up the wire into some centralized computing utility.

And so, in fact, I think for the first time in a long time we're going to see the pendulum come into a fairly balanced position where we, in fact, do have incredible power plants of the Internet in these huge datacenters that provide these integrating services across the network, but at the same time we're going to see increasingly powerful local personal computing facilities in everything from embedded devices, cell phones, and on up the computing spectrum.

And so at Microsoft we have made a very strong strategic commitment to the idea that we call software plus services. At times people have talked about should software just be a service, and there are certainly elements of the click-to-run model that people find compelling from a management and administration point of view, and we are fully embracing those, along with the ability to have new technologies that facilitate the construction of these applications and allow them to be brought down and installed or run on a dynamic basis.

But at the same time we see the potential of these increasingly powerful client devices, the ones that are physically with you, and we see no reason to believe that the world's programmers will let lie fallow the incredible computational facilities that will be intrinsic in all these devices. And so the architecture that we're really delivering, now more and more, is one that allows people to naturally construct a software plus services environment.

A few months ago, Bill and I gave a joint keynote at the RSA Conference, and one of the things that we talked about there was a concept of anywhere-access. In fact, if you look at Bill's keynote this morning, you look at the evolution of the environment we're all in, we have an incredibly complex ecosystem of intelligent devices. And people want access to those things, whether it's work information at home, home information on the road, personal information access at a point of presence in shopping or banking or other things; people really want anywhere-access. This is going to require an evolution not just of the software but ultimately of the devices in order to be able to do this with good facility and good security.

And so I think there are a number of tasks now that fall to the WinHEC community and Microsoft. Together we need to enable a fabric of devices. These suites of devices have to come together in interesting ways to solve all these problems. If you kind of just go around this thing clockwise from the bottom, context: we need to be able to have data and devices that work in conjunction with the things around them. The Fone+ is an example of augmenting the phone with the television in the home, with a very simple, low-cost interface, in order to be able to use that screen to allow this computing to provide something different.

People want the scaling of data. They want the right-sized amount of data in the right place at the right time, and so we need to think architecturally about how we allow that scaling to happen as a function of the display facilities or the computer's local storage capability.

In human-machine interaction, because these things are now everywhere, we're going to have new requirements for how we interact with them. We have to have more discreet ways, whether it's gesture or new types of touch interfaces or other things, to interact with computers so that it's not disruptive to other people around us. We're going to have to have new models of how we output information to these devices, whether it's like this checker-playing table over here or other things, where we just won't be able to have everybody buy unique mechanisms for each task at hand, and yet more things want to be done.

Important, I think, for the hardware community is this question of identity and trust. We have new architectures built into microprocessor chips, like the Trusted Platform Module (TPM), that allow us to sequester secrets in order to be able to ensure that we have very, very reliable, secure interfaces and identity. Being able to have identities for people, programs, and hardware devices that are non-spoofable is going to be a big task for the industry, in order to be able to assemble these very potent collections in support of business or personal tasks, and have people trust them.

When you start assembling devices at this scale, in absolute terms and also in terms of the numbers that an individual might have, management of these devices becomes a real challenge. You can see many of the lessons that we've learned in management being applied in the new Windows Server 2008 environment, and even in the [Windows] Home Server environment. And increasingly we need to find really, really good ways to take these collections of devices and their need to have common data access and bring them forward in an architecturally robust way.

Many of the goals that we have for the Live platform are designed to facilitate the construction of these aggregates, the ability to manage them in a seamless way, the ability to have access anywhere to the information that you care about, and the ability to add a new device to that collection in a very broad way, just as simply as today we talked about adding new devices to the collection of devices that you have in your home network environment.

And, of course, to do this it all depends on connectivity. And I think that more work will be required in order to bring us the class of connectivity and the cost of connectivity that we want for all these interesting applications.

Just as the demo showed, where you can now go and buy a dedicated radio extender to simplify the construction of your home network for video-capable delivery without disrupting your old network, I think this kind of creativity is going to be very important. So is the ability to use things like ad hoc mesh networking, which would take this to an even greater level, or, in fact, might allow the kind of network that we enjoy in the rich world, coupled to our private home network, to be emulated in a rural village -- but where there's nobody who's going to build, operate, and maintain these things, and no one who's willing to put the capital equipment expenditure up front in order to build these kinds of capabilities for those very, very large rural populations.

So, I think there is an almost endless array of opportunities, and while many people sometimes say, hey, have we seen it all, is the evolution of the personal computer slowing down, is the hardware ecosystem one of just perpetual refinement, I think the Microsoft view is emphatically no: the PC itself will continue to evolve in radical ways, including probably the most profound architectural changes that we've seen in almost 30 years. These things will be upon us in nominally about five years' time.

And so it really is time for many of you to begin to think about this as an incredible palette from which to paint the pictures of computing in the future, and we look forward to doing that together with you.

Thank you very much. (Applause.)
