Investor Relations

Goldman Sachs Communacopia + Technology Conference

Wednesday, September 10, 2025
Jared Spataro, CMO, AI at Work

Transcript

Who: Jared Spataro, CMO, AI at Work
Event: Goldman Sachs Communacopia + Technology Conference
Date: September 10, 2025

Kash Rangan: As the music dies down, the AI activity and IQ from our next guest is going to heat up. Did I set you up too much? I've heard great things about you.

Jared Spataro: High expectations.

Kash Rangan: Well, our next guest is Jared Spataro. I will have Jared walk us through his background. We would love to find out what you've been doing at Microsoft for the short span of 20 years. You look very young, and I find it very hard to believe that you've been at Microsoft for 20 years. But tell us about your background and career.

Jared Spataro: Sure. So, I was educated as a computer scientist and started there. It's always been my interest, and that's where my energy has been.

When I came to Microsoft, it wasn't at the beginning of my career. So, I didn't start there. I worked at smaller firms prior to that. I was in, essentially, product management. And I went to Office. So, I worked on the Office set of products.

Over the course of the years, I started to migrate more towards the business side of what we do. Product marketing is the business management side, and we typically pair that up with an engineering lead. So, now I'm responsible for a portfolio that we just call AI Business Solutions.

But over the years, I ran everything from Office Commercial to Office Consumer to Windows to Office 365 to Microsoft 365. The work during the pandemic on Teams was all work that I did. And then, just when I thought I was getting bored, this whole thing started to happen with AI, OpenAI and Copilot. So, that's what I'm up to now.

Kash Rangan: Got it. So, although you're a technical person by education, your title is Chief Marketing Officer. Is the bar so high at Microsoft to be in marketing?

Jared Spataro: Well, these days, I think it helps a lot to deeply understand the tech, for sure. I don't know, I love Microsoft because it's a place where we expect our business people and the CMO, if you will, to not only understand the tech, but to help shape it. And that's the part of my job I like the most, is trying to chart the future where things are going.

Kash Rangan: Great. On that topic, how do you envision AI evolving over time? So, Microsoft's got a unique vantage point, being super, super early. So, explain to us how you envision the AI tech stack evolving over time – your vantage point – and Microsoft's role in this evolution.

Jared Spataro: Let me talk tech stack, but I'll be brief. I don't want to go too deep, but you certainly can ask more questions.

We start at the hardware level, obviously, all the way at the chip level, data center level. There's been a lot of innovation not just in chips, but in data centers over the last couple of years.

On top of that, we actually start next with the data layer. There's some important work that we've done, particularly with a set of investments that have gelled into what we call Fabric now.

Then we have something called Foundry, where we think about the model layer. This is where the LLMs sit. And that's been important for us.

Next up, we move to the dev layer. Here, we split things between what we think of as pro-code and low-code – so, those things that are designed for professional developers versus makers.

And then, finally, we round things out a little bit differently than most, because Copilot for us is kind of that front end, if you will. We think of it as a platform that we're really trying to draw users into – not with a daily habit, but in fact, with an hourly habit, in the knowledge worker space.

So, that's how things compose.

We've spent a lot of work, if I were to take a step back, though, trying to project out three to five years and say, well, what's the firm of the future look like? We've decided to call that idea the frontier firm, just using that "frontier" label from models. And we're pretty excited about what that looks like. We can describe it today as a firm that we would say is human-led, but agent-operated. And we think that it behaves very differently. We think that humans play an important role, but the patterns of work between humans and agents are going to be very transformative. So, that's kind of where we are.

We see Microsoft's role in there as helping businesses/organizations get from today to that frontier firm. That's how we would frame up how I have conversations with CEOs and their teams, technical teams and business teams.

Kash Rangan: Got it. We all watched the launch of GPT-5. I'm sure you did. What is your reaction to GPT-5 launch? What do you make of the technological enhancements? And I have a follow-up question as it pertains to Microsoft 365.

Jared Spataro: This has been interesting for me. Because for me, it separated out a little bit people who understand what's happening versus those who are kind of superficially just tracking the technology. Let me explain what I mean.

GPT-5 was a systems launch. A lot of people focused on, "Hey, wait a second. We don't see – we were looking for non-incremental model differences." And I understand that, and I understand the expectation.

But if you get to the heart of what was going on, the biggest thing that was introduced with GPT-5 was that the orchestration layer – what they call the router – moved from being a deterministic, hard-coded thing to being an LLM model that had been post-trained to orchestrate other models and tool usage. And so, today, with GPT-5, when you send in a prompt, it's deciding, "Do I route this to a reasoning model? Do I route this to a cheaper model?" Going forward, that system, I can't explain how important that system's going to be for actually realizing the value of agents and the value of what's happening.
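The routing idea described here can be sketched in miniature. This is a purely illustrative toy, not OpenAI's implementation: a keyword heuristic stands in for the post-trained router model, and the model names are hypothetical.

```python
# Toy sketch of an LLM-based router: classify each prompt, then dispatch it to
# a reasoning model or a cheaper fast model. The classify() heuristic is a
# stand-in for a real router model; the model names are invented.

def classify(prompt: str) -> str:
    """Stand-in for a post-trained router model: label the prompt's difficulty."""
    hard_markers = ("prove", "derive", "step by step", "analyze")
    return "reasoning" if any(m in prompt.lower() for m in hard_markers) else "fast"

def route(prompt: str) -> str:
    """Dispatch the prompt to the model tier the router chose."""
    model = {"reasoning": "big-reasoning-model", "fast": "cheap-fast-model"}[classify(prompt)]
    return f"[{model}] handling: {prompt}"

print(route("Derive the closed form of this series"))  # goes to the reasoning tier
print(route("What's the weather like?"))               # goes to the cheap tier
```

The point of the pattern is exactly what's described above: the expensive tier is reserved for prompts that need it, which is what makes reasoning models economical at scale.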

So, I understand the chatter about, "Ah, I was looking for more in terms of the model." This was a really important systems launch. And in their case, for ChatGPT, it exposes the broad swath of free users to reasoning models for the very first time, in an economical way.

So, I think we may talk a little bit more about that, but that systems-based approach I think is really important because it's the beginning of a system that is going to prove to be very influential.

Kash Rangan: So, you just got me thinking. So, the beginning of a system that's going to prove to be very influential. Where do you go forward with this? How do you make this more useful in terms of Microsoft's approach to this technology?

Jared Spataro: Man, it's a great question. We think of – here's what the state-of-the-art would tell us right now. There is a question about, can you create essentially a super app that is so smart model-wise that it can do everything you could ever imagine doing, with superhuman capability? And it turns out that the answer – we would think at Microsoft and I think many of the model producers feel – is no. In fact, it's better to go and train agents that are domain- or area-specific that prove to be superhuman, and then essentially to have a system that pulls on the right agents at the right time to answer the problem.

So, we look at this in commercial terms and say, man, we were just given a gift as an industry, with essentially an orchestrator, a conductor of the symphony that, with time, is going to be able to register up the right agents, and those agents will have real domain expertise, and then call on those agents in an orchestrated way to get incredibly sophisticated jobs – not just tasks, but jobs – done. That, for us, is the future of this frontier firm. We expect that companies will produce financial analyst agents. They're going to produce expert scientist agents. They're going to produce actuarial agents. And this router is going to call on them to do the right things in business process.

So, we look at this moment – that was one of the reasons we sim shipped GPT-5, why we worked so hard to sim ship it in Copilot – as a real watershed moment for us.

Kash Rangan: So, the bleed over to M365 Copilot, how does this manifest going forward, the GPT-5 improvements in M365 Copilot?

Jared Spataro: Well, the same day, we replaced the back end of Copilot with GPT-5. Currently, if you use Copilot, you have to select a button. That was just us making sure, running water through the pipes. But over the next couple of weeks, you'll see that become the default.

And what it means for us is it's the realization of a strategy that we simply would call "Copilot plus agents." We think of it this way, Copilot is to agents like the iPhone is to apps. Or in other words, just like the iPhone has become a platform or a window into the world of apps over the last decade, we believe that, in a commercial sense, Copilot will become a window or a conduit into the world of agents.

And what we envision happening is Copilot will help you at the right time select the right agents to solve the business problem that's in front of you. That's what our customers are asking for. They're saying, "Hey, I love the power I'm seeing, but I need it to be specific to my business. I need it to be specific to my industry." And Copilot is architected to do exactly that.

Kash Rangan: Got it. Any examples of that? So, you talked about the financial analyst thing. Any other example besides putting me out of a job?

Jared Spataro: Sure. I can give you a bunch of different things. Across the industries right now, customers are working with us either pro-code or with something we call Copilot Studio to create purpose-built agents. These agents are typically built on a reasoning model, and then customers bring in their own content to train them.

So, I'll take pharmaceuticals as an example, and you can take something like clinical trials or a part of the scientific process. There are a bunch of different jobs that happen along the value chain of clinical trials, and what our customers are starting to do is build agents to do those jobs. One, for instance, is as simple as a technical writer. Another prepares the submission to the FDA. Others summarize, for instance, the actual trial data. I mean, there's a bunch of things that happen.

You can stack those agents up, and you can essentially design an end-to-end process where Copilot can do the orchestration for you of, "Hey, I want to run this and pick up where the data is today, and I want to start to run it down to the process of getting to an FDA submission." It's not done by Copilot itself. It's done by Copilot orchestrating these specialized agents to do their jobs. And that really moves us into this realm of not just agents, but digital workers, digital employees.
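The stacked-agents pattern can be sketched roughly as an orchestrator running a pipeline of specialized agents end to end. Everything here – the agent names and the clinical-trial steps – is invented for illustration; this is not an actual Copilot API.

```python
# Sketch of "Copilot orchestrates specialized agents": an orchestrator calls a
# fixed pipeline of domain agents in order, each doing one job in the process.

from typing import Callable

Agent = Callable[[str], str]

def technical_writer(doc: str) -> str:
    return doc + " | drafted"

def data_summarizer(doc: str) -> str:
    return doc + " | trial data summarized"

def submission_preparer(doc: str) -> str:
    return doc + " | FDA submission prepared"

def orchestrate(pipeline: list[Agent], work_item: str) -> str:
    """The orchestrator doesn't do the work itself; it sequences the agents."""
    for agent in pipeline:
        work_item = agent(work_item)
    return work_item

result = orchestrate(
    [technical_writer, data_summarizer, submission_preparer], "study-123"
)
print(result)  # study-123 | drafted | trial data summarized | FDA submission prepared
```

In a real system the orchestrator would itself be an LLM choosing which agents to invoke and in what order, rather than a hard-coded list.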

Kash Rangan: I'm going to throw out a bunch of seemingly unrelated terms and ask you to see if you can explain where this concept kind of fits in this: RAG, broader context windows, reinforcement learning, test-time compute. In that realm of seemingly unconnected things that I just blabbered out, where would this enhancement to GPT-5 fit in? Is it a broader context window? Or none of the above? Is it, "That's mad rambling, Kash, and it does not make any sense."

Jared Spataro: It's good rambling. I'll have to put them together, though, for a second.

It was RAG...

Kash Rangan: RAG, broader context window, reinforcement learning. So, those are all things that are supposed to enhance the core model...

Jared Spataro: Correct.

Kash Rangan: ...in some way: add context to it, provide information to it, hold more in memory. Whereas what you're talking about is getting deeper into domains. Does that happen because of any of these things? Or does the model itself come with the deeper domain, and you just have to call upon that? "Okay, I want to pull upon the financial analyst domain." And boom, it goes in.

Jared Spataro: Okay. I think I can construct it in a way if you're following the conversation out there.

Think of what we're describing as a way to create a system of agents that has an orchestrating agent in Copilot essentially directing traffic or orchestrating a team of specialized agents who do work.

Kash Rangan: Got it. Okay.

Jared Spataro: Now take, for instance, the things that we just threw out there. All those things that Kash just talked about are ingredients to make typically those specialized agents more effective. So, RAG is a way to, for instance, add very specialized context to an LLM so that it can reason over data. The context window creates a bigger kind of space of context for people to be able to do that.
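As a rough illustration of the RAG ingredient mentioned here: retrieve the most relevant snippets for a query and prepend them to the prompt, so the model reasons over specialized context. The word-overlap retrieval below is a toy stand-in for real embedding search.

```python
# Minimal RAG sketch: rank a corpus by word overlap with the query, keep the
# top k snippets, and build a prompt that includes them as context.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; keep the top k."""
    q = set(query.lower().split())
    ranked = sorted(corpus, key=lambda d: -len(q & set(d.lower().split())))
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Q3 revenue grew 12 percent year over year",
    "The cafeteria menu changes on Mondays",
    "Revenue guidance for Q4 was raised",
]
print(build_prompt("revenue growth and guidance", corpus))
```

The irrelevant cafeteria snippet is never shown to the model – which is the whole point: RAG narrows a large estate of data down to the specialized context a given question needs.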

So, the thing we're talking about here is a system of orchestration. A lot of what we've heard from the industry so far has been almost like incremental improvements on an agent-per-agent basis. And this is about how do you get non-incremental improvements because you have a set of agents that are working together in a structured way.

Kash Rangan: Got it. Not step-function, but orders of magnitude better.

Jared Spataro: That's exactly right.

Kash Rangan: That was echoed by our previous panelist from Sequoia Capital. He said that people misinterpreted GPT-5 as an incremental advance over GPT-4.5, and he made the case that it's a lot more than that, which seems to be very consistent with how you characterized it.

I watched Build, and I was fascinated. I was not prepared to be riveted to every second of what happened at Build, and I was riveted. In particular, the support for MCP protocol, the agent-to-agent protocol, and what seemed like you were building a middleware layer. And just my mind flashed back to BEA Systems. That may not mean anything to you, but maybe some of us here in the audience might actually think like BEA Systems. They produced middleware for SaaS web-based applications. Of course, the company went nowhere, but they were at the beginning. You could see the future if you envisioned.

It felt like you were building middleware for agentic applications. So, talk to us, if you don't mind, the implications of these frameworks – A2A, MCP – as it pertains to the AI apps ecosystem development. How important is this?

Jared Spataro: It's a great question. To answer the question, I'll back up a little bit and just say there is a huge delta, a gap, between the capabilities, the raw capabilities, we see of the technology – particularly, of the models right now – and the value that commercial customers are realizing. I think all of us understand that. Otherwise, there would be different things happening in the market.

The thing that we see that is the gap to be filled, essentially, is how do you practically take these very powerful workhorses, these engines, and put them to work in the very complex plumbing that is today's enterprise. And as you're talking about MCP and A2A and these protocols that allow agents to talk to one another, that's exactly what we're doing.

We see our role at Microsoft as taking the best innovations from all over the industry, from all sorts of different providers of those innovations, and providing a platform for our customers to build solutions that actually just kick out business results. And it could be whatever the industry is, if it's consulting or if it's pharmaceuticals or whatever it happens to be, it really produces results.

But to do that, you actually have to start to fill the gaps in there. And so, for instance, having agent-to-agent communication, even for what we were just talking about with the orchestration, it's key. It's foundational. It's really important. It might not be the most exciting thing, but we're trying to really build that up, build that stack at every layer so that there's connections there, and then it's easier and easier to program.
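Agent-to-agent communication in the spirit of protocols like MCP and A2A can be sketched as agents exchanging structured envelopes rather than raw strings. The envelope fields and intents below are invented for illustration and are not the actual MCP or A2A schema.

```python
# Toy agent-to-agent messaging: structured JSON envelopes carry a sender,
# recipient, intent, and payload, so any agent can parse and respond to any
# other. Field names here are illustrative, not a real protocol schema.

import json

def make_envelope(sender: str, recipient: str, intent: str, payload: dict) -> str:
    return json.dumps({"from": sender, "to": recipient,
                       "intent": intent, "payload": payload})

def handle(envelope: str) -> str:
    """A receiving agent parses the envelope and replies in the same format."""
    msg = json.loads(envelope)
    if msg["intent"] == "summarize":
        summary = msg["payload"]["text"].split(".")[0] + "."
        return make_envelope(msg["to"], msg["from"], "result", {"summary": summary})
    return make_envelope(msg["to"], msg["from"], "error", {"reason": "unknown intent"})

req = make_envelope("copilot", "writer-agent", "summarize",
                    {"text": "Agents talk via envelopes. Details omitted."})
print(json.loads(handle(req))["payload"]["summary"])  # Agents talk via envelopes.
```

The value of standardizing the envelope is exactly the "plumbing" point above: once agents agree on a wire format, an orchestrator can wire together agents from different vendors without custom glue for each pair.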

Kash Rangan: When Satya Nadella talks about Foundry, the foundry layer has all these protocols being supported. And then, if these protocols are supported, then you would need to pull in a data layer that would have all the semantics, the context, et cetera, to make the agents actually do the work. Right?

Jared Spataro: That's right. And for us, that's where, as I rattled off, Fabric is that data layer. You don't have to move all your data – Fabric can work with data in place, as well as data that you move over. And Foundry lets you use whatever models you want, with, for instance, communication between them. You've got it exactly right, yes.

Kash Rangan: Got it. How dependent is the Microsoft Copilot stack on OpenAI's IP? And are you diversifying your tech stack beyond OpenAI?

Jared Spataro: This must be the question of the hour, I would imagine, right now. OpenAI is...

Kash Rangan: Because I read something about this MAI model that Microsoft is doing.

Jared Spataro: OpenAI is our first and best partner, no doubt about it. We have a very unique relationship, but it's a very important relationship for both companies. And the Microsoft 365 Copilot runs on the OpenAI models. GPT-5 sim ship was another vote of confidence. We think that they've got the best technology out there. So, it's really important to understand that we feel like it's a great partnership.

That said, you just talked about the tech stack, and you did a great job of hitting at the layer of Foundry. We have been now for some time, for some number of quarters – and particularly, earlier this year – very open about the fact that we want to give enterprises access to just the best technology out there.

So, that set of announcements at Build, what most people didn't understand is, using Foundry, you could choose any model, make an agent, and essentially mount it in M365 Copilot already. That means you could choose a Llama as your model. You could choose whatever you wanted to out there from any other provider as a model and put it in there today.

So, that's been an important part of the way that we are pursuing the business. But OpenAI, first and best partner. We have a very unique and very important relationship.

Kash Rangan: So, the role of MAI, it's just one other model. It's a Microsoft first-party model. Where is that going? What are the plans for MAI in the context of the Foundry layer?

Jared Spataro: We would say it this way. We've said for a long time – Satya was one of the first people to say this – that we believe models will become commoditized over time. There are many ways to hear that statement, but one way to understand it is that there is some capability threshold past which whatever you add on top yields essentially diminishing returns for many tasks you'd need to do. So, us developing our own models is very consistent with that idea: we believe these will at some point become commoditized, and you're going to say, "Hey, there's diminishing returns. We don't need something more powerful for the general purpose."

So, frontier models will always have their place pushing science and pushing the boundaries. And yes, we've said that we're going to develop our own models. We've said that we are not trying to develop frontier models, which is interesting.

Kash Rangan: Oh, okay.

Jared Spataro: And so – we may change that perspective. You shouldn't take that as anything more than how we've talked about what the MAI models are right now. We've said that they're not meant to compete, for instance, against GPT-5. But the model space continues to change, and we don't think it has reached a place where overall progress has slowed down.

Kash Rangan: We've heard so much about AI – the threat, the potential threat, of AI disrupting seat count, or putting pressure on seats due to layoffs, or maybe intentions to hire at a slower rate because of AI's ability to write code, to do design, and who knows what else it's going to do. Where do you stand on this debate?

Jared Spataro: We're trying to take both positions, I'll say. So, let me talk about both positions.

Here's what we'd say. Oftentimes when people come in and say, oh, my gosh, the world's going to change, they think it's going to change overnight, or at least that's kind of where the mind jumps to. Some amazing technology. It's going to change overnight.

There's no doubt that if our hypothesis is true that there will be human-led, agent-operated enterprises, that agents will become more and more important in doing work and that that will have an impact on the work that humans do, for sure. But how fast that will happen, what the humans will do, et cetera, we can't predict the future with perfect fidelity.

And so, we would say this. We have a business model today that has a per user licensing basis to it, with M365. We like that. We continue to grow it. We grow seats. We grow ARPU. At the same time, we increasingly have an agent-based business model, where we are charging in two different ways: per agent and we charge per consumption. Our customers can choose.

So, we kind of have our foot in the two camps right now, and we think that's the right place for us to be as a business, because we're just going to watch how the dynamic plays out over time.

Kash Rangan: What are the productivity benefits that you're seeing with your customers on account of adopting Copilot at this stage? It's been out for a couple of years now.

Jared Spataro: Correct. They fall largely into three categories. Category one, we would call personal productivity, and that's about making someone who already does a job mostly do that job more efficiently. And here, for the most part, we feel great about what we can do. We think in many tasks we can improve efficiency by 20% to 30%. And if you do kind of, like, control- and treatment-type group experiments, you can show that.

The most difficult thing about that is it's tough to drive ROI on saying Kash is 30% more productive, unless he's a salesperson and carries a quota, quite frankly. Because a lot of knowledge work doesn't translate directly into top line, bottom line. It's a team that has to work.

So, we continue to do that work. We feel good about it, but it is hard to make the ROI argument for it.

Most of what's happening in terms of actual OpEx savings is happening in process-based applications. So, by that, I simply mean people are picking a process out there that runs, that they measure, that they have KPIs with, and they're inserting AI in two ways: either incrementally improving that, which is important, in some sort of continuous improvement type of way; or in some cases, they're entirely redesigning the process and saying, "Oh, we think we can do this differently."

So, to make that very real for you, a lot of people have started – and we've heard that a lot – in customer support and customer service. An incremental improvement, for instance, is throughput by a human customer service agent. We've done some work at Microsoft where we've improved throughput of our agents by about 12%. That's real money saved, for sure.

But there have been other places in customer support where we just entirely redesigned the system, and we tried to do what we call "deflection" up front: never have it get to a human agent. Can we answer your question with AI up front? That's a different process, and about half of the savings we've realized have come from that deflection.

So, it's just real nuts and bolts. It may not be the most exciting thing to talk about, but at Microsoft we're saving a lot of money, and we see our customers doing it as well.

Kash Rangan: Well, I use Copilot to draft emails and make very few corrections to them. And I also ask it to summarize my calendar for the day and what the priorities are. And I even ask it to prioritize the emails that I need to respond to. And sometimes it'll look at an email and say, oh, the number one important email would be, like, some client asking me to comment on a wildly speculative thing that's happening, which it thinks is very important because it's worded in such a, "You have to please get back to me on this."

Beyond that, I think it's been just generally a tool that has been growing. And I'd start to use Word, and not that I use PowerPoint, but I get into it, and Copilot starts to show up, "Can I summarize this for you?" It's just started to show proactivity. ChatGPT says, "Would you like me to do more research on this?" It's starting to show up a little bit more and more and more.

Jared Spataro: Yes. And we're excited about what you'll see from us this fall, because we finally feel like with GPT-5 we're starting to get a way to be pretty sophisticated about some of the things you're talking about.

And I'll give another couple of examples quickly. Maybe one, a lot of people don't think about it, but we often work outside of our native tongue. You may have to do something in Italian. People have to do things in English that isn't native. We see lots of people using Copilot actually to help them communicate in something that is not their native tongue incredibly eloquently, as an example. It's a small thing, but it really matters. It's a great example of something that's not currently being measured but makes a big difference at the personal level.

Kash Rangan: I think some people would say that Wall Street uses too much jargon: CRPO, RPO, KPI. I mean, translate all that for me in English. We actually have a jargon dictionary in the Goldman Sachs research. It's a massive one. It's a pretty long one. Because every industry analyst has their own, like, jargon list. One day, that goes away.

As of the Q4 earnings, which were phenomenal, Copilots are now more than 100 million MAUs across commercial and consumer. Where are we in this adoption cycle: first inning, second inning, third?

Jared Spataro: We're very early. We would say we're very early. In the commercial side, which I'm responsible for, the motion is "land and expand." The last data point that we released was a couple of quarters ago, and we talked about the fact that 70% of the Fortune 500 are using Copilot in a pretty extensive way. And those are our "lands," and we're expanding from there. This past quarter when we reported, it was our best quarter ever both in terms of seat adds for Copilot and the number of customers coming back to add seats in a big significant way. So, we feel like we're early.

One of the things I said in an earlier meeting today that I'll just repeat is the biggest difference that I would flag for the group, Kash, is that over the last six months I've seen a marked change in most enterprises. Whereas six months ago they were like, "Well, maybe we'll do something where we provide kind of some sort of assistant at work for everyone," most companies at this point have decided they're going to do that and that they're just trying to decide, "Who will I go with," or "What will my estate look like." And that's a big change. Because once you start to lay down that baseline layer, I think it improves the literacy, it helps people to find other ways to use the technology.

Kash Rangan: Got it. How has Copilot usage tracked relative to your expectations? How are those usage patterns changing?

Jared Spataro: Well, let me just say my expectations – and we'll talk about this group's expectations for a moment – my expectations would be, when you look at this, it still is our fastest-growing M365 portfolio product that we've ever had. That's saying something, because E3 and E5, like, those are big, important products that continue to really grow. So, I really need to stress, if you want to get a sense for it, like, it's growing faster than either one of those two.

In terms of usage intensity, we are really encouraged. When people start to use it, they do what you just described. They start to use it, and then we see the intensity increase, in fact, user by user as a general trend. So, we like what we're seeing there. The only workload that we feel like is more intense, like, during a ramp-up period ended up being Teams, and that's because we had the pandemic and there was this single-world event. So, it's got a lot of intensity.

That said, if I were to sit down with each one of you, I think mostly what I'd hear from financial analysts is, "What's taken you guys so long? You have 430 million seats out there. Like, what's the seat count? What's the penetration? What does this look like?" And it's just early innings. It takes a while.

This is a business where we've been trying to say we feel like we've got a great position, but we've got to land, we've got to expand, we've got to help people change the way they work. There's some transformation required.

Kash Rangan: I have to say, another Copilot usage which really was absolutely productive for me was there was a point in time – March or April, I forget – there was this thing, "Oh, Microsoft is going to cut CapEx. Their whole hyperscalers are...", blah, blah, blah. I had the benefit of your most amazing IR team on the road, and I had conviction, and we wrote up a report. And I wrote up in my emails my thoughts on CapEx. And one client – and there were multiple emails sent to multiple clients on this topic, covering multiple aspects.

And one client called me, like, two minutes before a company was going to be reporting earnings, and he said, "I want your thoughts on Microsoft CapEx." And I said, "I'll call you right back." I just went to Copilot and said, "Summarize everything that I've said about Microsoft CapEx in five bullet points." Boom. I sent it to him. And he was ecstatic. He was like, "I really appreciate you getting back to me so quick." Thank you, Copilot. I'll not mention the client's name. I don't think he's here. He may be here.

Jared Spataro: I will say that's one of the big differentiators for us: access to your work data. I don't think it can be overstated. It's really important to understand.

Kash Rangan: I checked the email though before I sent it.

Jared Spataro: That's the right thing to do. AI can make mistakes.

But that work data is very valuable.

Kash Rangan: What is – who is Copilot's competition? When you try to sell this, who do you run into? Is it, like, ChatGPT modified by the customer to pose like Copilot?

Jared Spataro: There are two classes of competitors that we see. We see other chatbots, typically from the model makers out there. And then, interestingly enough, we see competition from roll-your-own. IT is really excited about creating the "Acme Corporation" chatbot based on GPT-5. Like, so excited about doing this. So, that's an interesting dynamic that I'm not sure I would have predicted. Now I feel like I understand what's going on there. IT folks, they're builders. They like to solve problems and build. So, we see both of those things.

For the most part, we really believe that those folks who are rolling their own are not going to be able to keep up. They won't keep up with the iterations of the model. They won't keep up with the innovations in user experience, et cetera. But right now, it is a portion of what we see as also challenging those of us who are making our own kind of assistants at work.

Kash Rangan: I mean, there's this tendency at the beginning point of any new tech cycle that when you have the building blocks, you're excited to build your own thing...

Jared Spataro: That's right.

Kash Rangan: ...and you feel like you're a genius developer and everybody's a genius developer or creator, et cetera. When things kind of settle down, that's not the core competence, and you want to rely on a provider like Microsoft that has watched hundreds of customers and has just simulated the problems and makes the product better. And what if the person that developed this just leaves? You're stuck. Who do you call? It's so simple.

The decision on the part of Microsoft to price M365 Copilot at $30 a month, how do you expect this to change over time?

Jared Spataro: We feel good about the price point. I'll say, without going into great detail, we've been able to hold that price point quite nicely. We have lots of, you can imagine, discounting pressures. We do big deals and things like that. But we've held the price point. So, that's good.

The product is growing into the price point, frankly. That's what happens. You start with technology, it gets better over time. We really feel like, for instance, our Researcher and Analyst agents that are a part of the price point are an example of us adding additional value and capabilities into it.

Our initial pricing of that was based on some research that we did really early out of the gate, looking at what the value would be and trying to not just have it all get squeezed out by customer expectations that it would be included in the suite. It was our feeling that with 430-plus million users as a base, we were looking for a way to monetize the great gains that we'd give them. So, that's kind of how we started.

Kash Rangan: Got it. Okay. E5 upgrades have been a big driver of ARPU increases. Could you quantify for us or maybe give us a range what percentage of the base is on E5 today and how large is the opportunity that's left ahead of you?

Jared Spataro: We occasionally publish numbers here. Our last published number actually is not very new. So, it's about three years old. And three years ago, we talked about E5 being 12% of the base. So, that's the official number that we gave out. We don't have an updated number that we've shared. But suffice it to say, there's still a lot of upside with E5.

So, again, you can compare the 12% against our latest number for the base, which is a little over 430 million.

Kash Rangan: That's the denominator. And Copilot has to be a catalyst for the E5. 

Jared Spataro: Absolutely. Well, I should also say, what we're really excited about is that there's real synergy. First, E5 and Copilot are the two ARPU drivers for the business. But there's great synergy. As you start to think seriously about moving to Copilot, one of the big questions you have is how you're managing your data estate. And that plugs into Purview, which is a component of E5. And so, security has always been the driver of E5, but we're increasingly seeing the data labeling, sensitivity, those types of things, compliance, really driving with Purview. So, we've got a nice connection there between those two drivers.

Kash Rangan: Got it. We've got about two minutes left. Are we okay to get a question from the clients? Or should I just go on? Okay. Anybody? We generally don't get any questions because it's such a large room, and by the time the mics get to the client... Is that the case? Or who's going to be the bold one? Yes. All right.

Unidentified Audience Member: Okay. So, I'm going to ask the question that people would like to hear. Your partner, OpenAI, signed a gigantic multiyear contract with another cloud provider. Why not Azure?

Jared Spataro: I'm really not in the best position to answer that one, to be honest with you. And I'm not joking. I don't run Azure. And I don't have much to say, to be honest with you.

It's a good question.

Kash Rangan: You're not the right person for that. It's a good question, yes. You should contact Microsoft IR for that.

Anybody else? This is a question that has to be addressed to Jared, given the scope of his work.

Unidentified Audience Member: (inaudible) in the broader ecosystem of the enterprise? Thank you.

Jared Spataro: Beautiful. Well, we wouldn't say that we can perfectly predict the future, but what we see happening right now is that the patterns associated with knowledge work are all being redone. Like, they're being rewired at this moment.

We definitely have an ambition, just like we did with Office, of being the front end of those patterns. We think Office was unique in that it brought together some things that people didn't think went together. At the time when we brought together the Office suite, people didn't think everybody needed Excel or everybody needed PowerPoint or everybody needed Word. Those things were kind of apportioned out depending on your role.

So, as we look at building that front end, we're trying to find the same analogous set of patterns that we can bring to the knowledge worker of the AI era. I'll just start there. That's the ambition.

Now, what happens to this question of SaaS providers? Are they all dead? Where do they go? We would simply say, gosh, those businesses are valuable. They have valuable data in them. If I choose any one of the SaaS providers, they typically have really valuable logic, business logic, inside them as well.

That said, people are going to increasingly, we believe, not want to go to those systems directly to do work, but they're going to want agents to be able to do that work for them.

So, I don't know exactly how it plays out. We don't think that those companies go away automatically, but we think the workflows, the data flows, even the architecture, is going to change in very significant ways. And I would say it this way, we're trying to really be in position to ride that wave, and I think every company needs to think about what position they should be in.

Kash Rangan: With that, let's give a round of applause for Jared. Thank you so much. Great discussion.

Jared Spataro: Great to be here.

Microsoft Corp (MSFT)
