
At a glance
- AI is driving rapid changes in the workplace, sharper than those covered in previous editions of the New Future of Work report.
- AI is changing how people work together, not just enabling them to work faster or from remote locations. Organizations that treat AI as a collaborative partner are seeing the biggest benefits.
- The benefits of AI are not yet evenly distributed, underscoring the need for industry leaders to build AI that expands opportunity. The future is not predetermined. It will be shaped by the choices we make today.
- Human expertise matters more, not less, in an AI-powered world. People are shifting from merely doing work to guiding, critiquing, and improving the work of AI.
For the past five years, the New Future of Work report has captured how work is changing. This year, the shift feels especially sharp. Previous editions have focused on technology’s role in increasing productivity by automating tasks, accelerating communication, and expanding access to information, as well as the rise of remote work. Today, generative AI has put this transformation on fast forward. Instead of simply speeding up existing workflows, AI increasingly participates in them, shaping how people create, decide, collaborate, and learn.
For decades, researchers across Microsoft have studied these changes not as abstract trends but as lived experiences. Across organizations and occupations, people are experimenting with AI in uneven, creative, and sometimes surprising ways. Many are saving time, expanding their capabilities, and taking on more complex work, but the real opportunity ahead is to use AI to help us work better, together.
The New Future of Work report brings together research from inside and outside of Microsoft to understand what is happening as AI enters workplaces. Through the efforts of dozens of authors and editors, it draws on evidence from large‑scale data analyses, field and lab studies, and theory to look at who is using AI, why they are using it, and how it is reshaping productivity, collaboration, learning, and judgment. It highlights professions where changes are unfolding especially quickly, as well as the broader societal impact of these technologies.
Taken together, these findings point to a central insight: The future of work is not something that will simply happen to us. We are actively constructing it, through the choices individuals make, the norms teams build, the systems organizations adopt, and the discoveries researchers uncover. At the same time, AI’s role is still evolving, and it is driving a range of impact—some of which may be viewed as positive or negative. What follows is a research-backed snapshot of this moment in time and what it can teach us about how to collectively create a new and better future of work with AI.
Adoption and usage
Generative AI is entering workplaces quickly, likely faster than most earlier technologies. But the patterns of who uses it, and how, will shape who benefits. Reports on early adoption show significant penetration: in one German survey, 38% of employed respondents reported using AI at work. But usage and confidence vary widely across sectors, and men report using AI at work more often than women. It is not yet clear whether that variability is driven by occupational distributions, relative comfort with new tools, or something else. The challenge is that uneven adoption is likely to translate into uneven productivity gains, learning opportunities, and career trajectories between those who adopt AI and those who do not.
A look at generative AI adoption globally reveals further differences. High-income countries still lead overall usage, but the fastest growth is happening in low- and middle-income regions. When local languages are poorly served, people switch to English simply to get reliable results. Without investment in infrastructure and multilingual model development, AI risks reinforcing existing divides rather than narrowing them.
Inside organizations, the decision to use or not use AI is shaped less by strategy decks and more by culture. People try new tools when they trust their employer and feel safe experimenting. They stick with tools that make their work better, but might reject tools that seem designed to replace them—which is a common concern among workers. And many of the most useful applications don’t come from top-down initiatives at all but from employees trying things, discovering what actually helps, and sharing those insights with colleagues. Research has shown that involving workers’ perspectives in the design of workplace technologies promotes sustainable improvements in productivity and well-being.
We are also starting to see what people actually do with AI. At Anthropic, an analysis of millions of user conversations found that 37% of Claude usage was tied to software and mathematical occupations. A study of Microsoft Copilot conversations found high applicability to the activities of information workers across sales, media, tech, and administrative roles. But the broader point is simpler: most occupations include at least some tasks where AI is useful.
These shifts come with social side effects. Several studies show that employees who use AI can be perceived as less capable, even when their output is identical to that of people who didn’t use AI. Whether these perception penalties fall unevenly across groups is still an open question. However, managers who have used AI tend to evaluate AI-assisted work more fairly. This suggests that AI may require broad exposure before it can be used openly and without judgment.
Impact on work and labor markets
Understanding who uses AI and why can help assess its value, but the harder question is how it affects productivity and labor markets, which is less straightforward. Productivity can increase through time saved, higher-quality work, or simply feeling more capable. Surveyed enterprise users of AI report saving 40–60 minutes a day, while model-based evaluations show frontier systems approaching expert-level quality on a growing range of tasks. But AI may also reduce productivity. In one U.S. survey, 40% of employees said they had received “workslop” (AI-generated content that looks polished but isn’t accurate or useful) in the past month. When that happens, any time savings can quickly disappear, and quality can actually suffer.
We still don’t have the full picture of what this means for jobs and labor markets more broadly. Large-scale empirical work finds no clear aggregate effects on unemployment, hours worked, or job openings. However, AI does seem to be reducing opportunities for younger, inexperienced workers. Entry-level roles rely less on experience and knowledge and are easier to automate. Empirical evidence suggests employment for workers aged 22–25 in highly AI-exposed jobs declined by 16% relative to similar but less-exposed roles, and hiring into junior positions appears to slow after firms adopt AI. This pattern raises a longer-term concern: automating jobs that enable workers to learn skills may undermine how expertise is built over time. This point is reinforced by research using theoretical models as well as empirical evidence.
Meanwhile, AI is also changing which skills matter. Roles that mention AI skills in their job postings are nearly twice as likely to also emphasize analytical thinking, resilience, and digital literacy. Demand for work that can be outsourced to AI models more easily, including data-related tasks or routine translation, continues to fall. Even where overall employment remains stable, AI is already reshaping how jobs are structured and this trend will continue.
As more empirical evidence comes in, theoretical work helps frame what might lie ahead. One recurring theme is that human judgment – spotting opportunities, working under ambiguity or choosing from outputs – becomes more valuable as AI improves. And organizations that use AI to augment what people can do often end up creating new kinds of work, rather than simply eliminating existing ones. If AI is meant to deliver on its potential to support broad prosperity gains, the path forward is less about replacing tasks and more about expanding what people are able to do.
Human-AI collaboration
As AI becomes more capable, the nature of human-AI interaction is changing. AI systems increasingly play a role in decision-making, creativity, and communication, and are often positioned as a “collaborator.” This raises questions about how to support collaboration between people and AI, what we can learn from how people interact with each other, and where the capabilities of AI systems create different opportunities and requirements.
At the heart of effective collaboration is common ground: the shared understanding that allows people to coordinate and communicate. In human conversation, we constantly check for alignment – through clarifications, acknowledgements, and follow-up questions. Yet current AI systems often skip these steps, generating responses that assume understanding rather than building it. Research shows that this lack of conversational grounding can lead to breakdowns in human-AI interaction. Encouragingly, systems like CollabLLM (opens in new tab), which prompt AI to ask clarifying questions and respond over multiple turns, have shown improved task performance and more interactive exchanges.
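The grounding behavior described above can be pictured as a simple pre-generation check: before answering, the assistant verifies that key details are established and asks a clarifying question when they are not. The sketch below is illustrative only, with hypothetical task slots; it is not the actual CollabLLM implementation.

```python
# Illustrative sketch of conversational grounding (hypothetical slots):
# before generating, check whether key context is established and, if
# not, ask a clarifying question instead of assuming an interpretation.

REQUIRED_CONTEXT = {
    "write": ["audience", "length"],
    "summarize": ["source", "length"],
}

def next_turn(task: str, established: dict) -> str:
    """Return a clarifying question, or a go-ahead signal to answer."""
    needed = REQUIRED_CONTEXT.get(task, [])
    missing = [slot for slot in needed if slot not in established]
    if missing:
        # Build common ground first: ask about one missing detail.
        return f"Before I draft this, could you tell me the {missing[0]}?"
    return "ANSWER"  # enough shared understanding to generate a response

print(next_turn("write", {"audience": "executives"}))
print(next_turn("write", {"audience": "executives", "length": "1 page"}))
```

A real system would learn when to ask rather than follow fixed rules, but the principle is the same: spending a turn on grounding can prevent a confidently wrong response later.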
Trust is another essential aspect of collaboration. Although AI can process vast amounts of information, its usefulness in decision-making depends on how well it grasps human goals, and how well people understand its capabilities. Using AI that doesn’t understand a person’s objectives can lead to worse outcomes than using no AI at all. Yet people often overestimate AI’s abilities, which distorts their judgment about when and how to use it. Systems that support selective delegation can improve these decisions, especially when the AI is designed to account for this selective approach in its responses.
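One way to picture selective delegation is a per-task routing rule: hand a task to the AI only when its estimated reliability on that task exceeds what the person would achieve alone. The threshold and confidence values below are hypothetical illustrations, not measurements from the report.

```python
# Minimal sketch of selective delegation (hypothetical values): route a
# task to the AI only when its estimated accuracy beats the human
# baseline; otherwise keep the human in the loop.
def route(ai_confidence: float, human_accuracy: float = 0.85) -> str:
    """Decide who handles the task under a simple accuracy comparison."""
    return "AI" if ai_confidence > human_accuracy else "human"

tasks = {"extract dates from emails": 0.97, "judge strategy memo": 0.55}
assignments = {task: route(conf) for task, conf in tasks.items()}
print(assignments)
```

In practice such thresholds would need careful calibration; the point is only that delegation can be decided task by task rather than wholesale.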
AI’s advancing capabilities are fueling a shift in people’s roles. This includes software production, where developers who once wrote code from beginning to end are increasingly reviewing and refining AI-generated suggestions. Writers and designers are acting more as curators and editors, guiding AI outputs rather than producing everything from scratch. This shift demands new skills – like crafting effective prompts, vetting AI responses, and maintaining quality oversight – and new tools to support them.
Current chat-based interfaces are often too limited for these evolving workflows. Alongside knowledge about the capabilities, limitations, and workings of an AI system, as well as domain expertise and situational awareness to enable intervention, oversight requires observability of system activity, decisions, and outputs. New interface designs are emerging to address this, including visualizations of AI reasoning, shared editing spaces, and mixed-initiative systems that allow humans and AI to take turns leading a task. These innovations aim to preserve human agency while making AI more transparent and responsive.
Ultimately, the future of work depends on building complementary interactions between people and AI, drawing on what we know about how people collaborate, acknowledging the unique challenges of human-AI interaction, and leveraging AI capabilities to do so.
AI for teamwork
AI systems have been designed from the ground up to work best for individuals, not for teams of people. It is no surprise, then, that when people use AI as a team, they often underperform, even relative to an individual using AI.
The good news is that a growing amount of research is dedicated to AI that supports team and group interaction. Researchers are using two broad approaches: (1) process-focused strategies, i.e. building AI to facilitate specific team processes like information sharing and (2) outcome-focused strategies, i.e. training end-to-end AI systems that attempt to learn from short- and long-range team outcomes.
Some examples of the former include systems that provide a devil’s advocate perspective in a group discussion or help amplify minority perspectives. Examples of the latter include systems that try to help teams make good decisions or drive meetings towards achieving goals.
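A process-focused facilitator of the kind described above can be sketched as a rule that watches for premature consensus and either injects a devil's-advocate prompt or surfaces minority views. The names and logic below are hypothetical illustrations, not a description of any specific system.

```python
# Hypothetical sketch of process-focused team facilitation: counter
# premature consensus and amplify minority perspectives.
from collections import Counter

def facilitate(votes):
    """votes: dict mapping team member -> stated position."""
    tally = Counter(votes.values())
    top, top_count = tally.most_common(1)[0]
    if top_count == len(votes):
        # Unanimity: play devil's advocate against the consensus view.
        return f"Devil's advocate: what is the strongest case against '{top}'?"
    minority = [member for member, view in votes.items() if view != top]
    # Otherwise, make sure dissenting voices are heard before deciding.
    return "Before deciding, let's hear more from " + ", ".join(minority) + "."

print(facilitate({"ana": "ship", "ben": "ship", "chen": "ship"}))
print(facilitate({"ana": "ship", "ben": "ship", "chen": "wait"}))
```

An outcome-focused system would instead learn such interventions from team results, but both approaches target the same failure modes: groupthink and suppressed minority views.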
Theory from fields like collective intelligence would suggest that both approaches have great potential: AI can unlock new models of collaboration that are wildly different and more productive than we’ve had before. One notable example is AI enabling much more ephemeral teams, where a precise group of people in a given organization (or even beyond) can come together to solve a specific problem, then disband when the problem is solved.
More philosophically, it can be useful to understand even individual interaction with a large language model (LLM) as a type of teamwork. In fact, “collective intelligence” is perhaps a more accurate term for technologies like LLMs than “artificial intelligence”. LLMs take knowledge from millions of people who have written web content or posted in places like Reddit and Wikipedia, interacted with chatbots, and generated other types of data, and make that available to individuals on demand. Every time you interact with an LLM, you’re interacting with the work of millions of people, without the impossible overhead of that scale of collaboration.
Thinking, learning and psychological influences
Generative AI is changing cognition and learning while also introducing new psychological dynamics. This is making design choices about agency, effort, and well-being increasingly consequential.
A central pattern emerging in generative AI is a shift from ‘thinking by doing’ (e.g. writing a document) toward ‘choosing from outputs’ (e.g. prompting AI to write a document). This may weaken the judgment and practices that sustain human expertise unless it is paired with user experiences that keep people cognitively engaged, and with upskilling and reskilling to accommodate changes in available work. AI can also be designed to support thinking rather than substitute for it, for example by provoking reflection, scaffolding reasoning, and offering workflows that help people ‘decide how to decide’ through alternatives and critiques. For ideation and creativity, benefits can be fragile. Using LLMs at the wrong time can reduce originality and self-efficacy, and repeated cognitive offloading can carry over even when AI is removed. To avoid trading short-term accuracy for long-term capability, AI experiences should help users practice the judgment needed to challenge and refine AI outputs.
AI use in education is already widespread, but much of this activity runs through general-purpose tools rather than education-specific products, while training and policy are still catching up. In learning contexts, the speed and ease with which AI is being designed to meet workplace tasks may conflict with the needs of education. Learning often benefits from ‘desirable difficulties,’ and heavy reliance on summaries and syntheses may make learning shallower without thoughtful support. Such support may involve having learners attempt problems before turning to AI for help, along with question-driven tutoring that requires students to justify and check outputs. Coding education remains essential, but needs to shift its focus from memorizing syntax to abstraction and accountability, such as problem framing and critical review. Workplace training can counter overreliance and ‘workslop’ productivity problems by helping workers reframe AI as a thought partner, prompting reflective interaction and strengthening calibration and verification habits so workers retain responsibility for final decisions.
Finally, conversational AI is increasingly being used for social and emotional support, making empathy and psychological well-being core design and governance concerns, especially because effects can vary sharply by user context and interaction patterns. That variability also raises the stakes for anthropomorphic behaviors. Clearer definitions and measurement are needed to understand when systems appear human-like and what consequences follow. Broader mapping of the design space can help designers anticipate implications and choose alternatives.
Specific roles & industries
While much of the NFW report highlights broad work patterns such as collaboration, communication, and decision-making, we also examined specific professions that are seeing especially rapid disruption. Among those that stand out in this year’s edition are software engineering and science. To counter some of the misunderstandings around these fields, we address several myths, including:
- Counting AI-generated lines of code is a meaningful productivity metric
- Current tools will instantly turn every developer into a “10× engineer”
- Adoption primarily depends on model capability

Beyond myth-busting, we see real shifts in the software lifecycle. Historically, PMs (product/program/project managers) focused on customer needs, telemetry, design, and feedback, while developers wrote the code. With generative AI, these boundaries are blurring. PMs report doing more technical work and writing more code, while developers increasingly engage in higher-level planning and conceptual thinking as they interact with AI agents.
This shift is illustrated by the rise of vibe coding—developing software through iterative prompting rather than directly writing and editing code. Studies show that experienced computer science students are better at vibe coding than novices, steering models with fewer, more targeted prompts. As humans build trust with AI assistants, work becomes more co-creative, enabling engineers to stay “in flow” through continuous iteration.
Together, these changes point to a deeper transformation in how software is built—both the mechanics of code production and the ways teams coordinate, plan, and collaborate.
Science is also seeing significant AI-driven acceleration: AI is helping researchers identify promising ideas, retrace known results, and surface cross-field connections. Foundation models also make it easier to work with diverse data types and enable experiments at a previously impossible scale.
Benefits of increased research productivity and moderate quality gains appear to be most pronounced for early career researchers and non-English speaking scientists, for whom AI can act as both a collaborator and a form of access to advanced tooling.
However, AI introduces new risks. Issues of data provenance, accountability, and replication become more complex when generative systems are involved. Small variations in prompts can significantly change outcomes, making results harder to verify. Models may reproduce ideas without attribution or hallucinate entirely, increasing the burden of source-checking. And because many models tend toward sycophantic responses, scientists may overestimate the novelty or correctness of AI-generated insights.
Closing
Generative AI will not arrive in some distant future; it is reshaping work right now. Here are a few things to take away:
- AI isn’t just speeding up work—it’s changing how we work together. This year’s research shows a real shift: AI is moving from automating tasks to actively shaping how people create, decide, collaborate, and learn. The organizations seeing the biggest gains are the ones treating AI as a collaborative partner—not a bolt‑on tool—and building the culture, norms, and confidence to experiment.
- The benefits of AI are real, but they’re not evenly distributed—yet. Adoption is rising fast across countries, professions, and industries, but gaps in access, confidence, and usage are widening. Early evidence shows that who uses AI (and how) will determine who benefits. Industry leaders need to ensure AI expands opportunity rather than reinforcing divides.
- Human expertise matters more—not less—in an AI‑powered world. Across software engineering, science, and knowledge work, AI is transforming roles: people are shifting from doing the work to guiding, critiquing, and improving it. The organizations that thrive will be the ones that invest in judgment, critical thinking, and responsible oversight—and design AI experiences that keep people thoughtfully engaged.
The research in this year’s New Future of Work report points to both opportunity and responsibility. The future is not predetermined. It will be shaped by the choices we make today—in how we build AI systems, how organizations adopt them, and how individuals learn to work alongside them. Microsoft remains committed to studying these changes as they unfold, grounding our understanding in evidence, and ensuring that the future we are collectively building is one where AI helps us all work better, together.