2025 is the year AI at work got real. What once felt mostly experimental is becoming embedded in daily routines. What once lived at the edges of organizations has migrated to the center. Business leaders have no doubt about whether AI matters. Now the challenge is how—and how fast—you adapt while the ground is still shifting.

I recently had a conversation with my friend, Harvard Business School professor Karim Lakhani, that captures the issues organizations—and individual leaders—are facing as we end one pivotal year and look forward to another. We explored four provocative questions that each force a choice for which there are solid arguments on both sides. 

Frontier Firm or fast follower?

The first question was about timing: Is it better to be the Frontier Firm that sets the curve or the fast follower that scales proven patterns faster than anyone else?  

On one side is the case for moving early: starting now to reinvent your organization around AI, building fluency, and learning by doing. Businesses that blaze this Frontier Firm trail gain experience that fast followers don’t. They develop intuition, uncover edge cases, and foster an agile mindset before the patterns that drive business value have fully settled.

On the other side is the fast-follower argument: let the technology mature and blueprints emerge, avoiding potentially expensive churn as AI models and platforms continue their rapid evolution.

Both arguments are reasonable—and that’s exactly what makes the decision hard.

Where the conversation landed was less about speed and more about learning. The real risk isn’t moving too early or too late. It’s moving without learning. Frontier Firms aren’t defined by how quickly they scale deployments, but by how deliberately they run experiments, absorb lessons, and prepare their operating models for what’s coming next.

By the time the competitive advantage is obvious, it’s already gone.

Where’s the ROI of AI?

The next question was something a CEO recently asked me: AI is everywhere inside the company—chatbots, pilots, proofs of concept—but nowhere on the P&L. What gives? 

The frustration is understandable. Investment is real. Results feel abstract.

But that frustration often reflects a category error. AI isn’t a point solution. It’s a general-purpose technology with the potential to reshape how work gets done. The mistake is treating it as incremental change when it requires a mindset of adaptive change—new workflows, new roles, new forms of oversight. Expecting immediate, linear ROI misses the nature of the shift.

Early returns tend to show up unevenly: faster cycles, higher quality, better decisions, more leverage per employee. But the overall pattern follows a J-curve. When organizations absorb a technology this foundational, productivity almost always dips before it rises. That temporary drop isn’t a failure—it’s an investment phase in which organizations pay a short-term price in lower productivity for long-term gains in compounding, AI-driven business value. Frontier Firms plan for the J-curve, knowing that the pain of the dip will pay off in a real competitive edge as they climb out of it.

The organizations that stall are often waiting for proof of ROI before they start changing how work is organized. That delay is costly. By the time financial impact is undeniable, the window to lead has already closed—and their J-curve dip may be even deeper.

Technology shift—or something bigger?

The conversation shifted from adoption to consequence: Are we ready for a world where the real competitive edge is how fast you can teach an agent to beat the best human at the job?

AI integration is often framed as a technical problem: which models to use, how to connect systems, how to mitigate risk. The bigger challenge centers on management. Karim noted that the model will unlock capabilities, but the humans inside organizations will choose how those capabilities are deployed. It’s difficult to get this type of change—this level of change—to happen bottom-up.

Most failures in AI-integration efforts aren’t caused by insufficient model capability. They’re caused by management systems designed for a world where humans did all the work. Decision rights, review cycles, accountability, and trust all need to be rethought when outputs are co-produced by people and machines.

This is the beginning of what I think of as “the model eats the world”: when AI makes expertise abundant, workflows, roles, and organizations must re-form around it.

Frontier Firms are discovering that human–AI collaboration doesn’t fit neatly into existing structures: Delegation changes. Oversight changes. The source of expertise changes. This isn’t a plug-and-play upgrade. It’s a reconfiguration of how organizations learn and operate, and companies need to be ready.

What should I tell my kids?

This question comes up every time I have a serious conversation about AI, and it’s usually where things get most personal: In a world where models can draft code, analyze data, and reason across disciplines, what do we tell our kids? Should they still go to college? And if they do, what should they study?

Karim’s answer starts with history: Education has faced moments like this before. Calculators were once banned from math classrooms; eventually they raised the floor and allowed students to learn more advanced concepts earlier. Circuit design shifted from hand-drawn work to computer-aided systems. Over time, schools learn how to turn new tools into intellectual leverage. His view is that we’re at that same stage again—except now it’s happening across every field at once, from law and business to engineering and the arts. Education still matters. The challenge is whether it can adapt at the accelerating pace of technological change.

That’s where my concern comes in. Universities, as they’re structured today, tend to move slowly, even as economic expectations reset quickly. In past technological shifts, education did adapt, but sometimes only after new institutions emerged to lead. We’ll need to see how things play out this time. One thing is a good bet: the institutions slowest to adapt will stop being powerful drivers of economic mobility.

Karim is clear about what won’t work: abandoning education or reducing learning to prompt-writing. If all someone learns is how to ask a machine for answers, they haven’t developed the judgment that matters. So when leaders ask, “What do I tell my kids?” the most honest answer may be this: learn deeply, build judgment, and always be ready to reshape and evolve with the systems around you.

What it all means

The thread running through the conversation was simple: In addition to accelerating work, AI is compressing decision windows for leaders across industries.

Rather than betting on perfect foresight, Frontier Firms are building the capacity to learn, adapt, and reconfigure as conditions change. They take sides where it matters, stay flexible where it doesn’t, and understand that in moments like this, waiting is still a decision.

AI will change your organization. In the year ahead, will you help shape that change—or inherit someone else’s vision?

For more insights on AI and the future of work, subscribe to this newsletter.