Recent headlines—and proclamations from leaders like Anthropic CEO Dario Amodei—often frame AI as an existential threat that’s sure to trigger unchecked job loss, social disruption, and concentration of money and power. Some of the coverage is clearly designed to capture attention. But much of it reflects legitimate uncertainty about how the AI era is unfolding. The technology is improving rapidly, and the implications of that fact alone warrant serious attention. For some organizations and roles, AI is already reshaping daily work. In others, it is restricted by policy, limited to narrow use cases, or simply not a priority. As a result, we aren’t all having the same experience of AI.

Acknowledging risk is crucial, but so is maintaining perspective. While AI has extraordinary potential to improve our lives, it raises issues that require deliberate choices—exercising our human agency to shape the future. We’ve seen a version of this scenario before. Social media was adopted rapidly, its incentives were optimized for growth and engagement, and its consequences, from mental health impacts to the erosion of trust and the spread of misinformation, were recognized only after the platforms had reshaped how people work and communicate. Business models and operating norms solidified before the broader implications were fully understood.

At the same time, we should be cautious about claims of certainty. We’re still in the infancy of this technology, and while AI may be driving a consequential shift, no one can fully map its downstream effects from this vantage point. Confident predictions, from either end of the spectrum, go beyond what the evidence supports. But we can be certain of one thing: now is the time for human agency, for all of us to face the choices that will define our future with AI.

Between fight and flight

It’s easy to let fear narrow the conversation. When uncertainty rises, people default to fight or flight. But we all have the ability—the agency—to decide what happens between stimulus and action. That includes how we think, and in organizations, it includes how work is structured, evaluated, and rewarded. That agency is not limited to founders, CEOs, or AI researchers. Every employee makes decisions about how AI is used in their daily work. Leaders may redesign processes at scale, but individuals can rewire their own jobs in real time. Together, those decisions shape how work evolves.

If one person can now do work that previously required coordination across several roles, that raises legitimate economic questions. It also changes what a team can produce with the same number of people. What happens next is not predetermined. It depends on how leaders integrate AI and how individuals choose to use it.

None of this is happening “to” us. It is unfolding through the decisions people make every day about how AI is adopted and deployed.

Concentrated power is not a decision-breaker

A small number of companies train and operate the largest foundation models, giving them concentrated power at the top of the AI ecosystem. That reality increases the stakes, but it doesn’t dictate how AI reshapes work inside individual firms. Leaders across organizations decide how these tools are deployed, what gets automated, what remains driven by human judgment, and what tradeoffs are accepted.

Of course, agency isn’t evenly distributed. Leadership carries disproportionate responsibility because it controls the policies and systems. Leaders also set the incentives and define what “good” looks like, no matter which model is used.

Where AI’s impact is actually decided

Model capabilities are determined by just a few companies, but how the technology reshapes work is determined by the decisions humans make as AI diffuses through organizations.

[Infographic: “Where AI’s impact is actually decided” — three connected stages: IT professionals redesign infrastructure, leaders redesign work processes, and individuals redesign their jobs, illustrating how human decisions shape how AI affects work.]

Expansion is an option

But agency does not operate only at the policy level. It shows up in how people interpret their roles and how leaders respond to that evolution.

Consider Alex Farach, a data scientist on my team. On a basic level, we hired him to crunch numbers and generate reports. As AI systems began handling more of that mechanical synthesis, his role could have narrowed to supervising output. Instead, he expanded it.

Alex recognized that he wasn’t hired solely to perform the tasks of number-crunching and report generation. He was hired to provide an outcome: insight and perspective, an angle on what the data means for the organization. He began directing AI systems to surface patterns and test assumptions faster. In effect, he has built himself a team of agents that has expanded his role and deepened his impact. Alex has no direct reports, but he is now a highly effective manager.

That shift was not dictated by the model. It reflected how Alex chose to use the technology—and how we chose to value that contribution.  

Agency in organizations rarely appears as a formal declaration. It lives in the small, repeated decisions about how work gets done. AI doesn’t automatically elevate judgment, or make a person more curious. Instead, it can amplify whatever incentives are already embedded in the system. If speed is prioritized, speed scales. If volume is measured, volume scales. Those choices are managerial—leaders have the agency to decide what matters most.

Over time, those standards influence everything from hiring and advancement to how judgment shows up in the work and what the organization comes to value.

What it all means for leaders

Leadership in this era requires looking beyond immediate productivity gains to how automation reshapes the organization itself. Decisions about AI don’t only affect output. They influence how careers progress and who ultimately owns decisions.

Organizations already manage delegation and layered systems of accountability. AI does not eliminate that responsibility; it intensifies it. As it reshapes work, leaders are responsible for ensuring the redesign is intentional.

Agency at this level means designing the conditions in which AI operates. Leaders determine whether roles evolve toward judgment or narrow into oversight. They decide how performance is evaluated, how accountability is maintained when AI is involved, and how early-career talent develops the skills that organizations depend on.  

If those decisions are made deliberately, AI strengthens how the organization builds capability. If they are left implicit, efficiency could increase, but judgment won’t. Fewer people will practice decision-making. Accountability will become harder to trace. Over time, the organization might become faster at execution but weaker at direction.  

AI is powerful. But it doesn’t own your incentives, your standards, or your org chart. You do. So how will you use that agency to shape the future?  

For more insights on AI and the future of work, subscribe to this newsletter.