The Moment for Bold AI Leadership Has Arrived
What McKinsey’s State of AI in 2025 report reveals about velocity, creativity, and transformative intent.

I just finished reading McKinsey’s State of AI in 2025 report, and it’s one of those moments where you pause, take a breath, and realize just how quickly the world is shifting. Not in theory. In practice. Inside organizations, inside teams, inside every workflow we touch.
And as I went through the findings, I kept seeing the same thread: AI isn’t just a technology wave. It’s a leadership test.
What we choose to do in the next 12–24 months will determine which organizations accelerate, which ones adapt, and which ones quietly drift to the sidelines.
1. AI Is Everywhere, But Real Change Requires Courage
McKinsey’s report makes something unmistakably clear: AI has crossed into the mainstream. 88% of organizations now report using AI in at least one function. But only a third have gone beyond pilots into true enterprise-scale transformation.
Why?
Because the hardest part of AI isn’t adoption. It’s letting go of comfortable processes, familiar structures, and decades of assumptions. Most organizations aren’t constrained by technology. They’re constrained by the fear of stepping into the unknown.
Courage is now a leadership competency.
When we drafted our GenAI acceptable use policy, the easiest debates were about the obvious risks. The harder ones were about AI already embedded in tools we use every day — features shipping quietly inside platforms we’d already approved. You can’t govern what you haven’t noticed. That realization forced us to think differently about procurement, vendor relationships, and what “approved tooling” even means now.
How leaders can take the first step
- Pull a list of your approved tools and check which ones have added AI features in the last 12 months. Pick one and decide whether your current policies cover it.
- Ask your security and legal leads: “What AI-related question have you been waiting for someone to raise?” Then raise it.
- In your next leadership meeting, name one process your team protects out of habit rather than value. Ask what it would take to let it go.
Real transformation begins when leaders give people permission to think beyond today.
2. The Rise of AI Agents Is a Signal of What’s Coming Next
AI agents are moving from concept to reality. 62% of organizations are experimenting with them, even if only 23% have begun scaling. This is the early tremor of a much larger shift.
Agents don’t just “answer questions.” They act. They plan. They take multi-step actions. And that forces organizations to rethink accountability, workflows, trust models, and how systems interact.
Organizations aren’t hesitant because agents aren’t ready. They’re hesitant because agents will reshape the operating model itself.
We’ve started integrating AI-based code reviews into our DevSecOps pipeline, and one of the sharpest debates was simple: can AI-generated code go into production without human review? For now, our answer is no — every piece of AI output gets human eyes. That’s a deliberate constraint, not a permanent one. But it reflects where we are in building trust with the technology and with each other.
We’ve also started discussing how our code and documentation need to change so that AI can understand it — not just humans. If agents are going to operate in our systems, read our APIs, and act on our behalf, we need to design for that legibility now. We don’t have all the answers yet, but we’ve scoped it for 2026. The organizations that wait until agents are mature to think about this will be retrofitting under pressure.
How leaders can prepare their organizations
- Identify three decisions in your workflows that are currently human-only but low-stakes. Map what it would take to make them human-supervised instead.
- Pick one internal API and ask: “If an AI agent needed to use this, would it understand what to do?” Document the gaps.
- Run a tabletop exercise: “An AI agent takes an action that causes a production incident. Who’s accountable? What’s our playbook?”
Leadership readiness will determine agent readiness.
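To make "agent-legible" a little more concrete: one common pattern today is to describe an API as a machine-readable tool spec in the JSON-Schema style used by function-calling interfaces. The sketch below is purely illustrative (the `create_ticket` endpoint and its fields are hypothetical, not from the report or any real system), but it shows the kind of description an agent can actually act on, plus a guardrail that checks a proposed call before anything executes:

```python
# Hypothetical sketch: describing an internal API so an AI agent can use it.
# The spec follows the JSON-Schema style common in function-calling interfaces;
# "create_ticket" and its fields are illustrative, not a real endpoint.

create_ticket_spec = {
    "name": "create_ticket",
    "description": "Open a support ticket. Use when a user reports a defect.",
    "parameters": {
        "type": "object",
        "properties": {
            "title": {"type": "string", "description": "One-line summary"},
            "severity": {
                "type": "string",
                "enum": ["low", "medium", "high"],
                "description": "Impact level; default to 'low' when unsure",
            },
        },
        "required": ["title"],
    },
}

def validate_call(spec: dict, args: dict) -> list[str]:
    """Return a list of problems with an agent's proposed call."""
    problems = []
    props = spec["parameters"]["properties"]
    for name in spec["parameters"].get("required", []):
        if name not in args:
            problems.append(f"missing required argument: {name}")
    for name, value in args.items():
        if name not in props:
            problems.append(f"unknown argument: {name}")
        elif "enum" in props[name] and value not in props[name]["enum"]:
            problems.append(f"invalid value for {name}: {value}")
    return problems
```

Running `validate_call(create_ticket_spec, {"severity": "urgent"})` flags both the missing title and the invalid severity, which is exactly the kind of mistake an agent makes when an API is documented only for humans.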
3. The First Breakthrough Isn’t Cost. It’s Creativity.
The AI conversation has long been dominated by automation and efficiency. But McKinsey’s research points to a deeper truth: the earliest and most meaningful benefits of AI are showing up in creativity, innovation, and speed.
Teams are producing work they never had the time or bandwidth to attempt before. They’re testing more ideas, exploring more possibilities, and iterating at speeds that fundamentally change the creative process.
AI doesn’t just increase productivity. It expands possibility. And possibility is the spark that ignites transformation.
One area where AI has clicked quickly for us is Gherkin test scenarios. The structured format (Given, When, Then) turns out to be something AI reads and generates well. It’s not glamorous, but it frees up time our team used to spend on tedious scaffolding work. That’s the quieter side of AI creativity: not just new ideas, but capacity recovered.
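As an illustration of why this workflow is low-risk to automate (a hypothetical example, not our actual pipeline), an AI-drafted Gherkin scenario has a structure simple enough that a few lines of Python can sanity-check it before a human reviews the content:

```python
# Hypothetical sketch: a lightweight structural check for AI-drafted Gherkin
# scenarios, run before a human reviews the content itself.

AI_DRAFT = """\
Scenario: Password reset link expires
  Given a user requested a password reset 25 hours ago
  When they open the reset link
  Then they see an "expired link" message
"""

def check_scenario(text: str) -> list[str]:
    """Flag missing structural pieces; content review stays with humans."""
    issues = []
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    if not lines or not lines[0].startswith("Scenario:"):
        issues.append("first line must start with 'Scenario:'")
    for keyword in ("Given", "When", "Then"):
        if not any(line.startswith(keyword) for line in lines):
            issues.append(f"missing '{keyword}' step")
    return issues
```

The point of the sketch: because the format is so regular, the machine can draft and machine-check the scaffolding, leaving people to judge whether the scenario tests the right behavior.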
How leaders can fuel creative momentum
- Find one task your team does repeatedly that follows a predictable structure — test scenarios, status reports, data formatting — and run a two-day experiment using AI to generate the first draft.
- Cancel one recurring meeting and replace it with an async prompt: “Here’s the context. What would you recommend?” See what your team produces with AI assistance.
- Ask each team lead to bring one idea to your next planning session that they generated or refined using AI. Make it a norm, not a novelty.
Creativity is the ignition point for AI-driven momentum.
4. High Performers Think Differently — And They Lead Differently
McKinsey’s “AI high performers” — only 6% of organizations — share a mindset rather than a toolset. They pursue transformation, not tinkering. They redesign systems, not isolated tasks. They communicate ambition, not hesitation.
Here’s the statistic that grounds it: high performers are 3.6 times more likely to pursue transformative change, not just incremental improvement.
They don’t wait for certainty. They create it. They don’t scale tools. They scale conviction. They don’t talk about the future. They build toward it.
Our 2026 OKRs will measure AI adoption directly — not as a proxy for something else, but as an explicit goal. More importantly, we’re identifying inefficient processes that have survived too long and asking whether AI changes the calculus. Not “how do we optimize this?” but “should this exist in its current form at all?”
How leaders can embody this mindset
- Write a one-paragraph “AI ambition statement” for your team. Not what you’re doing — what you’re trying to become. Share it and ask for pushback.
- List three processes that have survived because “that’s how we’ve always done it.” For each, ask: “If we were building this today with AI available, would we build it this way?”
- Add an AI adoption metric to your quarterly review — not just “are we using AI” but “what has AI changed about how we work?”
High performers aren’t defined by resources. They’re defined by posture.
5. Talent Is Not Being Replaced — It’s Being Elevated
One of the most persistent myths in AI is the fear of job loss. McKinsey’s findings show a different story: organizations are still hiring aggressively in engineering, data, and AI-adjacent roles. Talent isn’t shrinking. It’s evolving.
And from what I see across teams, the people thriving in this moment aren’t necessarily the most technical — they’re the most adaptive. They run toward AI. They experiment. They stretch. They use AI as a multiplier, not a threat.
AI doesn’t diminish human value. It amplifies judgment, creativity, and contribution.
This is a moment of reinvention, not reduction.
How leaders can elevate talent
- Identify two people on your team who have been experimenting with AI on their own. Give them 30 minutes at your next all-hands to show what they’ve learned.
- For one role you’re hiring, rewrite the job description assuming AI is embedded in the workflow. What skills become more important? What becomes less important?
- Ask each team member to draft a “future version” of their role — what it looks like in 18 months if AI adoption succeeds. Use it in your next 1:1s.
People become more valuable when leaders empower them to harness what’s coming.
6. With AI Comes Risk — And Responsibility
As AI adoption grows, so do the stakes. Over half of organizations report at least one AI-related incident — from inaccuracy to unintended actions to compliance exposure. High performers encounter more risks simply because they push harder.
This isn’t a sign to slow down. It’s a sign to mature.
We saw this play out recently. Someone used an AI tool to build an app in hours for a task they needed to solve. It worked. It got adopted. It became part of a process. It started storing important data. And then we realized: this app can’t be deployed within our environment, can’t be supported by IT, and no one had reviewed it for security or compliance. The speed that made AI valuable had outrun our ability to govern it.
That’s the new shape of shadow IT. It’s not someone sneaking in a SaaS tool — it’s someone building something faster than your review processes can see it. The risk isn’t malice. It’s momentum.
Responsible leadership means treating governance, transparency, and observability as first-order requirements, not afterthoughts. AI introduces new opportunities — but also new failure modes.
Leaders must design for both.
How leaders can scale AI responsibly
- Audit for “AI-built shadow apps” — tools or workflows someone created quickly using AI that are now embedded in processes or storing data. Ask: Can we support this? Can we secure it? Do we even know it exists?
- Before your next AI initiative launches, require a one-page brief answering: “What could go wrong? Who would notice? How would we respond?”
- Build a simple AI risk inventory — a shared doc listing every AI tool in use, what data it touches, and who owns it. Update it quarterly.
Risk doesn’t stop innovation. Responsible design enables it to scale.
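A risk inventory doesn’t need tooling to start. A minimal sketch of the shared-doc idea above, kept as structured data so it can be filtered at quarterly review time (the entries here are illustrative, not a real inventory):

```python
# Hypothetical sketch: an AI risk inventory as structured data rather than
# free-form notes, so quarterly review questions become simple queries.
from dataclasses import dataclass

@dataclass
class AITool:
    name: str            # illustrative entries, not a real inventory
    data_touched: str    # e.g. "source code", "customer PII", "none"
    owner: str
    last_reviewed: str   # ISO date of the last governance review

inventory = [
    AITool("code-review assistant", "source code", "platform team", "2025-10-01"),
    AITool("meeting summarizer", "internal discussions", "IT", "2025-03-15"),
]

def overdue(tools: list[AITool], cutoff: str) -> list[str]:
    """Names of tools last reviewed before the cutoff (ISO dates sort lexically)."""
    return [t.name for t in tools if t.last_reviewed < cutoff]
```

Even a list this simple answers the questions that matter after an incident: what AI is in use, what data it touches, who owns it, and what hasn’t been looked at lately.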
A Leadership Call to Action
If there’s a single message from this year’s report, it’s this: AI value doesn’t emerge from technology. It emerges from leadership intent.
The organizations that will lead in the next decade won’t be the ones with the most tools; they’ll be the ones with the clearest ambition, the strongest guardrails, and the greatest conviction.
So if you’re leading a team, a platform, or an entire organization:
- Raise the ambition.
- Build the guardrails.
- Equip your people.
- Align your leadership team.
- Create the conditions for bold, responsible exploration.
Momentum in AI doesn’t start with a workflow. It starts with a leader willing to step forward.


