After 3–4 Years Deeply Engulfed in AI: A Closing Argument for 2025
- Mark Kendall
- Dec 29, 2025
- 4 min read
For the last three to four years, I’ve been deeply embedded in AI work for my company — not experimenting on the edges, not playing with demos, but integrating AI into real teams, real pipelines, real delivery pressure, and real business constraints.
As we head into 2025, I don’t want to make predictions.
I want to make a closing argument.
Because we’re still in the infancy of this technology — but we’re already acting like the verdict is in.
1. AI Is Not the Problem. Narrative Authority Is.
AI’s biggest risk isn’t hallucination.
It isn’t incorrect answers.
It isn’t even automation.
The real risk is narrative dominance.
AI speaks confidently.
It speaks fluently.
It speaks first.
And humans are tired.
When a system starts telling the story of what the future looks like, what "good" looks like, and what decisions are "obvious," reality starts bending around that story.
Not because it’s correct — but because it’s convenient.
That’s how self-fulfilling prophecy begins.
2. We Are Converging Too Fast
The most dangerous thing I see happening isn’t disagreement.
It’s agreement.
Teams converge quickly on AI-generated answers because:
• They sound reasonable
• They remove ambiguity
• They reduce friction
• They make meetings shorter
But premature agreement is how organizations lose judgment.
When everyone agrees too quickly, nobody is thinking anymore — they’re just approving.
3. AI Should Assist Judgment, Not Replace It
Good engineering has always been about tension:
• Tradeoffs
• Constraints
• Competing priorities
• Imperfect information
AI is excellent at optimizing within a narrative.
It is terrible at questioning whether the narrative itself is wrong.
If AI becomes the arbiter of meaning — not just execution — we’ve crossed a line we won’t easily walk back.
4. The Real Skill Gap Isn’t Prompting
The most valuable skill in 2025 won’t be:
• Prompt engineering
• Model selection
• Agent orchestration
It will be the ability to disagree well.
To pause.
To ask “why.”
To surface assumptions.
To say, “This feels too clean — what are we missing?”
Organizations that reward speed over reflection will move fast — and drift quietly.
5. Governance Isn’t the Enemy — Unquestioned Automation Is
We’ve trained ourselves to fear governance because it historically meant:
• More rules
• More process
• Less autonomy
But the absence of reflection is not freedom — it’s abdication.
The future isn’t about controlling AI.
It’s about preserving human agency in the presence of very persuasive systems.
6. My Closing Argument
AI is a powerful tool.
It will absolutely reshape how we work.
It will make teams faster, cheaper, and more capable.
But:
The organizations that win won’t be the ones that adopt AI fastest.
They’ll be the ones that preserve disagreement the longest.
If AI ever becomes the most important voice in the room — unquestioned, unchallenged, unopposed?
