
From Prompt Engineering to Cognition Engineering
- Mark Kendall
- Feb 5
- 2 min read
A quiet shift in how high-leverage work actually gets done
Over the last year, we’ve all heard the term prompt engineering.
How to ask better questions.
How to structure inputs.
How to get better outputs from generative AI.
That framing made sense early on. It helped teams move from curiosity to productivity. But lately, I’ve noticed something interesting in my own day-to-day work as a cloud architect:
The real leverage isn’t coming from better prompts.
It’s coming from better thinking systems.
When the work stops being the hard part
Recently, I spent two minutes using generative AI to help finalize a set of Jira stories.
The thinking was already done. The intent was clear. The structure snapped into place quickly.
Then I spent the next hour manually pasting, formatting, and translating that work into Jira.
That hour wasn’t engineering.
It wasn’t design.
It wasn’t decision-making.
It was cognitive friction.
And that’s when it clicked: the bottleneck wasn’t output quality — it was how cognition flows from intent to execution.
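That hour of pasting is exactly the kind of mechanical cognition a thin script can absorb. As a sketch only: Jira's REST API exposes a create-issue endpoint (POST /rest/api/2/issue), so AI-drafted stories could flow straight into the backlog. The instance URL, project key, and story text below are hypothetical, and authentication is omitted:

```python
import json
import urllib.request

JIRA_URL = "https://example.atlassian.net"  # hypothetical instance


def build_story_payload(project_key: str, summary: str, description: str) -> dict:
    """Shape one drafted story into the JSON body Jira's
    create-issue endpoint (POST /rest/api/2/issue) expects."""
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Story"},
        }
    }


def create_story_request(payload: dict) -> urllib.request.Request:
    """Build the HTTP request; actually sending it is left to the
    caller, so the sketch stays runnable without credentials."""
    return urllib.request.Request(
        f"{JIRA_URL}/rest/api/2/issue",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


if __name__ == "__main__":
    payload = build_story_payload(
        "PLAT",  # hypothetical project key
        "Add retry logic to ingest service",
        "As an operator, I want transient failures retried automatically.",
    )
    print(create_story_request(payload).full_url)
```

The point isn't this particular script; it's that the translation step is automatable, which frees the hour for actual judgment.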
Prompt engineering describes inputs; cognition engineering describes systems
Prompt engineering focuses on:
how we ask
what we say
how we structure a single interaction
Cognition engineering focuses on:
how thinking persists across time
how context is reused instead of recreated
how abstraction shifts between strategy and execution
how mental energy is protected, not burned
In practice, this looks less like “write a better prompt” and more like:
designing thinking loops
offloading mechanical cognition
keeping humans focused on judgment, not transcription
using AI as a stateful thinking surface, not a vending machine
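A "stateful thinking surface" can start as something very small: a context store that survives between sessions, so decisions and constraints are reloaded instead of re-explained. A minimal sketch, with a hypothetical `ContextStore` class and file layout of my own invention:

```python
import json
from pathlib import Path


class ContextStore:
    """Persist working context (decisions, constraints, open questions)
    to a JSON file so the next session resumes where thinking left off
    instead of recreating it from scratch."""

    def __init__(self, path: str):
        self.path = Path(path)
        self.entries = (
            json.loads(self.path.read_text()) if self.path.exists() else []
        )

    def add(self, kind: str, note: str) -> None:
        """Record one unit of thinking and write it through to disk."""
        self.entries.append({"kind": kind, "note": note})
        self.path.write_text(json.dumps(self.entries, indent=2))

    def as_preamble(self) -> str:
        """Render accumulated context as a preamble for the next
        AI session, so the conversation starts mid-thought."""
        return "\n".join(f"[{e['kind']}] {e['note']}" for e in self.entries)
```

Reopening the same file in a later session restores every prior decision verbatim; the mechanical part of "catching the AI up" disappears.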
This isn’t about replacing people
It’s about removing unnecessary cognitive tax.
Senior engineers, architects, and tech leads don’t burn out because the work is hard.
They burn out because their cognition is spent on tasks far below the level of intent they’re operating at.
When the thinking is done but the system still demands manual translation, something is broken — not in the person, but in the workflow.
Why this matters now
Prompt engineering is maturing.
The novelty is fading.
The value is flattening.
The next wave isn’t about clever prompts — it’s about intent-to-execution systems that respect human cognition.
I suspect we’ll start seeing:
tools that accept intent, not instructions
workflows that preserve context instead of resetting it
roles that optimize thinking, not just delivery
Maybe cognition engineering is the right name for that shift.
Maybe it’s just a better lens.
Either way, I think we’re moving beyond asking AI better questions —
and toward designing how thinking itself is engineered.
Curious if others are feeling this shift too.