Why Human Intent Matters More as AI Capabilities Grow

Remember when AI coding meant tab autocomplete? The human did 95% of the work: navigating the codebase, finding the right files, locating the exact spot to edit, and starting to type; only then could AI offer a helpful suggestion. The human was the driver; AI was barely a passenger.
Today's agentic AI can search codebases, read files, write entire modules, refactor systems, and orchestrate complex changes across multiple files. With tools like Cline, AI has eaten up nearly the entire coding pie. Which raises a fascinating question: if AI can do almost everything, what's left for humans?
The answer is deceptively simple: intent.
The Last Mile is the Most Important Mile
As AI capabilities expand, the human role doesn't diminish; it transforms. When AI can handle the mechanics of coding, the critical question becomes: What are we building and why?

Think of it as a pie chart of effort. In the tab-autocomplete era:
- Human effort: 95% (navigation, planning, typing, debugging)
- AI effort: 5% (suggestions)
In the agentic era:
- AI effort: 95% (searching, reading, writing, refactoring)
- Human effort: 5% (intent)
But that 5% is everything. It's the difference between AI that builds the wrong thing perfectly and one that builds exactly what you need.
Why We Built the Plan-Act Paradigm
Here's the problem: AI is eager. Give it tools to read and write files, and it wants to immediately start coding. It's like having an incredibly talented developer who starts building before fully understanding the requirements.
This eagerness isn't a bug; it's a feature of how these models work. They're designed to be helpful, to take action, to solve problems. But without clear intent, that action can be misdirected.

The Plan-Act paradigm isn't about creating different "modes" for AI; it's about recognizing that effective collaboration has natural phases. During the Plan phase, we create deliberate space for mutual understanding. We've designed the AI to be as inquisitive about human intent as possible. Not because AI lacks capability, but because intent is the one thing that can't be automated.
Then comes Act, where we get out of the way. Same AI, same capabilities, but now optimized for uninterrupted execution. It's a paradigm shift in how we interact, not a personality switch.
The Anti-Persona Philosophy (And Why Plan-Act Isn't "Modes")
You might have seen coding assistants with specialized "agents": a debugger, an architect, a code reviewer. We deliberately avoid this approach, and here's why.

Richard Sutton's "Bitter Lesson" in AI research shows that general methods leveraging computation consistently outperform specialized, human-engineered solutions. When Deep Blue beat Kasparov, it wasn't using grandmaster strategies; it was using brute force search. When AlphaGo conquered Go, it wasn't following ancient wisdom; it was using general learning algorithms.
Modern language models like Claude Sonnet 4 and Gemini 2.5 Pro are trained on the entire coding process. They've seen millions of examples of architecture decisions, debugging sessions, and code reviews. Creating artificial personas constrains what they naturally know how to do.
This is crucial: Plan and Act aren't "modes" in the sense of different AI personalities or capabilities. We're not switching between a "planning AI" and a "coding AI." It's the same AI with the same capabilities throughout.
What changes is the interaction paradigm. In Plan, we optimize for intent gathering; the AI asks questions, explores context, ensures understanding. In Act, we optimize for execution; the AI uses its full capabilities uninterrupted. It's about recognizing the natural rhythm of human-AI collaboration, not constraining the AI into different roles.
Think of it like a conversation versus focused work. You wouldn't say a developer has different "modes" when they're in a requirements meeting versus coding. They're the same person, just engaged in different phases of the work.
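If it helps to see this concretely, here's a minimal sketch of the idea, not Cline's actual implementation: one model, one conversation, and the only thing that changes between Plan and Act is the instruction it's given. The prompts and the model name below are illustrative assumptions.

```python
# Minimal sketch: one model, two interaction paradigms.
# Not Cline's implementation; prompts and model id are illustrative.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PLAN_PROMPT = (
    "You are collaborating on a coding task. Do not write code yet. "
    "Ask clarifying questions until the user's intent, constraints, and "
    "success criteria are explicit, then summarize the agreed plan."
)

ACT_PROMPT = (
    "You are executing an agreed plan. Work autonomously and avoid "
    "asking questions unless you are blocked. Produce complete changes."
)

def run_phase(phase: str, messages: list[dict]) -> str:
    """Send the same conversation to the same model; only the system
    instructions differ between Plan and Act."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model id; use whatever you have access to
        max_tokens=4096,
        system=PLAN_PROMPT if phase == "plan" else ACT_PROMPT,
        messages=messages,
    )
    return response.content[0].text

# Plan: gather intent.
history = [{"role": "user", "content": "Add rate limiting to our API."}]
print(run_phase("plan", history))

# Act: same model, same history, now optimized for uninterrupted execution.
print(run_phase("act", history))
```

Same model, same capabilities, same conversation history; the paradigm shift lives entirely in how we frame the interaction.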
Don't Interrupt AI
Here's something we've observed that resonates deeply: LLMs are exceptional storytellers. They're predictive text engines designed to craft coherent narratives: this happens, then this, then this.
When you interrupt AI mid-task to correct course, you break that narrative flow. The model now has to reconcile your intervention with its previous trajectory, often leading to confusion or suboptimal solutions.
Research from Stanford's "Lost in the Middle" study (Liu et al., 2023) points at why: models' accuracy drops by nearly 20% when relevant information sits in the middle of a long context rather than at the beginning. A session full of interruptions and course corrections buries your actual intent in exactly that middle zone. Fresh starts, while seemingly inefficient, often produce better results because the model can construct a clean narrative from clear intent instead of fishing it out of a degraded middle of the context.
This is why starting over with clearer instructions often outperforms trying to course-correct. Save your state, let the AI work, then either accept the result or start fresh rather than wrestling with corrections.
As inference becomes cheaper, we can even run multiple attempts in parallel and choose the best outcome, treating it like a slot machine where each fresh pull has an independent chance of success.
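Here's a rough sketch of that best-of-n idea. The `attempt_task` and `score_result` functions are hypothetical placeholders: one stands in for a full, uninterrupted agent run from a fresh context, the other for whatever signal you trust (a test suite, a build, a reviewer).

```python
# Sketch of best-of-n sampling: run several independent attempts in
# parallel and keep the highest-scoring one. attempt_task and
# score_result are hypothetical stand-ins, not a real agent API.
from concurrent.futures import ThreadPoolExecutor

def attempt_task(prompt: str, seed: int) -> str:
    """Placeholder: one full, uninterrupted agent run from a fresh context."""
    raise NotImplementedError

def score_result(result: str) -> float:
    """Placeholder: e.g., fraction of tests passing for the produced diff."""
    raise NotImplementedError

def best_of_n(prompt: str, n: int = 4) -> str:
    """Run n independent attempts in parallel and keep the best one."""
    with ThreadPoolExecutor(max_workers=n) as pool:
        results = list(pool.map(lambda i: attempt_task(prompt, seed=i), range(n)))
    return max(results, key=score_result)
```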
Let AI Cook
Once you've shared your intent during the Plan phase, the next principle is equally important: get out of the way.
This connects directly to our anti-persona philosophy. Just as we don't constrain AI into artificial roles like "debugger" or "architect," we don't constrain it during execution. The Act phase is about letting the model use all its capabilities: architecture decisions, debugging, refactoring, optimization, as one unified intelligence.
We've observed that the most recent Claude 4 models perform well when running on auto-approve. Every interruption breaks the narrative flow; the most effective sessions are those where the AI can write uninterrupted.
Caveat: the best performance comes from letting the models build uninterrupted, but the results may not always align with your intent. For this reason, if you're doing complex engineering work we still recommend staying in the loop and, more importantly, defining your plan in DETAIL.
This might feel counterintuitive. Don't we need control? Oversight? The ability to intervene?
Yes, but not in the moment. That's why we built:
- Checkpoints: Save states you can return to
- Full visibility: See everything AI is doing
- Undo capabilities: Revert any changes
- Safety boundaries: Define what AI can and cannot touch
The key insight is timing. Gather requirements, share intent, set boundaries, then let AI work. Review and iterate after, not during.
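None of this requires exotic tooling. As a generic illustration (not how Cline implements its checkpoints under the hood), plain git already gives you the same save-state, let-it-work, review-after loop:

```python
# Generic illustration of the checkpoint-then-review loop using git.
# Not Cline's internal checkpoint mechanism; just the same workflow idea.
import subprocess

def git(*args: str) -> str:
    return subprocess.run(["git", *args], check=True,
                          capture_output=True, text=True).stdout

def checkpoint(label: str) -> None:
    """Snapshot the working tree before letting the AI work."""
    git("add", "-A")
    git("commit", "-m", f"checkpoint: {label}", "--allow-empty")

def review() -> str:
    """Full visibility: everything the agent changed since the checkpoint."""
    return git("diff", "HEAD")

def revert() -> None:
    """Undo: throw away the agent's changes and return to the checkpoint."""
    git("reset", "--hard", "HEAD")
    git("clean", "-fd")
```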
The future of coding isn't about humans competing with AI or trying to remain relevant through increasingly narrow specializations. It's about humans doing what only humans can do: providing intent, meaning, and purpose, while AI handles the implementation.
That's not a diminished role. It's the most important role of all.
-Nick