The Worst Instructions You Can Give an AI Coding Agent
I watched a developer waste $200 in API tokens one afternoon. Not because they were building something trivial – they were tackling important infrastructure work. Not because they were inexperienced – they had 15 years of backend development under their belt. The problem was simpler and more frustrating: they weren't wielding their tools effectively.
Every few minutes, they'd hit revert. The AI would generate code that almost worked but missed the mark. They'd course-correct with another prompt, burning more tokens, getting slightly different but equally wrong results. By the end of the session, they'd produced thousands of lines of code and kept maybe 200.
This wasn't an isolated incident. It's a pattern I see constantly: expert developers who can architect complex systems, debug obscure race conditions, and write elegant code struggle with something that seems trivial – giving clear instructions to an AI coding agent.
The gap isn't technical knowledge. It's a mindset shift. Decades of training taught us to communicate with compilers: be precise, be minimal, let the machine fill in the gaps. But AI agents aren't compilers. They need context, not just commands. They need to understand the "why," not just the "what."
Here are the five worst instructions I see expert developers give – and what to do instead.
"Make it better"
You know what "better" means. Your code works but feels clunky. The architecture could be cleaner. The function names could be more descriptive. So you tell Cline: "make this better."
The AI generates something. It is indeed different. Maybe it added TypeScript types. Maybe it extracted a helper function. Maybe it renamed variables. But is it better by your definition? Probably not, because the AI doesn't share your mental model of what "better" means in this context.
Vague objectives like "better," "cleaner," or "more efficient" force the AI to guess. Sometimes it guesses right. Usually it doesn't. Either way, you're playing a lottery with your tokens.
The fix is specificity. Not "make it better" but "extract the validation logic into a separate function" or "add error handling for the database connection timeout case." When you can't articulate what you want changed, you probably shouldn't be asking an AI to change it yet. Take a moment to think through what "better" actually means for this specific code at this specific time.
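To make the contrast concrete, here's a rough sketch of the kind of change a specific request like "extract the validation logic into a separate function" is asking for. The form fields and function names here are hypothetical, just to show the shape of the result:

// Hypothetical sketch: the outcome of "extract the validation logic into a separate function"
interface SignupForm {
  email: string;
  password: string;
}

// The validation rules live in one small, testable function...
function validateSignup(form: SignupForm): string[] {
  const errors: string[] = [];
  if (!form.email.includes("@")) {
    errors.push("email must be a valid address");
  }
  if (form.password.length < 12) {
    errors.push("password must be at least 12 characters");
  }
  return errors;
}

// ...and the handler just calls it instead of mixing rules with persistence.
async function handleSignup(form: SignupForm): Promise<void> {
  const errors = validateSignup(form);
  if (errors.length > 0) {
    throw new Error(errors.join("; "));
  }
  // persist the user, send the welcome email, etc.
}

Notice there's nothing the AI had to guess at: the prompt named the function to create and the behavior to preserve.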
This connects to something we've written about before: planning matters more than perfect prompts. Expert developers who take time to clarify their objectives before coding get dramatically better results. Use Plan mode to think through what you're actually trying to achieve. The few minutes spent planning will save you from hours of generating and reverting code that's "better" in ways you didn't want.
"Add OAuth but keep it simple"
You're being thorough. You want authentication, but you know OAuth can spiral into complexity. So you give both requirements: "add OAuth support but keep it simple."
The AI reads this and picks one. Usually the wrong one. It might implement a bare-bones OAuth flow that's technically simple but missing critical security features. Or it might build a comprehensive OAuth implementation with refresh tokens, PKCE, and state validation while you just needed basic authentication.
Contradictory constraints don't make the AI smarter; they make it confused. When you say "add X but keep it Y," the AI doesn't negotiate tradeoffs with you. It makes a choice and runs with it.
The better approach is to be explicit about priorities. "Add OAuth support. I care more about security best practices than implementation simplicity, but avoid introducing new dependencies." Or use Plan mode to actually negotiate these tradeoffs before writing code.
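As a rough illustration, here's what "security best practices over simplicity, no new dependencies" might buy you for the authorization request alone: PKCE and state generation using only Node's built-in crypto module. The endpoint URL and parameter details for your provider will differ; this is a sketch under those stated priorities, not a drop-in implementation.

import { randomBytes, createHash } from "node:crypto";

// Encode bytes as base64url, the format OAuth expects for state and PKCE values.
function base64url(buf: Buffer): string {
  return buf.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}

export function buildAuthorizationRequest(clientId: string, redirectUri: string) {
  const state = base64url(randomBytes(32));          // CSRF protection, checked on the callback
  const codeVerifier = base64url(randomBytes(32));   // PKCE verifier, stored server-side
  const codeChallenge = base64url(createHash("sha256").update(codeVerifier).digest());

  const params = new URLSearchParams({
    response_type: "code",
    client_id: clientId,
    redirect_uri: redirectUri,
    state,
    code_challenge: codeChallenge,
    code_challenge_method: "S256",
  });

  // Placeholder authorization endpoint; substitute your provider's URL.
  return { state, codeVerifier, url: `https://auth.example.com/authorize?${params}` };
}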
This is where Cline's Plan and Act modes shine. In Plan mode, you can have a conversation about requirements. "I need authentication. What are the tradeoffs between OAuth, JWT, and session-based auth for my use case?" The AI can explain the options, you can clarify what matters, and then Act mode can execute with clear direction instead of contradictory constraints.
Trying to course-correct a derailed conversation
Ten messages in, you realize the conversation has gone sideways. What started as "add basic form validation" has somehow become a discussion about implementing a custom validation framework with schema definitions and error message localization.
Your instinct is to steer it back. "No, let's keep it simple. Just basic validation. Remember the original requirements." The AI acknowledges your feedback. It generates new code. The code still feels wrong, still overengineered, just in a slightly different way.
You try again. "Even simpler than that." The AI tries again. You're now fifteen messages deep, having burned through your context window, and the code quality is degrading with each iteration.
Here's what's actually happening: research from Laban et al. shows that LLMs experience a 39% average performance drop when instructions arrive across multiple conversation turns versus receiving complete context upfront. Every course-correction adds more "context pollution" – the AI's previous assumptions, your corrections, the implicit validation of parts you didn't correct. You're not debugging code; you're trying to debug a conversation, and conversations don't have stack traces.
The solution isn't more corrections. It's checkpoints. Find the exact message where things went wrong. Edit that prompt to include the context that was missing. "Add basic form validation – just required fields and email format checking, no custom framework." Restore the conversation state to that point. Now the AI starts fresh with complete information instead of trying to reconcile contradictory instructions.
This is how you hack the "happy path" and give the AI complete context upfront, instead of trying to patch incomplete instructions through conversation. Checkpoints aren't just for undoing file changes; they're for rewinding conversations to before they went wrong.
"Fix this bug"
You've been debugging for an hour. You've checked the logs, stepped through the debugger, added console statements, and tried three different fixes. Nothing worked. Time to let the AI take a crack at it: "fix this bug in authentication flow."
The AI reads the code. It suggests the same solution you tried an hour ago. You tell it that didn't work. It suggests another approach you already attempted. This goes on until you're explaining your entire debugging history in chat messages.
The problem is you assumed the AI has your mental model – that it knows what you've tried, what didn't work, why you suspect certain parts of the code. It doesn't. It's working with whatever context it can extract from the code itself, which usually isn't enough for complex bugs.
The better prompt is: "Auth flow fails when session token expires. I've tried refreshing the token in the middleware (didn't work, still getting 401s) and checking if the token is malformed (it isn't, I logged the decoded payload). I suspect the issue is in how we're handling the refresh token rotation, specifically around line 145 in auth.service.ts where we update the database. Can you review that logic?"
This gives the AI everything: the symptom, what you've tried, what you've ruled out, where you suspect the problem. It's not about making the AI's job easier; it's about making its job possible. Without this context, even the smartest model is guessing.
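To see why naming the suspect area matters, here's a purely hypothetical version of the kind of rotation logic such a prompt points the AI at, with the question worth asking left as a comment. None of this comes from a real codebase; it just shows how much sharper the review becomes when the prompt says where to look:

import { randomUUID } from "node:crypto";

// Illustrative token store; the real schema and names would differ.
interface TokenStore {
  findRefreshToken(token: string): Promise<{ userId: string; revoked: boolean } | null>;
  replaceRefreshToken(oldToken: string, newToken: string, userId: string): Promise<void>;
}

async function rotateRefreshToken(store: TokenStore, presented: string): Promise<string> {
  const record = await store.findRefreshToken(presented);
  if (!record || record.revoked) {
    // A stale row here would explain intermittent 401s after expiry.
    throw new Error("invalid refresh token");
  }
  const next = randomUUID();
  // The thing to verify: is this swap atomic? If revoking the old token and
  // inserting the new one happen as separate writes, a concurrent request
  // can land between them and get rejected.
  await store.replaceRefreshToken(presented, next, record.userId);
  return next;
}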
This is also where Cline's exploration capabilities come in. Instead of trying to package all context into your prompt, let Cline read relevant files:
@auth.service.ts @middleware/auth.ts fix the token refresh bug, I suspect it's in how we're handling rotation.
Cline will read those files, understand the context, and provide informed suggestions instead of generic debugging advice.
Being too terse, like you're talking to a compiler
You've spent 20 years writing code. You know how to be precise. So you prompt like you're writing a git commit message: "refactor UserService.authenticate() to use async/await."

The AI does exactly what you asked. It converts the promise chain to async/await. The code looks different but behaves the same. You meant for it to also add proper error handling, update the error messages to be more specific, and maybe extract the database query into a repository pattern while it was at it. But you didn't say that.
This is the compiler mindset: minimal instructions, let the machine infer the rest. It works great for compilers because they have strict rules about what to infer. It fails for AI because the AI doesn't know which assumptions to make.
The fix isn't to write longer prompts; it's to write prompts with context. Not "refactor UserService.authenticate()" but "refactor UserService.authenticate() to use async/await. The current promise chain makes error handling unclear, and we've had bugs where database errors aren't properly caught. We should also move the actual database query into the UserRepository to maintain separation of concerns."
This gives the AI your reasoning, not just your command. It understands not just what to change, but why you want it changed. That context lets it make better decisions about edge cases you didn't explicitly mention.
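Here's a rough sketch of the gap between the two prompts. The terse version gets a mechanical promise-to-async conversion; the context-rich version is asking for something closer to this. The repository interface and error types are hypothetical, stand-ins for whatever the real project uses:

interface User {
  id: string;
  passwordHash: string;
}

// Hypothetical repository: the database query lives here, not in the service.
interface UserRepository {
  findByEmail(email: string): Promise<User | null>;
}

class AuthenticationError extends Error {}

class UserService {
  constructor(private readonly users: UserRepository) {}

  async authenticate(email: string, password: string): Promise<User> {
    let user: User | null;
    try {
      user = await this.users.findByEmail(email);
    } catch (err) {
      // Database failures surface as a distinct, loggable error instead of
      // being swallowed somewhere in a promise chain.
      throw new Error(`user lookup failed: ${(err as Error).message}`);
    }
    if (!user || !(await verifyPassword(password, user.passwordHash))) {
      throw new AuthenticationError("invalid email or password");
    }
    return user;
  }
}

// Stand-in for whatever password-hashing library the real project uses.
async function verifyPassword(plain: string, hash: string): Promise<boolean> {
  return plain.length > 0 && hash.length > 0; // placeholder check only
}

The point isn't this particular structure; it's that the AI could only reach it because the prompt explained the error-handling pain and the separation-of-concerns goal.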
There's an entire framework for thinking about this: zero-shot, one-shot, and chain-of-thought prompting. The key insight is that AI agents aren't compilers waiting for precise commands. They're more like junior developers who need to understand the problem, not just the task.
The shift you need to make
These patterns aren't failures of intelligence. They're habits formed over decades of working with tools that require precision and penalize verbosity. Compilers don't want your reasoning; they want your syntax. Debuggers don't care about your theories; they want your breakpoints.
AI agents are different. They need the context, the reasoning, the "why" behind the "what." They work better with complete information upfront than with corrections after the fact. They thrive on specificity but struggle with contradiction.
The good news is this is a learnable skill. Every expert developer I know who struggled with AI agents initially now uses them as effective force multipliers. The difference isn't that they learned prompt engineering tricks; they learned to treat AI agents as collaborators who need context rather than compilers who need commands.
Start with planning. Use Plan mode to clarify requirements before generating code. Be specific about success criteria instead of using vague terms like "better." When constraints conflict, prioritize them explicitly. When conversations go wrong, checkpoint and rewrite instead of course-correcting. When debugging, share your mental model. When refactoring, explain your reasoning.
Instructions like these get you better code with fewer wasted tokens and fewer reverts. More importantly, they let you build better software faster, which is the whole point of using AI agents in the first place.
Ready to improve your prompting? Download Cline and start treating your AI agent like a collaborator who needs context. Share your prompting experiences and learn from other developers in our community on Discord and Reddit.