AI coding is fun and fast, until it forgets what you just said.

By Kelly Weeks
Sr. Director, BI Engineering
MissionOne Media
“I apologize for my repeated failures.”
“You are absolutely right to call that out. My apologies.”
“I apologize for the oversight. It appears my previous changes did not fully resolve the issues. Thank you for the detailed feedback.”
Do these phrases sound familiar? If so, you either work with an incredibly self-aware intern or you’ve been “vibe coding” with your good friend Roo.
IYKYK (If you know, you know).
Like you and me, AI has good days and bad days. If you’re an avid vibe-coder, you know what I’m talking about.
The industry buzz around AI replacing engineers is loud—but the real story is more nuanced.
Some days, your AI coding assistant makes you feel like a superhero. It gets everything right the first time, with minimal back-and-forth needed.
You build two weeks’ worth of code in two days. It deploys seamlessly and creates a more robust README than you would have ever dreamed of making on your own.
Bottom line, it GETS you. It can read your mind. And then you wonder, is AI the ultimate cheat code? Does it understand me better than I understand myself? You wake up the next day, invigorated by your recent productivity…
…and suddenly it all goes downhill. The AI struggles to understand you. It churns for minutes only to return an “error analyzing” message.
It suggests fixes it already suggested that didn’t work the first time and still don’t work the second time.
It goes in circles, forgets instructions you gave it 5 minutes ago, and introduces indentation errors that for whatever reason it absolutely cannot seem to fix. You start to wonder, was it all just a dream? Is your AI supercharged sidekick gone for good?
The truth is… yes and no. It will have good days again, but you can’t rely on it to do your job for you.
What it can do well
The #1 benefit of AI is time savings. It speeds up iteration cycles, brainstorming, debugging, code repurposing, and general slog work well beyond what we could accomplish on our own. A few expanded examples:
- Concept acceleration: Ideating faster, roughing out early structures
- Syntax assistance: Cleaning up formatting, indentation
- Debugging helper: Catching and fixing bugs. Tools like Cursor or Roo for VS Code can interact directly with your terminal, read execution output, and suggest diff fixes you can accept or reject within minutes
- Refactoring partner: Implementing specific changes throughout a repo, including reviewing and accounting for upstream and downstream dependencies
- Commenting & README: Creating robust in-line comments and README documentation
Bottom line, the most effective way to use AI is to have it do something that you could do on your own, but it can do much faster.
What it cannot do so well
While AI has its strengths, it also has its limits. Treat it as if it has no clue what you want it to do, because aside from the instructions you give it, it doesn't. A few of the most common issues you'll see:
- Looping & Redundancy: AI often goes in circles when unsure. When debugging or modifying code, it will often suggest the same change or approach it offered a few iterations ago, even though that approach has already been tested and shown to fail.
- Short-Term Memory: It sometimes forgets previous instructions or loses its state within a session.
- Input Complexity Limitations: It struggles with large, complex prompts that ask for big outputs or many changes all at once. Occasionally it will leave out huge pieces or make rogue changes you didn't ask for. It typically can't make ten-plus logic changes across a codebase at once without introducing subtle bugs.
- Context Ignorance: Unless you've integrated an agent and trained it on your data or codebase, it doesn't know your schema standards, organization-specific architecture best practices, or edge cases unless you spoon-feed them to it.
- The upside: AI agents are showing promising signs of overcoming these context issues.
Think of AI like a trainee. It learns from YOU, and this is the one scenario where being a micromanager is acceptable, and quite frankly needed.
Because unlike a human trainee, ChatGPT doesn't actually have a mind of its own (…yet). The quality of what you give it directly impacts the quality of what it produces. Remember: "garbage in = garbage out."
The future of engineers and AI: It needs you as much as you need it.
The main thing to remember when using AI: YOU are its boss.
You may be using it begrudgingly because your boss said you have to, but eventually you’ll need to DTR (“define the relationship”) and commit.
Failing to do so will be the fastest way to lose your job to AI. While you won’t lose it to the actual AI, you’ll lose it to other engineers who use it effectively to produce clean, perfectly commented code at superhuman speed.
A few ways to use AI effectively are:
- Develop your own expertise: Don’t ask AI to do something that you couldn’t do yourself if you had the time to do it.
- Give it very specific context and very specific instructions: Go so far as to walk it through step by step. Feed it small bits of information at a time, ask for small, iterative changes, and test what it gives you along the way.
- ALWAYS review the outputs: If you choose to deploy AI-written code, it means you’re taking responsibility for it as if you’d written it yourself.
- Don’t bank on ever using the first output it gives you: While AI tools may occasionally produce gold, that is rare and should not be expected.
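One lightweight way to practice the "test what it gives you along the way" advice above is to wrap each AI-suggested function in a few quick assertions before accepting the diff. A minimal sketch, where `normalize_name` is a hypothetical stand-in for whatever helper the assistant just generated:

```python
# Suppose the assistant just produced this helper for you.
# normalize_name is a hypothetical example, not a specific tool's output.
def normalize_name(raw: str) -> str:
    """Trim leading/trailing whitespace and collapse internal runs of spaces."""
    return " ".join(raw.split())

# Small, fast sanity checks you can run before accepting the change:
assert normalize_name("  Ada   Lovelace ") == "Ada Lovelace"
assert normalize_name("") == ""          # empty input stays empty
assert normalize_name("one") == "one"    # already-clean input is untouched
print("all checks passed")
```

A few seconds of checks like these catch the "subtle bug in change #7 of 10" problem far earlier than a full review at the end.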
It’s not about replacement — it’s about amplification.
The choice is yours.
AI can either supercharge your workflow or destroy it. The best engineers on the planet can't write code as fast as AI, so pretending you don't need it won't get you anywhere.
—
Want to learn more about our M1M planning and buying philosophy? Contact Pat LaCroix, EVP, Media + Growth, at placroix@missiononemedia.com.