
The 40+ Year-Old Rule Every Vibe Coder Is About to Learn the Hard Way

A four-decade-old lesson explains why AI-generated code gets expensive fast: the later you discover the mistake, the more you pay to unwind it.

AI · vibe-coding · software-engineering · architecture · fundamentals


There's a chart from the early days of modern software engineering that predicts what's about to happen to a lot of teams shipping AI-generated code.

It's not about AI.

It's about when you find out you're wrong.

Back in the Boehm-era lifecycle studies, the industry put numbers to something everyone already felt -- fix a bad assumption early and it's cheap, fix it late and it's brutal.

The folk math goes like this: $1 to fix in design, $10 in development, $100 in testing, $1,000+ in production.

Don't get religious about the exact multipliers. The direction is what matters -- late discovery costs more because you're not "changing code," you're unwinding dependencies.

I've been building production software for over 30 years. Healthcare, retail, hospitality, transportation. Different industries, different stakes, same story.

And now AI has arrived, and people are acting like dependency math is optional.

The Cost-of-Change Curve Isn't a Suggestion

Let's get precise about what the research actually supports.

Early on, you can change assumptions without paying a toll. Nothing else depends on them yet.

Later on, an "easy fix" isn't easy, because you have to update the implementation, update tests, update documentation, update integrations, re-validate behavior -- sometimes with auditors, customers, or regulators.

A widely cited normalized set of multipliers looks more like requirements at ~1×, design at ~5×, code at ~10×, test at ~50×, post-release at ~100×+.

The internet loves the clean $1 → $10 → $100 ladder because it's memorable. Treat it as a mnemonic, not a promise.

The only point that matters is the one you can't cheat -- late mistakes are expensive because you have more to undo.
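To make the direction concrete, here's a back-of-envelope sketch in Python. The phase names and multipliers are the illustrative ~1×/5×/10×/50×/100× figures above, not measured values:

```python
# Illustrative phase multipliers from the normalized figures cited above.
# Treat them as a mnemonic for the direction of the curve, not a promise.
PHASE_MULTIPLIERS = {
    "requirements": 1,
    "design": 5,
    "code": 10,
    "test": 50,
    "post-release": 100,
}

def relative_fix_cost(base_cost: float, phase: str) -> float:
    """Cost to fix a defect discovered in `phase`, relative to requirements."""
    return base_cost * PHASE_MULTIPLIERS[phase]

# The same $500 assumption error, caught at different points in the lifecycle:
for phase in PHASE_MULTIPLIERS:
    print(f"{phase:>13}: ${relative_fix_cost(500, phase):>9,.0f}")
```

Swap in whatever multipliers you believe; the shape of the output is the point.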

The cost curve isn't a theory. It's gravity with better branding.

Why the Curve Exists

Some people hear "cost-of-change curve" and assume it's about process overhead. Too many meetings, too much documentation, too many approvals.

Wrong.

The curve exists because dependencies accumulate over time.

Early in a project, a mistake is isolated. You change a design decision, you update a couple files, you're done.

Later, that same mistake is braided into everything -- database schema, API contracts, UI workflows, caching behavior, metrics and dashboards, test suites, integrations, training materials, customer expectations.

Now you discover the original assumption was wrong. You're not fixing one thing anymore. You're unwinding a chain of dependencies, each of which has its own dependencies, plus whatever duct tape got applied along the way.

If the mistake makes it to production? Now you're dealing with real user data created under the old assumption, integrations expecting the old behavior, contracts, audit trails, compliance requirements, support tickets and angry customers who discovered your edge cases for you.

That's not bureaucracy. That's the bill coming due.

What "Vibe Coding" Actually Is

The term "vibe coding" got popularized in early 2025 -- leaning into AI-generated code and steering by intent, often without really reading or understanding what the model produced.

In practice it looks like this: You tell an AI "Build me a meal planning app." It generates thousands of lines across dozens of files. You say "Add auth." It wires up login flows and sessions. You click around, the happy path works, and you ship.

The promise is seductive -- anyone can build software now.

And to be fair, AI-assisted development is genuinely transformative. I use AI heavily. It's a massive productivity multiplier.

But there's a hard line between AI-assisted development -- you use AI to go faster, and you still understand what you're shipping -- and vibe coding -- you ship output you can't explain.

That line matters because the cost-of-change curve doesn't care how the code was written. It only cares when you discover the problems.

What Vibe Coding Could Be

Here's what frustrates me about the vibe coding discourse -- it could be genuinely useful, and a lot of people are using it exactly wrong.

If late mistakes are expensive, the correct move is obvious: Use AI to move the "moment of truth" earlier.

Vibe coding is fantastic at rapid validation.

If you're not sure what product to build, or you're unsure whether your workflow matches how people actually work, AI can get you to a realistic prototype in hours.

Not production-ready. Prototype-ready. That's a feature.

Because it means you can put something in front of users quickly, discover wrong assumptions in the design phase, iterate when changes are cheap.

I've done this myself. When I was building ThisWeekEats, AI helped me prototype interface and workflow variants fast.

Those prototypes weren't production code, and I never treated them that way. They were validation tools. That's vibe coding used correctly.

What Vibe Coding Turns Into in Production

Here's what actually happens.

Someone builds an app with AI in a weekend. It works when they click around. The happy path is smooth. They demo it to friends or investors. Everyone's impressed.

So they ship it.

No code review, because they don't really understand the code. No security audit, because they wouldn't know what to look for. No load testing, because it worked fine with one user. No edge-case testing, because they don't know what edge cases exist.

Then reality shows up.

The auth system stores passwords in plain text. The caching logic lies to users with stale data. The database queries work at 100 rows and melt at 10,000. The payment integration handles successful charges and fails weirdly on declines.
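Take the first failure. The fix is decades old and ships in Python's standard library. A minimal sketch using PBKDF2 via `hashlib` (the iteration count here is a placeholder you'd tune to current guidance, not a recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Store a per-user salt and a salted hash -- never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
assert verify_password("hunter2", salt, digest)
assert not verify_password("wrong-guess", salt, digest)
```

The point isn't that this snippet is hard. It's that you only write it if you know to ask for it.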

Now they're in the expensive zone.

Except it's worse, because they can't fix it efficiently. Every "fix" becomes another vibe coding session -- "Make it work." "Ok now make it faster." "Ok now make it secure." "Ok now stop corrupting data."

You're not building a system. You're stacking patches.

They skipped the entire middle of the lifecycle -- design review, code review, threat modeling, testing -- and landed directly in the most expensive place to discover defects: the customer's hands.

That's not "moving fast." That's hitting the wall faster.

AI Accelerates Whatever You Are

Here's the line I keep coming back to: AI accelerates whatever you are.

If you understand systems end-to-end -- how the frontend talks to the backend, how the backend talks to the database, how auth and caching work, how failures propagate, how integrations break -- AI makes you faster at all of it.

You can generate boilerplate instantly, explore approaches quickly, prototype in hours what used to take days.

If you don't understand those things, AI helps you dig a deeper hole at record speed.

And yes, this bites experienced people too. I've watched senior engineers ship something that looked complete but fell apart at the seams once it touched real legacy data.

The Industry Forgot the Fundamentals

How did we get here?

Boehm's work is from 1981. The Mythical Man-Month is from 1975. Code Complete first came out in 1993. These aren't obscure texts. They're classics.

But the industry has a short memory.

"Move fast and break things" became a mantra, and too many people stopped asking -- break things for whom?

Move fast on low-stakes internal tools? Sure. Move fast on systems that handle money, health, or critical operations? That's not bold. That's expensive.

Agile got adopted and then misread. "Respond to change" mutated into "skip design entirely." "Working software" mutated into "no docs, no tests, vibes only."

Yes, people have argued the curve flattened thanks to modern tooling, CI, automated tests, and iterative delivery. Tooling helps. Feedback loops help.

But dependencies still compound. And the biggest failures still show up when you discover an upstream mistake after the system is already in motion.

AI didn't repeal the fundamentals. It just made it easier to ignore them until the invoice arrives.

The Reckoning That's Coming

Here's what I think happens next.

Over the next couple years, a massive amount of AI-generated code will hit production. Some of it will be fine -- simple CRUD apps, limited scale, low stakes, internal tools.

But some of it will fail spectacularly.

Someone will ship a vibe-coded payments system and discover the edge cases the hard way. Someone will ship a health-related app with subtle logic bugs and learn what "liability" means. Someone will integrate a vibe-coded system with a legacy platform and watch it corrupt data that took years to accumulate.

The analogy I can't shake -- imagine building a house by asking AI to generate each room separately, then duct-taping the rooms together.

The living room looks great. The kitchen looks great. But the plumbing doesn't connect, the wiring is a fire hazard, and the foundation was designed for a different floor plan.

The photos look amazing. You still can't live there.

That's what's coming -- software that passes the demo and fails the deployment.

Who Survives

When all this breaks, someone has to fix it.

That someone understands how systems actually work together. They understand integration and legacy constraints, security as engineering not as a checklist, failure modes and operational reality, architecture as a living thing not as a diagram.

This is what I call the Conductor thesis -- AI is the instruments, not the orchestra.

A conductor doesn't play every instrument. But they understand how each one contributes to the whole, and they know when something sounds wrong.

Production software needs conductors. Not people who can merely prompt. People who can tell the difference between "it works" and "it will keep working."

Those people are about to get more valuable. Not because AI can't code. Because most codebases don't fail at coding. They fail at seams, assumptions, and reality.

How to Use AI Without Hitting the Wall

I'm not anti-AI. I'm not even anti-vibe coding. I'm anti-shipping things you can't maintain.

Here's how to use AI without face-planting into the cost curve:

Use vibe coding for validation, not production. Prototype fast. Put something in front of users quickly. Catch wrong assumptions early, when they're cheap.

Understand before you ship. If you can't explain why the code does what it does, you're not ready to ship it. Read what the AI generated. If that feels like work, remember -- it's way less work than debugging production issues in code you don't understand.

Have someone with end-to-end knowledge review the seams. Vibe coding fails at integration points -- where your assumptions meet external reality. If you don't have integration expertise, borrow it before you ship.

Test with real data and real edge cases. The happy path always works. Production is where malformed inputs, retries, concurrency, partial failures, and stale caches go to party.
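As one example of what the happy path never exercises: retries. A minimal sketch of a wrapper that retries transient failures and lets permanent errors surface immediately (the exception classes and backoff numbers are placeholder choices, not a recommendation):

```python
import time

# Assumed split between "try again" and "this is a bug" -- adjust per system.
RETRYABLE = (TimeoutError, ConnectionError)

def with_retries(op, attempts: int = 3, base_delay: float = 0.1):
    """Call `op` (a zero-argument callable, standing in for a real network
    call), retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return op()
        except RETRYABLE:
            if attempt == attempts - 1:
                raise  # out of attempts -- surface the failure
            time.sleep(base_delay * 2 ** attempt)

# Simulate a call that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient")
    return "ok"

assert with_retries(flaky) == "ok"
assert calls["n"] == 3
```

Note what the tuple at the top forces you to decide: which failures are a blip and which are a bug. Vibe-coded error handling tends to retry everything or nothing.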

Remember the curve exists. AI didn't repeal the cost-of-change curve. It just made it possible to hit the wall faster.


The Integration That Looked Complete

A private equity firm brought in a senior engineer to build a system for one of their portfolio companies. Strong resume. Big-name tech background. Deep expertise in AI and backend systems.

He shipped in six weeks. The client was thrilled. The interface was clean. Core features worked. The demo was smooth.

I was brought in later to assess the system's integration with the client's existing ERP -- a legacy platform they'd been running for nearly two decades.

The broker layer was a mess. AI-generated code that worked beautifully in isolation, but made assumptions that didn't hold with real data, real workflows, and real edge cases.

Some failures were loud. The dangerous ones were quiet -- silent failures and subtle data corruption.

The engineer wasn't stupid. He wasn't lazy. He was a specialist. He understood AI deeply. He understood backend systems deeply.

What he didn't understand was integration -- how to bridge a shiny new system to a 20-year-old platform full of accumulated business logic and undocumented edge cases.

AI helped him move fast. It also helped him build six weeks of work that needed substantial rework before it could safely run in production.

The cost-of-change curve doesn't care about your resume. It only cares when you find the problems.

Mini Checklist: Using AI Without Eating the Cost Curve

  • [ ] Prototype freely with AI -- design-phase validation means cheap mistakes, fast learning
  • [ ] Validate prototypes with real users before assuming you're "done"
  • [ ] Read AI-generated code before shipping -- if you can't explain it, you can't maintain it
  • [ ] Have someone with end-to-end understanding review seams and integration points
  • [ ] Test with real data and real edge cases -- bad inputs, retries, concurrency, partial failures
  • [ ] Treat "it works in the demo" as the starting point, not the finish line
  • [ ] Check auth and security explicitly -- models will take shortcuts if you let them
  • [ ] Budget time to turn prototype-quality code into production-quality code
  • [ ] Ask yourself -- if this breaks at 2am, do I understand it well enough to fix it?
  • [ ] Remember -- AI accelerates whatever you are, including your gaps

The curve is real. AI is real. Use one to navigate the other.


Further Reading

  • Barry W. Boehm, Software Engineering Economics (1981)
  • Fred Brooks, The Mythical Man-Month (1975)
  • Steve McConnell, Code Complete (1993; 2nd ed. 2004)
  • Andrej Karpathy (Feb 2025) on "vibe coding"