
Vibe Coding vs AI-Assisted Development: Why Human Theory Still Matters

Varun Srinivas
27 August 2025 · 5 min read

AI is changing how we write code. From autocomplete to full-function generation, developers are increasingly working alongside large language models. It feels fast. It feels powerful. And for many, it feels like we’re entering a new era of software creation.

One of the flashiest trends in this space is something called “vibe coding.” It’s all over social media, and it looks effortless. You describe what you want, the AI writes the code, and somehow, things just work.

But does this approach hold up beyond demos and prototypes?

In this piece, we’ll explore what vibe coding really means, why it’s exciting, and where it starts to break down. Most importantly, we’ll make a case for why human theory, the mental model a developer builds about a system, still matters more than ever.

What Is Vibe Coding?

Lately, there’s been a lot of buzz about “vibe coding.” Coined by AI researcher Andrej Karpathy, the term refers to a style of programming where you “fully give in to the vibes, embrace exponentials, and forget that the code even exists.”

How It Works

In vibe coding, you describe what you want to an AI, often a large language model, accept its code suggestions with minimal review, and prompt again whenever something breaks. Enthusiasts have shown it’s possible to build apps and websites this way. Even non-programmers can get working software “just by typing prompts into a text box.”

As The New York Times put it, you “don’t have to know how to code to vibe code, just having an idea… is usually enough.”

In Karpathy’s own words, “it’s not really coding… I just see stuff, say stuff, run stuff, and copy-paste stuff, and it mostly works.”

Why It’s Popular

This approach is incredibly fun and fast for low-stakes projects. Why not let an AI churn out 100x more code than you could write yourself for a quick prototype? Not surprisingly, vibe coding has exploded in popularity and even hit mainstream media.

Where It Gets Risky

As an engineering leader, I want to draw a critical distinction: using AI to assist coding is not the same as vibe coding. In my view, there’s a world of difference between blindly accepting whatever the AI writes versus strategically harnessing AI to implement a design you, the human, have carefully thought through.

Why Theory Still Matters

The key is who holds the “theory” of the program, the mental model of how it works and why. That theory must live in the developer’s mind, not just in the AI’s output. If we lose sight of that, we risk creating heaps of code that work today but become unmaintainable tomorrow.

Vibe Coding: Great for Prototypes, Risky for Production

Vibe coding might feel magical when you’re racing toward a demo or spinning up a side project. The speed is real, and the thrill of watching code appear from thin air is hard to ignore. For quick wins, personal tools, or throwaway experiments, it works. You get something usable without worrying too much about maintainability.

Why It Fails in Long-Term Projects

But once that same code enters systems meant to last, the cracks begin to show. Karpathy himself notes vibe coding works best for “weekend projects.” Steve Krouse puts it bluntly: “It’s only legacy code if you have to maintain it.” That kind of tech debt is tolerable in a prototype, especially if you get a quick win and do not plan to revisit the code.

When developers ship code they barely understand, it’s like spending on an unlimited credit card and ignoring the bill. Eventually, someone pays. That level of abstraction may be fine when the stakes are low. But in production, where the code needs to evolve, scale, and stay secure, not fully understanding what the AI generated becomes a real liability.

Security and Accountability Risks

There is also the matter of accountability and security. AI-generated code can introduce subtle bugs or vulnerabilities that a human might catch if they were writing or reviewing the code line by line. When developers use AI without fully understanding the output, they may unknowingly ship errors or security flaws.

This is why vibe coding in production settings concerns many of us. It feels like flying on autopilot with zero visibility. That may be fine on a sunny day for a short hop, but it is not a safe way to cross the Atlantic.

Ultimately, vibe coding is not a shortcut to professional-quality software. It is a useful sandbox for experimentation and a valuable learning tool, but it does not replace the hard work of engineering when reliability and longevity matter.

Programming as Theory-Building: The Human Factor

To understand why careless AI-generated code becomes such a liability, we need to step back and look at what programming really is. Beneath the syntax and tooling, programming is about building an understanding, a mental model of how a system works. This idea has been around for decades, but it’s especially relevant in a world where AI can generate code without ever grasping what it means.

Programming Is Not Just Code

Decades ago, computer scientist Peter Naur argued that programming is fundamentally theory building, not just the act of producing code.

What he meant was that the real value of a software project lies in the internal theory that developers construct in their minds. Source code and documentation are merely partial expressions of that mental model.

As one modern essay puts it, “The source code is merely a written representation of this theory, and like all representations, it’s lossy.” Critical design choices, trade-offs, and intentions often live only in the heads of the people who built the system.

In short, a program is not its code. A program is the knowledge and reasoning that explain how the code works.

What Happens When Theory Is Lost

When the human-held theory disappears, the code starts to lose meaning. Naur observed that when the original team disbands, the “theory of the program” often vanishes. The software might still run, but no one knows why it was built that way or how to modify it safely.

Anyone who has maintained an old, undocumented system knows this feeling. You have the code, but no context. Every change becomes a risky guessing game.

Unchecked AI-generated code introduces that same risk. If a language model produces a block of code that no team member understands, it lacks theory. There was no reasoning behind it, only statistical prediction.

LLM-generated code isn’t just theory-less. It’s nobody’s theory. The AI does not understand the problem domain. And if the developer does not either, the result rests on a fragile, unexamined foundation.

Why Shared Mental Models Matter

Software teams work best when they share a clear theory of the system. That shared understanding helps them make informed changes because they see how each part fits into the whole.

But when code is added without that theory, or worse, when no theory exists at all, the system becomes inconsistent. You start seeing modules that function in isolation but do not conceptually align with the rest of the codebase.

Over time, these mismatches build up. The project becomes a collection of disjointed implementations that no one fully understands. That is technical debt in its purest form: code without comprehension.

The Human Factor Still Matters

If vibe coding continues to scale without guardrails, we risk producing more and more theory-less code. That leads to bloated, brittle systems that no one can confidently maintain.

In contrast, when a senior developer writes or integrates code, they are usually doing so with an overarching theory in mind, whether explicitly articulated or not.

That is the human factor we must preserve, even as AI tools become more powerful. Without that layer of understanding, we are simply piling code on top of code and hoping it does not collapse later.

AI as a Coding Partner, Not a Replacement

None of this is to say we should shun AI coding tools, far from it. As an engineering lead, I’m excited about using LLMs to boost productivity and handle the drudge work. The point is that we must remain the directors of what the code should do and how it should be structured.

AI Is Like a Junior Developer, Not an Architect

Think of an AI pair programmer as a super-smart but junior developer on your team. They can generate code at lightning speed and possess encyclopedic knowledge of frameworks, but they lack true understanding and sound judgment.

As Karpathy joked, it is like having an “over-eager junior intern” who sometimes bullshits and has “little to no taste for good code.” You, the senior engineer, are there to keep this eager helper in check by reviewing every change, insisting on quality, and steering the implementation toward your design vision.

Code Review Still Matters

Using AI responsibly means never leaving the code unread. My rule, and one shared by many colleagues, is that I will not accept or commit any AI-generated code unless I fully understand it and can explain it to another human.

If the AI suggests a solution, I treat it like a pull request from a teammate. I read through it, think about edge cases, run tests, and evaluate whether it fits our requirements. Often, the AI’s output is a draft that I revise to match the system’s architecture and style. The AI saves me keystrokes, not thought.
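As a minimal illustration of that review habit, here is a hypothetical helper an assistant might draft (the function and its edge cases are invented for this sketch, not from any real codebase). The point is that the human probes edge cases before the suggestion is accepted:

```python
# Hypothetical example: an AI assistant drafted this helper. Before
# committing, we review it like a teammate's pull request and probe
# the edge cases ourselves rather than trusting the output blindly.

def normalize_email(raw: str) -> str:
    """Lowercase and trim an email address (AI-drafted, human-reviewed)."""
    return raw.strip().lower()

# Edge cases checked by hand before accepting the suggestion:
assert normalize_email("  Alice@Example.COM ") == "alice@example.com"
assert normalize_email("bob@test.org") == "bob@test.org"
assert normalize_email("") == ""  # empty input should not raise
```

The assertions are the part the AI cannot do for you: they encode what you believe the function should guarantee, which is exactly the theory this piece argues must live in a human head.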

As Simon Willison put it, if you thoroughly review and test what the LLM produces, “that’s not vibe coding, it’s software development.” The AI becomes just another tool in the stack, like an IDE or a compiler.

Prompting with Intent, Not Passivity

The real skill for developers going forward will be integrating AI into the workflow without losing the mental model of the system. This means prompting and guiding LLMs in a way that aligns with the design you have already thought through.

For example, a senior engineer might sketch out a module’s interface and invariants, then ask the AI to generate boilerplate or an initial implementation. That output is then reviewed against the mental model they already hold.
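A sketch of what "interface and invariants first" can look like in practice (the `Transfer` type and its rules are hypothetical, chosen only to illustrate the workflow): the human writes the contract, and only the fill-in work is delegated to the AI.

```python
# Hypothetical sketch: the human fixes the interface and invariants
# up front; the AI is asked to generate boilerplate and callers, and
# its output is reviewed against this contract.

from dataclasses import dataclass


@dataclass(frozen=True)
class Transfer:
    """A money transfer.

    Invariants: amount_cents is strictly positive, and the source
    and destination accounts differ.
    """
    source: str
    dest: str
    amount_cents: int

    def __post_init__(self) -> None:
        # Enforced at construction, no matter who (or what) wrote the caller.
        if self.amount_cents <= 0:
            raise ValueError("amount_cents must be positive")
        if self.source == self.dest:
            raise ValueError("source and dest must differ")
```

Because the invariants are enforced in code, any AI-generated caller that violates the design fails loudly at review time instead of silently corrupting the system later.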

We remain responsible for architectural decisions, security concerns, performance trade-offs, and other context-heavy choices. The AI can help explore options or accelerate repetitive tasks, but the human decides what makes sense.

Human-in-the-Loop Is the Real Advantage

Adopting this human-in-the-loop approach gives us the best of both worlds. You get faster output with better code quality. The speed of AI is real, but it does not come at the cost of thoughtful software engineering.

Many of us experiment with vibe-style coding on toy projects to explore how far these tools can go. That exploration builds intuition. It helps us understand where LLMs are useful, where they fall short, and how to apply them more intentionally in real projects.

The bottom line is that AI will not replace developers. Developers who know how to use AI well will replace those who do not. The value of a senior engineer today is not in typing every semicolon by hand. It is in making high-level decisions, understanding the business domain, preserving conceptual integrity, and using the right tools, including AI, to bring that vision to life.

The Future Is Human-Guided, Not AI-Driven

We are entering a new age of software development where English might become the most widely used programming language. Anyone with an idea can now create software by chatting with an AI. This shift lowers the entry barrier and opens the door for countless new innovators.

But for those of us leading real engineering efforts, we need to remember what our role is. We are not paid to produce lines of code. We are paid to deliver solutions, and that requires understanding. Code without comprehension has no value.

Whether the code comes from a person or a machine is irrelevant. What matters is that a capable human has wrestled with the problem, shaped the solution, and built a mental model they can carry forward.

Programming may evolve in tools and syntax, but the essence remains unchanged. It is still about thinking clearly, making decisions, and understanding the system deeply.

Conclusion

As long as we keep that human-centered practice at the core of our work, AI will remain a powerful ally, not a liability. Programming is still about the brain, not the silicon. The tools we use may change, but the responsibility of understanding, judgment, and conceptual clarity will always rest with us.

AI can speed us up, offer shortcuts, and even surprise us with clever suggestions. But it cannot replace the thinking, the reasoning, or the deep context that real software demands.

So go ahead and use AI coding assistants to supercharge your workflow. Let them handle the boilerplate, suggest ideas, and explore options. Just remember to stay in control. Keep your hands on the wheel, your architect’s hat on, and your mental model intact.

Vibe with your AI, but never stop building the theory that makes your software truly work.
