
406 Findings in 35.2 Seconds — The Vibe Coding Crisis No One Is Talking About

April 2026 · 7 min read · atin-agarwal.com

406 findings. 35.2 seconds.

That is what happened when I pointed an AI code quality scanner at a codebase that was largely AI-generated. Not a client's code — my own. A tool I built to catch exactly these problems, scanning itself.

75 security issues. 244 code smells. 53 test gaps. 27 architecture problems. 7 dependency vulnerabilities. All discovered in 35.2 seconds.

The code worked. It passed its tests. It ran in production. And it was riddled with problems that no human reviewer had caught — because no human reviewer was looking for the patterns AI leaves behind.

We have a word for how this code got written: vibe coding.

Prediction · April 2026

"By December 2028, at least 3 major security breaches — each exceeding $100 million in damages — will be publicly traced to vulnerabilities in AI-generated code. These incidents will trigger the first regulatory response specifically targeting AI code quality."

Deadline: December 31, 2028 · Measurable via: Publicly reported breaches with AI-generated code attribution, regulatory guidance from SEC/EU AI Act/CERT/OWASP

I do not make this prediction lightly. I make it because I have seen the evidence, and the evidence is damning.

What "Vibe Coding" Actually Produces

Here is what vibe coding looks like in practice: you describe what you want in natural language. The AI generates the code. You glance at it, maybe run it once, and ship it. The feedback loop is instant. The dopamine hit is real. You feel productive.

Now here is what vibe coding looks like under a microscope.

When I scanned that AI-generated codebase, the 406 findings broke down like this:

  • 244 code smells — structural problems that make code fragile, hard to maintain, and prone to regression
  • 75 security issues — vulnerabilities that an attacker could exploit, from hardcoded secrets to missing input validation
  • 53 test gaps — untested code paths, missing edge case coverage, functions that look correct but have never been verified
  • 27 architecture problems — systemic design flaws, tight coupling, dependency tangles that will compound over time
  • 7 dependency vulnerabilities — packages with known exploits sitting in the dependency tree

These are not random bugs. They are systematic. AI-generated code does not make the same kinds of mistakes humans make. It makes its own kinds of mistakes — and it makes them with remarkable consistency.

I have catalogued the patterns. Hardcoded secrets that should be environment variables. Input validation that is simply missing — not forgotten, never considered. Phantom dependencies: imports referencing packages that do not exist. Over-permissive CORS configurations that open the door to cross-origin attacks. Unsafe deserialization that trusts whatever data comes in.
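To make two of these patterns concrete, here is a minimal side-by-side sketch in Python. The names and values are hypothetical, invented for illustration; they are not from any scanned codebase.

```python
import json
import os
import pickle

# Anti-pattern: a hardcoded secret baked into source (placeholder value).
API_KEY = "sk-test-1234567890"

# Safer: read the secret from the environment and fail loudly if absent.
def get_api_key() -> str:
    key = os.environ.get("API_KEY")
    if not key:
        raise RuntimeError("API_KEY is not set")
    return key

# Anti-pattern: unsafe deserialization that trusts whatever bytes arrive.
def load_profile_unsafe(raw: bytes):
    return pickle.loads(raw)  # attacker-controlled bytes can execute code

# Safer: parse a constrained format and validate fields explicitly.
def load_profile(raw: bytes) -> dict:
    data = json.loads(raw)
    if not isinstance(data.get("name"), str):
        raise ValueError("invalid profile: 'name' must be a string")
    return {"name": data["name"]}
```

The fixes are not clever; they are boilerplate. That is the point: the AI omits them with the same consistency that a disciplined reviewer would insist on them.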

Traditional code review was designed for human error patterns: typos, logic mistakes, edge cases a developer forgot. AI errors are different. They are structural. They are repeatable. And they are invisible to reviewers who are not looking for them — which, right now, is nearly everyone.

Why This Will Get Worse Before It Gets Better

GitHub reported that more than 46% of code on its platform is now AI-generated. That number is from 2025. It is rising.

Nearly half of the code being written for the world's software is produced by systems that do not understand security. They do not understand context. They generate what is statistically likely, not what is safe.

And the gap between how fast we generate code and how fast we verify it is widening. Code generation has become almost instantaneous. Code quality assurance is still largely manual, still largely designed for human-written code, and still largely optional.

This is not new. Every major technology shift follows the same arc.

Cloud computing went mainstream around 2010. By 2012–2013, we saw a wave of major cloud breaches — companies that had moved fast, configured poorly, and learned expensive lessons about shared responsibility models.

Mobile applications exploded after 2012. By 2014–2016, mobile security was a crisis — insecure data storage, broken authentication, unprotected APIs. It took years and billions in breach damages before mobile security practices matured.

IoT devices proliferated through 2015–2017. By 2016–2019, botnets like Mirai were conscripting millions of insecure devices. The pattern was identical: rapid adoption, lagging security, inevitable reckoning.

AI-generated code is following the same curve. But the volume is orders of magnitude larger. When 46% of all code is AI-generated and rising, the attack surface is not just growing — it is exploding.

The $100 Million Question

Some will argue that $100 million per breach is hyperbolic. It is not.

AI-generated code is not confined to side projects and hackathons. It is in fintech applications that move money. In healthcare systems that handle patient data. In infrastructure software that controls critical operations. In SaaS products used by millions.

When — not if — a major breach is traced to AI-generated code, the forensic question will be simple: "Was this vulnerability in human-written code or AI-written code?" Companies will not be able to answer, because most of them are not tracking the distinction.

The damages will be real. Settlements. Regulatory fines. Customer attrition. Remediation costs. Stock price drops. For a fintech company processing billions in transactions, $100 million in total damages from a single breach is not an outlier — it is a baseline.

And the regulatory response is inevitable. The SEC is already expanding cybersecurity disclosure requirements. The EU AI Act is creating enforcement mechanisms for AI systems. CERT and OWASP are tracking emerging threat categories. When AI-generated code breaches start making headlines, regulators will act. They always do. The only question is how many breaches it takes.

My estimate: three major ones, closely spaced, before the regulatory machinery engages.

What to Do About It

I am not writing this to sell you something. I am writing this because the problem is real, it is urgent, and most engineering teams are not thinking about it.

Three things every team shipping AI-generated code should do now:

1. Scan differently. AI-generated code needs AI-specific scanning. Traditional static analysis tools (SAST) are designed to catch human mistake patterns. They miss AI anti-patterns because they were never trained to look for them. You need tools that understand the systematic error signatures AI models produce — hardcoded secrets, phantom dependencies, missing validation patterns. These are not edge cases. They are the norm.
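What "AI-specific scanning" can mean at its simplest: pattern signatures tuned to the anti-patterns named above. The toy scanner below is a sketch of the idea only — the two regex signatures are illustrative inventions, and a real tool would use AST analysis and taint tracking, not line regexes.

```python
import re

# Toy signatures for two AI anti-patterns: hardcoded secrets and
# wildcard CORS headers. Real scanners go far beyond regexes.
SIGNATURES = {
    "hardcoded secret": re.compile(
        r"""(api[_-]?key|secret|token)\s*=\s*["'][^"']+["']""", re.IGNORECASE
    ),
    "wildcard CORS": re.compile(
        r"""Access-Control-Allow-Origin['"]?\s*[:,]\s*['"]\*"""
    ),
}

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, finding_name) pairs for each matched signature."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in SIGNATURES.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings
```

Even this crude version catches things a human skims past, because it never gets tired and never assumes the AI "probably handled it."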

2. Audit your AI code ratio. Do you know what percentage of your production codebase was AI-generated? Most companies do not. That is like not knowing what percentage of your building was constructed without a licensed contractor. The first step is visibility — understand what you are running before you can assess the risk.
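One low-effort way to get that visibility, sketched here under an assumption: that your team marks AI-assisted commits with a message trailer. The `AI-Assisted: true` trailer is a hypothetical convention, not an existing git standard — the point is that the ratio is trivial to compute once the attribution exists at all.

```python
def ai_commit_ratio(commit_messages: list[str]) -> float:
    """Fraction of commits carrying a (hypothetical) AI-attribution trailer."""
    if not commit_messages:
        return 0.0
    tagged = sum(1 for msg in commit_messages if "AI-Assisted: true" in msg)
    return tagged / len(commit_messages)
```

Feed it the output of `git log --format=%B` split on commit boundaries and you have a first-order estimate — rough, but infinitely better than not knowing.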

3. Stop trusting. Start verifying. Vibe coding is fine for prototypes. It is fine for exploring ideas. It is not fine for production code that handles money, data, or critical operations. The mantra should be simple: AI writes, humans verify, automated scanning catches what humans miss. Any other workflow is a liability.

I built a tool that does exactly this kind of AI-specific code scanning. More on that another time.

The Bet

Let me restate this clearly.

By December 31, 2028, at least 3 major security breaches — each causing more than $100 million in damages — will be publicly attributed to AI-generated code. At least one regulatory body will issue formal guidance or regulation specifically addressing AI code quality.

I will revisit this prediction publicly every year. I will track the evidence for and against. If I am wrong, I will say so — openly, with the same specificity I use to make the claim.

But I do not think I will be wrong. The code is already compromised. The scanning infrastructure does not exist at scale. And the volume of AI-generated code is doubling while the industry debates whether this is even a problem.

The history of technology security is unambiguous: speed without verification always ends in crisis. Cloud taught us this. Mobile taught us this. IoT taught us this.

Vibe coding will teach us this next.

I am writing a book that goes deeper — on why AI-generated code is not just a security problem but a structural shift that will reshape the entire software industry. Follow for prediction tracking updates, or subscribe for early access when the book launches.

prediction vibe-coding ai-code-quality security practitioner-truth
