AI-generated code will cause at least three major security breaches by 2028, each exceeding $100 million in damages.
Timeframe
December 2028
Confidence
Near certain (5/5)
Category
Technology
How I'll know
At least three major security breaches — each causing $100M+ in direct financial damages (settlements, fines, remediation, lost revenue) — publicly attributed to vulnerabilities in AI-generated code. Common patterns: hardcoded secrets, missing input validation, phantom dependencies, over-permissive CORS, unsafe deserialisation. At least one regulatory body issues formal guidance addressing AI-generated code risks.
Why I believe it
The V5 vibe code quality scanner reported 406 findings in 35.2 seconds on its own AI-generated codebase — 75 security issues, Grade D. Forty-six percent of code on GitHub is AI-generated, and most of it ships without AI-specific security scanning. OWASP has no dedicated AI-generated code vulnerability taxonomy. The conditions for major breaches are already in place. Every major tech shift (cloud, mobile, IoT) produced security crises within 2–3 years of mass adoption. AI-generated code reached mass adoption in 2024–2025.
What would make me wrong
If, by December 2028, fewer than two breaches are publicly attributed to AI-generated code, or if no regulatory body addresses AI code quality in any formal guidance, this prediction is wrong.
This is the falsification trigger. If this condition is met, the annual verification review will say so publicly.
Read the full analysis
Full reasoning is in Book 1, Chapter 9
The AI Agent Economy develops each of the 15 predictions from the frameworks built across the preceding eight chapters — the dependency layer thesis, agent economics, the trust problem, India's structural advantages, and the dharma framework for ethical building.