Glossary of the AI Agent Economy
These are the terms I use consistently across the book, the blog, and public talks. Some I coined. Some I operationalised from existing concepts. All of them carry specific technical or strategic meanings that do not always match how other writers use them. Use this page as the canonical reference.
Each entry is anchor-linkable — copy the link icon next to any term to deep-link into a definition.
Dependency Layer
The infrastructure, protocols, and services that AI agents cannot function without — the invisible layer everything in the agent economy depends on. Analogous to TCP/IP for the internet: intelligence runs on top of this layer but cannot generate it. Two complementary framings: (1) the philosophical frame — four dependencies agents cannot self-generate: trust, identity, physical attestation, governance; (2) the market-taxonomy frame — five infrastructure sub-segments tracked for market sizing: orchestration platforms, trust and attestation services, agent identity systems, monitoring tools, and inter-agent communication protocols. Both framings describe the same layer from different vantage points.
First used: Coined by Atin Agarwal in the thesis essay that became Chapter 2 of The AI Agent Economy (2026). Market-segment framing developed in Chapter 9, PRED-003.
Read the full dependency layer thesis — or see the market-segment framing in PRED-003 →
Trust Layer / Attestation Layer
The subset of the dependency layer concerned with verifying that an agent did what it claims to have done. It has three sub-layers: output verification (did it produce correct results?), process attestation (did it follow the right steps?), and identity authentication (is this the agent it claims to be?). All three are required for complete trust.
First used: Used by Atin Agarwal throughout The AI Agent Economy (Chapter 5). Builds on the book's canonical "Three layers of trust" framework — output verification, process attestation, identity authentication.
Why India will build the agent trust layer →
Cryptographic Attestation
A digitally signed, timestamped, tamper-evident record that links an agent's claimed actions to verifiable evidence. Signed with keys held in a secure element (e.g., an Ed25519 key pair), so any modification invalidates the signature. Distinct from log files: logs can be edited after the fact; a cryptographic attestation cannot be altered without detection.
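The tamper-evidence property can be sketched in a few lines. This is an illustrative stand-in, not the book's reference implementation: it uses Python's standard-library `hmac` with a symmetric key where a real attestation service would use an asymmetric Ed25519 signature from a secure element, and the key, agent ID, and field names are all invented for the example.

```python
import hashlib
import hmac
import json
import time

SECRET_KEY = b"demo-key"  # stand-in for a private key held in a secure element

def attest(agent_id: str, action: str, evidence: str) -> dict:
    """Produce a timestamped, signed record of a claimed action."""
    record = {
        "agent_id": agent_id,
        "action": action,
        "evidence": evidence,
        "timestamp": time.time(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(record: dict) -> bool:
    """Recompute the signature; any edit to the record makes this fail."""
    body = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["signature"], expected)

rec = attest("agent-42", "deployed_build", "sha256:abc123")
assert verify(rec)            # untouched record verifies
rec["action"] = "ran_tests"   # edit the record, as one could edit a log file
assert not verify(rec)        # the modification is detected
```

This is the whole distinction from logs in miniature: the record is still editable, but no edit survives verification.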
First used: Used by Atin Agarwal as the technical foundation for Physical Bridge, one of three venture paths in the portfolio.
Prediction: attestation becomes mandatory by 2030 →
One-Person Conglomerate
A business model where one operator runs 10+ vertical businesses simultaneously using AI agents for execution. Humans decide and orchestrate. Agents execute. The 80%+ of operational work that used to require employees is now done by inference. Not passive income — active businesses with customers, revenue, and agent-powered delivery. The thesis behind the AI Conglomerate venture path.
First used: Coined by Atin Agarwal in a January 2026 blog post and developed in Chapter 3 of The AI Agent Economy.
Read the original one-person conglomerate essay →
Dharma of Technology (Dharma Lens)
A lens for examining technology decisions through the ethical and philosophical framework of the Bhagavad Gita — centred on three concepts: dharma (duty to the people affected by what you build), nishkama karma (action performed without attachment to metrics), and viveka (discernment about when to automate and when to keep humans in the loop). Applied to AI agents: build systems that act with duty, not just with capability.
First used: Developed by Atin Agarwal across public essays and Chapter 8 of The AI Agent Economy. Canonical short form in the book's internal vocabulary is "Dharma lens".
Read the dharma of technology essay →
Svadharma (in AI context)
Sanskrit स्वधर्म — "one's own duty or path", from the Bhagavad Gita (3.35, 18.47). Applied to India's strategic positioning in the agent economy: India's svadharma is to build agent infrastructure rather than imitate Silicon Valley's model-building path. "Better is one's own dharma, though imperfect, than the dharma of another well performed."
First used: Applied to AI strategy by Atin Agarwal in Chapter 6 of The AI Agent Economy.
India's unique position in the dependency layer →
Agent Economy
The emerging economic system where AI agents perform work, transact with each other, and create value chains — distinct from the current human-labour economy. Characterised by a fundamentally different cost structure: the dominant cost line (human salaries) is replaced by API inference costs at a small fraction of that level, inverting the P&L shape of every business built on it.
First used: Central term of The AI Agent Economy, the first book in Atin Agarwal's ongoing series.
The AI Agent Economy — Book 1 →
Falsifiable Prediction
A prediction that includes specific measurable criteria and a timeframe, making it possible to definitively prove it true or false. Karl Popper's standard: a claim is scientifically meaningful only if it can be proven wrong. "AI will change everything" is unfalsifiable and intellectually empty. "By December 2028, three $100M+ breaches will be attributed to AI-generated code" is falsifiable.
First used: Applied to AI predictions by Atin Agarwal in Chapter 9 of The AI Agent Economy.
See all 15 falsifiable predictions →
Falsification Trigger
The specific condition under which a prediction is proven wrong. Every prediction in the AI Book Series includes an explicit falsification trigger — the intellectual honesty mechanism that distinguishes predictions from wishes. If the trigger condition is met, the annual verification review must say so publicly.
First used: Structural component of every prediction scorecard in The AI Agent Economy, Chapter 9.
See the 15 falsification triggers →
Prediction Scorecard
A structured format for technology predictions with six required components: statement, timeframe, measurable criteria, confidence level, evidence basis, and falsification trigger. Designed for unambiguous annual verification. The format used for all 15 predictions in The AI Agent Economy.
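The six-component structure can be expressed as a small record type. The field names below paraphrase the six required components; they are illustrative, not the book's exact schema, and the sample values are adapted from the attestation prediction mentioned elsewhere on this page.

```python
from dataclasses import dataclass

@dataclass
class PredictionScorecard:
    """One prediction, structured for unambiguous annual verification.

    Field names paraphrase the six required components; this is a
    sketch, not the book's published schema.
    """
    statement: str              # what will happen
    timeframe: str              # by when
    measurable_criteria: str    # how success is measured
    confidence_level: float     # stated up front, 0.0-1.0
    evidence_basis: str         # why this is believed now
    falsification_trigger: str  # the condition that proves it wrong

# Example with a deliberately checkable criterion (values illustrative)
pred = PredictionScorecard(
    statement="Attestation becomes mandatory for production agents",
    timeframe="by 2030",
    measurable_criteria="regulation or major-platform policy requires it",
    confidence_level=0.7,
    evidence_basis="trajectory of audit requirements for automated systems",
    falsification_trigger="2030 arrives with no such mandate in force",
)
assert 0.0 <= pred.confidence_level <= 1.0
```

The point of the structure is that every field is mechanically checkable at review time: a scorecard with any field left vague is not a prediction in this sense.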
First used: Format developed by Atin Agarwal for the AI Book Series prediction tracking system.
The 15-prediction scorecard →
Agent-Native
Describing job roles, business models, or workflows that exist only because of AI agents and have no human predecessor — as opposed to roles that are augmented or replaced by AI. Examples: agent fleet manager, agent output auditor, multi-agent orchestration architect, agent trust officer.
First used: Used by Atin Agarwal throughout The AI Agent Economy and in PRED-002.
Prediction: 5 new agent-native job categories by 2028 →
Vibe Coding
The practice of describing desired functionality in natural language, letting an AI model generate the code, and shipping it with minimal or no human review — prioritising speed and feel ("vibes") over verification. Responsible for systematic security anti-patterns: hardcoded secrets, missing input validation, phantom dependencies, over-permissive CORS, unsafe deserialisation.
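Two of the listed anti-patterns, hardcoded secrets and missing input validation, look like this in practice. A hypothetical minimal example, invented for this page rather than taken from the scanned codebase:

```python
import os

# Vibe-coded: ships what the model generated, unreviewed.
API_KEY = "sk-live-123456"  # anti-pattern: hardcoded secret in source

def delete_user(user_id):
    # anti-pattern: no validation, SQL built by string interpolation
    return f"DELETE FROM users WHERE id = {user_id}"

# Reviewed: input validated, query parameterised so user_id can
# never be interpreted as SQL by the database driver.
def delete_user_safe(user_id: int) -> tuple:
    if not isinstance(user_id, int) or user_id <= 0:
        raise ValueError("user_id must be a positive integer")
    return ("DELETE FROM users WHERE id = ?", (user_id,))

def api_key() -> str:
    # secret injected at deploy time, never committed to the repo
    return os.environ["API_KEY"]
```

Nothing in the vibe-coded version fails a demo, which is exactly why unreviewed generated code ships: the anti-patterns only surface under hostile input or a repository scan.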
First used: Term adopted and operationally defined by Atin Agarwal after scanning an AI-generated codebase and finding 406 issues in 35.2 seconds.
406 findings in 35.2 seconds — the vibe coding crisis →
Structural Profitability
The condition where a business's cost structure allows profitability at revenue levels too low for competitors in higher-cost environments to sustain. Distinct from "cheap" in that the advantage is architectural (lower base costs), not labour-based (lower wages). An Indian agent-economy founder is structurally profitable at ₹5–10L annual revenue where a US-based competitor is still burning capital.
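The advantage is arithmetic. A toy comparison at the same revenue level — the salary, tooling, and inference figures below are assumed for illustration, not taken from the book; only the ₹5–10L revenue band comes from the definition above:

```python
LAKH = 100_000  # ₹1L = 100,000 rupees

# Annual costs, illustrative figures only.
us_competitor_costs = 3 * 12_000_000       # three salaried engineers, ₹-equivalent
india_solo_costs = 4 * LAKH + 2 * LAKH     # inference/API spend plus tooling

revenue = 8 * LAKH  # inside the ₹5-10L band from the definition

solo_profit = revenue - india_solo_costs
us_profit = revenue - us_competitor_costs

# Same revenue: the solo operator is profitable, the high-cost
# competitor is deep in the red. The gap is architectural (no
# salary line at all), not a lower-wage version of the same P&L.
assert solo_profit > 0
assert us_profit < 0
```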
First used: Used by Atin Agarwal in Chapter 6 of The AI Agent Economy to describe India's structural cost advantage.
Economics of the one-person conglomerate →
Invisible Workforce
The distributed network of AI agents already performing business tasks (coding, security, sales, support, research, monitoring, writing, data analysis) at production scale. Invisible because it operates through API calls and background processes rather than physical presence. The thesis of Chapter 1: the workforce is already here, most people just can't see it.
First used: Coined by Atin Agarwal for Chapter 1 of The AI Agent Economy.
The AI Agent Economy →
Visionary Pattern
A five-phase intellectual authority playbook: predict specifically → educate simply → build what you predicted → point back → go abstract. The compounding credibility arc that separates lasting voices from passing commentators. Each phase requires the previous one.
First used: Framework developed by Atin Agarwal as the meta-structure of the AI Book Series.
The thesis driving the portfolio →
From the book
Every term in full context
These definitions are short by design. The AI Agent Economy, Book 1 of an ongoing series, develops each term across ten chapters — with the frameworks, the evidence, and the 15 falsifiable predictions that follow from them.