Agentic AI Risk Taxonomy

Comprehensive catalog of security vulnerabilities unique to AI agents. From token bombing to infinite loops, understand the attack surface of autonomous systems.

Understanding Agentic AI Risks

AI agents operate autonomously, making decisions and taking actions without constant human oversight. This autonomy introduces unique security challenges that traditional application security frameworks weren’t designed to address.

Unlike conventional software, AI agents can:

  • Generate and execute code based on natural language instructions
  • Access external resources like APIs, databases, and file systems
  • Make sequential decisions that compound in unexpected ways
  • Process and act on untrusted input from users and external sources

Vulnerability Categories

Token Bombing (CWE-770)

Token bombing occurs when an AI agent enters an unbounded loop of LLM API calls, rapidly consuming tokens and incurring massive costs.

Attack Pattern:

# Vulnerable pattern: no iteration cap, no token budget, no exit condition
while True:
    response = llm.generate(user_input)  # each pass consumes more tokens
    history.append(response)             # history grows without bound

Impact:

  • Cloud bills exceeding $10,000/hour
  • Service degradation and outages
  • Resource exhaustion attacks

Mitigation:

  • Implement token budgets per session
  • Add iteration limits to all loops
  • Monitor and alert on API call rates
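
A minimal sketch combining these mitigations, assuming a generic client whose responses expose a token count and a completion flag (the usage and is_final attributes are illustrative, not a specific SDK):

MAX_ITERATIONS = 20      # hard cap so the loop can never run unbounded
TOKEN_BUDGET = 50_000    # per-session token budget

def run_session(llm, user_input):
    history = []
    tokens_used = 0
    for _ in range(MAX_ITERATIONS):
        response = llm.generate(user_input)
        tokens_used += response.usage.total_tokens  # illustrative usage field
        if tokens_used > TOKEN_BUDGET:
            raise RuntimeError("Token budget exceeded; aborting session")
        history.append(response)
        if response.is_final:  # illustrative termination flag
            break
    return history

Raising on the budget path also gives monitoring something concrete to alert on, per the third mitigation.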

Infinite Loops (CWE-835)

Multi-agent systems are susceptible to infinite loops when agents continuously delegate tasks to each other without termination conditions.

Common Scenarios:

  • Writer-Reviewer cycles without max revision limits (see the bounded sketch after this list)
  • Recursive task decomposition without depth limits
  • Agent handoffs without cycle detection
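
As a sketch of the fix for the first scenario, a writer-reviewer cycle can be given an explicit revision cap; the writer and reviewer agent objects here are hypothetical stand-ins for your framework's equivalents:

MAX_REVISIONS = 3  # termination guard for the writer-reviewer cycle

def write_with_review(writer, reviewer, task):
    draft = writer.run(task)
    for _ in range(MAX_REVISIONS):
        feedback = reviewer.run(draft)  # assumed to expose .approved and .notes
        if feedback.approved:           # explicit exit condition
            return draft
        draft = writer.run(task, feedback=feedback.notes)
    return draft  # revision budget exhausted: stop rather than loop forever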

Detection: Inkog’s static analysis detects potential infinite loops by analyzing:

  • Control flow graphs for cycles
  • Agent communication patterns
  • Missing termination guards
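
Cycle detection over such a graph is classically done with a depth-first search that flags back edges. A generic illustration of the idea (not Inkog's implementation):

def has_cycle(graph: dict[str, list[str]]) -> bool:
    WHITE, GRAY, BLACK = 0, 1, 2
    nodes = set(graph) | {t for targets in graph.values() for t in targets}
    color = {n: WHITE for n in nodes}

    def visit(node: str) -> bool:
        color[node] = GRAY              # node is on the current DFS path
        for nxt in graph.get(node, []):
            if color[nxt] == GRAY:      # back edge: a cycle exists
                return True
            if color[nxt] == WHITE and visit(nxt):
                return True
        color[node] = BLACK             # fully explored, no cycle through here
        return False

    return any(color[n] == WHITE and visit(n) for n in nodes)

# A Writer-Reviewer handoff graph with no termination guard:
print(has_cycle({"writer": ["reviewer"], "reviewer": ["writer"]}))  # True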

Code Injection (CWE-94)

AI agents that execute generated code are vulnerable to code injection attacks, where malicious input causes the agent to execute harmful code.

Attack Vectors:

  • Prompt injection leading to code generation
  • Unsanitized LLM output passed to eval() or exec()
  • Shell command injection through agent tools

Example Vulnerability:

# Dangerous: LLM output directly executed
command = llm.generate(user_prompt)  # attacker-controlled via prompt injection
result = eval(command)               # remote code execution (RCE) vulnerability
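
A safer pattern never passes model output to eval() or exec(); instead it maps the output onto an allowlist of vetted functions. A sketch (the two tools are hypothetical placeholders):

def get_weather(city: str) -> str:   # hypothetical vetted tool
    return f"Weather for {city}"

def search_docs(query: str) -> str:  # hypothetical vetted tool
    return f"Results for {query}"

ALLOWED_TOOLS = {"get_weather": get_weather, "search_docs": search_docs}

def dispatch(tool_name: str, argument: str) -> str:
    tool = ALLOWED_TOOLS.get(tool_name.strip())
    if tool is None:
        raise ValueError(f"Refusing unknown tool: {tool_name!r}")
    return tool(argument)  # the model picks a name; it never supplies code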

Data Exfiltration (CWE-200)

Agents with access to sensitive data can be manipulated into leaking information through their responses or external tool calls.

Risk Factors:

  • Agents with database access
  • RAG systems with sensitive documents
  • Tools that can make external HTTP requests (see the egress sketch below)
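
For the last risk factor, a simple egress control is a host allowlist on the HTTP tool, so a manipulated agent cannot ship data to an attacker-controlled server. A sketch (the allowed hosts are examples):

import urllib.request
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.internal.example.com", "docs.example.com"}  # example hosts

def safe_fetch(url: str) -> str:
    host = urlparse(url).hostname
    if host not in ALLOWED_HOSTS:
        raise PermissionError(f"Blocked egress to untrusted host: {host}")
    with urllib.request.urlopen(url, timeout=10) as response:
        return response.read().decode()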

CWE/OWASP Mapping

Vulnerability       CWE ID    OWASP Category
Token Bombing       CWE-770   A04:2021 Insecure Design
Infinite Loops      CWE-835   A04:2021 Insecure Design
Code Injection      CWE-94    A03:2021 Injection
Data Exfiltration   CWE-200   A01:2021 Broken Access Control
Prompt Injection    CWE-77    A03:2021 Injection

Framework-Specific Risks

LangChain / LangGraph

  • Uncontrolled tool execution
  • Memory injection vulnerabilities
  • Graph cycle vulnerabilities
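
Graph cycles in LangGraph can be capped with the recursion_limit config key; once the limit is hit, invocation raises GraphRecursionError instead of looping forever. A sketch assuming a recent LangGraph release (the router here is deliberately buggy and always loops):

from typing import TypedDict
from langgraph.graph import StateGraph, END
from langgraph.errors import GraphRecursionError

class State(TypedDict):
    count: int

def work(state: State) -> State:
    return {"count": state["count"] + 1}

builder = StateGraph(State)
builder.add_node("work", work)
builder.set_entry_point("work")
# Buggy router: always loops back to "work", never routes to END
builder.add_conditional_edges("work", lambda s: "work", {"work": "work", "end": END})
app = builder.compile()

try:
    app.invoke({"count": 0}, config={"recursion_limit": 10})
except GraphRecursionError:
    print("Cycle stopped by recursion_limit")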

CrewAI

  • Inter-agent trust assumptions
  • Role escalation through delegation
  • Shared memory tampering
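
Two CrewAI Agent parameters speak directly to these risks: allow_delegation limits delegation-based role escalation, and max_iter bounds the agent's internal loop. A sketch per CrewAI's documented constructor (values illustrative):

from crewai import Agent

analyst = Agent(
    role="Analyst",
    goal="Summarize the quarterly report",
    backstory="A careful analyst with a narrowly scoped mandate.",
    allow_delegation=False,  # block delegation-based role escalation
    max_iter=5,              # bound the agent's reasoning loop
)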

n8n / Flowise

  • Workflow infinite loops
  • Credential exposure in nodes
  • Unvalidated webhook inputs
