Security Research Hub

Secure Agentic AI By Design

Deep security research, architecture patterns, and executive guidance for teams building AI agents.

inkog-cli v1.0.4
$ inkog scan --agent ./finance-bot --framework langchain
Security Check:
  EU AI Act:    FAIL
  NIST AI RMF:  FAIL

Token Burn Attack Detection for LangChain Agents

Inkog detects Token Burning attacks in LangChain and LangGraph agents, where unbounded API loops drain your budget. Static analysis flags LLM API calls inside while True loops that lack exit conditions. CWE-770 (allocation of resources without limits) detection for LangChain, OpenAI, and enterprise AI applications.

Infinite Loop Detection for n8n Workflows

Inkog scans n8n no-code automation workflows for infinite loops in agentic systems. It detects missing termination guards, such as Max Revisions checks, in Writer-Reviewer agent cycles that cause stuck processes and 100% CPU drain. CWE-835 (loop with unreachable exit condition) detection for n8n, Flowise, and Langflow AI workflows.

Code Injection and RCE Detection for CrewAI Agents

Inkog traces data flow in CrewAI agents to detect unvalidated code execution vulnerabilities. It identifies dangerous patterns such as eval() calls fed user- or LLM-generated input without validation. CWE-94 (code injection) detection for CrewAI, AutoGPT, and Python AI agents.
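A minimal sketch of the data flow this check traces: model-controlled text reaching eval(), versus a restricted parse. The parse_llm_list helper is illustrative; the safe path uses the standard-library ast.literal_eval, which accepts only Python literals and cannot execute calls.

```python
import ast

llm_output = "[1, 2, 3]"  # imagine this string came back from a model

# Flagged pattern (CWE-94): eval() on model-controlled text runs arbitrary
# code, e.g. a reply of "__import__('os').system('rm -rf /')".
# result = eval(llm_output)

# Restricted parse: literal_eval raises on anything but plain literals.
def parse_llm_list(text: str) -> list:
    value = ast.literal_eval(text)     # no names, no calls, literals only
    if not isinstance(value, list):
        raise ValueError("expected a list literal")
    return value
```

The validation step is what the data-flow trace looks for between the LLM output and the execution sink; passing the raw string straight into eval() is the pattern that gets reported.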

The Four Pillars

Comprehensive security guidance organized by audience and use case.

Ready to Secure Your AI Agents?

Get started with Inkog Verify to scan your agents for security vulnerabilities before they reach production.