# inkog_scan
Scan files or directories for AI agent security vulnerabilities including prompt injection, infinite loops, and token bombing. Supports 20+ agent frameworks.
## Instructions
A security co-pilot for AI agent development. Scans for prompt injection, infinite loops, token bombing, SQL injection via LLM output, and missing guardrails. Supports LangChain, CrewAI, LangGraph, AutoGen, n8n, and 20+ other agent frameworks. Use this tool whenever building, reviewing, or deploying AI agents to catch security issues before they reach production.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| path | Yes | File or directory path to scan | |
| agent_name | No | Agent name for dashboard identification (auto-detected from path if not provided) | |
| policy | No | Security policy: `low-noise` (proven vulnerabilities only), `balanced`, `comprehensive` (all findings), `governance` (Article 14 focused), `eu-ai-act` (compliance mode) | balanced |
| output | No | Output format: `summary`, `detailed` (full findings), `sarif` (for CI/CD) | summary |
| filter | No | File filtering: `auto` (detect agent repos, adapt filtering), `agent-only` (aggressive filtering), `all` (no filtering) | auto |
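
As an illustration, a scan request combining these parameters might look like the following (the `path` and `agent_name` values are hypothetical; only `path` is required, and omitted fields fall back to the defaults above):

```json
{
  "path": "agents/support_bot/",
  "agent_name": "support-bot",
  "policy": "low-noise",
  "output": "sarif",
  "filter": "auto"
}
```

With `output` set to `sarif`, the resulting findings can be fed into CI/CD tooling that consumes the SARIF format.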