Inkog is a security co-pilot for AI agent development, providing real-time vulnerability scanning, governance verification, and compliance analysis directly within your AI coding environment (Claude Desktop, Cursor, Claude Code).
Static & Deep Security Scanning: Analyze AI agent code for prompt injection, infinite loops, token bombing, SQL injection via LLM, missing guardrails, and complex logic flaws across 20+ frameworks (LangChain, CrewAI, LangGraph, AutoGen, n8n, Flowise, etc.).
Governance Verification: Validate that AGENTS.md declarations match actual code behavior (e.g., "read-only declared but code writes data"), essential for EU AI Act Article 14 compliance.
Compliance Reporting: Generate reports mapped to EU AI Act, NIST AI RMF, ISO 42001, and OWASP LLM Top 10, with customizable output formats (SARIF, Markdown, JSON, PDF).
MCP Server Auditing: Security audit any MCP server from registries or GitHub before installation, checking for tool poisoning, privilege escalation, and data exfiltration risks.
Skill Package Scanning: Scan SKILL.md packages and agent tool definitions for vulnerabilities mapped to the OWASP Agentic Top 10 and OWASP MCP Top 10.
Multi-Agent System Auditing: Analyze Agent-to-Agent (A2A) communications for infinite delegation loops, privilege escalation, data leakage, and unauthorized handoffs.
MLBOM Generation: Create a Machine Learning Bill of Materials listing all AI models, tools, data sources, and dependencies in CycloneDX or SPDX format, optionally including known vulnerabilities.
Finding Explanation & Remediation: Get plain-English explanations and step-by-step remediation guidance for specific security findings.
Inkog MCP Server
Security companion for AI agent development in Claude, Cursor, and Claude Code.
Ask your AI pair-programmer to build an agent. Inkog checks it as you code — scanning for vulnerabilities, explaining findings in plain English, verifying AGENTS.md governance, and auditing agent-to-agent delegation. All inside the same conversation, no context switch.
Available in Claude Desktop, Cursor, Claude Code, ChatGPT, and any MCP-compatible client.
The Dev-Flow Loop
Inkog is designed to live inside the conversation where you build the agent — not as a post-hoc gate:
Ask Claude to build a piece of agent logic.
Ask Claude to scan it with Inkog — "Scan this with Inkog and show me any CRITICAL or HIGH findings."
Ask Claude to explain each finding in plain English — "Explain the top finding. What's the risk, and how do I fix it?"
Ask Claude to apply the fixes. Review the diff, approve, re-scan.
Before shipping, verify governance —
"Verify my AGENTS.md against the code" and "Audit the agent-to-agent delegation".
Read the full walkthrough: Building Secure AI Agents with Claude Code and the Inkog MCP.
Recommended prompts
"Scan the current directory with Inkog and show me any CRITICAL or HIGH findings."
"Explain the top finding in plain English. What's the risk, and how do I fix it?"
"Verify my AGENTS.md against the code."
"Audit the agent-to-agent delegation in this crew."
"Run a compliance report and map the findings to EU AI Act Articles 12, 14, and 15."
"Audit the MCP servers I'm integrating with."
When to Use Inkog
Building an AI agent — Scan during development to catch infinite loops, prompt injection, and missing guardrails before they ship
Adding security to CI/CD — Add inkog-io/inkog@v1 to GitHub Actions for automated security gates on every PR
Preparing for EU AI Act — Generate compliance reports mapping your agent to Article 14, NIST AI RMF, OWASP LLM Top 10
Reviewing agent code — Use from Claude Code, Cursor, or any MCP client to get security analysis while you code
Auditing MCP servers — Check any MCP server for tool poisoning, privilege escalation, or data exfiltration before installing
Verifying AGENTS.md — Validate that governance declarations match actual code behavior
Building multi-agent systems — Detect delegation loops, privilege escalation, and unauthorized handoffs between agents
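For the CI/CD case, a minimal workflow might look like the sketch below. The action name inkog-io/inkog@v1 comes from this README, but the input names (api-key) and secret name are illustrative assumptions — check the action's own documentation for the real interface.

```yaml
# Hypothetical workflow: fail the PR if Inkog finds issues.
name: Inkog Security Gate
on: [pull_request]

jobs:
  inkog:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # inkog-io/inkog@v1 is named in this README; the "with"
      # inputs below are assumptions for illustration.
      - uses: inkog-io/inkog@v1
        with:
          api-key: ${{ secrets.INKOG_API_KEY }}
```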
What Inkog Does
Logic Flaw Detection: Find infinite loops, recursion risks, and missing exit conditions
Security Analysis: Detect prompt injection paths, unconstrained tools, and data leakage risks
AGENTS.md Governance: Validate that code behavior matches governance declarations
Compliance Reporting: Generate reports for EU AI Act, NIST AI RMF, OWASP LLM Top 10
MCP Server Auditing: Audit any MCP server before installation
Multi-Agent Analysis: Audit Agent-to-Agent communications for logic and security issues
Installation
Claude Desktop
Add to your claude_desktop_config.json:
```json
{
  "mcpServers": {
    "inkog": {
      "command": "npx",
      "args": ["-y", "@inkog-io/mcp"],
      "env": {
        "INKOG_API_KEY": "sk_live_your_api_key"
      }
    }
  }
}
```
Cursor
Add to your Cursor MCP settings:
```json
{
  "mcpServers": {
    "inkog": {
      "command": "npx",
      "args": ["-y", "@inkog-io/mcp"],
      "env": {
        "INKOG_API_KEY": "sk_live_your_api_key"
      }
    }
  }
}
```
Global Installation
```shell
npm install -g @inkog-io/mcp
```
Getting Your API Key
Sign up for free at app.inkog.io
Copy your API key from the dashboard
Set it as the INKOG_API_KEY environment variable
Available Tools
P0 - Core Analysis (Essential)
| Tool | Description |
| --- | --- |
| inkog_scan | Static analysis for logic flaws and security risks |
| inkog_verify_governance | Validate AGENTS.md declarations match actual code behavior |
P1 - Enterprise Features
| Tool | Description |
| --- | --- |
| inkog_compliance_report | Generate EU AI Act, NIST, OWASP compliance reports |
| inkog_explain_finding | Get detailed remediation guidance for findings |
| inkog_audit_mcp_server | Audit any MCP server before installation |
| inkog_generate_mlbom | Generate ML Bill of Materials (CycloneDX, SPDX) |
P2 - Multi-Agent Analysis
| Tool | Description |
| --- | --- |
| inkog_audit_a2a | Audit Agent-to-Agent communications |
Tool Details
inkog_scan
Static analysis for AI agent code - finds logic flaws and security risks.
Arguments:
path (required) File or directory path to scan
policy (optional) Analysis policy: low-noise, balanced, comprehensive, governance, eu-ai-act
output (optional) Output format: summary, detailed, sarif
Example: "Scan my LangChain agent for logic flaws"
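Over the wire, a scan request is an ordinary MCP tools/call. The tool name and argument names below come from this README; the JSON-RPC envelope is the standard MCP request shape, and the path value is hypothetical.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "inkog_scan",
    "arguments": {
      "path": "./agents",
      "policy": "balanced",
      "output": "summary"
    }
  }
}
```

In practice your MCP client builds this for you — you just ask Claude to run the scan.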
inkog_verify_governance
Validate that AGENTS.md declarations match actual code behavior. This is Inkog's unique differentiator - no other tool does governance verification.
Arguments:
path (required) Path to directory containing AGENTS.md and agent code
Example: "Verify my agent's governance declarations"
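To make the "declared vs. actual" idea concrete, here is a minimal, hypothetical example of the kind of drift governance verification is described as catching: a declaration claims read-only access while a tool in the code writes to disk. All names and the file layout are illustrative, not Inkog's internals.

```python
# AGENTS.md (excerpt) might declare:
#
#     permissions: read-only
#
# ...while a tool in the agent codebase actually writes data --
# the mismatch inkog_verify_governance is described as flagging.
# The function below is a hypothetical illustration.

def save_report(path: str, content: str) -> None:
    """Write a report to disk -- contradicts a read-only declaration."""
    with open(path, "w") as f:
        f.write(content)
```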
inkog_compliance_report
Generate compliance reports for regulatory frameworks.
Arguments:
path (required) Path to scan
framework (optional) eu-ai-act, nist-ai-rmf, iso-42001, owasp-llm-top-10, all
format (optional) markdown, json, pdf
Example: "Generate an EU AI Act compliance report for my agent"
inkog_explain_finding
Get detailed explanation and remediation guidance for a security finding.
Arguments:
finding_id (optional) Finding ID from scan results
pattern (optional) Pattern name (e.g., prompt-injection, infinite-loop)
Example: "Explain how to fix prompt injection vulnerabilities"
inkog_audit_mcp_server
Run a security audit on any MCP server from the registry or GitHub.
Arguments:
server_name (optional) MCP server name from registry (e.g., "github", "slack")
repository_url (optional) Direct GitHub repository URL
Example: "Audit the GitHub MCP server for security issues"
inkog_generate_mlbom
Generate a Machine Learning Bill of Materials listing all AI components.
Arguments:
path (required) Path to agent codebase
format (optional) cyclonedx, spdx, json
include_vulnerabilities (optional) Include known CVEs (default: true)
Example: "Generate an MLBOM for my AI project"
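For orientation, a CycloneDX-style MLBOM fragment might look like the sketch below. CycloneDX 1.5 models ML assets as components of type machine-learning-model; the specific component names and versions here are hypothetical, not output from Inkog.

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {
      "type": "machine-learning-model",
      "name": "gpt-4o",
      "description": "LLM backing the agent (hypothetical entry)"
    },
    {
      "type": "library",
      "name": "langchain",
      "version": "0.2.0"
    }
  ]
}
```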
inkog_audit_a2a
Audit Agent-to-Agent communications for security risks.
Arguments:
path (required) Path to multi-agent codebase
protocol (optional) a2a, crewai, langgraph, auto-detect
check_delegation_chains (optional) Check for infinite loops (default: true)
Example: "Audit my CrewAI multi-agent system for security risks"
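Conceptually, delegation-chain checking reduces to cycle detection over a directed graph of agent handoffs. The sketch below shows the underlying idea under that assumption; it is not Inkog's implementation.

```python
from collections import defaultdict

def find_delegation_cycle(handoffs):
    """Return one delegation cycle from (delegator, delegate) pairs, or None.

    Illustrative sketch of what a check like check_delegation_chains
    must do at minimum: a cycle in the handoff graph means agents can
    delegate to each other forever.
    """
    graph = defaultdict(list)
    for src, dst in handoffs:
        graph[src].append(dst)

    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / done
    color = defaultdict(int)
    stack = []

    def dfs(node):
        color[node] = GRAY
        stack.append(node)
        for nxt in graph[node]:
            if color[nxt] == GRAY:  # back edge: nxt is on the current path
                return stack[stack.index(nxt):] + [nxt]
            if color[nxt] == WHITE:
                cycle = dfs(nxt)
                if cycle:
                    return cycle
        stack.pop()
        color[node] = BLACK
        return None

    for agent in list(graph):
        if color[agent] == WHITE:
            cycle = dfs(agent)
            if cycle:
                return cycle
    return None
```

For example, planner → coder → reviewer → planner is reported as a cycle, while a straight-line chain is not.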
Supported Frameworks
Inkog works with all major AI agent frameworks:
LangChain / LangGraph
CrewAI
AutoGen
n8n
Flowise
Dify
Microsoft Copilot Studio
Custom implementations
Configuration
All configuration is done via environment variables:
| Variable | Description | Default |
| --- | --- | --- |
| INKOG_API_KEY | Your API key (required) | - |
|  | API base URL |  |
|  | API version |  |
|  | Request timeout (ms) |  |
|  | Log level |  |
|  | Log format (json/text) |  |
Development
```shell
# Install dependencies
npm install

# Build
npm run build

# Run in development mode
npm run dev

# Run tests
npm test

# Lint
npm run lint
```
Why Inkog?
Security in the Dev-Flow, Not After It
Most AI agent security tools run after the code is written. Inkog lives inside the conversation where you build the agent — so findings get fixed before they land in a PR, not three weeks later.
The Only Tool with AGENTS.md Verification
Inkog is the only tool that can validate your agent's governance declarations against its actual code behavior. This is essential for:
EU AI Act Article 14 compliance (human oversight)
Enterprise governance requirements
Preventing governance drift as code evolves
Purpose-Built for AI Agents
Unlike traditional code scanners (Snyk, Semgrep, SonarQube), Inkog understands AI-specific issues:
Infinite loops and recursion risks
Prompt injection paths
Unconstrained tool access
Missing exit conditions
Cross-tenant data leakage
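As an example of the "infinite loops / missing exit conditions" class, the standard remediation is to bound the agent loop. The pattern below is a generic, hypothetical sketch of that guardrail, not Inkog's recommended code.

```python
def run_agent_loop(step, max_iterations=10):
    """Run an agent step function until it signals completion.

    Illustrative guardrail: the max_iterations bound is what turns an
    unbounded agent loop (a flaw scanners flag) into a terminating one.
    `step(i)` is a hypothetical callable returning (done, result).
    """
    for i in range(max_iterations):
        done, result = step(i)
        if done:
            return result
    # Without this bound, a step that never returns done=True spins forever.
    raise RuntimeError(f"agent did not terminate within {max_iterations} steps")
```

A step that converges returns its result; one that never signals completion raises instead of hanging.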
Multi-Framework Support
Inkog's Universal IR (Intermediate Representation) works with any agent framework. Add one integration, get analysis for all frameworks.
License
Apache-2.0 - see LICENSE
Links
Built with security by Inkog.io