goklab / guardvibe / deep_scan

Use AI to find hidden security flaws—IDOR, business logic bugs, race conditions—that pattern-based scanners miss. Focus scans on specific vulnerability classes.

Instructions

LLM-powered deep security analysis for vulnerabilities that pattern-matching cannot detect: IDOR, business logic flaws, race conditions, stale auth, mass assignment, privilege escalation. Defaults to Claude Haiku 4.5 (~cents per scan); pass model: 'sonnet' for deeper analysis at higher cost. Requires ANTHROPIC_API_KEY or OPENAI_API_KEY env var.
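As a sketch of how an MCP client might invoke this tool: the request below follows the standard MCP `tools/call` JSON-RPC shape, and the argument keys mirror the input schema in the next section. The argument values (and the `"idor"` focus value in particular) are illustrative assumptions, not documented enums.

```python
import json

# Hypothetical MCP tools/call request for deep_scan.
# Argument values are illustrative; 'focus' accepts some vulnerability-class
# string (assumed here), with a schema default of 'all'.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "deep_scan",
        "arguments": {
            "code": "def transfer(src, dst, amount): ...",
            "language": "python",
            "context": "This is a payment endpoint",
            "focus": "idor",   # assumed value; schema default is 'all'
            "model": "haiku",  # default: fast & cheap
        },
    },
}

print(json.dumps(request, indent=2))
```

Escalating to `model: 'sonnet'` only changes the `arguments` object; the surrounding envelope stays the same.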

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| code | Yes | Code to analyze | |
| language | Yes | Programming language | |
| context | No | Additional context (e.g., 'This is a payment endpoint') | |
| existingFindings | No | Already-detected findings to avoid duplicating | |
| focus | No | Focus area — narrows the prompt to a specific vulnerability class | all |
| model | No | LLM model. haiku = fast & cheap (default), sonnet = deeper analysis | haiku |
| maxBytes | No | Max prompt size in bytes — caps cost. Code over this limit is truncated. | |
| format | No | Output format | markdown |
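The maxBytes description implies the server clips oversized code before building the prompt. A client can pre-truncate the same way to keep the cut predictable; the sketch below shows byte-safe UTF-8 truncation (the server's exact truncation behavior is an assumption).

```python
def truncate_to_bytes(code: str, max_bytes: int) -> str:
    """Clip code to at most max_bytes of UTF-8, dropping any split character."""
    raw = code.encode("utf-8")[:max_bytes]
    # errors="ignore" discards a trailing partially-cut multi-byte character
    return raw.decode("utf-8", errors="ignore")

snippet = truncate_to_bytes("print('hello')\n" * 1000, max_bytes=64)
assert len(snippet.encode("utf-8")) <= 64
```

Truncating on a line boundary instead of a raw byte boundary would likely give the model cleaner input, at the cost of slightly undershooting the cap.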
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must bear the full burden. It discloses the LLM-powered nature, default model, cost, environment variable requirements, and code truncation behavior. It does not mention whether the tool is read-only, output format details, or error handling, leaving some behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single paragraph containing all essential info: purpose, vulnerability types, model defaults, cost, and environment requirements. While dense, it could be more structured (e.g., bullet points) for easier scanning, but it remains succinct.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 8 parameters, no output schema, and no annotations, the description covers purpose, usage hints, and environment setup. It lacks details on expected output format (though format parameter exists), error scenarios, and performance characteristics, leaving some contextual gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. The description adds value by explaining the default model ('Haiku 4.5 ~cents per scan') and cost implications of using 'sonnet', and mentions required API keys (not in schema). This goes beyond schema descriptions to inform parameter choices.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'LLM-powered deep security analysis for vulnerabilities that pattern-matching cannot detect' and lists specific vulnerability types (IDOR, business logic flaws, etc.), making the purpose precise and differentiating it from simple pattern-matching tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description advises when to use this tool (for advanced vulnerabilities) and provides context on model selection and environment setup. However, it does not explicitly state when not to use it or compare with sibling tools like scan_file or scan_directory, which would enhance guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
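One way an agent could act on this guidance: run a cheap pattern-based scan first and hand its results to deep_scan as existingFindings so the LLM pass skips duplicates. The sketch below is hypothetical — the sibling scanner's result shape and field names are assumptions, only the deep_scan argument names come from the schema above.

```python
# Hypothetical two-stage pipeline: pattern scan first, deep_scan for the rest.
# The pattern_findings shape ('rule', 'line' fields) is an assumption.

def build_deep_scan_args(code, language, pattern_findings):
    """Summarize prior findings so deep_scan can avoid re-reporting them."""
    summary = "; ".join(
        f"{f['rule']} at line {f['line']}" for f in pattern_findings
    )
    return {
        "code": code,
        "language": language,
        "existingFindings": summary,
        "model": "haiku",  # escalate to 'sonnet' only when deeper analysis is worth the cost
    }

args = build_deep_scan_args(
    'query = f"SELECT * FROM users WHERE id={uid}"',
    "python",
    [{"rule": "sql-injection", "line": 1}],
)
```

This keeps the expensive LLM call focused on the vulnerability classes pattern matching cannot reach, which is the division of labor the description implies.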
