goklab/guardvibe

verify_remediation

Run a fresh security audit after remediation to verify all sections are addressed. Compares against before snapshot, flags skipped sections, and returns 'complete' only when every issue is resolved.

Instructions

Compare before/after audit results to verify ALL sections were addressed. MUST be called after completing remediation to confirm success. Runs a fresh audit and compares against the before snapshot. Explicitly flags skipped sections and refuses to return 'complete' status unless every section is addressed. Pass the before audit hash or let it re-run. Example: verify_remediation({path: '.', before_hash: 'abc123'})

Input Schema

Name          Required  Description                                             Default
path          No        Project root directory.
before_hash   No        Result hash from the initial full_audit (for tracking)
format        No        Output format                                           json
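Under the MCP protocol, invoking this tool means sending a JSON-RPC `tools/call` request whose arguments follow the schema above. A minimal sketch of such a request payload (the argument values are illustrative placeholders, not real hashes):

```python
import json

# Illustrative MCP tools/call request for verify_remediation.
# Argument values are placeholders; a real before_hash would come
# from a prior full_audit run.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "verify_remediation",
        "arguments": {
            "path": ".",              # project root (optional)
            "before_hash": "abc123",  # hash from the initial full_audit (optional)
            "format": "json",         # output format; defaults to json
        },
    },
}

payload = json.dumps(request)
```

All three arguments are optional, so `"arguments": {}` would also be a valid call; the tool then re-runs the audit itself rather than comparing against a tracked hash.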
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Although the tool carries no structured annotations, the description fully discloses behavior: it runs a fresh audit, compares it to the before snapshot, flags skipped sections, and refuses to return a 'complete' status unless every section is addressed. It also explains that the 'before_hash' parameter is optional and can be omitted to re-run the audit. This level of detail is comprehensive.
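The refuse-to-return-'complete' behavior described above amounts to a before/after comparison over audit sections. The following is a hypothetical reconstruction of that logic, not the server's actual code; the function name and data shapes are assumptions for illustration:

```python
def verify_remediation(before_sections, after_sections):
    """Hypothetical sketch of the verification comparison.

    before_sections / after_sections map a section name to the
    list of open issues found in that section.
    """
    # A section present before but absent after was skipped, not fixed.
    skipped = [s for s in before_sections if s not in after_sections]
    # Sections that still report issues after remediation are unresolved.
    unresolved = sorted(s for s, issues in after_sections.items() if issues)
    if skipped or unresolved:
        return {"status": "incomplete", "skipped": skipped,
                "unresolved": unresolved}
    return {"status": "complete", "skipped": [], "unresolved": []}
```

The key property, matching the description, is that 'complete' is returned only when every section from the before snapshot reappears in the after audit with zero open issues.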

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded: the first two sentences state the core purpose and usage condition. Each of the four sentences adds meaningful information, and the example call provides a concrete reference. There is no extraneous text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (verification after remediation), the description is complete. It covers purpose, timing, behavioral constraints, parameter semantics, and provides an example. Without an output schema, it reasonably does not detail return values, but it does mention that the tool will refuse to return 'complete' status if issues remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, which establishes a baseline score of 3. The description adds value beyond the schema by explaining that 'before_hash' comes from the initial full_audit and can be left empty to trigger a re-run, and by providing an example call.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to compare before/after audit results to verify all sections were addressed. It uses a specific verb 'verify' and identifies the resource (remediation results). The description distinguishes this tool from siblings like 'full_audit' and 'remediation_plan' by focusing on verification after remediation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states that the tool MUST be called after completing remediation, providing clear timing guidance. While it does not name specific alternatives, the context implies that this is the verification step. It could be improved by explicitly stating when not to use the tool (e.g., use 'full_audit' for initial scanning).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
